461

Maximum margin learning under uncertainty

Tzelepis, Christos January 2018 (has links)
In this thesis we study the problem of learning under uncertainty using the statistical learning paradigm. We first propose a linear maximum margin classifier that deals with uncertainty in data input. More specifically, we reformulate the standard Support Vector Machine (SVM) framework such that each training example can be modeled by a multi-dimensional Gaussian distribution described by its mean vector and its covariance matrix, the latter modeling the uncertainty. We address the classification problem and define a cost function that is the expected value of the classical SVM cost when data samples are drawn from the multi-dimensional Gaussian distributions that form the set of the training examples. Our formulation approximates the classical SVM formulation when the training examples are isotropic Gaussians with variance tending to zero. We arrive at a convex optimization problem, which we solve efficiently in the primal form using a stochastic gradient descent approach. The resulting classifier, which we name SVM with Gaussian Sample Uncertainty (SVM-GSU), is tested on synthetic data and five publicly available and popular datasets; namely, the MNIST, WDBC, DEAP, TV News Channel Commercial Detection, and TRECVID MED datasets. Experimental results verify the effectiveness of the proposed method. Next, we extend the aforementioned linear classifier so as to produce non-linear decision boundaries, using the RBF kernel. This extension, in which we use isotropic input uncertainty and which we name Kernel SVM with Isotropic Gaussian Sample Uncertainty (KSVM-iGSU), is applied to the problems of video event detection and video aesthetic quality assessment. The experimental results show that exploiting input uncertainty, especially in problems where only a limited number of positive training examples are provided, can lead to better classification, detection, or retrieval performance. 
Finally, we present a preliminary study on how the above ideas can be used under the deep convolutional neural networks learning paradigm so as to exploit inherent sources of uncertainty, such as spatial pooling operations, that are usually used in deep networks.
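The expected SVM cost described above admits a closed form when each example is Gaussian: for a linear score w·x + b and x ~ N(mu, Sigma), the signed margin is itself Gaussian, so the expected hinge loss can be written with the normal CDF and PDF. The sketch below illustrates this idea with a simple stochastic gradient scheme; it is an illustrative reconstruction, not the authors' implementation, and all hyperparameters are assumptions.

```python
import math
import numpy as np

def _pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_hinge(w, b, mu, Sigma, y):
    """Closed form of E[max(0, 1 - y(w.x + b))] for x ~ N(mu, Sigma)."""
    m = y * (w @ mu + b)                   # mean of the signed margin
    s = math.sqrt(w @ Sigma @ w) + 1e-12   # std of the margin
    d = 1.0 - m
    return d * _cdf(d / s) + s * _pdf(d / s)

def sgd_svm_gsu(mus, Sigmas, ys, lam=1e-3, lr=0.05, epochs=200, seed=0):
    """Minimize the sum of expected hinge losses plus (lam/2)||w||^2 by SGD."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(mus.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(ys)):
            mu, Sigma, y = mus[i], Sigmas[i], ys[i]
            m = y * (w @ mu + b)
            s = math.sqrt(w @ Sigma @ w) + 1e-12
            d = 1.0 - m
            # gradients of the expected hinge loss wrt w and b
            gw = -_cdf(d / s) * y * mu + _pdf(d / s) * (Sigma @ w) / s
            gb = -_cdf(d / s) * y
            w -= lr * (gw + lam * w)
            b -= lr * gb
    return w, b
```

Note that as every Sigma shrinks to zero, the CDF term approaches the indicator of a margin violation and the scheme reduces to ordinary hinge-loss SGD, matching the limit stated in the abstract.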
462

Asymptotic theory for Bayesian nonparametric procedures in inverse problems

Ray, Kolyan Michael January 2015 (has links)
The main goal of this thesis is to investigate the frequentist asymptotic properties of nonparametric Bayesian procedures in inverse problems and the Gaussian white noise model. In the first part, we study the frequentist posterior contraction rate of nonparametric Bayesian procedures in linear inverse problems in both the mildly and severely ill-posed cases. This rate provides a quantitative measure of the quality of statistical estimation of the procedure. A theorem is proved in a general Hilbert space setting under approximation-theoretic assumptions on the prior. The result is applied to non-conjugate priors, notably sieve and wavelet series priors, as well as in the conjugate setting. In the mildly ill-posed setting, minimax optimal rates are obtained, with sieve priors being rate adaptive over Sobolev classes. In the severely ill-posed setting, oversmoothing the prior yields minimax rates. Previously established results in the conjugate setting are obtained using this method. Examples of applications include deconvolution, recovering the initial condition in the heat equation and the Radon transform. In the second part of this thesis, we investigate Bernstein--von Mises type results for adaptive nonparametric Bayesian procedures in both the Gaussian white noise model and the mildly ill-posed inverse setting. The Bernstein--von Mises theorem details the asymptotic behaviour of the posterior distribution and provides a frequentist justification for the Bayesian approach to uncertainty quantification. We establish weak Bernstein--von Mises theorems in both a Hilbert space and multiscale setting, which have applications in $L^2$ and $L^\infty$ respectively. This provides a theoretical justification for plug-in procedures, for example the use of certain credible sets for sufficiently smooth linear functionals. We use this general approach to construct optimal frequentist confidence sets using a Bayesian approach. 
We also provide simulations to numerically illustrate our approach and obtain a visual representation of the different geometries involved.
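For readers unfamiliar with the terminology, the posterior contraction rate referred to above is conventionally defined as follows; the notation here is generic and not taken from the thesis itself.

```latex
% The posterior contracts around the truth \theta_0 at rate \varepsilon_n if
\Pi_n\big(\theta : d(\theta,\theta_0) \ge M_n \varepsilon_n \,\big|\, X^{(n)}\big)
  \longrightarrow 0
  \quad \text{in } P_{\theta_0}\text{-probability, for every } M_n \to \infty.
% In the mildly ill-posed setting, with Sobolev smoothness \beta of the truth
% and degree of ill-posedness \alpha of the operator, the minimax rate is
%   \varepsilon_n \asymp n^{-\beta/(2\beta + 2\alpha + 1)}.
```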
463

Corporate Investment and Cash Savings under Uncertainty

Chen, Guojun January 2016 (has links)
This dissertation focuses on corporate behavior in a dynamic world with uncertainty. In particular, I am interested in how firms trade off investment and cash savings when external financing is costly. The first two chapters fit this theme: one considers optimal investment and financing policies when uncertainty itself is time-varying; the second investigates how firms prepare themselves against devaluation risks. Both chapters build dynamic corporate theories and test them empirically. The third chapter steps back by asking why aggregate volatility is time-varying and why it is persistent in a dynamic general equilibrium with endogenous growth. I show that endogenous allocation between different assets can be the reason. In the first chapter I study how firms manage their cash savings, financing, and investment when aggregate uncertainty is time-varying. I develop and estimate a dynamic model featuring aggregate uncertainty shocks, costly external financing, investment irreversibility, and time-varying risk premia. In my model, firms have a precautionary-savings motive and real options to wait, both of which interact with time-varying uncertainty and are reinforced by state-dependent risk premia. My model confirms previous findings that firms save more in cash and invest less when aggregate uncertainty is high. In addition, I show that in high uncertainty states, (1) firms with high profitability and low cash are more likely to delay equity issuance, (2) firms with low profitability and high cash are more likely to delay payout, and (3) aggregate equity issuance and payout are both lower. Finally, counterfactual experiments show that (1) a model without dynamic uncertainties cannot explain the observed firm behaviors in high uncertainty states, and (2) time-varying risk premia amplify the impact of aggregate uncertainty shocks. 
In the second chapter, I investigate the relationship between investment and cash savings in a special setting: devaluation episodes in emerging markets. Devaluation events are typically anticipated by the economy but affect local firms in the tradable and nontradable sectors differently. Tradable firms expect higher cash flows while nontradable firms expect lower ones, even though their current cash flows are stable because of the currency peg. I build a model to show that investment and cash savings are both complements, because of future prospects, and substitutes, because of the limited current cash balance. Before devaluation, tradable firms invest more owing to better expectations of the future but have to accept lower cash savings tomorrow. Empirically, I use a difference-in-differences approach and two devaluation episodes, in Mexico and Argentina, to test these predictions. I find strong evidence in Mexico that tradable firms invested more than nontradable firms and saved less as the devaluation approached. The evidence in Argentina is weaker. I discuss potential remedies and directions for future work. The final chapter explores asset allocation decisions and growth volatilities in an economy with endogenous growth. Firms have two produced inputs, capital and technology. When a representative firm optimally allocates investment between the two inputs, both consumption growth and its volatility are functions of the economy's technology-to-capital ratio. As a result, not only is long-run consumption growth volatile, but its volatility is also endogenously stochastic. Moreover, after a large negative or positive shock, the economy is away from its optimal allocation, and it takes time to travel back because of convex adjustment costs. As a result, both consumption growth and its stochastic volatility are persistent. 
Finally, I discuss the asset pricing implications of the model and show that it microfounds the Bansal and Yaron (2004) long-run risk model with time-varying volatilities.
464

Advances in Multiscale Methods with Applications in Optimization, Uncertainty Quantification and Biomechanics

Hu, Nan January 2016 (has links)
Advances in multiscale methods are presented from two perspectives which address the issue of computational complexity of optimizing and inverse analyzing nonlinear composite materials and structures at multiple scales. The optimization algorithm provides several solutions to meet the enormous computational challenge of optimizing nonlinear structures at multiple scales, including: (i) an enhanced sampling procedure that provides superior performance of the well-known ant colony optimization algorithm, (ii) a mapping-based meshing of a representative volume element that, unlike unstructured meshing, permits sensitivity analysis on coarse meshes, and (iii) a multilevel optimization procedure that takes advantage of possible weak coupling of certain scales. We demonstrate the proposed optimization procedure on elastic and inelastic laminated plates involving three scales. We also present an adaptive variant of the measure-theoretic approach (MTA) for stochastic characterization of micromechanical properties based on observations of quantities of interest at the coarse (macro) scale. The salient features of the proposed nonintrusive stochastic inverse solver are: identification of a nearly optimal sampling domain using an enhanced ant colony optimization algorithm for multiscale problems, an incremental Latin-hypercube sampling method, adaptive discretization of the parameter and observation spaces, and adaptive selection of the number of samples. Complete test data for the TORAY T700GC-12K-31E and epoxy #2510 material system from the NIAR report are employed to characterize and validate the proposed adaptive nonintrusive stochastic inverse algorithm for various unnotched and open-hole laminates. Advances in multiscale methods also provide a unique tool to study and analyze human bone, which can itself be seen as a composite material. We use two multiscale approaches for fracture analysis of a full-scale femur. 
The two approaches are reduced order homogenization (ROH) and the novel accelerated reduced order homogenization (AROH). AROH uses ROH, calibrated to limited data, as a training tool to calibrate a simpler, single-scale anisotropic damage model. For bone tissue orientation, we take advantage of so-called Wolff's law. The meso-phase properties are identified by least-squares minimization of the error between the overall cortical and trabecular bone properties and those predicted from the homogenization. The overall elastic and inelastic properties of the cortical and trabecular bone microstructure are derived from bone density, which can be estimated from Hounsfield units (HU). For model validation, we conduct ROH and AROH simulations of a full-scale finite element model of the femur created from QCT data and compare the simulation results with available experimental data.
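The incremental Latin-hypercube sampling mentioned above builds on ordinary Latin-hypercube sampling, which stratifies each parameter axis so that every stratum receives exactly one sample. A minimal sketch follows; it is not the authors' adaptive variant, and the bounds and sizes are illustrative.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin-hypercube sample of n_samples points over the box `bounds`.

    bounds: sequence of (low, high) pairs, one per parameter.
    Each axis is divided into n_samples equal strata and every stratum
    receives exactly one point, independently permuted per axis.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = bounds.shape[0]
    # one uniform draw inside each stratum, stacked per dimension
    u = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(dim):
        # independently permute the strata along each axis
        u[:, j] = u[rng.permutation(n_samples), j]
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])
```

Compared with independent uniform sampling, this guarantees even marginal coverage of each parameter, which is why it is popular for characterizing micromechanical parameter spaces with few forward solves.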
465

A avaliação econômico-financeira de investimentos sob condição de incerteza: uma comparação entre o método de Monte Carlo e o VPL fuzzy / The financial and economic evaluation of investments under uncertainty: a comparison between the Monte Carlo method and fuzzy NPV

Oliveira, Mário Henrique da Fonseca 19 September 2008 (has links)
Os métodos determinísticos utilizados para avaliação econômico-financeira de projetos de investimentos, como o Valor Presente Líquido (VPL) e a Taxa Interna de Retorno (TIR), contemplam exatidão do comportamento futuro das variáveis inerentes ao projeto. Porém, as imprevisibilidades futuras acrescidas da alta volatilidade da economia e tecnologia mundial tornam as análises determinísticas frágeis em situações onde existam incertezas, o que pode levar gestores e investidores a tomar decisões equivocadas quanto à alocação de capital. O presente trabalho tem como objetivo geral a comparação entre dois métodos que podem ser utilizados para avaliação de investimentos que abordam a condição de incerteza. O método de Monte Carlo, em seu caráter estatístico, permite que as variáveis presentes sejam consideradas por meio de distribuições de probabilidade, as quais associadas a geração de números aleatórios fornecem uma resposta que considera as incertezas presentes. O Valor Presente Líquido fuzzy constitui-se em um método alternativo para análise, o qual considera as variáveis incertas como números nebulosos, ou seja, concepções matemáticas que não apresentam fronteiras rígidas. Por meio da aplicação dos métodos em uma situação real de investimento, buscou-se realizar uma análise comparativa, que levasse em conta os resultados numéricos obtidos e a conceituação teórica envolvida. / The deterministic methods used for the economic and financial evaluation of investment projects, such as Net Present Value (NPV) and Internal Rate of Return (IRR), treat the future behavior of the project variables as exact values. Nevertheless, future unpredictability and the high volatility of the world economy and technology make deterministic analyses fragile in situations where uncertainty is present, which may lead managers and investors to make poor decisions about capital allocation. 
The main objective of this work is to compare two different methods that can be used to evaluate investments under uncertainty. The Monte Carlo method, through its statistical character, allows the variables involved to be modeled by probability distributions which, combined with random number generation, provide a result that accounts for the uncertainty present. The fuzzy Net Present Value is an alternative analysis method, which treats the uncertain variables as fuzzy numbers, i.e., mathematical constructs that do not have rigid boundaries. Two kinds of comparison were produced by applying both methods to a real investment situation: the first based on the numerical results obtained, the second based on the theoretical concepts underlying the two methods.
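The Monte Carlo approach described in the abstract can be sketched in a few lines: the uncertain inputs are drawn from assumed distributions instead of being fixed at point values, and the resulting NPV sample characterizes the project's risk. The distributions and figures below are illustrative assumptions, not data from the thesis.

```python
import numpy as np

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the time-0 flow (e.g. -investment)."""
    cf = np.asarray(cashflows, dtype=float)
    t = np.arange(len(cf))
    return float(np.sum(cf / (1.0 + rate) ** t))

def mc_npv(n_sims=10_000, seed=0):
    """Monte Carlo NPV: draw the uncertain inputs from assumed
    (purely illustrative) distributions instead of point estimates."""
    rng = np.random.default_rng(seed)
    investment = -1000.0                               # time-0 outlay
    flows = rng.normal(300.0, 50.0, size=(n_sims, 5))  # 5 yearly cash flows
    rates = rng.uniform(0.08, 0.12, size=n_sims)       # discount rate
    sims = np.array([npv(r, np.concatenate(([investment], f)))
                     for r, f in zip(rates, flows)])
    # summary: expected NPV, dispersion, and probability of a loss
    return sims.mean(), sims.std(), float(np.mean(sims < 0.0))
```

The third output, P(NPV < 0), is exactly the kind of risk measure that a single deterministic NPV cannot provide, which is the comparison point against the fuzzy NPV approach.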
466

Análise de expressões gênicas com erros de medida e aplicação em dados reais / Gene expression analysis taking into account measurement errors and application to real data

Ribeiro, Adèle Helena 03 June 2014 (has links)
Toda medida, desde que feita por um instrumento real, tem uma imprecisão associada. Neste trabalho, abordamos a questão das imprecisões em experimentos de microarranjos de cDNA de dois canais, uma tecnologia que tem sido muito explorada nos últimos anos e que ainda é um importante auxiliar nos estudos de expressões gênicas. Dezenas de milhares de representantes de genes são impressos em uma lâmina de vidro e hibridizados simultaneamente com RNA mensageiro de duas amostras diferentes de células. Essas amostras são marcadas com corantes fluorescentes diferentes e a lâmina, após a hibridização, é digitalizada, obtendo-se duas imagens. As imagens são analisadas com programas especiais que segmentam os locais que estavam os genes e extraem estatísticas dos píxeis de cada local. Por exemplo, a média, a mediana e a variância das intensidades do conjunto de píxeis de cada local (o mesmo é feito normalmente para uma área em volta de cada local, chamada de fundo). Estimadores estatísticos como o da variância nos dão uma estimativa de quão precisa é uma certa medida. Uma vez de posse das estimativas das intensidades de cada local, para se obter a efetiva expressão de um gene, algumas transformações são feitas nos dados de forma a eliminar variabilidades sistemáticas. Neste trabalho, mostramos como podem ser feitas as análises a partir de uma medida de expressão gênica com um erro estimado. Mostramos como estimar essa imprecisão e estudamos, em termos de propagação da imprecisão, os efeitos de algumas transformações realizadas nos dados, por exemplo, a remoção do viés estimado pelo método de regressão local robusta, mais conhecido como lowess. Uma vez obtidas as estimativas das imprecisões propagadas, mostramos também como utilizá-las na determinação dos genes diferencialmente expressos entre as amostras estudadas. Por fim, comparamos os resultados com os obtidos por formas clássicas de análise, em que são desconsideradas as imprecisões das medidas. 
Concluímos que a modelagem das imprecisões das medidas pode favorecer as análises, já que os resultados obtidos em uma aplicação com dados reais de expressões gênicas foram condizentes com os que encontramos na literatura. / Any measurement, as long as it is made by a real instrument, has an uncertainty associated with it. In this work, we address the issue of uncertainty in two-channel cDNA microarray experiments, a technology that has been widely used in recent years and is still an important tool for gene expression studies. Tens of thousands of gene representatives are printed onto a glass slide and hybridized simultaneously with mRNA from two different cell samples. Different fluorescent dyes are used for labeling the two samples. After hybridization, the glass slide is scanned, yielding two images. Image processing and analysis programs are used for spot segmentation and pixel statistics computation, for instance, the mean, median and variance of pixel intensities for each spot. The same statistics are computed for the pixel intensities in the background region. Statistical estimators such as the variance give us an estimate of the accuracy of a measurement. Based on the intensity estimates for each spot, some data transformations are applied in order to eliminate systematic variability so that the effective gene expression can be obtained. This work shows how to analyze gene expression measurements with an estimated error. We present an estimate of this uncertainty and study, in terms of error propagation, the effects of some data transformations. An example of such a transformation is the correction of the bias estimated by a robust local regression method, known as lowess. With the propagated errors obtained, we also show how to use them for detecting differentially expressed genes between different conditions. 
Finally, we compare the results with those obtained by classical analysis methods, in which the measurement errors are disregarded. We conclude that modeling the measurement uncertainties can improve the analysis, since the results obtained on a real gene expression data set were consistent with the literature.
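For the two-channel setting described above, a standard first-order (delta-method) propagation of the spot-level uncertainties into the log-ratio expression measure M = log2(R/G) looks as follows; this is a generic sketch, not the thesis' exact model.

```python
import numpy as np

def log_ratio_uncertainty(R, G, sR, sG):
    """Propagate channel uncertainties into M = log2(R/G).

    First-order (delta-method) propagation:
        var(M) = (1/ln 2)^2 * [ (sR/R)^2 + (sG/G)^2 ]
    R, G:   background-corrected channel intensities
    sR, sG: their standard uncertainties
    Returns the log-ratio M and its propagated standard uncertainty.
    """
    M = np.log2(R / G)
    sM = np.sqrt((sR / R) ** 2 + (sG / G) ** 2) / np.log(2)
    return M, sM
```

A gene can then be flagged as differentially expressed only when |M| is large relative to its own propagated uncertainty sM, rather than relative to a single global cutoff.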
467

Stochastic response determination and spectral identification of complex dynamic structural systems

Brudastova, Olga January 2018 (has links)
Uncertainty propagation in engineering mechanics and dynamics is a highly challenging problem that requires the development of analytical/numerical techniques for determining the stochastic response of complex engineering systems. In this regard, although Monte Carlo simulation (MCS) has been the most versatile technique for addressing this problem, it can become computationally daunting when faced with high-dimensional systems or with computing very low probability events. Thus, there is a demand for more computationally efficient methodologies. Further, most structural systems are likely to exhibit nonlinear and time-varying behavior when subjected to extreme events such as severe earthquake, wind and sea wave excitations. In such cases, a reliable identification approach is required for understanding the system's behavior and for assessing its reliability. The current work addresses two research themes in the field of stochastic engineering dynamics related to the above challenges. In the first part of the dissertation, the recently developed Wiener Path Integral (WPI) technique for determining the joint response probability density function (PDF) of nonlinear systems subject to Gaussian white noise excitation is generalized to account for non-white, non-Gaussian, and non-stationary excitation processes. Specifically, by modeling the excitation process as the output of a filter equation with Gaussian white noise as its input, it is possible to define an augmented response vector process to be considered in the WPI solution technique. A significant advantage is that the technique remains applicable even for arbitrary excitation power spectrum forms. In such cases, it is shown that the use of a filter approximation facilitates the implementation of the WPI technique in a straightforward manner, without necessarily compromising its accuracy. 
Further, in addition to dynamical systems subject to stochastic excitation, the technique can also account for a special class of engineering mechanics problems where the media properties are modeled as non-Gaussian and non-homogeneous stochastic fields. Several numerical examples pertaining to both single- and multi-degree-of-freedom systems are considered, including a marine structural system exposed to flow-induced non-white excitation, as well as a beam with a non-Gaussian and non-homogeneous Young’s modulus. Comparisons with MCS data demonstrate the accuracy of the technique. In the second part of the dissertation, a novel multiple-input/single-output (MISO) system identification technique is developed for parameter identification of nonlinear time-variant multi-degree-of-freedom oscillators with fractional derivative terms subject to incomplete non-stationary data. The technique utilizes a representation of the nonlinear restoring forces as a set of parallel linear subsystems. In this regard, the oscillator is transformed into an equivalent MISO system in the wavelet domain. Next, a recently developed L1-norm minimization procedure based on compressive sampling theory is applied for determining the wavelet coefficients of the available incomplete non-stationary input-output (excitation-response) data. Finally, these wavelet coefficients are utilized to determine appropriately defined time- and frequency-dependent wavelet-based frequency response functions and related oscillator parameters. A nonlinear time-variant system with fractional derivative elements is used as a numerical example to demonstrate the reliability of the technique even in cases of noise corrupted and incomplete data.
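The filter idea in the first part can be illustrated with a minimal simulation: a colored excitation is generated as the output of a linear (Ornstein-Uhlenbeck) filter driven by Gaussian white noise, and the oscillator state is integrated jointly with the filter state, i.e. as an augmented vector process. All parameter values and the explicit Euler scheme are illustrative assumptions, not the thesis' setup.

```python
import numpy as np

def simulate_augmented(T=20.0, dt=1e-3, seed=0):
    """Augmented-state sketch: a Duffing oscillator driven by colored
    noise w(t), itself the output of an Ornstein-Uhlenbeck filter
    excited by Gaussian white noise. Parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = v = w = 0.0
    omega0, zeta, eps = 1.0, 0.05, 0.5   # oscillator parameters
    alpha, sigma = 2.0, 1.0              # filter (OU) parameters
    xs = np.empty(n)
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        # filter equation: white noise in, colored excitation w out
        w += -alpha * w * dt + sigma * dW
        # oscillator driven by the filtered (colored) excitation
        a = -2.0 * zeta * omega0 * v - omega0**2 * (x + eps * x**3) + w
        v += a * dt
        x += v * dt
        xs[k] = x
    return xs
```

The point of the augmentation is that the joint vector (x, v, w) is again driven by white noise only, so white-noise machinery such as the WPI technique applies to it directly.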
468

Práticas contraceptivas e gestão da heterossexualidade: agência individual, contextos relacionais e gênero. / Contraceptive practices and the management of heterosexuality: individual agency, relational contexts and gender

Cristiane da Silva Cabral 18 April 2011 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / A tese versa sobre as grandes questões relativas à contracepção no Brasil. Integra um esforço por analisar condutas referentes à contracepção, segundo lógicas que priorizam a situacionalidade e a relacionalidade de tais fenômenos. As estratégias para gerir a fecundidade são constitutivas da sexualidade heterossexual. Mulheres e homens podem usar ou não contracepção; as razões dessa conduta extrapolam aspectos concernentes a informação e acesso. Busca-se compreender as práticas contraceptivas a partir do processo do aprendizado das lógicas relacionais e de gênero, em diferentes momentos dos percursos biográficos: o início da trajetória afetivo-sexual, os contextos de irrupção de uma gravidez e o encerramento da potencialidade reprodutiva, por meio da esterilização contraceptiva. Este compósito demandou a utilização de materiais empíricos distintos para a construção e análise das etapas eleitas dos percursos biográficos. Enfoca-se, primeiramente, o momento de passagem à sexualidade com parceiro. Problematiza-se a ideia de relaxamento das práticas contraceptivas, a partir da iniciação sexual, concepção corrente na literatura nacional em função do decréscimo de uso de preservativo em relações sexuais posteriores. Aborda-se, em seguida, as atitudes e as questões presentes no processo de construção da prática contraceptiva, no momento em que a vida sexual se torna regular. A proposição da perspectiva da gestão contraceptiva sublinha as posições dos protagonistas, marcadas pelo gênero. Por último, analisa-se as circunstâncias biográficas e os cenários relacionais da esterilização contraceptiva, a qual emerge como uma estratégia de estabilização ou de consolidação de um percurso contraceptivo/reprodutivo. O debate em torno da contracepção no Brasil apresenta a tendência a enfatizar a determinação social para explicar as gestações imprevistas. 
Contudo, salienta-se, com base em uma literatura crítica, as dimensões de agência individual, ainda que circunscritas por um campo delimitado de possibilidades. / The thesis deals with the major issues relating to contraception in Brazil. It is part of an effort to analyze contraception-related behavior according to logics that prioritize the situatedness and relationality of such phenomena. Strategies to manage fertility are constitutive of heterosexual sexuality. Women and men may or may not use contraception; the reasons for this conduct go beyond questions of information and access. We seek to understand contraceptive practices through the process of learning relational and gender logics at different moments of the biographical trajectory: the beginning of the affective-sexual trajectory, the contexts in which a pregnancy erupts, and the termination of reproductive potential by means of contraceptive sterilization. This composite required the use of different empirical materials for the construction and analysis of the chosen stages of the biographical trajectories. The thesis focuses first on the moment of transition to sexuality with a partner. It problematizes the idea of a relaxation of contraceptive practices after sexual initiation, a common notion in the Brazilian literature based on the decrease in condom use in later sexual relations. It then addresses the attitudes and issues present in the process of constructing a contraceptive practice once sexual life becomes regular. The proposed perspective of contraceptive management emphasizes the positions of the protagonists, which are marked by gender. Finally, we analyze the biographical circumstances and relational scenarios of contraceptive sterilization, which emerges as a strategy for stabilizing or consolidating a contraceptive/reproductive trajectory. The debate around contraception in Brazil tends to emphasize social determinants to explain unexpected pregnancies. 
However, drawing on a critical literature, we highlight the dimensions of individual agency, albeit circumscribed by a delimited range of possibilities.
469

Estudo sobre a determinação de antimônio em amostras ambientais pelo método de análise por ativação com nêutrons. Validação da metodologia e determinação da incerteza da medição / A study on antimony determination in environmental samples by neutron activation analysis. Validation of the methodology and determination of the measurement uncertainty

Tassiane Cristina Martins Matsubara 09 September 2011 (has links)
O antimônio é um elemento encontrado em baixas concentrações no meio ambiente. No entanto, a sua determinação tem despertado grande interesse devido ao conhecimento de sua toxicidade e da crescente aplicação na indústria. A determinação de antimônio tem sido um desafio para os pesquisadores uma vez que o elemento é encontrado em baixas concentrações, o que faz de sua análise uma tarefa difícil. Portanto, embora a análise por ativação de nêutrons (NAA) seja um método adequado para a determinação de vários elementos em diferentes tipos de matriz, no caso de Sb, a análise apresenta algumas dificuldades. A principal dificuldade é devido às interferências espectrais. O objetivo desta pesquisa foi validar o método de NAA para a determinação de Sb em amostras ambientais. Para estabelecer condições adequadas para a determinação de Sb, ensaios preliminares foram realizados para posterior análise de materiais de referência certificados (MRC). O procedimento experimental consistiu em irradiar amostras juntamente com padrão sintético de Sb por períodos de 8 ou 16 horas no reator nuclear de pesquisa IEA-R1, seguido de espectrometria de raios gama. A quantificação de Sb foi realizada pela medição dos radioisótopos de 122Sb e 124Sb. Ensaios preliminares indicaram a presença de Sb em papel de filtro Whatman, utilizado no preparo do padrão, porém em teor muito baixo, podendo ser considerado desprezível. No caso do material plástico utilizado como invólucro para a irradiação da amostra, foi verificado que ele deve ser escolhido cuidadosamente, pois dependendo do plástico, este pode conter Sb. A análise da estabilidade da solução padrão diluída de Sb, dentro do período de oito meses, mostrou que não há alteração significativa na concentração deste elemento. 
Os resultados obtidos nas análises dos materiais de referência certificados indicaram a formação de radioisótopos de 76As e também de 134Cs e 152Eu, podendo interferir na determinação de Sb pela medição de 122Sb, devido à proximidade de energias dos raios gama emitidos. Além disso, a alta atividade do 24Na pode mascarar o pico do 122Sb e dificultar a sua detecção. As análises dos MRC indicaram que a exatidão e a precisão dos resultados de Sb dependem principalmente do tipo e composição da matriz, da sua concentração na amostra, do radioisotopo medido e do tempo de decaimento utilizado para a medição. A avaliação dos componentes que contribuem para a medição da incerteza da concentração de Sb, mostrou que a maior contribuição da incerteza é dada pela estatística de contagem da amostra. Os resultados da avaliação da incerteza indicaram também que o valor da incerteza padrão combinada depende do radioisótopo medido e do tempo de decaimento utilizado para as contagens. Este estudo mostrou que a NAA é um método bastante adequado na determinação de Sb em amostras ambientais, possibilitando a obtenção de resultados com baixos valores de incerteza e por ser uma técnica puramente instrumental, permite a análise de um grande número de amostras. / Antimony is an element found in low concentrations in the environment. However, its determination has attracted great interest due to the knowledge of its toxicity and increasing application in industry. The determination of antimony has been a challenge for researchers since this element is found in low concentrations which make its analysis a difficult task. Therefore, although neutron activation analysis (NAA) is an appropriate method for the determination of various elements in different types of matrix, in the case of Sb its analysis presents some difficulties, mainly due to spectral interferences. The objective of this research was to validate the NAA method for Sb determination in environmental samples. 
To establish appropriate conditions for Sb determination, preliminary assays were carried out, followed by the analysis of certified reference materials (CRMs). The experimental procedure was to irradiate samples together with a synthetic Sb standard for periods of 8 or 16 hours in the IEA-R1 nuclear research reactor, followed by gamma-ray spectrometry. The quantification of Sb was performed by measuring the radioisotopes 122Sb and 124Sb. The results of preliminary assays indicated the presence of Sb in the Whatman no 40 filter paper used in the preparation of the synthetic standard, but at very low concentrations that could be considered negligible. The plastic material used in bags for sample irradiation should be chosen carefully because, depending on the plastic, it may contain Sb. Analyses of the stability of the diluted Sb standard solution showed no change in the Sb concentration within eight months of its preparation. Results obtained in the analysis of certified reference materials indicated interference from 76As, and also from 134Cs and 152Eu, in the Sb determinations by measurement of 122Sb, due to the proximity of the gamma-ray energies. The high activity of 24Na can also mask the peak of 122Sb, hindering its detection. The analysis of the CRMs indicated that the accuracy and precision of the results depend on the type of matrix analyzed, the Sb concentration in the sample, the radioisotope measured, and the decay time used for the measurements. The analysis of the components that contribute to the uncertainty of the Sb concentration indicated that the largest contribution comes from the counting statistics of the sample. The findings also showed that the value of the combined standard uncertainty depends on the Sb radioisotope measured and the decay time used for counting. This study showed that NAA is a very adequate method for Sb determination in environmental samples, furnishing results with low uncertainty values.
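The finding that counting statistics dominate the uncertainty budget follows from the Poisson character of radioactive decay: the variance of a count equals the count itself. A minimal sketch of the net-peak-area uncertainty, generic rather than the thesis' full uncertainty budget:

```python
import numpy as np

def net_peak_uncertainty(gross, background):
    """Counting-statistics uncertainty of a net gamma-ray peak area.

    Counts are Poisson distributed, so var = counts; for
    net = gross - background the variances add:
        s_net = sqrt(gross + background).
    Returns net counts, absolute and relative standard uncertainty.
    """
    net = gross - background
    s_net = np.sqrt(gross + background)
    return net, s_net, s_net / net
```

This also explains why the combined uncertainty depends on the decay time: waiting longer reduces interfering activity (e.g. 24Na) but also reduces the 122Sb counts, inflating the relative counting uncertainty.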
470

A geometrical framework for forecasting cost uncertainty in innovative high value manufacturing

Schwabe, Oliver January 2018 (has links)
Increasing competition and regulation are raising the pressure on manufacturing organisations to innovate their products. Innovation is fraught with significant uncertainty about whole product life cycle costs, and this can lead to hesitance to invest, which may result in a loss of competitive advantage. A product is innovative precisely when the minimum information needed to create accurate cost models through contemporary forecasting methods does not exist. The scientific research challenge is that no forecasting methods are available for which cost data from only one time period suffices. The aim of this research study was to develop a framework for forecasting cost uncertainty using cost data from only one time period. The developed framework consists of components that prepare minimum information for conversion into a future uncertainty range, forecast a future uncertainty range, and propagate the uncertainty range over time. The uncertainty range is represented as a vector space describing the state space of actual cost variance for 3 to n reasons; the dimensionality of that space is reduced through vector addition, and a series of basic operators is applied to the aggregated vector in order to create a future state space of probable cost variance. The framework was validated through three case studies drawn from the United States Department of Defense. The novelty of the framework lies in the use of geometry to increase the number of insights drawn from cost data from only one time period, and in the propagation of cost uncertainty based on the geometric shape of uncertainty ranges. To demonstrate its benefits to industry, the framework was implemented at an aerospace manufacturing company for identifying potentially inaccurate cost estimates in early stages of the whole product life cycle.
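The vector-space representation described above can be illustrated with a toy aggregation step: per-reason cost-variance contributions, encoded here as hypothetical 2-D vectors, are summed into a single resultant whose magnitude and direction summarize the overall variance. The 2-D encoding is an assumption for illustration only, not the framework's actual geometry.

```python
import numpy as np

def aggregate_variance_vectors(variances):
    """Collapse per-reason cost variances into one aggregate vector.

    Each row of `variances` is a cost-variance contribution attributed
    to one reason (hypothetical encoding: [magnitude, time-phasing]).
    Vector addition reduces the n-reason state space to a single
    resultant; its norm and angle summarize the overall variance.
    """
    v = np.asarray(variances, dtype=float)
    resultant = v.sum(axis=0)
    magnitude = float(np.linalg.norm(resultant))
    angle = float(np.degrees(np.arctan2(resultant[1], resultant[0])))
    return resultant, magnitude, angle
```

Further operators (scaling, rotation) could then be applied to the resultant to project a future state space, in the spirit of the propagation step the abstract describes.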
