41

Geostatistics for constrained variables: positive data, compositions and probabilities. Applications to environmental hazard monitoring

Tolosana Delgado, Raimon 19 December 2005 (has links)
This thesis presents an estimation procedure for the distribution of regionalized variables whose sample space and scale admit a Euclidean structure. We apply the principle of working on coordinates: choose an orthonormal basis, do statistics on the coordinates of the observations in that basis, and apply the output to the basis to recover a result within the original space. Applied to regionalized variables, this yields a unified, consistent method with the same properties as classical linear kriging, valid for several sample spaces: real data, positive data and compositions (vectors of positive components summing to a constant) are treated as particular cases. In this way we generalize linear kriging and offer a solution to several well-known problems of non-linear kriging, by adapting the measure of the space and the averaging criterion (the way means are computed) to the data. The estimator obtained for positive variables is a weighted geometric mean, equivalent to estimating the median, which has none of the drawbacks of classical lognormal kriging. For compositional data, equivalent results are obtained, and they also serve to treat multinomial probability vectors. Combined with a preliminary Bayesian estimation, kriging for compositions becomes a consistent alternative to indicator kriging, without its order-relation problems (i.e. the rather frequent negative estimates of some probabilities). These techniques are validated by studying the ammonia pollution hazard at an automatic water quality control station in the Tordera basin, a small Mediterranean river. Only the proposed techniques make it possible to assess when secondary pollution by ammonia exceeds the existing legal threshold.
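A minimal sketch of the working-on-coordinates principle for positive data, using hypothetical concentrations and weights (not taken from the thesis): statistics are done on log coordinates and the result is mapped back, which turns a weighted arithmetic mean into a weighted geometric mean.

```python
import numpy as np

def positive_kriging_estimate(values, weights):
    """Weighted estimate of a positive regionalized variable.

    Working on coordinates: take logs (the natural coordinates of the
    positive real line), average there with the kriging weights, and map
    back with exp. The result is a weighted geometric mean, which
    estimates the median rather than the arithmetic mean.
    """
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalise to sum to 1
    return np.exp(np.dot(weights, np.log(values)))

# Hypothetical ammonium concentrations (mg/l) at neighbouring locations
obs = [0.08, 0.12, 0.35, 0.05]
# Hypothetical kriging weights (e.g. solved from an ordinary kriging system)
w = [0.4, 0.3, 0.2, 0.1]

print(positive_kriging_estimate(obs, w))   # weighted geometric mean
print(np.dot(w, obs))                      # arithmetic mean, for contrast
```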
42

Anbudsstrategi vid offentlig upphandling : Beslutsmodell vid analys av anbud och prissättning hos Permobil AB / Bidding strategy in public procurement: A decision model for the analysis of bids and pricing at Permobil AB

Krohn, Lisa, Henriksson, Julia January 2015 (has links)
The aim of the study has been to investigate whether a decision model can be applied to a problem concerning bidding strategy in public procurement. When a company has customers covered by public procurement law, special rules apply to tenders, and it is important for the company to know these rules when submitting bids. Knowing the rules is usually not enough to win a procurement, however: the product's comparison price must also be lower than the competitors'. A decision model, based on data from earlier procurements, has been developed to underpin a tool for bidding strategy. The model is built from several techniques: a decision matrix, a decision tree, the lognormal distribution and expected monetary value. The approach, following the design science methodology, consisted of collecting information and data through interviews and other sources such as literature, articles, theses and procurement records. From a general decision matrix, a decision tree and the associated calculations, the decision model was derived (see the sketch after this abstract). It is primarily intended to underpin a tool for a supplier of electric wheelchairs, but could also be applied to other types of procurement. The model can support a tool used by decision makers, who should nevertheless not rely only on information from earlier procurements but also analyse the competitors' present situation. Keywords: Decision model, public procurement, bidding prices, comparison prices, decision maker, decision matrix, decision tree, lognormal distribution, expected monetary value
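A minimal sketch of how a decision model of this kind might combine a lognormal model of the lowest competing comparison price with an expected monetary value (EMV) calculation; the distribution parameters, the cost figure and the price grid are hypothetical, not values from the study.

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical lognormal model of the lowest competing comparison price,
# which in a real application would be fitted to earlier procurement data.
mu, sigma = np.log(52_000.0), 0.15
competitor_price = lognorm(s=sigma, scale=np.exp(mu))

own_cost = 45_000.0                       # hypothetical cost of supplying

def expected_monetary_value(bid):
    """EMV of a bid: probability of underbidding the competitors
    multiplied by the margin earned if the bid wins."""
    p_win = competitor_price.sf(bid)      # P(lowest competitor price > bid)
    return p_win * (bid - own_cost)

# Scan candidate bid prices and pick the one with the highest EMV.
candidates = np.linspace(46_000, 60_000, 141)
best = max(candidates, key=expected_monetary_value)
print(f"best bid {best:.0f}, EMV {expected_monetary_value(best):.0f}")
```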
43

Análise bayesiana objetiva para as distribuições normal generalizada e lognormal generalizada / Objective Bayesian analysis for the generalized normal and generalized lognormal distributions

Jesus, Sandra Rêgo de 21 November 2014 (has links)
The generalized normal (GN) and generalized lognormal (logGN) distributions are flexible enough to accommodate features of the data that are not captured by traditional distributions such as the normal and the lognormal, respectively. They are regarded as tools for reducing the influence of outliers and obtaining robust estimates. Computational difficulties, however, have long been the main obstacle to their effective use. This work proposes a Bayesian reference analysis methodology for estimating the parameters of the GN and logGN models. The reference prior for a possible ordering of the model parameters is obtained, and it is shown to lead to a proper posterior distribution for all the proposed models. Markov chain Monte Carlo (MCMC) methods are developed for inference. To detect possibly influential observations, a case-by-case Bayesian influence analysis based on the Kullback-Leibler divergence is used. In addition, a scale mixture of uniforms representation of the GN and logGN distributions is exploited as an alternative that allows efficient Gibbs sampling algorithms to be developed. Simulation studies were performed to analyse the frequentist properties of the estimation procedures, and applications to real data illustrate the use of the proposed models.
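A minimal sketch of the generalized normal (GN) likelihood that such an analysis builds on, using SciPy's gennorm (the shape parameter recovers the normal at beta = 2 and the Laplace at beta = 1); the data are simulated, and the reference prior and MCMC machinery of the thesis are not reproduced here.

```python
import numpy as np
from scipy.stats import gennorm

rng = np.random.default_rng(0)

# Simulated data from a GN distribution with tails heavier than the normal
# (beta < 2); beta = 2 would give exactly the normal distribution.
true_beta, true_loc, true_scale = 1.3, 10.0, 2.5
x = gennorm.rvs(true_beta, loc=true_loc, scale=true_scale,
                size=500, random_state=rng)

def log_likelihood(params, data):
    """GN log-likelihood; a Bayesian analysis adds log-prior terms to this."""
    beta, loc, scale = params
    if beta <= 0 or scale <= 0:
        return -np.inf
    return gennorm.logpdf(data, beta, loc=loc, scale=scale).sum()

# Maximum likelihood fit as a quick check of the model on the simulated data.
beta_hat, loc_hat, scale_hat = gennorm.fit(x)
print(beta_hat, loc_hat, scale_hat)
print(log_likelihood((beta_hat, loc_hat, scale_hat), x))
```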
44

Reconstrução de energia em calorímetros operando em alta luminosidade usando estimadores de máxima verossimilhança / Reconstruction of energy in calorimeters operating in high-luminosity environments using maximum likelihood estimators

Paschoalin, Thiago Campos 15 March 2016 (has links)
This dissertation presents signal processing techniques for energy estimation in high-energy calorimetry. CERN, one of the most important particle physics research centres, operates the LHC accelerator, which houses the ATLAS experiment. TileCal, an important calorimeter within ATLAS, has a large number of readout channels operating at high event rates. The energy of the particles interacting with the calorimeter is reconstructed by estimating the amplitude of the signal generated in its channels, so accurate noise modelling is essential for designing efficient estimation techniques. As the luminosity (the number of particles reaching the detector per unit time) in TileCal increases, the noise model changes, and the estimation techniques used previously suffer a drop in performance. Modelling this new noise as a lognormal distribution makes it possible to develop a new estimation technique based on maximum likelihood estimation (MLE), improving parameter estimation and leading to a more accurate reconstruction of the signal energy. A new way of assessing estimation quality is also presented, which proves effective and useful in high-luminosity conditions. A comparison between the method used at CERN and the new method shows that the proposed solution performs better and is suitable for the high-luminosity scenario to which TileCal will be subject from 2018 onwards.
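A minimal sketch of amplitude estimation by maximum likelihood under a lognormal noise model, in the spirit described above; the pulse shape, the noise parameters and the assumption that the background enters purely additively are simplifications for illustration, not the TileCal implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import lognorm

rng = np.random.default_rng(1)

# Hypothetical normalised pulse shape sampled at seven points (peak = 1).
g = np.array([0.0, 0.17, 0.56, 1.0, 0.56, 0.17, 0.0])

# Lognormal background noise (assumed parameters) added to the pulse.
noise_sigma, noise_scale = 0.5, 30.0
true_amplitude = 800.0
r = true_amplitude * g + lognorm.rvs(noise_sigma, scale=noise_scale,
                                     size=g.size, random_state=rng)

def negative_log_likelihood(amplitude):
    """-log L(A): residuals r - A*g are modelled as lognormal noise."""
    residual = r - amplitude * g
    if np.any(residual <= 0):            # outside the support of the model
        return np.inf
    return -lognorm.logpdf(residual, noise_sigma, scale=noise_scale).sum()

fit = minimize_scalar(negative_log_likelihood, bounds=(0.0, 2000.0),
                      method="bounded")
print(f"estimated amplitude: {fit.x:.1f} (true value {true_amplitude})")
```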
45

On the calibration of Lévy option pricing models / Izak Jacobus Henning Visagie

Visagie, Izak Jacobus Henning January 2015 (has links)
In this thesis we consider the calibration of models based on Lévy processes to option prices observed in some market. This means that we choose the parameters of the option pricing models such that the prices calculated using the models correspond as closely as possible to these option prices. We demonstrate the ability of relatively simple Lévy option pricing models to nearly perfectly replicate option prices observed in financial markets. We specifically consider calibrating option pricing models to barrier option prices and we demonstrate that the option prices obtained under one model can be very accurately replicated using another. Various types of calibration are considered in the thesis. We calibrate a wide range of Lévy option pricing models to option price data. We consider exponential Lévy models under which the log-return process of the stock is assumed to follow a Lévy process. We also consider linear Lévy models; under these models the stock price itself follows a Lévy process. Further, we consider time-changed models. Under these models time does not pass at a constant rate, but follows some non-decreasing Lévy process. We model the passage of time using the lognormal, Pareto and gamma processes. In the context of time-changed models we consider linear as well as exponential models. The normal inverse Gaussian (NIG) model plays an important role in the thesis. The numerical problems associated with the NIG distribution are explored and we propose ways of circumventing these problems. Parameter estimation for this distribution is discussed in detail. Changes of measure play a central role in option pricing. We discuss two well-known changes of measure: the Esscher transform and the mean correcting martingale measure. We also propose a generalisation of the latter and we consider the use of the resulting measure in the calculation of arbitrage-free option prices under exponential Lévy models. / PhD (Risk Analysis), North-West University, Potchefstroom Campus, 2015
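A minimal sketch of the calibration step itself: choose the model parameters that minimise the squared relative pricing errors against observed option prices. For brevity a Black-Scholes pricer stands in for the Lévy models of the thesis (an NIG pricer would typically work through its characteristic function), and the "market" quotes are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price; stand-in for a Lévy model pricer."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Synthetic "market" quotes: spot 100, rate 2%, 6-month maturity.
S0, r, T = 100.0, 0.02, 0.5
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
market = bs_call(S0, strikes, T, r, 0.25)

def objective(params):
    """Sum of squared relative pricing errors for a parameter vector."""
    (sigma,) = params
    if sigma <= 0:
        return np.inf
    model = bs_call(S0, strikes, T, r, sigma)
    return float(np.sum(((model - market) / market) ** 2))

calibrated = minimize(objective, x0=[0.4], method="Nelder-Mead")
print("calibrated parameters:", calibrated.x)
```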
47

Recherche d'éléments répétés par analyse des distributions de fréquences d'oligonucléotides / Detection of repeated elements by analysis of oligonucleotide frequency distributions

Provencher, Benjamin January 2009 (has links)
Master's thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
48

Developing accident-speed relationships using a new modelling approach

Imprialou, Maria-Ioanna January 2015 (has links)
Changing the speed limit leads to proportional changes in average speeds, which may in turn affect the number of traffic accidents. Evaluating the impact of a speed limit change on the number and severity of accidents is, however, both critical and challenging, primarily because of the unavailability of adequate data and the inherent limitations of existing approaches. Although speed is regarded as one of the main contributory factors in traffic accident occurrence, research findings are inconsistent. Regardless of the robustness of their statistical approaches, accident frequency models typically group accidents by spatial criteria (e.g. accident counts per link, termed here the link-based approach). In the link-based approach, the variability of accidents is explained by highly aggregated average measures of the explanatory variables, which may be inappropriate, especially for time-varying variables such as speed and volume. This thesis re-examines accident-speed relationships by developing a new accident data aggregation method that better represents the road conditions just before accident occurrence, in order to evaluate the impact of a potential speed limit increase on UK motorways (e.g. from 70 mph to 80 mph). In this work, accidents are aggregated according to the similarity of their pre-accident traffic and geometric conditions, forming an alternative accident count dataset termed the condition-based approach. Accident-speed relationships are developed separately and compared for both approaches (link-based and condition-based), using the reported accidents that occurred on the Strategic Road Network of England in 2012 along with traffic and geometric variables. Accident locations were refined using a fuzzy-logic-based algorithm designed for the study area, with an estimated accuracy of 98.9%. The datasets were modelled by injury severity (fatal and serious, or slight) and by number of vehicles involved (single-vehicle or multiple-vehicle) using multivariate Poisson-lognormal regression, with spatial effects for the link-based model, under a full Bayesian inference framework. The results of the condition-based models imply that single-vehicle accidents of all severities, and multiple-vehicle accidents with fatal or serious injuries, increase under higher-speed conditions, particularly when these are combined with lower volumes. Multiple-vehicle slight injury accidents were found to be related not to higher speeds but to congested traffic. The outcomes of the link-based model were almost the opposite, suggesting a negative speed-accident relationship. The differences between the results reveal that data aggregation may be a crucial, yet so far overlooked, methodological aspect of accident data analysis. Using the speed elasticity of motorway accidents derived from the calibrated condition-based models, it is estimated that a 10 mph increase in the UK motorway speed limit (from 70 mph to 80 mph) would result in a 6-12% increase in fatal and serious injury accidents and a 1-3% increase in slight injury accidents.
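A minimal sketch of the condition-based aggregation idea, assuming a table of accidents already matched to their pre-accident traffic conditions (the column names and bin edges are illustrative, not those of the thesis): accidents are counted per traffic-condition bin rather than per road link.

```python
import pandas as pd

# Hypothetical accident records matched to pre-accident traffic conditions.
accidents = pd.DataFrame({
    "link_id":        [101, 101, 102, 103, 103, 103],
    "speed_mph":      [68.0, 45.0, 71.0, 30.0, 66.0, 52.0],
    "flow_veh_per_h": [1200, 3400, 900, 4100, 1500, 2800],
})

# Link-based aggregation: one accident count per road link.
link_counts = accidents.groupby("link_id").size().rename("accidents")

# Condition-based aggregation: one count per pre-accident traffic state.
accidents["speed_bin"] = pd.cut(accidents["speed_mph"],
                                bins=[0, 40, 55, 70, 85])
accidents["flow_bin"] = pd.cut(accidents["flow_veh_per_h"],
                               bins=[0, 1000, 2000, 3000, 4000, 5000])
condition_counts = (accidents
                    .groupby(["speed_bin", "flow_bin"], observed=True)
                    .size()
                    .rename("accidents"))

print(link_counts, condition_counts, sep="\n\n")
```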
49

Statistical Analysis and Mechanistic Modeling of Water Quality: Hillsborough Bay, Florida

Hackett, Keith 01 January 2011 (has links)
Nutrient pollution has been identified as a significant threat to U.S. coastal and estuarine water quality. Though coastal and estuarine waters need nutrients to maintain a healthy, productive ecosystem, excess nutrients can lead to eutrophication. There are significant potential negative consequences associated with eutrophication, including loss of habitat, loss of economic activity, and direct threats to human health. Hillsborough Bay experienced eutrophication in the 1960s and 1970s due to a rapidly growing population and associated increases in nutrient pollution. These eutrophic conditions led to more frequent phytoplankton and macroalgae blooms and declines in seagrasses. To address these problems, a series of actions were taken including legislation limiting nutrient concentrations from domestic wastewater treatment plants, development of water quality and nutrient loading targets, and establishment of seagrass restoration and protection goals. Since the 1970s, water quality improvements and increasing seagrass acreages have been documented throughout Tampa Bay. In the current study, a series of analyses and tools are developed to obtain a more in-depth understanding of water quality in Hillsborough Bay. The first tool is a linked hydrodynamic and water quality model (Environmental Fluid Dynamics Code) of Hillsborough Bay which can be employed to predict water quality responses to proposed management actions. In the second part of the study, a series of water quality indices were evaluated, and the most appropriate index for determining overall water quality in Hillsborough Bay was identified. Chlorophyll a is one of the constituents in the water quality index and is currently used to evaluate annual water quality conditions in Hillsborough Bay. Therefore, the statistical distribution that describes chlorophyll a concentrations in Hillsborough Bay was identified and robust confidence intervals were developed to better understand the uncertainty associated with chlorophyll a measurements. Previous work linked chlorophyll a concentrations in Hillsborough Bay to explanatory variables based on monthly estimates. These relationships were used to develop water quality targets for the system. In this study, the previously developed relationship was revisited, resulting in an improved statistical model that is more robust. This improved model can also be used to evaluate the previously proposed targets and to better predict future changes due to climate change, sea level rise, and management actions. Lastly, a new method was developed to estimate atmospheric temperature in the contiguous United States.
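A minimal sketch, on simulated data, of the kind of distributional analysis described: the abstract does not name the identified distribution, so a lognormal form is assumed here purely for illustration, and a nonparametric bootstrap interval is one simple way to quantify the uncertainty in the mean concentration.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(42)

# Simulated monthly chlorophyll a measurements (ug/l), right-skewed.
chl_a = lognorm.rvs(s=0.6, scale=12.0, size=36, random_state=rng)

# Fit a lognormal with the location fixed at zero, as is usual for
# concentration data.
shape, loc, scale = lognorm.fit(chl_a, floc=0)
print(f"fitted sigma = {shape:.2f}, median = {scale:.1f} ug/l")

# Nonparametric bootstrap confidence interval for the mean concentration.
boot_means = np.array([
    rng.choice(chl_a, size=chl_a.size, replace=True).mean()
    for _ in range(5000)
])
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean {chl_a.mean():.1f} ug/l, 95% CI ({low:.1f}, {high:.1f})")
```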