11

Bastardizing Black-Scholes: The Recovery of Option-Implied Probability Distributions and How They React to Corporate Takeover Announcements

Oetting, Andrew Henry 01 January 2012
The purpose of this paper is threefold. First, the paper builds on the work previously done in the area of option-implied probability distribution functions (PDFs) by extending the methods described by Breeden and Litzenberger (1978) to individual equity options. Second, it describes a closed-form, onto mapping from a two-dimensional volatility surface to the risk-neutral PDF. Lastly, the paper performs an event study on the implied risk-neutral PDFs of companies that are the target of a corporate takeover. While there was not sufficient data to determine any statistical relationship, there is observational evidence that option-market-implied PDFs may be predictive of future takeovers.
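The Breeden and Litzenberger (1978) result that this abstract extends states that the risk-neutral PDF is the discounted second derivative of the call price with respect to strike, f(K) = e^{rT} d2C/dK2. A minimal sketch of that recovery via finite differences follows; the strike grid, rate, and lognormal test prices are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy.stats import norm

def implied_pdf(strikes, call_prices, r, T):
    """Breeden-Litzenberger: risk-neutral PDF = exp(rT) * d2C/dK2,
    approximated with a central second difference on a uniform strike grid."""
    dk = strikes[1] - strikes[0]
    d2c = (call_prices[2:] - 2.0 * call_prices[1:-1] + call_prices[:-2]) / dk**2
    return strikes[1:-1], np.exp(r * T) * d2c

# hypothetical smooth call quotes generated from a lognormal model, for illustration
S0, r, T, sigma = 100.0, 0.02, 0.5, 0.25
K = np.linspace(60, 160, 201)
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
C = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

k_mid, pdf = implied_pdf(K, C, r, T)
dk = k_mid[1] - k_mid[0]
print(f"integral of recovered PDF ~ {(pdf * dk).sum():.3f}")  # should be close to 1
```

On real quotes the call prices would first be smoothed, for example through the volatility surface the paper's two-dimensional mapping works from, before differencing.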
12

A probabilistic pricing model for a company's projects

Malmquist, Daniel January 2012
The company's pricing is often highly impacted by the estimation of competitors' project costs, which is also the main scope of this degree project. The purpose is to develop a pricing model that deals with uncertainties, since these are a main issue in the current pricing process. A pre-study was performed, followed by a model implementation; the model was then analysed before conclusions were drawn. The mainly literature-based pre-study investigated project cost estimation foremost, but also probability distribution functions and pricing as a general concept. Two suitable methods for project cost estimation were identified: Monte Carlo simulation and Hierarchy Probability Cost Analysis. These led to a theoretical project cost estimation model. A model was implemented in Matlab. It treats project cost estimation, but no other pricing aspects. The model was developed from the theoretical one to the extent possible: project costs were broken down into sub-costs, which were fed into a Monte Carlo simulation, and competitors' project costs were estimated using this technique. Analysing the model's accuracy was difficult. It differs from the theoretical model in how probability distribution functions and correlations are estimated; these problems stem from projects with shifting characteristics and from limited data and time. A solid framework has nevertheless been created. Improvement possibilities exist, e.g. more accurate estimates and a model handling other pricing aspects. The major threat is that nobody maintains the model. In any case, estimates are never more than estimates; the model should therefore be viewed as a helpful tool, not an answer.
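The core of the approach described above, breaking a competitor's project into sub-costs and Monte Carlo-simulating the total, can be sketched as follows. The sub-cost breakdown and distributions are hypothetical, and the correlations between sub-costs that the abstract flags as hard to estimate are ignored here:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo draws

# hypothetical sub-costs of a competitor's project, each with its own uncertainty
labour = rng.triangular(80, 100, 140, n)            # optimistic / likely / pessimistic
materials = rng.normal(50, 8, n)
logistics = rng.lognormal(mean=np.log(20), sigma=0.3, size=n)

total = labour + materials + logistics

# percentiles of the simulated total cost inform the price quote
p10, p50, p90 = np.percentile(total, [10, 50, 90])
print(f"P10 = {p10:.1f}, median = {p50:.1f}, P90 = {p90:.1f}")
```

Quoting a price against, say, the P10 of the simulated competitor cost rather than a single point estimate is what makes the model a treatment of uncertainty rather than a deterministic calculation.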
13

An LTE implementation based on a road traffic density model

Rashid, Muhammad Asim January 2013
The increase in vehicular traffic has created new challenges in determining the performance of data services and safety measures in traffic. Traffic signals at intersections are used as cost-effective, time-saving tools for traffic management in urban areas, but signalised intersections in congested urban areas are also a key source of high traffic density and slow traffic. High traffic density reduces the network data rate between vehicle and vehicle and between vehicle and infrastructure. Among the emerging technologies, LTE takes the lead, with good packet delivery and resilience to network changes caused by vehicular movement and density. This thesis analyses an LTE implementation based on a road traffic density model. The aim is to use a probability distribution function to calculate density values and to build a realistic traffic scenario in an LTE network from those values. To analyse the traffic behaviour, the Aimsun simulator was used to represent the real traffic density situation at a model intersection, with field measurements used as input data for a realistic traffic density model. After calibration and validation, close-to-reality results were extracted, and a logistic curve of the probability distribution function was used to determine the density on each section of the intersection. Similar traffic scenarios were implemented in a MATLAB-based LTE system-level simulator. The results cover a full 90-second traffic scenario, with throughput calculated at every traffic signal phase and section. It is evident from the results that the LTE system adapts dynamically to changes in traffic behaviour and allocates more bandwidth where it is most needed.
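A rough sketch of the logistic mapping from measured occupancy to a density value per intersection section, as described above; the steepness, midpoint, and vehicle counts are hypothetical placeholders, not the calibrated values from the thesis:

```python
import math

def logistic_density(occupancy, k=0.8, x0=6.0):
    """Map a vehicle count on a road section to a normalised density in (0, 1)
    via a logistic curve; k is the steepness, x0 the midpoint occupancy."""
    return 1.0 / (1.0 + math.exp(-k * (occupancy - x0)))

# hypothetical vehicle counts on four approaches of a signalised intersection
for section, count in {"north": 2, "south": 7, "east": 10, "west": 5}.items():
    print(f"{section}: density = {logistic_density(count):.2f}")
```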
14

Entropy and Graphs

Changiz Rezaei, Seyed Saeed January 2014
The entropy of a graph is a functional depending both on the graph itself and on a probability distribution on its vertex set. This graph functional originated from the problem of source coding in information theory and was introduced by J. Körner in 1973. Although the notion of graph entropy has its roots in information theory, it was proved to be closely related to some classical and frequently studied graph-theoretic concepts. For example, it provides an equivalent definition for a graph to be perfect and it can also be applied to obtain lower bounds in graph covering problems. In this thesis, we review and investigate three equivalent definitions of graph entropy and its basic properties. Minimum entropy colouring of a graph was proposed by N. Alon in 1996. We study minimum entropy colouring and its relation to graph entropy. We also discuss the relationship between the entropy and the fractional chromatic number of a graph, which was already established in the literature. A graph G is called symmetric with respect to a functional F_G(P) defined on the set of all probability distributions on its vertex set if the distribution P* maximizing F_G(P) is uniform on V(G). Using the combinatorial definition of the entropy of a graph in terms of its vertex packing polytope and the relationship between graph entropy and the fractional chromatic number, we prove that vertex-transitive graphs are symmetric with respect to graph entropy. Furthermore, we show that a bipartite graph is symmetric with respect to graph entropy if and only if it has a perfect matching. As a generalization of this result, we characterize some classes of symmetric perfect graphs with respect to graph entropy. Finally, we prove that the line graph of every bridgeless cubic graph is symmetric with respect to graph entropy.
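For reference, the combinatorial definition over the vertex packing polytope mentioned above is usually written as below; this is the standard form from the literature rather than a quotation from the thesis, with VP(G) the convex hull of the characteristic vectors of the independent sets of G:

```latex
H(G, P) \;=\; \min_{a \in \mathrm{VP}(G)} \; \sum_{v \in V(G)} p_v \log \frac{1}{a_v}
```

Here p_v is the probability of vertex v under P; minimising over the polytope is what ties the entropy to independent sets and hence to colourings and the fractional chromatic number.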
15

Geoarchaeological Investigation of Central Anatolian Caravanserais Using GIS

Ertepinar Kaymakci, Pinar 01 June 2005
This study comprises an analysis of the geological and geomorphological constraints that played a role in the site selection of caravanserais. For this purpose, 15 caravanserais located along a route from Nevşehir, Aksaray, and Konya to Beyşehir were used. The data used in the study include a caravanserai database, lithological maps, and a digital elevation model of the area. The GIS analyses performed are proximity, visibility, and probability distribution analysis (PDA). The first step is the generation of the ancient trade route, which is used as a reference in the other analyses. Results indicate that the average distance between consecutive caravanserais is 10 km. The PDA suggests that there should be two more caravanserais between the Beyşehir and Yunuslar hans and one more between the Obruk and Sultanhanı hans. Caravanserais are very close to a water source but not in its immediate vicinity; groundwater is not considered in this study, and the dominant water sources are streams, springs, and lakes. Their visibility, tested over an area of 78 km2, shows great variation, suggesting that visibility was not a factor in site selection. Ignimbrite, limestone, and marble are the preferred rock types, although other rocks such as clastic rocks are exposed at closer distances.
16

A qualitative model of evolutionary algorithms

Fagan, Francois April 2014
Thesis (MSc)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: Evolutionary Algorithms (EAs) are stochastic techniques, based on the idea of biological evolution, for finding near-optimal solutions to optimisation problems. Due to their generality and computational speed, they have been applied very successfully in a wide range of disciplines. However, as a consequence of their stochasticity and generality, very little has been rigorously established about their performance. Developing models for explaining and predicting algorithmic performance is, in fact, one of the most important challenges facing the field of optimisation. A qualitative version of such a model of EAs is developed in this thesis. There are two paradigms for explaining why EAs are expected to converge toward an optimum. The traditional explanation is that of Universal Darwinism, but an alternative explanation is that they are hill climbing algorithms which utilise all possible escape strategies — restarting local search, stochastic search and acceptance of non-improving solutions. The combination of the hill climbing property and the above escape strategies leads to a fast algorithm that is able to avoid premature convergence. Due to the difficulty in mathematically or empirically explaining the performance of EAs, terms such as exploitation, exploration, intensity and diversity are routinely employed for this purpose. Six prevalent views on exploitation and exploration are identified in the literature, each expressing a different facet of these notions. The coherence of these views is substantiated by their deducibility from the proposed novel definitions of exploitation and exploration. This substantiation is based on a novel hypothetical construct, namely that of a Probable Fitness Landscape (PFL), which both unifies and clarifies the surrounding terminology and our understanding of the performance of EAs. The PFL is developed into a qualitative model of EAs by extending it to the notion of an Ideal Probability Distribution (IPD). This notion, along with the criteria of diversity and computational speed, forms a method for judging the performance of EA operators. It is used to explain why the principal operators of EAs, namely mutation and selection, are effective. There are three main types of EAs, namely Genetic Algorithms (GAs), Evolution Strategies and Evolutionary Programming, each of which employ their own unique operators. Important facets of the crossover operator (which is particular to GAs) are identified, such as: opposite step vectors, genetic drift and ellipsoidal parent-centred probability distributions with variance proportional to the distance between parents. The shape of the crossover probability distribution motivates a comparison with a novel continuous approximation of mutation, which reveals very similar underlying distributions, although for crossover the distribution is adaptive whereas for mutation it is fixed. The PFL and IPD are used to analyse the crossover operator, the results of which are contrasted with the traditional explanations of the Schema Theorem and Building Block Hypothesis as well as the Evolutionary Progress Principle and Genetic Repair Hypothesis. It emerges that the facetwise nature of the PFL extracts more sound conclusions than the other explanations which, falsely, attempt to prove GAs to be superior. 
The use of facetwise and qualitative models is justified by their success in explaining EA performance. It is argued that the most promising direction for further research on EAs is to move away from comparative studies and the derivation of so-called equations of motion, and instead to pursue the development of scientifically grounded, facetwise models of algorithmic performance.
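A minimal sketch of the two principal operators the thesis analyses, mutation and selection, inside a simple (mu + lambda) evolutionary loop; the sphere fitness function and all parameter values are placeholders rather than anything prescribed by the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Placeholder fitness: minimise the sphere function."""
    return np.sum(x**2)

mu, lam, dim, sigma = 10, 40, 5, 0.3
pop = rng.normal(0, 2, (mu, dim))  # initial parent population

for gen in range(100):
    # mutation: each offspring is a parent plus Gaussian noise (a fixed distribution,
    # in contrast to crossover's adaptive, parent-centred distribution)
    parents = pop[rng.integers(0, mu, lam)]
    offspring = parents + rng.normal(0, sigma, (lam, dim))
    # selection: keep the mu best of parents and offspring
    # (the hill-climbing property; stochastic mutation provides the escapes)
    union = np.vstack([pop, offspring])
    fitness = np.array([sphere(x) for x in union])
    pop = union[np.argsort(fitness)[:mu]]

print(f"best fitness after 100 generations: {sphere(pop[0]):.2e}")
```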
17

The Normal Curve Approximation to the Hypergeometric Probability Distribution

Willman, Edward N. (Edward Nicholas)
The classical normal curve approximation to cumulative hypergeometric probabilities requires that the standard deviation of the hypergeometric distribution be larger than three, which limits the usefulness of the approximation for small populations. The purposes of this study are to develop clearly defined rules specifying when the normal curve approximation to the cumulative hypergeometric probability distribution may be successfully utilized, and to determine where maximum absolute differences of 0.01 and 0.05 between the cumulative hypergeometric distribution and its normal curve approximation occur in relation to the proportion of the population sampled.
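The approximation under study replaces the cumulative hypergeometric probability with a continuity-corrected normal: with population size N, K population successes, and sample size n, mu = nK/N, sigma^2 = n(K/N)(1 - K/N)(N - n)/(N - 1), and P(X <= x) is approximated by Phi((x + 0.5 - mu)/sigma). A sketch comparing the two follows; the parameter values are arbitrary, chosen so that sigma falls below three, the small-population regime the study targets:

```python
from math import sqrt
from scipy.stats import hypergeom, norm

N, K, n = 50, 20, 15          # population, successes in population, sample size
mu = n * K / N
sigma = sqrt(n * (K / N) * (1 - K / N) * (N - n) / (N - 1))  # ~1.60, below three

for x in range(3, 10):
    exact = hypergeom.cdf(x, N, K, n)             # cumulative hypergeometric
    approx = norm.cdf((x + 0.5 - mu) / sigma)     # continuity-corrected normal
    print(f"x={x}: exact={exact:.4f} approx={approx:.4f} |diff|={abs(exact - approx):.4f}")
```

Tabulating the maximum of |exact - approx| over the support, as a function of the sampling fraction n/N, is the kind of computation the study's 0.01 and 0.05 thresholds summarise.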
18

The Fokker-Planck equation for polynomial potentials

Santos, Saiara Fabiana Menezes dos January 2018
Advisor: Elso Drigo Filho / Abstract: The objective of this work is to study the Fokker-Planck equation mapped onto a Schrödinger-type equation. Supersymmetry is then used to solve some polynomial potentials, finding the probability distribution P(x,t) and the passage time between potential barriers; from these data, the proposed physical system can be better understood. / Master's thesis
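The mapping referred to in the abstract is, in its standard form, the following: for a Fokker-Planck equation whose drift derives from a potential Phi(x) (polynomial in this thesis), a substitution turns it into a Schrödinger-type equation whose effective potential has the supersymmetric shape W^2 - W'. Sketched here in units with unit diffusion constant; the thesis's own notation may differ:

```latex
\frac{\partial P}{\partial t}
  = \frac{\partial}{\partial x}\!\left[\frac{\partial P}{\partial x} + \Phi'(x)\,P\right],
\qquad
P(x,t) = e^{-\Phi(x)/2}\,\psi(x,t)
\;\Longrightarrow\;
-\frac{\partial \psi}{\partial t}
  = \left[-\frac{\partial^{2}}{\partial x^{2}} + W^{2}(x) - W'(x)\right]\psi,
\qquad
W(x) = \frac{\Phi'(x)}{2}.
```

The factorised form W^2 - W' is exactly what supersymmetric quantum mechanics operates on, which is why the thesis can borrow its solution techniques.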
19

Models of urns and lotteries

Oliveira, Paulo Roberto de 11 July 2014
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Many people play lottery games monthly, others weekly, ignoring the randomness of the results and trusting in luck or in strategies sold to them in books about games. This monograph aims to present some concepts of probability and statistics unexplored in high school, as well as everyday situations involving mathematical concepts of probability accessible at that level of education, showing some mathematical theory applied in practice to games. The concepts discussed here are: some probability distributions, their expectation and variance, as well as lottery games and their probability calculations. The probability distributions are stated and calculated in situations created from models of urns with two colours of balls, green always being the colour whose extraction is considered a success and red the colour whose extraction is considered a failure. Sometimes extractions are made with replacement of the balls and sometimes without; there is also the case in which new balls of both colours, or of one colour only, are added.
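The three urn schemes described above, drawing with replacement (binomial), drawing without replacement (hypergeometric), and adding extra balls of the drawn colour (a Pólya-type urn), can be simulated directly; the ball counts below are illustrative. With a symmetric urn all three schemes share the same mean, so it is the variance that separates them:

```python
import random
import statistics

def draw(green, red, n_draws, replace=True, add=0):
    """Draw n_draws balls from an urn with `green` and `red` balls.
    replace=True               -> binomial scheme;
    replace=False              -> hypergeometric scheme;
    replace=True with add > 0  -> Polya-type urn (add balls of the drawn colour).
    Returns the number of green balls drawn (successes)."""
    g, r, successes = green, red, 0
    for _ in range(n_draws):
        if random.random() < g / (g + r):
            successes += 1
            g = g - 1 if not replace else g + add
        else:
            r = r - 1 if not replace else r + add
    return successes

random.seed(1)
trials = 20_000
for name, kwargs in [("binomial", dict(replace=True)),
                     ("hypergeometric", dict(replace=False)),
                     ("Polya (add 1)", dict(replace=True, add=1))]:
    samples = [draw(5, 5, 4, **kwargs) for _ in range(trials)]
    print(f"{name}: mean={statistics.mean(samples):.3f}, "
          f"var={statistics.variance(samples):.3f}")
```

Sampling without replacement shrinks the variance relative to the binomial, while the Pólya scheme inflates it, which is the reinforcement effect the urn models are meant to illustrate.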
20

Stochastic simulation of climate elements to estimate sugarcane productivity

Simone Toni Ruiz Correa 22 February 2013
Simulation models should be adjusted so that the values of the parameters and input variables provide results that best represent the observed values. Thus, given the inaccuracies to which the adjusted results are subject, it is necessary to implement methods for evaluating the uncertainties in either the crop parameters or the input variables of the model. For sugarcane, the maximum sugar productivity occurs when both Pol and stem biomass (TCH) are maximised. The aims of this study were: (i) to verify the adherence of three probability distributions (normal, gamma, and Weibull) to daily data of average air temperature and solar radiation in Piracicaba, SP; (ii) to simulate air temperature and solar radiation through the bivariate normal, gamma, and Weibull distributions, and precipitation through the gamma distribution; (iii) to propose a model, using a deterministic approach for the genotype of interest, to characterize the temporal variation in dry-matter growth of sugarcane, estimating the order of magnitude of the useful period of industrialization, the date of maximum sucrose yield (expressed as TPH), and the potential and water-depleted productivities of sugarcane in two production environments; (iv) to estimate the variability of the potential productivity, the water-depleted productivity, and TPH (plant-cane cycle) through a stochastic procedure for the climate elements (temperature, photosynthetically active radiation, and precipitation). The simulations using the bivariate normal and gamma distributions are appropriate because they better represent the climate elements; the model for estimating potential, depleted, and sucrose productivities showed satisfactory results for the proposed objectives (deterministic approach); yield variability was similar in magnitude for the simulations using the bivariate normal and gamma distributions, and showed a tendency to overestimate productivity for simulations using the Weibull distribution (stochastic approach to the climate elements).
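Step (i) of the study, testing the adherence of the normal, gamma, and Weibull distributions to daily weather data, can be sketched with standard maximum-likelihood fitting and a Kolmogorov-Smirnov statistic; the synthetic temperature series below is a stand-in, since the Piracicaba data are not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# synthetic stand-in for daily mean air temperature (deg C); real data not reproduced
temps = rng.normal(loc=24.0, scale=2.5, size=365)

candidates = {
    "normal":  stats.norm,
    "gamma":   stats.gamma,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(temps)                              # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(temps, dist.cdf, args=params)
    print(f"{name:8s} KS={ks_stat:.4f} p={p_value:.3f}")
```

Note that a KS test with parameters estimated from the same sample is only indicative; a Lilliefors-type correction would be needed for formal p-values.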
