1 
Performance of multistate Markov modulated queuing in ATM networks
Yousef, Sufian Yacoub Salameh, January 1998 (has links)
No description available.

2 
Modelling and inference for a class of doubly stochastic point processes
Wei, Gang, January 1995 (has links)
No description available.

3 
Fractional Poisson Process in Terms of Alpha-Stable Densities
Cahoy, Dexter Odchigue, 06 June 2007 (has links)
No description available.

4 
Merton Jump-Diffusion Modeling of Stock Price Data
Tang, Furui, January 2018 (has links)
In this thesis, we investigate two stock price models, the Black-Scholes (BS) model and the Merton Jump-Diffusion (MJD) model. Comparing the logarithmic returns of the BS model and the MJD model with empirical stock price data, we conclude that the Merton Jump-Diffusion model is substantially more suitable for the stock market. This conclusion is based not only on a visual comparison of the density functions but also on an analysis of the mean, variance, skewness and kurtosis of the log-returns. One technical contribution of the thesis is a suggested decision rule for the initial guess in a maximum likelihood estimation of the MJD model parameters.
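The moment comparison described above can be sketched numerically: simulate log-returns from a Merton jump-diffusion and check that, unlike a pure Black-Scholes diffusion, they show excess kurtosis. All parameter values below are illustrative assumptions, not the estimates from the thesis.

```python
import math
import random
import statistics

random.seed(42)

def mjd_log_returns(n, dt=1 / 252, mu=0.1, sigma=0.2,
                    lam=1.0, mu_j=-0.05, sigma_j=0.1):
    """Simulate n log-returns from a Merton jump-diffusion.

    Each return is a Gaussian diffusion increment plus a Poisson(lam*dt)
    number of normally distributed jumps (parameters are illustrative).
    """
    drift = (mu - 0.5 * sigma ** 2) * dt
    out = []
    for _ in range(n):
        r = drift + sigma * math.sqrt(dt) * random.gauss(0, 1)
        # Knuth's method for a Poisson(lam*dt) number of jumps
        n_jumps, p, target = 0, 1.0, math.exp(-lam * dt)
        while True:
            p *= random.random()
            if p <= target:
                break
            n_jumps += 1
        for _ in range(n_jumps):
            r += random.gauss(mu_j, sigma_j)
        out.append(r)
    return out

def excess_kurtosis(xs):
    """Sample excess kurtosis; zero for a normal distribution."""
    m = statistics.fmean(xs)
    v = statistics.fmean([(x - m) ** 2 for x in xs])
    k = statistics.fmean([(x - m) ** 4 for x in xs]) / v ** 2
    return k - 3.0

rets = mjd_log_returns(50_000)
print("mean:", statistics.fmean(rets))
print("excess kurtosis:", excess_kurtosis(rets))  # positive: heavier tails than BS
```

The positive excess kurtosis is the jump contribution; setting `lam=0` recovers the Black-Scholes log-returns, whose excess kurtosis is close to zero.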

5 
Modelagem para dados de parasitismo / Modelling parasitism data
Mota, João Mauricio Araújo, 23 September 2005 (has links)
Experiments with distinct aims have been conducted in order to study the parasitism mechanism. Bioassays to find optimum conditions for parasite production and to define strategies for inundative releases in the field are very common. For example, the number of parasitized eggs depends on factors such as species, host type and density, adult longevity, parasite density, type of food, temperature and humidity. Hence, the objective of a specific assay can be to study the behaviour of the response variable as a function of the number of parasitoids, the number of hosts, or the type of food. In general, the observable variables are counts, random sums of random variables, or proportions with fixed or random denominators. The standard distribution used to model counts is the Poisson, while the binomial is standard for proportions. These distributions generally do not fit data generated by the parasitism process, because their assumptions are not satisfied, and alternative models that account for the mechanism of avoiding superparasitism have appeared in the literature. Some of them assume that the escape probability (avoidance of superparasitism) is a function of the number of eggs already present in the host (Bakker, 1967, 1972; Rogers, 1975; Griffiths, 1977); others do not consider such a process (Daley; Maindonald, 1989; Griffiths, 1977). Others include the selective behaviour of the parasite in choosing the host and the ability of the host to attract the parasite (Hemerik et al., 2002). Some of these models appeared independently, others as generalizations, so a study highlighting their common points is of interest.
In this work, 19 probability models from the literature are studied to explain the distribution of the number of eggs laid by a parasite on a specific host. As an initial result, the equivalence between some of these models is shown. It is also shown that a model used by Faddy (1997) for estimating animal population size generalizes 18 of them. The properties of this model are presented and discussed. The equivalence between the Janardan and Faddy models is an original and interesting result. The use of Faddy's model as a general probability model for the distribution of the number of eggs in a parasite-host system is the main theoretical result of this thesis.
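The poor fit of the Poisson distribution mentioned above is often diagnosed with the dispersion index (variance over mean), which equals 1 under a Poisson model; superparasitism avoidance typically produces under- or over-dispersion. A minimal sketch with hypothetical egg counts (not the thesis data):

```python
import statistics

# Hypothetical egg counts per host (illustrative, not from the thesis data)
counts = [0, 0, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 5, 7, 9]

mean = statistics.fmean(counts)
var = statistics.variance(counts)  # sample variance
dispersion = var / mean            # approximately 1 under a Poisson model

print(f"mean={mean:.2f}  variance={var:.2f}  dispersion={dispersion:.2f}")
if dispersion > 1.5:
    print("over-dispersed: a plain Poisson model is questionable")
```

A dispersion index far from 1 is what motivates the alternative models surveyed in the thesis; a formal test would compare n times the dispersion index against a chi-square distribution.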

6 
Reverse Auction Bidding - Bid Arrivals Analysis
Yuan, Shu, 16 December 2013 (has links)
Reverse Auction Bidding (RAB) is a recently developed procurement method that can be used by the construction industry. The technique differs from a traditional auction system in that the RAB bidding activity is completed anonymously by pre-qualified bidders during a fixed auction time. The basic premise of the auction is that the current best price is visible throughout the auction to both the bidders and the owner. The apparent incentive is for non-competitive bidders to lower the price. There are, however, controlling factors beyond the reach of owners, such as market demand, lending restrictions, stakeholder expectations and risk tolerance levels, that impact price levels. Nonetheless, owners continue to attempt to drive down prices using this technique.
A study into the mechanics of RAB was launched at Texas A&M University in 2004 and continues to this time, with eighteen case studies completed. This nineteenth study looks at the time series of bid data from some of the prior work. Nine case studies were selected from the previous work; these provided 6,674 RAB bid arrivals untainted by prior investigator actions. This study concerns the statistical process of bid arrivals over time.
The hypothesis tested is that the timing of RAB bid arrivals can be modeled by a statistical process. The analysis reviewed the fit of several types of distribution, including Gaussian and Poissonian. The best fit was provided by a non-homogeneous Poisson process (NHPP). The first conclusion from the analysis is therefore that RAB bid arrivals follow an NHPP; the second is that the controlling Poisson intensity has a square-root form. The NHPP model for RAB provides a tool for future studies of RAB in real time. Future work is suggested on the inter-arrival times of the bidding.
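A non-homogeneous Poisson process such as the one fitted above can be simulated by thinning (Lewis-Shedler): generate candidates from a homogeneous process at the maximum intensity and accept each with probability proportional to the intensity at its time. The square-root intensity and its constant below are illustrative assumptions, not estimates from the case studies.

```python
import math
import random

random.seed(1)

def nhpp_arrivals(rate, T):
    """Simulate a non-homogeneous Poisson process on [0, T] by thinning.

    rate(t) must be bounded above by rate(T) here (monotone intensity),
    as with a square-root intensity of the kind suggested by the fit.
    """
    lam_max = rate(T)
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(lam_max)     # candidate from homogeneous PP
        if t > T:
            break
        if random.random() < rate(t) / lam_max:  # accept w.p. rate(t)/lam_max
            arrivals.append(t)
    return arrivals

# Illustrative square-root intensity (the constant 2.0 is hypothetical)
arr = nhpp_arrivals(lambda t: 2.0 * math.sqrt(t), T=100.0)
# Expected count = integral of 2*sqrt(t) over [0, 100] = (4/3)*1000, about 1333
print(len(arr))
```

Under this intensity, arrivals become denser as the auction progresses, which is the qualitative pattern an end-loaded bidding session would show.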

8 
Topologie algébrique de complexes simpliciaux aléatoires et applications aux réseaux de capteurs / Algebraic topology of random simplicial complexes and applications to sensor networks
Ferraz, Eduardo, 22 February 2012 (has links)
This thesis has two main parts. Part I uses stochastic analysis to provide bounds for the overload probability of different systems by means of concentration inequalities. Although the results are general, we apply them to real wireless networks such as WiMax and multiclass user traffic in an OFDMA system.
In Part II, we establish connections between the topology of the coverage of a sensor network and the topology of its corresponding simplicial complex. These connections highlight new aspects of Betti numbers, the number of k-simplices, and the Euler characteristic. We then use algebraic topology in conjunction with stochastic analysis, assuming that the sensor positions are a realization of a Poisson point process. As a consequence we obtain, in d dimensions, the statistics of the number of k-simplices and of the Euler characteristic, as well as upper bounds for the distribution of the Betti numbers. We also prove that the number of k-simplices tends to a Gaussian distribution as the density of sensors grows, with a known convergence rate. Finally, we restrict ourselves to one dimension, where the problem becomes equivalent to solving an M/M/1/1 preemptive queue. We obtain analytical results for quantities such as the distribution of the number of connected components and the probability of complete coverage.
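The one-dimensional coverage setting discussed above can be illustrated directly: drop sensors on a segment by a Poisson process, give each a fixed coverage radius, and compute the connected components of the covered region (their number is the zeroth Betti number). Parameters are hypothetical; this is a sketch of the model, not the thesis analysis.

```python
import random

random.seed(7)

def coverage_components(length, density, radius):
    """Place a Poisson(density * length) number of sensors uniformly on
    [0, length]; each covers [x - radius, x + radius]. Return the merged
    connected components of the covered region (illustrative 1-D sketch)."""
    # Draw the Poisson count via exponential inter-point gaps
    n, t = 0, random.expovariate(density)
    while t < length:
        n += 1
        t += random.expovariate(density)
    # Given the count, Poisson points are i.i.d. uniform on the segment
    xs = sorted(random.uniform(0, length) for _ in range(n))
    components = []
    for x in xs:
        lo, hi = max(0.0, x - radius), min(length, x + radius)
        if components and lo <= components[-1][1]:
            # overlaps the previous interval: merge
            components[-1] = (components[-1][0], max(components[-1][1], hi))
        else:
            components.append((lo, hi))
    return components

comps = coverage_components(length=100.0, density=1.0, radius=1.0)
covered = sum(hi - lo for lo, hi in comps)
print(f"{len(comps)} components, {covered:.1f} covered out of 100")
```

Repeating this over many realizations would give empirical versions of the quantities the thesis derives analytically, such as the distribution of the number of components and the probability of complete coverage.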

9 
Analysis And Optimization Of Queueing Models With Markov Modulated Poisson Input
Hemachandra, Nandyala, 06 1900 (has links) (PDF)
No description available.

10 
Experimentation on dynamic congestion control in Software Defined Networking (SDN) and Network Function Virtualisation (NFV)
Kamaruddin, Amalina Farhan, January 2017 (has links)
In this thesis, a novel framework for dynamic congestion control is proposed. The study concerns congestion control in broadband communication networks. Congestion results when demand temporarily exceeds capacity, leading to severe degradation of Quality of Service (QoS) and possibly loss of traffic. Since traffic is stochastic in nature, high demand may arise anywhere in a network and cause congestion. There are different ways to mitigate the effects of congestion: by rerouting, by aggregation to take advantage of statistical multiplexing, and by discarding too-demanding traffic, which is known as admission control. This thesis tries to accommodate as much traffic as possible and studies the effect of routing and aggregation on a rather general mix of traffic types. Software Defined Networking (SDN) and Network Function Virtualisation (NFV) are concepts that allow dynamic configuration of network resources by decoupling control from payload data and allocating network functions to the most suitable physical node. This allows the implementation of a centralised control that takes the state of the entire network into account and configures nodes dynamically to avoid congestion. It is assumed that node controls can be expressed as commands supported by OpenFlow v1.3. Due to state dependencies in space and time, the network dynamics are very complex, so a simulation approach is used. The load in the network depends on many factors, such as traffic characteristics, the traffic matrix, the topology and node capacities. To be able to study the impact of the control functions, some parts of the environment, such as the topology and node capacities, are fixed, and the traffic distribution in the network is statistically averaged over randomly generated traffic matrices. The traffic consists of approximately equal intensities of smooth, bursty and long-memory traffic.
An algorithm is designed that routes traffic and configures queue resources so that delay is minimised; delay is chosen as the optimisation parameter because it is additive and real-time applications are delay-sensitive. The optimisation is studied with respect to both total end-to-end delay and maximum end-to-end delay. The delay is used for the link weights, and paths are determined by Dijkstra's algorithm. Furthermore, nodes are configured to serve the traffic optimally, which in turn depends on the routing. The proposed algorithm is a fixed-point system of equations that iteratively evaluates routing, aggregation and delay until an equilibrium point is found. Three strategies are compared: static node configuration, in which each queue is allocated 1/3 of the node resources and no aggregation is used; aggregation of real-time (taken as smooth and bursty) traffic onto the same queue; and dynamic aggregation based on the entropy of the traffic streams and their aggregates. The simulation study shows good results, with gains of 10-40% in the QoS parameters, and demonstrates the positive effects of the proposed routing and aggregation strategy and the usefulness of the algorithm. The proposed algorithm constitutes the central control logic, and the resulting control actions are realisable through the SDN/NFV architecture.
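The delay-weighted routing step described above rests on Dijkstra's algorithm with per-link delays as edge weights. The following is a minimal sketch over a hypothetical toy topology, not the thesis implementation; in the fixed-point scheme the returned delays would feed back into the queue configuration, which in turn updates the link weights.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path delays from src, with link delays as edge weights.

    graph: {node: [(neighbour, delay), ...]}
    Returns {node: minimum total delay from src}.
    """
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy topology with per-link delays (hypothetical values)
net = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
    "D": [],
}
print(dijkstra(net, "A"))  # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}
```

Because delay is additive along a path, it is a valid Dijkstra weight, which is one reason the thesis selects it as the optimisation parameter.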
