451

Aspects of cash-flow valuation

Armerin, Fredrik January 2004
This thesis consists of five papers. In the first two papers we consider a general approach to cash flow valuation, focusing on dynamic properties of the value of a stream of cash flows. The third paper discusses immunization theory, where old results are shown to hold in general deterministic models, but often fail to be true in stochastic models. In the fourth paper we comment on the connection between arbitrage opportunities and an immunized position. Finally, in the last paper we study coherent and convex measures of risk applied to portfolio optimization and insurance.
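As background for the terminology of the last paper (standard material, not drawn from the thesis itself): a coherent risk measure in the sense of Artzner et al. (1999) is a functional ρ on random outcomes satisfying the four axioms below, and a convex measure of risk relaxes the last two axioms to a single convexity requirement.

```latex
% Standard axioms for a coherent risk measure \rho,
% for random outcomes X, Y and constants m, \lambda \ge 0:
\begin{align*}
\text{Monotonicity:}           &\quad X \le Y \;\Rightarrow\; \rho(X) \ge \rho(Y)\\
\text{Translation invariance:} &\quad \rho(X + m) = \rho(X) - m\\
\text{Positive homogeneity:}   &\quad \rho(\lambda X) = \lambda\,\rho(X)\\
\text{Subadditivity:}          &\quad \rho(X + Y) \le \rho(X) + \rho(Y)\\[4pt]
\text{Convexity (convex risk measures):} &\quad
  \rho(\lambda X + (1-\lambda)Y) \le \lambda\,\rho(X) + (1-\lambda)\,\rho(Y),
  \quad 0 \le \lambda \le 1
\end{align*}
```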
452

Types and levels of data arrangement and representation in statistics as modeled by grade 4 - 7 learners

Wessels, Helena Margaretha 28 February 2006
The crucial role of representation in mathematical and statistical modeling and problem solving, as evident in learners' arrangement and representation of statistical data, was investigated with focus points data arrangement, data representation and statistical thinking levels. The representation tasks required learners to arrange and represent data through modeling, focusing on spontaneous representations. Successful transnumeration determines the ultimate success of a representation, and the ability to organise data is regarded as critical. Arrangement types increased in sophistication with grade level, and the hierarchical nature of arrangement types became apparent when regarded in the context of an adapted SOLO Taxonomy framework. A higher-level arrangement strategy pointed to a higher SOLO level of statistical thinking. Learners in the two tasks produced a rich variety of representations, which included idiosyncratic, unsophisticated responses as well as standard statistical representations. The context of the two tasks, the quantitative versus qualitative nature of the data in the tasks, and the statistical tools or representational skills learners have at their disposal played an important role in their representations. Well-planned data handling activities develop representational and higher-order thinking skills. The variety of responses and the different response levels elicited in the two tasks indicate that the nature of the tasks, rather than the size of the data set, plays a conclusive role in data tasks. Multiple representations by an individual were an indication of successful modeling; they are effective in problem solving and are associated with good performance. The SOLO model, which incorporates a structural approach as well as a multimodal component, proved valuable in the analysis of responses. Using this model, with accompanying acknowledgement of different problem-solving paths and the contribution of ikonic support in the concrete symbolic mode, promotes the in-depth analysis of responses. This study contributes to the research in the field of data representation and statistical thinking. The analysis and results led to an integrated picture of Grade 4-7 learners' representation of statistical data and of the statistical thinking levels evident in their representations. / Educational Studies / D. Ed. (Didactics)
453

Stochastic Modeling and Simulation of the TCP protocol

Olsén, Jörgen January 2003
The success of the current Internet relies to a large extent on a cooperation between the users and the network. The network signals its current state to the users by marking or dropping packets. The users then strive to maximize the sending rate without causing network congestion. To achieve this, the users implement a flow-control algorithm that controls the rate at which data packets are sent into the Internet. More specifically, the Transmission Control Protocol (TCP) is used by the users to adjust the sending rate in response to changing network conditions. TCP uses the observation of packet loss events and estimates of the round trip time (RTT) to adjust its sending rate.

In this thesis we investigate and propose stochastic models for TCP. The models are used to estimate network performance measures such as throughput, link utilization, and packet loss rate. The first part of the thesis introduces the TCP protocol and contains an extensive TCP modeling survey that summarizes the most important TCP modeling work. Reviewed models are categorized as renewal theory models, fixed-point methods, fluid models, processor sharing models or control theoretic models. The merits of each category are discussed, and guidelines are given for which framework to use in future TCP modeling.

The second part of the thesis contains six papers on TCP modeling. Within the renewal theory framework we propose single-source TCP-Tahoe and TCP-NewReno models. We investigate the performance of these protocols in both a DropTail and a RED queuing environment. The aspects of TCP performance that are inherently dependent on the actual implementation of the flow-control algorithm are singled out from what depends on the queuing environment.

Using the fixed-point framework, we propose models that estimate packet loss rate and link utilization for a network with multiple TCP-Vegas, TCP-SACK and TCP-Reno on/off sources. The TCP-Vegas model is novel and is the first model capable of estimating the network's operating point for TCP-Vegas sources sending on/off traffic. All TCP and network models in the contributed research papers are validated via simulations with the network simulator ns-2.

This thesis serves both as an introduction to TCP and as an extensive orientation about state-of-the-art stochastic TCP models.
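For orientation on what a renewal-theory TCP model typically delivers, the classical square-root formula of Mathis et al. (1997) is a standard reference point; it is quoted here as general background, not as a result of this thesis.

```latex
% Square-root throughput formula (Mathis et al., 1997):
% B   = steady-state TCP throughput,
% MSS = maximum segment size, RTT = round trip time,
% p   = packet loss probability.
\[
  B \;\approx\; \frac{MSS}{RTT}\,\sqrt{\frac{3}{2p}}
\]
```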
454

Empirical Bayes Methods for DNA Microarray Data

Lönnstedt, Ingrid January 2005
The cDNA microarray is one of the first high-throughput gene expression technologies that emerged within molecular biology for the purpose of functional genomics. cDNA microarrays compare the gene expression levels between cell samples, for thousands of genes simultaneously.

The microarray technology offers new challenges when it comes to data analysis, since the thousands of genes are examined in parallel, but with very few replicates, yielding noisy estimates of gene effects and variances. Although careful image analyses and normalisation of the data are applied, traditional methods for inference like the Student t or Fisher's F-statistic fail to work.

In this thesis, four papers on the topics of empirical Bayes and full Bayesian methods for two-channel microarray data (such as cDNA) are presented. These contribute to showing that empirical Bayes methods are useful for overcoming the specific data problems. The sample distributions of all the genes involved in a microarray experiment are summarized into prior distributions, which improves the inference for each single gene.

The first part of the thesis includes the biological and statistical background of cDNA microarrays, with an overview of the different steps of two-channel microarray analysis, including experimental design, image analysis, normalisation, cluster analysis, discrimination and hypothesis testing. The second part of the thesis consists of the four papers. Paper I presents the empirical Bayes statistic B, which corresponds to a t-statistic. Paper II is based on a version of B that is extended for linear model effects. Paper III assesses the performance of empirical Bayes models by comparisons with full Bayes methods. Paper IV provides extensions of B to what corresponds to F-statistics.
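The shrinkage idea behind such empirical Bayes statistics can be sketched with a moderated t-statistic in the style of Smyth (2004), a close relative of the B-statistic above. The sketch below is illustrative only: the prior parameters d0 and s0_sq are hypothetical, and in practice they would be estimated from the full ensemble of genes.

```python
import numpy as np

def moderated_t(means, sample_vars, n, d0, s0_sq):
    """Empirical Bayes moderated t: each gene's sample variance is
    shrunk towards a common prior variance s0_sq carrying d0 prior
    degrees of freedom, stabilising inference when replicates are few."""
    df = n - 1
    s_tilde_sq = (d0 * s0_sq + df * sample_vars) / (d0 + df)  # posterior variance
    return means / np.sqrt(s_tilde_sq / n)

# Toy data: 1000 genes with 3 replicate log-ratios each.
rng = np.random.default_rng(42)
m = rng.normal(0.0, 0.5, size=(1000, 3))
t_mod = moderated_t(m.mean(axis=1), m.var(axis=1, ddof=1),
                    n=3, d0=4.0, s0_sq=0.25)  # hypothetical prior values
```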
458

Risk properties and parameter estimation on mean reversion and Garch models

Sypkens, Roelf 09 1900
Most of the notation and terminological conventions used in this thesis are statistical. The aim in risk management is to describe the risk factors present in time series. In order to group these risk factors, one needs to distinguish between different stochastic processes and put them into different classes. The risk factors discussed in this thesis are fat tails and mean reversion. The presence of these risk factors first needs to be established in the historical dataset. I will refer to the historical dataset as the original dataset. The Ljung-Box-Pierce test will be used in this thesis to determine whether the distribution of the original dataset exhibits mean reversion or not. / Mathematical Sciences / M.Sc. (Applied Mathematics)
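As a concrete illustration of the diagnostic named in the abstract, a minimal Ljung-Box statistic can be computed directly from the sample autocorrelations. This is generic background code, not code from the dissertation:

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(x, lags):
    """Ljung-Box Q statistic and p-value for the null hypothesis that
    the first `lags` autocorrelations of x are jointly zero."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    # Sample autocorrelations at lags 1..lags.
    acf = np.array([np.sum(xc[k:] * xc[:-k]) / denom for k in range(1, lags + 1)])
    q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, lags + 1)))
    return q, chi2.sf(q, df=lags)

# Sanity check: white noise should give a large p-value.
rng = np.random.default_rng(1)
print(ljung_box(rng.normal(size=500), lags=10))
```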
459

A comparison of support vector machines and traditional techniques for statistical regression and classification

Hechter, Trudie 04 1900
Thesis (MComm)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: Since its introduction in Boser et al. (1992), the support vector machine has become a popular tool in a variety of machine learning applications. More recently, the support vector machine has also been receiving increasing attention in the statistical community as a tool for classification and regression. In this thesis support vector machines are compared to more traditional techniques for statistical classification and regression. The techniques are applied to data from a life assurance environment for a binary classification problem and a regression problem. In the classification case the problem is the prediction of policy lapses using a variety of input variables, while in the regression case the goal is to estimate the income of clients from these variables. The performance of the support vector machine is compared to that of discriminant analysis and classification trees in the case of classification, and to that of multiple linear regression and regression trees in regression, and it is found that support vector machines generally perform well compared to the traditional techniques.
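A small-scale analogue of such a comparison is easy to set up with scikit-learn; the synthetic data below is a stand-in for the life assurance variables, which are not reproduced in this listing.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary problem standing in for the policy-lapse data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("SVM (RBF)", SVC(kernel="rbf")),
                  ("Discriminant analysis", LinearDiscriminantAnalysis()),
                  ("Classification tree", DecisionTreeClassifier(max_depth=5))]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy {clf.score(X_te, y_te):.3f}")
```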
460

Edgeworth Expansion and Saddle Point Approximation for Discrete Data with Application to Chance Games

Basna, Rani January 2010
We investigate two mathematical tools, the Edgeworth series expansion and the saddle point method, which are approximation techniques that help us to estimate the distribution function of the standardized mean of independent, identically distributed random variables, taking the lattice case into consideration. We then describe one important application of these mathematical tools: game developing companies can use them to reduce the amount of time needed to satisfy their standard requests before they approve any game.
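For orientation, the first-order Edgeworth correction to the normal approximation of the standardized mean is the following standard expansion (stated for the continuous case; the lattice case treated in the thesis requires additional continuity-correction terms):

```latex
% First-order Edgeworth expansion for the standardized mean of n
% i.i.d. variables with mean \mu, variance \sigma^2 and skewness
% \lambda_3 = E[(X-\mu)^3]/\sigma^3; \Phi and \varphi denote the
% standard normal CDF and density.
\[
  P\!\left(\frac{S_n - n\mu}{\sigma\sqrt{n}} \le x\right)
  \;\approx\; \Phi(x) - \frac{\lambda_3}{6\sqrt{n}}\,(x^2 - 1)\,\varphi(x)
\]
```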
