191

Optimal Design and Inference for Correlated Bernoulli Variables using a Simplified Cox Model

Bruce, Daniel January 2008 (has links)
This thesis proposes a simplification of the model for dependent Bernoulli variables presented in Cox and Snell (1989). The simplified model, referred to as the simplified Cox model, is developed for identically distributed and dependent Bernoulli variables. Properties of the model are presented, including expressions for the loglikelihood function and the Fisher information. The special case of a bivariate symmetric model is studied in detail. For this particular model, it is found that the number of design points in a locally D-optimal design is determined by the log-odds ratio between the variables. Under mutual independence, both a general expression for the restrictions of the parameters and an analytical expression for locally D-optimal designs are derived. Focusing on the bivariate case, score tests and likelihood ratio tests are derived to test for independence. Numerical illustrations of these test statistics are presented in three examples. In connection with testing for independence, an E-optimal design for maximizing the local asymptotic power of the score test is proposed. The simplified Cox model is applied to a dental data set. Based on the estimates of the model, optimal designs are derived. The analysis shows that these optimal designs yield considerably more precise parameter estimates compared to the original design. The original design is also compared against the E-optimal design with respect to the power of the score test. For most alternative hypotheses the E-optimal design provides larger power than the original design.
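For orientation, a standard way to parameterize two exchangeable (identically distributed, dependent) Bernoulli variables through a common marginal probability and a log-odds ratio is sketched below, together with the log-likelihood and the D-optimality criterion the abstract refers to. This is a generic textbook parameterization, not necessarily the exact form of the simplified Cox model.

```latex
% Bivariate symmetric Bernoulli variables (Y_1, Y_2) with cell probabilities
% pi_{jk} = P(Y_1 = j, Y_2 = k), common marginal p and symmetry pi_{10} = pi_{01}:
%   pi_{11} + pi_{10} = p,   pi_{00} = 1 - pi_{11} - 2*pi_{10}.
\[
\lambda \;=\; \log\frac{\pi_{11}\,\pi_{00}}{\pi_{10}\,\pi_{01}}
\qquad\text{(log-odds ratio; } \lambda = 0 \text{ under independence)}
\]
\[
\ell(\theta) \;=\; \sum_{j,k\in\{0,1\}} n_{jk}\,\log \pi_{jk}(\theta),
\qquad
\mathcal{I}(\theta) \;=\; -\,\mathrm{E}\!\left[\frac{\partial^{2}\ell}{\partial\theta\,\partial\theta^{\top}}\right].
\]
% A locally D-optimal design \xi places weights w(x) on design points x so as to
% maximize det( sum_x w(x) I(theta; x) ) at a guessed parameter value theta.
```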
193

Algorithmic Trading : Hidden Markov Models on Foreign Exchange Data

Idvall, Patrik, Jonsson, Conny January 2008 (has links)
In this master's thesis, hidden Markov models (HMMs) are evaluated as a tool for forecasting movements in a currency cross. With an increasingly electronic market making way for more automated, so-called algorithmic trading, there is a constant need for new trading strategies that try to find alpha, the excess return, in the market. HMMs are based on the well-known theory of Markov chains, but the states are assumed to be hidden, governing some observable output. HMMs have mainly been used for speech recognition and communication systems, but have lately also been applied to financial time series with encouraging results. Both discrete and continuous versions of the model are tested, as well as single- and multivariate input data. In addition to the basic framework, two extensions are implemented in the belief that they will further improve the prediction capabilities of the HMM. The first is a Gaussian mixture model (GMM), where each state is assigned a set of Gaussian components that are weighted together to replicate the density function of the stochastic process. This opens up for modeling the non-normal distributions often assumed for foreign exchange data. The second is an exponentially weighted expectation maximization (EWEM) algorithm, which takes time attenuation into consideration when re-estimating the parameters of the model. This allows old trends to be kept in mind while more recent patterns are given more attention. Empirical results show that the HMM using continuous emission probabilities can, for some model settings, generate acceptable returns with Sharpe ratios well over one, whilst the discrete version in general performs poorly. The GMM therefore seems to be a highly needed complement to the HMM for this application. The EWEM, however, does not improve results as one might have expected. Our general impression is that the predictor using HMMs that we have developed and tested is too unstable to be adopted as a trading tool on foreign exchange data, with too many factors influencing the results. More research and development is called for.
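As a rough illustration of the continuous-emission approach described above, the sketch below fits a Gaussian HMM to log-returns of a currency cross and decodes the hidden regime sequence. It uses the hmmlearn package and synthetic data, neither of which is mentioned in the thesis; the GMM emissions and the EWEM re-estimation are not reproduced here.

```python
# Minimal sketch: Gaussian HMM on synthetic FX log-returns (not the thesis's exact model).
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumes hmmlearn is installed

rng = np.random.default_rng(0)
# Synthetic stand-in for daily log-returns of a currency cross.
returns = np.concatenate([
    rng.normal(0.0002, 0.004, 500),   # calm regime
    rng.normal(-0.0005, 0.012, 300),  # volatile regime
]).reshape(-1, 1)

# Two hidden states with Gaussian emissions, estimated by EM (Baum-Welch).
model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
model.fit(returns)

states = model.predict(returns)                 # Viterbi-decoded regime per observation
next_state_probs = model.transmat_[states[-1]]  # one-step-ahead regime distribution
print("state means:", model.means_.ravel())
print("P(next state):", next_state_probs)
```

A trading rule would then map the predicted regime (or the regime-weighted expected return) to a long/short/flat position, which is where model instability of the kind reported above becomes visible.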
194

Estimation of Nonlinear Dynamic Systems : Theory and Applications

Schön, Thomas B. January 2006 (has links)
This thesis deals with estimation of states and parameters in nonlinear and non-Gaussian dynamic systems. Sequential Monte Carlo methods are mainly used to this end. These methods rely on models of the underlying system, motivating some developments of the model concept. One of the main reasons for the interest in nonlinear estimation is that problems of this kind arise naturally in many important applications. Several applications of nonlinear estimation are studied. The models most commonly used for estimation are based on stochastic difference equations, referred to as state-space models. This thesis is mainly concerned with models of this kind. However, there will be a brief digression from this, in the treatment of the mathematically more intricate differential-algebraic equations. Here, the purpose is to write these equations in a form suitable for statistical signal processing. The nonlinear state estimation problem is addressed using sequential Monte Carlo methods, commonly referred to as particle methods. When there is a linear sub-structure inherent in the underlying model, this can be exploited by the powerful combination of the particle filter and the Kalman filter, presented by the marginalized particle filter. This algorithm is also known as the Rao-Blackwellized particle filter and it is thoroughly derived and explained in conjunction with a rather general class of mixed linear/nonlinear state-space models. Models of this type are often used in studying positioning and target tracking applications. This is illustrated using several examples from the automotive and the aircraft industry. Furthermore, the computational complexity of the marginalized particle filter is analyzed. The parameter estimation problem is addressed for a relatively general class of mixed linear/nonlinear state-space models. The expectation maximization algorithm is used to calculate parameter estimates from batch data. In devising this algorithm, the need to solve a nonlinear smoothing problem arises, which is handled using a particle smoother. The use of the marginalized particle filter for recursive parameter estimation is also investigated. The applications considered are the camera positioning problem arising from augmented reality and sensor fusion problems originating from automotive active safety systems. The use of vision measurements in the estimation problem is central to both applications. In augmented reality, the estimates of the camera’s position and orientation are imperative in the process of overlaying computer generated objects onto the live video stream. The objective in the sensor fusion problems arising in automotive safety systems is to provide information about the host vehicle and its surroundings, such as the position of other vehicles and the road geometry. Information of this kind is crucial for many systems, such as adaptive cruise control, collision avoidance and lane guidance.
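As a concrete illustration of the sequential Monte Carlo machinery discussed above, the sketch below runs a plain bootstrap particle filter on a classic scalar nonlinear benchmark model. It is a generic textbook example, not the marginalized (Rao-Blackwellized) particle filter of the thesis, which additionally propagates the linear sub-structure with Kalman filters; the model parameters and noise levels are illustrative.

```python
# Bootstrap particle filter for x_{t} = 0.5 x_{t-1} + 25 x_{t-1}/(1+x_{t-1}^2) + 8 cos(1.2 (t-1)) + w_t,
# y_t = x_t^2 / 20 + e_t  (a standard nonlinear benchmark; parameters are illustrative).
import numpy as np

rng = np.random.default_rng(1)
T, N = 100, 500                       # time steps, number of particles
q, r = 1.0, 1.0                       # process and measurement noise variances

def f(x, t):                          # deterministic part of the state transition
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t)

def h(x):                             # measurement function
    return x**2 / 20

# Simulate a trajectory to have something to filter.
x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = f(x_true[t-1], t-1) + rng.normal(0, np.sqrt(q))
    y[t] = h(x_true[t]) + rng.normal(0, np.sqrt(r))

particles = rng.normal(0, 2, N)
x_est = np.zeros(T)
for t in range(1, T):
    particles = f(particles, t-1) + rng.normal(0, np.sqrt(q), N)   # propagate
    w = np.exp(-0.5 * (y[t] - h(particles))**2 / r) + 1e-300       # weight by likelihood
    w /= w.sum()
    x_est[t] = np.dot(w, particles)                                # posterior-mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]              # multinomial resampling

print("RMSE:", np.sqrt(np.mean((x_est - x_true)**2)))
```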
195

Multi-antenna Relay Beamforming with Per-antenna Power Constraints

Xiao, Qiang 27 November 2012 (has links)
Multi-antenna relay beamforming is a promising candidate for next-generation wireless communication systems. The assumption of a sum power constraint at the relay in previous work is often unrealistic in practice, since each antenna of the relay is limited by its own front-end power amplifier and thus has its own individual power constraint. In this thesis, given per-antenna power constraints, we obtain a semi-closed-form solution for the optimal relay beamforming design in two-hop amplify-and-forward relay beamforming and establish its duality with the point-to-point single-input multiple-output (SIMO) beamforming system. Simulation results show that the per-antenna power constraint case has much lower per-antenna peak power and much smaller variance of per-antenna power usage than the sum-power constraint case. A heuristic iterative algorithm to minimize the total power of the relay network is also proposed.
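The difference between a sum power constraint and per-antenna constraints can be made concrete with a small numerical check: for an amplify-and-forward relay with weighting matrix W and received-signal covariance R, the power emitted by antenna i is the i-th diagonal entry of W R W^H. The sketch below, with arbitrary illustrative numbers, compares a common sum-power scaling against a conservative per-antenna scaling; it is not the semi-closed-form design derived in the thesis.

```python
# Per-antenna vs. sum power scaling for a relay weighting matrix (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
M = 4                                     # relay antennas
P_total = 4.0                             # sum power budget
P_ant = np.full(M, P_total / M)           # equal per-antenna budgets (assumed values)

W = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))   # unscaled relay weights
R = np.eye(M)                             # covariance of the relay's received signal

per_antenna_power = np.real(np.diag(W @ R @ W.conj().T))

# Sum-power scaling: meets the total budget, but single antennas may exceed P_ant.
W_sum = W * np.sqrt(P_total / per_antenna_power.sum())
p_sum = np.real(np.diag(W_sum @ R @ W_sum.conj().T))

# Per-antenna scaling: every antenna is forced under its own budget
# (a conservative common scaling, not the optimal per-antenna design).
W_pa = W * np.sqrt(np.min(P_ant / per_antenna_power))
p_pa = np.real(np.diag(W_pa @ R @ W_pa.conj().T))

print("per-antenna powers, sum-power scaling  :", np.round(p_sum, 3))
print("per-antenna powers, per-antenna scaling:", np.round(p_pa, 3))
```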
197

Ramsey Pricing In Turkey Postal Services

Ozugur, Ozgur 01 September 2003 (has links) (PDF)
This study aims to provide an empirical investigation of postal services pricing in Turkey by computing Ramsey prices and examining the sensitivity of Ramsey prices to changes in demand and cost parameters. In this study, the Ramsey pricing problem is stated as maximizing a welfare function subject to the Post Office attaining a certain degree of profitability. The conditions necessary for the Post Office to be able to price efficiently have implications for Ramsey pricing. We estimate demand functions and the cost structure of letters and express mail using data from Turkish Postal Services. The robustness of the Ramsey rule is assessed under alternative estimates of demand and, similarly, in the absence of reliable data, under alternative intervals of marginal cost. Ramsey prices for two letter categories and the welfare gains of moving from the existing pricing structure to Ramsey pricing are determined and examined. Sensitivity analysis indicates that the existing policy is not Ramsey optimal and that this is a fairly robust result.
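The computation described above rests on the standard Ramsey-Boiteux inverse-elasticity rule, reproduced below in its textbook form for independent demands; the thesis applies this kind of rule with demand and cost estimates for Turkish letter and express mail services.

```latex
% Ramsey-Boiteux pricing: maximize welfare subject to a profit (break-even) constraint.
% For services i with price p_i, marginal cost c_i and own-price elasticity
% \varepsilon_i (independent demands), the first-order conditions give
\[
\frac{p_i - c_i}{p_i} \;=\; \frac{\lambda}{1+\lambda}\,\frac{1}{\varepsilon_i},
\qquad i = 1,\dots,n,
\]
% where \lambda is the Lagrange multiplier on the profit constraint, chosen so that
% total profit equals the required level. Markups are therefore inversely
% proportional to the demand elasticities of the individual services.
```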
198

Learning with Feed-forward Neural Networks: Three Schemes to Deal with the Bias/Variance Trade-off

Romero Merino, Enrique 30 November 2004 (has links)
In terms of the Bias/Variance decomposition, very flexible (i.e., complex) Supervised Machine Learning systems may lead to unbiased estimators but with high variance. A rigid model, in contrast, may lead to small variance but high bias. There is a trade-off between the bias and variance contributions to the error, at which the optimal performance is achieved. In this work we present three schemes related to the control of the Bias/Variance decomposition for Feed-forward Neural Networks (FNNs) with the (sometimes modified) quadratic loss function:

1. An algorithm for sequential approximation with FNNs, named Sequential Approximation with Optimal Coefficients and Interacting Frequencies (SAOCIF). Most of the sequential approximations proposed in the literature select the new frequencies (the non-linear weights) guided by the approximation of the residue of the partial approximation. We propose a sequential algorithm where the new frequency is selected taking into account its interactions with the previously selected ones. The interactions are discovered by means of their optimal coefficients (the linear weights). A number of heuristics can be used to select the new frequencies. The aim is that the same level of approximation may be achieved with fewer hidden units than if we only try to match the residue as well as possible. In terms of the Bias/Variance decomposition, it becomes possible to obtain simpler models with the same bias. The idea behind SAOCIF can be extended to approximation in Hilbert spaces, maintaining orthogonal-like properties. In that setting, the importance of the interacting frequencies lies in the expectation of increasing the rate of approximation. Experimental results show that the idea of interacting frequencies allows better approximations to be constructed than matching the residue.

2. A study and comparison of different criteria to perform Feature Selection (FS) with Multi-Layer Perceptrons (MLPs) and the Sequential Backward Selection (SBS) procedure within the wrapper approach. FS procedures control the Bias/Variance decomposition by means of the input dimension, establishing a clear connection with the curse of dimensionality. Several critical decision points are studied and compared: first, the stopping criterion; second, the data set where the value of the loss function is measured; and finally, two ways of computing the saliency (i.e., the relative importance) of a feature, namely first training a network and then removing every feature temporarily, or training a different network with every feature temporarily removed. The experiments are performed for linear and non-linear models. Experimental results suggest that the increase in computational cost associated with retraining a different network with every feature temporarily removed, prior to computing the saliency, can be rewarded with a significant performance improvement, especially if non-linear models are used. Although this idea might seem very intuitive, it has hardly been used in practice. Regarding the data set where the value of the loss function is measured, it seems clear that the SBS procedure for MLPs benefits from measuring the loss function on a validation set. A somewhat non-intuitive conclusion is drawn from the stopping criterion, where it can be seen that forcing overtraining may be as useful as early stopping.

3. A modification of the quadratic loss function for classification problems, inspired by Support Vector Machines (SVMs) and the AdaBoost algorithm, named the Weighted Quadratic Loss (WQL) function. The modification consists in weighting the contribution of every example to the total error. In the linearly separable case, the solution of the hard-margin SVM also minimizes the proposed loss function. The hardness of the resulting solution can be controlled, as in SVMs, so that this scheme may also be used for the non-linearly separable case. The error weighting proposed in WQL forces the training procedure to pay more attention to the points with a smaller margin. Variance is therefore controlled by not attempting to overfit the points that are already well classified. The model shares several properties with the SVM framework, with some additional advantages. On the one hand, the final solution is neither restricted to an architecture with as many hidden units as points (or support vectors) in the data set nor to the use of kernel functions, and the frequencies are not restricted to be a subset of the data set. On the other hand, it allows multiclass and multilabel problems to be handled in a natural way. Experimental results are shown confirming these claims.

A wide range of experimental work has been carried out with the proposed schemes, including artificial data sets, well-known benchmark data sets and two real-world problems from the Natural Language Processing domain. In addition to widely used activation functions, such as the hyperbolic tangent or the Gaussian function, other activation functions have been tested. In particular, sinusoidal MLPs showed very good behavior. The experimental results can be considered very satisfactory. The schemes presented in this work have been found to be very competitive when compared to other existing schemes described in the literature. In addition, they can be combined with each other, since they deal with complementary aspects of the whole learning process.
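As a rough sketch of the error-weighting idea in the third scheme, the snippet below computes a margin-dependent weight for each training example and plugs it into a weighted quadratic loss. The exponential weight function and the beta parameter are illustrative assumptions, not the exact WQL weighting defined in the thesis.

```python
# Illustrative weighted quadratic loss: small-margin examples get larger weights.
import numpy as np

def weighted_quadratic_loss(f_out, y, beta=2.0):
    """f_out: network outputs, y: targets in {-1, +1}, beta: illustrative hardness knob."""
    margin = y * f_out                    # large and positive when well classified
    w = np.exp(-beta * margin)            # emphasize small/negative margins (assumed form)
    w = w / w.sum()                       # normalize so the weights sum to one
    return np.sum(w * (f_out - y) ** 2)

# Toy usage: confidently classified points contribute little to the weighted error.
y = np.array([+1, +1, -1, -1])
f_out = np.array([0.9, 0.1, -0.95, 0.3])  # the last point is misclassified
print(weighted_quadratic_loss(f_out, y))
```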
199

Neutrino velocity measurement with the OPERA experiment in the CNGS beam

Brunetti, Giulia 20 May 2011 (has links) (PDF)
The thesis concerns the measurement of the neutrino velocity with the OPERA experiment in the CNGS beam. Different theoretical models allow for Lorentz-violating effects that can be investigated with measurements on terrestrial neutrino beams. The MINOS experiment published in 2007 a measurement on muon neutrinos over a distance of 730 km, finding a deviation with respect to the expected time of flight of 126 ns, with a statistical error of 32 ns and a systematic error of 64 ns. The OPERA experiment also observes muon neutrinos 730 km away from the source, with a sensitivity significantly better than MINOS thanks to the higher number of interactions in the detector, due to the higher energy beam, and to the much more sophisticated timing system explicitly upgraded in view of the neutrino velocity measurement. This system is composed of atomic cesium clocks and GPS receivers operating in "common view mode". Thanks to this system a time transfer between the two sites with a precision at the level of 1 ns is possible. Moreover, a Fast Waveform Digitizer was installed along the proton beam line at CERN in order to measure the internal time structure of the proton pulses sent to the CNGS target. The result on the neutrino velocity is the most precise measurement so far with terrestrial neutrino beams: the neutrino time of flight was determined with a statistical uncertainty of about 10 ns and a systematic uncertainty smaller than 20 ns.
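For scale, the relative velocity deviation implied by a time-of-flight offset over this baseline can be read off directly; using the 730 km baseline and, as an example, the 126 ns MINOS deviation quoted above, the first-order relation gives:

```latex
\[
T_c \;=\; \frac{d}{c} \;=\; \frac{7.3\times 10^{5}\,\mathrm{m}}{3.0\times 10^{8}\,\mathrm{m/s}}
\;\approx\; 2.44\ \mathrm{ms},
\qquad
\frac{v-c}{c} \;\approx\; \frac{\delta t}{T_c}
\;=\; \frac{126\ \mathrm{ns}}{2.44\ \mathrm{ms}} \;\approx\; 5\times 10^{-5}.
\]
```

A combined timing uncertainty at the 20–30 ns level therefore probes relative deviations of order 10^-5, which is the regime the upgraded OPERA timing chain was designed for.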
200

The bioeconomic analysis of shark bycatch in tuna fishery

Chiou, Mei-Jing 27 January 2011 (has links)
Because the number of sharks is decreasing, the problem has received growing attention in recent years. This study investigates the shark bycatch problem, taking bigeye tuna as the target species and sharks as the bycatch species. Atlantic bigeye tuna and shark catch data from 2000 to 2007 are first collected for resource assessment, and the results are compared with the equilibrium resource levels of an open-access fishery model and a net-present-value-maximizing fishery model. During this period bigeye tuna resources show an upward trend, but they have not yet reached the optimal level. Shark resources show a diminishing trend, which indicates that the stock will face an extinction crisis if the fishery is not well managed. A sensitivity analysis is then performed to understand the effects of exogenous parameters on bigeye tuna and shark catches, resources and fishing effort. Finally, regarding the shark bycatch problem, the effect of improved fishing gear on bycatch control is examined through a sensitivity analysis of the fishing hook cost and the bycatch coefficient with respect to shark resources and catches. The results show that, whether acting through the fishing hook cost or the bycatch coefficient, shark bycatch can be reduced effectively by adopting improved fishing gear.
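A minimal illustration of the contrast between open-access and profit-maximizing equilibria is the textbook Gordon-Schaefer surplus-production model sketched below. It is a generic single-species example with made-up parameter values, not the bioeconomic model or data used in the thesis.

```python
# Gordon-Schaefer model: open-access vs. static profit-maximizing equilibrium.
# Growth: dX/dt = r X (1 - X/K) - q E X ; harvest H = q E X ; profit = p H - c E.
# All parameter values below are illustrative, not estimates from the thesis.
r, K, q = 0.3, 1.0e6, 1.0e-4     # intrinsic growth, carrying capacity, catchability
p, c = 5.0, 200.0                # price per unit catch, cost per unit effort

X_oa = c / (p * q)                    # open-access (bionomic) stock: profit driven to zero
E_oa = (r / q) * (1 - X_oa / K)       # open-access equilibrium effort
E_opt = E_oa / 2                      # static profit-maximizing effort (zero discount rate)
X_opt = K * (1 - q * E_opt / r)       # corresponding equilibrium stock

for label, X, E in [("open access", X_oa, E_oa), ("profit max", X_opt, E_opt)]:
    H = q * E * X
    print(f"{label:12s} stock={X:10.0f} effort={E:8.0f} catch={H:8.0f} profit={p*H - c*E:10.0f}")
```

The comparison makes the abstract's point concrete: unregulated (open-access) effort dissipates all rent and leaves a much smaller stock than the profit-maximizing effort level.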
