21

Improved Methods and Selecting Classification Types for Time-Dependent Covariates in the Marginal Analysis of Longitudinal Data

Chen, I-Chen 01 January 2018 (has links)
Generalized estimating equations (GEE) are popularly utilized for the marginal analysis of longitudinal data. In order to obtain consistent regression parameter estimates, these estimating equations must be unbiased. However, when certain types of time-dependent covariates are present, these equations can be biased unless an independence working correlation structure is employed. Moreover, in this case regression parameter estimation can be very inefficient because not all valid moment conditions are incorporated within the corresponding estimating equations. Therefore, approaches using the generalized method of moments or quadratic inference functions have been proposed to utilize all valid moment conditions. However, we have found that such methods will not always provide valid inference and can also be improved upon in terms of finite-sample regression parameter estimation. We therefore propose a modified GEE approach and a selection method that will both ensure the validity of inference and improve regression parameter estimation. These modified approaches assume the data analyst knows the type of time-dependent covariate, although this is likely not the case in practice. Whereas hypothesis testing has been used to determine covariate type, we propose a novel strategy to select a working covariate type in order to avoid the potentially high type II error rates of these hypothesis testing procedures. Parameter estimates resulting from our proposed method are consistent and have overall improved mean squared error relative to hypothesis testing approaches. Finally, because mean regression models may be sensitive to skewness and outliers in the data, we extend our approaches to marginal quantile regression, modeling the conditional quantiles of the response variable. Existing and proposed methods are compared in simulation studies and application examples.
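A minimal sketch of the baseline setup the abstract starts from, not the dissertation's modified method: with an identity link and an independence working correlation structure, the GEE estimating equation sum_i X_i'(y_i - X_i b) = 0 reduces to pooled least squares on the stacked longitudinal observations. All data values below are invented.

```python
# Sketch only: GEE with identity link and independence working correlation
# collapses to pooled least squares over all subject-visit records.

def gee_independence_fit(x, y):
    """Pooled least-squares intercept and slope over stacked visits."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return intercept, slope

# Two hypothetical subjects, three visits each; covariate and response
# stacked over all visits (the independence structure ignores clustering).
x = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
y = [2.1, 4.0, 6.2, 1.9, 4.1, 5.9]
b0, b1 = gee_independence_fit(x, y)
```

Non-independence working structures, and the dissertation's modified equations, change which moment conditions enter the sum; this sketch shows only the unbiased baseline the abstract refers to.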
22

Observed score equating with covariates

Bränberg, Kenny January 2010 (has links)
In test score equating the focus is on the problem of finding the relationship between the scales of different test forms. This can be done only if data are collected in such a way that the effect of differences in ability between groups taking different test forms can be separated from the effect of differences in test form difficulty. In standard equating procedures this problem has been solved by using common examinees or common items. With common examinees, as in the equivalent groups design, the single group design, and the counterbalanced design, the examinees taking the test forms are either exactly the same, i.e., each examinee takes both test forms, or random samples from the same population. Common items (anchor items) are usually used when the samples taking the different test forms are assumed to come from different populations. The thesis consists of four papers, and the main theme in three of them is the use of covariates, i.e., background variables correlated with the test scores, in observed score equating. We show how covariates can be used to adjust for systematic differences between samples in a non-equivalent groups design when there are no anchor items. We also show how covariates can be used to decrease the equating error in an equivalent groups design or in a non-equivalent groups design. The first paper, Paper I, is the only paper whose focus is on something other than the incorporation of covariates in equating. The paper is an introduction to test score equating and presents the author's thoughts on its foundations. There are a number of different definitions of test score equating in the literature. Some of these definitions are presented, and the similarities and differences between them are discussed. An attempt is also made to clarify the connection between the definitions and the most commonly used equating functions. In Paper II a model is proposed for observed score linear equating with background variables.
The idea presented in the paper is to adjust for systematic differences in ability between groups in a non-equivalent groups design by using information from background variables correlated with the observed test scores. It is assumed that, conditional on the background variables, the two samples can be seen as random samples from the same population. The background variables are used to explain the systematic differences in ability between the populations. The proposed model consists of a linear regression model connecting the observed scores with the background variables and a linear equating function connecting observed scores on one test form to observed scores on the other test form. Maximum likelihood estimators of the model parameters are derived under an assumption of normally distributed test scores, and data from two administrations of the Swedish Scholastic Assessment Test are used to illustrate the use of the model. In Paper III we use the model presented in Paper II with two different data collection designs: the non-equivalent groups design (with and without anchor items) and the equivalent groups design. Simulated data are used to examine the effect of including covariates on the bias, variance, and mean squared error of the estimators. With the equivalent groups design the results show that using covariates can increase the accuracy of the equating. With the non-equivalent groups design the results show that using an anchor test together with covariates is the most efficient way of reducing the mean squared error of the estimators. Furthermore, with no anchor test, the background variables can be used to adjust for the systematic differences between the populations and produce unbiased estimators of the equating relationship, provided that the “right” variables are used, i.e., the variables explaining those differences.
In Paper IV we explore the idea of using covariates as a substitute for an anchor test with a non-equivalent groups design in the framework of Kernel Equating. Kernel Equating can be seen as a method including five different steps: presmoothing, estimation of score probabilities, continuization, equating, and calculating the standard error of equating. For each of these steps we give the theoretical results when observations on covariates are used as a substitute for scores on an anchor test. It is shown that we can use the method developed for Post-Stratification Equating in the non-equivalent groups with anchor test design, but with observations on the covariates instead of scores on an anchor test. The method is illustrated using data from the Swedish Scholastic Assessment Test.
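The basic observed-score linear equating function underlying these papers is standard in the equating literature: a form-X score is mapped to the form-Y scale by matching means and standard deviations. A minimal sketch with invented score samples (not data from the thesis):

```python
import math

# Observed-score linear equating: e(x) = mu_Y + (sd_Y / sd_X) * (x - mu_X).
# The covariate-based designs in the thesis refine how mu and sd are
# estimated; the mapping itself has this form.

def linear_equate(x, scores_x, scores_y):
    """Map a form-X score x onto the form-Y scale."""
    def mean(v):
        return sum(v) / len(v)
    def sd(v):
        m = mean(v)
        return math.sqrt(sum((vi - m) ** 2 for vi in v) / len(v))
    mu_x, mu_y = mean(scores_x), mean(scores_y)
    return mu_y + (sd(scores_y) / sd(scores_x)) * (x - mu_x)

form_x = [10, 12, 14, 16, 18]   # hypothetical equivalent-groups samples
form_y = [12, 15, 18, 21, 24]
equated = linear_equate(14.0, form_x, form_y)
```

By construction the mean form-X score maps to the mean form-Y score, and scores one standard deviation above the X mean map one standard deviation above the Y mean.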
23

Communications over noncoherent doubly selective channels

Pachai Kannu, Arun 27 March 2007 (has links)
No description available.
24

MIMO discrete wavelet transform for the next generation wireless systems

Asif, Rameez, Ghazaany, Tahereh S., Abd-Alhameed, Raed, Noras, James M., Jones, Steven M.R., Rodriguez, Jonathan, See, Chan H. January 2013 (has links)
A study is presented into the performance of the Fast Fourier Transform (FFT) and the Discrete Wavelet Transform (DWT), and of MIMO-DWT with transmit beamforming. A feedback loop from the equalizer at the receiver to the transmitter provides channel state information, which is used to construct a steering matrix for the transmission sequence so that the received signals can be combined constructively, providing a reliable and improved system for next-generation wireless systems. While convolution in the time domain corresponds to multiplication in the frequency domain, no such counterpart exists for the symbols in space; transmission therefore entails linear convolution and intersymbol interference (ISI) generation, so both zero-forcing (ZF) and minimum mean squared error (MMSE) equalization have been employed. The results show a substantial performance improvement and, in addition, allow the processing, power, and implementation cost to be kept at the transmitter, which has fewer constraints. The results also show that both equalization algorithms perform alike with wavelets and that the ISI is spread equally between the different wavelet domains.
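For a single flat channel tap the two equalizers mentioned in the abstract have simple closed forms; the following is a hedged, one-tap illustration (not the paper's MIMO-DWT implementation), with an invented channel value:

```python
# ZF inverts the channel outright; MMSE trades residual bias against noise
# enhancement via the noise variance sigma2. As sigma2 -> 0, MMSE -> ZF.

def zf_equalize(y, h):
    return y / h

def mmse_equalize(y, h, sigma2):
    return (h.conjugate() * y) / (abs(h) ** 2 + sigma2)

h = 0.6 + 0.8j          # hypothetical unit-magnitude channel tap
s = 1.0 + 0.0j          # transmitted symbol
y = h * s               # noiseless received sample, for clarity
zf = zf_equalize(y, h)
mmse = mmse_equalize(y, h, sigma2=0.1)
```

On noiseless data ZF recovers the symbol exactly, while MMSE shrinks it slightly toward zero; with noise present that shrinkage is what lowers the mean squared error.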
25

Výpočty variability vývojových trojúhelníků v neživotním pojištění / Variability estimation of development triangles in nonlife insurance

Havlíková, Tereza January 2013 (has links)
The aim of this thesis is to describe calculation methods for variability estimation of the claims reserve in non-life insurance. The thesis focuses on three main categories of models: Mack's stochastic chain-ladder, generalized linear models, and the bootstrap. Both theoretical and empirical parts are included. The empirical part is devoted to the application of all the models described above to both real and simulated data.
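The deterministic step underlying Mack's chain-ladder can be sketched in a few lines: development factors are volume-weighted ratios of successive cumulative-claims columns, and the latest diagonal is projected to ultimate with them. The triangle below is invented; Mack's contribution, which the thesis covers, is the standard-error formula on top of this step.

```python
# Hypothetical cumulative claims triangle; rows = accident years,
# columns = development years (later years have fewer observed columns).
triangle = [
    [100.0, 150.0, 165.0],
    [110.0, 165.0],
    [120.0],
]

def development_factors(tri):
    """Volume-weighted chain-ladder factors f_j = sum C_{i,j+1} / sum C_{i,j}."""
    factors = []
    for j in range(len(tri[0]) - 1):
        num = sum(row[j + 1] for row in tri if len(row) > j + 1)
        den = sum(row[j] for row in tri if len(row) > j + 1)
        factors.append(num / den)
    return factors

f = development_factors(triangle)
# Project the most recent accident year to ultimate with the factors.
ultimate_last_year = triangle[2][0] * f[0] * f[1]
```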
26

MSE-based Linear Transceiver Designs for Multiuser MIMO Wireless Communications

Tenenbaum, Adam 11 January 2012 (has links)
This dissertation designs linear transceivers for the multiuser downlink in multiple-input multiple-output (MIMO) systems. The designs rely on an uplink/downlink duality for the mean squared error (MSE) of each individual data stream. We first consider the design of transceivers assuming channel state information (CSI) at the transmitter. We consider minimization of the sum-MSE over all users subject to a sum power constraint on each transmission. Using MSE duality, we solve a computationally simpler convex problem in a virtual uplink. The transformation back to the downlink is simplified by our demonstrating the equality of the optimal power allocations in the uplink and downlink. Our second set of designs maximize the sum throughput for all users. We establish a series of relationships linking MSE to the signal-to-interference-plus-noise ratios of individual data streams and the information theoretic channel capacity under linear minimum MSE decoding. We show that minimizing the product of MSE matrix determinants is equivalent to sum-rate maximization, but we demonstrate that this problem does not admit a computationally efficient solution. We simplify the problem by minimizing the product of mean squared errors (PMSE) and propose an iterative algorithm based on alternating optimization with near-optimal performance. The remainder of the thesis considers the more practical case of imperfections in CSI. First, we consider the impact of delay and limited-rate feedback. We propose a system which employs Kalman prediction to mitigate delay; feedback rate is limited by employing adaptive delta modulation. Next, we consider the robust design of the sum-MSE and PMSE minimizing precoders with delay-free but imperfect estimates of the CSI. We extend the MSE duality to the case of imperfect CSI, and consider a new optimization problem which jointly optimizes the energy allocations for training and data stages along with the sum-MSE/PMSE minimizing transceivers. 
We prove the separability of these two problems when all users have equal estimation error variances, and propose several techniques to address the more challenging case of unequal estimation errors.
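The stream-level identities this design exploits, as stated in the abstract, can be made concrete in a short sketch: under linear MMSE reception each stream satisfies mse = 1/(1 + sinr), so the achievable rate log2(1 + sinr) equals -log2(mse), and minimizing a product of MSEs maximizes a sum of rates. The SINR value below is arbitrary.

```python
import math

def mmse_of_sinr(sinr):
    """Per-stream MSE under linear MMSE reception: mse = 1 / (1 + sinr)."""
    return 1.0 / (1.0 + sinr)

def rate_from_mse(mse):
    """Per-stream rate in bits/channel use: log2(1 + sinr) = -log2(mse)."""
    return -math.log2(mse)

sinr = 7.0
mse = mmse_of_sinr(sinr)
rate = rate_from_mse(mse)
```

This is why the PMSE objective in the thesis is a tractable surrogate for sum-rate maximization: the log of the product of MSEs is, up to sign, the sum of the per-stream rates.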
28

A Study of Gamma Distributions and Some Related Works

Chou, Chao-Wei 11 May 2004 (has links)
Characterization of distributions has been an important topic in statistical theory for decades. Although many well-known results have already been developed, it is still of great interest to find new characterizations of commonly used distributions in application, such as the normal or gamma distribution. In practice, we sometimes make guesses about the distribution to be fitted to the observed data, and sometimes we use the characteristic properties of those distributions to do so. In this paper we restrict our attention to characterizations of the gamma distribution, as well as to some related studies on the corresponding parameter estimation based on the characterization properties. Some simulation studies are also given.
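As an illustrative aside (one simple estimation route, not the paper's characterization-based estimators): the gamma distribution with shape k and scale theta has mean k*theta and variance k*theta^2, so a method-of-moments fit inverts those two equations. The sample below is a small fixed set chosen for a deterministic check.

```python
# Method-of-moments fit for Gamma(k, theta):
#   mean = k * theta,  variance = k * theta**2
#   =>  k = mean**2 / var,  theta = var / mean

def gamma_mom(sample):
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / n
    k = mean ** 2 / var
    theta = var / mean
    return k, theta

data = [2.0, 4.0, 4.0, 6.0]      # invented sample for illustration
k, theta = gamma_mom(data)
```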
29

Optimizing dense wireless networks of MIMO links

Cortes-Pena, Luis Miguel 27 August 2014 (has links)
Wireless communication systems have exploded in popularity over the past few decades. Due to their popularity, the demand for higher data rates by the users, and the high cost of wireless spectrum, wireless providers are actively seeking ways to improve the spectral efficiency of their networks. One promising technique to improve spectral efficiency is to equip the wireless devices with multiple antennas. If both the transmitter and receiver of a link are equipped with multiple antennas, they form a multiple-input multiple-output (MIMO) link. The multiple antennas at the nodes provide degrees-of-freedom that can be used for either sending multiple streams of data simultaneously (a technique known as spatial multiplexing), or for suppressing interference through linear combining, but not both. Due to this trade-off, careful allocation of how many streams each link should carry is important to ensure that each node has enough degrees-of-freedom available to suppress the interference and support its desired streams. How the streams are sent and received and how interference is suppressed is ultimately determined by the beamforming weights at the transmitters and the combining weights at the receivers. Determining these weights is, however, made difficult by their inherent interdependency. Our focus is on unplanned and/or dense single-hop networks, such as WLANs and femtocells, where each single-hop network is composed of an access point serving several associated clients. The objective of this research is to design algorithms for maximizing the performance of dense single-hop wireless networks of MIMO links. We address the problems of determining which links to schedule together at each time slot, how many streams to allocate to each link (if any), and the beamforming and combining weights that support those streams. 
This dissertation describes four key contributions as follows:
- We classify any interference suppression technique as either unilateral interference suppression or bilateral interference suppression. We show that a simple bilateral interference suppression approach outperforms all known unilateral interference suppression approaches, even after searching for the best unilateral solution.
- We propose an algorithm based on bilateral interference suppression whose goal is to maximize the sum rate of a set of interfering MIMO links by jointly optimizing which subset of transmitters should transmit, the number of streams for each transmitter (if any), and the beamforming and combining weights that support those streams.
- We propose a framework for optimizing dense single-hop wireless networks. The framework implements techniques to address several practical issues that arise when implementing interference suppression, such as the overhead of performing channel measurements and communicating channel state information, the overhead of computing the beamforming and combining weights, and the overhead of cooperation between the access points.
- We derive the optimal scheduler that maximizes the sum rate subject to proportional fairness. Simulations in ns-3 show that the framework, using the optimal scheduler, increases the proportionally fair aggregate goodput by up to 165% as compared to the aggregate goodput of 802.11n for the case of four interfering single-hop wireless networks with two clients each.
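The degrees-of-freedom trade-off described above admits a simple accounting sketch (a standard linear-receiver feasibility count, assumed here for illustration rather than taken from the dissertation): a zero-forcing receiver with n_r antennas can recover its own d streams while nulling interference occupying d_i dimensions only if n_r >= d + d_i.

```python
# Simple DoF feasibility check for a linear (zero-forcing) receiver:
# antennas spent suppressing interference cannot also carry data streams.

def zf_receiver_feasible(n_rx_antennas, own_streams, interfering_streams):
    return n_rx_antennas >= own_streams + interfering_streams

# A 4-antenna receiver can decode two own streams while nulling two
# interfering streams, but a third interfering stream exhausts its DoF.
ok = zf_receiver_feasible(4, 2, 2)
too_many = zf_receiver_feasible(4, 2, 3)
```

Counts like this are what make careful stream allocation across interfering links necessary in the first place.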
30

Extension au cadre spatial de l'estimation non paramétrique par noyaux récursifs / Extension to spatial setting of kernel recursive estimation

Yahaya, Mohamed 15 December 2016 (has links)
Dans cette thèse, nous nous intéressons aux méthodes dites récursives qui permettent une mise à jour des estimations séquentielles de données spatiales ou spatio-temporelles et qui ne nécessitent pas un stockage permanent de toutes les données. Traiter et analyser des flux des données, Data Stream, de façon effective et efficace constitue un défi actif en statistique. En effet, dans beaucoup de domaines d'applications, des décisions doivent être prises à un temps donné à la réception d'une certaine quantité de données et mises à jour une fois de nouvelles données disponibles à une autre date. Nous proposons et étudions ainsi des estimateurs à noyau de la fonction de densité de probabilité et la fonction de régression de flux de données spatiales ou spatio-temporelles. Plus précisément, nous adaptons les estimateurs à noyau classiques de Parzen-Rosenblatt et Nadaraya-Watson. Pour cela, nous combinons la méthodologie sur les estimateurs récursifs de la densité et de la régression et celle d'une distribution de nature spatiale ou spatio-temporelle. Nous donnons des applications et des études numériques des estimateurs proposés. La spécificité des méthodes étudiées réside sur le fait que les estimations prennent en compte la structure de dépendance spatiale des données considérées, ce qui est loin d'être trivial. Cette thèse s'inscrit donc dans le contexte de la statistique spatiale non-paramétrique et ses applications. Elle y apporte trois contributions principales qui reposent sur l'étude des estimateurs non-paramétriques récursifs dans un cadre spatial/spatio-temporel et s'articule autour des l'estimation récursive à noyau de la densité dans un cadre spatial, l'estimation récursive à noyau de la densité dans un cadre spatio-temporel, et l'estimation récursive à noyau de la régression dans un cadre spatial. 
/ In this thesis, we are interested in recursive methods that allow sequential updating of estimates for spatial or spatio-temporal data and that do not require permanent storage of all the data. Processing and analyzing data streams effectively and efficiently is an active challenge in statistics. Indeed, in many application areas, decisions must be taken at a given time upon the reception of a certain amount of data, and updated once new data become available at a later date. We propose and study kernel estimators of the probability density function and the regression function for spatial or spatio-temporal data streams. Specifically, we adapt the classical kernel estimators of Parzen-Rosenblatt and Nadaraya-Watson. To do so, we combine the methodology of recursive estimators of density and regression with that of distributions of a spatial or spatio-temporal nature. We provide applications and numerical studies of the proposed estimators. The specificity of the methods studied lies in the fact that the estimates take into account the spatial dependence structure of the data in question, which is far from trivial. This thesis is therefore set in the context of non-parametric spatial statistics and its applications. It makes three main contributions, which rest on the study of recursive non-parametric estimators in a spatial/spatio-temporal setting: recursive kernel density estimation in a spatial setting, recursive kernel density estimation in a spatio-temporal setting, and recursive kernel regression estimation in a spatial setting.
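The one-dimensional, non-spatial version of the recursive Parzen-Rosenblatt estimator mentioned above can be sketched briefly; the bandwidth choice h_n = n**-0.2 is illustrative, and the data stream is invented. Each new observation updates the previous estimate without revisiting stored data, which is the point of the recursive form.

```python
import math

# Recursive Parzen-Rosenblatt estimator:
#   f_n(x) = ((n - 1) / n) * f_{n-1}(x) + K((x - X_n) / h_n) / (n * h_n)
# which equals the batch average (1/n) * sum_i K((x - X_i)/h_i) / h_i.

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def recursive_kde(stream, x):
    """Update the density estimate at x one observation at a time."""
    f = 0.0
    for n, obs in enumerate(stream, start=1):
        h = n ** -0.2                     # illustrative bandwidth sequence
        f = (n - 1) / n * f + gaussian_kernel((x - obs) / h) / (n * h)
    return f

stream = [0.1, -0.4, 0.3, 0.0, 0.2]
density_at_zero = recursive_kde(stream, 0.0)
```

The thesis's spatial versions replace this i.i.d. stream with spatially dependent observations, which is where the technical difficulty lies.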
