711 |
Inference for Discrete Time Stochastic Processes using Aggregated Survey Data. Davis, Brett Andrew (Brett.Davis@abs.gov.au), January 2003
We consider a longitudinal system in which transitions between the states are governed by a discrete time finite state space stochastic process X. Our aim, using aggregated sample survey data of the form typically collected by official statistical agencies, is to undertake model based inference for the underlying process X. We develop inferential techniques for two distinct types of continuing sample survey: longitudinal surveys, in which the same individuals are sampled in each cycle of the survey, and cross-sectional surveys, which sample the same population in successive cycles but make no attempt to track particular individuals from one cycle to the next. Some of the basic results have appeared in Davis et al. (2001) and Davis et al. (2002).

Longitudinal surveys provide data in the form of transition frequencies between the states of X. In Chapter Two we develop a method for modelling and estimating the one-step transition probabilities in the case where X is a non-homogeneous Markov chain and transition frequencies are observed at unit time intervals. However, because of their expense, longitudinal surveys are typically conducted at widely, and sometimes irregularly, spaced time points; that is, the observable frequencies pertain to multi-step transitions. Continuing to assume the Markov property for X, in Chapter Three we show that these multi-step transition frequencies can be stochastically interpolated to provide accurate estimates of the one-step transition probabilities of the underlying process. These unit-increment estimates can in turn be used to estimate expected future occupation times in the different states of X, conditional on an individual's state at the initial point of observation.

For reasons of cost, most statistical collections run by official agencies are cross-sectional sample surveys. The data observed from an on-going survey of this type are marginal frequencies in the states of X at a sequence of time points. In Chapter Four we develop a model based technique for estimating the marginal probabilities of X using data of this form. Note that, in contrast to the longitudinal case, the Markov assumption does not simplify inference based on marginal frequencies. The marginal probability estimates enable estimation of future occupation times (in each of the states of X) for an individual of unspecified initial state. However, in the applications of the technique that we discuss (see Sections 4.4 and 4.5), the estimated occupation times are conditional on both the gender and the initial age of individuals.

The longitudinal data envisaged in Chapter Two are those obtained by following the same sample in each cycle of an on-going survey. In practice, to preserve data quality it is necessary to control respondent burden using sample rotation, usually achieved through a mechanism known as rotation group sampling. In Chapter Five we consider the particular form of rotation group sampling used by the Australian Bureau of Statistics in its Monthly Labour Force Survey (from which official estimates of labour force participation rates are produced). We show that our approach to estimating the one-step transition probabilities of X from transition frequencies observed at incremental time intervals, developed in Chapter Two, can be modified to deal with data collected under this sample rotation scheme. Furthermore, we show that valid inference is possible even when the Markov property does not hold for the underlying process.
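The Chapter Three step can be caricatured numerically: a naive route from observed k-step transition frequencies to one-step probabilities is to row-normalise the k-step counts and take a matrix k-th root, after which expected occupation times follow by iterating the chain. The sketch below (Python, with a hypothetical 3-state frequency table) shows only this naive baseline, not the stochastic interpolation method of the thesis; note that a matrix root of a stochastic matrix need not itself be stochastic, which the sketch papers over with a crude clip-and-renormalise step.

```python
# Illustrative sketch only (not the thesis's estimator): recover a one-step
# transition matrix from k-step transition frequencies by a matrix k-th root,
# then accumulate expected occupation times over a horizon. The state space
# and the example frequency table below are hypothetical.
import numpy as np
from scipy.linalg import fractional_matrix_power

def one_step_from_kstep(freq_k, k):
    """Row-normalise k-step transition counts and take a matrix k-th root."""
    P_k = freq_k / freq_k.sum(axis=1, keepdims=True)    # k-step MLE
    P = np.real(fractional_matrix_power(P_k, 1.0 / k))  # naive interpolation
    P = np.clip(P, 0.0, None)                           # repair small negatives
    return P / P.sum(axis=1, keepdims=True)

def expected_occupation(P, start_state, horizon):
    """E[periods spent in each state over `horizon` steps | X_0 = start_state]."""
    p = np.eye(P.shape[0])[start_state]   # one-hot initial distribution
    occ = np.zeros(P.shape[0])
    for _ in range(horizon):
        p = p @ P
        occ += p
    return occ

# Hypothetical 3-state example: counts of observed 3-step transitions.
freq3 = np.array([[50., 30., 20.],
                  [10., 70., 20.],
                  [ 5., 15., 80.]])
P1 = one_step_from_kstep(freq3, k=3)
print(expected_occupation(P1, start_state=0, horizon=12))
```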
|
712 |
Construction of Minimal Partially Replicated Orthogonal Main-Effect Plans with 3 Factors. Chu, Cheng-Chung (朱正中), Unknown Date
Orthogonal main-effect plans (OMEP's), being able to estimate the main effects without correlation, are often employed in industrial situations for screening purposes. But experiments are expensive and time consuming, so when an economical and efficient design is desired, a minimal orthogonal main-effect plan is a good choice. Jacroux (1992) derived a sufficient condition for OMEP's to have a minimal number of runs and provided a table of minimal OMEP run numbers. Chang (1998) corrected and supplemented the table. In this paper, we complete that table.

A minimal OMEP with replicated runs is appreciated even more, since then the pure error can be estimated and the goodness-of-fit of the model can be tested. Jacroux (1993) and Chang (1998) gave some partially replicated orthogonal main-effect plans (PROMEP's) with a maximal number of replicated points. Here, we discuss minimal PROMEP's with 3 factors in detail: methods of constructing minimal PROMEP's with replicated runs are provided, and the number of replicated runs is maximal in most cases.
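To make the orthogonality requirement concrete: a design is an OMEP precisely when, for every pair of factors, each level combination occurs with frequency proportional to the product of the marginal level frequencies. The following sketch (Python, with a hypothetical full-factorial example) checks that condition and counts replicated runs; it is a verification aid under the standard definition, not the construction method of this paper.

```python
# A minimal sketch (not the construction method of the paper): check that a
# design is an orthogonal main-effect plan by verifying proportional level
# frequencies for every pair of factors, and count its replicated runs.
from collections import Counter
from itertools import combinations

def is_omep(design):
    n = len(design)
    n_factors = len(design[0])
    for a, b in combinations(range(n_factors), 2):
        fa = Counter(row[a] for row in design)
        fb = Counter(row[b] for row in design)
        fab = Counter((row[a], row[b]) for row in design)
        # Proportional frequencies: n(i,j) * n == n(i.) * n(.j) for all i, j.
        for i in fa:
            for j in fb:
                if fab[(i, j)] * n != fa[i] * fb[j]:
                    return False
    return True

def replicated_runs(design):
    counts = Counter(tuple(row) for row in design)
    return sum(c - 1 for c in counts.values() if c > 1)

# Hypothetical 2x2x2 full factorial: trivially an OMEP with no replication.
design = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
print(is_omep(design), replicated_runs(design))
```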
|
713 |
Analysis of 2 x 2 x 2 Tensors. Rovi, Ana, January 2010
The question of how to determine the rank of a tensor has been widely studied in the literature. However, analytical methods for computing the decomposition of tensors are not well developed, even for low-rank tensors. In this report we present analytical methods for finding real and complex PARAFAC decompositions of 2 x 2 x 2 tensors before computing the actual rank of the tensor. These methods are also implemented in MATLAB. We also consider how the best lower-rank approximation gives rise to problems of degeneracy, and give some analytical explanations for these issues.
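One classical analytical handle on this problem, in the spirit of the report though not necessarily its exact procedure, is Cayley's hyperdeterminant: for a real 2 x 2 x 2 tensor with frontal slices A0 and A1 it is the discriminant of det(A0 + x A1), and its sign separates the generic real ranks (positive: rank 2, negative: rank 3, zero: degenerate boundary cases). A minimal sketch:

```python
# A hedged sketch of one known analytical criterion for 2 x 2 x 2 tensors,
# not claimed to be the report's exact procedure: the sign of Cayley's
# hyperdeterminant classifies the generic real rank.
import numpy as np

def hyperdeterminant(T):
    """T has shape (2, 2, 2); frontal slices A0 = T[:, :, 0], A1 = T[:, :, 1]."""
    A0, A1 = T[:, :, 0], T[:, :, 1]
    # det(A0 + x*A1) = det(A1)*x**2 + c*x + det(A0); return its discriminant.
    c = np.linalg.det(A0 + A1) - np.linalg.det(A0) - np.linalg.det(A1)
    return c**2 - 4.0 * np.linalg.det(A0) * np.linalg.det(A1)

def generic_real_rank(T, tol=1e-12):
    d = hyperdeterminant(T)
    if d > tol:
        return 2
    if d < -tol:
        return 3
    return None  # degenerate: needs the finer case analysis of the report

rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2, 2))
print(hyperdeterminant(T), generic_real_rank(T))
```

As a sanity check, the tensor with slices A0 = I and A1 = [[0, -1], [1, 0]] (the complex-multiplication tensor) gives a negative hyperdeterminant, consistent with its known real rank 3.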
|
714 |
Analysis of Fix-point Aspects for Wireless Infrastructure Systems. Grill, Andreas; Englund, Robin, January 2009
A large amount of today's telecommunication consists of mobile and short-distance wireless applications, where the effect of the channel is unknown and changes over time, and thus needs to be described statistically. The received signal can therefore not be accurately predicted and has to be estimated. Since telecom systems operate in real time, the receiver hardware for estimating the sent signal can, for example, be based on a DSP where the statistical calculations are performed. A fixed-point DSP with a limited number of bits and a fixed binary point causes larger quantization errors than floating-point operations with higher accuracy.

The focus of this thesis has been to build a library of functions for handling fixed-point data. A class that can handle the most common arithmetic operations and a least squares solver for fixed-point data have been implemented in MATLAB code.

The MATLAB Fixed-Point Toolbox could have been used to solve this task, but in order to have full control of the algorithms and the fixed-point handling, an independent library was created.

The conclusion of the simulations made in this thesis is that the least squares results depend more on the number of integer bits than on the number of fractional bits.

Keywords: fixed point, telecommunication, DSP, MATLAB, Fixed-Point Toolbox, least squares solution, floating point, Householder QR factorization, saturation, quantization noise
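The integer-versus-fractional-bit conclusion is easy to reproduce in spirit: saturation caused by too few integer bits corrupts a least squares solve far more than the rounding noise caused by too few fractional bits. The sketch below is independent of the thesis's MATLAB library; it uses synthetic data and a normal-equations solve rather than the Householder QR approach named in the keywords, emulating a saturating Qm.n format while varying the bit split at a fixed total word length.

```python
# An illustrative sketch (not the thesis's fixed-point class): emulate a
# saturating Qm.n fixed-point format and compare least squares residuals as
# the integer/fractional bit split varies. The problem data are synthetic.
import numpy as np

def quantize(x, int_bits, frac_bits):
    """Round to a step of 2**-frac_bits and saturate to the Qm.n range."""
    scale = 2.0 ** frac_bits
    hi = 2.0 ** int_bits - 1.0 / scale
    lo = -2.0 ** int_bits
    return np.clip(np.round(x * scale) / scale, lo, hi)

def fixed_point_lstsq(A, b, int_bits, frac_bits):
    """Normal-equations solve with quantisation after each matrix product."""
    Aq, bq = quantize(A, int_bits, frac_bits), quantize(b, int_bits, frac_bits)
    AtA = quantize(Aq.T @ Aq, int_bits, frac_bits)
    Atb = quantize(Aq.T @ bq, int_bits, frac_bits)
    return np.linalg.solve(AtA, Atb)

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3))
b = A @ np.array([1.5, -0.7, 2.2]) + 0.01 * rng.standard_normal(50)
for m, n in [(3, 12), (6, 9), (9, 6)]:   # same total width, different split
    x = fixed_point_lstsq(A, b, m, n)
    print(f"Q{m}.{n}: residual = {np.linalg.norm(A @ x - b):.4f}")
```

With few integer bits the Gram matrix entries saturate and the residual explodes, while shaving fractional bits only adds mild rounding noise, matching the thesis's conclusion in this toy setting.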
|
715 |
Mean preservation in censored regression using preliminary nonparametric smoothing. Heuchenne, Cédric, 18 August 2005
In this thesis, we consider the problem of estimating the regression function in location-scale regression models. This model assumes that the random vector (X,Y) satisfies Y = m(X) + s(X)e, where m(.) is an unknown location function (e.g. conditional mean, median, truncated mean, ...), s(.) is an unknown scale function, and e is independent of X. The response Y is subject to random right censoring, and the covariate X is completely observed.

In the first part of the thesis, we assume that m(x) = E(Y|X=x) follows a polynomial model. A new estimation procedure for the unknown regression parameters is proposed, which extends the classical least squares procedure to censored data. The proposed method is inspired by the method of Buckley and James (1979), but is, unlike the latter method, a non-iterative procedure due to nonparametric preliminary estimation. The asymptotic normality of the estimators is established. Simulations are carried out for both methods and show that the proposed estimators usually have smaller variance and smaller mean squared error than the Buckley-James estimators.

For the second part, suppose that m(.) = E(Y|.) belongs to some parametric class of regression functions. A new estimation procedure for the true, unknown vector of parameters is proposed, which extends the classical least squares procedure for nonlinear regression to the case where the response is subject to censoring. The proposed technique uses new 'synthetic' data points that are constructed by using a nonparametric relation between Y and X. The consistency and asymptotic normality of the proposed estimator are established, and the estimator is compared via simulations with an estimator proposed by Stute in 1999.

In the third part, we study the nonparametric estimation of the regression function m(.). It is well known that the completely nonparametric estimator of the conditional distribution F(.|x) of Y given X=x suffers from inconsistency problems in the right tail (Beran, 1981), and hence the location function m(x) cannot be estimated consistently in a completely nonparametric way whenever m(x) involves the right tail of F(.|x) (as it does, e.g., for the conditional mean). We propose two alternative estimators of m(x) that do not share the above inconsistency problems. The idea is to make use of the assumed location-scale model in order to improve the estimation of F(.|x), especially in the right tail. We obtain the asymptotic properties of the two proposed estimators of m(x). Simulations show that the proposed estimators outperform the completely nonparametric estimator in many cases.
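For orientation, the benchmark of the second part, Stute's (1999) estimator, is itself a small modification of least squares: each observation, ordered by observed time, is weighted by the corresponding jump of the Kaplan-Meier estimator. The sketch below (Python, synthetic data, ties ignored) implements that benchmark rather than the synthetic-data-point estimators proposed in the thesis.

```python
# A minimal sketch of Kaplan-Meier-weighted least squares in the spirit of
# Stute (1999), the benchmark mentioned above; it is not the estimator
# proposed in the thesis. Data are synthetic and ties are ignored.
import numpy as np

def stute_weights(y_obs, delta):
    """Kaplan-Meier jump weights for the observations ordered by time."""
    order = np.argsort(y_obs)
    n = len(y_obs)
    d = delta[order].astype(float)
    w = np.zeros(n)
    surv = 1.0                        # running product of ((n-j-1)/(n-j))**d_j
    for i in range(n):
        w[i] = surv * d[i] / (n - i)  # censored points (d=0) get zero weight
        surv *= ((n - i - 1) / (n - i)) ** d[i]
    return order, w

def km_weighted_lstsq(X, y_obs, delta):
    order, w = stute_weights(y_obs, delta)
    Xo, yo = X[order], y_obs[order]
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xo * sw[:, None], yo * sw, rcond=None)
    return beta

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.uniform(0, 2, n)])
T = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)   # true lifetimes
C = rng.uniform(0, 8, n)                               # censoring times
y_obs, delta = np.minimum(T, C), (T <= C).astype(int)
print(km_weighted_lstsq(X, y_obs, delta))   # roughly recovers [1.0, 2.0]
```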
|
716 |
General conditional linear models with time-dependent coefficients under censoring and truncation. Teodorescu, Bianca, 19 December 2008
In survival analysis, interest often lies in the relationship between the survival function and a certain number of covariates. It usually happens that for some individuals we cannot observe the event of interest, due to the presence of right censoring and/or left truncation. A typical example is given by a retrospective medical study, in which one is interested in the time interval between birth and death due to a certain disease. Patients who die of the disease at an early age will rarely have entered the study before death and are therefore left truncated. On the other hand, for patients who are alive at the end of the study, only a lower bound of the true survival time is known, and these patients are hence right censored.
In the case of censored and/or truncated responses, many models exist in the literature that describe the relationship between the survival function and the covariates (proportional hazards or Cox model, log-logistic model, accelerated failure time model, additive risks model, etc.). In these models, the regression coefficients are usually supposed to be constant over time. In practice, the structure of the data might however be more complex, and it might therefore be better to consider coefficients that can vary over time. In the example above, certain covariates (e.g. age at diagnosis, type of surgery, extension of tumor, etc.) can have a relatively high impact on survival at an early age, but a lower influence at a higher age. This has motivated a number of authors to extend the Cox model to allow for time-dependent coefficients, or to consider other types of time-dependent coefficient models, like the additive hazards model.
In practice it is of great use to have at hand a method to check the validity of the above mentioned models.
First, we consider a very general model, which includes as special cases the above mentioned models (Cox model, additive model, log-logistic model, linear transformation models, etc.) with time-dependent coefficients, and study parameter estimation by means of a least squares approach. The response is allowed to be subject to right censoring and/or left truncation.
Secondly, we propose an omnibus goodness-of-fit test of whether the general time-dependent model considered above fits the data. A bootstrap version, to approximate the critical values of the test, is also proposed.
In this dissertation, the finite sample performance of each proposed method is evaluated in a simulation study, and each method is then applied to a real data set.
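As a toy illustration of what a least squares approach to time-dependent coefficients can look like, the sketch below fits a varying coefficient beta(t) by kernel-weighted least squares on fully observed data; handling censoring and truncation, which is the actual contribution of the dissertation, is deliberately omitted, and the model, kernel and bandwidth are hypothetical choices.

```python
# A toy sketch of least squares estimation of time-dependent coefficients,
# shown WITHOUT censoring or truncation (handling those is the point of the
# dissertation and is not reproduced here).
import numpy as np

def local_beta(t0, times, X, y, h):
    """Kernel-weighted least squares around time t0 (Epanechnikov kernel)."""
    u = (times - t0) / h
    w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

rng = np.random.default_rng(3)
n = 2000
times = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = lambda t: np.array([0.5, np.sin(2 * np.pi * t)])  # varies in time
betas = np.array([beta_true(t) for t in times])               # (n, 2)
y = np.einsum('ij,ij->i', X, betas) + 0.3 * rng.normal(size=n)
for t0 in (0.25, 0.5, 0.75):
    print(t0, local_beta(t0, times, X, y, h=0.1), beta_true(t0))
```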
|
717 |
Multivariate methods for the joint analysis of neuroimaging and genetic data (Méthodes multivariées pour l'analyse jointe de données de neuroimagerie et de génétique). Le Floch, Edith, 28 September 2012
Brain imaging is attracting growing interest, as an intermediate phenotype, in understanding the complex path that links genes to behavioural or clinical phenotypes. In this context, a first goal is to propose methods capable of identifying the part of genetic variability that explains some of the variability observed in neuroimaging. Classical univariate approaches ignore the joint effects that may exist between several genes, as well as the potential covariation between brain regions.
Our first contribution was to improve the sensitivity of the univariate approach by taking advantage of the multivariate nature of the genetic data at a local level: we adapt cluster-level inference from neuroimaging to single nucleotide polymorphism (SNP) data, looking for 1D clusters of adjacent SNPs associated with the same imaging phenotype. We then extend this idea and combine voxel clusters with SNP clusters, using a simple test at the "4D cluster" level that jointly detects strongly associated brain and genomic regions. We obtain promising preliminary results on both simulated and real data.
Our second contribution was to use exploratory multivariate methods to improve the detection power of imaging genetics studies, by modelling the potentially multivariate nature of the associations at a larger scale, on both the imaging and the genetic side. Partial Least Squares regression and canonical correlation analysis have recently been proposed for the analysis of genetic and transcriptomic data, and we propose to transpose this idea to the analysis of genetic and imaging data. Moreover, we study various regularisation and dimension reduction strategies, combined with PLS or canonical correlation analysis, to address the overfitting caused by the very high dimensionality of the data. We present a comparative study of these strategies on simulated data and on real functional MRI and SNP data. Univariate filtering appears necessary; however, it is the combination of univariate filtering and L1-regularised PLS that detects a significant, generalisable association on the real data, suggesting that the discovery of associations in imaging genetics requires a multivariate approach.
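A rough sketch of the kind of pipeline compared in the thesis is given below: univariate filtering of the SNPs followed by a PLS fit between the genetic and imaging blocks. Plain PLS from scikit-learn stands in for the L1-regularised PLS studied in the thesis (which scikit-learn does not provide), and all data, dimensions and thresholds are simulated or hypothetical.

```python
# A hedged sketch of a filtering + PLS pipeline between a SNP block and an
# imaging block. Plain PLS replaces the thesis's L1-regularised PLS, and the
# data are entirely simulated.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n, p_snp, p_img = 200, 5000, 300
snps = rng.binomial(2, 0.3, size=(n, p_snp)).astype(float)  # additive coding
signal = snps[:, :10].sum(axis=1)                           # 10 causal SNPs
imaging = np.outer(signal, rng.normal(size=p_img)) + rng.normal(size=(n, p_img))

# Univariate filter: keep the SNPs most correlated with the mean imaging signal.
img_mean = imaging.mean(axis=1)
r = np.array([np.corrcoef(snps[:, j], img_mean)[0, 1] for j in range(p_snp)])
keep = np.argsort(r**2)[-100:]                              # top 100 SNPs

pls = PLSRegression(n_components=2).fit(snps[:, keep], imaging)
x_scores, y_scores = pls.transform(snps[:, keep], imaging)
print(np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1])    # latent association
```

In a real study, the filtering threshold and the number of components would of course be chosen inside a cross-validation loop to avoid selection bias.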
|
718 |
A Theoretical Model for Telemedicine: Social and Value Outcomes in Sub-Saharan Africa. Kifle Gelan, Mengistu, January 2006
The Sub-Saharan Africa (SSA) region is faced with limited medical personnel and healthcare services to address the many healthcare problems of the region. Poor health indicators reflect the overall decline in socio-economic development. The shortage of access to health services in the region is further complicated by the concentration of health services in urban areas, the region's multiple medical problems (over 70% of HIV/AIDS cases in the world), and the brain drain phenomenon: it is estimated that one-third of African physicians emigrate to North America and Europe. The result is that the SSA region is left with about 10 physicians, and 20 beds, per 100,000 patients. Telemedicine has been found to offer socio-economic benefits, reduce costs, and improve patients' access to healthcare service providers, but previous attempts to move various information technologies from developers in the industrial world to the developing world have failed because of a clear neglect of the infrastructural and cultural factors that influence such transfers. The objective of this study is to address key factors that challenge the introduction of telemedicine technology into the health sector in SSA in particular and, by extension, other developing countries with similar socio-economic structures. This research offers a distinctive perspective, focusing on visually-based clinical applications in the SSA region, with considerable attention to the impact of national infrastructure and culture on telemedicine transfer (social and value) outcomes. Two research models and their associated hypotheses are proposed and empirically tested using quantitative data collected from SSA physicians and other health professionals. The study also contributes to the ongoing debate on the potential of telemedicine for improving access and reducing costs, and can help in understanding the socio-economic impact of telemedicine outcomes in a comprehensive way. The findings from the survey show that, with rapid advances in telemedicine technology, visually-based clinical applications may become an essential healthcare tool within SSA countries in the near future.
|
720 |
Simulation Of Conjugate Heat Transfer Problems Using Least Squares Finite Element Method. Goktolga, Mustafa Ugur, 01 October 2012
In this thesis study, a least-squares finite element method (LSFEM) based conjugate heat transfer solver was developed. In this solver, fluid flow and heat transfer computations were performed separately: the velocity values calculated in the flow part were exported to the heat transfer part, to be used in the convective term of the energy equation. The incompressible Navier-Stokes equations were used in the flow simulations. In conjugate heat transfer computations, the heat transfer must be calculated in both the flow field and the solid region. In this study, the conjugate behavior was accomplished in a fully coupled manner, i.e., the energy equation for the fluid and solid regions was solved simultaneously, and no boundary conditions were defined on the fluid-solid interface. To verify that the developed solver works properly, lid driven cavity flow, backward facing step flow and thermally driven cavity flow problems were simulated in three dimensions, and the findings compared well with the available data from the literature. Couette flow and thermally driven cavity flow with conjugate heat transfer in two dimensions were modeled to further validate the solver. Finally, a microchannel conjugate heat transfer problem was simulated. In the flow solution part of the microchannel problem, conservation of mass was not achieved. This problem was expected, since the LSFEM has difficulties with mass conservation, especially in high aspect ratio channels. To overcome this problem, the weight of the continuity equation was increased by multiplying it by a constant. The weighting worked for the microchannel problem and the mass conservation issue was resolved. The results obtained for the microchannel heat transfer problem were generally in good agreement with previous experimental and numerical works.
In the first computations with the solver, quadrilateral and triangular elements were tried for two-dimensional problems, and hexahedral and tetrahedral elements for three-dimensional problems. However, since only the quadrilateral and hexahedral elements gave satisfactory results, they were used in all the simulations mentioned above.
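The continuity-weighting fix has a simple linear-algebra core: in a least-squares formulation every discretised equation is a residual row, so multiplying one block of rows by a constant drives its residual down at the expense of the others. A generic sketch, with synthetic matrices standing in for the momentum/energy and continuity blocks:

```python
# A generic sketch of the weighting fix described above, not the 3D LSFEM
# solver itself: up-weighting one residual block in a least squares system
# reduces that block's residual at the expense of the others. The small
# linear system below is synthetic.
import numpy as np

rng = np.random.default_rng(5)
A_mom = rng.standard_normal((40, 10))   # stand-in for momentum/energy rows
b_mom = rng.standard_normal(40)
A_cont = rng.standard_normal((5, 10))   # stand-in for continuity rows
b_cont = rng.standard_normal(5)

for w in (1.0, 10.0, 100.0):
    A = np.vstack([A_mom, w * A_cont])
    b = np.concatenate([b_mom, w * b_cont])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(f"w={w:6.1f}  continuity residual: "
          f"{np.linalg.norm(A_cont @ x - b_cont):.4f}  "
          f"other residual: {np.linalg.norm(A_mom @ x - b_mom):.4f}")
```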
|