111
Finns det blockpolitiska skillnader i kommunal skattepolitik? (Are there differences between the political blocs in municipal tax policy?) Hägerdal, Erik; Sjögren, Joakim. January 2011.
In this thesis we attempt to model the probability of changes in the municipal tax level in Swedish municipalities given their political rule, coded according to the definitions of the Swedish Association of Local Authorities and Regions (SKL). In doing so, we try to answer the question of whether there are differences between the political blocs in municipal tax policy. By combining SKL's classification of municipal political rule in 2002-2006 with that in 2006-2010, we construct so-called power-shift categories, which we then use as categorical variables when modeling the probability of different changes in the municipal tax levels. To isolate, as far as possible, the influence of politics from other structures that may affect the tax level, we introduce a set of control variables intended to reflect the structural conditions of the municipal economy; two of these are the change in population density and the share of the total municipal population that is gainfully employed. To give the reader a descriptive picture of the data, the descriptive part of the thesis uses so-called Markov chains to provide an initial picture of the probability of a raised, lowered, or unchanged tax level given the changes in political rule before and after the 2006 election. The probabilities of the three possible outcomes (raised, lowered, and unchanged tax) are then modeled with two multinomial logistic regressions, first without control variables and then with them. To see the influence of politics more clearly, we exclude 30 municipalities according to certain criteria; in particular, we exclude municipalities that had "shifting" majorities between the elections. At the end of the results section we also estimate a multinomial logistic model that excludes minority rules. Within some power-shift categories we have too few observations to model the probability of a lowered tax; the data do not suffice for a stable model with significant parameters. After estimating our models, we compare the coefficients that are significant at the 5% level and rank the power shifts by their effect on the probability of a raised tax. We then evaluate our results and the factors and features of municipal tax policy in 2006-2010 that may have influenced them. Our results show that several types of power shifts have a significant effect on the probability of a raised tax, and although we cannot draw any conclusions about the probability of a lowered tax, the estimates still help us analyze the influence of bloc politics on municipal tax policy.
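As an illustration of the kind of model this thesis estimates, the sketch below fits a multinomial logistic regression of a three-level tax-change outcome on a categorical power-shift variable plus two controls. The data and all variable names are hypothetical stand-ins, not the authors' material.

```python
# Minimal sketch of the thesis's modeling step: multinomial logistic
# regression of tax-level change on power-shift category plus controls.
# All data and variable names here are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 260  # roughly the number of municipalities after exclusions

df = pd.DataFrame({
    # 0 = left bloc kept power, 1 = right bloc kept power, 2 = power shifted
    "power_shift": rng.integers(0, 3, n),
    "d_density": rng.normal(0.0, 1.0, n),        # change in population density
    "employed_share": rng.normal(0.5, 0.05, n),  # share gainfully employed
    # outcome: 0 = unchanged, 1 = raised, 2 = lowered tax
    "tax_change": rng.integers(0, 3, n),
})

X = pd.get_dummies(df["power_shift"], prefix="shift", drop_first=True).astype(float)
X["d_density"] = df["d_density"]
X["employed_share"] = df["employed_share"]
X = sm.add_constant(X)

fit = sm.MNLogit(df["tax_change"], X).fit(disp=False)
print(fit.summary())  # coefficients significant at the 5% level would be compared
```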
112
Knotting statistics after a local strand passage in unknotted self-avoiding polygons in Z^3. Szafron, Michael Lorne. 15 April 2009.
We study here a model for a strand passage in a ring polymer about a randomly chosen location at which two strands of the polymer have been brought "close" together. The model is based on Θ-SAPs, which are unknotted self-avoiding polygons in Z^3 that contain a fixed structure Θ that forces two segments of the polygon to be close together. To study this model, the Composite Markov Chain Monte Carlo (CMCMC) algorithm, referred to as the CMC Θ-BFACF algorithm, that I developed and proved to be ergodic for unknotted Θ-SAPs in my M.Sc. thesis, is used. Ten simulations (each consisting of 9.6×10^10 time steps) of the CMC Θ-BFACF algorithm are performed and the results from a statistical analysis of the simulated data are presented. To this end, a new maximum likelihood method, based on previous work of Berretti and Sokal, is developed for obtaining maximum likelihood estimates of the growth constants and critical exponents associated respectively with the numbers of unknotted (2n)-edge Θ-SAPs, unknotted (2n)-edge successful-strand-passage Θ-SAPs, unknotted (2n)-edge failed-strand-passage Θ-SAPs, and (2n)-edge after-strand-passage-knot-type-K unknotted successful-strand-passage Θ-SAPs. The maximum likelihood estimates are consistent with the result (proved here) that the growth constants are all equal, and provide evidence that the associated critical exponents are all equal.
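To convey the flavor of a Berretti and Sokal style fit (this is an illustrative toy, not the thesis's estimator): suppose polygon lengths n are sampled with weight proportional to c_n z^n at a known fugacity z, where c_n behaves like A mu^n n^(alpha-3); then mu and alpha can be estimated by numerically maximizing the sample log-likelihood. All functional forms and numbers below are assumptions for illustration.

```python
# Illustrative maximum likelihood fit of a growth constant mu and exponent
# alpha from sampled polygon lengths, assuming counts c_n ~ A*mu**n*n**(alpha-3)
# and sampling weights proportional to c_n * z**n at a known fugacity z.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
Z_FUGACITY = 0.2  # fixed, known fugacity (illustrative; must satisfy z < 1/mu)

def log_weight(n, mu, alpha):
    # log of mu**n * z**n * n**(alpha - 3), up to an additive constant
    return n * (np.log(mu) + np.log(Z_FUGACITY)) + (alpha - 3.0) * np.log(n)

# Fake "observed" lengths, standing in for output of a BFACF-type simulation.
lengths = np.arange(10, 401, 2).astype(float)
true_lw = log_weight(lengths, mu=4.68, alpha=0.5)
p = np.exp(true_lw - true_lw.max())
sample = rng.choice(lengths, size=20000, p=p / p.sum())

def neg_loglik(theta):
    mu, alpha = theta
    if mu <= 0.0:
        return np.inf
    lw = log_weight(lengths, mu, alpha)
    m = lw.max()
    logZ = m + np.log(np.exp(lw - m).sum())   # log of the partition sum
    return -(log_weight(sample, mu, alpha).sum() - sample.size * logZ)

fit = minimize(neg_loglik, x0=[4.0, 1.0], method="Nelder-Mead")
print("estimated (mu, alpha):", fit.x)  # should be near (4.68, 0.5)
```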
We then investigate the question "Given that a successful local strand passage occurs at a random location in a (2n)-edge knot-type K Θ-SAP, with what probability will the Θ-SAP have knot-type K′ after the strand passage?". To this end, the CMCMC data is used to obtain estimates for the probability of knotting given a (2n)-edge successful-strand-passage Θ-SAP and the probability of an after-strand-passage polygon having knot-type K given a (2n)-edge successful-strand-passage Θ-SAP. The computed estimates numerically support the unproven conjecture that these probabilities, in the n→∞ limit, go to a value lying strictly between 0 and 1. We further prove here that the rate of approach to each of these limits (should the limits exist) is less than exponential.
We conclude with a study of whether or not there is a difference in the "size" of an unknotted successful-strand-passage Θ-SAP whose after-strand-passage knot-type is K when compared to the "size" of a Θ-SAP whose knot-type does not change after strand passage. The two measures of "size" used are the expected lengths of, and the expected mean-square radius of gyration of, subsets of Θ-SAPs. How these two measures of "size" behave as a function of a polygon's length and its after-strand-passage knot-type is investigated.
113
Inversion Method for Spectral Analysis of Surface Waves (SASW). Orozco, M. Catalina (Maria Catalina). 07 January 2004.
This research focuses on estimating the shear wave velocity (Vs) profile based on the dispersion curve obtained from SASW field test data (i.e., inversion of SASW data). It is common for the person performing the inversion to assume the prior information required to constrain the problem based on his/her own judgment. Additionally, the Vs profile is usually shown as unique without giving a range of possible solutions. For these reasons, this work focuses on: (i) studying the non-uniqueness of the solution to the inverse problem; (ii) implementing an inversion procedure that presents the estimated model parameters in a way that reflects their uncertainties; and (iii) evaluating tools that help choose the appropriate prior information.
One global and one local search procedure were chosen to accomplish these purposes: a pure Monte Carlo method and the maximum likelihood method, respectively. The pure Monte Carlo method was chosen to study the non-uniqueness by looking at the range of acceptable solutions (i.e., Vs profiles) obtained with as few constraints as possible. The maximum likelihood method was chosen because it is a statistical approach, which enables us to estimate the uncertainties of the resulting model parameters and to apply tools such as the Bayesian criterion to help select the prior information objectively.
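A minimal sketch of the pure Monte Carlo step, under heavy assumptions: random layered Vs profiles are drawn, a placeholder forward model maps each profile to a dispersion curve, and profiles whose misfit falls below a tolerance are kept as the set of acceptable solutions. The forward model here is a fake stand-in; a real SASW inversion would compute theoretical dispersion curves from wave-propagation theory.

```python
# Sketch of a pure Monte Carlo search for acceptable Vs profiles.
# `forward_dispersion` is a hypothetical placeholder, not a real forward model.
import numpy as np

rng = np.random.default_rng(2)
wavelengths = np.linspace(2.0, 40.0, 20)            # m
thickness = np.array([3.0, 7.0, 20.0])              # fixed 3-layer geometry, m

def forward_dispersion(vs):
    # Fake forward model: phase velocity as a depth-weighted average of Vs,
    # with deeper layers weighted more heavily at longer wavelengths.
    depths = np.cumsum(thickness) - thickness / 2.0
    return np.array([np.average(vs, weights=np.exp(-depths / (0.5 * lam)))
                     for lam in wavelengths])

observed = forward_dispersion(np.array([150.0, 250.0, 400.0]))  # synthetic data

accepted = []
for _ in range(20000):
    vs = rng.uniform(100.0, 600.0, size=3)          # candidate 3-layer profile
    misfit = np.sqrt(np.mean((forward_dispersion(vs) - observed) ** 2))
    if misfit < 10.0:                               # tolerance in m/s
        accepted.append(vs)

accepted = np.array(accepted)
if len(accepted):
    print(len(accepted), "acceptable profiles; Vs range per layer:")
    print(accepted.min(axis=0), accepted.max(axis=0))
```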
The above inversion methods were implemented for synthetic data, which was produced with the same forward algorithm used during inversion. This implies that all uncertainties were caused by the nature of the SASW inversion problem (i.e., there were no uncertainties added by experimental errors in data collection, analysis of the data to create the dispersion curve, layered model to represent a real 3-D soil stratification, or wave propagation theory). At the end of the research, the maximum likelihood method of inversion and the tools for the selection of prior information were successfully used with real experimental data obtained in Memphis, Tennessee.
114
Second Level Cluster Dependencies: A Comparison of Modeling Software and Missing Data Techniques. Larsen, Ross Allen Andrew. August 2010.
Dependencies in multilevel models at the second level have never been thoroughly examined. For certain designs, first-level subjects are independent over time, but the second-level subjects may exhibit nonzero covariances over time. Following a review of relevant literature, the first study investigated which widely used computer programs adequately take these dependencies into account in their analysis. This was accomplished through a simulation study with SAS, and examples of analyses with Mplus and LISREL. The second study investigated the impact of two different missing data techniques for such designs when data are missing at the first level, with a simulation study in SAS. The first study simulated data produced in a multiyear study, varying the numbers of subjects at the first and second levels, the number of data waves, the magnitude of effects at both the first and second level, and the magnitude of the second-level covariance. Results showed that SAS and the MULTILEV component in LISREL analyze such data well, while Mplus does not. The second study compared two missing data techniques in the presence of a second-level dependency: multiple imputation (MI) and full information maximum likelihood (FIML). They were compared in a SAS simulation study in which the data were simulated with all the factors of the first study, with missing data varied in amount and pattern (missing completely at random or missing at random). Results showed that FIML is superior to MI because it produces lower bias and correctly estimates standard errors.
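To make the design concrete, here is a minimal simulation sketch of the kind of second-level dependency at issue: second-level units (teachers, say) are followed over two waves with new first-level subjects each wave, and the teacher effects covary across waves. All names and values are hypothetical, and this is not the dissertation's SAS code.

```python
# Sketch: simulating a design where second-level units (e.g. teachers) have
# effects that covary across two waves of data. Names are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n_teachers, n_students = 100, 20
rho = 0.6                                    # level-2 covariance across waves
cov = np.array([[1.0, rho], [rho, 1.0]])
u = rng.multivariate_normal([0.0, 0.0], cov, size=n_teachers)  # teacher effects

y = np.empty((n_teachers, n_students, 2))
for t in range(2):                           # two waves; new students each wave
    e = rng.normal(0.0, 1.0, (n_teachers, n_students))
    y[:, :, t] = 0.5 + u[:, [t]] + e         # intercept + teacher effect + error

# Teacher-level wave means approximately recover the level-2 correlation
# (attenuated slightly by student-level noise in the means):
means = y.mean(axis=1)                       # shape (n_teachers, 2)
print(np.corrcoef(means[:, 0], means[:, 1])[0, 1])
```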
115
A Viterbi Decoder Using SystemC for Area Efficient VLSI Implementation. Sozen, Serkan. 01 September 2006.
In this thesis, the VLSI implementation of a Viterbi decoder using a design and simulation platform called SystemC is studied. For this purpose, the architecture of the Viterbi decoder is optimized for VLSI implementation. Consequently, two novel area-efficient structures for reconfigurable Viterbi decoders are suggested.
The traditional and SystemC design cycles are compared to show the advantages of SystemC, the C++ platforms supporting SystemC are listed, and installation issues and examples are discussed.
The Viterbi decoder is widely used to estimate the message encoded by a convolutional encoder. In the implementations in the literature, special structures called trellises are formed to decrease the complexity and the area.
In this thesis, two new area-efficient reconfigurable Viterbi decoder approaches are suggested, based on rearranging the states of the trellis structures to eliminate the switching and memory-addressing complexity.
The first suggested architecture, based on a reconfigurable Viterbi decoder, reduces switching and memory-addressing complexity. In this architecture, the states are reorganized and the trellis structures are realized by reusing the same structures in subsequent instances. As a result, the area is minimized and power consumption is reduced. Since the addressing complexity is reduced, the speed is expected to increase.
The second area efficient Viterbi decoder is an improved version of the first one and has the ability to configure the parameters of constraint length, code rate, transition probabilities, trace-back depth and generator polynomials.
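For context, the sketch below shows textbook hard-decision Viterbi decoding for a rate-1/2, constraint-length-3 convolutional code with generator polynomials (7, 5) in octal. It illustrates the trellis search that such hardware architectures implement; it is not the thesis's design.

```python
# Textbook hard-decision Viterbi decoding for a rate-1/2, K=3 convolutional
# code with generators (7, 5) octal; an illustration of the trellis search,
# not the thesis's architecture.
G = [0b111, 0b101]          # generator polynomials, constraint length K = 3
N_STATES = 4                # 2**(K-1) trellis states

def encode_step(state, bit):
    reg = (bit << 2) | state                      # shift new bit into register
    out = tuple(bin(reg & g).count("1") % 2 for g in G)
    return out, reg >> 1                          # output pair, next state

def viterbi_decode(received):
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)       # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for r in received:                            # r = received 2-bit pair
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                out, nxt = encode_step(s, bit)
                m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if m < new_metric[nxt]:           # keep the survivor path
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(range(N_STATES), key=lambda s: metric[s])]

# Encode a message, flip one channel bit, and recover the message.
msg = [1, 0, 1, 1, 0, 0]                          # includes 2 flush zeros
state, coded = 0, []
for b in msg:
    out, state = encode_step(state, b)
    coded.append(out)
coded[2] = (coded[2][0] ^ 1, coded[2][1])         # single channel error
print(viterbi_decode(coded))                      # recovers [1, 0, 1, 1, 0, 0]
```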
116
Pairwise Multiple Comparisons Under Short-tailed Symmetric Distribution. Balci, Sibel. 01 May 2007.
In this thesis, pairwise multiple comparisons and multiple comparisons with a control are studied when the observations have short-tailed symmetric distributions.
Under non-normality, the testing procedure is given, and the estimators used in this procedure are reviewed: Huber estimators, the trimmed mean with winsorized standard deviation, modified maximum likelihood estimators, and the ordinary sample mean and sample variance.
Finally, the robustness properties of the stated estimators are compared with each other, and it is shown that the test based on the modified maximum likelihood estimators has better robustness properties under short-tailed symmetric distributions.
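As a rough illustration of this kind of comparison (not the thesis's MML procedure), the sketch below contrasts the simulated efficiency of the sample mean and a trimmed mean, using a uniform distribution as a stand-in for a short-tailed symmetric parent and a contaminated normal as a robustness check.

```python
# Rough illustration of comparing location estimators by simulated variance.
# The uniform stands in for a short-tailed symmetric distribution; this is
# not the thesis's MML-based test.
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(4)
n, reps = 20, 5000

def simulate(draw):
    means, tmeans = [], []
    for _ in range(reps):
        x = draw()
        means.append(x.mean())
        tmeans.append(trim_mean(x, 0.1))     # drop 10% from each tail
    return np.var(means), np.var(tmeans)

short_tailed = lambda: rng.uniform(-1.0, 1.0, n)
contaminated = lambda: np.where(rng.random(n) < 0.1,
                                rng.normal(0.0, 5.0, n),
                                rng.normal(0.0, 1.0, n))

# The plain mean wins under the short-tailed parent; the trimmed mean wins
# under contamination, illustrating the efficiency/robustness trade-off.
print("uniform (short-tailed):", simulate(short_tailed))
print("contaminated normal:  ", simulate(contaminated))
```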
117
Bayesian Inference In ANOVA Models. Ozbozkurt, Pelin. 01 January 2010.
Estimation of location and scale parameters from a random sample of size n is of paramount importance in Statistics. An estimator is called fully efficient if it attains the Cramer-Rao minimum variance bound besides being unbiased. The method that yields such estimators, at any rate for large n, is the method of modified maximum likelihood estimation. Apparently, such estimators cannot be made more efficient by using sample-based classical methods. That makes room for the Bayesian method of estimation, which engages prior distributions and likelihood functions. A formal combination of the prior knowledge and the sample information is called the posterior distribution. The posterior distribution is maximized with respect to the unknown parameter(s). That gives HPD (highest probability density) estimator(s). Locating the maximum of the posterior distribution is, however, enormously difficult (computationally and analytically) in most situations. To alleviate these difficulties, we use the modified likelihood function in the posterior distribution instead of the likelihood function. We derived the HPD estimators of location and scale parameters of distributions in the family of Generalized Logistic. We have extended the work to experimental design, one-way ANOVA. We have obtained the HPD estimators of the block effects and the scale parameter (in the distribution of errors); they have beautiful algebraic forms. We have shown that they are highly efficient. We have given real-life examples to illustrate the usefulness of our results. Thus, the enormous computational and analytical difficulties with the traditional Bayesian method of estimation are circumvented, at any rate in the context of experimental design.
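A minimal numerical sketch of the general idea (not the thesis's derivation): with a likelihood from the Type I generalized logistic family and flat/vague priors on location and scale, the HPD estimate is the posterior mode, found here by numerical maximization. The density form, priors, and all values are illustrative assumptions.

```python
# Sketch: HPD (posterior mode) estimation of location mu and scale sigma for
# a Type I generalized logistic likelihood with known shape b, under a flat
# prior on mu and a 1/sigma prior on sigma. Everything here is illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
b = 2.0                                    # assumed known shape parameter

# Type I generalized logistic variates via inverse CDF: F(z) = (1+e**-z)**-b
u = rng.random(100)
z = -np.log(u ** (-1.0 / b) - 1.0)
data = 3.0 + 1.5 * z                       # true mu = 3.0, sigma = 1.5

def neg_log_posterior(theta):
    mu, sigma = theta
    if sigma <= 0.0:
        return np.inf
    z = (data - mu) / sigma
    loglik = np.sum(np.log(b) - z - np.log(sigma)
                    - (b + 1.0) * np.logaddexp(0.0, -z))  # log(1 + e**-z)
    return -(loglik - np.log(sigma))       # 1/sigma prior adds -log(sigma)

hpd = minimize(neg_log_posterior, x0=[np.median(data), 1.0],
               method="Nelder-Mead")
print("HPD estimates (mu, sigma):", hpd.x)
```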
118
Adaptive Estimation And Hypothesis Testing Methods. Donmez, Ayca. 01 March 2010.
For statistical estimation of population parameters, Fisher&rsquo / s maximum likelihood estimators (MLEs) are commonly used. They are consistent, unbiased and efficient, at any rate for large n. In most situations, however, MLEs are elusive because of computational difficulties. To alleviate these difficulties, Tiku&rsquo / s modified maximum likelihood estimators (MMLEs) are used. They are explicit functions of sample observations and easy to compute. They are asymptotically equivalent to MLEs and, for small n, are equally efficient. Moreover, MLEs and MMLEs are numerically very close to one another. For calculating MLEs and MMLEs, the functional form of the underlying distribution has to be known. For machine data processing, however, such is not the case. Instead, what is reasonable to assume for machine data processing is that the underlying distribution is a member of a broad class of distributions. Huber assumed that the underlying distribution is long-tailed symmetric and developed the so called M-estimators. It is very desirable for an estimator to be robust and have bounded influence function. M-estimators, however, implicitly censor certain sample observations which most practitioners do not appreciate. Tiku and Surucu suggested a modification to Tiku&rsquo / s MMLEs. The new MMLEs are robust and have bounded influence functions. In fact, these new estimators are overall more efficient than M-estimators for long-tailed symmetric distributions. In this thesis, we have proposed a new modification to MMLEs. The resulting estimators are robust and have bounded influence functions. We have also shown that they can be used not only for long-tailed symmetric distributions but for skew distributions as well. We have used the proposed modification in the context of experimental design and linear regression. We have shown that the resulting estimators and the hypothesis testing procedures based on them are indeed superior to earlier such estimators and tests.
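For context, here is a short sketch of the M-estimation approach the abstract contrasts with MMLEs: robust linear regression with Huber's bounded-influence loss via statsmodels, on simulated data with outliers. This illustrates M-estimators generally, not the thesis's proposed modification.

```python
# Sketch of Huber M-estimation for linear regression (the approach the
# thesis compares against), using statsmodels RLM on data with outliers.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
x = rng.uniform(0.0, 10.0, 80)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, 80)
y[:8] += rng.normal(0.0, 25.0, 8)          # heavy outliers in 10% of cases

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                    # least squares: unbounded influence
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

print("OLS slope:  ", ols.params[1])        # dragged by the outliers
print("Huber slope:", huber.params[1])      # close to the true value 2.0
```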
119
Robust Estimation And Hypothesis Testing In Microarray Analysis. Ulgen, Burcin Emre. 01 August 2010.
Microarray technology allows the measurement of thousands of gene expressions simultaneously. As a result, many statistical methods have emerged for identifying differentially expressed genes. Kerr et al. (2001) proposed an analysis of variance (ANOVA) procedure for the analysis of gene expression data. Their estimators are based on the assumption of normality; however, as they noted, the parameter estimates and residuals from this analysis are notably heavier-tailed than normal. Since non-normality complicates the data analysis and results in inefficient estimators, it is very important to develop statistical procedures that are efficient and robust. For this reason, in this work we use the Modified Maximum Likelihood (MML) and Adaptive Maximum Likelihood (AMML) estimation methods (Tiku and Suresh, 1992) and show that the MML and AMML estimators are more efficient and robust. In our study we compare the MML and AMML methods with widely used statistical analysis methods via simulations and real microarray data sets.
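As a minimal illustration of gene-wise ANOVA screening (the classical, normality-based step that the thesis makes robust), the sketch below runs a one-way F test per gene across three treatment groups on simulated expression data. It is not the Kerr et al. model or the MML/AMML procedure.

```python
# Minimal gene-wise screening sketch: one-way ANOVA F test per gene across
# three treatment groups on simulated data. This is the classical normality-
# based step, not the robust MML/AMML procedure the thesis develops.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(7)
n_genes, per_group = 1000, 5
groups = [rng.normal(0.0, 1.0, (n_genes, per_group)) for _ in range(3)]
groups[0][:50] += 2.0                      # first 50 genes truly differential

pvals = np.array([f_oneway(*(g[i] for g in groups)).pvalue
                  for i in range(n_genes)])
print("genes flagged at p < 0.001:", int(np.sum(pvals < 0.001)))
```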
120
On Multivariate Longitudinal Binary Data Models And Their Applications In Forecasting. Asar, Ozgur. 01 July 2012.
Longitudinal data arise when subjects are followed over time. This type of data is typically dependent, due to the repeated observations on each subject, and this dependence is termed within-subject dependence. Often the scientific interest is in multiple longitudinal measurements, which introduce two additional types of association: between-response and cross-response temporal dependencies. Only statistical methods that take these association structures into account can yield reliable and valid statistical inferences. Although methods for univariate longitudinal data have been studied extensively, multivariate longitudinal data still need more work. In this thesis, although we mainly focus on multivariate longitudinal binary data models, we also consider other types of response families when necessary. We extend a work on multivariate marginal models, namely multivariate marginal models with response-specific parameters (MMM1), and propose multivariate marginal models with shared regression parameters (MMM2). Both of these models are based on generalized estimating equations (GEE) and are valid for several response families such as Binomial, Gaussian, Poisson, and Gamma. Two R packages, mmm and mmm2, are proposed to fit them, respectively. We further develop a marginalized multilevel model, namely the probit normal marginalized transition random effects model (PNMTREM), for multivariate longitudinal binary responses. In this model, the implicit function theorem is used to explicitly link the levels of marginalized multilevel models with transition structures for the first time. An R package, pnmtrem, is proposed to fit the model. PNMTREM is applied to data collected through the Iowa Youth and Families Project (IYFP). Five different models, including univariate and multivariate ones, are considered to forecast multivariate longitudinal binary data. A comparative simulation study, which includes a model-independent data simulation process, is considered for this purpose. The forecasting of independent variables is taken into account as well. To assess the forecasts, several accuracy measures are considered, such as the expected proportion of correct predictions (ePCP), the area under the receiver operating characteristic (AUROC) curve, and the mean absolute scaled error (MASE). The Mother's Stress and Children's Morbidity (MSCM) data are used to illustrate this comparison in real life. Results show that marginalized models yield better forecasting results than marginal models. The simulation results are in agreement with these findings.
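To make the marginal-model machinery concrete, here is a small sketch of a GEE fit for longitudinal binary responses with an exchangeable working correlation, using statsmodels in Python; the data and variable names are hypothetical, and the mmm/mmm2 packages mentioned above are R packages not shown here.

```python
# Sketch: GEE for longitudinal binary data with an exchangeable working
# correlation (the machinery behind marginal models such as MMM1/MMM2).
# Data and names are hypothetical; the thesis's mmm/mmm2 packages are in R.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n_subj, n_time = 200, 4
subj = np.repeat(np.arange(n_subj), n_time)
x = rng.normal(0.0, 1.0, n_subj * n_time)
u = np.repeat(rng.normal(0.0, 0.8, n_subj), n_time)  # within-subject dependence
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x + u)))
y = rng.binomial(1, p)

df = pd.DataFrame({"y": y, "x": x, "subj": subj})
fit = sm.GEE.from_formula("y ~ x", groups="subj", data=df,
                          family=sm.families.Binomial(),
                          cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.summary())
```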