271

Seasonal Adjustment and Dynamic Linear Models

Tongur, Can January 2013 (has links)
Dynamic linear models are a state space model framework based on the Kalman filter. We use this framework to perform seasonal adjustment of empirical and artificial data. A simple model and an extended model based on Gibbs sampling are used, and the results are compared with those of a standard seasonal adjustment method. The state space approach is then extended to discuss direct and indirect seasonal adjustment. This is achieved by applying a seasonal level model with no trend and some specific input variances that render different signal-to-noise ratios, illustrated for a system consisting of two artificial time series. Relative efficiencies between direct, indirect and multivariate, i.e. optimal, variances are then analyzed. In practice, standard seasonal adjustment packages do not support optimal/multivariate seasonal adjustment, so a univariate approach to simultaneous estimation is presented by specifying a Holt-Winters exponential smoothing method. This is applied to two sets of time series systems by defining a total loss function with a trade-off weight between the individual series' loss functions and their aggregate loss function. The loss function is based on either the conventional squared-error loss or a robust Huber loss. The exponential decay parameters are then estimated by minimizing the total loss function for different trade-off weights. It is then concluded which approach, direct or indirect seasonal adjustment, is preferable for the two time series systems. The dynamic linear modeling approach is also applied to Swedish political opinion polls to assess the true underlying political opinion when several polls, with potential design effects and bias, are observed at non-equidistant time points. A Wiener process model is used to model the change in the proportion of voters supporting either a specific party or a party block.
Similar to stock market models, all available (political) information is assumed to be capitalized in the poll results and is incorporated into the model by assimilating opinion poll results through Bayesian updating of the posterior distribution. Based on the results, we are able to assess the true underlying voter proportion and, additionally, predict the elections. / <p>At the time of the doctoral defence the following papers were unpublished and had a status as follows: Paper 3: Manuscript; Paper 4: Manuscript</p>
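The Kalman filter recursions behind a dynamic linear model can be sketched for the simplest case, the local level model. This is a generic illustration, not the thesis's seasonal model; the variances and data below are made up.

```python
# Minimal sketch of a Kalman filter for a local level model, the simplest
# dynamic linear model: y_t = mu_t + v_t, mu_t = mu_{t-1} + w_t.
# Variance choices and the input series are illustrative.

def kalman_local_level(ys, obs_var=1.0, state_var=0.1, m0=0.0, p0=1e6):
    """Return filtered state means for a local level model."""
    m, p = m0, p0
    means = []
    for y in ys:
        # Prediction step: random-walk state, so the mean carries over.
        p_pred = p + state_var
        # Update step: blend prediction and observation via the Kalman gain.
        k = p_pred / (p_pred + obs_var)
        m = m + k * (y - m)
        p = (1 - k) * p_pred
        means.append(m)
    return means

filtered = kalman_local_level([1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9])
```

With a diffuse initial variance the first filtered mean essentially matches the first observation; after the level shift at the fifth point, the estimate moves gradually toward the new level.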
272

Homogeneïtat d'estil en El Tirant Lo Blanc / Homogeneity of style in Tirant lo Blanc

Riba Civil, Alexandre 20 September 2002 (has links)
This Ph.D. thesis tackles the problem of the homogeneity of style in Tirant lo Blanc, using the statistical analysis of stylistic features that are measurable but rarely consciously controlled by the author. The goal is to determine whether the style in the book is homogeneous and, if it is not, to find stylistic boundaries. Tirant lo Blanc is the main work in Catalan literature, a chivalry book hailed as 'the best book of its kind in the world' by Cervantes in Don Quixote, and is considered to be the first modern novel in Europe. There has been an intense and long-lasting debate around its authorship, originating from conflicting information given in its first edition: while the dedicatory letter states that Joanot Martorell takes sole responsibility for writing the book, the colophon states that the last quarter of the book was written by Martí Joan de Galba after the death of Martorell. Neither of the two candidate authors left any text comparable to the one under study, and therefore one cannot use discriminant analysis to help classify the chapters in the book by author. The majority opinion among medievalists leans towards the single-authorship hypothesis, even though there is a rather strong dissenting minority. In the first part of the thesis we summarize some useful statistical techniques for the quantitative analysis of literary style, we describe the problems that stylometry deals with, and we give the state of the art of the authorship attribution problem in Tirant lo Blanc. The database built for the quantification of style is described as well. The analysis starts with graphical, statistical process control and correspondence analysis techniques. In order to obtain maximum likelihood estimates of one or more change points in normal, binomial or multinomial sequences, we propose a practical method based on the fitting of generalized linear models. A cluster method for the rows of a contingency table, based on the fitting of models, is proposed too. We analyze the evolution of the diversity of the vocabulary used in the book through twelve different diversity indices. Following the lead of the extensive stylometry literature, we use word length and the use of function words to estimate the change point and to attribute a style to each of the 489 chapters of the book. The use of letters, in spite of being less useful, reinforces the evidence found with the units previously cited. Sentence length and chapter length were not useful for determining a style boundary in Tirant. The statistical analysis consistently detects a change in style somewhere between chapters 371 and 382, even though a few chapters at the end have a style similar to the ones before that boundary. It is important to remark that even though the statistical analysis supports the existence of two authors, it is not up to us to exclude the possibility that the stylistic boundary found could be explained otherwise.
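The core idea of maximum likelihood change-point estimation can be sketched for a normal sequence: with a common variance, the ML change point is the split that minimizes the pooled sum of squared deviations around the two segment means. The function name and data below are illustrative, not the thesis's GLM-based implementation.

```python
# Hedged sketch of single change-point estimation in a normal sequence by
# maximum likelihood: with common variance, the ML change point minimizes
# the pooled sum of squared deviations around the two segment means.

def change_point_mle(xs):
    """Return the index k that best splits xs into two constant-mean segments."""
    def sse(seg):
        if not seg:
            return 0.0
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    best_k, best_cost = 1, float("inf")
    for k in range(1, len(xs)):          # candidate split: xs[:k] and xs[k:]
        cost = sse(xs[:k]) + sse(xs[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

series = [0.1, -0.2, 0.0, 0.2, -0.1, 2.1, 1.9, 2.2, 2.0]
k = change_point_mle(series)  # change detected after the 5th observation
```

Scanning all candidate splits is feasible here because the likelihood factorizes over the two segments; the same profiling idea extends to binomial and multinomial sequences.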
273

Tillståndsskattning i robotmodell med accelerometrar / State estimation in a robot model using accelerometers

Ankelhed, Daniel, Stenlind, Lars January 2005 (has links)
The purpose of this report is to evaluate different methods for identifying states in robot models. Both linear and non-linear filters exist among these methods and are compared to each other. Advantages, disadvantages and problems that can occur during tuning and running are presented. Additional measurements from accelerometers are added and their use with the above-mentioned methods for state estimation is evaluated. The evaluation of methods in this report is mainly based on simulations in Matlab, even though some experiments have been performed on laboratory equipment. The conclusion indicates that simple non-linear models with few states can be more accurately estimated with a Kalman filter than with an extended Kalman filter, as long as only linear measurements are used. When non-linear measurements are used, an extended Kalman filter is more accurate than a Kalman filter. Non-linear measurements are introduced through accelerometers with non-linear measurement equations. Using accelerometers generally leads to better state estimation when the measurement equations have a simple relation to the model.
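The extended Kalman filter handles a non-linear measurement equation by linearizing it around the current estimate. The sketch below shows a single scalar measurement update with h(x) = sin(x) as an arbitrary stand-in for a non-linear accelerometer equation; it is not the thesis's robot model.

```python
import math

# Illustrative single EKF measurement update for a scalar state with a
# non-linear measurement y = h(x) + noise, where h(x) = sin(x) is a
# hypothetical stand-in for a non-linear accelerometer equation.

def ekf_update(m, p, y, obs_var):
    """One extended-Kalman measurement update: linearize h around m."""
    h = math.sin(m)          # predicted measurement
    H = math.cos(m)          # Jacobian of h at the current estimate
    s = H * p * H + obs_var  # innovation variance
    k = p * H / s            # Kalman gain
    m_new = m + k * (y - h)
    p_new = (1 - k * H) * p
    return m_new, p_new

m, p = ekf_update(m=0.5, p=1.0, y=math.sin(0.8), obs_var=0.01)
```

A standard Kalman filter would require h to be linear; the only EKF-specific steps are evaluating h and its Jacobian H at the current mean.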
274

Recursive Residuals and Model Diagnostics for Normal and Non-Normal State Space Models

Frühwirth-Schnatter, Sylvia January 1994 (has links) (PDF)
Model diagnostics for normal and non-normal state space models are based on recursive residuals, which are defined from the one-step-ahead predictive distribution. Routine calculation of these residuals is discussed in detail. Various diagnostic tools are suggested to check, for example, for misspecified observation distributions and for autocorrelation. The paper also covers such topics as model diagnostics for discrete time series, model diagnostics for generalized linear models, and model discrimination via Bayes factors. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
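For the Gaussian case, recursive residuals can be computed as standardized one-step-ahead prediction errors e_t / sqrt(F_t) from the Kalman recursions. The sketch below uses a local level model with illustrative variances; it is a generic example, not the paper's general algorithm.

```python
import math

# Recursive residuals for a Gaussian local level model: the standardized
# one-step-ahead prediction errors e_t / sqrt(F_t) from the Kalman filter.
# Variances and the input series are illustrative.

def recursive_residuals(ys, obs_var=1.0, state_var=0.1, m0=0.0, p0=1e6):
    m, p = m0, p0
    res = []
    for y in ys:
        p_pred = p + state_var
        f = p_pred + obs_var        # one-step predictive variance F_t
        e = y - m                   # one-step prediction error e_t
        res.append(e / math.sqrt(f))
        k = p_pred / f
        m += k * e
        p = (1 - k) * p_pred
    return res

r = recursive_residuals([1.0, 1.1, 0.9, 1.0, 1.05])
```

Under a correctly specified model these residuals are approximately standard normal and serially uncorrelated, which is what makes them useful for checking distributional assumptions and autocorrelation.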
275

A Method For Robust Design Of Products Or Processes With Categorical Response

Erdural, Serkan 01 December 2006 (has links) (PDF)
In industrial processes, decreasing variation while achieving the targets is very important. For manufacturers, finding optimal settings of product and process parameters that are capable of producing desired results under varying conditions is crucial. In most cases, the quality response is measured on a continuous scale. However, in some cases, the desired quality response may be qualitative (categorical). There are many effective methods to design robust products/processes through industrial experimentation when the response variable is continuous, but methods proposed so far in the literature for robust design with categorical response variables have various limitations. This study offers a simple and effective method for the analysis of categorical response data for robust product or process design. This method handles both location and dispersion effects to explore robust settings in an effective way. The method is illustrated on two cases: a foam molding process design and an iron-casting process design.
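One generic way to separate location and dispersion effects for a categorical response is to summarize, per design setting, the modal category (location) and the Shannon entropy of the category distribution (dispersion). This is a hedged sketch of the general idea, not the specific method proposed in the thesis.

```python
import math
from collections import Counter

# Illustrative per-setting summary of a categorical quality response:
# the modal category as a crude location measure and Shannon entropy as a
# dispersion measure. Data and category labels are made up.

def summarize(responses):
    counts = Counter(responses)
    n = len(responses)
    mode = counts.most_common(1)[0][0]
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return mode, entropy

setting_a = ["good", "good", "good", "fair", "good"]
setting_b = ["good", "fair", "poor", "fair", "good"]
mode_a, disp_a = summarize(setting_a)
mode_b, disp_b = summarize(setting_b)  # setting_b is more dispersed
```

A robust setting is then one whose mode sits at the desired category while its dispersion measure stays low across noise conditions.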
276

Inference Of Piecewise Linear Systems With An Improved Method Employing Jump Detection

Selcuk, Ahmet Melih 01 September 2007 (has links) (PDF)
Inference of regulatory relations in dynamical systems is a promising and active research area. Recently, most of the investigations in this field have been stimulated by research in functional genomics. In this thesis, the inferential modeling problem for switching hybrid systems is studied. Hybrid systems refer to dynamical systems in which discrete and continuous variables regulate each other; in other words, the jumps and flows are interrelated. In this study, piecewise linear approximations are used for modeling purposes, and it is shown that piecewise linear models are capable of displaying the evolutionary characteristics of switching hybrid systems approximately. For such systems, detection of switching instances and inference of locally linear parameters from empirical data provide a solid understanding of the system dynamics, and the inference methodology is based on these issues. The primary difference of the inference algorithm is the idea of transforming the switching detection problem into a jump detection problem by derivative estimation from discrete data. The jump detection problem has been studied extensively in the signal processing literature, so related techniques in the literature have been analyzed carefully and suitable ones have been adopted in this thesis. The primary advantage of the proposed method is its robustness in switching detection and derivative estimation. The theoretical background of this robustness claim and the importance of robustness for real-world applications are explained in detail.
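The transformation of switching detection into jump detection can be sketched very simply: estimate the derivative of the sampled signal and flag instants where its magnitude spikes. The central-difference estimator and threshold below are illustrative; the thesis adopts more robust derivative estimators from the signal processing literature.

```python
# Hedged sketch of the jump-detection idea: estimate the derivative of a
# sampled signal with central differences and flag instants where its
# magnitude exceeds a threshold. Threshold and data are illustrative.

def detect_jumps(xs, dt=1.0, threshold=3.0):
    jumps = []
    for i in range(1, len(xs) - 1):
        deriv = (xs[i + 1] - xs[i - 1]) / (2 * dt)  # central difference
        if abs(deriv) > threshold:
            jumps.append(i)
    return jumps

# Piecewise linear signal with a jump between indices 4 and 5.
signal = [0.0, 0.5, 1.0, 1.5, 2.0, 12.0, 12.5, 13.0, 13.5]
idx = detect_jumps(signal)
```

Between flagged switching instances, each segment is approximately linear, so the locally linear parameters can then be fitted segment by segment.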
277

En applicering av generaliserade linjära modeller på interndata för operativa risker / An application of generalized linear models to internal data for operational risks

Bengtsson Ranneberg, Emil, Hägglund, Mikael January 2015 (has links)
The objective of this Master's thesis is to identify and analyze explanatory variables that affect operational losses. This is achieved by applying generalized linear models and selecting a number of explanatory variables that are based on the company's unit attributes. An operational loss is a rare event and, as a result, there is a limited amount of internal data. Generalized linear models use a range of statistical tools to give reliable estimates although the data are scarce. By performing two separate and independent analyses, it is possible to identify and analyze various unit attributes and their impact on the loss frequency and the loss severity. When modeling the loss frequency, a Poisson distribution is applied. When modeling the loss severity, a Tweedie distribution that is based on a semi-parametric distribution is applied. To analyze the total cost of operational losses for a single unit with certain attributes, the frequency model and the severity model are combined to form one common model. The results show that the geographical location of the unit, the size of the unit, the income per working hour, the working experience of the employees and the internal rating of the unit are all attributes that affect the cost of operational losses.
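The frequency/severity decomposition can be illustrated with the standard actuarial identity: if loss counts N and independent loss sizes X have expectations E[N] and E[X], the expected total loss is E[N]·E[X]. The log-link Poisson rate and all coefficients below are hypothetical, not the thesis's fitted model.

```python
import math

# Illustrative combination of a frequency model and a severity model:
# under independence of counts N and sizes X, E[total loss] = E[N] * E[X].
# The GLM coefficients and severity value below are hypothetical.

def expected_total_loss(rate, mean_severity):
    """Expected operational loss for one unit under independence of N and X."""
    return rate * mean_severity

# A log-link Poisson GLM gives rate = exp(linear predictor).
intercept, beta_size = -1.0, 0.3
unit_size = 4.0
rate = math.exp(intercept + beta_size * unit_size)   # expected loss count
loss = expected_total_loss(rate, mean_severity=25_000.0)
```

Fitting the two GLMs separately and multiplying their predictions is what makes the combined cost model tractable even with scarce internal data.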
278

The Effect of a Comprehensive English Language/Literacy Intervention in Bilingual Classrooms on the Development of English Reading Fluency for English-Language Learners, Grades 2-3

Trevino, Elizabeth Pauline, 1978- 14 March 2013 (has links)
English-language learners (ELLs) demonstrate lower levels of English reading proficiency than do native English-speaking students. Oral reading fluency (ORF), the number of words read correctly in 1 min, is one indicator of reading proficiency. Within second language (L2) reading research, there have been few studies of L2 ORF development. The purposes of this study were to: (a) model the trajectory (i.e., initial status and growth) of English ORF in Grades 2 and 3 for Spanish-speaking ELLs in bilingual education programs, and (b) determine the effect of a 4-year structured intervention in English language and reading on L2 ORF development. Data were archived from Project ELLA, a longitudinal, randomized study documenting ELLs' acquisition of English language and reading from kindergarten through third grade. Data included 1,470 observations of English ORF from 283 ELLs at 17 schools. Schools were randomly assigned to the intervention (n=8) or control (n=9) condition. In intervention schools, a one-way dual language program and a comprehensive ESL intervention were implemented. The intervention emphasized L2 oral language development in kindergarten and first grades, basic L2 reading skills in second grade, and content-area reading skills in third grade. In the control schools, the district's typical transitional bilingual education program and ESL curricula were implemented. L2 ORF was measured using DIBELS ORF on six occasions. Piecewise multilevel growth models were used for data analysis. In Grades 2 and 3, ELLs followed a two-stage linear growth trajectory in English ORF, with a large decrease in level between grades. Slope parameters were positive in both grades but decreased slightly in third grade. Participating in Project ELLA added 1.52 wcpm per month to students' ORF scores in Grade 2.
Both intervention and control groups improved at the same rate in Grade 3; however, intervention students maintained the higher level of ORF that was attained during second grade. Therefore, the ELLA intervention accelerated L2 ORF growth in second grade, such that intervention students read with greater fluency compared to control students throughout second and third grades.
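The two-stage piecewise linear trajectory described above (a within-grade slope, a between-grade level drop, then a second slope) can be sketched as a simple prediction function. All parameter values below are hypothetical placeholders, not Project ELLA's estimates.

```python
# Sketch of a two-stage piecewise linear growth trajectory: one slope within
# grade 2, a level drop over the summer, then a second slope within grade 3.
# All parameter values are hypothetical, not the study's fitted estimates.

def orf_trajectory(month, g2_intercept=50.0, g2_slope=3.0,
                   summer_drop=10.0, g3_slope=2.5, months_per_grade=9):
    if month <= months_per_grade:            # grade 2 segment
        return g2_intercept + g2_slope * month
    # grade 3 segment: restart from the end-of-grade-2 level minus the drop
    end_g2 = g2_intercept + g2_slope * months_per_grade
    return end_g2 - summer_drop + g3_slope * (month - months_per_grade)

end_of_g2 = orf_trajectory(9)     # 77.0 wcpm
start_of_g3 = orf_trajectory(10)  # lower level, new slope
```

In the multilevel version, each piece's intercept and slope get subject- and school-level random effects, and the intervention effect enters as a shift in the grade-2 slope.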
279

Statistical Methods for Dating Collections of Historical Documents

Tilahun, Gelila 31 August 2011 (has links)
The problem in this thesis was originally motivated by the Documents of Early England Data Set (DEEDS). The central problem with these medieval documents is the lack of methods to assign accurate dates to those documents which bear no date. With the problems of the DEEDS documents in mind, we present two methods to impute missing features of texts. In the first method, we suggest a new class of metrics for measuring distances between texts. We then show how to combine the distances between the texts using statistical smoothing. This method can be adapted to settings where the features of the texts are ordered or unordered categoricals (as in the case of, for example, authorship assignment problems). In the second method, we estimate the probability of occurrences of words in texts using nonparametric regression techniques of local polynomial fitting with kernel weights applied to generalized linear models. We combine the estimated probabilities of occurrence of words in a text to estimate the probability of occurrence of the text as a function of its feature, the feature in this case being the date on which the text was written. The application and results of our methods on the DEEDS documents are presented.
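The kernel-weighted estimation of a word's occurrence probability as a function of date can be sketched with a local constant (Nadaraya-Watson) smoother, the zero-degree special case of local polynomial fitting. The dates, indicators, and bandwidth below are made up for illustration.

```python
import math

# Hedged sketch: kernel-weighted (Nadaraya-Watson, i.e. local constant)
# estimate of a word's occurrence probability as a function of document
# date, using a Gaussian kernel. Dates and indicators are illustrative.

def occurrence_prob(t, dates, occurs, bandwidth=10.0):
    """Smoothed probability that the word occurs in a text dated t."""
    weights = [math.exp(-0.5 * ((t - d) / bandwidth) ** 2) for d in dates]
    return sum(w * y for w, y in zip(weights, occurs)) / sum(weights)

dates  = [1100, 1120, 1140, 1160, 1180, 1200]
occurs = [1,    1,    1,    0,    0,    0]   # word falls out of use
early = occurrence_prob(1110, dates, occurs)
late  = occurrence_prob(1190, dates, occurs)
```

Inverting such curves, word by word, is what lets the probability of an undated text be evaluated as a function of candidate dates.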
280

Bayesian model estimation and comparison for longitudinal categorical data

Tran, Thu Trung January 2008 (has links)
In this thesis, we address issues of model estimation for longitudinal categorical data and of model selection for these data with missing covariates. Longitudinal survey data capture the responses of each subject repeatedly through time, allowing for the separation of variation in the measured variable of interest across time for one subject from the variation in that variable among all subjects. Questions concerning persistence, patterns of structure, interaction of events and stability of multivariate relationships can be answered through longitudinal data analysis. Longitudinal data require special statistical methods because they must take into account the correlation between observations recorded on one subject. A further complication in analysing longitudinal data is accounting for the non-response or drop-out process. Potentially, the missing values are correlated with variables under study and hence cannot be totally excluded. Firstly, we investigate a Bayesian hierarchical model for the analysis of categorical longitudinal data from the Longitudinal Survey of Immigrants to Australia. Data for each subject is observed on three separate occasions, or waves, of the survey. One of the features of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of a hierarchical model, a multinomial model for the response and then subsequent terms are introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which allows missing data for both the response and explanatory variables to be imputed at each iteration of the algorithm, given some appropriate prior distributions. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.
Secondly, we examine the Bayesian model selection techniques of the Bayes factor and Deviance Information Criterion for our regression models with missing covariates. Computing Bayes factors involves computing the often complex marginal likelihood p(y|model), and various authors have presented methods to estimate this quantity. Here, we take the approach of path sampling via power posteriors (Friel and Pettitt, 2006). The appeal of this method is that for hierarchical regression models with missing covariates, a common occurrence in longitudinal data analysis, it is straightforward to calculate and interpret, since integration over all parameters, including the imputed missing covariates and the random effects, is carried out automatically with minimal added complexities of modelling or computation. We apply this technique to compare models for the employment status of immigrants to Australia. Finally, we also develop a model choice criterion based on the Deviance Information Criterion (DIC), similar to Celeux et al. (2006), but which is suitable for use with generalized linear models (GLMs) when covariates are missing at random. We define three different DICs: the marginal, where the missing data are averaged out of the likelihood; the complete, where the joint likelihood for response and covariates is considered; and the naive, where the likelihood is found assuming the missing values are parameters. These three versions have different computational complexities. We investigate through simulation the performance of these three different DICs for GLMs consisting of normally, binomially and multinomially distributed data with missing covariates having a normal distribution. We find that the marginal DIC and the estimate of the effective number of parameters, pD, have desirable properties, appropriately indicating the true model for the response under differing amounts of missingness of the covariates.
We find that the complete DIC is inappropriate generally in this context as it is extremely sensitive to the degree of missingness of the covariate model. Our new methodology is illustrated by analysing the results of a community survey.
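The DIC computation itself is short: DIC = D̄ + pD, where D̄ is the posterior mean deviance and pD = D̄ - D(θ̄) is the effective number of parameters. The sketch below uses a toy normal-mean model with a stand-in posterior sample, not the thesis's GLMs with missing covariates.

```python
import math

# Illustrative DIC from posterior draws: DIC = Dbar + pD, with
# pD = Dbar - D(theta_bar). Toy model: iid N(theta, 1) data.
# The "posterior draws" below are a hypothetical stand-in sample.

def deviance(theta, data, sigma=1.0):
    """-2 * log-likelihood for iid N(theta, sigma^2) data."""
    return sum((x - theta) ** 2 / sigma ** 2
               + math.log(2 * math.pi * sigma ** 2) for x in data)

def dic(posterior_draws, data):
    dbar = sum(deviance(t, data) for t in posterior_draws) / len(posterior_draws)
    theta_bar = sum(posterior_draws) / len(posterior_draws)
    pd = dbar - deviance(theta_bar, data)
    return dbar + pd, pd

draws = [0.9, 1.0, 1.1, 1.05, 0.95]      # stand-in posterior sample
value, pd = dic(draws, data=[0.8, 1.2, 1.0, 0.9, 1.1])
```

The marginal, complete, and naive variants discussed above differ only in which likelihood the `deviance` function evaluates: the likelihood with missing covariates integrated out, the joint likelihood of response and covariates, or the likelihood treating the missing values as parameters.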
