371

Spectral analysis of marine atmosphere time series.

Jakobsson, Thor Edward January 1973 (has links)
No description available.
372

UMVU estimation of phase and group delay with small samples

Ramsey, Philip J. January 1989 (has links)
Group delay between two univariate time series is a measure, in units of time, of how one series leads or lags the other at specific frequencies. The only published method of estimating group delay is that of Hannan and Thomson (1973); however, their method is highly asymptotic and does not allow inference to be performed on the group delay parameter in finite samples. In fact, spectral analysis in general does not allow for small-sample inference, which is a difficulty with the frequency domain approach to time series analysis. The reason that no statistical inference may be performed in small samples is that the distribution theory for spectral estimates is highly asymptotic, and one can never be certain in a particular application what finite sample size is required to justify the asymptotic result. In the dissertation the asymptotic distribution theory is circumvented by use of the Box-Cox power transformation on the observed sample phase function. Once transformed, the sample phase is assumed to be approximately normally distributed, and the relationship between phase and frequency is modelled by a simple linear regression model. In order to estimate group delay it is necessary to inversely transform the predicted values to the original scale of measurement, and this is done by expanding the inverse Box-Cox transformation function in a Taylor series. The group delay estimates are generated by using the derivative of the Taylor series expansion for phase. The UMVUE property comes from the fact that the Taylor series expansions are functions of complete, sufficient statistics from the transformed domain, and the Lehmann-Scheffé result (1950) is invoked to justify the UMVUE property. / Ph. D.
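As a rough illustration of the phase-versus-frequency regression idea that group delay estimation rests on, the sketch below regresses the unwrapped cross-spectral phase of two series on angular frequency and reads the delay off the slope. It deliberately omits the Box-Cox transformation, the Taylor-series inverse, and the UMVU argument that are the dissertation's actual contribution; the function name, the noiseless shifted-series example, and all parameter choices are illustrative assumptions.

```python
import numpy as np

def group_delay_by_phase_regression(x, y, fs=1.0):
    """Estimate a constant group delay from the slope of cross-spectral phase.

    Simplified illustration only: regress unwrapped phase on angular frequency
    and return -slope; none of the dissertation's UMVU machinery is included.
    """
    n = len(x)
    X = np.fft.rfft(x - np.mean(x))
    Y = np.fft.rfft(y - np.mean(y))
    cross = np.conj(X) * Y                          # raw cross-periodogram
    omega = 2.0 * np.pi * np.fft.rfftfreq(n, d=1.0 / fs)
    phase = np.unwrap(np.angle(cross[1:]))          # drop the zero frequency, unwrap phase
    slope = np.polyfit(omega[1:], phase, 1)[0]      # straight-line fit: phase ~ a + b*omega
    return -slope                                   # group delay = -d(phase)/d(omega)

# Example: y is x delayed by 5 samples, so the estimate should be very close to 5.
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
y = np.roll(x, 5)
print(round(group_delay_by_phase_regression(x, y), 3))
```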
373

Empirical Bayes procedures in time series regression models

Wu, Ying-keh January 1986 (has links)
In this dissertation, empirical Bayes estimators for the coefficients in time series regression models are presented. Because time series observations cannot be controlled, the explanatory variables do not remain the same from stage to stage. A generalization of the results of O'Bryan and Susarla is established and shown to be an extension of the results of Martz and Krutchkoff. Alternatively, since the distribution function of the sample observations is hard to obtain except asymptotically, the results of Griffin and Krutchkoff on empirical linear Bayes estimation are extended and then applied to estimating the coefficients in time series regression models. Comparisons between the performance of these two approaches are also made. Finally, predictions in time series regression models using empirical Bayes estimators and empirical linear Bayes estimators are discussed. / Ph. D.
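The generic device behind empirical linear Bayes estimation is shrinking each stage's estimate toward a prior mean that is itself estimated from the data. The sketch below shows that shrinkage step for a single regression coefficient observed across several stages. It is only a minimal caricature of the idea, not the O'Bryan-Susarla or Griffin-Krutchkoff estimators extended in the dissertation; the function name, the moment-based weights, and the simulated stages are illustrative assumptions.

```python
import numpy as np

def empirical_linear_bayes_shrinkage(beta_hats, se2):
    """Shrink per-stage estimates of one regression coefficient toward their grand mean.

    beta_hats : per-stage OLS estimates of the coefficient
    se2       : their sampling variances
    The prior mean and variance are estimated from the data by the method of moments.
    """
    beta_hats = np.asarray(beta_hats, dtype=float)
    se2 = np.asarray(se2, dtype=float)
    grand_mean = beta_hats.mean()
    # Between-stage (prior) variance estimate, truncated at zero.
    tau2 = max(beta_hats.var(ddof=1) - se2.mean(), 0.0)
    weights = tau2 / (tau2 + se2)                   # shrinkage factors in [0, 1]
    return grand_mean + weights * (beta_hats - grand_mean)

# Example: five stages with true coefficients scattered around 2.0.
rng = np.random.default_rng(1)
true_betas = 2.0 + 0.3 * rng.standard_normal(5)
se2 = np.full(5, 0.25)
beta_hats = true_betas + np.sqrt(se2) * rng.standard_normal(5)
print(empirical_linear_bayes_shrinkage(beta_hats, se2))
```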
374

A new representation for binary or categorical-valued time series data in the frequency domain

Lee, Hoonja 07 June 2006 (has links)
The classical Fourier analysis of time series data can be used to detect periodic trends that are of sinusoidal shape. However, this analysis can be misleading when time series trends are not sinusoidal. When the time series process of interest is binary or categorical-valued data, it might be more reasonable for the process to be represented by a square or rectangular form of functions instead of sinusoidal functions. The Walsh-Fourier analysis takes this approach using a square form of functions. The Walsh-Fourier analysis is based on the Walsh functions, a square form of functions that take on only the two values +1 and -1. But, unlike sinusoids, the Walsh functions are not periodic. Harmuth (1969) introduced the term sequency as a generalized frequency to identify functions that are not periodic, such as Walsh functions. The term sequency is interpreted as the number of zero crossings or sign changes per unit time. While the Walsh-Fourier analysis is reasonable in theory for binary or categorical-valued time series data, the interpretation of sequency is often difficult. In this dissertation, using a sequence of periodic functions, we develop the theory and method that can be applied to binary or categorical-valued data where patterns more naturally follow a rectangular shape. The theory parallels the Fourier theory and leads to a "Fourier-like" data transform that is specifically suited to the identification of rectangular trends. / Ph. D.
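For readers unfamiliar with Walsh functions, the sketch below builds a sequency-ordered Walsh matrix from the Sylvester Hadamard construction and projects a binary series onto it, so that the sequency carrying all the "power" can be read off directly. It illustrates only the standard Walsh-Fourier machinery referred to above, not the dissertation's proposed periodic representation; the function names and the square-wave example are illustrative choices.

```python
import numpy as np

def walsh_matrix(n):
    """Sequency-ordered Walsh functions of length n (n must be a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])                       # Sylvester construction
    sequency = np.count_nonzero(np.diff(H, axis=1), axis=1)   # sign changes per row
    return H[np.argsort(sequency)]

def walsh_power(x):
    """Squared Walsh-Fourier coefficients ('power' by sequency) of a series."""
    x = np.asarray(x, dtype=float)
    W = walsh_matrix(len(x))
    return (W @ x / len(x)) ** 2

# Example: a +1/-1 square wave of period 4 over 16 points is itself a Walsh
# function with 7 sign changes, so all power lands at sequency index 7.
x = np.tile([1, 1, -1, -1], 4)
print(np.argmax(walsh_power(x)))
```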
375

Parameter estimation for series observed with round-off error

Koons, Bruce K. January 1989 (has links)
Time series data are often observed with measurement error. One type of measurement error almost always present is rounding error. A procedure is proposed for estimating parameters of a finite moving average time series which is observed only after rounding. Method of moments estimators are proposed for estimation of parameters of time series observed with general measurement error, including error ε<sub>t</sub> which is correlated with the series X<sub>t</sub> being measured. This procedure requires knowledge of the autocovariance function (ACF) of ε<sub>t</sub> and the cross covariances between X<sub>t</sub> and ε<sub>t</sub>. For rounding error, the rounding error series is shown to approach uniform white noise as the rounding interval width, R, approaches zero, and the cross correlations between X<sub>t</sub> and the rounding error ε<sub>t</sub> are shown to approach zero as R → 0. For both small R and large R, the ACF of ε<sub>t</sub> and the cross covariances between X<sub>t</sub> and ε<sub>t</sub> are approximated. These values are then used to estimate the parameters of the moving average model for X<sub>t</sub> when X<sub>t</sub> is observed after rounding. / Ph. D.
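The small-R behaviour claimed above is easy to check by simulation: the sketch below rounds a moving-average series to a grid of width R and compares the rounding-error variance with the uniform white-noise value R²/12 and the error's correlation with the series itself. It is a diagnostic illustration only, not the proposed method-of-moments estimator; the MA(2) example and the rounding widths are arbitrary choices.

```python
import numpy as np

def rounding_error_diagnostics(x, R):
    """Compare e_t = round(x_t) - x_t with the uniform white-noise approximation."""
    x = np.asarray(x, dtype=float)
    e = np.round(x / R) * R - x                 # rounding error at interval width R
    return {
        "error_variance": e.var(),
        "uniform_variance": R ** 2 / 12.0,      # variance of U(-R/2, R/2)
        "corr_with_series": np.corrcoef(x, e)[0, 1],
    }

# Example: an MA(2) series rounded on a coarse and on a fine grid; as R shrinks,
# the error variance approaches R^2/12 and the correlation with x approaches 0.
rng = np.random.default_rng(2)
z = rng.standard_normal(20_000)
x = z[2:] + 0.6 * z[1:-1] + 0.3 * z[:-2]
for R in (1.0, 0.1):
    print(R, rounding_error_diagnostics(x, R))
```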
376

Univariat tidsserieanalys : En fallstudie i riskhantering på logistikavdelningen, Cytiva Umeå / Univariate time series analysis : A case study in risk management at the logistics department, Cytiva Umeå

Trigell, Martin January 2021 (has links)
There is little room for error when societally important production must increase in a fast-growing company in the midst of the corona pandemic. For production to increase, process flows and an enormous amount of data must be analyzed and described in detail, providing a foundation for new, efficient processes in which risks are minimized so that production can continue without disruption. Applying well-proven models and theories from financial risk management creates new opportunities to visualize information that would otherwise be forgotten or overlooked. Such data can give the production planning department guidance on how to set up production to minimize waste and the risk of production stoppages. The purpose of this work is to investigate how uni- and multivariate time series analysis can best be implemented to visualize and forecast lead times for the logistics department at Cytiva Umeå. The goals are to create both a new way of using otherwise disregarded data and a methodology for implementing similar models in supply-chain-related activities. Given the many limitations and opportunities, the work was carried out to keep the solution as simple as possible, so that the benefit to the company in future implementations is greatest. In the report, the relevant data are produced, sorted, and modeled. Several models are created based on theories and methods applied in a new area to describe the current situation in an efficient and simple way. The best implementation was one that included additional information about the system, so that the model learns to distinguish properties that derive from what the process looks like. Based on this model, the company can forecast lead times and thus use all the information available, act on it, and thereby create additional value for the company.
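As a hedged illustration of the kind of simple univariate baseline such a lead-time study might start from, the sketch below applies Holt's linear exponential smoothing to a short lead-time series. The thesis does not specify this model; the smoothing constants, forecast horizon, and data are purely illustrative assumptions.

```python
import numpy as np

def holt_forecast(lead_times, alpha=0.3, beta=0.1, horizon=4):
    """Holt's linear exponential smoothing forecast for a univariate lead-time series."""
    y = np.asarray(lead_times, dtype=float)
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend * np.arange(1, horizon + 1)

# Example: weekly lead times (days) drifting upward; forecast the next four weeks.
lead_times = [5.1, 5.3, 5.0, 5.6, 5.8, 6.0, 6.1, 6.4, 6.3, 6.7, 6.9, 7.0]
print(holt_forecast(lead_times).round(2))
```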
377

Spectral factor model for time series learning

Alexander Miranda, Abhilash 24 November 2011 (has links)
Today's computerized processes generate massive amounts of streaming data. In many applications, data is collected for modeling the processes. The process model is hoped to drive objectives such as decision support, data visualization, business intelligence, automation and control, pattern recognition and classification, etc. However, we face significant challenges in data-driven modeling of processes. Apart from the errors, outliers and noise in the data measurements, the main challenge is due to a large dimensionality, which is the number of variables each data sample measures. The samples often form a long temporal sequence called a multivariate time series where any one sample is influenced by the others. We wish to build a model that will ensure robust generation, reviewing, and representation of new multivariate time series that are consistent with the underlying process.

In this thesis, we adopt a modeling framework to extract characteristics from multivariate time series that correspond to dynamic variation-covariation common to the measured variables across all the samples. Those characteristics of a multivariate time series are named its 'commonalities' and a suitable measure for them is defined. What makes the multivariate time series model versatile is the assumption regarding the existence of a latent time series of known or presumed characteristics and much lower dimensionality than the measured time series; the result is the well-known 'dynamic factor model'. Original variants of existing methods for estimating the dynamic factor model are developed: the estimation is performed using the frequency-domain equivalent of the dynamic factor model named the 'spectral factor model'. To estimate the spectral factor model, ideas are sought from the asymptotic theory of spectral estimates. This theory is used to attain a probabilistic formulation, which provides maximum likelihood estimates for the spectral factor model parameters. Then, maximum likelihood parameters are developed with all the analysis entirely in the spectral domain such that the dynamically transformed latent time series inherits the commonalities maximally.

The main contribution of this thesis is a learning framework using the spectral factor model. We term learning as the ability of a computational model of a process to robustly characterize the data the process generates for purposes of pattern matching, classification and prediction. Hence, the spectral factor model could be claimed to have learned a multivariate time series if the latent time series when dynamically transformed extracts the commonalities reliably and maximally. The spectral factor model will be used for mainly two multivariate time series learning applications: First, real-world streaming datasets obtained from various processes are to be classified; in this exercise, human brain magnetoencephalography signals obtained during various cognitive and physical tasks are classified. Second, the commonalities are put to test by asking for reliable prediction of a multivariate time series given its past evolution; share prices in a portfolio are forecasted as part of this challenge.

For both spectral factor modeling and learning, an analytical solution as well as an iterative solution are developed. While the analytical solution is based on low-rank approximation of the spectral density function, the iterative solution is based on the expectation-maximization algorithm.
For the human brain signal classification exercise, a strategy for comparing similarities between the commonalities for various classes of multivariate time series processes is developed. For the share price prediction problem, a vector autoregressive model whose parameters are enriched with the maximum likelihood commonalities is designed. In both these learning problems, the spectral factor model gives commendable performance with respect to competing approaches. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
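A minimal sketch of the "analytical" idea mentioned above, i.e. a low-rank approximation of the spectral density, is given below: it smooths the cross-periodogram matrices of a multivariate series and keeps the top eigen-components at each frequency. It is not the thesis's maximum-likelihood or EM estimator, and the smoothing span, the simulated one-factor data, and the function name are assumptions made for illustration.

```python
import numpy as np

def low_rank_spectral_approximation(X, q, span=11):
    """Rank-q approximation of the spectral density matrix at each frequency.

    X : (T, p) multivariate series; q : number of latent factors; span : width
    of a simple moving-average (Daniell-type) smoother for the periodogram.
    """
    T, p = X.shape
    Z = np.fft.rfft(X - X.mean(axis=0), axis=0)             # (F, p) Fourier transforms
    P = np.einsum("fi,fj->fij", Z, np.conj(Z)) / T          # raw cross-periodogram matrices
    kernel = np.ones(span) / span
    S = np.empty_like(P)
    for i in range(p):                                      # smooth each entry over frequency
        for j in range(p):
            S[:, i, j] = np.convolve(P[:, i, j], kernel, mode="same")
    approx = np.empty_like(S)
    explained = np.empty(S.shape[0])
    for f in range(S.shape[0]):
        w, V = np.linalg.eigh(S[f])                         # Hermitian eigendecomposition
        top = np.argsort(w)[::-1][:q]                       # keep the q largest eigenvalues
        approx[f] = (V[:, top] * w[top]) @ V[:, top].conj().T
        explained[f] = w[top].sum() / w.sum()
    return approx, explained

# Example: three series driven by one common factor plus noise; a rank-1
# approximation should account for most of the smoothed spectral power.
rng = np.random.default_rng(3)
factor = rng.standard_normal(1024)
X = np.outer(factor, [1.0, 0.7, -0.5]) + 0.3 * rng.standard_normal((1024, 3))
_, explained = low_rank_spectral_approximation(X, q=1)
print(round(float(explained.mean()), 2))
```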
378

Vilket pris avgör vad du handlar? : En kvantitativ jämförande studie av krympflations påverkan på försäljning / Which price determines what you buy? : A quantitative comparative study of the effect of shrinkflation on sales

Hummelgren, Axel January 2020 (has links)
Consumer behaviour is today an important aspect of making sound decisions on policies for the consumer market. Both classical economic models and behavioural economic models are used to describe and predict these kinds of behaviours, although studies of their actual connections to different methods of pricing are lacking. This paper investigates what impact a change in price, introduced through a change in package size, has on demand. It also attempts to analyse whether this impact is better explained by behavioural or by classical economic theory. Using a classical time-trend analysis together with an interrupted time series analysis, sales trends (in kilograms) for two substitute products have been created and compared. These indicate that the effect on demand is most consistent with behavioural economics, but the relationship between a change in sales level and a change in package size is not statistically significant. The analysis in this paper is therefore not conclusive, and further studies on the subject are needed to answer these questions with more certainty.
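For concreteness, a generic interrupted-time-series (segmented regression) fit of the kind referred to above can be written in a few lines; the sketch below estimates the pre-intervention level and slope together with the level and slope changes at a known change point. The model form, variable names, and simulated sales data are illustrative assumptions, not the thesis's exact specification.

```python
import numpy as np

def interrupted_time_series_fit(y, t0):
    """Segmented regression around a known change point t0.

    Model: y_t = b0 + b1*t + b2*step_t + b3*(t - t0)*step_t + e_t, where step_t
    is 1 from the change point onward (e.g. after the package-size change).
    """
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    step = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, step, (t - t0) * step])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["level", "pre_slope", "level_change", "slope_change"], coef))

# Example: weekly sales with a drop of about 4 units at week 30.
rng = np.random.default_rng(4)
t = np.arange(60)
sales = 50 + 0.2 * t - 4.0 * (t >= 30) + rng.standard_normal(60)
print(interrupted_time_series_fit(sales, t0=30))
```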
379

Time series and spatial analysis of crop yield

Assefa, Yared January 1900 (has links)
Master of Science / Department of Statistics / Juan Du / Space and time are often vital components of research data sets. Accounting for and utilizing the space and time information in statistical models becomes beneficial when the response variable in question is shown to have space and time dependence. This work focuses on the modeling and analysis of crop yield over space and time. Specifically, two different yield data sets were used. The first yield and environmental data set was collected across selected counties in Kansas from yield performance tests conducted over multiple years. The second yield data set was a survey data set collected by the USDA across the US from 1900-2009. The objectives of our study were to investigate crop yield trends in space and time, quantify the variability in yield explained by genetics and space-time (environment) factors, and study how spatio-temporal information could be incorporated and utilized in modeling and forecasting yield. Based on the format of these data sets, trends of irrigated and dryland crops were analyzed by employing time series statistical techniques. Some traditional linear regressions and smoothing techniques are first used to obtain the yield function. These models were then improved by incorporating time and space information either as explanatory variables or as auto- or cross-correlations adjusted in the residual covariance structures. In addition, a multivariate time series modeling approach was conducted to demonstrate how the space and time correlation information can be utilized to model and forecast yield and related variables. The conclusion from this research clearly emphasizes the importance of the space and time components of data sets in research analysis, partly because they can often compensate for underlying variables and factor effects that are not measured or not well understood.
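One simple way to fold temporal correlation into a yield-trend regression, in the spirit of the residual-covariance adjustments mentioned above, is a Cochrane-Orcutt style two-step fit with AR(1) errors; the sketch below shows it for a single yield series. It is only a one-dimensional illustration, not the spatio-temporal models of the thesis, and the simulated data and parameter values are assumptions.

```python
import numpy as np

def trend_with_ar1_errors(y):
    """Linear time trend refitted after a Cochrane-Orcutt quasi-differencing step."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]          # lag-1 residual autocorrelation
    y_star = y[1:] - rho * y[:-1]                           # quasi-differenced response
    X_star = X[1:] - rho * X[:-1]
    beta_gls, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return {"rho": rho, "ols_slope": beta_ols[1], "gls_slope": beta_gls[1]}

# Example: an upward yield trend of 0.5 per year with AR(1) year-to-year noise.
rng = np.random.default_rng(5)
e = np.zeros(80)
for i in range(1, 80):
    e[i] = 0.6 * e[i - 1] + rng.standard_normal()
yields = 30 + 0.5 * np.arange(80) + e
print(trend_with_ar1_errors(yields))
```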
380

Festivals and sustainability : reducing energy related greenhouse gas emissions at music festivals

Marchini, Ben January 2013 (has links)
This thesis investigates the potential to reduce greenhouse gas emissions relating to electrical power provision at UK music festivals. It has been carried out in partnership with a number of UK festival organisers and power providers. The thesis provides a literature review of sustainable event management and the associated electrical power provision, before then investigating the existing methodologies for quantifying greenhouse gas emissions at festivals. This review identified a lack of data regarding energy demand at events other than total fuel demand. While energy data does not improve the accuracy of GHG accounting, it provides more detail which can identify opportunities to reduce these emissions. Data was gathered from 73 power systems at 18 music festivals from 2009-2012. This produced typical festival power load profiles for different system types including stages, traders and site infrastructure. These load profiles were characterised using a series of indicators that can create performance benchmarks, in addition to increasing the detail of carbon auditing. Analysis of the load profiles identifies opportunities for emission reduction. These address either the supply or demand for power in order to reduce on site fuel consumption. These opportunities include changes in operating procedure to reduce demand during non-operational periods, utilising low energy equipment on stages, and using a power provision system capable of adjusting power plant supply to meet demand. The work has documented power demand at festivals, and highlighted opportunities for change that can reduce costs and emissions, as well as informing festivals on their practices.
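A load profile can be summarized with a handful of indicators of the sort described above; the sketch below computes a few obvious ones (peak, average, load factor, non-operational baseload) from a metered demand series. The indicator set and names are illustrative assumptions, not the benchmarks defined in the thesis.

```python
import numpy as np

def load_profile_indicators(demand_kw, operational):
    """A few simple indicators for a metered power-system load profile.

    demand_kw   : demand measured at regular intervals (kW)
    operational : boolean array, True while the stage or trader is operating
    """
    demand_kw = np.asarray(demand_kw, dtype=float)
    operational = np.asarray(operational, dtype=bool)
    peak = demand_kw.max()
    return {
        "peak_kw": peak,
        "average_kw": demand_kw.mean(),
        "load_factor": demand_kw.mean() / peak,                      # how fully supply is used
        "non_operational_baseload_kw": demand_kw[~operational].mean(),
    }

# Example: a stage idling at 5 kW overnight and drawing 80 kW during evening shows.
hours = np.arange(48)
show_time = (hours % 24 >= 18) & (hours % 24 <= 23)
demand = np.where(show_time, 80.0, 5.0)
print(load_profile_indicators(demand, show_time))
```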
