  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
481

Time series analysis of Saudi Arabia oil production data

Albarrak, Abdulmajeed Barrak 14 December 2013 (has links)
Saudi Arabia is the largest petroleum producer and exporter in the world. The Saudi Arabian economy depends heavily on the production and export of oil, which motivates our research on the oil production of Saudi Arabia. The prime objective of our research is to find the most appropriate models for analyzing Saudi Arabia oil production data. Initially we consider autoregressive integrated moving average (ARIMA) models to fit the data, but most of the variables under study show some kind of volatility, so we finally decide to use autoregressive conditional heteroscedastic (ARCH) models for them. If there is no ARCH effect, the model automatically reduces to an ARIMA model. However, the existence of missing values in almost every variable complicates the analysis, since parameter estimation for an ARCH model does not converge when observations are missing. As a remedy, we estimate the missing observations first, employing the expectation maximization (EM) algorithm. Since our data are time series data, a simple EM algorithm is not appropriate. There is also evidence of outliers in the data, so we finally employ an EM algorithm based on robust least trimmed squares (LTS) regression to estimate the missing values. After estimating the missing values, we employ the White test to select the most appropriate ARCH model for each of the sixteen variables under study. A normality test on the resulting residuals is performed for each variable to check the validity of the fitted model. / ARCH/GARCH models, outliers and robustness : tests for normality and estimation of missing values in time series -- Outlier analysis and estimation of missing values by robust EM algorithm for Saudi Arabia oil production data -- Selection of ARCH models for Saudi Arabia oil production data. / Department of Mathematical Sciences
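The ARCH-effect screening the abstract describes can be illustrated with a small sketch (not the thesis's code, which uses the White test): an Engle-style LM test that regresses squared residuals on their own lags. A large n·R² statistic, compared against a χ²(lags) critical value, signals conditional heteroscedasticity. The simulated series and lag count below are illustrative assumptions.

```python
import numpy as np

def arch_lm_test(resid, lags=4):
    """Engle-style LM test for ARCH effects: regress squared residuals
    on their own lags; n * R^2 is approximately chi2(lags) under the
    null of no ARCH effect."""
    e2 = np.asarray(resid, dtype=float) ** 2
    y = e2[lags:]
    # design matrix: constant plus the `lags` lagged squared residuals
    X = np.column_stack(
        [np.ones(len(y))] + [e2[lags - i:len(e2) - i] for i in range(1, lags + 1)]
    )
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_ss = ((y - X @ beta) ** 2).sum()
    total_ss = ((y - y.mean()) ** 2).sum()
    return len(y) * (1.0 - resid_ss / total_ss)

# Simulated example: an ARCH(1) series shows a clear effect, white noise does not.
rng = np.random.default_rng(0)
z = rng.standard_normal(2000)
e = np.empty_like(z)
e[0] = z[0]
for t in range(1, len(z)):
    e[t] = np.sqrt(0.2 + 0.8 * e[t - 1] ** 2) * z[t]  # ARCH(1) recursion

lm_arch = arch_lm_test(e)                          # large: reject "no ARCH"
lm_iid = arch_lm_test(rng.standard_normal(2000))   # small: consistent with null
```

With 4 lags the 5% critical value is χ²(4) ≈ 9.49, so the ARCH(1) series is flagged while the white-noise series is not.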
482

Stellar Variability: A Broad and Narrow Perspective

Parks, James 12 August 2014 (has links)
A broad near-infrared photometric survey is conducted of 1678 stars in the direction of the ρ Ophiuchi (ρ Oph) star forming region using data from the 2MASS Calibration Database. The survey involves up to 1584 photometric measurements in the J, H and Ks bands with a ~1 day cadence spanning 2.5 years. A total of 101 variable stars are identified, with ΔKs band amplitudes from 0.044 to 2.31 mag and Δ(J−Ks) color amplitudes ranging from 0.053 to 1.47 mag. Of the 72 ρ Oph star cluster members, 79% are variable; in addition, 22 variable stars are identified as candidate members. The variability is categorized as periodic, long timescale, or irregular based on the Ks time series morphology. The dominant variability mechanisms are assigned based on the correlation between the stellar color and single band variability. Periodic signals are found in 32 variable stars with periods between 0.49 and 92 days. The most common variability mechanism among these stars is rotational modulation of cool starspots. Periodic eclipse-like variability is identified in 6 stars with periods ranging from 3 to 8 days; in these cases the variability mechanism may be warped circumstellar material driven by a hot proto-Jupiter. Aperiodic, long time scale variability is identified in 31 stars with time series ranging from 64 to 790 days. The variability mechanism is split evenly between variable extinction and mass accretion. The remaining 40 stars exhibit sporadic, aperiodic variability with no discernible time scale or variability mechanism. Interferometric images of the active giant λ Andromedae (λ And) were obtained for 27 epochs spanning November 2007 to September 2011. The H band angular diameter and limb darkening coefficient of λ And are 2.777 ± 0.027 mas and 0.241 ± 0.014, respectively.
Starspot properties are extracted via a parametric model and an image reconstruction program. High fidelity images are obtained from the 2009, 2010, and 2011 data sets. Stellar rotation, consistent with the photometrically determined period, is traced via starspot motion in 2010 and 2011. The orientation of λ And is fully characterized with a sky position angle and inclination angle of 23° and 78°, respectively.
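The periodic variables above were found with the authors' own pipeline; as a generic illustration only, a period search on irregularly sampled photometry can be sketched as a least-squares periodogram (the trial-period grid and light curve here are invented):

```python
import numpy as np

def ls_power(t, y, periods):
    """Fraction of variance explained by the best-fit sinusoid at each
    trial period -- a simple least-squares periodogram that tolerates
    irregular sampling."""
    y = y - y.mean()
    power = np.empty(len(periods))
    for i, p in enumerate(periods):
        w = 2.0 * np.pi / p
        X = np.column_stack([np.sin(w * t), np.cos(w * t)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        power[i] = ((X @ beta) ** 2).sum() / (y ** 2).sum()
    return power

# Irregularly sampled light curve with a 5-day period plus noise.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 300))
y = 0.3 * np.sin(2.0 * np.pi * t / 5.0) + 0.05 * rng.standard_normal(300)

periods = np.arange(1.0, 20.0, 0.01)
best_period = periods[np.argmax(ls_power(t, y, periods))]
```

The grid scan recovers the injected 5-day period; production surveys typically use a Lomb-Scargle implementation, which is mathematically equivalent for this single-sinusoid model.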
483

Suicide in Russia : A macro-sociological study

Jukkala, Tanya January 2013 (has links)
This work constitutes a macro-sociological study of suicide. The empirical focus is on suicide mortality in Russia, which is among the highest in the world and, moreover, developed in a dramatic manner over the second half of the 20th century. Suicide mortality in contemporary Russia is here placed within the context of development over a longer time period through empirical studies on 1) the general and sex- and age-specific developments in suicide over the period 1870–2007, 2) the underlying dynamics of Russian suicide mortality 1956–2005 pertaining to differences between age groups, time periods, and particular generations, and 3) the continuity in the aggregate-level relationship between heavy alcohol consumption and suicide mortality from the late Tsarist period to post-World War II Russia. In addition, a fourth study explores an alternative to Émile Durkheim’s dominant macro-sociological perspective on suicide by making use of Niklas Luhmann’s theory of social systems. With the help of Luhmann’s macro-sociological perspective it is possible to consider suicide and its causes also in terms of processes at the individual level (i.e. at the level of psychic systems), in a manner that contrasts with the ‘holistic’ perspective of Durkheim. The results of the empirical studies show that Russian suicide mortality, despite its exceptionally high level and dramatic changes in the contemporary period, shares many similarities with the patterns seen in Western countries when examined over a longer time period. Societal modernization in particular seems to have contributed to the increased rate of suicide in Russia in a manner similar to what happened earlier in Western Europe. In addition, the positive relationship between heavy alcohol consumption and suicide mortality proved to be remarkably stable across the past one and a half centuries. These results were interpreted using the Luhmannian perspective on suicide developed in this work.
484

Solar Panel Anomaly Detection and Classification

Hu, Bo 11 May 2012 (has links)
The number of solar panels deployed worldwide has increased rapidly. Solar panels are often placed in areas that are not easily accessible, which makes it difficult for panel owners to be aware of their operating condition. Many environmental factors have negative effects on the efficiency of solar panels. To reduce the power loss caused by environmental factors, it is necessary to detect and classify the anomalous events occurring on the surface of solar panels. This thesis designs and studies a device that continuously measures the voltage output of solar panels and transmits the time series data back to a personal computer over a wireless link. A program was developed to store and model this time series data; by modeling the data it detects the existence of anomalies and classifies them. In total, ten types of anomalies were considered, including temporary shading, permanent shading, fallen leaves, accumulating snow and melting snow, among others. Previous time series anomaly detection algorithms do not perform well in real-life situations and are only capable of dealing with at most four different types of anomalies. In this work, a general mathematical model is proposed to give better performance in real-life test cases and to cover more than four types of anomalies. We note that the models can be generalized to detect and classify anomalies in general time series data, not necessarily generated by solar panels. We compared several techniques for detecting and classifying anomalies, including the autoregressive integrated moving average (ARIMA) model, neural networks, support vector machines and k-nearest-neighbors classification. We found that k-nearest-neighbors classification was able to accurately detect and classify 97% of the anomalies in our test set. The devices and algorithms have been tested with two small 12-volt solar panels.
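The k-nearest-neighbors classifier that performed best can be sketched in a few lines. The feature vectors and anomaly labels below are invented stand-ins, not the thesis's actual features:

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training examples (Euclidean distance)."""
    dist = np.linalg.norm(np.asarray(train_X) - np.asarray(x), axis=1)
    nearest = np.asarray(train_y)[np.argsort(dist)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy features: (relative voltage drop, event duration in hours).
train_X = [[0.9, 0.5], [0.8, 0.4], [0.3, 48.0], [0.4, 50.0], [0.2, 2.0], [0.25, 1.5]]
train_y = ["temporary shading", "temporary shading",
           "accumulating snow", "accumulating snow",
           "fallen leaves", "fallen leaves"]

label = knn_classify(train_X, train_y, [0.85, 0.45])
```

A sharp, short voltage drop lands nearest the "temporary shading" examples, so the vote assigns that label; in practice the features would be extracted from the measured voltage time series.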
485

Chemical exposure in the work place : mental models of workers and experts

Pettersson-Strömbäck, Anita January 2008 (has links)
Many workers are exposed daily to chemical risks in their workplace that must be assessed and controlled. Due to exposure variability, repeated and random measurements should be conducted for valid estimates of the average exposure. Traditionally, experts such as safety engineers, work environment inspectors, and occupational hygienists have performed the measurements. In self-assessment of exposure (SAE), the workers perform unsupervised exposure measurements of chemical agents. This thesis studies a prerequisite for SAE, i.e. the workers’ mental models of chemical exposure. Further, the workers’ mental models are contrasted with experts’ reasons and decision criteria for measurement. Both qualitative and quantitative data generated from three studies (Papers I, II, and III) were used to describe the workers’ mental models of chemical exposure. SAE was introduced to workers in three different industries: transports (benzene), the sawmill industry (monoterpenes), and the reinforced plastic industry (styrene). Through interviews, qualitative data were collected on the workers’ interpretation of measurement results and preventive actions. To evaluate the validity of worker measurement, the measurements were compared with expert measurements. The association between each worker’s number of performed measurements and the mean level and variability in exposure concentrations was calculated. Mean absolute percentage/forecast error (MAPE) was used to assess whether the workers’ decision models were in accordance with a coherence or a correspondence model. In Paper IV, experts (safety engineers, work environment inspectors, and occupational hygienists) were interviewed to elucidate their mental models of the triggers and decision criteria for exposure measurements. The results indicate that the workers’ measurement results were in agreement with the experts’.
However, the measurement results were not a strong enough signal to induce workers to take preventive actions and sustain exposure measurements, even when the measurement results were close to the occupational exposure limit. The fit was best for the median model, indicating that the workers’ mental models for interpreting measurement data are best described by coherence theory rather than by correspondence theory. The workers seemed to mentally reduce the variation in the exposure to a measure of central tendency (the median), and underestimated the average exposure level. The experts were found to take preventive actions directly instead of performing exposure measurements. When they did perform exposure measurements, a worst-case sampling strategy was most common. Important triggers for measurement among the experts were “request from the employer” (safety engineers), “legal demands” (work environment inspectors), and “symptoms among workers” (occupational hygienists). When there was a trigger, all experts mentioned the expectation of a high exposure level as a decision criterion for measurements. In conclusion, the studies suggest that the workers’ mental interpretation model is best described in terms of a coherence model rather than a correspondence model. The workers mentally reduced the variation in favor of an estimate of average exposure (the median), which may imply that they underestimate the health risks of short-term, high exposures. A consequence is that interpretation of measurements such as SAE cannot be left to the individual worker without some support, e.g. from an expert. However, experts often chose to take preventive actions directly, without measuring the exposure. The results indicate that the experts, too, need support, e.g. from the legal system, if exposure measurements are to be done.
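The MAPE criterion used above is simply the average absolute error expressed as a percentage of the actual value. A minimal sketch (the exposure numbers are purely illustrative) also shows why a median-anchored forecast misses short, high peaks in skewed exposure data:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error between actual and forecast values."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# A forecast pinned to the median underestimates skewed exposures:
exposures = np.array([10.0, 12.0, 11.0, 90.0])   # one short, high peak
median_forecast = np.full(4, np.median(exposures))
error = mape(exposures, median_forecast)
```

The single peak dominates the error, illustrating how reducing variable exposures to a central tendency can hide short-term, high-exposure episodes.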
486

Factor analysis of high dimensional time series

Heaton, Chris, Economics, Australian School of Business, UNSW January 2008 (has links)
This thesis presents the results of research into the use of factor models for stationary economic time series. Two basic scenarios are considered. The first is a situation where a large number of observations are available on a relatively small number of variables, and a dynamic factor model is specified. It is shown that a dynamic factor model may be derived as a representation of a VARMA model of reduced spectral rank observed subject to measurement error. In some cases the resulting factor model corresponds to a minimal state-space representation of the VARMA plus noise model. Identification is discussed and proved for a fairly general class of dynamic factor models, and a frequency domain estimation procedure is proposed which has the advantage of generalising easily to models with rich dynamic structures. The second scenario is one where both the number of variables and the number of observations jointly diverge to infinity. The principal components estimator is considered in this case, and consistency is proved under assumptions which allow for much more error cross-correlation than previously published theorems allow. Ancillary results include finite-sample/variable bounds linking population principal components to population factors, and consistency results for principal components in a dual limit framework under a ‘gap’ condition on the eigenvalues. A new factor model, named the Grouped Variable Approximate Factor Model, is introduced. This factor model allows for arbitrarily strong correlation between some of the errors, provided that the variables corresponding to the strongly correlated errors can be arranged into groups. An approximate instrumental variables estimator is proposed for the model and its consistency is proved.
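The principal components estimator analysed in the second scenario can be sketched as follows. The panel dimensions, the data-generating model, and the normalisation (factors as scaled eigenvectors of XX′/(TN)) are illustrative assumptions, not the thesis's exact setup:

```python
import numpy as np

def pc_factors(X, r):
    """Principal-components estimator for an approximate factor model:
    factor estimates are sqrt(T) times the top-r eigenvectors of
    X X' / (T N); loadings then follow by least squares."""
    T, N = X.shape
    eigvals, eigvecs = np.linalg.eigh(X @ X.T / (T * N))
    F = np.sqrt(T) * eigvecs[:, ::-1][:, :r]   # eigh sorts ascending
    L = X.T @ F / T                            # F'F/T = I, so this is OLS
    return F, L

# Simulate a 2-factor panel and check the common component is recovered.
rng = np.random.default_rng(2)
T, N, r = 200, 50, 2
F_true = rng.standard_normal((T, r))
L_true = rng.standard_normal((N, r))
common = F_true @ L_true.T
X = common + 0.1 * rng.standard_normal((T, N))

F_hat, L_hat = pc_factors(X, r)
fit_error = np.mean((F_hat @ L_hat.T - common) ** 2) / np.var(common)
```

Individual factors are only identified up to rotation, so the check is on the common component F̂L̂′, which is rotation-invariant.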
487

Modelling dynamical systems via behaviour criteria

Kilminster, Devin January 2002 (has links)
An important part of the study of dynamical systems is the fitting of models to time-series data. That is, given the data, a series of observations taken from a (not fully understood) system of interest, we would like to specify a model, a mathematical system which generates a sequence of “simulated” observations. Our aim is to obtain a “good” model — one that is in agreement with the data. We would like this agreement to be quantitative — not merely qualitative. The major subject of this thesis is the question of what good quantitative agreement means. Most approaches to this question could be described as “predictionist”. In the predictionist approach one builds models by attempting to answer the question, “given that the system is now here, where will it be next?” The quality of the model is judged by the degree to which the states of the model and the original system agree in the near future, conditioned on the present state of the model agreeing with that of the original system. Equivalently, the model is judged on its ability to make good short-term predictions on the original system. The main claim of this thesis is that prediction is often not the most appropriate criterion to apply when fitting models. We show, for example, that one can have models that, while able to make good predictions, have long term (or free-running) behaviour bearing little resemblance to that exhibited in the original time-series. We would hope to be able to use our models for a wide range of purposes other than just prediction — certainly we would like our models to exhibit good free-running behaviour. This thesis advocates a “behaviourist” approach, in which the criterion for a good model is that its long-term behaviour matches that exhibited by the data. We suggest that the behaviourist approach enjoys a certain robustness over the predictionist approaches. 
We show that good predictors can often be very poorly behaved, and suggest that well behaved models cannot perform too badly at the task of prediction. The thesis begins by comparing the predictionist and behaviourist approaches in the context of a number of simplified model-building problems. It then presents a simple theory for the understanding of the differences between the two approaches. Effective methods for the construction of well-behaved models are presented. Finally, these methods are applied to two real-world problems — modelling of the response of a voltage-clamped squid “giant” axon, and modelling of the “yearly sunspot number”.
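The thesis's central point, that a model can predict one step ahead well yet free-run badly, shows up even in a linear toy example. The example below is entirely our own, not from the thesis: an AR(2) model fitted by least squares to a noisy sinusoid predicts accurately, but its free-running oscillation decays away.

```python
import numpy as np

# "True" system: a sinusoid observed with noise; it oscillates forever.
rng = np.random.default_rng(3)
t = np.arange(1000)
y = np.sin(0.2 * t) + 0.05 * rng.standard_normal(1000)

# Fit AR(2) by least squares: y[t] ~ a*y[t-1] + b*y[t-2].
X = np.column_stack([y[1:-1], y[:-2]])
(a, b), *_ = np.linalg.lstsq(X, y[2:], rcond=None)

# One-step predictions are good (error near the noise level)...
one_step_rmse = np.sqrt(np.mean((y[2:] - (a * y[1:-1] + b * y[:-2])) ** 2))

# ...but observation noise makes least squares pull the AR poles inside
# the unit circle, so the free-running model's oscillation dies out.
free = list(y[:2])
for _ in range(1000):
    free.append(a * free[-1] + b * free[-2])
final_amplitude = np.max(np.abs(free[-100:]))
```

The fitted model is a good predictor by any short-term criterion, yet its long-term behaviour (decay to zero) bears little resemblance to the sustained oscillation in the data, which is exactly the failure mode the behaviourist criterion is meant to catch.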
488

Modelling nonlinear time series using selection methods and information criteria

Nakamura, Tomomichi January 2004 (has links)
[Truncated abstract] Time series of natural phenomena usually show irregular fluctuations. Often we want to know the underlying system and to predict future phenomena. An effective way of tackling this task is time series modelling. Originally, linear time series models were used. As it became apparent that nonlinear systems abound in nature, modelling techniques that take into account nonlinearity in time series were developed. A particularly convenient and general class of nonlinear models is that of pseudolinear models, which are linear combinations of nonlinear functions. These models can be obtained by starting with a large dictionary of basis functions which one hopes will be able to describe any likely nonlinearity, selecting a small subset of it, and taking a linear combination of these to form the model. The major component of this thesis concerns how to build good models for nonlinear time series. In building such models there are, broadly speaking, three important problems. The first is how to select basis functions which reflect the peculiarities of the time series as much as possible. The second is how to fix the model size so that the models reflect the underlying system of the data while the influences of noise included in the data are removed as much as possible. The third is how to provide good estimates for the parameters in the basis functions, considering that they may have significant bias when the noise included in the time series is significant relative to the nonlinearity. Although these problems are mentioned separately, they are strongly interconnected
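The selection task described, picking a small subset from a large dictionary of basis functions, can be sketched with greedy forward selection scored by an information criterion. BIC, the dictionary, and the toy data-generating map below are our illustrative choices, not the thesis's:

```python
import numpy as np

def bic(y, X_sel):
    """BIC for a least-squares fit: n*log(RSS/n) + k*log(n)."""
    n, k = len(y), X_sel.shape[1]
    beta, *_ = np.linalg.lstsq(X_sel, y, rcond=None)
    rss = ((y - X_sel @ beta) ** 2).sum()
    return n * np.log(rss / n) + k * np.log(n)

def forward_select(y, dictionary):
    """Greedily add the basis function that lowers BIC most; stop when
    no remaining candidate improves the criterion."""
    selected, best = [], np.inf
    while True:
        scores = [(bic(y, dictionary[:, selected + [j]]), j)
                  for j in range(dictionary.shape[1]) if j not in selected]
        if not scores:
            return selected
        score, j = min(scores)
        if score >= best:
            return selected
        selected.append(j)
        best = score

# Data from a sparse pseudolinear map: x[t+1] = 0.8*x[t] - 0.2*x[t]^3 + noise.
rng = np.random.default_rng(4)
x = np.zeros(500)
for t in range(499):
    x[t + 1] = 0.8 * x[t] - 0.2 * x[t] ** 3 + 0.3 * rng.standard_normal()

past, future = x[:-1], x[1:]
names = ["1", "x", "x^2", "x^3", "sin(x)"]
D = np.column_stack([np.ones_like(past), past, past ** 2, past ** 3, np.sin(past)])
sel = forward_select(future, D)
chosen = [names[j] for j in sel]
beta, *_ = np.linalg.lstsq(D[:, sel], future, rcond=None)
resid_var = np.mean((future - D[:, sel] @ beta) ** 2)
```

The BIC penalty stops the greedy search once extra basis functions only fit noise, which is one simple answer to the model-size problem the abstract raises.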
489

Device signal detection methods and time frequency analysis

Ravirala, Narayana, January 2007 (has links) (PDF)
Thesis (M.S.)--University of Missouri--Rolla, 2007. / Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed March 18, 2008). Includes bibliographical references (p. 89-90).
490

A case-based approach for classification of physiological time-series /

Nilsson, Markus, January 2004 (has links) (PDF)
Licentiate thesis, Västerås : Mälardalens högskola, 2004. / Pp. 29-33: bibliography.
