1

Measurement and time series analysis of emotion in music

Schubert, Emery. January 1999 (has links)
Thesis (Ph. D.)--University of New South Wales, 1999.
2

A review of "longitudinal study" in developmental psychology

Finley, Emily H. 01 January 1972 (has links)
The purpose of this library research thesis is to review the "longitudinal study" in terms of problems and present use. A preliminary search of the literature on longitudinal method revealed problems centering around two areas: (1) definition of "longitudinal study" and (2) practical problems of the method itself. The purpose of this thesis then is to explore through a search of books and journals the following questions: 1. How can "longitudinal study" be defined? 2. What problems are inherent in the study of the same individuals over time and how can these problems be solved? A third question which emerges from these two is: 3. How is "longitudinal study" being used today? This thesis differentiates traditional longitudinal study from other methods of study: the cross-sectional study, the time-lag study, the experimental study, the retrospective study, and the study from records. Each of these methods of study is reviewed according to its unique problems and best uses and compared with the longitudinal study. Finally, the traditional longitudinal study is defined as the study: (1) of individual change under natural conditions not controlled by the experimenter, (2) which proceeds over time from the present to the future by measuring the same individuals repeatedly, and (3) which retains individuality of data in analyses. Some problem areas of longitudinal study are delineated which are either unique to this method or especially difficult. The following problems related to planning the study are reviewed: definition of study objectives, selection of method of study, statistical methods, cost, post hoc analysis and replication of the study, the time factor in longitudinal study, and the problem of allowing variables to operate freely. Cultural shift and attrition are especially emphasized. The dilemma posed by sample selection, with its related problems of randomization and generalizability, is examined together with the problems of repeated measurements and selection of control groups. These problems are illustrated with studies from the literature. Not only are these problems delineated, but considerable evidence is shown that we have already started to accumulate data that will permit their solution. This paper presents a number of studies which have considered these problems separately or as a side issue of a study on some other topic. Some recommendations for further research in problem areas are suggested. At the same time that this thesis notes differentiation of the longitudinal study from other studies, it also notes integration of results of longitudinal studies with results of other studies. The tenet adopted here is: scientific knowledge is cumulative and not dependent on one crucial experiment. Trends in recent longitudinal studies are found to be toward stricter observance of scientific protocols and toward limitation of the time and objectives of the study. When the objectives of the study are well defined and time is limited to only enough for the specified change to take place, many of the problems of longitudinal study are reduced to manageable proportions. Although modern studies are of improved quality, the longitudinal method is not used sufficiently today to supply the demand for this type of data. Longitudinal study is necessary to answer some of the questions in developmental psychology. We have no alternative but to continue to develop this important research tool.
3

Podpora v rozhodovacích procesech použitím analýzy časových řad / Support for Decision-Making Processes Using Time Series Analysis

Koláček, Jozef January 2011 (has links)
This master’s thesis specifies the qualities of an application that serves as a supporting tool in decision-making processes and automates time series analysis. The thesis describes the solution developed according to the formulated criteria, shows examples of the application's use in practice, and interprets the outputs.
4

An Empirical Evaluation of Neural Process Meta-Learners for Financial Forecasting

Patel, Kevin G 01 June 2023 (has links) (PDF)
Challenges of financial forecasting, such as a dearth of independent samples and non-stationary underlying processes, limit the relevance of conventional machine learning to financial forecasting. Meta-learning approaches alleviate some of these issues by allowing the model to generalize across unrelated or loosely related tasks with few observations per task. The neural process family achieves this by conditioning forecasts on a supplied context set at test time. Despite their promise, meta-learning approaches remain underutilized in finance. To our knowledge, ours is the first application of neural processes to realized volatility (RV) forecasting and financial forecasting in general. We propose a hybrid temporal convolutional network attentive neural process (ANP-TCN) for the purpose of financial forecasting. The ANP-TCN combines a conventional and performant financial time series embedding model (TCN) with an ANP objective. We found that ANP-TCN variant models outperformed the base TCN for equity index realized volatility forecasting. In addition, when stack-ensembled with a tree-based model to forecast a trading signal, the ANP-TCN outperformed the baseline buy-and-hold strategy and the base TCN model in out-of-sample performance. Across four liquid US equity indices (incl. S&P 500) tested over ∼15 years, the best long-short models (reported by median trajectory) resulted in the following out-of-sample (∼3 years) performance ranges: directional accuracy of 58.65% to 62.26%, compound annual growth rate (CAGR) of 0.2176 to 0.4534, and annualized Sharpe ratio of 2.1564 to 3.3375. All project code can be found at: https://github.com/kpa28-git/thesis-code.
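As background for the realized volatility target described above, the sketch below computes daily RV from intraday log returns. It is a minimal illustration only; the sampling frequency, data layout, and estimator details are assumptions, not taken from the thesis.

```python
import numpy as np
import pandas as pd

def realized_volatility(intraday_prices: pd.Series) -> pd.Series:
    """Daily realized volatility: square root of the sum of squared
    intraday log returns within each calendar day.
    (In practice the overnight return is often excluded or handled separately.)"""
    log_ret = np.log(intraday_prices).diff().dropna()
    return log_ret.pow(2).groupby(log_ret.index.date).sum().pow(0.5)

# Toy example: simulated 5-minute prices spanning a few calendar days.
idx = pd.date_range("2023-01-02 09:30", periods=500, freq="5min")
prices = pd.Series(100 * np.exp(np.cumsum(np.random.normal(0, 1e-3, len(idx)))),
                   index=idx)
print(realized_volatility(prices))
```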
5

Us and Them: The Role of Inter-Group Distance and Size in Predicting Civil Conflict

Moffett, Michaela E 01 January 2015 (has links)
Recent large-N studies conclude that inequality and ethnic distribution have no significant impact on the risk of civil conflict. This study argues that such conclusions are erroneous and premature due to incorrect specification of independent variables and functional forms. Case studies suggest that measures of inter-group inequality (horizontal inequality) and polarization (ethnic distribution distance from a bipolar equilibrium) are more accurate predictors of civil conflict, as they better capture the group-motivation aspect of conflict. This study explores whether indicators of inequality and ethnic distribution impact the probability of civil conflict across 38 developing countries in the period 1986 to 2004. Analysis reveals that horizontal inequality and polarization have significant, robust relationships with civil conflict. Furthermore, vertical, or individual, inequality is a robust, significant predictor of civil conflict when specified as a nonlinear function.
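One standard way to quantify "distance from a bipolar equilibrium" is the Reynal-Querol ethnic polarization index. The sketch below is a plausible stand-in for the study's polarization measure, not necessarily the exact specification used in the thesis.

```python
def ethnic_polarization(shares):
    """Reynal-Querol polarization index: 4 * sum(p_i^2 * (1 - p_i)).
    Equals 1 for a 50/50 split between two groups (the bipolar equilibrium)
    and decreases as the distribution moves away from that split."""
    assert abs(sum(shares) - 1.0) < 1e-8, "group population shares must sum to 1"
    return 4 * sum(p ** 2 * (1 - p) for p in shares)

print(ethnic_polarization([0.5, 0.5]))         # 1.0 -> maximally polarized
print(ethnic_polarization([0.9, 0.05, 0.05]))  # ~0.34 -> one dominant group
```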
6

Time Series Decomposition Using Singular Spectrum Analysis

Deng, Cheng 01 May 2014 (has links)
Singular Spectrum Analysis (SSA) is a method for decomposing and forecasting time series that has seen major recent developments but is not yet routinely included in introductory time series courses. An international conference on the topic was held in Beijing in 2012. The basic SSA method decomposes a time series into trend, seasonal component, and noise. However, there are more advanced extensions and applications of the method, such as change-point detection and the treatment of multivariate time series. The purpose of this work is to understand the basic SSA method through its application to the monthly average sea temperature at a point on the coast of South America, near where the "El Niño" phenomenon originates, and to artificial time series simulated using harmonic functions. The output of the basic SSA method is then compared with that of other decomposition methods included in some time series courses, such as classical seasonal decomposition, X-11 decomposition using moving averages, and seasonal decomposition by Loess (STL).
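A minimal sketch of the basic SSA steps described above (embedding into a trajectory matrix, SVD, and diagonal averaging), assuming a univariate monthly series and a hand-chosen window length; in practice the grouping of elementary components into trend and seasonal parts is decided by inspecting the singular values and reconstructed series.

```python
import numpy as np

def ssa_components(x, L):
    """Basic SSA: embed the series into an L x K trajectory matrix, take its
    SVD, and turn each rank-one piece back into a series by diagonal
    (Hankel) averaging."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix (L x K)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])              # rank-one piece
        # Average the anti-diagonals of Xi to recover a length-N series.
        comp = np.array([np.mean(Xi[::-1].diagonal(k)) for k in range(-L + 1, K)])
        comps.append(comp)
    return np.array(comps)

# Toy example: linear trend + annual cycle + noise, monthly data.
t = np.arange(240)
series = 0.02 * t + np.sin(2 * np.pi * t / 12) + 0.3 * np.random.randn(240)
comps = ssa_components(series, L=60)
trend_like = comps[0]                  # leading component roughly tracks the trend
seasonal_like = comps[1] + comps[2]    # a sine/cosine pair roughly tracks the cycle
```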
7

Investigating Post-Earnings-Announcement Drift Using Principal Component Analysis and Association Rule Mining

Schweickart, Ian R. W. 01 January 2017 (has links)
Post-Earnings-Announcement Drift (PEAD) is commonly accepted in the fields of accounting and finance as evidence of stock market inefficiency. Less accepted are the numerous explanations for this anomaly. This project aims to investigate the cause of PEAD by harnessing machine learning algorithms such as Principal Component Analysis (PCA) and a rule-based learning technique, applied to large stock market data sets. Based on the notion that the market is consumer-driven, the project uncovers repeated occurrences of irrational behavior exhibited by traders in response to news events such as earnings reports. The project produces findings in support of the PEAD anomaly using methods drawn from neither accounting nor finance. In particular, this project finds evidence of delayed price response exhibited in trader behavior, a common manifestation of the PEAD phenomenon.
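For illustration only (not the author's actual pipeline), the sketch below shows how PCA might reduce post-announcement return windows to a handful of scores before association rules are mined on the discretized scores; all names, dimensions, and data are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical matrix: one row per earnings announcement, one column per
# trading day in the post-announcement window (e.g. abnormal returns).
rng = np.random.default_rng(0)
post_announcement_returns = rng.normal(0, 0.01, size=(500, 60))

pca = PCA(n_components=5)
scores = pca.fit_transform(post_announcement_returns)

# The component scores (a low-dimensional summary of drift patterns) could
# then be discretized and fed to an association-rule miner such as Apriori.
print(pca.explained_variance_ratio_)
print(scores.shape)  # (500, 5)
```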
8

Improved Standard Error Estimation for Maintaining the Validities of Inference in Small-Sample Cluster Randomized Trials and Longitudinal Studies

Tanner, Whitney Ford 01 January 2018 (has links)
Data arising from Cluster Randomized Trials (CRTs) and longitudinal studies are correlated, and generalized estimating equations (GEE) are a popular analysis method for correlated data. Previous research has shown that analyses using GEE could result in liberal inference due to the use of the empirical sandwich covariance matrix estimator, which can yield negatively biased standard error estimates when the number of clusters or subjects is not large. Many techniques have been presented to correct this negative bias; however, use of these corrections can still result in biased standard error estimates and thus test sizes that are not consistently at their nominal level. Therefore, there is a need for an improved correction such that nominal type I error rates are consistently attained. First, GEEs are becoming a popular choice for the analysis of data arising from CRTs. We study the use of recently developed corrections for empirical standard error estimation and the use of a combination of two popular corrections. In an extensive simulation study, we find that nominal type I error rates can be consistently attained when using an average of two popular corrections developed by Mancl and DeRouen (2001, Biometrics 57, 126-134) and Kauermann and Carroll (2001, Journal of the American Statistical Association 96, 1387-1396) (AVG MD KC). Use of this new correction was found to notably outperform the use of previously recommended corrections. Second, data arising from longitudinal studies are also commonly analyzed with GEE. We conduct a simulation study, finding two methods that attain nominal type I error rates more consistently than other methods in a variety of settings: first, a recently proposed method by Westgate and Burchett (2016, Statistics in Medicine 35, 3733-3744) that specifies both a covariance estimator and degrees of freedom, and second, AVG MD KC with degrees of freedom equaling the number of subjects minus the number of parameters in the marginal model. Finally, stepped wedge trials are an increasingly popular alternative to traditional parallel cluster randomized trials. Such trials often utilize a small number of clusters and numerous time intervals, and these components must be considered when choosing an analysis method. A generalized linear mixed model containing a random intercept and fixed time and intervention covariates is the most common analysis approach. However, the sole use of a random intercept relies on assumptions that will be violated in practice. We show, using an extensive simulation study based on a motivating example and a more general design, that alternative analysis methods are preferable for maintaining the validity of inference in small-sample stepped wedge trials with binary outcomes. First, we show that the use of generalized estimating equations, with an appropriate bias correction and a degrees of freedom adjustment dependent on the study setting type, will result in nominal type I error rates. Second, we show that the use of a cluster-level summary linear mixed model can also achieve nominal type I error rates for equal cluster size settings.
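A rough sketch of the AVG MD KC idea, written for the simplest case of a linear marginal model with identity link and independence working correlation (the general GEE case adds link derivatives and a working covariance matrix). The final covariance estimate is the elementwise average of the Mancl-DeRouen and Kauermann-Carroll bias-corrected sandwich estimators; this is an illustrative approximation, not the authors' code.

```python
import numpy as np
from scipy.linalg import sqrtm

def avg_md_kc_covariance(X_clusters, y_clusters):
    """Average of the Mancl-DeRouen and Kauermann-Carroll corrected sandwich
    covariance estimators for a linear model fit cluster-by-cluster with
    independence working correlation."""
    B = sum(X.T @ X for X in X_clusters)                      # "bread" matrix
    B_inv = np.linalg.inv(B)
    beta = B_inv @ sum(X.T @ y for X, y in zip(X_clusters, y_clusters))

    meat_md = np.zeros_like(B)
    meat_kc = np.zeros_like(B)
    for X, y in zip(X_clusters, y_clusters):
        r = y - X @ beta
        H = X @ B_inv @ X.T                                   # cluster leverage
        I = np.eye(len(y))
        r_md = np.linalg.solve(I - H, r)                      # (I - H)^-1 r
        r_kc = np.real(sqrtm(np.linalg.inv(I - H))) @ r       # (I - H)^-1/2 r
        meat_md += X.T @ np.outer(r_md, r_md) @ X
        meat_kc += X.T @ np.outer(r_kc, r_kc) @ X

    V_md = B_inv @ meat_md @ B_inv
    V_kc = B_inv @ meat_kc @ B_inv
    return beta, (V_md + V_kc) / 2.0

# Toy example: 10 clusters of 5 observations, intercept plus one covariate.
rng = np.random.default_rng(1)
X_clusters = [np.column_stack([np.ones(5), rng.normal(size=5)]) for _ in range(10)]
y_clusters = [X @ np.array([1.0, 0.5]) + rng.normal(size=5) for X in X_clusters]
beta, V = avg_md_kc_covariance(X_clusters, y_clusters)
print(beta, np.sqrt(np.diag(V)))
```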
9

Provision of Hospital-based Palliative Care and the Impact on Organizational and Patient Outcomes

Roczen, Marisa L 01 January 2016 (has links)
Hospital-based palliative care services aim to streamline medical care for patients with chronic and potentially life-limiting illnesses by focusing on individual patient needs, using hospital resources efficiently, and guiding patients, patients’ families, and clinical providers toward optimal decisions concerning a patient’s care. This study examined the nature of palliative care provision in U.S. hospitals and its impact on selected organizational and patient outcomes, including hospital costs, length of stay, in-hospital mortality, and transfer to hospice. Hospital costs and length of stay are viewed as important economic indicators. Specifically, lower hospital costs may increase a hospital’s profit margin, and shorter lengths of stay can enable patient turnover and efficiency of care. Higher rates of hospice transfers and lower in-hospital mortality may be considered positive outcomes from a patient perspective, as the majority of patients prefer to die at home or outside of the hospital setting. Several data sources were utilized to obtain information about patient, hospital, and county characteristics; patterns of hospitals’ palliative care provision; and patients’ hospital costs, length of stay, in-hospital mortality, and transfer to hospice (if a patient survived hospitalization). The study sample consisted of 3,763,339 patients; 348 urban, general, short-term, acute care, non-federal hospitals; and 111 counties located in six states over a 5-year study period (2007-2011). Hospital-based palliative care provision was measured by the presence of three palliative care services: inpatient palliative care consultation services (PAL), inpatient palliative care units (IPAL), and hospice programs (HOSPC). Derived from Institutional Theory, Resource Dependence Theory, and Donabedian’s Structure-Process-Outcome framework, 13 hypotheses were tested using a hierarchical (generalized) linear modeling approach. The study findings suggested that hospital size was associated with a higher probability of hospital-based palliative care provision. Conversely, the presence of palliative care services through a hospital’s health system, network, or joint venture was associated with a lower probability of hospital-based palliative care provision. The study findings also indicated that hospitals with an IPAL or HOSPC incurred lower hospital costs, whereas hospitals with PAL incurred higher hospital costs. The presence of PAL, IPAL, and HOSPC was generally associated with a lower probability of in-hospital mortality and transfer to hospice. Finally, the effects of hospital-based palliative care services on length of stay were mixed, and further research is needed to understand this relationship.
10

Applying Localized Realized Volatility Modeling to Futures Indices

Fu, Luella 01 January 2011 (has links)
This thesis extends the application of the localized realized volatility model created by Ying Chen, Wolfgang Karl Härdle, and Uta Pigorsch to other futures markets, particularly the CAC 40 and the NI 225. The research attempted to replicate the original results, though ultimately those results were invalidated by procedural difficulties.
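As a crude approximation of the "localized" idea (the actual model of Chen, Härdle, and Pigorsch selects the estimation window adaptively), the sketch below estimates log realized volatility over a short fixed trailing window only, so that old observations do not contaminate the estimate when the process shifts; all series and parameters are simulated for illustration.

```python
import numpy as np
import pandas as pd

def rolling_local_volatility(realized_vol: pd.Series, window: int = 60) -> pd.DataFrame:
    """Fixed-window stand-in for localized realized volatility modeling:
    estimate the level and spread of log RV over a short trailing window and
    form a one-step-ahead forecast under a local-constant assumption."""
    log_rv = np.log(realized_vol)
    local_mean = log_rv.rolling(window).mean()
    local_sd = log_rv.rolling(window).std()
    forecast = np.exp(local_mean.shift(1))   # yesterday's local level as today's forecast
    return pd.DataFrame({"local_mean": local_mean,
                         "local_sd": local_sd,
                         "rv_forecast": forecast})

# Example with simulated daily realized volatility for a futures index.
rng = np.random.default_rng(7)
rv = pd.Series(np.exp(rng.normal(-4.5, 0.3, 1000)),
               index=pd.bdate_range("2007-01-01", periods=1000))
print(rolling_local_volatility(rv).tail())
```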
