11

Improving the observation of time-variable gravity using GRACE RL04 data

Bonin, Jennifer Anne 14 February 2011 (has links)
The Gravity Recovery and Climate Experiment (GRACE) project has two primary goals: to determine the Earth’s mean gravitational field over the lifetime of the mission and to observe the time-variable nature of the gravitational field. The Center for Space Research's (CSR) Release 4 (RL04) GRACE solutions are currently created via a least-squares process that assimilates data collected over a month using a simple boxcar window and determines a spherical harmonic representation of the monthly gravitational field. The nature of this technique obscures the time-variable gravity field on time scales shorter than one month and spatial scales shorter than a few hundred kilometers. A computational algorithm is developed here that allows increased temporal resolution of the GRACE gravity information, thus allowing the Earth's time-variable gravity to be more clearly observed. The primary technique used is a sliding-window algorithm attached to a weighted version of batch least squares estimation. A number of different temporal windowing functions are evaluated. Their results are investigated via both spectral and spatial analyses, and globally as well as in localized regions. In addition to being compared to each other, the solutions are also compared to external models and data sets, as well as to other high-frequency GRACE solutions made outside CSR. The results demonstrate that a GRACE solution made from at least eight days of data will provide a well-conditioned solution. A series of solutions made with windows of at least that length is capable of observing the expected near-annual signal. The results also indicate that the signals at frequencies greater than 3 cycles/year are often smaller than the GRACE errors, making detection unreliable. Altering the windowing technique does not noticeably improve the resolution, since the spectra of the expected errors and the expected non-annual signals are very similar, leading any window to affect them in the same manner.
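The sliding-window, weighted batch least-squares approach described above can be illustrated with a short sketch. The Python code below is a minimal, hypothetical illustration, not the thesis's actual processing chain: it assumes each day's GRACE observations have already been reduced to a design matrix and observation vector, and shows how differently shaped window weights (boxcar, triangle, Gaussian) would enter the normal equations.

```python
import numpy as np

def window_weights(n_days, kind="boxcar"):
    """Per-day weights for one solution window (illustrative shapes only)."""
    t = np.linspace(-1.0, 1.0, n_days)
    if kind == "boxcar":
        return np.ones(n_days)
    if kind == "triangle":
        return 1.0 - np.abs(t)
    if kind == "gaussian":
        return np.exp(-0.5 * (t / 0.5) ** 2)
    raise ValueError(kind)

def sliding_window_solutions(daily_A, daily_y, window_days=8, kind="boxcar"):
    """Weighted batch least squares over a sliding window of daily batches.

    daily_A, daily_y: lists of per-day design matrices / observation vectors
    relating spherical-harmonic corrections to that day's observations
    (assumed inputs, not the actual GRACE data structures).
    Returns one parameter estimate per window position.
    """
    solutions = []
    for start in range(len(daily_A) - window_days + 1):
        w = window_weights(window_days, kind)
        days_A = daily_A[start:start + window_days]
        days_y = daily_y[start:start + window_days]
        # Accumulate weighted normal equations N x = b over the window.
        N = sum(wi * A.T @ A for wi, A in zip(w, days_A))
        b = sum(wi * A.T @ y for wi, A, y in zip(w, days_A, days_y))
        solutions.append(np.linalg.solve(N, b))
    return solutions
```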
12

Characterisation of interference on high angle H.F. data links

Dutta, S. January 1979 (has links)
No description available.
13

Image transmission over time varying channels

Chippendale, Paul January 1998 (has links)
No description available.
14

An investigation of soft tissue ultrasonic microimaging

Eavis, Joe January 2000 (has links)
No description available.
15

High frequency and large dimension volatility

Shi, Zhangbo January 2010 (has links)
Three main issues are explored in this thesis—volatility measurement, volatility spillover and large-dimension covariance matrices. For the first question of volatility measurement, this thesis compares two newly-proposed, high-frequency volatility measurement models, namely realized volatility and realized range-based volatility. It does so with the aim of using empirical results to assess whether one volatility model is better than the other. The realized volatility model and realized range-based volatility model are compared based on three markets, five forecast models, two data frequencies and two volatility proxies, making sixty scenarios in total. Seven different loss functions are also used for the evaluation tests. This ensures that the empirical results are highly robust. After making some simple adjustments to the original realized range-based volatility, this thesis concludes that the scaled realized range-based volatility model clearly outperforms the realized volatility model. For the second research question on volatility spillover, realized range-based volatility and realized volatility models are employed to study the volatility spillover among the S&P 500 index markets, with the aim of finding out empirically whether volatility spillover exists between the markets. Volatility spillover is divided into the two categories of statistically significant volatility spillover and economically significant volatility spillover. Economically significant spillover is defined as spillover that can help forecast the volatility of another market, and is therefore a more powerful measurement than statistically significant spillover. The findings show that, in reality, the existence of volatility spillover depends on the choice of model, the choice of volatility proxy and the value of parameters used. The third and final research question in this thesis involves the comparison of various large-dimension multivariate models. The main contribution made by this specific study is threefold. First, a number of well-performing multivariate volatility models are introduced by adjusting some commonly used models. Second, different models and various choices of parameters for these models are tested based on 26 currency pairs. Third, the evaluation criteria adopted have far more practical implications than those used in most other papers in this subject area.
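As a point of reference for the two estimators compared in this thesis, the sketch below shows the basic form of realized volatility (the sum of squared intraday log returns) and a simple, unscaled realized range-based estimator built from intraday high-low ranges. It is a generic Python illustration under assumed column names, not the thesis's exact specification (which applies additional scaling adjustments).

```python
import numpy as np
import pandas as pd

def realized_volatility(prices: pd.Series) -> float:
    """Daily realized variance: sum of squared intraday log returns."""
    r = np.log(prices).diff().dropna()
    return float((r ** 2).sum())

def realized_range(high: pd.Series, low: pd.Series) -> float:
    """Daily realized range-based variance: a Parkinson-type range term
    summed over intraday intervals (simple, unscaled form)."""
    rng = np.log(high / low)
    return float((rng ** 2).sum() / (4.0 * np.log(2.0)))

# Usage with, e.g., 5-minute bars for one trading day (column names are assumptions):
# bars = pd.DataFrame({"close": ..., "high": ..., "low": ...})
# rv  = realized_volatility(bars["close"])
# rrv = realized_range(bars["high"], bars["low"])
```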
16

Selected results from clustering and analyzing stock market trade data

Zhang, Zhihan January 1900 (has links)
Master of Science / Department of Statistics / Michael Higgins / The amount of data generated from stock market trading is massive. For example, roughly 10 million trades are performed each day on the NASDAQ stock exchange. A significant proportion of these trades are made by high-frequency traders. These entities make on the order of thousands or more trades a day. However, the stock-market factors that drive the decisions of high-frequency traders are poorly understood. Recently, hybridized threshold clustering (HTC) has been proposed as a way of clustering large-to-massive datasets. In this report, we use three months of NASDAQ HFT data---a dataset containing information on all trades of 120 different stocks including identifiers on whether the buyer and/or seller were high-frequency traders---to investigate the trading patterns of high-frequency traders, and we explore the use of HTC to identify these patterns. We find that, while HTC can be successfully performed on the NASDAQ HFT dataset, the amount of information gleaned from this clustering is limited. Instead, we show that an understanding of the habits of high-frequency traders may be gained by looking at "janky" trades---those in which the number of shares traded is not a multiple of 10. We demonstrate evidence that janky trades are more common for high-frequency traders. Additionally, we suggest that a large number of small, janky trades may help signal that a large trade will happen shortly afterward.
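The "janky trade" idea lends itself to a very small amount of code. The sketch below is a hypothetical Python illustration of flagging trades whose share count is not a multiple of 10 and comparing the janky-trade rate across trader classifications; the column names are assumptions, not the actual NASDAQ HFT dataset schema.

```python
import pandas as pd

def flag_janky(trades: pd.DataFrame, size_col: str = "shares") -> pd.DataFrame:
    """Mark 'janky' trades: executions whose share count is not a multiple of 10."""
    out = trades.copy()
    out["janky"] = out[size_col] % 10 != 0
    return out

def janky_rate_by_trader_type(trades: pd.DataFrame,
                              hft_col: str = "hft_flag") -> pd.Series:
    """Share of janky trades for each buyer/seller HFT classification."""
    flagged = flag_janky(trades)
    return flagged.groupby(hft_col)["janky"].mean()
```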
17

A comparative study on large multivariate volatility matrix modeling for high-frequency financial data

Jiang, Dongchen 30 April 2015 (has links)
Modeling and forecasting the volatilities of high-frequency data observed on the prices of financial assets are vibrant research areas in econometrics and statistics. However, most of the available methods are not directly applicable when the number of assets involved is large, due to the lack of accuracy in estimating high-dimensional matrices. This paper compares two methodologies for vast volatility matrix estimation with high-frequency data. One is to estimate the Average Realized Volatility Matrix and to regularize it by banding and thresholding. In this method, we first select grids as pre-sampling frequencies, construct a realized volatility matrix using the previous-tick method at each pre-sampling frequency, and then take the average of the constructed realized volatility matrices as the stage-one estimator, which we call the ARVM estimator. Then we regularize the ARVM estimator to yield good consistent estimators of the large integrated volatility matrix. We consider two regularizations: thresholding and banding. The other methodology is Dynamic Conditional Correlation (DCC), which is estimated in two stages: in the first stage, univariate GARCH models are estimated for each residual series, and in the second stage, the residuals are used to estimate the parameters of the dynamic correlation. Asymptotic theory for the two proposed methodologies shows that the estimators are consistent. In numerical studies, the two methodologies are applied to a simulated data set and to real high-frequency prices of the top 100 S&P 500 stocks by trading volume over a period of 3 months (64 trading days) in 2013. Based on the performance of the estimators, the conclusion is that the TARVM estimator performs better than the DCC volatility matrix, and its largest eigenvalues are more stable than those of the DCC model, so it is more appropriate for eigen-based analysis.
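The two regularizations applied to the stage-one ARVM estimator, thresholding and banding, are simple matrix operations. The following Python sketch illustrates them on a generic covariance matrix, together with the averaging step that forms the ARVM estimator; it is an illustrative reading of the abstract, not the paper's implementation.

```python
import numpy as np

def threshold(cov: np.ndarray, tau: float) -> np.ndarray:
    """Thresholding regularization: zero out small off-diagonal entries."""
    out = cov.copy()
    off_diag = ~np.eye(cov.shape[0], dtype=bool)
    out[off_diag & (np.abs(cov) < tau)] = 0.0
    return out

def band(cov: np.ndarray, k: int) -> np.ndarray:
    """Banding regularization: keep only entries within k positions of the diagonal."""
    n = cov.shape[0]
    i, j = np.indices((n, n))
    return np.where(np.abs(i - j) <= k, cov, 0.0)

def arvm(realized_mats) -> np.ndarray:
    """Stage-one estimator: average of realized volatility matrices
    built at several pre-sampling frequencies (assumed to be given)."""
    return np.mean(np.stack(realized_mats), axis=0)
```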
18

The Effect of Compliance Changes on Delivered Volumes in an Adult Patient Ventilated with High Frequency Oscillatory Ventilation: A Bench Model

England, John 15 September 2009 (has links)
Clinical concerns exist regarding the delivered tidal volume (Vt) during high-frequency oscillatory ventilation (HFOV). HFOV is increasingly being used as a lung protective mode of ventilation for patients with Adult Respiratory Distress Syndrome (ARDS), but caution must be exercised. The purpose of this study was to investigate the effect of airway compliance on Vt delivered by HFOV to the adult patient. Method: An in vitro model was used to simulate an adult passive patient with ARDS, using a high fidelity breathing simulator (ASL 5000, IngMar Medical). The simulation included independent lung ventilation with a fixed resistance and adjustable compliance for each lung. Compliances of 10, 15, 20 and 25 ml/cmH2O were used and resistance (Raw) was fixed at 15 cmH2O/L/s. The SensorMedics 3100B ventilator (Cardinal Health, Dublin, Ohio) was set to a fixed power setting of 6.0, insp-% of 33%, bias flow = 30 L/min, 50% oxygen, and a frequency of 5.0 Hz (n=5) for each compliance setting. Mean airway pressure (mPaw) and amplitude (AMP) varied as the compliance changes were made. Approximately 250 breaths were recorded at each compliance setting, and the data were collected via the host computer and transferred to a log for analysis in SPSS v. 10. Data Analysis: The data analysis was performed using SPSS v. 10 to determine the statistical significance of the delivered Vt with different compliances, different AMP and a fixed power setting. A probability of p < 0.05 was accepted as statistically significant. Results: The average delivered Vt across compliance settings was 124.181 mL (range 116.4276 mL to 132.6637 mL) and the average AMP was 84.85 cmH2O (range 82.0 cmH2O to 88.0801 cmH2O), n=5. There was an inverse relationship between Vt and AMP at a fixed power of 6.0: as compliance improved, Vt increased and there was a corresponding decrease in AMP. The one-way ANOVA showed significant differences in delivered tidal volume and AMP across compliance settings at the fixed power setting. The post hoc Bonferroni test showed significant differences in both AMP and delivered Vt between each compliance change at the fixed power setting of 6.0. Conclusion: Vt is not constant during HFOV. Compliance is one determinant of Vt in adults with ARDS during HFOV. AMP and Vt are inversely related during HFOV at a fixed power setting as compliance improves.
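For readers who want to reproduce this style of analysis, the sketch below shows a one-way ANOVA across compliance groups followed by pairwise comparisons at a Bonferroni-corrected significance level, using SciPy rather than SPSS; it is a generic illustration, not the study's actual analysis script, and the variable names are assumptions.

```python
from itertools import combinations
from scipy import stats

def compare_compliance_groups(vt_by_compliance: dict, alpha: float = 0.05):
    """One-way ANOVA across compliance settings, then pairwise t-tests
    evaluated against a Bonferroni-corrected significance level."""
    labels = list(vt_by_compliance.keys())
    groups = list(vt_by_compliance.values())
    f_stat, p_overall = stats.f_oneway(*groups)

    pairs = list(combinations(range(len(groups)), 2))
    corrected_alpha = alpha / len(pairs)  # Bonferroni correction
    pairwise = {}
    for a, b in pairs:
        _, p_ab = stats.ttest_ind(groups[a], groups[b])
        pairwise[(labels[a], labels[b])] = (p_ab, p_ab < corrected_alpha)
    return f_stat, p_overall, pairwise

# Hypothetical usage: measured Vt samples keyed by compliance (ml/cmH2O)
# results = compare_compliance_groups({10: vt_10, 15: vt_15, 20: vt_20, 25: vt_25})
```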
19

Can algorithmic trading beat the market? : An experiment with S&P 500, FTSE 100, OMX Stockholm 30 Index

Kiselev, Ilya January 2012 (has links)
The research at hand aims to assess the effectiveness of algorithmic trading by comparing it with different benchmarks represented by several types of indexes. How large a return can algorithmic trading achieve, taking into account the costs of the informational and trading infrastructure needed to implement robot trading? To answer this, it is necessary to compare two opposite trading strategies: 1) Algorithmic trading (implemented by a high-frequency trading robot based on a statistical arbitrage strategy and a trend-following trading robot based on the Exponential Moving Average indicator with a variable factor of smoothing); 2) Index investing (classical “buy and hold” index strategies, implemented with four different types of indexes: capitalization-weighted, fundamental, equal-weighted, and risk-based/minimum-variance indexation). According to the results, at the current phase of market development it is theoretically possible for algorithmic trading (and especially high-frequency strategies) to exceed the returns of an index strategy, but two important factors should be noted: 1) Taking into account all of the costs of organizing high-frequency trading (brokerage and stock exchange commissions, trade-related infrastructure maintenance, etc.), the difference in returns (in favor of the high-frequency strategy) will be much smaller. 2) Given that “market efficiency” is growing every year and the returns of high-frequency strategies tend to decrease over time (see further discussion in the thesis), it is logical to assume that ever larger investments in trading infrastructure will be needed to keep the returns of high-frequency trading strategies above the results of index investing strategies.
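The trend-following component mentioned above relies on an exponential moving average with a variable smoothing factor. The exact rule is not specified in this abstract, so the Python sketch below uses one common interpretation, a Kaufman-style efficiency ratio that adapts the EMA's smoothing factor, together with a simple long/flat signal; every parameter and the adaptation rule itself are assumptions, not the thesis's implementation.

```python
import pandas as pd

def adaptive_ema(prices: pd.Series, window: int = 10,
                 fast: float = 2 / (2 + 1), slow: float = 2 / (30 + 1)) -> pd.Series:
    """EMA whose smoothing factor varies with a Kaufman-style efficiency ratio,
    one common way to implement a 'variable factor of smoothing'."""
    change = prices.diff(window).abs()
    volatility = prices.diff().abs().rolling(window).sum()
    er = (change / volatility).fillna(0.0)          # efficiency ratio in [0, 1]
    alpha = (er * (fast - slow) + slow) ** 2        # per-step smoothing factor
    ema = prices.astype(float).copy()
    for i in range(1, len(prices)):
        ema.iloc[i] = ema.iloc[i - 1] + alpha.iloc[i] * (prices.iloc[i] - ema.iloc[i - 1])
    return ema

def trend_signal(prices: pd.Series) -> pd.Series:
    """Long (1) when price is above its adaptive EMA, flat (0) otherwise."""
    return (prices > adaptive_ema(prices)).astype(int)
```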
20

Rapid, Predictive Modeling for High Frequency Interconnect on Low Cost Substrates

Shin, Jaemin 13 May 2005 (has links)
In this dissertation, a predictive (scalable) measurement-based PEEC modeling method for high-frequency interconnects on low-cost FR4 substrates is proposed and demonstrated. The interconnects are modeled with equivalent circuits of scalable building blocks, using a rapid and accurate optimization method to fit parameter data up to 10 GHz. The predictive power of the developed scalable models is demonstrated on several extended interconnect structures, as is the ability to use interpolation to predict the high-frequency performance of structures with differently sized building blocks. The usefulness of the proposed modeling method is validated by comparing predictions to measurements in both the frequency domain and the time domain. The efficiency and accuracy of the method are also compared with the Advanced Design System (ADS) Momentum simulation tool. The results show that the proposed high-frequency interconnect modeling method is far more efficient in terms of simulation time than Momentum simulations, while maintaining accuracy comparable to both the simulations and the measured behavior.
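Although the dissertation's building-block models are PEEC-based, the core step of fitting lumped equivalent-circuit parameters to measured frequency-domain data can be sketched generically. The Python example below fits an R-L branch in parallel with a shunt C to measured impedance samples by least squares; the circuit topology, parameter names, and starting values are illustrative assumptions, not the models developed in the work.

```python
import numpy as np
from scipy.optimize import least_squares

def block_impedance(f, R, L, C):
    """Impedance of a series R-L branch in parallel with a shunt C (illustrative lumped block)."""
    w = 2 * np.pi * f
    z_series = R + 1j * w * L
    z_shunt = 1.0 / (1j * w * C)
    return z_series * z_shunt / (z_series + z_shunt)

def fit_building_block(freqs, z_measured, x0=(1.0, 1e-9, 1e-12)):
    """Fit R, L, C of the lumped block to measured impedance data (e.g. up to 10 GHz)."""
    def residual(x):
        z = block_impedance(freqs, *x)
        return np.concatenate([(z - z_measured).real, (z - z_measured).imag])
    return least_squares(residual, x0, bounds=([0, 0, 0], [np.inf] * 3)).x
```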
