  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

The dynamics and phase behavior of suspensions of stimuli-responsive colloids

Cho, Jae Kyu 29 July 2009 (has links)
This thesis describes studies of the dynamics, phase behavior, interparticle interactions, and hydrodynamics of stimuli-responsive pNIPAm-co-AAc microgels. Due to their responsiveness to external stimuli, these colloidal particles serve as excellent model systems for probing the relationship between colloidal interactions and phase behavior. As a first step, we established our core experimental methodology by demonstrating that particle tracking video microscopy is an effective technique for quantifying various parameters in colloidal systems. We then used the technique in combination with a microfluidic device that provides in situ control over sample pH to probe the phase behavior of pNIPAm-co-AAc microgel suspensions. In essence, the experimental set-up enables changes in effective particle volume fraction through changes in pH, which can be used to construct the phase diagram. To explain the unique features of the microgel phase diagram, we measured the underlying pairwise interparticle potential of pNIPAm-co-AAc microgels directly in quasi-2D suspension and showed that the interactions are pH dependent and can range from weakly attractive to soft repulsive. Finally, the hindered Brownian diffusion of colloidal particles confined by hard walls was investigated systematically, and striking differences between hard and soft spheres were found, with soft pNIPAm-co-AAc microgels showing surprising mobility even under strong confinement.
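The tracking methodology mentioned in the abstract can be illustrated with a short sketch: given a particle trajectory extracted from video microscopy, the mean squared displacement (MSD) yields the diffusion coefficient. This is a generic illustration, not code from the thesis; the trajectory below is synthetic and all parameter values are made up.

```python
import numpy as np

def msd(traj, max_lag):
    """Mean squared displacement of a 2D trajectory.

    traj: (T, 2) array of positions from particle-tracking video microscopy.
    Returns msd[k-1] = <|r(t+k) - r(t)|^2> for lags k = 1..max_lag.
    """
    out = np.empty(max_lag)
    for k in range(1, max_lag + 1):
        disp = traj[k:] - traj[:-k]
        out[k - 1] = np.mean(np.sum(disp**2, axis=1))
    return out

# Synthetic Brownian trajectory: per-axis step variance 2*D*dt.
rng = np.random.default_rng(0)
D_true, dt = 0.5, 0.1                      # illustrative values
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(10000, 2))
traj = np.cumsum(steps, axis=0)

lags = np.arange(1, 21)
m = msd(traj, 20)
# Free diffusion in 2D: MSD(k*dt) = 4*D*k*dt, so the fitted slope gives D.
D_est = np.polyfit(lags * dt, m, 1)[0] / 4
```

For confined particles, as in the hard-wall experiments described above, the MSD curve would flatten at long lags instead of growing linearly.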
192

Parameter estimation error: a cautionary tale in computational finance

Popovic, Ray 17 May 2010 (has links)
We quantify the effects on contingent claim valuation of using an estimator for the volatility of a geometric Brownian motion (GBM) process. That is, we show what difficulties can arise when failing to account for estimation risk. Our working problem uses a direct estimator of volatility based on the sample standard deviation of increments from the underlying Brownian motion. After substituting into the GBM the direct volatility estimator for the true, but unknown, value of the parameter sigma, we derive the resulting marginal distribution of the approximated GBM. This allows us to derive post-estimation distributions and valuation formulae for an assortment of European contingent claims that are in accord with the basic properties of the underlying risk-neutral process. Next we extend our work to the contingent claim sensitivities associated with an assortment of European option portfolios that are based on the direct estimator of the volatility of the GBM process. Our approach to the option sensitivities - the Greeks - uses the likelihood function technique. This allows us to obtain computable results for the technically more-complicated formulae associated with our post-estimation process. We discuss an assortment of difficulties that can ensue when failing to account for estimation risk in valuation and hedging formulae.
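The plug-in step the abstract cautions about can be sketched in a few lines: estimate sigma from the sample standard deviation of log increments, then substitute the estimate into a Black-Scholes valuation as if it were the true parameter. This is a generic illustration of the setup, not the thesis's post-estimation distributions; all numbers are made up.

```python
import math
import random

def sigma_hat(prices, dt):
    """Direct volatility estimator: sample std of log increments, scaled by dt."""
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var / dt)

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call, with the estimator plugged in for sigma."""
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# Simulate a GBM path with true sigma = 0.2, then price with the estimate.
random.seed(1)
true_sigma, mu, dt, S0 = 0.2, 0.05, 1 / 252, 100.0
prices = [S0]
for _ in range(5000):
    z = random.gauss(0, 1)
    prices.append(prices[-1] * math.exp((mu - 0.5 * true_sigma**2) * dt
                                        + true_sigma * math.sqrt(dt) * z))
s = sigma_hat(prices, dt)
price = bs_call(100.0, 100.0, 0.05, s, 1.0)
```

Treating `s` as exact, as the last line does, is precisely the practice whose consequences the thesis quantifies.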
193

New control charts for monitoring univariate autocorrelated processes and high-dimensional profiles

Lee, Joongsup 18 August 2011 (has links)
In this thesis, we first investigate the use of automated variance estimators in distribution-free statistical process control (SPC) charts for univariate autocorrelated processes. We introduce two variance estimators---the standardized time series overlapping area estimator and the so-called quick-and-dirty autoregressive estimator---that can be obtained from a training data set and used effectively with distribution-free SPC charts when those charts are applied to processes exhibiting nonnormal responses or correlation between successive responses. In particular, we incorporate the two estimators into DFTC-VE, a new distribution-free tabular CUSUM chart developed for autocorrelated processes; and we compare its performance with other state-of-the-art distribution-free SPC charts. Using either of the two variance estimators, the DFTC-VE outperforms its competitors in terms of both in-control and out-of-control average run lengths when all the competing procedures are tested on the same set of independently sampled realizations of selected autocorrelated processes with normal or nonnormal noise components. Next, we develop WDFTC, a wavelet-based distribution-free CUSUM chart for detecting shifts in the mean of a high-dimensional profile with noisy components that may exhibit nonnormality, variance heterogeneity, or correlation between profile components. A profile describes the relationship between a selected quality characteristic and an input (design) variable over the experimental region. Exploiting a discrete wavelet transform (DWT) of the mean in-control profile, WDFTC selects a reduced-dimension vector of the associated DWT components from which the mean in-control profile can be approximated with minimal weighted relative reconstruction error. 
Based on randomly sampled Phase I (in-control) profiles, the covariance matrix of the corresponding reduced-dimension DWT vectors is estimated using a matrix-regularization method; then the DWT vectors are aggregated (batched) so that the nonoverlapping batch means of the reduced-dimension DWT vectors have manageable covariances. To monitor shifts in the mean profile during Phase II operation, WDFTC computes a Hotelling's T-square-type statistic from successive nonoverlapping batch means and applies a CUSUM procedure to those statistics, where the associated control limits are evaluated analytically from the Phase I data. We compare WDFTC with other state-of-the-art profile-monitoring charts using both normal and nonnormal noise components having homogeneous or heterogeneous variances as well as independent or correlated components; and we show that WDFTC performs well, especially for local shifts of small to medium size, in terms of both in-control and out-of-control average run lengths.
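As background for readers unfamiliar with this chart family, a generic tabular CUSUM for detecting a mean shift looks like the following. This is the textbook chart only; the thesis's DFTC-VE and WDFTC charts additionally incorporate variance estimators from training data and batch-means machinery, which is not reproduced here, and all parameter values are illustrative.

```python
import random

def tabular_cusum(xs, mu0, k, h):
    """Two-sided tabular CUSUM; returns the index of the first alarm, or None.

    xs: observations; mu0: in-control mean; k: reference (slack) value;
    h: control limit.
    """
    s_hi = s_lo = 0.0
    for i, x in enumerate(xs):
        s_hi = max(0.0, s_hi + (x - mu0) - k)
        s_lo = max(0.0, s_lo - (x - mu0) - k)
        if s_hi > h or s_lo > h:
            return i
    return None

random.seed(0)
shifted = [random.gauss(1.0, 1.0) for _ in range(200)]  # 1-sigma mean shift
alarm = tabular_cusum(shifted, 0.0, 0.5, 5.0)           # detects the shift early
```

The average run lengths compared in the abstract are simply the expected values of this alarm index, in control (no shift) and out of control (after a shift).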
194

Topics on fractional Brownian motion and regular variation for stochastic processes

Hult, Henrik January 2003 (has links)
The first part of this thesis studies tail probabilities for elliptical distributions and probabilities of extreme events for multivariate stochastic processes. It is assumed that the tails of the probability distributions satisfy a regular variation condition. This means, roughly speaking, that there is a non-negligible probability for very large or extreme outcomes to occur. Such models are useful in applications including insurance, finance and telecommunications networks. It is shown how regular variation of the marginals, or the increments, of a stochastic process implies regular variation of functionals of the process. Moreover, the associated tail behavior in terms of a limit measure is derived.

The second part of the thesis studies problems related to parameter estimation in stochastic models with long memory. Emphasis is on the estimation of the drift parameter in some stochastic differential equations driven by the fractional Brownian motion or, more generally, Volterra-type processes. Observing the process continuously, the maximum likelihood estimator is derived using a Girsanov transformation. In the case of discrete observations the study is carried out for the particular case of the fractional Ornstein-Uhlenbeck process. For this model Whittle's approach is applied to derive an estimator for all unknown parameters.
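The regular variation condition in the first part can be made concrete with the standard Hill estimator of the tail index: for a tail P(X > x) ≈ x^(-alpha), the estimator recovers alpha from the largest order statistics. A sketch on simulated Pareto data (the distribution, seed and sample sizes are illustrative, not from the thesis):

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the tail index alpha from the k largest order statistics.

    Returns the reciprocal of the mean log-excess of the k largest values
    over the (k+1)-th largest value.
    """
    xs = np.sort(sample)[::-1]                        # descending order statistics
    log_excess = np.log(xs[:k]) - np.log(xs[k])
    return 1.0 / np.mean(log_excess)

# Pareto sample with P(X > x) = x^(-2): exactly regularly varying, alpha = 2.
rng = np.random.default_rng(42)
alpha = 2.0
sample = rng.uniform(size=100_000) ** (-1.0 / alpha)  # inverse-CDF sampling
alpha_hat = hill_estimator(sample, 2000)
```

In practice the choice of k trades bias against variance; here the pure Pareto tail makes the estimate insensitive to it.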
195

A mathematical model for colloidal aggregation [electronic resource] / by Colleen S. O'Brien.

O'Brien, Colleen S. January 2003 (has links)
Title from PDF of title page. / Document formatted into pages; contains 313 pages. / Thesis (M.S.Ch.E.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / ABSTRACT: The characterization of fine particles is an area of immense significance to many industrial endeavors. It has been estimated that 70% of all industrial processes deal with fine particles at some point in the process. A natural phenomenon occurring in these processes is colloidal aggregation. This study examines aggregation in colloidal systems in order to characterize, examine, and control this occurrence in industrial processes. The study of particle aggregation has been broken into many different areas, such as collision mechanisms, interaction energy, etc., but a complete model that integrates these different aspects has never been fully realized. A new model is required to accurately predict the aggregation behavior of colloidal particles. In this work, a new model is developed that integrates Smoluchowski kinetics, total interaction energy between particles, and stability ratios for perikinetic and orthokinetic collision mechanisms. / ABSTRACT: The total particle interaction energy necessary for the calculation of stability ratios is represented by the summation of electrostatic and van der Waals interactions. The electrostatic interactions are modeled using DLVO theory, the linear Poisson-Boltzmann equation, and a numerical solution for the non-linear Poisson-Boltzmann equation, while the van der Waals interactions are represented by Hamaker theory. The mathematical model is solved using an adjustable discretization technique, which is tested against a specific analytic solution and yields an assessment of the error intrinsic in the discretization method. The basis of the mathematical model is a population balance framework.
The model developed in this study is general in many respects, but could be readily applied to many different aggregation systems with minor modification. / ABSTRACT: A comparison of the mathematical model with previous experiments conducted by Scott Fisher (1998) is carried out for the perikinetic and orthokinetic transport-limited aggregation regimes. The fractal nature of solid-sphere aggregates is considered when comparing the mathematical model predictions with experimental measurements. The previous experiments used for comparison utilized polystyrene particles ranging from 100 nm to 500 nm in initial diameter, several initial particle concentrations, and various stirring rates. Zeta potential measurements are presented in order to set the range of transport-limited aggregation. A comparison of the mathematical model's results with the experimental results shows good agreement within the perikinetic and orthokinetic transport-limited aggregation regimes, with average particle sizes ranging from 100 nm to well over 2 microns. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
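The population-balance core of such a model can be sketched with the discrete Smoluchowski equations. The sketch below uses a constant aggregation kernel purely for illustration (the thesis couples the kernel to DLVO stability ratios, which is not reproduced here) and can be checked against the known analytic solution for the total particle number.

```python
import numpy as np

def smoluchowski_constant_kernel(n_max, k_agg, t_end, dt):
    """Forward-Euler integration of the discrete Smoluchowski equations.

    n[i] is the number concentration of aggregates of i+1 primary particles;
    k_agg is a size-independent aggregation rate constant. Sizes above n_max
    are truncated, which is acceptable while large clusters remain rare.
    """
    n = np.zeros(n_max)
    n[0] = 1.0                                   # monomers only, scaled units
    for _ in range(int(t_end / dt)):
        c = np.convolve(n, n)                    # c[m] = sum_j n[j] * n[m-j]
        gain = np.zeros(n_max)
        gain[1:] = 0.5 * k_agg * c[:n_max - 1]   # i-mers formed from smaller pairs
        loss = k_agg * n * n.sum()               # i-mers consumed by any collision
        n = n + dt * (gain - loss)
    return n

n = smoluchowski_constant_kernel(50, 2.0, 1.0, 0.001)
# Constant-kernel check: total number N(t) = N0 / (1 + 0.5 * k_agg * N0 * t).
```

With N0 = 1, k_agg = 2 and t = 1, the analytic total is 0.5 and the monomer concentration is 0.25; the integration reproduces both closely.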
196

Interim analysis of clinical trials : simulation studies of fractional Brownian motion.

Huang, Jin. Swint, John Michael, Kapadia, Asha Seth, Lai, Dejian, January 2009 (has links)
Source: Dissertation Abstracts International, Volume: 70-03, Section: B, page: 1576. Advisers: Dejian Lai; Asha S. Kapadia. Includes bibliographical references.
197

High-performance algorithms and software for large-scale molecular simulation

Liu, Xing 08 June 2015 (has links)
Molecular simulation is an indispensable tool in many different disciplines such as physics, biology, chemical engineering, materials science, drug design, and others. Performing large-scale molecular simulation is of great interest to biologists and chemists, because many important biological and pharmaceutical phenomena can only be observed in very large molecular systems and after sufficiently long time dynamics. On the other hand, molecular simulation methods usually have very steep computational costs, which limits current molecular simulation studies to relatively small systems. The gap between the scale of molecular simulation that existing techniques can handle and the scale of interest has become a major barrier for applying molecular simulation to study real-world problems. Studying large-scale molecular systems with molecular simulation requires developing highly parallel simulation algorithms and constantly adapting them to rapidly changing high-performance computing architectures. However, many existing algorithms and codes for molecular simulation date from more than a decade ago and were designed for sequential computers or early parallel architectures. They may not scale efficiently and do not fully exploit features of today's hardware. Given the rapid evolution in computer architectures, the time has come to revisit these molecular simulation algorithms and codes. In this thesis, we demonstrate our approach to addressing the computational challenges of large-scale molecular simulation by presenting both high-performance algorithms and software for two important molecular simulation applications, Hartree-Fock (HF) calculations and hydrodynamics simulations, on highly parallel computer architectures. The algorithms and software presented in this thesis have been used by biologists and chemists to study problems that could not be solved using existing codes. The parallel techniques and methods developed in this work can also be applied to other molecular simulation applications.
198

A model system for understanding the distribution of fines in a paper structure using fluorescence microscopy / Ett modellsystem för att förstå fördelningen av fines i en pappersstruktur med hjälp av fluorescensmikroskopi

Jansson Rådberg, Weronica January 2015 (has links)
Fines play a very important role in paper chemistry and are a determinant of retention, drainage and the properties of paper. The purpose of this project was to label fines with fluorophores and study their Brownian motion with fluorescence microscopy. If successful, this approach could then be used to study the interactions between fines, fibers and other additives in a suspension, providing fundamental knowledge of why fines play this important role. Because the fines aggregated, no Brownian motion could be detected. Instead the fines were treated as a network system, and small fluorescently labeled latex particles were studied within this network. This approach will yield information about the fines once the obstacle of network sedimentation is resolved.
199

GARCH models based on Brownian Inverse Gaussian innovation processes / Gideon Griebenow

Griebenow, Gideon January 2006 (has links)
In classic GARCH models for financial returns the innovations are usually assumed to be normally distributed. However, it is generally accepted that a non-normal innovation distribution is needed in order to account for the heavier tails often encountered in financial returns. Since the structure of the normal inverse Gaussian (NIG) distribution makes it an attractive alternative innovation distribution for this purpose, we extend the normal GARCH model by assuming that the innovations are NIG-distributed. We use the normal variance mixture interpretation of the NIG distribution to show that a NIG innovation may be interpreted as a normal innovation coupled with a multiplicative random impact factor adjustment of the ordinary GARCH volatility. We relate this new volatility estimate to realised volatility and suggest that the random impact factors are due to a news noise process influencing the underlying returns process. This GARCH model with NIG-distributed innovations leads to more accurate parameter estimates than the normal GARCH model. In order to obtain even more accurate parameter estimates, and since we expect an information gain if we use more data, we further extend the model to cater for high, low and close data, as well as full intraday data, instead of only daily returns. This is achieved by introducing the Brownian inverse Gaussian (BIG) process, which follows naturally from the unit inverse Gaussian distribution and standard Brownian motion. Fitting these models to empirical data, we find that the accuracy of the model fit increases as we move from the models assuming normally distributed innovations and allowing for only daily data to those assuming underlying BIG processes and allowing for full intraday data. However, we do encounter one problematic result, namely that there is empirical evidence of time dependence in the random impact factors. 
This means that the news noise processes, which we assumed to be independent over time, are indeed time dependent, as can actually be expected. In order to cater for this time dependence, we extend the model still further by allowing for autocorrelation in the random impact factors. The increased complexity that this extension introduces means that we can no longer rely on standard Maximum Likelihood methods, but have to turn to Simulated Maximum Likelihood methods, in conjunction with Efficient Importance Sampling and the Control Variate variance reduction technique, in order to obtain an approximation to the likelihood function and the parameter estimates. We find that this time dependent model assuming an underlying BIG process and catering for full intraday data fits generated data and empirical data very well, as long as enough intraday data is available. / Thesis (Ph.D. (Risk Analysis))--North-West University, Potchefstroom Campus, 2006.
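The variance-mixture interpretation described above can be sketched directly: a symmetric NIG innovation is a standard normal draw scaled by the square root of an inverse-Gaussian "random impact factor". The GARCH(1,1) recursion and all parameter values below are illustrative, not the thesis's fitted model.

```python
import numpy as np

def nig_garch_path(n, omega, a, b, alpha, delta, rng):
    """Simulate GARCH(1,1) returns with symmetric NIG innovations.

    eps_t = sqrt(W_t) * Z_t, with W_t inverse-Gaussian (mean delta/alpha,
    shape delta^2) and Z_t standard normal; delta = alpha gives unit-variance
    innovations. Parameter names are generic, not the thesis's notation.
    """
    h = omega / (1 - a - b)                       # start at unconditional variance
    r = np.empty(n)
    for t in range(n):
        w = rng.wald(delta / alpha, delta**2)     # random impact factor W_t
        eps = np.sqrt(w) * rng.standard_normal()  # symmetric NIG innovation
        r[t] = np.sqrt(h) * eps
        h = omega + a * r[t]**2 + b * h           # GARCH(1,1) volatility update
    return r

rng = np.random.default_rng(7)
r = nig_garch_path(5000, omega=0.15, a=0.05, b=0.8, alpha=1.5, delta=1.5, rng=rng)
```

The mixing draw `w` is exactly the multiplicative adjustment of the ordinary GARCH volatility described in the abstract; making successive `w` draws autocorrelated gives the time-dependent extension the thesis develops.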
200

Brauno dujų panaudojimo vidaus degimo variklyje galimybių tyrimas. Energetinis aspektas / The research of appliance of Brown gas in internal - combustion engine. Power and energy aspect

Grigaliūnas, Aivaras 11 February 2012 (has links)
The bachelor's thesis "The research of appliance of Brown gas in internal-combustion engine. Power and energy aspect" consists of six chapters, comprising 36 pages, 19 figures, 3 tables and appendices. The introduction states the research problem, aims and objectives. The first chapter describes the operation of a two-stroke internal combustion engine and the fuels it uses. The second chapter analyzes the electrolysis process taking place in the Brown gas generator. The third chapter examines the use of hydrogen in internal combustion engines and the properties of hydrogen as a fuel. The fourth chapter discusses emerging problems and current issues. The fifth chapter describes the research methodology. The conclusions drawn from the experiment are presented at the end of the work.
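For the electrolysis chapter, the theoretical gas yield of a Brown gas (HHO) generator is bounded by Faraday's law: four electrons liberate two H2 molecules and one O2 molecule. The sketch below assumes 100% current efficiency and ideal-gas behavior, so real generators produce less; the 20 A operating current is an arbitrary example, not a value from the thesis.

```python
F = 96485.332           # Faraday constant, C/mol
R = 8.314               # ideal gas constant, J/(mol*K)

def hho_volume_per_hour(current_a, cells=1, temp_k=293.15, p_pa=101325.0):
    """Ideal-gas volume (m^3) of 2:1 H2/O2 mixture produced in one hour.

    Per 4 mol of electrons, electrolysis yields 2 mol H2 + 1 mol O2,
    i.e. 3/4 mol of gas per mol of electrons. Assumes 100% current efficiency.
    """
    mol_electrons = current_a * 3600.0 * cells / F
    mol_gas = mol_electrons * 3.0 / 4.0
    return mol_gas * R * temp_k / p_pa

litres = hho_volume_per_hour(20.0) * 1000.0     # single cell at 20 A
# About 13.5 L/h of stoichiometric HHO at room conditions.
```

This upper bound is useful for judging whether a generator's measured output, and hence its energy balance, is plausible.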
