  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Probabilistic modeling of natural attenuation of petroleum hydrocarbons

Hosseini, Amir Hossein (has links)
Natural attenuation refers to the observed reduction in contaminant concentration via natural processes as contaminants migrate from the source into environmental media. Assessment of the dimensions of contaminant plumes and prediction of their fate requires predictions of the rate of dissolution of contaminants from residual non-aqueous-phase liquids (NAPLs) into the aquifer and the rate of contaminant removal through biodegradation. The available techniques to estimate these parameters do not characterize their confidence intervals by accounting for their relationships to uncertainty in source geometry and hydraulic conductivity distribution. The central idea in this thesis is to develop a flexible modeling approach for characterization of uncertainty in residual NAPL dissolution rate and first-order biodegradation rate by tailoring the estimation of these parameters to distributions of uncertainty in source size and hydraulic conductivity field. The first development in this thesis is related to a distance function approach that characterizes the uncertainty in the areal limits of the source zones. Implementation of the approach for a given monitoring well arrangement results in a unique uncertainty band that meets the requirements of unbiasedness and fairness of the calibrated probabilities. The second development in this thesis is related to a probabilistic model for characterization of uncertainty in the 3D localized distribution of residual NAPL in a real site. A categorical variable is defined based on the available CPT-UVIF data, while secondary data based on soil texture and groundwater table elevation are also incorporated into the model. A cross-validation study shows the importance of incorporation of secondary data in improving the prediction of contaminated and uncontaminated locations. 
The third development in this thesis is the implementation of Monte Carlo-type inverse modeling to develop a screening model that characterizes the confidence intervals of the NAPL dissolution rate and the first-order biodegradation rate. The model is based on a sequential self-calibration approach, the distance-function approach, and gradient-based optimization. It is shown that tailoring the estimation of the transport parameters to joint realizations of the source geometry and the transmissivity field can effectively reduce the uncertainties in the predicted state variables.
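The screening idea described above — propagating parameter uncertainty through a transport model by Monte Carlo sampling to obtain confidence intervals — can be illustrated with a deliberately minimal sketch. The simple first-order decay model, the uniform rate bounds, and all numerical values below are illustrative assumptions, not values or methods from the thesis:

```python
import math
import random

def first_order_concentration(c0, lam, t):
    """Concentration after time t under first-order decay: C(t) = C0 * exp(-lam * t)."""
    return c0 * math.exp(-lam * t)

def monte_carlo_interval(c0, lam_low, lam_high, t, n=10_000, seed=1):
    """Propagate uniform uncertainty in the decay rate lam to a 90 %
    interval on the predicted concentration C(t)."""
    rng = random.Random(seed)
    samples = sorted(
        first_order_concentration(c0, rng.uniform(lam_low, lam_high), t)
        for _ in range(n)
    )
    return samples[int(0.05 * n)], samples[int(0.95 * n)]

# illustrative numbers: c0 in mg/L, lam in 1/day, t in days
lo, hi = monte_carlo_interval(c0=100.0, lam_low=0.01, lam_high=0.05, t=30.0)
```

The same pattern — sample uncertain inputs, run the forward model, read off percentiles — scales to the joint source-geometry and conductivity realizations discussed in the abstract.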
272

Kinetics of Anionic Surfactant Anoxic Degradation

Camacho, Julianna G. May 2010 (has links)
The biodegradation kinetics of Geropon TC-42™ by an acclimated culture was investigated in anoxic batch reactors to determine biokinetic coefficients to be implemented in two biofilm mathematical models. Geropon TC-42™ is the surfactant commonly used in space habitation. The two biofilm models differ in that one assumes a constant biofilm density and the other allows biofilm density changes based on space occupancy theory. Extant kinetic analysis of a mixed microbial culture using Geropon TC-42™ as sole carbon source was used to determine cell yield, specific growth rate, and the half-saturation constant for S0/X0 ratios of 4, 12.5, and 34.5. To estimate cell yield, linear regression analysis was performed on data obtained from three sets of simultaneous batch experiments for three S0/X0 ratios. The regressions showed non-zero intercepts, suggesting that cell multiplication is not possible at low substrate concentrations. Non-linear least-squares analysis of the integrated equation was used to estimate the specific growth rate and the half-saturation constant. Net specific growth rate dependence on substrate concentration indicates a self-inhibitory effect of Geropon TC-42™. The flow rate and the ratio of the concentrations of surfactant to nitrate were the factors that most affected the simulations. Higher flow rates resulted in a shorter hydraulic retention time, shorter startup periods, and a faster approach to a steady-state biofilm. At steady state, higher flow resulted in lower surfactant removal. Higher influent surfactant/nitrate concentration ratios caused a longer startup period and supported more surfactant utilization and biofilm growth. Both models correlate well with the empirical data. A model assuming constant biofilm density is computationally simpler and easier to implement. 
Therefore, a suitable anoxic packed-bed reactor for the removal of the surfactant Geropon TC-42™ can be designed using the estimated kinetic values and a model assuming constant biofilm density.
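The nonlinear least-squares estimation of Monod-type kinetics described above can be sketched in miniature. The grid-search fit, the parameter ranges, and the noise-free synthetic data below are illustrative stand-ins, not the thesis's extant kinetic analysis or its Geropon TC-42 data:

```python
def monod(mu_max, ks, s):
    """Monod specific growth rate: mu = mu_max * S / (Ks + S)."""
    return mu_max * s / (ks + s)

def fit_monod(substrate, rates):
    """Grid-search least squares for (mu_max, Ks); a crude stand-in for the
    nonlinear regression of the integrated equation used in the thesis."""
    best = None
    for mu_max in [0.05 * i for i in range(1, 41)]:   # 0.05 .. 2.0 (1/h), illustrative
        for ks in [0.5 * j for j in range(1, 41)]:    # 0.5 .. 20 (mg/L), illustrative
            sse = sum((r - monod(mu_max, ks, s)) ** 2
                      for s, r in zip(substrate, rates))
            if best is None or sse < best[0]:
                best = (sse, mu_max, ks)
    return best[1], best[2]

# synthetic data generated from mu_max = 0.6 1/h, Ks = 5 mg/L (illustrative)
s_obs = [1, 2, 5, 10, 20, 50]
r_obs = [monod(0.6, 5.0, s) for s in s_obs]
mu_hat, ks_hat = fit_monod(s_obs, r_obs)
```

In practice a gradient-based nonlinear solver would replace the grid search, but the objective — minimize the sum of squared residuals over (mu_max, Ks) — is the same.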
273

Parameter estimation methods based on binary observations - Application to Micro-Electromechanical Systems (MEMS)

Jafaridinani, Kian 09 July 2012 (has links) (PDF)
As the characteristic dimensions of electronic systems shrink toward the micro- and nano-scale, their performance becomes increasingly sensitive to dispersion, commonly caused by the micro-fabrication process or by variations in operating conditions such as temperature, humidity, or pressure. It therefore seems essential to co-integrate self-test or self-adjustment routines into these microdevices. Most existing system parameter estimation methods rely on high-resolution digital measurements of the system's output, which demand long design times and large silicon areas and thus increase the cost of the micro-fabricated devices. Parameter estimation from binary outputs offers an alternative self-test identification approach, requiring only a 1-bit Analog-to-Digital Converter (ADC) and a 1-bit Digital-to-Analog Converter (DAC). In this thesis, we propose a novel recursive identification method for the problem of system parameter estimation from binary observations. An online identification algorithm with low storage requirements and small computational complexity is derived. We prove the asymptotic convergence of this method under some assumptions, and show by Monte Carlo simulations that these assumptions need not necessarily be met in practice for the method to perform well. Furthermore, we present the first experimental application of this method to the self-test of integrated micro-electro-mechanical systems (MEMS). The proposed online Built-In Self-Test method is highly amenable to integration for the self-testing of systems relying on resistive sensors and actuators: it requires little memory and only a 1-bit ADC and a 1-bit DAC, which can be implemented in a small silicon area with minimal energy consumption.
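A toy version of identification from binary observations can convey why 1-bit measurements suffice. The update rule below is a simplified sign-driven (perceptron-like) correction with renormalization, not the algorithm proposed in the thesis; note that sign data alone only identifies the parameter vector's direction, so a known norm is assumed. All names and values are illustrative:

```python
import math
import random

def sign(x):
    return 1.0 if x >= 0 else -1.0

def identify_from_signs(inputs, signs, dim, step=0.05, theta_norm=1.0):
    """Recursive identification from binary (sign) observations: correct the
    estimate only when the predicted sign disagrees with the observed one,
    then renormalize to the assumed known norm."""
    theta = [1.0 / math.sqrt(dim)] * dim
    for phi, s in zip(inputs, signs):
        pred = sign(sum(t * p for t, p in zip(theta, phi)))
        if pred != s:  # 1-bit error signal drives the update
            theta = [t + step * s * p for t, p in zip(theta, phi)]
            norm = math.sqrt(sum(t * t for t in theta))
            theta = [t * theta_norm / norm for t in theta]
    return theta

rng = random.Random(0)
true_theta = [0.8, 0.6]  # unit norm; scale assumed known (illustrative)
phis = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(5000)]
ss = [sign(sum(t * p for t, p in zip(true_theta, phi))) for phi in phis]
est = identify_from_signs(phis, ss, dim=2)

# fraction of binary observations the final estimate reproduces
agreement = sum(
    1 for phi, s in zip(phis, ss)
    if sign(sum(t * p for t, p in zip(est, phi))) == s
) / len(ss)
```

Only comparisons against a threshold are ever observed, yet the estimate's sign predictions end up agreeing with the data almost everywhere, mirroring the low-hardware-cost appeal described in the abstract.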
274

Spread of an ant-dispersed annual herb: an individual-based simulation study on population development of Melampyrum pratense L.

Winkler, Eckart, Heinken, Thilo January 2007 (has links)
The paper presents a simulation and parameter-estimation approach for evaluating stochastic patterns of population growth and spread of an annual forest herb, Melampyrum pratense (Orobanchaceae). The survival of a species during large-scale changes in land use and climate will depend, to a considerable extent, on its dispersal and colonisation abilities. Predictions on species migration need a combination of field studies and modelling efforts. Our study on the ability of M. pratense to disperse into so far unoccupied areas was based on experiments in secondary woodland in NE Germany. Experiments started in 1997 at three sites where the species was not yet present, with 300 seeds sown within one square meter. Population development was then recorded until 2001 by mapping of individuals with a resolution of 5 cm. Additional observations considered density dependence of seed production. We designed a spatially explicit individual-based computer simulation model to explain the spatial patterns of population development and to predict future population spread. Besides primary drop of seeds (barochory) it assumed secondary seed transport by ants (myrmecochory) with an exponentially decreasing dispersal tail. An important feature of population-pattern explanation was the simultaneous estimation of both population-growth and dispersal parameters from consistent spatio-temporal data sets. As the simulation model produced stochastic time series and random spatially discrete distributions of individuals, we estimated parameters by minimising the expectation of weighted sums of squares. These sums-of-squares criteria considered population sizes, radial population distributions around the area of origin, and distributions of individuals within squares of 25 × 25 cm, the range of density action. Optimal parameter values, together with the precision of the estimates, were obtained from calculating sums of squares in regular grids of parameter values. 
Our modelling results showed that transport of a fraction of the seeds by ants over distances of 1–2 m was indispensable for explaining the observed population spread, which reached distances of at most 8 m from the population origin within 3 years. Projections of population development over 4 additional years gave a diffusion-like increase of population area without any “outposts”. This prediction generated by the simulation model constitutes a hypothesis to be tested by additional field observations. Some structural deviations between observations and model output already indicated that, for a full understanding of population spread, the set of dispersal mechanisms assumed in the model may have to be extended by additional features of plant-animal mutualism.
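The model structure described above — primary drop near the parent plus an exponentially tailed ant-transport step — can be sketched in one dimension. All parameter values (ant-transport probability, mean carry distance, fecundity, the population cap standing in for density regulation) are illustrative, not the estimates from the study:

```python
import random

def disperse(x, rng, ant_fraction=0.3, mean_ant_dist=0.8):
    """One seed's final position: primary drop near the parent (barochory),
    plus, with some probability, ant transport over an exponentially
    distributed distance (myrmecochory). Parameters are illustrative."""
    pos = x + rng.gauss(0, 0.05)  # primary drop within a few cm
    if rng.random() < ant_fraction:
        pos += rng.choice([-1, 1]) * rng.expovariate(1.0 / mean_ant_dist)
    return pos

def simulate_spread(years=3, seeds_per_plant=5, seed=42):
    """1-D individual-based simulation of an annual herb: each plant dies
    after setting seed; returns the maximum distance from the origin."""
    rng = random.Random(seed)
    plants = [0.0]
    for _ in range(years):
        plants = [disperse(x, rng)
                  for x in plants for _ in range(seeds_per_plant)]
        plants = plants[:2000]  # crude cap standing in for density regulation
    return max(abs(x) for x in plants)

max_dist = simulate_spread()
```

Running many such stochastic replicates and comparing their spatial summaries (population size, radial distribution) to field maps is, in spirit, how the sums-of-squares parameter estimation in the paper proceeds.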
275

Model and System Inversion with Applications in Nonlinear System Identification and Control

Markusson, Ola January 2001 (has links)
No description available.
276

Performance Analysis of Parametric Spectral Estimators

Völcker, Björn January 2002 (has links)
No description available.
277

Topics on fractional Brownian motion and regular variation for stochastic processes

Hult, Henrik January 2003 (has links)
The first part of this thesis studies tail probabilities for elliptical distributions and probabilities of extreme events for multivariate stochastic processes. It is assumed that the tails of the probability distributions satisfy a regular variation condition. This means, roughly speaking, that there is a non-negligible probability for very large or extreme outcomes to occur. Such models are useful in applications including insurance, finance and telecommunications networks. It is shown how regular variation of the marginals, or the increments, of a stochastic process implies regular variation of functionals of the process. Moreover, the associated tail behavior in terms of a limit measure is derived. The second part of the thesis studies problems related to parameter estimation in stochastic models with long memory. Emphasis is on the estimation of the drift parameter in some stochastic differential equations driven by the fractional Brownian motion or, more generally, Volterra-type processes. Observing the process continuously, the maximum likelihood estimator is derived using a Girsanov transformation. In the case of discrete observations the study is carried out for the particular case of the fractional Ornstein-Uhlenbeck process. For this model Whittle's approach is applied to derive an estimator for all unknown parameters.
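For intuition on drift estimation from discrete observations, a much simpler case than the one treated in the thesis can be sketched: the ordinary (H = 1/2) Ornstein-Uhlenbeck process, simulated by its exact AR(1) discretization and estimated by regression. This is a standard textbook construction, not the Whittle estimator for the fractional model; all numbers are illustrative:

```python
import math
import random

def simulate_ou(theta, sigma, dt, n, seed=7):
    """Exact discretization of dX = -theta*X dt + sigma dW with an ordinary
    Brownian driver (H = 1/2); the fractional case needs more machinery."""
    rng = random.Random(seed)
    phi = math.exp(-theta * dt)
    sd = sigma * math.sqrt((1 - phi * phi) / (2 * theta))
    x, path = 0.0, [0.0]
    for _ in range(n):
        x = phi * x + sd * rng.gauss(0, 1)
        path.append(x)
    return path

def estimate_drift(path, dt):
    """Least-squares AR(1) regression through the origin, then map the
    autoregression coefficient back to the drift: theta = -log(phi)/dt."""
    num = sum(a * b for a, b in zip(path[:-1], path[1:]))
    den = sum(a * a for a in path[:-1])
    return -math.log(num / den) / dt

path = simulate_ou(theta=1.0, sigma=0.5, dt=0.1, n=20000)
theta_hat = estimate_drift(path, dt=0.1)
```

Under long-memory (fractional) noise the increments are correlated, this regression loses its justification, and frequency-domain methods such as Whittle's approach become the natural tool — which is the gap the thesis addresses.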
278

Advances in Separation Science: Molecular Imprinting. Development of Spherical Beads and Optimization of the Formulation by Chemometrics

Kempe, Henrik January 2007 (has links)
An intrinsic mathematical model for simulation of fixed-bed chromatography was demonstrated and compared to more simplified models. The former model was shown to describe variations in the physical, kinetic, and operating parameters better than the latter ones, resulting in a more reliable prediction of the chromatography process as well as a better understanding of the underlying mechanisms responsible for the separation. A procedure based on frontal liquid chromatography and a detailed mathematical model was developed to determine effective diffusion coefficients of proteins in chromatographic gels. The procedure was applied to lysozyme, bovine serum albumin, and immunoglobulin γ in Sepharose™ CL-4B. The effective diffusion coefficients were comparable to those determined by other methods. Molecularly imprinted polymers (MIPs) are traditionally prepared as irregular particles by grinding monoliths. In this thesis, a suspension polymerization providing spherical MIP beads is presented. Droplets of pre-polymerization solution were formed in mineral oil, with no need of stabilizers, by vigorous stirring. The droplets were transformed into solid spherical beads by free-radical polymerization. The method is fast, and the performance of the beads is comparable to that of irregular particles. Optimizing a MIP formulation requires a large number of experiments, since the number of possible combinations of the components is huge. To facilitate the optimization, chemometrics was applied. The amounts of monomer, cross-linker, and porogen were chosen as the factors in the model. Multivariate data analysis indicated the influence of the factors on the binding, and an optimized MIP composition was identified. The combined use of the suspension polymerization method to produce spherical beads with the application of chemometrics was shown in this thesis to drastically reduce the number of experiments and the time needed to design and optimize a new MIP.
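The chemometric step — fitting a response model over a small designed experiment instead of testing every formulation — can be sketched as follows. The two coded factors, the linear model, and the synthetic responses are illustrative toys, not the MIP data (the thesis used three factors: monomer, cross-linker, and porogen):

```python
def solve(a, b):
    """Gauss-Jordan elimination for a small linear system (normal equations)."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))  # partial pivoting
        m[i], m[p] = m[p], m[i]
        for r in range(n):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_linear_model(xs, ys):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 on a two-factor design;
    a toy stand-in for the chemometric screening model."""
    rows = [[1.0, x1, x2] for x1, x2 in xs]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    return solve(ata, atb)

# 2^2 full-factorial design in coded units, plus a center point
design = [(-1, -1), (1, -1), (-1, 1), (1, 1), (0, 0)]
response = [2.0 + 0.5 * x1 - 0.3 * x2 for x1, x2 in design]  # synthetic, noise-free
b0, b1, b2 = fit_linear_model(design, response)
```

Five runs recover the factor effects exactly here; the same design-then-regress logic is what lets chemometrics cut the number of MIP formulation experiments so sharply.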
279

Single-Zone Cylinder Pressure Modeling and Estimation for Heat Release Analysis of SI Engines

Klein, Markus January 2007 (has links)
Cylinder pressure modeling and heat release analysis are today important and standard tools for engineers and researchers when developing and tuning new engines. Being able to accurately model and extract information from the cylinder pressure is important for the interpretation and validity of the result. The first part of the thesis treats single-zone cylinder pressure modeling, where the specific heat ratio model constitutes a key part. This model component is therefore investigated more thoroughly. For the purpose of reference, the specific heat ratio is calculated for burned and unburned gases, assuming that the unburned mixture is frozen and that the burned mixture is at chemical equilibrium. Use of the reference model in heat release analysis is too time-consuming, and therefore a set of simpler models, both existing and newly developed, are compared to the reference model. A two-zone mean temperature model and the Vibe function are used to parameterize the mass fraction burned. The mass fraction burned is used to interpolate the specific heats for the unburned and burned mixture, and to form the specific heat ratio, which yields a cylinder pressure modeling error of the same order as the measurement noise, and fifteen times smaller than that of the model originally suggested in Gatowski et al. (1984). The computational time is increased by 40 % compared to the original setting, but reduced by a factor of 70 compared to precomputed tables from the full equilibrium program. The specific heats for the unburned mixture are captured within 0.2 % by linear functions, and the specific heats for the burned mixture within 1 % by higher-order polynomials, over the major operating range of a spark-ignited (SI) engine. In the second part, four methods for compression ratio estimation based on cylinder pressure traces are developed and evaluated for both simulated and experimental cycles. 
Three methods rely upon a model of polytropic compression for the cylinder pressure. It is shown that they give a good estimate of the compression ratio at low compression ratios, although the estimates are biased. A method based on a variable projection algorithm with a logarithmic norm of the cylinder pressure yields the smallest confidence intervals and the shortest computational time of these three methods. This method is recommended when computational time is an important issue. The polytropic pressure model lacks information about heat transfer, and therefore the estimation bias increases with the compression ratio. The fourth method includes heat transfer, crevice effects, and a commonly used heat release model for firing cycles. This method estimates the compression ratio more accurately in terms of bias and variance. The method is more computationally demanding and is thus recommended when estimation accuracy is the most important property. In order to estimate the compression ratio as accurately as possible, motored cycles with as high an initial pressure as possible should be used. The objective in part 3 is to develop an estimation tool for heat release analysis that is accurate, systematic and efficient. Two methods that incorporate prior knowledge of the parameter nominal values and uncertainties in a systematic manner are presented and evaluated. Method 1 is based on using a singular value decomposition of the estimated Hessian to reduce the number of estimated parameters one by one. The suggested number of parameters to use is then found as the one minimizing the Akaike final prediction error. Method 2 uses a regularization technique to include the prior knowledge in the criterion function. Method 2 gives more accurate estimates than method 1. For method 2, prior knowledge with individually set parameter uncertainties yields more accurate and robust estimates. Once a choice of parameter uncertainty has been made, no user interaction is needed. 
Method 2 is then formulated in three different versions, which differ in how they determine how strong the regularization should be. The quickest version is based on ad hoc tuning and should be used when computational time is important. Another version is more accurate and more flexible to changing operating conditions, but is more computationally demanding.
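The flavor of "method 2" — including prior knowledge through a regularization term whose strength reflects an individually set parameter uncertainty — can be shown in a one-parameter sketch. The closed form below is generic ridge-style regression toward a nominal value, not the thesis's criterion function, and all numbers are illustrative:

```python
def regularized_estimate(xs, ys, theta_nominal, delta):
    """Scalar least squares with a prior penalty: minimize
    sum_i (y_i - theta*x_i)^2 + ((theta - theta_nominal)/delta)^2.
    A small delta (confident prior) pulls the estimate toward the
    nominal value; a large delta recovers ordinary least squares."""
    xty = sum(x * y for x, y in zip(xs, ys))
    xtx = sum(x * x for x in xs)
    w = 1.0 / (delta * delta)
    return (xty + w * theta_nominal) / (xtx + w)

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]  # roughly theta = 2 plus noise (synthetic)
loose = regularized_estimate(xs, ys, theta_nominal=1.0, delta=100.0)
tight = regularized_estimate(xs, ys, theta_nominal=1.0, delta=0.01)
```

With a loose prior the data dominate; with a tight prior the estimate sits near the nominal value. Choosing that strength per parameter is exactly the knob the three versions of method 2 tune differently.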
280

Statistical Inference in Inverse Problems

Xun, Xiaolei May 2012 (has links)
Inverse problems have gained popularity in statistical research recently. This dissertation treats two statistical inverse problems: a Bayesian approach to the detection of small low-emission sources on a large random background, and parameter estimation methods for partial differential equation (PDE) models. The source detection problem arises, for instance, in some homeland security applications. We address the problem of detecting the presence and location of a small low-emission source inside an object when the background noise dominates. The goal is to reach signal-to-noise ratio levels on the order of 10^-3. We develop a Bayesian approach to this problem in two dimensions. The method allows inference not only about the existence of the source, but also about its location. We derive Bayes factors for model selection and estimation of location based on Markov chain Monte Carlo simulation. A simulation study shows that, with a sufficiently high total emission level, our method can effectively locate the source. Differential equation (DE) models are widely used to model dynamic processes in many fields. The forward problem of solving the equations for given parameters has been extensively studied in the past. However, the inverse problem of estimating parameters from observed state variables has received relatively little attention in the statistical literature, especially for PDE models. We propose two joint modeling schemes to solve for constant parameters in PDEs: a parameter cascading method and a Bayesian treatment. In both methods, the unknown functions are expressed via basis function expansions. For the parameter cascading method, we develop the algorithm to estimate the parameters and derive a sandwich estimator of the covariance matrix. For the Bayesian method, we develop the joint model for the data and the PDE, and describe how the Markov chain Monte Carlo technique is employed to make posterior inference. 
A straightforward two-stage method is to first fit the data and then estimate the parameters by the least-squares principle. The three approaches are illustrated using simulated examples and compared via simulation studies. Simulation results show that the proposed methods outperform the two-stage method.
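The two-stage idea — first represent the observed state variable, then estimate the PDE parameter by least squares — can be sketched for the heat equation u_t = d·u_xx. Here the exact solution stands in for the smoothed data of stage one, and the grid sizes and true parameter are illustrative; the dissertation's joint methods avoid exactly this plug-in structure:

```python
import math

def heat_solution(d, x, t):
    """Exact solution u(x,t) = exp(-d*pi^2*t) * sin(pi*x) of u_t = d * u_xx
    on [0,1] with u(x,0) = sin(pi*x); stands in for smoothed observations."""
    return math.exp(-d * math.pi ** 2 * t) * math.sin(math.pi * x)

def estimate_diffusivity(u, dx, dt):
    """Stage two: finite-difference u_t and u_xx on the interior of the
    grid, then least squares for d in u_t = d * u_xx."""
    num = den = 0.0
    for i in range(1, len(u) - 1):          # spatial index
        for j in range(1, len(u[0]) - 1):   # temporal index
            ut = (u[i][j + 1] - u[i][j - 1]) / (2 * dt)
            uxx = (u[i + 1][j] - 2 * u[i][j] + u[i - 1][j]) / dx ** 2
            num += ut * uxx
            den += uxx * uxx
    return num / den

dx, dt, d_true = 0.02, 0.001, 0.1
grid = [[heat_solution(d_true, i * dx, j * dt) for j in range(50)]
        for i in range(51)]
d_hat = estimate_diffusivity(grid, dx, dt)
```

With noisy data the differentiation step in stage one amplifies error, which is the weakness that motivates the parameter cascading and Bayesian joint-modeling alternatives.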
