221

Parameter estimation methods based on binary observations - Application to Micro-Electromechanical Systems (MEMS)

Jafaridinani, Kian 09 July 2012 (has links) (PDF)
As the characteristic dimensions of electronic systems scale down to the micro- or nano-world, their performance is strongly affected. Micro-fabrication process variations, as well as changes in operating conditions such as temperature, humidity or pressure, are common causes of dispersion. It therefore seems essential to co-integrate self-test or self-adjustment routines into these microdevices. Most existing system parameter estimation methods suited to this purpose rely on high-resolution digital measurements of the system's output; they thus require long design times and large silicon areas, which increases the cost of the micro-fabricated devices. Parameter estimation from binary outputs offers an alternative self-test identification approach, requiring only a 1-bit Analog-to-Digital Converter (ADC) and a 1-bit Digital-to-Analog Converter (DAC). In this thesis, we propose a novel recursive identification method for the problem of system parameter estimation from binary observations. An online identification algorithm with low storage requirements and small computational complexity is derived. We prove the asymptotic convergence of this method under some assumptions, and show by Monte Carlo simulations that these assumptions need not be met in practice to obtain good performance. Furthermore, we present the first experimental application of this method to the self-test of integrated micro-electro-mechanical systems (MEMS). The proposed online Built-In Self-Test method is very amenable to integration for the self-testing of systems relying on resistive sensors and actuators, because it requires low memory storage and only a 1-bit ADC and a 1-bit DAC, which can be implemented in a small silicon area with minimal energy consumption.
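The abstract does not reproduce the recursive scheme itself, but the flavor of estimating a parameter from 1-bit observations can be illustrated with a generic stochastic-approximation sketch. This is not the thesis's algorithm, and all names and values below are invented for illustration: the only "measurement" is the comparator bit b_k = 1{x_k > theta_k} (the 1-bit ADC), and the adaptive threshold theta_k plays the role of the 1-bit DAC output.

```python
import random

def estimate_from_binary(samples, gamma0=2.0):
    """Stochastic-approximation sketch: recover the location of a noisy
    signal when each sample is reduced to a single comparator bit.
    Only b_k = 1{x_k > theta_k} is 'observed', mimicking a 1-bit ADC
    whose threshold (the 1-bit DAC side) tracks the running estimate.
    Illustrative only, not the method proposed in the thesis."""
    theta = 0.0
    for k, x in enumerate(samples, start=1):
        b = 1.0 if x > theta else 0.0          # the only measured quantity
        theta += (gamma0 / k) * (b - 0.5)      # decreasing-gain update
    return theta

random.seed(0)
true_value = 2.0
data = [true_value + random.gauss(0.0, 0.5) for _ in range(20000)]
est = estimate_from_binary(data)
```

With symmetric noise the update drives the threshold toward the median of the signal, using constant memory: one running scalar, one bit per sample.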
222

Spread of an ant-dispersed annual herb : an individual-based simulation study on population development of Melampyrum pratense L.

Winkler, Eckart, Heinken, Thilo January 2007 (has links)
The paper presents a simulation and parameter-estimation approach for evaluating stochastic patterns of population growth and spread of an annual forest herb, Melampyrum pratense (Orobanchaceae). The survival of a species during large-scale changes in land use and climate will depend, to a considerable extent, on its dispersal and colonisation abilities. Predictions on species migration need a combination of field studies and modelling efforts. Our study on the ability of M. pratense to disperse into so far unoccupied areas was based on experiments in secondary woodland in NE Germany. Experiments started in 1997 at three sites where the species was not yet present, with 300 seeds sown within one square meter. Population development was then recorded until 2001 by mapping of individuals with a resolution of 5 cm. Additional observations considered density dependence of seed production. We designed a spatially explicit individual-based computer simulation model to explain the spatial patterns of population development and to predict future population spread. Besides primary drop of seeds (barochory), it assumed secondary seed transport by ants (myrmecochory) with an exponentially decreasing dispersal tail. An important feature of population-pattern explanation was the simultaneous estimation of both population-growth and dispersal parameters from consistent spatio-temporal data sets. As the simulation model produced stochastic time series and random spatially discrete distributions of individuals, we estimated parameters by minimising the expectation of weighted sums of squares. These sums-of-squares criteria considered population sizes, radial population distributions around the area of origin, and distributions of individuals within squares of 25 × 25 cm, the range of density action. Optimal parameter values, together with the precision of the estimates, were obtained from calculating sums of squares in regular grids of parameter values.
Our modelling results showed that transport of a fraction of the seeds by ants over distances of 1–2 m was indispensable for explaining the observed population spread, which reached distances of at most 8 m from the population origin within 3 years. Projections of population development over 4 additional years gave a diffusion-like increase of population area without any “outposts”. This prediction of the simulation model constitutes a hypothesis to be tested by additional field observations. Some structural deviations between observations and model output already indicated that, for a full understanding of population spread, the set of dispersal mechanisms assumed in the model may have to be extended by additional features of plant-animal mutualism.
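The two dispersal pathways described above (barochory plus ant transport with an exponential tail) can be sketched as a one-generation simulation. This is a generic illustration, not the study's model: all parameter values (ant-transport probability, mean ant distance, drop spread) are invented, not the estimates obtained from the field data.

```python
import math, random

def disperse(parent_xy, n_seeds, p_ant=0.3, mean_ant_dist=1.5, sd_drop=0.1):
    """Two-stage dispersal sketch: primary seed drop near the parent
    (barochory) plus, with probability p_ant, secondary transport by
    ants (myrmecochory) with an exponentially distributed distance.
    Parameter values are illustrative only."""
    x0, y0 = parent_xy
    seeds = []
    for _ in range(n_seeds):
        if random.random() < p_ant:
            d = random.expovariate(1.0 / mean_ant_dist)  # exponential tail
        else:
            d = abs(random.gauss(0.0, sd_drop))          # near-parent drop
        angle = random.uniform(0.0, 2.0 * math.pi)       # isotropic direction
        seeds.append((x0 + d * math.cos(angle), y0 + d * math.sin(angle)))
    return seeds

random.seed(1)
positions = disperse((0.0, 0.0), 300)
max_dist = max(math.hypot(x, y) for x, y in positions)
```

Iterating such a step over generations, with density-dependent seed production, is what produces the simulated spatial patterns that the estimation procedure compares against the mapped field data.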
223

Model and System Inversion with Applications in Nonlinear System Identification and Control

Markusson, Ola January 2001 (has links)
No description available.
224

Topics on fractional Brownian motion and regular variation for stochastic processes

Hult, Henrik January 2003 (has links)
The first part of this thesis studies tail probabilities for elliptical distributions and probabilities of extreme events for multivariate stochastic processes. It is assumed that the tails of the probability distributions satisfy a regular variation condition. This means, roughly speaking, that there is a non-negligible probability for very large or extreme outcomes to occur. Such models are useful in applications including insurance, finance and telecommunications networks. It is shown how regular variation of the marginals, or the increments, of a stochastic process implies regular variation of functionals of the process. Moreover, the associated tail behavior in terms of a limit measure is derived. The second part of the thesis studies problems related to parameter estimation in stochastic models with long memory. Emphasis is on the estimation of the drift parameter in some stochastic differential equations driven by the fractional Brownian motion or, more generally, Volterra-type processes. Observing the process continuously, the maximum likelihood estimator is derived using a Girsanov transformation. In the case of discrete observations the study is carried out for the particular case of the fractional Ornstein-Uhlenbeck process. For this model Whittle's approach is applied to derive an estimator for all unknown parameters.
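As a toy illustration of the regular-variation setting in the first part (not taken from the thesis), the tail index alpha of a Pareto-tailed sample, for which P(X > x) = x^(-alpha) for x >= 1, can be estimated with the classical Hill estimator based on the k largest order statistics:

```python
import math, random

def hill_estimator(sample, k):
    """Hill estimator sketch for the tail index alpha of a regularly
    varying distribution, using the k largest order statistics."""
    xs = sorted(sample, reverse=True)
    logs = [math.log(x) for x in xs[:k + 1]]
    gamma = sum(logs[i] - logs[k] for i in range(k)) / k  # mean log-excess
    return 1.0 / gamma  # estimated tail index alpha

random.seed(2)
alpha_true = 2.5
# inverse-transform sampling: U^(-1/alpha) has P(X > x) = x^(-alpha), x >= 1
sample = [random.random() ** (-1.0 / alpha_true) for _ in range(50000)]
alpha_hat = hill_estimator(sample, k=2000)
```

A smaller alpha means heavier tails, i.e. a larger probability of the extreme outcomes that regular-variation models are designed to capture.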
225

Advances in Separation Science : Molecular Imprinting: Development of Spherical Beads and Optimization of the Formulation by Chemometrics

Kempe, Henrik January 2007 (has links)
An intrinsic mathematical model for simulation of fixed-bed chromatography was demonstrated and compared to more simplified models. The former model was shown to describe variations in the physical, kinetic, and operating parameters better than the latter ones. This resulted in a more reliable prediction of the chromatography process as well as a better understanding of the underlying mechanisms responsible for the separation. A procedure based on frontal liquid chromatography and a detailed mathematical model was developed to determine effective diffusion coefficients of proteins in chromatographic gels. The procedure was applied to lysozyme, bovine serum albumin, and immunoglobulin γ in Sepharose™ CL-4B. The effective diffusion coefficients were comparable to those determined by other methods. Molecularly imprinted polymers (MIPs) are traditionally prepared as irregular particles by grinding monoliths. In this thesis, a suspension polymerization providing spherical MIP beads is presented. Droplets of pre-polymerization solution were formed in mineral oil by vigorous stirring, with no need for stabilizers. The droplets were transformed into solid spherical beads by free-radical polymerization. The method is fast and the performance of the beads is comparable to that of irregular particles. Optimizing a MIP formulation requires a large number of experiments, since the number of possible combinations of the components is huge. To facilitate the optimization, chemometrics was applied. The amounts of monomer, cross-linker, and porogen were chosen as the factors in the model. Multivariate data analysis indicated the influence of the factors on the binding, and an optimized MIP composition was identified. The combined use of the suspension polymerization method to produce spherical beads with the application of chemometrics was shown in this thesis to drastically reduce the number of experiments and the time needed to design and optimize a new MIP.
226

Single-Zone Cylinder Pressure Modeling and Estimation for Heat Release Analysis of SI Engines

Klein, Markus January 2007 (has links)
Cylinder pressure modeling and heat release analysis are today important and standard tools for engineers and researchers developing and tuning new engines. Being able to accurately model and extract information from the cylinder pressure is important for the interpretation and validity of the results. The first part of the thesis treats single-zone cylinder pressure modeling, in which the specific heat ratio model is a key component; this component is therefore investigated more thoroughly. As a reference, the specific heat ratio is calculated for burned and unburned gases, assuming that the unburned mixture is frozen and that the burned mixture is at chemical equilibrium. Using the reference model in heat release analysis is too time consuming, so a set of simpler models, both existing and newly developed, is compared to it. A two-zone mean temperature model and the Vibe function are used to parameterize the mass fraction burned. The mass fraction burned is used to interpolate the specific heats of the unburned and burned mixtures and to form the specific heat ratio, which yields a cylinder pressure modeling error of the same order as the measurement noise, and fifteen times smaller than that of the model originally suggested in Gatowski et al. (1984). The computational time increases by 40 % compared to the original setting, but is reduced by a factor of 70 compared to precomputed tables from the full equilibrium program. The specific heats of the unburned mixture are captured within 0.2 % by linear functions, and those of the burned mixture within 1 % by higher-order polynomials, over the major operating range of a spark-ignited (SI) engine. In the second part, four methods for compression ratio estimation based on cylinder pressure traces are developed and evaluated on both simulated and experimental cycles.
Three of the methods rely on a model of polytropic compression for the cylinder pressure. It is shown that they give a good estimate of the compression ratio at low compression ratios, although the estimates are biased. Among these three, a method based on a variable projection algorithm with a logarithmic norm of the cylinder pressure yields the smallest confidence intervals and the shortest computational time; it is recommended when computational time is an important issue. The polytropic pressure model lacks information about heat transfer, so the estimation bias increases with the compression ratio. The fourth method includes heat transfer, crevice effects, and a commonly used heat release model for firing cycles. It estimates the compression ratio more accurately in terms of bias and variance, but is more computationally demanding; it is therefore recommended when estimation accuracy is the most important property. To estimate the compression ratio as accurately as possible, motored cycles with as high an initial pressure as possible should be used. The objective of part 3 is to develop an estimation tool for heat release analysis that is accurate, systematic and efficient. Two methods that incorporate prior knowledge of the parameter nominal values and uncertainties in a systematic manner are presented and evaluated. Method 1 uses a singular value decomposition of the estimated Hessian to reduce the number of estimated parameters one by one; the suggested number of parameters is then found as the one minimizing the Akaike final prediction error. Method 2 uses a regularization technique to include the prior knowledge in the criterion function. Method 2 gives more accurate estimates than Method 1, and prior knowledge with individually set parameter uncertainties yields more accurate and robust estimates. Once a choice of parameter uncertainty has been made, no user interaction is needed.
Method 2 is then formulated in three different versions, which differ in how they determine the strength of the regularization. The quickest version is based on ad hoc tuning and should be used when computational time is important. Another version is more accurate and more flexible to changing operating conditions, but is more computationally demanding.
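The Vibe function mentioned in the abstract has a standard closed form. As a hedged illustration (generic shape and crank-angle values, not the parameters identified in the thesis, and with the specific heat ratio interpolated directly as a simplification of the per-mixture specific-heat interpolation described above):

```python
import math

def vibe_mfb(theta, theta0, delta_theta, a=6.908, m=2.0):
    """Vibe (Wiebe) function sketch: mass fraction burned x_b as a
    function of crank angle theta. a and m are shape parameters;
    a = 6.908 gives 99.9 % burned at theta0 + delta_theta."""
    if theta < theta0:
        return 0.0
    frac = (theta - theta0) / delta_theta
    return 1.0 - math.exp(-a * frac ** (m + 1.0))

def gamma_mix(x_b, gamma_u, gamma_b):
    """Interpolate the specific heat ratio between unburned (gamma_u)
    and burned (gamma_b) values by mass fraction burned -- a crude
    stand-in for interpolating the specific heats themselves."""
    return (1.0 - x_b) * gamma_u + x_b * gamma_b

# illustrative cycle point: combustion from -20 to +20 deg, evaluated at -10
x = vibe_mfb(theta=-10.0, theta0=-20.0, delta_theta=40.0)
g = gamma_mix(x, gamma_u=1.35, gamma_b=1.25)
```

The S-shaped burn profile is what makes the interpolated ratio track the transition from unburned to burned gas properties over the combustion event.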
227

Statistical Inference in Inverse Problems

Xun, Xiaolei May 2012 (has links)
Inverse problems have recently gained popularity in statistical research. This dissertation treats two statistical inverse problems: a Bayesian approach to detecting a small low-emission source on a large random background, and parameter estimation methods for partial differential equation (PDE) models. The source detection problem arises, for instance, in some homeland security applications. We address the problem of detecting the presence and location of a small low-emission source inside an object when the background noise dominates. The goal is to reach signal-to-noise ratio levels on the order of 10^-3. We develop a Bayesian approach to this problem in two dimensions. The method allows inference not only about the existence of the source, but also about its location. We derive Bayes factors for model selection and estimation of location based on Markov chain Monte Carlo simulation. A simulation study shows that, with a sufficiently high total emission level, our method can effectively locate the source. Differential equation (DE) models are widely used to model dynamic processes in many fields. The forward problem of solving the equations for given parameters has been studied extensively. The inverse problem of estimating parameters from observed state variables, however, has received relatively little attention in the statistical literature, especially for PDE models. We propose two joint modeling schemes to estimate constant parameters in PDEs: a parameter cascading method and a Bayesian treatment. In both methods, the unknown functions are expressed via basis function expansions. For the parameter cascading method, we develop an algorithm to estimate the parameters and derive a sandwich estimator of the covariance matrix. For the Bayesian method, we develop a joint model for the data and the PDE, and describe how the Markov chain Monte Carlo technique is employed to make posterior inference.
A straightforward two-stage alternative is to first fit the data and then estimate the parameters by the least-squares principle. The three approaches are illustrated with simulated examples and compared via simulation studies. Simulation results show that the proposed methods outperform the two-stage method.
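As a toy illustration of the two-stage idea (not the dissertation's implementation; the problem, names and values are invented for the sketch), consider estimating the diffusion coefficient D in the heat equation u_t = D u_xx on [0, 1], observed through the separable solution u(x, t) = exp(-D pi^2 t) sin(pi x) at x = 0.5, where the log of the signal is linear in t with slope -D pi^2:

```python
import math, random

def two_stage_estimate_D(ts, smoothed_obs):
    """Two-stage sketch: stage 1 'smooths' the data (here, replicate
    averaging done by the caller plus a log transform); stage 2
    estimates D by ordinary least squares on the slope of log u vs t,
    since log u(0.5, t) = -D * pi^2 * t. Toy problem only."""
    ys = [math.log(u) for u in smoothed_obs]        # stage 1: transform
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))    # stage 2: OLS slope
    return -slope / math.pi ** 2

random.seed(3)
D_true = 0.1
ts = [0.05 * i for i in range(40)]
# average several noisy replicates per time point before taking logs
obs = [sum(math.exp(-D_true * math.pi**2 * t) + random.gauss(0, 0.005)
           for _ in range(50)) / 50 for t in ts]
D_hat = two_stage_estimate_D(ts, obs)
```

The weakness the abstract alludes to is visible even here: errors made in the smoothing stage propagate unchecked into the parameter estimate, which is what the joint modeling schemes are designed to avoid.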
228

The Integrated Distributed Hydrological Model, ECOFLOW - a Tool for Catchment Management

Sokrut, Nikolay January 2005 (has links)
In order to find effective measures that meet the requirements for proper groundwater quality and quantity management, there is a need to develop a Decision Support System (DSS) and a suitable modelling tool. Central components of a DSS for groundwater management are models for surface- and groundwater flow and solute transport. The most feasible approach seems to be integration of available mathematical models, together with a strategy for evaluating how uncertainty propagates through these models. The physically distributed hydrological model ECOMAG has been integrated with the groundwater model MODFLOW to form a new integrated watershed modelling system, ECOFLOW, which has been developed and embedded in ArcView. The multiple-scale modelling principle combines a more detailed representation of the groundwater flow conditions with lumped watershed modelling, characterised by simplicity in model use and a minimised number of model parameters. A Bayesian statistical downscaling procedure has also been developed and implemented in the model. This algorithm downscales the model parameters and reduces the uncertainty in the modelling results. The integrated model ECOFLOW has been applied to the Vemmenhög catchment in southern Sweden and the Örsundaån catchment in central Sweden. The applications demonstrated that the model is capable of simulating, with reasonable accuracy, the hydrological processes within both the agriculturally dominated watershed (Vemmenhög) and the forest-dominated catchment (Örsundaån). The results show that ECOFLOW adequately predicts the stream and groundwater flow distribution in these watersheds, and that it can be used as a tool for simulating surface- and groundwater processes on both local and regional scales.
A chemical module, ECOMAG-N, has been created and tested on the Vemmenhög watershed, which has a highly dense drainage system and intensive fertilisation practices. The chemical module appeared to provide reliable estimates of spatial nitrate loads in the watershed. The observed and simulated nitrogen concentrations were in close agreement at most of the reference points. Proposed future research includes further development of the model for contaminant transport in surface- and groundwater, for point and non-point source contamination modelling, and integration of the ECOFLOW model system into a planned Decision Support System.
229

Modeling and Control of Bilinear Systems : Application to the Activated Sludge Process

Ekman, Mats January 2005 (has links)
This thesis concerns modeling and control of bilinear systems (BLS). BLS are linear in the state for a fixed control and linear in the control for a fixed state, but not jointly linear in both. In the first part of the thesis, a background to BLS and their applications to modeling and control is given. The second part, which is the principal theme of this thesis, is dedicated to theoretical aspects of identification, modeling and control of mainly BLS, but also linear systems. In the last part of the thesis, applications of bilinear and linear modeling and control to the activated sludge process (ASP) are given.
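The defining product term of a bilinear system can be made concrete with a minimal scalar simulation. This is a generic sketch (coefficients and step size invented for illustration, unrelated to the activated sludge application):

```python
def simulate_bilinear(a, n_coef, b, x0, u_seq, dt=0.01):
    """Scalar bilinear system sketch: x' = a*x + n_coef*u*x + b*u.
    The dynamics are linear in x for fixed u and linear in u for fixed
    x, but the u*x product term makes them not jointly linear -- the
    defining feature of a bilinear system. Forward-Euler integration."""
    x = x0
    traj = [x]
    for u in u_seq:
        x = x + dt * (a * x + n_coef * u * x + b * u)
        traj.append(x)
    return traj

# with zero input the product term vanishes and the system reduces to
# plain exponential decay; a constant input shifts both gain and pole
free = simulate_bilinear(a=-1.0, n_coef=0.5, b=1.0, x0=1.0, u_seq=[0.0] * 100)
forced = simulate_bilinear(a=-1.0, n_coef=0.5, b=1.0, x0=1.0, u_seq=[1.0] * 100)
```

Note how the input enters twice in the forced response: additively through b*u and multiplicatively through n_coef*u*x, which effectively moves the pole. That input-dependent dynamics is what distinguishes BLS from linear systems.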
230

Characterizing the redundancy of universal source coding for finite-length sequences

Beirami, Ahmad 16 December 2011 (has links)
In this thesis, we first study the average redundancy resulting from the universal compression of a single finite-length sequence from an unknown source. For the universal compression of a source with d unknown parameters, Rissanen demonstrated that the expected redundancy of regular codes is asymptotically d/2 log n + o(log n) for almost all sources, where n is the sequence length. Clarke and Barron derived the asymptotic average minimax redundancy for memoryless sources. The average minimax redundancy concerns the redundancy of the worst parameter vector for the best code, and thus provides little information about the effect of different source parameter values. Our treatment in this thesis is probabilistic. In particular, we derive a lower bound on the probability of the event that a sequence of length n from an FSMX source chosen using Jeffreys' prior is compressed with a redundancy larger than a certain fraction of d/2 log n. Further, our results show that the average minimax redundancy is a good estimate of the average redundancy of most sources for large enough n and d. On the other hand, when the number of source parameters d is small, the average minimax redundancy overestimates the average redundancy for small to moderate length sequences. Additionally, we precisely characterize the average minimax redundancy of universal coding when the coding scheme is restricted to the family of two-stage codes, where we show that the two-stage assumption incurs a negligible redundancy for small and moderate length n unless the number of source parameters is small. Our results collectively help to characterize the non-negligible redundancy resulting from the compression of small and moderate length sequences.
Next, we apply these results to the compression of a small to moderate length sequence when the context present in a sequence of length M from the same source has been memorized. We quantify the performance improvement achievable in the universal compression of the small to moderate length sequence using context memorization.
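The leading-order redundancy quoted from Rissanen above is easy to evaluate numerically, which makes the abstract's point about short sequences concrete. The sketch below (with illustrative choices of d and n) compares the per-symbol redundancy (d/2) log2(n) / n for a short and a long sequence:

```python
import math

def asymptotic_redundancy_bits(d, n):
    """Rissanen's leading-order expected redundancy for universal
    compression of a length-n sequence from a source with d unknown
    parameters: (d/2) * log2(n) bits, ignoring the o(log n) terms."""
    return 0.5 * d * math.log2(n)

# per-symbol redundancy shrinks like log(n)/n, so it is non-negligible
# for short sequences but vanishes for long ones (d, n illustrative)
short = asymptotic_redundancy_bits(d=8, n=512) / 512
long_ = asymptotic_redundancy_bits(d=8, n=2**20) / 2**20
```

For the illustrative values, the short sequence pays about 0.07 bits of model-learning overhead per symbol, roughly a thousand times the per-symbol overhead of the long one, which is exactly why memorizing context from prior sequences can help.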
