281 |
The Integrated Distributed Hydrological Model, ECOFLOW - a Tool for Catchment Management. Sokrut, Nikolay. January 2005.
In order to find effective measures that meet the requirements for proper groundwater quality and quantity management, there is a need to develop a Decision Support System (DSS) and a suitable modelling tool. Central components of a DSS for groundwater management are thought to be models for surface- and groundwater flow and solute transport. The most feasible approach seems to be the integration of available mathematical models and the development of a strategy for evaluating uncertainty propagation through these models. The physically based, distributed hydrological model ECOMAG has been integrated with the groundwater model MODFLOW to form a new integrated watershed modelling system, ECOFLOW. The modelling system ECOFLOW has been developed and embedded in ArcView. The multiple-scale modelling principle combines a more detailed representation of the groundwater flow conditions with lumped watershed modelling, characterised by simplicity of use and a small number of model parameters. A Bayesian statistical downscaling procedure has also been developed and implemented in the model. This algorithm downscales the model parameters and reduces the uncertainty in the modelling results. The integrated model ECOFLOW has been applied to the Vemmenhög catchment in southern Sweden and the Örsundaån catchment in central Sweden. The applications demonstrated that the model is capable of simulating, with reasonable accuracy, the hydrological processes within both the agriculturally dominated watershed (Vemmenhög) and the forest-dominated catchment area (Örsundaån). The results show that the ECOFLOW model adequately predicts the stream and groundwater flow distribution in these watersheds, and that the model can be used as a tool for simulation of surface- and groundwater processes on both local and regional scales. A chemical module, ECOMAG-N, has been created and tested on the Vemmenhög watershed, with its dense drainage system and intensive fertilisation practices. The chemical module appeared to provide reliable estimates of spatial nitrate loads in the watershed. The observed and simulated nitrogen concentration values were found to be in close agreement at most of the reference points. The proposed future research includes further development of this model for contaminant transport in surface water and groundwater, for point- and non-point-source contamination modelling. Further development of the model will be oriented towards integration of the ECOFLOW model system into a planned Decision Support System.
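The kind of surface-groundwater coupling that ECOFLOW performs can be illustrated with a deliberately simple two-store water balance. The Python sketch below is only a toy analogue with invented rate constants, not the ECOMAG/MODFLOW equations.

```python
import numpy as np

# Toy daily water balance: a linear surface reservoir that percolates to a
# single groundwater store draining to the stream. Purely illustrative; the
# rate constants are invented and these are not the ECOMAG/MODFLOW equations.
n_days = 365
rng = np.random.default_rng(0)
precip = rng.exponential(2.0, n_days)          # mm/day, synthetic forcing

k_runoff, k_perc, k_base = 0.3, 0.1, 0.02      # rate constants (1/day)
S_surf, S_gw = 0.0, 50.0                       # storages (mm)
quickflow, baseflow = np.zeros(n_days), np.zeros(n_days)

for t in range(n_days):
    S_surf += precip[t]
    quickflow[t] = k_runoff * S_surf           # fast surface response
    recharge = k_perc * S_surf                 # coupling term: surface to groundwater
    S_surf -= quickflow[t] + recharge
    S_gw += recharge
    baseflow[t] = k_base * S_gw                # slow groundwater discharge
    S_gw -= baseflow[t]

streamflow = quickflow + baseflow
print(f"mean quickflow {quickflow.mean():.2f} mm/day, "
      f"mean baseflow {baseflow.mean():.2f} mm/day")
```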
|
282 |
Estimation Using Low Rank Signal Models. Mahata, Kaushik. January 2003.
Designing estimators based on low rank signal models is a common practice in signal processing. Some of these estimators are designed to use a single low rank snapshot vector, while others employ multiple snapshots. This dissertation deals with both these cases in different contexts. Separable nonlinear least squares is a popular tool to extract parameter estimates from a single snapshot vector. Asymptotic statistical properties of the separable nonlinear least squares estimates are explored in the first part of the thesis. The assumptions imposed on the noise process and the data model are general. Therefore, the results are useful in a wide range of applications. Sufficient conditions are established for consistency, asymptotic normality and statistical efficiency of the estimates. An expression for the asymptotic covariance matrix is derived and it is shown that the estimates are circular. The analysis is also extended to constrained separable nonlinear least squares problems. Nonparametric estimation of the material functions from wave propagation experiments is the topic of the second part. This is a typical application where a single snapshot vector is employed. Numerical and statistical properties of the least squares algorithm are explored in this context. Boundary conditions in the experiments are used to achieve superior estimation performance. Subsequently, a subspace based estimation algorithm is proposed. The subspace algorithm is not only computationally efficient, but is also equivalent to the least squares method in accuracy. Estimation of the frequencies of multiple real valued sine waves is the topic of the third part, where multiple snapshots are employed. A new low rank signal model is introduced. Subsequently, an ESPRIT-like method named R-Esprit and a weighted subspace fitting approach are developed based on the proposed model. When compared to ESPRIT, R-Esprit is not only computationally more economical but is also equivalent in performance. The weighted subspace fitting approach shows significant improvement in the resolution threshold. It is also robust to additive noise.
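To make the separable (variable projection) idea concrete, here is a minimal Python sketch for a single real sinusoid: for each trial frequency the linear amplitude parameters are eliminated by a least squares fit, and only the projected residual is searched over the frequency. The model, noise level and grid are illustrative choices, not taken from the thesis.

```python
import numpy as np

# Variable projection for y[t] = a*cos(w*t) + b*sin(w*t) + noise: for each
# trial frequency w the linear amplitudes (a, b) are fitted by least squares
# and eliminated, so the search is over w alone. Illustrative sketch only.
rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
w_true, a_true, b_true = 0.7, 2.0, 1.0
y = a_true * np.cos(w_true * t) + b_true * np.sin(w_true * t) \
    + 0.5 * rng.standard_normal(n)

def projected_residual(w):
    A = np.column_stack([np.cos(w * t), np.sin(w * t)])   # basis for fixed w
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)          # best linear amplitudes
    r = y - A @ coef
    return r @ r                                          # residual after projection

w_grid = np.linspace(0.05, 3.0, 3000)                     # coarse grid over frequency
w_hat = w_grid[int(np.argmin([projected_residual(w) for w in w_grid]))]
A = np.column_stack([np.cos(w_hat * t), np.sin(w_hat * t)])
a_hat, b_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"w_hat = {w_hat:.4f} (true {w_true}), a_hat = {a_hat:.2f}, b_hat = {b_hat:.2f}")
```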
|
283 |
Modeling and Control of Bilinear Systems: Application to the Activated Sludge Process. Ekman, Mats. January 2005.
This thesis concerns modeling and control of bilinear systems (BLS). BLS are linear in the state and linear in the control separately, but not jointly linear in both. In the first part of the thesis, a background to BLS and their applications to modeling and control is given. The second part, and likewise the principal theme of this thesis, is dedicated to theoretical aspects of identification, modeling and control of mainly BLS, but also linear systems. In the last part of the thesis, applications of bilinear and linear modeling and control to the activated sludge process (ASP) are given.
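A standard illustration of why the activated sludge process is naturally bilinear is a dissolved oxygen balance in which the aeration control multiplies the oxygen deficit, a state variable. The Python sketch below uses made-up parameter values and is not one of the models identified in the thesis.

```python
import numpy as np

# Dissolved oxygen (DO) balance in an aeration tank: the oxygen transfer term
# multiplies the airflow u (control) by the oxygen deficit (state), so the
# model is linear in each separately but bilinear jointly. Illustrative values.
dt, n_steps = 0.001, 5000          # time step (h) and horizon
DO_sat, R = 8.0, 40.0              # saturation DO (mg/l), respiration rate (mg/l/h)
alpha = 20.0                       # oxygen transfer gain per unit airflow (1/h)

DO, u = 2.0, 1.0                   # initial DO and constant airflow
traj = np.zeros(n_steps)
for k in range(n_steps):
    # dDO/dt = alpha*u*(DO_sat - DO) - R : the u*DO product is the bilinear term
    DO += dt * (alpha * u * (DO_sat - DO) - R)
    traj[k] = DO

print(f"steady-state DO ~ {traj[-1]:.2f} mg/l "
      f"(analytic: {DO_sat - R / (alpha * u):.2f} mg/l)")
```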
|
284 |
Characterizing the redundancy of universal source coding for finite-length sequences. Beirami, Ahmad. 16 December 2011.
In this thesis, we first study the average redundancy resulting from the universal compression of a single finite-length sequence from an unknown source. In the universal compression of a source with d unknown parameters, Rissanen demonstrated that the expected redundancy for regular codes is asymptotically (d/2) log n + o(log n) for almost all sources, where n is the sequence length. Clarke and Barron also derived the asymptotic average minimax redundancy for memoryless sources. The average minimax redundancy is concerned with the redundancy of the worst parameter vector for the best code. Thus, it does not provide much information about the effect of the different source parameter values. Our treatment in this thesis is probabilistic. In particular, we derive a lower bound on the probability measure of the event that a sequence of length n from an FSMX source chosen using Jeffreys' prior is compressed with a redundancy larger than a certain fraction of (d/2) log n. Further, our results show that the average minimax redundancy provides a good estimate of the average redundancy of most sources for large enough n and d. On the other hand, when the number of source parameters d is small, the average minimax redundancy overestimates the average redundancy for small to moderate length sequences. Additionally, we precisely characterize the average minimax redundancy of universal coding when the coding scheme is restricted to be from the family of two-stage codes, where we show that the two-stage assumption incurs a negligible redundancy for small and moderate length n unless the number of source parameters is small.
We show that redundancy is significant in the compression of small sequences. Our results, collectively, help to characterize the non-negligible redundancy resulting from the compression of small and moderate length sequences. Next, we apply these results to the compression of a small to moderate length sequence provided that the context present in a sequence of length M from the same source is memorized. We quantify the achievable performance improvement in the universal compression of the small to moderate length sequence using context memorization.
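As a rough sense of scale for the (d/2) log n term discussed above, the short Python snippet below evaluates it for a few sequence lengths and parameter counts; the values are arbitrary illustrations, not results from the thesis.

```python
import numpy as np

# Size of the leading redundancy term (d/2)*log2(n), in bits and bits/symbol,
# for a few parameter counts d and sequence lengths n. Illustrative numbers.
for d in (2, 16, 256):                  # number of unknown source parameters
    for n in (128, 1024, 64 * 1024):    # sequence length (symbols)
        redundancy_bits = 0.5 * d * np.log2(n)
        print(f"d={d:4d}  n={n:6d}  extra bits ~{redundancy_bits:8.1f} "
              f"({redundancy_bits / n:.3f} bits/symbol)")
```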
|
285 |
Networked Control System Design and Parameter Estimation. Yu, Bo. 29 September 2008.
Networked control systems (NCSs) are distributed control systems in which data are exchanged between control components via communication networks. Because of the attractive advantages of NCSs, such as reduced system wiring, low weight, and ease of system diagnosis and maintenance, research on NCSs has received much attention in recent years. The first part of the thesis (Chapter 2 - Chapter 4) is devoted to designing new controllers for NCSs by incorporating the network-induced delays. The second part (Chapter 5 - Chapter 6) addresses filtering of multirate systems and identification of Hammerstein systems.
Network-induced delays exist in both sensor-to-controller (S-C) and controller-to-actuator (C-A) links. A novel two-mode-dependent control scheme is proposed, in which the to-be-designed controller depends on both S-C and C-A delays. The resulting closed-loop system is a special jump linear system. The conditions for stochastic stability are then obtained in terms of a set of linear matrix inequalities (LMIs) with nonconvex constraints, which can be efficiently solved by a sequential LMI optimization algorithm. Further, the control synthesis problem for the NCSs is considered. The definitions of H2 and H∞ norms for the special system are first proposed. Plant uncertainties are also considered in the design. Finally, the robust mixed H2/H∞ control problem is solved within the LMI framework.
To compensate for both S-C and C-A delays modeled by Markov chains, the generalized predictive control method is modified to choose a certain predicted future control signal as the current control effort at the actuator node whenever the control signal is delayed. Further, stability criteria in terms of LMIs are provided to check the system stability. The proposed method is also tested on an experimental hydraulic position control system.
Multirate systems exist in many practical applications where different sampling rates co-exist in the same system. The l2-l∞ filtering problem for multirate systems is considered in the thesis. By using the lifting technique, the system is first transformed to a linear time-invariant one, and then the filter design is formulated as an optimization problem which can be solved by using LMI techniques.
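For readers unfamiliar with lifting, the short Python sketch below shows the basic construction for a dual-rate case, where the input is updated every base period but the output is sampled every p periods; the state-space matrices are arbitrary illustrative values, not from the thesis.

```python
import numpy as np

# Lifting a dual-rate system: the input is updated every base period, but the
# output is only available every p base periods. Stacking the p intra-frame
# inputs gives a single-rate LTI description. Matrices are arbitrary examples.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
p = 2                                   # output sampled every p input updates

# x_{k+p} = A^p x_k + sum_{i=0}^{p-1} A^(p-1-i) B u_{k+i}
A_lift = np.linalg.matrix_power(A, p)
B_lift = np.hstack([np.linalg.matrix_power(A, p - 1 - i) @ B for i in range(p)])
C_lift = C                              # output taken once per frame

print("A_lift =\n", A_lift)
print("B_lift =\n", B_lift)             # one column per intra-frame input
```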
The Hammerstein model consists of a static nonlinear block followed in series by a linear dynamic system, and finds applications in many different areas. New switching sequences to handle two-segment nonlinearities are proposed in this thesis. This leads to fewer parameters to be estimated and thus reduces the computational cost. Further, a stochastic gradient algorithm, based on the idea of replacing the unmeasurable terms with their estimates, is developed to identify the Hammerstein model with two-segment nonlinearities.
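As background, a crude stochastic-gradient (LMS) identification of a two-segment Hammerstein model can be sketched as below in Python. This over-parameterized version estimates the products of the FIR coefficients and the segment gains directly, so it is simpler and less economical than the switching-sequence algorithm developed in the thesis; all signals and parameter values are synthetic.

```python
import numpy as np

# LMS identification of a Hammerstein model with a two-segment nonlinearity
# f(u) = a_pos*u for u >= 0 and a_neg*u for u < 0, followed by an FIR block.
# The products (FIR coefficient x segment gain) are estimated directly, a
# cruder over-parameterization than the thesis's switching-sequence scheme.
rng = np.random.default_rng(2)
n = 5000
u = rng.standard_normal(n)
a_pos, a_neg = 1.0, 0.4                      # true segment gains
g = np.array([0.8, 0.5, 0.2])                # true FIR linear block
v = np.where(u >= 0, a_pos * u, a_neg * u)   # output of the static nonlinearity
y = np.convolve(v, g)[:n] + 0.05 * rng.standard_normal(n)

m = len(g)
theta = np.zeros(2 * m)                      # [g*a_pos lags | g*a_neg lags]
mu = 0.02                                    # stochastic-gradient step size
for k in range(m, n):
    u_win = u[k - m + 1:k + 1][::-1]         # u_k, u_{k-1}, ..., u_{k-m+1}
    phi = np.concatenate([np.maximum(u_win, 0.0), np.minimum(u_win, 0.0)])
    err = y[k] - phi @ theta
    theta += mu * err * phi                  # gradient update on the prediction error

print("estimated g*a_pos:", np.round(theta[:m], 2), "true:", g * a_pos)
print("estimated g*a_neg:", np.round(theta[m:], 2), "true:", g * a_neg)
```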
Finally, several open problems are listed as future research directions.
|
286 |
Timing Synchronization and Node Localization in Wireless Sensor Networks: Efficient Estimation Approaches and Performance Bounds. Ahmad, Aitzaz. 14 March 2013.
Wireless sensor networks (WSNs) consist of a large number of sensor nodes, capable of on-board sensing and data processing, that are employed to observe some phenomenon of interest. With their desirable properties of flexible deployment, resistance to harsh environments and low implementation cost, WSNs enable a plethora of applications in diverse areas such as industrial process control, battlefield surveillance, health monitoring, and target localization and tracking. Much of the sensing and communication paradigm in WSNs involves ensuring power-efficient transmission and finding scalable algorithms that can deliver the desired performance objectives while minimizing overall energy utilization. Since power is primarily consumed in radio transmissions that deliver timing information, clock synchronization represents an indispensable requirement for boosting network lifetime. This dissertation focuses on deriving efficient estimators and performance bounds for the clock parameters, both in a classical frequentist inference approach and in a Bayesian estimation framework.
A unified approach to the maximum likelihood (ML) estimation of the clock offset is presented for different network delay distributions. This constitutes an analytical alternative to prior works, which rely on a graphical maximization of the likelihood function. In order to capture the imperfections in node oscillators, which may render the clock offset time-varying, a novel Bayesian approach to clock offset estimation is proposed using factor graphs. Message passing using the max-product algorithm yields an exact expression for the Bayesian inference problem. This extends the current literature to cases where the clock offset is not deterministic, but is in fact a random process.
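For context, the classical two-way timestamp exchange under exponentially distributed random delays admits a well-known order-statistic ML estimate of the offset, (min U - min V)/2. The Python sketch below simulates this setting with made-up timing values; it illustrates the estimation problem only and is not the derivation given in the dissertation.

```python
import numpy as np

# Two-way timestamp exchange with exponential random delays. With
# U_k = T2_k - T1_k and V_k = T4_k - T3_k, a well-known ML estimate of the
# offset is (min U - min V)/2. Timing values below are invented.
rng = np.random.default_rng(3)
n_exchanges = 50
theta_true = 4.0e-3          # clock offset (s)
d_fixed = 1.0e-3             # fixed propagation/processing delay (s)
lam = 2.0e-3                 # mean of the exponential random delay (s)

X = rng.exponential(lam, n_exchanges)     # random delay, forward link
Y = rng.exponential(lam, n_exchanges)     # random delay, reverse link
U = d_fixed + theta_true + X              # observed timestamp differences
V = d_fixed - theta_true + Y

theta_ml = (U.min() - V.min()) / 2.0      # order-statistic (ML) estimate
theta_avg = (U.mean() - V.mean()) / 2.0   # naive sample-mean estimate
print(f"true {theta_true*1e3:.3f} ms, ML {theta_ml*1e3:.3f} ms, "
      f"mean-based {theta_avg*1e3:.3f} ms")
```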
A natural extension of pairwise synchronization is to develop algorithms for the more challenging case of network-wide synchronization. Assuming exponentially distributed random delays, a network-wide clock synchronization algorithm is proposed using a factor graph representation of the network. Message passing using the max-product algorithm is adopted to derive the update rules for the proposed iterative procedure. A closed form solution is obtained for each node's belief about its clock offset at each iteration.
Identifying the close connections between the problems of node localization and clock synchronization, we also address in this dissertation the problem of joint estimation of an unknown node's location and clock parameters by incorporating the effect of imperfections in node oscillators. In order to alleviate the computational complexity associated with the optimal maximum a posteriori (MAP) estimator, two iterative approaches are proposed as simpler alternatives. The first approach utilizes an Expectation-Maximization (EM) based algorithm which iteratively estimates the clock parameters and the location of the unknown node. The EM algorithm is further simplified by a nonlinear processing of the data to obtain a closed form solution of the location estimation problem using the least squares (LS) approach. The performance of the estimation algorithms is benchmarked by deriving the hybrid Cramér-Rao lower bound (HCRB) on the mean square error (MSE) of the estimators.
We also derive theoretical lower bounds on the MSE of an estimator in a classical frequentist inference approach as well as in a Bayesian estimation framework when the likelihood function is an arbitrary member of the exponential family. The lower bounds not only serve to compare various estimators in our work, but can also be useful in their own right in parameter estimation theory.
|
287 |
The effects of soil heterogeneity on the performance of horizontal ground loop heat exchangers. Simms, Richard Blake. January 2013.
Horizontal ground loop heat exchangers (GLHEs) are widely used in many countries around the world as a heat source/sink for building conditioning systems. In Canada, these systems are most common in residential buildings that do not have access to the natural gas grid, or in commercial structures where the heating and cooling loads are well balanced. These horizontal systems are often preferred over vertical systems because of the expense of drilling boreholes for the vertical systems. Current practice when sizing GLHEs is to add a considerable margin of safety. A margin of safety is required because of our poor understanding of in situ GLHE performance. One aspect of this uncertainty is how these systems interact with heterogeneous soils. To investigate the impact of soil thermal property heterogeneity on GLHE performance, a specialized finite element model was created. This code avoided some of the common, non-physical assumptions made by many horizontal GLHE models by including a representation of the complete geometry of the soil continuum and pipe network. The model was evaluated against a 400-day observation period at a field site in Elora, Ontario, and its estimates were found to be in reasonable agreement with observations. Simulations were performed on various heterogeneous conductivity fields created with GSLIB to evaluate the impact of structural heterogeneity. Through a rigorous set of experiments, heterogeneity was found to have little effect on the overall performance of horizontal ground loops over a wide range of soil types and system configurations. Other variables, such as uncertainty in the mean soil thermal conductivity, were shown to have much more impact on the uncertainty of performance than heterogeneity. The negative impact of heterogeneity was shown to be further minimized by: maintaining a 50 cm spacing between pipes in trenches; favouring multiple trenches over a single, extremely long trench; and/or using trenches greater than 1 m deep to avoid surface effects.
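A flavour of how heterogeneous conductivity fields can be generated for such experiments is given in the Python sketch below, which produces a spatially correlated lognormal field by Gaussian smoothing of white noise; the thesis itself uses GSLIB geostatistical simulation, and all parameter values here are illustrative.

```python
import numpy as np

# A spatially correlated, lognormal thermal-conductivity field built by
# smoothing white noise with a Gaussian kernel, as a simple stand-in for the
# GSLIB realizations used in the thesis. All parameter values are illustrative.
rng = np.random.default_rng(4)
nx, ny = 200, 100                      # grid cells
corr_len = 10                          # correlation length (cells)
k_mean, sigma_lnk = 1.5, 0.3           # mean conductivity (W/m/K), lognormal spread

white = rng.standard_normal((ny, nx))
x = np.arange(-3 * corr_len, 3 * corr_len + 1)
kernel = np.exp(-0.5 * (x / corr_len) ** 2)
kernel /= kernel.sum()
# Separable 2-D smoothing via two 1-D convolutions
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, white)
smooth = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, smooth)
field = (smooth - smooth.mean()) / smooth.std()           # re-standardize
conductivity = k_mean * np.exp(sigma_lnk * field - 0.5 * sigma_lnk ** 2)

print(f"mean k = {conductivity.mean():.2f} W/m/K, "
      f"min/max = {conductivity.min():.2f}/{conductivity.max():.2f}")
```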
|
288 |
Individual-based modelling of bacterial cultures in the study of the lag phase. Prats Soler, Clara. 13 June 2008.
Predictive food microbiology has become an important specific field in microbiology. Bacterial growth in a batch culture may show up to four phases: lag, exponential, stationary and death. The bacterial lag phase, which is of specific interest in the framework of predictive food microbiology, has generally been tackled with two generic approaches: at a cellular and intracellular level, which we call the microscopic scale, and at a population level, which we call the macroscopic scale. Studies at the microscopic level tackle the processes that take place inside the bacterium during its adaptation to the new conditions, such as the changes in genetic expression and in metabolism. Studies at the macroscopic scale deal with the description of a population growth cycle by means of continuous mathematical modelling and experimental measurements of the variables related to cell density evolution. In this work we aimed to improve the understanding of the lag phase in bacterial cultures and the intrinsic phenomena behind it. This has been carried out from the perspective of Individual-based Modelling (IbM) with the simulator INDISIM (INDividual DIScrete SIMulation), which has been specifically improved for this purpose. IbM introduces a mechanistic approach by modelling the cell as an individual unit. IbM simulations deal with 1 to 10^6 cells, and allow specific study of the phenomena that emerge from the interaction among cells. These phenomena belong to the mesoscopic level. Mesoscopic approaches are essential if we are to understand the effects of cellular adaptations at an individual level on the evolution of a population. Thus, they are a bridge between individuals and population or, to put it another way, between models at a microscopic scale and models at a macroscopic scale. First, we studied separately two of the several mechanisms that may cause a lag phase: the lag caused by the initial low mean mass of the inoculum, and the lag caused by a change in the nutrient source. The relationships between lag duration and several variables such as temperature and inoculum size were also checked. This analysis allowed identification of the biomass distribution as a very important variable to follow the evolution of the culture during the growth cycle. A mathematical tool, the distance functions, was defined in order to assess its evolution during the different phases of growth. A theoretical approach to the culture lag phase through the dynamics of the growth rate allowed us to split this phase into two stages: initial and transition. A continuous mathematical model was built in order to shape the transition stage, and it was checked with INDISIM simulations. It was seen that the lag phase must be understood as a dynamic process rather than as a simple period of time. The distance functions were also used to discuss the balanced growth conditions. Some of the reported INDISIM simulation results were subjected to experimental corroboration by means of flow cytometry, which allows the assessment of size distributions of a culture through time. The dynamics of biomass distribution given by INDISIM simulations were checked, as well as the distance function evolution during the different phases of growth. The coincidence between simulations and experiments is not trivial: the system under study is complex; therefore, the coincidence in the dynamics of the different modelled parameters is a validation of both the model and the simulation methodology. Finally, we have made progress in IbM parameter estimation methods, which is essential to improve quantitative processing of INDISIM simulations. Classic grid search, NMTA and NEWUOA methods were adapted and tested, the latter providing the best results in terms of time spent while maintaining satisfactory precision in the parameter estimation results. Above all, the validity of INDISIM as a useful tool to tackle transient processes such as the bacterial lag phase has been amply demonstrated.
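As an informal illustration of the individual-based idea (not INDISIM itself), the Python sketch below shows how a lag in cell numbers emerges when an inoculum of small-mass cells must grow before dividing; all parameters are invented for the example.

```python
import numpy as np

# A lag phase emerging from a small initial mean cell mass: each cell must
# roughly reach a division threshold before it splits, so cell numbers stay
# flat at first. Parameters are invented and are not INDISIM's.
rng = np.random.default_rng(5)
growth_rate, division_mass = 0.02, 1.0      # per-step relative mass gain, division threshold
masses = rng.uniform(0.3, 0.4, 100)         # inoculum of "light" cells

counts = []
for step in range(400):
    masses = masses * (1.0 + growth_rate * rng.uniform(0.8, 1.2, masses.size))
    dividing = masses >= division_mass
    daughters = masses[dividing] / 2.0      # a dividing cell splits into two halves
    masses = np.concatenate([masses[~dividing], daughters, daughters])
    counts.append(masses.size)

counts = np.array(counts)
lag_end = int(np.argmax(counts > 2 * counts[0]))   # first clear doubling of numbers
print(f"population after 400 steps: {counts[-1]}, apparent lag ~ {lag_end} steps")
```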
|
289 |
A Bayesian Approach for Inverse Problems in Synthetic Aperture Radar Imaging. Zhu, Sha. 23 October 2012.
Synthetic Aperture Radar (SAR) imaging is a well-known technique in the domains of remote sensing, aerospace surveillance, geography and mapping. To obtain high-resolution images in the presence of noise, it is very important to account for the characteristics of the targets in the observed scene, the different measurement uncertainties and the modeling errors. Conventional imaging methods are based on i) over-simplified scene models, ii) simplified linear forward modeling (the mathematical relations between the transmitted signals, the received signals and the targets) and iii) a very simple inversion based on the Inverse Fast Fourier Transform (IFFT), resulting in low-resolution, noisy images with unsuppressed speckle and high sidelobe artifacts. In this thesis, we propose a Bayesian approach to SAR imaging, which overcomes many drawbacks of the classical methods and yields higher resolution, more stable images and more accurate parameter estimation for target recognition. The proposed unifying approach is applied to inverse problems in mono-, bi- and multi-static SAR imaging, as well as to imaging of targets with micromotion. Appropriate priors for modeling different target scenes, in terms of enhancing target features during imaging, are proposed. Fast and effective estimation methods with simple and hierarchical priors are developed. The problem of hyperparameter estimation is also handled within this Bayesian framework. Results on synthetic, experimental and real data demonstrate the effectiveness of the proposed approach.
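The contrast between the IFFT-style inversion and a Bayesian MAP estimate can be illustrated on a toy 1-D deconvolution problem; in the Python sketch below a Gaussian prior reduces the MAP estimate to Tikhonov-regularized Fourier inversion. This is only an analogy for the principle, not the SAR forward model or the priors developed in the thesis.

```python
import numpy as np

# Toy 1-D deconvolution: naive Fourier-division inversion versus a Bayesian
# MAP estimate with a zero-mean Gaussian prior (Tikhonov regularization).
# Only an analogy for the principle; not the SAR forward model of the thesis.
rng = np.random.default_rng(6)
n = 256
scene = np.zeros(n)
scene[[40, 120, 200]] = [3.0, 1.5, 2.0]               # a few point targets

psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.roll(psf, -n // 2))                 # frequency response of the blur
y = np.real(np.fft.ifft(H * np.fft.fft(scene))) + 0.02 * rng.standard_normal(n)

Y = np.fft.fft(y)
naive = np.real(np.fft.ifft(Y / H))                   # unregularized inversion
lam = 1e-3                                            # prior-to-noise weight
map_est = np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))

rel_err = lambda x: np.linalg.norm(x - scene) / np.linalg.norm(scene)
print(f"relative error: naive {rel_err(naive):.2e}, MAP/Tikhonov {rel_err(map_est):.2e}")
```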
|
290 |
An Inverse Finite Element Approach for Identifying Forces in Biological Tissues. Cranston, Graham. January 2009.
For centuries physicians, scientists, engineers, mathematicians, and many others have been asking: 'what are the forces that drive tissues in an embryo to their final geometric forms?' At the tissue and whole embryo level, a multitude of very different morphogenetic processes, such as gastrulation and neurulation, are involved. However, at the cellular level, virtually all of these processes are evidently driven by a relatively small number of internal structures, all of whose forces can be resolved into equivalent interfacial tensions γ. Measuring the cell-level forces that drive specific morphogenetic events remains one of the great unsolved problems of biomechanics. Here I present a novel approach that allows these forces to be estimated from time lapse images.
In this approach, the motions of all visible triple junctions formed between trios of cells adjacent to each other in epithelia (2D cell sheets) are tracked in time-lapse images. An existing cell-based Finite Element (FE) model is then used to calculate the viscous forces needed to deform each cell in the observed way. A recursive least squares technique with variable forgetting factors is then used to estimate the interfacial tensions that would have to be present along each cell-cell interface to provide those forces, along with the attendant pressures in each cell.
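For readers unfamiliar with recursive least squares with forgetting, the Python sketch below tracks a slowly drifting parameter vector from noisy linear observations using a fixed forgetting factor; the thesis uses variable forgetting factors and regressors supplied by the FE model, so everything here (regressors, drift, noise levels) is synthetic and illustrative.

```python
import numpy as np

# Recursive least squares with a (fixed) forgetting factor tracking a slowly
# drifting 3-parameter "tension" vector from noisy linear observations. The
# regressors, drift and noise are synthetic; in the thesis the equations come
# from the finite element model and the forgetting factor is variable.
rng = np.random.default_rng(7)
n_params, n_steps = 3, 500
theta_true = np.array([1.0, 0.6, 0.8])       # initial "tensions"
lam = 0.97                                   # forgetting factor (< 1 enables tracking)

theta_hat = np.zeros(n_params)
P = 1e3 * np.eye(n_params)                   # large initial covariance
for k in range(n_steps):
    theta_true = theta_true + 0.002 * rng.standard_normal(n_params)  # slow drift
    phi = rng.standard_normal(n_params)      # regressor row (from the FE model in practice)
    y = phi @ theta_true + 0.05 * rng.standard_normal()
    # Standard RLS update with exponential forgetting
    K = P @ phi / (lam + phi @ P @ phi)
    theta_hat = theta_hat + K * (y - phi @ theta_hat)
    P = (P - np.outer(K, phi) @ P) / lam

print("final estimate:", np.round(theta_hat, 3))
print("final true    :", np.round(theta_true, 3))
```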
The algorithm is tested extensively using synthetic data from an FE model. Emphasis is placed on features likely to be encountered in data from live tissues during morphogenesis and wound healing. Those features include algorithm stability and tracking despite input noise, interfacial tensions that could change slowly or suddenly, and complications from imaging small regions of a larger epithelial tissue (the frayed boundary problem). Although the basic algorithm is highly sensitive to input noise due to the ill-conditioned nature of the system of equations that must be solved to obtain the interfacial tensions, methods are introduced to improve the resulting force and pressure estimates. The final algorithm returns very good estimates of the interfacial tensions and intracellular pressures when used with synthetic data, and it holds great promise for calculating the forces that remodel live tissue.
|