281

Characterizing the redundancy of universal source coding for finite-length sequences

Beirami, Ahmad 16 December 2011
In this thesis, we first study the average redundancy resulting from the universal compression of a single finite-length sequence from an unknown source. In the universal compression of a source with d unknown parameters, Rissanen demonstrated that the expected redundancy for regular codes is asymptotically (d/2) log n + o(log n) for almost all sources, where n is the sequence length. Clarke and Barron also derived the asymptotic average minimax redundancy for memoryless sources. The average minimax redundancy is concerned with the redundancy of the worst parameter vector for the best code, and thus does not provide much information about the effect of different source parameter values. Our treatment in this thesis is probabilistic. In particular, we derive a lower bound on the probability measure of the event that a sequence of length n from an FSMX source chosen using Jeffreys' prior is compressed with a redundancy larger than a certain fraction of (d/2) log n. Further, our results show that the average minimax redundancy provides a good estimate of the average redundancy of most sources for large enough n and d. On the other hand, when the number of source parameters d is small, the average minimax redundancy overestimates the average redundancy for small to moderate length sequences. Additionally, we precisely characterize the average minimax redundancy of universal coding when the coding scheme is restricted to the family of two-stage codes, where we show that the two-stage assumption incurs a negligible redundancy for small and moderate length n unless the number of source parameters is small. Our results, collectively, help to characterize the non-negligible redundancy resulting from the compression of small and moderate length sequences. Next, we apply these results to the compression of a small to moderate length sequence when the context present in a sequence of length M from the same source has been memorized, and we quantify the achievable performance improvement from such context memorization.
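To make the scaling concrete, here is a minimal sketch (ours, not the thesis code; the function name is an illustration) that evaluates the leading-order redundancy (d/2) log2 n in bits for a few values of n and d, showing why the overhead matters for short sequences:

```python
# A minimal sketch: leading-order average minimax redundancy (d/2)*log2(n),
# in bits, for a source with d unknown parameters and sequence length n.
import math

def minimax_redundancy_bits(d: int, n: int) -> float:
    # Leading term only; the o(log n) remainder is ignored.
    return 0.5 * d * math.log2(n)

for n in (64, 512, 4096, 32768):
    for d in (1, 8, 64):
        r = minimax_redundancy_bits(d, n)
        print(f"n={n:6d} d={d:3d}: {r:8.1f} bits total, {r / n:.4f} bits/symbol")
```

The per-symbol column shrinks only like (log n)/n, which is the sense in which the redundancy stays non-negligible for small and moderate n.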
282

Networked Control System Design and Parameter Estimation

Yu, Bo 29 September 2008
Networked control systems (NCSs) are a class of distributed control systems in which the data between control components are exchanged via communication networks. Because of the attractive advantages of NCSs, such as reduced system wiring, low weight, and ease of system diagnosis and maintenance, research on NCSs has received much attention in recent years. The first part (Chapter 2 - Chapter 4) of the thesis is devoted to designing new controllers for NCSs by incorporating the network-induced delays. The second part (Chapter 5 - Chapter 6) conducts research on filtering of multirate systems and identification of Hammerstein systems.

Network-induced delays exist in both sensor-to-controller (S-C) and controller-to-actuator (C-A) links. A novel two-mode-dependent control scheme is proposed, in which the to-be-designed controller depends on both S-C and C-A delays. The resulting closed-loop system is a special jump linear system. Conditions for stochastic stability are then obtained in terms of a set of linear matrix inequalities (LMIs) with nonconvex constraints, which can be efficiently solved by a sequential LMI optimization algorithm. Further, the control synthesis problem for the NCSs is considered. Definitions of the H2 and H∞ norms for the special system are first proposed, and plant uncertainties are considered in the design. Finally, the robust mixed H2/H∞ control problem is solved under the framework of LMIs.

To compensate for both S-C and C-A delays modeled by Markov chains, the generalized predictive control method is modified to choose a predicted future control signal as the current control effort on the actuator node whenever the control signal is delayed. Stability criteria in terms of LMIs are provided to check the system stability. The proposed method is also tested on an experimental hydraulic position control system.

Multirate systems exist in many practical applications where different sampling rates co-exist in the same system. The l2-l∞ filtering problem for multirate systems is considered in the thesis. By using the lifting technique, the system is first transformed into a linear time-invariant one, and the filter design is then formulated as an optimization problem which can be solved using LMI techniques.

A Hammerstein model consists of a static nonlinear block followed in series by a linear dynamic system, and finds applications in many different areas. New switching sequences to handle two-segment nonlinearities are proposed in this thesis. This leads to fewer parameters to be estimated and thus reduces the computational cost. Further, a stochastic gradient algorithm based on the idea of replacing the unmeasurable terms with their estimates is developed to identify the Hammerstein model with two-segment nonlinearities.

Finally, several open problems are listed as future research directions.
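As an illustration of the last idea, the following hedged sketch (ours, not the thesis algorithm; the system and all values are invented) identifies a Hammerstein model with a two-segment piecewise-linear input nonlinearity by a normalized stochastic-gradient update, using a switching regressor so the unmeasurable inner signal never has to be reconstructed explicitly:

```python
# Hedged sketch: stochastic-gradient identification of a Hammerstein model
# with a two-segment nonlinearity. The sign of u selects which segment of the
# nonlinearity is active, so both segments enter the regressor directly.
import numpy as np

rng = np.random.default_rng(0)
a1, b1 = -0.7, 0.5          # linear block: y(t) = -a1*y(t-1) + b1*x(t-1) + e(t)
m1, m2 = 1.0, 2.0           # two-segment gains: x = m1*u (u >= 0), m2*u (u < 0)

N = 5000
u = rng.standard_normal(N)
x = np.where(u >= 0.0, m1 * u, m2 * u)       # inner (unmeasured) signal
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a1 * y[t - 1] + b1 * x[t - 1] + 0.01 * rng.standard_normal()

# theta = [a1, b1*m1, b1*m2]; the switch u >= 0 picks the active segment.
theta = np.zeros(3)
r = 1.0
for t in range(1, N):
    h = 1.0 if u[t - 1] >= 0.0 else 0.0
    phi = np.array([-y[t - 1], u[t - 1] * h, u[t - 1] * (1.0 - h)])
    r += phi @ phi                            # accumulated normalizing gain
    theta += phi * (y[t] - phi @ theta) / r   # stochastic-gradient update

print("estimated [a1, b1*m1, b1*m2]:", theta)  # should approach [-0.7, 0.5, 1.0]
```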
283

Timing Synchronization and Node Localization in Wireless Sensor Networks: Efficient Estimation Approaches and Performance Bounds

Ahmad, Aitzaz 14 March 2013
Wireless sensor networks (WSNs) consist of a large number of sensor nodes, capable of on-board sensing and data processing, that are employed to observe some phenomenon of interest. With their desirable properties of flexible deployment, resistance to harsh environments and lower implementation cost, WSNs envisage a plethora of applications in diverse areas such as industrial process control, battlefield surveillance, health monitoring, and target localization and tracking. Much of the sensing and communication paradigm in WSNs involves ensuring power-efficient transmission and finding scalable algorithms that can deliver the desired performance objectives while minimizing overall energy utilization. Since power is primarily consumed in radio transmissions delivering timing information, clock synchronization represents an indispensable requirement to boost network lifetime. This dissertation focuses on deriving efficient estimators and performance bounds for the clock parameters in a classical frequentist inference approach as well as in a Bayesian estimation framework. A unified approach to the maximum likelihood (ML) estimation of clock offset is presented for different network delay distributions. This constitutes an analytical alternative to prior works, which rely on a graphical maximization of the likelihood function. In order to capture the imperfections in node oscillators, which may render a time-varying nature to the clock offset, a novel Bayesian approach to clock offset estimation is proposed using factor graphs. Message passing using the max-product algorithm yields an exact expression for the Bayesian inference problem. This extends the current literature to cases where the clock offset is not deterministic, but is in fact a random process. A natural extension of pairwise synchronization is to develop algorithms for the more challenging case of network-wide synchronization. Assuming exponentially distributed random delays, a network-wide clock synchronization algorithm is proposed using a factor graph representation of the network. Message passing using the max-product algorithm is adopted to derive the update rules for the proposed iterative procedure. A closed-form solution is obtained for each node's belief about its clock offset at each iteration. Identifying the close connections between the problems of node localization and clock synchronization, this dissertation also addresses the problem of joint estimation of an unknown node's location and clock parameters by incorporating the effect of imperfections in node oscillators. In order to alleviate the computational complexity associated with the optimal maximum a posteriori estimator, two iterative approaches are proposed as simpler alternatives. The first approach utilizes an Expectation-Maximization (EM) based algorithm which iteratively estimates the clock parameters and the location of the unknown node. The EM algorithm is further simplified by a non-linear processing of the data to obtain a closed-form solution of the location estimation problem using the least squares (LS) approach. The performance of the estimation algorithms is benchmarked by deriving the hybrid Cramér-Rao lower bound (HCRB) on the mean square error (MSE) of the estimators. We also derive theoretical lower bounds on the MSE of an estimator in a classical frequentist inference approach as well as in a Bayesian estimation framework when the likelihood function is an arbitrary member of the exponential family.
The lower bounds not only serve to compare various estimators in our work, but can also be useful in their own right in parameter estimation theory.
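For the exponential-delay setting the dissertation assumes, a commonly cited ML estimator of the clock offset from N two-way message exchanges is half the difference of the minimum one-way delay observations in each direction. A hedged sketch (all timestamps and values synthetic, purely illustrative):

```python
# Hedged sketch: ML clock-offset estimation for two-way exchanges with
# exponentially distributed random delays. T1/T3 are send times, T2/T4
# receive times; the offset is node B's clock minus node A's clock.
import numpy as np

rng = np.random.default_rng(1)
true_offset, prop_delay, rate = 3.2, 1.0, 2.0   # offset, fixed delay, exp rate
N = 50                                           # number of two-way exchanges

T1 = np.cumsum(rng.uniform(5, 10, N))            # send times at node A
X = rng.exponential(1 / rate, N)                 # random delay A -> B
T2 = T1 + prop_delay + X + true_offset           # receive times at node B
T3 = T2 + rng.uniform(0.1, 0.5, N)               # reply times at node B
Y = rng.exponential(1 / rate, N)                 # random delay B -> A
T4 = T3 + prop_delay + Y - true_offset           # receive times at node A

# ML estimate: half the difference of the minimum observed one-way delays,
# which cancels the fixed propagation delay and most of the random delay.
offset_hat = 0.5 * (np.min(T2 - T1) - np.min(T4 - T3))
print(f"true offset {true_offset:.3f}, ML estimate {offset_hat:.3f}")
```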
284

The effects of soil heterogeneity on the performance of horizontal ground loop heat exchangers

Simms, Richard Blake January 2013
Horizontal ground loop heat exchangers (GLHEs) are widely used in many countries around the world as a heat source/sink for building conditioning systems. In Canada, these systems are most common in residential buildings that do not have access to the natural gas grid, or in commercial structures where the heating and cooling loads are well balanced. These horizontal systems are often preferred over vertical systems because of the expense of drilling boreholes for the vertical systems. Current practice when sizing GLHEs is to add a considerable margin of safety, which is required because of our poor understanding of in situ GLHE performance. One aspect of this uncertainty is how these systems interact with heterogeneous soils. To investigate the impact of soil thermal property heterogeneity on GLHE performance, a specialized finite element model was created. This code avoids some of the common, non-physical assumptions made by many horizontal GLHE models by including a representation of the complete geometry of the soil continuum and pipe network. The model was evaluated against a 400-day observation period at a field site in Elora, Ontario, and its estimates were found to reach reasonable agreement with observations. Simulations were performed on various heterogeneous conductivity fields created with GSLIB to evaluate the impact of structural heterogeneity. Through a rigorous set of experiments, heterogeneity was found to have little effect on the overall performance of horizontal ground loops over a wide range of soil types and system configurations. Other variables, such as uncertainty in the mean soil thermal conductivity, were shown to have much more impact on the uncertainty of performance than heterogeneity. The negative impact of heterogeneity was shown to be further minimized by: maintaining a 50 cm spacing between pipes in trenches; favouring multiple trenches over a single, extremely long trench; and/or using trenches greater than 1 m deep to avoid surface effects.
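As a rough illustration of the heterogeneous inputs involved, the sketch below generates a spatially correlated log-normal thermal-conductivity field. Note this is a simple filtered-noise stand-in, not GSLIB's sequential Gaussian simulation, and every value (grid, correlation length, statistics) is an assumption of ours:

```python
# Hedged sketch: a spatially correlated log-normal conductivity field built
# by Gaussian-filtering white noise and rescaling to a target log-variance.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
nx, ny, dx = 200, 100, 0.25           # grid cells and cell size in metres
corr_len = 5.0                        # correlation length in metres (assumed)
mean_k, sigma_lnk = 1.5, 0.3          # nominal conductivity (W/m/K), log-std

field = gaussian_filter(rng.standard_normal((ny, nx)), corr_len / dx)
field *= sigma_lnk / field.std()              # rescale to target log-variance
k = mean_k * np.exp(field - field.mean())     # log-normal conductivity field

print(f"k: min {k.min():.2f}, mean {k.mean():.2f}, max {k.max():.2f} W/m/K")
```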
285

Individual-based modelling of bacterial cultures in the study of the lag phase

Prats Soler, Clara 13 June 2008
Predictive food microbiology has become an important specific field in microbiology. Bacterial growth of a batch culture may show up to four phases: lag, exponential, stationary and death. The bacterial lag phase, which is of specific interest in the framework of predictive food microbiology, has generally been tackled with two generic approaches: at a cellular and intracellular level, which we call the microscopic scale, and at a population level, which we call the macroscopic scale. Studies at the microscopic level tackle the processes that take place inside the bacterium during its adaptation to the new conditions, such as changes in genetic expression and in metabolism. Studies at the macroscopic scale deal with the description of a population growth cycle by means of continuous mathematical modelling and experimental measurements of the variables related to cell density evolution.

In this work we aimed to improve the understanding of the lag phase in bacterial cultures and the intrinsic phenomena behind it. This has been carried out from the perspective of Individual-based Modelling (IbM) with the simulator INDISIM (INDividual DIScrete SIMulation), which has been specifically improved for this purpose. IbM introduces a mechanistic approach by modelling the cell as an individual unit. IbM simulations deal with 1 to 10^6 cells, and allow specific study of the phenomena that emerge from the interaction among cells. These phenomena belong to the mesoscopic level. Mesoscopic approaches are essential if we are to understand the effects of cellular adaptations at an individual level on the evolution of a population. Thus, they are a bridge between individuals and population or, to put it another way, between models at a microscopic scale and models at a macroscopic scale.

First, we studied separately two of the several mechanisms that may cause a lag phase: the lag caused by the initial low mean mass of the inoculum, and the lag caused by a change in the nutrient source. The relationships between lag duration and several variables, such as temperature and inoculum size, were also checked. This analysis allowed identification of the biomass distribution as a very important variable for following the evolution of the culture during the growth cycle. A mathematical tool, the distance functions, was defined in order to assess its evolution during the different phases of growth.

A theoretical approach to the culture lag phase through the dynamics of the growth rate allowed us to split this phase into two stages: initial and transition. A continuous mathematical model was built in order to shape the transition stage, and it was checked with INDISIM simulations. It was seen that the lag phase must be understood as a dynamic process rather than as a simple period of time described by a single parameter. The distance functions were also used to discuss balanced growth conditions.

Some of the reported INDISIM simulation results were subjected to experimental corroboration by means of flow cytometry, which allows the assessment of the size distributions of a culture through time. The dynamics of the biomass distribution given by INDISIM simulations were checked, as well as the evolution of the distance functions during the different phases of growth. The coincidence between simulations and experiments is not trivial: the system under study is complex, so this agreement validates both the model and the simulation methodology.

Finally, we have made progress in IbM parameter estimation methods, which is essential for quantitative use of INDISIM simulations. The classic grid search, NMTA and NEWUOA methods were adapted and tested; the latter provided the best results in terms of computation time while maintaining satisfactory precision in the estimated parameter values. Above all, the validity of INDISIM as a useful tool for tackling transient processes such as the bacterial lag phase has been amply demonstrated.
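A toy illustration of the inoculum-mass mechanism (ours, not INDISIM; all rates and thresholds are invented): cells grow in mass and split at a threshold, and an inoculum with a small mean mass shows a clear population-level lag before growth takes off.

```python
# Toy individual-based sketch: each cell gains mass multiplicatively and
# divides when it crosses a threshold; a low-mass inoculum delays the first
# divisions, which appears at the population level as a lag phase.
import numpy as np

rng = np.random.default_rng(7)
growth_rate, division_mass = 0.05, 2.0         # per-step growth, split threshold

def simulate(initial_mean_mass: float, steps: int = 150) -> list[int]:
    masses = rng.uniform(0.5, 1.0, 100) * initial_mean_mass
    counts = []
    for _ in range(steps):
        masses = masses * (1.0 + growth_rate)          # every cell grows
        dividing = masses >= division_mass             # cells past threshold split
        daughters = masses[dividing] / 2.0
        masses = np.concatenate([masses[~dividing], daughters, daughters])
        counts.append(masses.size)
    return counts

for m0 in (0.4, 1.6):                                  # small vs large mean mass
    counts = simulate(m0)
    lag = next(i for i, c in enumerate(counts) if c > 120)  # ~20% growth
    print(f"initial mean mass {m0}: population first exceeds 120 at step {lag}")
```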
286

A Bayesian Approach for Inverse Problems in Synthetic Aperture Radar Imaging

Zhu, Sha 23 October 2012
Synthetic Aperture Radar (SAR) imaging is a well-known technique in the domains of remote sensing, aerospace surveillance, geography and mapping. To obtain high-resolution images under noise, it becomes very important to take into account the characteristics of the targets in the observed scene, the different measurement uncertainties and the modeling errors. Conventional imaging methods are based on i) over-simplified scene models, ii) a simplified linear forward model (the mathematical relations between the transmitted signals, the received signals and the targets) and iii) a very simplified Inverse Fast Fourier Transform (IFFT) for the inversion, resulting in low-resolution, noisy images with unsuppressed speckle and high sidelobe artifacts. In this thesis, we propose a Bayesian approach to SAR imaging, which overcomes many drawbacks of classical methods and brings higher resolution, more stable images and more accurate parameter estimation for target recognition. The proposed unifying approach is used for inverse problems in mono-, bi- and multi-static SAR imaging, as well as for micromotion target imaging. Appropriate priors for modeling different target scenes, in terms of target feature enhancement during imaging, are proposed. Fast and effective estimation methods with simple and hierarchical priors are developed. The problem of hyperparameter estimation is also handled within this Bayesian framework. Results on synthetic, experimental and real data demonstrate the effectiveness of the proposed approach.
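A hedged one-dimensional analogue of the contrast drawn above (ours, not the thesis code): a Gaussian-prior MAP (Tikhonov) estimate versus naive inverse filtering for a blurred, noisy observation of a sparse scene. The forward model and all values are illustrative assumptions:

```python
# Hedged sketch: Bayesian (MAP) inversion vs naive inverse filtering for a
# 1-D linear forward model y = H x + noise, standing in for the simplified
# linear SAR forward model discussed in the abstract.
import numpy as np

rng = np.random.default_rng(3)
n = 128
x = np.zeros(n); x[[20, 45, 90]] = [1.0, -0.7, 0.5]      # sparse "scene"

h = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2); h /= h.sum()  # blur kernel
H = np.array([np.roll(np.pad(h, (0, n - h.size)), k - 5) for k in range(n)])
y = H @ x + 0.02 * rng.standard_normal(n)

x_naive = np.linalg.solve(H + 1e-9 * np.eye(n), y)        # unstable inversion
lam = 0.05                                                # prior/noise ratio
x_map = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)  # Gaussian-prior MAP

for name, est in (("naive", x_naive), ("MAP", x_map)):
    print(f"{name:5s} reconstruction error: {np.linalg.norm(est - x):.3f}")
```

The naive inversion amplifies noise through the near-singular forward operator, while the prior term regularizes the estimate; richer priors (sparsity, hierarchical) refine this same structure.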
287

An Inverse Finite Element Approach for Identifying Forces in Biological Tissues

Cranston, Graham January 2009
For centuries physicians, scientists, engineers, mathematicians, and many others have been asking: 'what are the forces that drive tissues in an embryo to their final geometric forms?' At the tissue and whole-embryo level, a multitude of very different morphogenetic processes, such as gastrulation and neurulation, are involved. However, at the cellular level, virtually all of these processes are evidently driven by a relatively small number of internal structures, all of whose forces can be resolved into equivalent interfacial tensions γ. Measuring the cell-level forces that drive specific morphogenetic events remains one of the great unsolved problems of biomechanics. Here I present a novel approach that allows these forces to be estimated from time-lapse images. In this approach, the motions of all visible triple junctions formed between trios of adjacent cells in epithelia (2D cell sheets) are tracked in time-lapse images. An existing cell-based finite element (FE) model is then used to calculate the viscous forces needed to deform each cell in the observed way. A recursive least squares technique with variable forgetting factors is then used to estimate the interfacial tensions that would have to be present along each cell-cell interface to provide those forces, along with the attendant pressures in each cell. The algorithm is tested extensively using synthetic data from an FE model. Emphasis is placed on features likely to be encountered in data from live tissues during morphogenesis and wound healing, including algorithm stability and tracking despite input noise, interfacial tensions that change slowly or suddenly, and complications from imaging small regions of a larger epithelial tissue (the frayed boundary problem). Although the basic algorithm is highly sensitive to input noise, due to the ill-conditioned nature of the system of equations that must be solved to obtain the interfacial tensions, methods are introduced to improve the resulting force and pressure estimates. The final algorithm returns very good estimates for interfacial tensions and intracellular pressures when used with synthetic data, and it holds great promise for calculating the forces that remodel live tissue.
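The estimation step named above, recursive least squares with a forgetting factor, can be sketched as follows (ours, not the thesis implementation; the drifting three-parameter system is invented for illustration):

```python
# Hedged sketch: recursive least squares (RLS) with a forgetting factor,
# tracking a slowly drifting parameter vector from noisy linear observations,
# analogous to tracking time-varying interfacial tensions.
import numpy as np

rng = np.random.default_rng(5)
p = 3                                    # number of parameters (e.g. tensions)
theta_true = np.array([1.0, 0.5, -0.8])

theta = np.zeros(p)
P = 1e3 * np.eye(p)                      # large initial covariance
lam = 0.98                               # forgetting factor < 1 enables tracking

for t in range(500):
    theta_true += 0.001 * rng.standard_normal(p)   # slow drift in the "tensions"
    phi = rng.standard_normal(p)                   # regressor (e.g. FE model row)
    y = phi @ theta_true + 0.05 * rng.standard_normal()
    K = P @ phi / (lam + phi @ P @ phi)            # gain
    theta = theta + K * (y - phi @ theta)          # update estimate
    P = (P - np.outer(K, phi) @ P) / lam           # update covariance

print("true :", np.round(theta_true, 3))
print("RLS  :", np.round(theta, 3))
```

Smaller lam discounts old data faster, which improves tracking of sudden tension changes at the cost of more noise sensitivity, the trade-off the thesis's variable forgetting factors address.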
288

Performance comparison of the Extended Kalman Filter and the Recursive Prediction Error Method / Jämförelse mellan Extended Kalmanfiltret och den Rekursiva Prediktionsfelsmetoden

Wiklander, Jonas January 2003
In several projects within ABB there is a need for state and parameter estimation for nonlinear dynamic systems. One example is a project investigating optimisation of gas turbine operation. In a gas turbine there are several parameters and states which are not measured but are crucial for performance, such as polytropic efficiencies in compressor and turbine stages, cooling mass flows, friction coefficients and temperatures. Different methods are being tested to solve this problem of system identification or parameter estimation. This thesis describes the implementation of such a method and compares it with previously implemented identification methods. The comparison is carried out in the context of parameter estimation in gas turbine models, a dynamic load model used in power systems, as well as models of other dynamic systems. Both simulated and real plant measurements are used in the study.
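One standard way to pose joint state and parameter estimation with an EKF is state augmentation, sketched below on an invented scalar system (our illustration of the general idea, not the thesis implementation or a gas turbine model):

```python
# Hedged sketch: joint state/parameter estimation by augmenting the state
# with the unknown parameter and running an extended Kalman filter (EKF).
import numpy as np

rng = np.random.default_rng(9)
a_true = 0.85                             # unknown parameter to estimate
x_true = 1.0

# Augmented state z = [x, a]; model: x(t+1) = a*x(t) + 0.1 + w, a(t+1) = a.
z = np.array([0.0, 0.5])                  # initial guesses
P = np.diag([1.0, 1.0])
Q = np.diag([1e-3, 1e-6])                 # tiny noise on 'a' lets it adapt
R = 1e-2

for t in range(300):
    x_true = a_true * x_true + 0.1 + 0.03 * rng.standard_normal()
    y = x_true + 0.1 * rng.standard_normal()

    # Predict: f(z) = [a*x + 0.1, a], Jacobian F = [[a, x], [0, 1]]
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0] + 0.1, z[1]])
    P = F @ P @ F.T + Q

    # Update with measurement y = x + v, H = [1, 0]
    H = np.array([1.0, 0.0])
    K = P @ H / (H @ P @ H + R)
    z = z + K * (y - z[0])
    P = P - np.outer(K, H) @ P

print(f"true a = {a_true}, EKF estimate = {z[1]:.3f}")
```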
289

Fysikalisk modellering av klimat i entreprenadmaskin / Physical Modeling of Climate in Construction Vehicles

Nilsson, Sebastian January 2005
This master's thesis concerns a modeling project performed at Volvo Technology in Gothenburg, Sweden. The main purpose of the project has been to develop a physical model of the climate in construction vehicles that can later be used in the development of an electronic climate controller. The focus of the work has been on one type of wheel loader and one type of excavator, with the temperature inside the compartment taken to represent the climate. Based on physical theories of air flow and heat transfer, relations between the components in the climate unit and the compartment have been derived, and parameters with unknown values have been estimated. The relations have then been implemented in the modeling tool Simulink. The model has been validated by comparing measured data with modeled values, using the root mean square error and the correlation. A sensitivity analysis has been performed by varying the estimated parameters and observing the change in the output signal, i.e. the compartment temperature. The validation shows that the factor with the greatest influence on the temperature in the vehicle is the airflow through the climate unit and the outlets: minor changes in airflow result in major changes in temperature. The validation principally shows that the model gives a good estimate of the temperature in the compartment. The static values of the model differ from the measured data, but are regarded as being within an acceptable margin of error. The weakness of the model is mainly its prediction of the dynamics, which does not correlate satisfactorily with the data.
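The validation metrics mentioned, root mean square error and correlation between measured and modeled temperature, can be computed as in this small sketch (synthetic data, purely illustrative):

```python
# Minimal sketch of the validation metrics: RMSE and Pearson correlation
# between a measured and a modelled compartment-temperature trace.
import numpy as np

rng = np.random.default_rng(11)
t = np.linspace(0, 3600, 721)                         # one hour, 5 s samples
measured = 22 + 3 * np.exp(-t / 900)                  # cooling compartment
modelled = 22 + 3 * np.exp(-t / 800) + 0.2 * rng.standard_normal(t.size)

rmse = np.sqrt(np.mean((modelled - measured) ** 2))
corr = np.corrcoef(measured, modelled)[0, 1]
print(f"RMSE = {rmse:.2f} K, correlation = {corr:.3f}")
```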