1

Optimal designs for two-colour microarray experiments.

Sanchez, Penny S. January 2010 (has links)
My PhD research focuses on the recommendation of optimal designs for two-colour microarray experiments. Two-colour microarrays are a technology used to investigate the behaviour of many thousands of genes in a single experiment. This technology has created the potential for making significant advances in the field of bioinformatics. Careful statistical design is crucial to realize the full potential of microarray technology. My research has focused on the recommendation of designs that are optimal in terms of precision for effects that are of scientific interest, making the most effective use of available resources. Based on statistical efficiency, the optimality criterion used is Pareto optimality. A design is defined to be Pareto optimal if there is no other design that leads to equal or greater precision for each effect of scientific interest and strictly greater precision for at least one. My PhD thesis was submitted in June and key aspects of my research are summarised below. Pareto optimality enables the recommendation of designs that are particularly efficient for the effects that are of scientific interest. I have developed methodology to cater for effects of interest that correspond to contrasts rather than solely considering parameters of the statistical linear model. My approach also caters for additional experimental considerations such as contrasts that are of equal scientific interest. During my PhD, I have provided advice regarding the design of two-colour microarray experiments aimed at discovering the genetic basis of medical conditions. For large experiments, it is not feasible to examine all possible designs in an exhaustive search for Pareto optimal designs. I have adapted the multiple objective metaheuristic method of Pareto simulated annealing to the microarray context. The aim of Pareto simulated annealing is to generate an approximation to the set of Pareto optimal designs in a relatively short time. At each iteration, a sample of generating designs is used to explore the design space in an efficient way. This involves the setting of a number of Pareto simulated annealing parameters and the development of appropriate quality measures. I have developed algorithms to search systematically for the optimal values of the tuning parameters based on Pareto simulated annealing and response surface methodology. / Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 2010
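As a concrete illustration of the Pareto-optimality criterion defined in this abstract, a minimal sketch follows. The candidate designs, their labels, and their precision vectors are invented placeholders rather than designs from the thesis; the sketch only shows how dominated designs, in the sense defined above, would be filtered from a candidate set.

```python
import numpy as np

def dominates(prec_a, prec_b):
    """True if design A is at least as precise as B for every effect of
    interest and strictly more precise for at least one."""
    prec_a, prec_b = np.asarray(prec_a), np.asarray(prec_b)
    return bool(np.all(prec_a >= prec_b) and np.any(prec_a > prec_b))

def pareto_front(designs):
    """Return the designs not dominated by any other candidate.
    `designs` maps a design label to its vector of precisions
    (e.g. inverse variances) for the contrasts of interest."""
    labels = list(designs)
    return [a for a in labels
            if not any(dominates(designs[b], designs[a]) for b in labels if b != a)]

# Hypothetical precisions for three contrasts under three candidate designs.
candidates = {
    "loop":      [1.00, 1.00, 0.50],
    "reference": [0.50, 0.50, 0.50],
    "custom":    [1.20, 0.80, 0.60],
}
print(pareto_front(candidates))  # ['loop', 'custom']
```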
2

Non-linear reparameterization of complex models with applications to a microalgal heterotrophic fed-batch bioreactor

Surisetty, Kartik Unknown Date
No description available.
3

Non-linear reparameterization of complex models with applications to a microalgal heterotrophic fed-batch bioreactor

Surisetty, Kartik 06 1900 (has links)
Good process control is often critical for the economic viability of large-scale production of several commercial products. In this work, the production of biodiesel from microalgae is investigated. Successful implementation of a model-based control strategy requires the identification of a model that properly captures the biochemical dynamics of microalgae, yet is simple enough to allow its implementation for controller design. For this purpose, two model reparameterization algorithms are proposed that partition the parameter space into estimable and inestimable subspaces. Both algorithms are applied using a first-principles ODE model of a microalgal bioreactor, containing 6 states and 12 unknown parameters. Based on initial simulations, the non-linear algorithm achieved a better degree of output prediction than the linear one, at a greatly decreased computational cost. Using the parameter estimates obtained by applying the non-linear algorithm to experimental data from a fed-batch bioreactor, the possible improvement in volumetric productivity was identified. / Process Control
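A hedged sketch of the general idea behind partitioning parameters into estimable and inestimable subspaces: the singular value decomposition of an output sensitivity matrix separates the parameter directions the data can resolve from those it cannot. The sensitivity matrix below is synthetic and stands in for, rather than reproduces, the 12-parameter bioreactor model.

```python
import numpy as np

rng = np.random.default_rng(5)
n_outputs, n_params = 40, 12

# Synthetic sensitivity matrix d(outputs)/d(parameters) with only six
# well-excited directions, mimicking a model whose data cannot resolve
# every parameter individually.
S = rng.normal(size=(n_outputs, 6)) @ rng.normal(size=(6, n_params))
S += 1e-8 * rng.normal(size=S.shape)

U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
tol = 1e-6 * sigma[0]
rank = int(np.sum(sigma > tol))

print(f"{rank} estimable parameter combinations out of {n_params}")
print("estimable directions (rows of V^T):")
print(np.round(Vt[:rank], 2))
```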
4

Sequential optimal design of neurophysiology experiments

Lewi, Jeremy 31 March 2009 (has links)
For well over 200 years, scientists and doctors have been poking and prodding brains in every which way in an effort to understand how they work. The earliest pokes were quite crude, often involving permanent forms of brain damage. Though neural injury continues to be an active area of research within neuroscience, technology has given neuroscientists a number of tools for stimulating and observing the brain in very subtle ways. Nonetheless, the basic experimental paradigm remains the same: poke the brain and see what happens. For example, neuroscientists studying the visual or auditory system can easily generate any image or sound they can imagine to see how an organism or neuron will respond. Since neuroscientists can now easily design more pokes than they could ever deliver, a fundamental question is "What pokes should they actually use?" The complexity of the brain means that only a small number of the pokes scientists can deliver will produce any information about the brain. One of the fundamental challenges of experimental neuroscience is finding the right stimulus parameters to produce an informative response in the system being studied. This thesis addresses this problem by developing algorithms to sequentially optimize neurophysiology experiments. Every experiment we conduct contains information about how the brain works. Before conducting the next experiment, we should use what we have already learned to decide which experiment to perform next. In particular, we should design the experiment that will reveal the most information about the brain. At a high level, neuroscientists already perform this type of sequential, optimal experimental design; for example, crude experiments which knock out entire regions of the brain have given rise to modern experimental techniques which probe the responses of individual neurons using finely tuned stimuli. The goal of this thesis is to develop automated and rigorous methods for optimizing neurophysiology experiments efficiently and at a much finer time scale. In particular, we present methods for near-instantaneous optimization of the stimulus being used to drive a neuron.
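The sequential logic described here (use what previous trials revealed to choose the next, most informative stimulus) can be sketched in a toy setting. The one-parameter linear-Gaussian "neuron" below is an assumption for illustration only, not one of the models used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_update(mu, var, x, y, noise_var=1.0):
    """Conjugate update for y ~ Normal(theta * x, noise_var), prior theta ~ Normal(mu, var)."""
    new_var = 1.0 / (1.0 / var + x**2 / noise_var)
    new_mu = new_var * (mu / var + x * y / noise_var)
    return new_mu, new_var

def next_stimulus(candidates, noise_var=1.0):
    """For this toy model the expected information gain grows with x**2 / noise_var,
    so the most informative stimulus is simply the largest-magnitude one."""
    return max(candidates, key=lambda x: x**2 / noise_var)

theta_true = 0.7
mu, var = 0.0, 10.0                      # prior belief about the tuning parameter
candidates = np.linspace(-1.0, 1.0, 21)  # stimuli the experimenter could deliver

for trial in range(10):
    x = next_stimulus(candidates)
    y = theta_true * x + rng.normal(scale=1.0)   # simulated neural response
    mu, var = posterior_update(mu, var, x, y)

print(f"estimate {mu:.2f} +/- {np.sqrt(var):.2f}")
```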
5

Development of a Non-Intrusive Continuous Sensor for Early Detection of Fouling in Commercial Manufacturing Systems

Fernando Jose Cantarero Rivera (9183332) 31 July 2020 (has links)
Fouling is a critical issue in commercial food manufacturing. Fouling can cause biofilm formation and pose a threat to the safety of food products. Early detection of fouling can lead to informed decision making about the product's safety and quality, and effective system cleaning to avoid biofilm formation. In this study, a Non-Intrusive Continuous Sensor (NICS) was designed to estimate the thermal conductivity of the product as it flows through the system at high temperatures, as an indicator of fouling. Thermal properties of food products are important for product and process design and to ensure food safety. Online monitoring of thermal properties during production and development stages at higher processing temperatures (~140°C, as in current aseptic processes) is not possible due to limitations in sensing technology and safety concerns arising from the high temperature and pressure conditions. Such an in-line and non-invasive sensor can provide information about fouling layer formation, food safety issues, and quality degradation of the products. A computational fluid dynamics model was developed to simulate the flow within the sensor and provide predicted data output. Glycerol, water, 4% potato starch solution, reconstituted non-fat dry milk (NFDM), and heavy whipping cream (HWC) were selected as products, with the latter two used for fouling layer thickness studies. The product and fouling layer thermal conductivities were estimated at high temperatures (~140°C). Scaled sensitivity coefficients and optimal experimental design were taken into consideration to improve the accuracy of parameter estimates. Glycerol, water, 4% potato starch, NFDM, and HWC were estimated to have thermal conductivities of 0.292 ± 0.006, 0.638 ± 0.013, 0.487 ± 0.009, 0.598 ± 0.010, and 0.359 ± 0.008 W/(m·K), respectively. The thermal conductivity of the fouling layer decreased as the processing time increased. At the end of one hour of processing, the thermal conductivity reached an average minimum of 0.365 ± 0.079 W/(m·K) for NFDM fouling and 0.097 ± 0.037 W/(m·K) for HWC fouling. The sensor's novelty lies in the short duration of the experiments, the non-intrusive aspect of its measurements, and its implementation for commercial manufacturing.
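A brief sketch of the parameter-estimation machinery mentioned above (least-squares estimation together with scaled sensitivity coefficients), using a deliberately simplified steady-state conduction model. The model form, heat flux, measurement positions, and noise level are placeholder assumptions, not the NICS geometry or its data.

```python
import numpy as np
from scipy.optimize import curve_fit

def temperature(x, k, q=500.0, T_wall=140.0):
    """Toy steady 1-D slab with uniform heat flux q (W/m^2): T = T_wall + q*x/k."""
    return T_wall + q * x / k

x = np.linspace(0.001, 0.01, 20)    # measurement positions (m)
k_true = 0.6                        # W/(m K), placeholder "true" value
rng = np.random.default_rng(1)
T_obs = temperature(x, k_true) + rng.normal(scale=0.2, size=x.size)

# Nonlinear least-squares estimate of k (q and T_wall held at their defaults).
(k_hat,), cov = curve_fit(temperature, x, T_obs, p0=[1.0])
print(f"k = {k_hat:.3f} +/- {np.sqrt(cov[0, 0]):.3f} W/(m K)")

# Scaled sensitivity coefficient k * dT/dk: large magnitude relative to the
# temperature rise indicates an experiment that is informative about k.
dk = 1e-6
X_k = k_hat * (temperature(x, k_hat + dk) - temperature(x, k_hat - dk)) / (2 * dk)
print("scaled sensitivity range:", X_k.min(), X_k.max())
```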
6

Bayesian Learning in Computational Rheology: Applications to Soft Tissues and Polymers

Kedari, Sayali Ravindra 23 May 2022 (has links)
No description available.
7

Optimal Sensing and Actuation Policies for Networked Mobile Agents in a Class of Cyber-Physical Systems

Tricaud, Christophe 01 May 2010 (has links)
The main purpose of this dissertation is to define and solve problems on optimal sensing and actuating policies in Cyber-Physical Systems (CPSs). Cyber-physical system is a term that was introduced recently to describe the increasing complexity of the interactions between computational hardware and its physical environment. The problem of designing the "cyber" part may not be trivial but can be solved from scratch. However, the "physical" part, usually a natural physical process, is inherently given and has to be identified in order to propose an appropriate "cyber" part to be adopted. Therefore, one of the first steps in designing a CPS is to identify its "physical" part. The "physical" part can belong to a large array of system classes. Among the possible candidates, we focus our interest on Distributed Parameter Systems (DPSs), whose dynamics can be modeled by Partial Differential Equations (PDEs). DPSs are by nature very challenging to observe, as their states are distributed throughout the spatial domain of interest. Therefore, systematic approaches have to be developed to obtain the optimal locations of sensors to optimally estimate the parameters of a given DPS. In this dissertation, we first review the recent methods from the literature as the foundations of our contributions. Then, we define new research problems within the above optimal parameter estimation framework. Two different yet important problems considered are optimal mobile sensor trajectory planning and the accuracy effects and allocation of heterogeneous sensors. Under the remote sensing setting, we are able to determine the optimal trajectories of remote sensors. The problem of optimal robust estimation is then introduced and solved using an interlaced "online" or "real-time" scheme. Actuation policies are introduced into the framework to improve the estimation by providing the best stimulation of the DPS for optimal parameter identification, where the trajectories of both sensors and actuators are optimized simultaneously. We also introduce a new methodology for solving fractional-order optimal control problems, with which we demonstrate that we can solve optimal sensing policy problems when sensors move in complex media displaying fractional dynamics. We consider and solve the problem of optimal scale reconciliation using satellite imagery, ground measurements, and Unmanned Aerial Vehicle (UAV)-based personal remote sensing. Finally, to provide the reader with all the necessary background, the appendices contain important concepts and theorems from the literature as well as the Matlab codes used to numerically solve some of the described problems.
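As a small illustration of one classical criterion behind optimal sensor location for parameter estimation (not the specific mobile-sensor trajectory algorithms of the dissertation), the sketch below picks the subset of candidate sites that maximizes the determinant of a Fisher information matrix built from synthetic sensitivities.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_locations, n_params, n_sensors = 12, 3, 4

# Synthetic sensitivities d(output)/d(parameter) at each candidate site;
# a real DPS would supply these from its PDE model.
S = rng.normal(size=(n_locations, n_params))

def d_criterion(rows):
    """D-optimality score: det of the Fisher information for the chosen sites."""
    F = S[list(rows)].T @ S[list(rows)]
    return np.linalg.det(F)

best = max(combinations(range(n_locations), n_sensors), key=d_criterion)
print("best sensor sites:", best, " det(F) =", round(float(d_criterion(best)), 3))
```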
8

Essays on Experimental Economics

Daniel John Woods (11038146) 22 July 2021 (has links)
This thesis contains three chapters, each of which covers a different topic in experimental economics.

The first chapter investigates power and power analysis in economics experiments. Power is the probability of detecting an effect when a true effect exists, which is an important but under-considered concept in empirical research. Power analysis is the process of selecting the number of observations in order to avoid issues with low power. However, it is often not clear ex-ante what the required parameters for a power analysis, like the effect size and standard deviation, should be. This chapter considers the use of Quantal Choice/Response (QR) simulations for ex-ante power analysis, as they map related data-sets into predictions for novel environments. QR can also guide optimal design decisions, both ex-ante and ex-post for conceptual replication studies. This chapter demonstrates QR simulations on a wide variety of applications related to power analysis and experimental design.

The second chapter considers a question of interest to computer scientists and to information technology and security professionals: how do people distribute defenses over a directed network attack graph, where they must defend a critical node? Decision-makers are often subject to behavioral biases that cause them to make sub-optimal defense decisions. Non-linear probability weighting is one bias that may lead to sub-optimal decision-making in this environment. An experimental test provides support for this conjecture, and also for other empirically important biases such as naive diversification and preferences over the spatial timing of the revelation of an overall successful defense.

The third chapter analyzes how individuals resolve an exploration versus exploitation trade-off in a laboratory experiment. The experiment implements the single-agent exponential bandit model. The experiment finds that subjects respond in the predicted direction to changes in the prior belief, safe action, and discount factor. However, subjects also typically explore less than predicted. A structural model that incorporates risk preferences, base rate neglect/conservatism, and non-linear probability weighting explains the empirical findings well.
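A hedged sketch of how quantal-response simulation can feed an ex-ante power analysis, as described for the first chapter: simulate logit-QR choices under a control and a treatment payoff, run the planned test repeatedly, and report the rejection rate. The payoffs, the precision parameter, and the choice of test below are placeholder assumptions rather than the designs analyzed in the thesis.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(3)

def logit_choice(payoff_a, payoff_b, lam, n):
    """Simulate n binary choices of option A under logit quantal response with precision lam."""
    p = 1.0 / (1.0 + np.exp(-lam * (payoff_a - payoff_b)))
    return rng.binomial(1, p, size=n)

def simulated_power(n_per_cell, lam=0.5, reps=1000, alpha=0.05):
    """Fraction of simulated experiments in which the planned test rejects."""
    rejections = 0
    for _ in range(reps):
        control = logit_choice(payoff_a=1.0, payoff_b=1.0, lam=lam, n=n_per_cell)
        treated = logit_choice(payoff_a=2.0, payoff_b=1.0, lam=lam, n=n_per_cell)
        table = [[treated.sum(), n_per_cell - treated.sum()],
                 [control.sum(), n_per_cell - control.sum()]]
        if fisher_exact(table)[1] < alpha:
            rejections += 1
    return rejections / reps

for n in (20, 50, 100, 200):
    print(f"n = {n:3d} per cell  ->  power ~ {simulated_power(n):.2f}")
```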
9

Experimental planning and sequential kriging optimization using variable fidelity data

Huang, Deng 09 March 2005 (has links)
No description available.
10

Novel Application of Nondestructive Testing to Evaluate Anomalous Conditions in Drilled Shafts and the Geologic Materials Underlying Their Excavations

Kordjazi, Alireza January 2019 (has links)
Drilled shafts are deep foundation elements created by excavating cylindrical shafts into the ground and filling them with concrete. Given the types of structures they support, failure to meet their performance criteria can jeopardize public safety and cause severe financial losses. Consequently, quality control measures are warranted to ensure these foundations meet design specifications, particularly with respect to their structural integrity and geotechnical capacity. Due to their inaccessibility, non-destructive testing (NDT) techniques have received much attention for drilled shaft quality control. However, there are limitations in the NDT tools currently used for structural integrity testing. Moreover, there is no current NDT tool to evaluate conditions underlying drilled shaft excavations and aid in verifying geotechnical capacity. The main objective of this research is to examine the development of new NDT methodologies to address some of the limitations in the inspection of drilled shaft structural integrity and geotechnical conditions underlying their excavations. The use of stress waves in large laboratory models is first examined to evaluate the performance of ray-based techniques for detecting anomalies. The study then continues to investigate the improvements offered by using a full waveform inversion (FWI) approach to analyze the stress wave data. A hybrid, multi-scale FWI workflow is recommended to increase the chance of the convergence of the inversion algorithms. Additionally, the benefits of a multi-parameter FWI are discussed. Since FWI is computationally expensive, a sequential optimal experimental design (SOED) analysis is proposed to determine the optimal hardware configurations for each application. The resulting benefit-cost curves from this analysis allow for designing an NDT survey that matches the available resources for the project. / Civil Engineering
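The benefit-cost reasoning behind a sequential optimal experimental design (SOED) can be sketched as a greedy loop: at each step, add the candidate receiver whose marginal information gain per unit cost is largest, tracing out a benefit-cost curve. The sensitivities and costs below are synthetic placeholders, not the FWI survey configurations analyzed in this work.

```python
import numpy as np

rng = np.random.default_rng(4)
n_candidates, n_params = 15, 4

# Synthetic sensitivities of the recorded waveforms to the model parameters,
# and a relative deployment cost for each candidate receiver location.
S = rng.normal(size=(n_candidates, n_params))
cost = rng.uniform(1.0, 3.0, size=n_candidates)

def log_det_info(selected):
    """Log-determinant of a (regularized) Fisher information for the chosen sites."""
    rows = S[selected] if selected else np.empty((0, n_params))
    F = rows.T @ rows + 1e-6 * np.eye(n_params)
    return np.linalg.slogdet(F)[1]

selected, remaining, curve = [], list(range(n_candidates)), []
for _ in range(8):
    base = log_det_info(selected)
    best = max(remaining, key=lambda j: (log_det_info(selected + [j]) - base) / cost[j])
    selected.append(best)
    remaining.remove(best)
    curve.append((float(cost[selected].sum()), log_det_info(selected)))

for total_cost, benefit in curve:
    print(f"cost {total_cost:5.2f}   log det F {benefit:7.2f}")
```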
