11

A New Pool Boiling Facility for the Study of Nanofluids

Strack, James M. 04 1900 (has links)
Nanofluids are engineered colloidal dispersions of nanoparticles in a liquid. The field of nanofluids has attracted much interest due to reported heat transfer enhancements over the corresponding pure fluids at low particle concentrations. In particular, a large increase in critical heat flux (CHF) has been widely reported, along with modification of the boiling interface. Inconsistencies in the reported impact on nucleate boiling heat transfer and in the degree of CHF enhancement illustrate the need for further study.

A pool boiling experiment has been designed and constructed at McMaster University to allow for the study of the boiling of water-based nanofluids. The facility has been commissioned with saturated distilled water tests at atmospheric pressure, heat flux levels up to 1200 kW·m⁻², and wall superheat levels up to 19.5 °C. Wall superheat and heat flux uncertainties were estimated to be ±0.6 °C and ±20 kW·m⁻², respectively. For the installed test section, heat flux is limited to 2.62 ± 0.06 MW·m⁻². A high-speed video system for the analysis of bubble dynamics was tested and used for qualitative comparisons between experimental runs. This system was tested at 2500 FPS and an imaging resolution of 39 pixels per mm, but is capable of up to 10 000 FPS at the same spatial resolution. Heat flux versus wall superheat data were compared to the Rohsenow correlation and found to agree qualitatively using a surface factor C_sf = 0.011. Results showed a high degree of repeatability at heat flux levels above 600 kW·m⁻².

The new facility will be used to conduct studies of the pool boiling of saturated water-based nanofluids at atmospheric pressure. Additional work will involve the control and characterization of heater surface conditions before and after boiling. Quantitative analysis of bubble dynamics will be possible using high-speed video and particle image velocimetry. / Master of Applied Science (MASc)
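The Rohsenow correlation mentioned in this abstract relates nucleate boiling heat flux to wall superheat. Below is a minimal sketch of that kind of comparison, using textbook property values for saturated water at atmospheric pressure and the surface factor C_sf = 0.011 quoted above; the property values and the exponent n = 1 for water are standard assumptions, not values taken from the thesis.

```python
import numpy as np

# Rohsenow pool boiling correlation:
#   q'' = mu_l * h_fg * sqrt(g*(rho_l - rho_v)/sigma) * (c_pl*dT / (C_sf*h_fg*Pr_l**n))**3
# Approximate properties of saturated water at 1 atm (textbook values, assumed):
mu_l  = 2.79e-4   # liquid viscosity, Pa*s
h_fg  = 2.257e6   # latent heat of vaporization, J/kg
rho_l = 957.9     # liquid density, kg/m^3
rho_v = 0.60      # vapor density, kg/m^3
sigma = 0.0589    # surface tension, N/m
c_pl  = 4217.0    # liquid specific heat, J/(kg*K)
Pr_l  = 1.76      # liquid Prandtl number
g     = 9.81      # gravitational acceleration, m/s^2
n     = 1.0       # Rohsenow exponent for water
C_sf  = 0.011     # surface factor reported in the abstract

def rohsenow_heat_flux(dT_sup):
    """Nucleate boiling heat flux (W/m^2) for a given wall superheat (K)."""
    return (mu_l * h_fg * np.sqrt(g * (rho_l - rho_v) / sigma)
            * (c_pl * dT_sup / (C_sf * h_fg * Pr_l**n)) ** 3)

for dT in (5.0, 10.0, 15.0, 19.5):   # superheats spanning the commissioning tests
    print(f"dT = {dT:4.1f} K  ->  q'' = {rohsenow_heat_flux(dT)/1e3:7.1f} kW/m^2")
```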
12

Quantifying the Benefits of Immersion for Procedural Training

Sowndararajan, Ajith 04 August 2008 (has links)
Training is one of the most important and widely used applications of immersive Virtual Reality (VR). Research has shown that Immersive Virtual Environments (IVEs) are beneficial for training motor and spatial activities, but it is unclear whether IVEs are beneficial for purely mental activities, such as memorizing a procedure. In this thesis, we present two experiments that identify benefits of immersion for procedural training. The first is a between-subjects experiment comparing two levels of immersion in a procedural training task. For the higher level of immersion, we used a large L-shaped projection display; for the lower level, we used a typical laptop display. We asked participants to memorize two procedures: one simple and the other complex. We found that the higher level of immersion resulted in significantly faster task performance and reduced error for the complex procedure. As a result of the first experiment, we performed a controlled second experiment. We compared two within-subjects variables, environment and location, under treatments formed by combinations of three between-subjects variables: Software Field of View (SFOV), Physical FOV, and Field of Regard (FOR). We found that SFOV is the most essential component for learning a procedure efficiently using IVEs. We hypothesize that the higher level of immersion helped users memorize the complex procedure by providing enhanced spatial cues, leading to the development of an accurate mental map that could be used as a memory aid. / Master of Science
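As an illustration of the mixed factorial structure described in this abstract, the sketch below enumerates the experimental cells formed by crossing between-subjects and within-subjects factors; the two levels chosen for each between-subjects factor and the within-subjects conditions are hypothetical placeholders, since the abstract does not list them.

```python
from itertools import product

# Hypothetical two-level codings for the between-subjects factors (not from the thesis)
between = {
    "SFOV": ["narrow", "wide"],
    "PhysicalFOV": ["narrow", "wide"],
    "FOR": ["low", "high"],
}
# Within-subjects factors: every participant completes all combinations of these
within = {
    "environment": ["env_A", "env_B"],
    "location": ["loc_1", "loc_2"],
}

# Between-subjects treatment groups: one group of participants per combination
groups = list(product(*between.values()))
print(f"{len(groups)} between-subjects groups")

# Each group is exposed to every within-subjects condition
for g in groups:
    conditions = list(product(*within.values()))
    print(dict(zip(between, g)), "->", len(conditions), "within-subjects conditions")
```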
13

A Dual Metamodeling Perspective for Design and Analysis of Stochastic Simulation Experiments

Wang, Wenjing 17 July 2019 (has links)
Fueled by a growing number of applications in science and engineering, the development of stochastic simulation metamodeling methodologies has gained momentum in recent years. A majority of the existing methods, such as stochastic kriging (SK), focus only on efficiently metamodeling the mean response surface implied by a stochastic simulation experiment. Because the simulation outputs are stochastic, with the simulation variance varying significantly across the design space, suitable methods for variance modeling are also required. This thesis takes a dual metamodeling perspective and aims at exploiting the benefits of fitting the mean and variance functions simultaneously to achieve improved predictive performance. We first explore the effects of replacing the sample variances with various smoothed variance estimates on the performance of SK and propose a dual metamodeling approach to obtain an efficient simulation budget allocation rule. Second, we articulate the links between SK and least-squares support vector regression and propose a "dense and shallow" initial design to facilitate selection of important design points and efficient allocation of the computational budget. Third, we propose a variational Bayesian inference-based Gaussian process (VBGP) metamodeling approach to accommodate the situation where either one or multiple simulation replications are available at every design point. VBGP can fit the mean and variance response surfaces simultaneously, while taking into full account the uncertainty in the heteroscedastic variance. Lastly, we generalize VBGP for handling large-scale heteroscedastic datasets based on the idea of "transductive combination of GP experts." / Doctor of Philosophy / In solving real-world complex engineering problems, it is often helpful to learn the relationship between the decision variables and the response variables to better understand the real system of interest. Directly conducting experiments on the real system can be impossible or impractical, due to the cost or time involved. Instead, simulation models are often used as surrogates of complex stochastic systems for conducting simulation-based design and analysis. However, even simulation models can be very expensive to run. To alleviate the computational burden, a metamodel is often built from the outputs of simulation runs at selected design points to map the performance response surface as a function of the controllable decision variables, or uncontrollable environmental variables, and thereby approximate the behavior of the original simulation model. There has been a plethora of work in the simulation research community dedicated to stochastic simulation metamodeling methodologies suitable for analyzing stochastic simulation experiments in science and engineering. A majority of the existing methods, such as stochastic kriging (SK), are effective metamodeling tools for approximating a mean response surface implied by a stochastic simulation. Although SK has been extensively used as an effective metamodeling methodology for stochastic simulations, SK and similar metamodeling techniques still face four methodological barriers: 1) lack of study of variance estimation methods; 2) absence of an efficient experimental design for simultaneous mean and variance metamodeling; 3) lack of flexibility to accommodate situations where simulation replications are not available; and 4) lack of scalability.
To overcome these barriers, this thesis takes a dual metamodeling perspective and aims at exploiting the benefits of fitting the mean and variance functions simultaneously to achieve improved predictive performance. We first explore the effects of replacing the sample variances with various smoothed variance estimates on the performance of SK and propose a dual metamodeling approach to obtain an efficient simulation budget allocation rule. Second, we articulate the links between SK and least-squares support vector regression and propose a "dense and shallow" initial design to facilitate selection of important design points and efficient allocation of the computational budget. Third, we propose a variational Bayesian inference-based Gaussian process (VBGP) metamodeling approach to accommodate the situation where either one or multiple simulation replications are available at every design point. VBGP can fit the mean and variance response surfaces simultaneously, while taking into full account the uncertainty in the heteroscedastic variance. Lastly, we generalize VBGP for handling large-scale heteroscedastic datasets based on the idea of "transductive combination of GP experts."
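A rough sketch of the dual mean/variance idea under simplifying assumptions: a Gaussian process with per-point intrinsic noise (in the spirit of stochastic kriging) is fit to the sample means, with the noise levels taken from a second GP that smooths the log sample variances. The scikit-learn API and the toy simulator are assumptions for illustration, not the methods developed in the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def simulate(x, n_reps):
    """Toy stochastic simulation: heteroscedastic noise around a smooth mean."""
    mean, std = np.sin(3 * x), 0.1 + 0.4 * x          # true (unknown) surfaces
    return mean + std * rng.standard_normal(n_reps)

X = np.linspace(0, 1, 15)[:, None]                     # design points
n_reps = 20                                            # replications per point
runs = np.array([simulate(x[0], n_reps) for x in X])
ybar, s2 = runs.mean(axis=1), runs.var(axis=1, ddof=1) # sample means and variances

# Variance metamodel: smooth the log sample variances with an ordinary GP
gp_var = GaussianProcessRegressor(ConstantKernel() * RBF(), alpha=1e-3)
gp_var.fit(X, np.log(s2))
smoothed_var = np.exp(gp_var.predict(X))

# Mean metamodel: GP on sample means with per-point intrinsic noise Var/n_reps;
# passing smoothed variances instead of raw ones is the "dual" twist sketched here
gp_mean = GaussianProcessRegressor(ConstantKernel() * RBF(),
                                   alpha=smoothed_var / n_reps)
gp_mean.fit(X, ybar)

Xnew = np.linspace(0, 1, 5)[:, None]
pred, pred_std = gp_mean.predict(Xnew, return_std=True)
print(np.c_[Xnew, pred, pred_std])
```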
14

Non-intrusive Methods for Mode Estimation in Power Systems using Synchrophasors

Peric, Vedran January 2016 (has links)
Real-time monitoring of electromechanical oscillations is of great significance for power system operators; to this end, software solutions (algorithms) that use synchrophasor measurements have been developed. This thesis investigates different approaches for improving the mode estimation process by offering new methods and deepening the understanding of its different stages. One of the problems tackled in this thesis is the selection of the synchrophasor signals used as input for mode estimation. The proposed selection is performed using a quantitative criterion based on the variance of the critical mode estimate. The proposed criterion and the associated selection method offer a systematic and quantitative approach to PMU signal selection. The thesis also analyzes methods for model order selection used in mode estimation. Further, the negative effects of forced oscillations and non-white-noise random load changes on mode estimation results are addressed by exploiting the intrinsic power system property that the characteristics of electromechanical modes are predominantly determined by the power generation and transmission network. Improved accuracy of the mode estimation process can be obtained by intentionally injecting a probing disturbance. The thesis presents an optimization method that finds the optimal spectrum of the probing signals. In addition, the probing signal with the optimal spectrum is generated subject to arbitrary time-domain signal constraints that can be imposed by various probing-signal-generating devices. Finally, the thesis provides a comprehensive description of a practical implementation of a real-time mode estimation tool. This includes a description of the hardware, software architecture and graphical user interface, as well as details of the most important components, such as Statnett's SDK, which allows easy access to synchrophasor data streams. / The doctoral degrees issued upon completion of the programme are awarded by Comillas Pontifical University, Delft University of Technology and KTH Royal Institute of Technology. The degrees are official in Spain, the Netherlands and Sweden, respectively. QC 20160218 / FP7 iTesla
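A common non-intrusive approach to mode estimation is to fit a linear time-series model to a measured signal and read modal frequency and damping from its poles. The sketch below applies a plain least-squares AR fit to a synthetic damped oscillation sampled at a typical synchrophasor reporting rate; the signal, reporting rate and model order are assumptions for illustration, not the algorithms developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 10.0                                  # assumed synchrophasor reporting rate, Hz
t = np.arange(0.0, 60.0, 1.0 / fs)

# Synthetic "measurement": a 0.3 Hz inter-area mode with 5% damping plus noise
f0, zeta = 0.3, 0.05
wn = 2 * np.pi * f0
y = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)
y += 0.02 * rng.standard_normal(t.size)

# Least-squares fit of an AR(n) model: y[k] = sum_i a_i * y[k-i] + e[k]
n = 10
Phi = np.column_stack([y[n - 1 - i : len(y) - 1 - i] for i in range(n)])
a, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)

# Discrete-time poles -> continuous-time modes (frequency and damping ratio)
poles = np.roots(np.r_[1.0, -a]).astype(complex)
s = np.log(poles) * fs
for sk in s:
    f = sk.imag / (2 * np.pi)
    if 0.1 < f < 2.0:                      # electromechanical band, positive-frequency poles
        print(f"mode ~ {f:.2f} Hz, damping ~ {100 * (-sk.real / abs(sk)):.1f} %")
```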
15

Bayes A-Optimal Experimental Design (貝氏A式最佳實驗設計)

程華懿, CHENG,HUA-YI Unknown Date (has links)
In agriculture, industry, biology and medicine, we often compare the results of a control group with those of test groups under different treatments. Such comparison problems can be handled more effectively through appropriate experimental design. Suppose there are p+1 treatments, one of which is the control treatment (labelled 0), for which prior information is available, while the remaining p test treatments (labelled 1, 2, ..., p) have no information available. In this situation, incorporating the known prior information into the design increases the efficiency of the design. This thesis discusses the Bayes A-optimal row-column design for simultaneously comparing the control with the p test treatments. The model is assumed to be an additive linear model without interaction, Y = α_i + β_j + γ_k + ε, where α_i is the effect of treatment i (i = 0, 1, ..., p); τ_i = α_i − α_0 is the test-versus-control comparison (i = 1, ..., p); β_j is the effect of row j (j = 1, ..., R); γ_k is the effect of column k (k = 1, ..., C); and ε are uncorrelated random errors with mean 0 and variance σ². Based on this model, the Bayes A-optimal design is constructed, i.e. the design that minimizes the posterior expected squared error loss of the test-versus-control comparisons. In this thesis, a computer program (written in FORTRAN) is used to search for Bayes A-optimal designs, conclusions are summarized, and the influence of the prior variance on the Bayes optimal design is compared.
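The criterion in this abstract can be illustrated in a deliberately simplified setting: ignore the row and column blocking and ask only how many of N runs to give the control versus the p test treatments when the control effect carries a normal prior with variance v. The sketch below is a hypothetical simplification, not the thesis's FORTRAN search over row-column designs; it only shows how a smaller prior variance shifts replication away from the control.

```python
import numpy as np

def bayes_A_criterion(n0, n_test, sigma2=1.0, prior_var=np.inf):
    """Sum of posterior expected squared errors of tau_i = alpha_i - alpha_0
    in a one-way layout with a normal prior (variance prior_var) on alpha_0."""
    prior_prec = 0.0 if np.isinf(prior_var) else 1.0 / prior_var
    post_var_ctrl = 1.0 / (n0 / sigma2 + prior_prec)      # normal-normal conjugacy
    return sum(sigma2 / ni + post_var_ctrl for ni in n_test)

def best_allocation(N, p, sigma2=1.0, prior_var=np.inf):
    """Search allocations: control gets n0 runs, tests share the rest evenly."""
    best = None
    for n0 in range(1, N - p + 1):
        base, extra = divmod(N - n0, p)
        n_test = [base + (1 if i < extra else 0) for i in range(p)]
        crit = bayes_A_criterion(n0, n_test, sigma2, prior_var)
        if best is None or crit < best[0]:
            best = (crit, n0, n_test)
    return best

for v in (np.inf, 1.0, 0.1, 0.01):          # shrinking prior variance on the control
    crit, n0, n_test = best_allocation(N=30, p=5, prior_var=v)
    print(f"prior var {v}: control gets {n0} runs, tests get {n_test}, criterion {crit:.3f}")
```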
16

Automating the development of Metabolic Network Models using Abductive Logic Programming

Rozanski, Robert January 2017 (has links)
The complexity of biological systems constitutes a significant problem for the development of biological models. This has inspired the creation of several Computational Scientific Discovery systems that attempt to address the problem in the context of metabolomics through the use of computers and automation. These systems have important limitations, however, such as limited revision and experiment design abilities and the inability to revise refuted models. The goal of this project was to address some of these limitations. The system developed for this project, "Huginn", is based on Abductive Logic Programming, used to automate crucial development tasks such as experiment design, testing the consistency of models against experimental results, and revision of refuted models. The main questions of this project were (1) whether the proposed system can successfully develop Metabolic Network Models and (2) whether it can do so better than its predecessors. To answer these questions we tested Huginn in a simulated environment, where its task was to relearn the structures of disrupted fragments of a state-of-the-art model of yeast metabolism. The results of the simulations show that Huginn can relearn the structure of metabolic models, and that it can do so better than previous systems thanks to the specific features introduced in it. Furthermore, we show for the first time how the design of extended crucial experiments can be automated using Answer Set Programming.
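The abductive step can be pictured with a toy example, sketched below in Python rather than in the logic-programming formalism the thesis actually uses: given a partial network and an observed phenotype (growth, i.e. producibility of biomass from glucose), search for minimal sets of candidate reactions whose addition explains the observation. The reactions and metabolite names are invented for illustration.

```python
from itertools import combinations

# Toy metabolic model: reaction -> (substrates, products); names are invented
known = {
    "r1": ({"glc"}, {"g6p"}),
    "r2": ({"g6p"}, {"pyr"}),
}
# Abducible hypotheses: candidate reactions the learner may add to the model
abducibles = {
    "h1": ({"pyr"}, {"accoa"}),
    "h2": ({"accoa"}, {"biomass"}),
    "h3": ({"g6p"}, {"oaa"}),
}

def producible(reactions, seeds, target):
    """Forward reachability: can `target` be produced from the seed metabolites?"""
    mets, changed = set(seeds), True
    while changed:
        changed = False
        for subs, prods in reactions.values():
            if subs <= mets and not prods <= mets:
                mets |= prods
                changed = True
    return target in mets

def explanations(target="biomass", seeds=frozenset({"glc"})):
    """Minimal sets of abducibles that make the observation (growth) reproducible."""
    for k in range(1, len(abducibles) + 1):
        hits = [set(c) for c in combinations(abducibles, k)
                if producible({**known, **{h: abducibles[h] for h in c}}, seeds, target)]
        if hits:
            return hits          # smallest explanation size found first
    return []

print(explanations())            # -> [{'h1', 'h2'}]
```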
17

Experimental and Modelling Studies on the Spreading of Non-Aqueous Phase Liquids in Heterogeneous Media / Spridning av flerfasföroreningar i heterogen mark : Studier med experiment och modellering

Fagerlund, Fritjof January 2006 (has links)
Non-Aqueous Phase Liquids (NAPLs) include commonly occurring organic contaminants such as gasoline, diesel fuel and chlorinated solvents. When released to subsurface environments, their spreading is a complex process of multi-component, multi-phase flow. This work has strived to develop new models and methods to describe the spreading of NAPLs in heterogeneous geological media. For two-phase systems, infiltration and immobilisation of NAPL in stochastically heterogeneous, water-saturated media were investigated. First, a methodology to continuously measure NAPL saturations in space and time in a two-dimensional experimental setup, using multiple-energy x-ray-attenuation techniques, was developed. Second, a set of experiments on NAPL infiltration in carefully designed structures of well-known stochastic heterogeneity was conducted. Three detailed data sets were generated and the importance of heterogeneity for both the flow and the immobilised NAPL architecture was demonstrated. Third, the laboratory experiments were modelled with a continuum- and Darcy's-law-based multi-phase flow model. Different models for the capillary pressure (Pc) – fluid saturation (S) – relative permeability (kr) constitutive relations were compared and tested against the experimental observations. A method to account for NAPL immobility in dead-end pore spaces during drainage was introduced, and the importance of accounting for hysteresis and NAPL entrapment in the constitutive relations was demonstrated. NAPL migration in three-phase, water-NAPL-air systems was also studied. Different constitutive relations used in the modelling of three-phase flow were analysed and compared to existing laboratory data. To improve model performance, a new formulation for the saturation dependence of tortuosity was introduced and different scaling options for the Pc-S relations were investigated. Finally, a method to model the spreading of multi-constituent contaminants using a single-component multi-phase model was developed. With this method, the migration behaviour of individual constituents in a mixture, e.g. benzene in gasoline, could be studied, which was demonstrated in a modelling study of a gasoline spill in connection with a transport accident. / Multi-phase contaminants include commonly occurring organic liquids such as gasoline, diesel oil and chlorinated solvents. Their spreading in the subsurface is complicated and is governed by the simultaneous flow of organic liquid, water and soil air, as well as the exchange of components (contaminants) between the phases. This work aimed to develop new methods and models for studying the spreading of multi-phase contaminants in the subsurface: (i) A methodology was developed for accurate, continuous laboratory measurement of the spatial distribution of an organic liquid in a two-dimensional experimental setup, based on x-ray attenuation at different energy levels. (ii) Infiltration of organic liquid into water-saturated media was studied for different configurations of geological heterogeneity. In the experimental setup, different sand materials were carefully packed to construct a well-known, stochastically heterogeneous structure. The spreading process was documented in three detailed measurement series, and the influence of heterogeneity on flow and retention of the organic liquid was demonstrated. (iii) The experiments were simulated with a numerical model. Different models were tested for describing the fundamental relations between capillary pressure (Pc), fluid saturation (S) and relative permeability (kr) for this two-phase system of water and organic liquid. A relation was introduced to describe partial immobility of the organic liquid in pores whose throats are temporarily blocked by water as the medium drains. The importance of accounting for hysteresis and entrapment of the organic phase in the constitutive relations was shown. (iv) Different Pc-S-kr relations for three-phase systems of water, organic liquid and soil air were tested against existing experimental data. A new relation for the saturation dependence of tortuosity was introduced in the kr-S relation, and different options for scaling the Pc-S relation were analysed. (v) A modelling methodology was developed for studying the spreading of multi-constituent contaminants. With this method, the spreading behaviour of individual, particularly harmful constituents such as benzene could be distinguished in a simulation of a gasoline spill in connection with a transport accident.
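For readers unfamiliar with the Pc-S-kr constitutive relations discussed above, the sketch below evaluates one standard two-phase parameterization, the van Genuchten retention curve combined with the Mualem relative-permeability model; the parameter values are generic assumptions, and the thesis compares several such models (including hysteresis and entrapment extensions) rather than this particular one.

```python
import numpy as np

# Generic van Genuchten / Mualem parameters (assumed, not from the thesis)
alpha_vg = 4.0     # 1/m, inverse of a characteristic capillary pressure head
n_vg     = 2.5     # pore-size distribution index
m_vg     = 1.0 - 1.0 / n_vg
S_r      = 0.10    # residual wetting-phase saturation

def effective_saturation(h):
    """van Genuchten retention: effective saturation vs capillary pressure head h (m)."""
    return (1.0 + (alpha_vg * np.abs(h)) ** n_vg) ** (-m_vg)

def wetting_saturation(h):
    return S_r + (1.0 - S_r) * effective_saturation(h)

def kr_wetting(Se):
    """Mualem relative permeability of the wetting phase."""
    return np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m_vg)) ** m_vg) ** 2

heads = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])   # capillary pressure heads, m
Se = effective_saturation(heads)
for h, se, s, kr in zip(heads, Se, wetting_saturation(heads), kr_wetting(Se)):
    print(f"h = {h:4.2f} m :  Se = {se:5.3f},  Sw = {s:5.3f},  kr_w = {kr:6.4f}")
```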
18

Viscoelastic Materials : Identification and Experiment Design

Rensfelt, Agnes January 2010 (has links)
Viscoelastic materials can today be found in a wide range of practical applications. In order to make efficient use of these materials in construction, it is important to know how they behave when subjected to dynamic loads. Characterization of viscoelastic materials is therefore an important topic that has received a lot of attention over the years. This thesis treats different methods for identifying the complex modulus of a viscoelastic material. The complex modulus is a frequency-dependent material function that describes the deformation of the material when subjected to stress. With knowledge of this and other material functions, it is possible to simulate and predict how the material behaves under different kinds of dynamic load. The complex modulus is often identified through wave propagation testing, where the viscoelastic material is subjected to some kind of load and the response is then measured. Models describing the wave propagation in the setups are then needed. For the identification to be accurate, it is important that these models describe the wave propagation adequately. A statistical test quantity is therefore derived and used to evaluate the wave propagation models in this thesis. Both nonparametric and parametric identification of the complex modulus are considered. An important aspect of the identification is the accuracy of the estimates. Theoretical expressions for the variance of the estimates are therefore derived, both for the nonparametric and the parametric identification. For the identification to be as accurate as possible, it is also important that the experimental data contain as much valuable information as possible. Different experimental conditions, such as sensor locations and the choice of excitation, can influence the amount of information in the data. The procedure of determining optimal values for such design parameters is known as optimal experiment design. In this thesis, both optimal sensor locations and optimal excitation are considered.
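As background on what the complex modulus is, the sketch below evaluates it for a generalized Maxwell (Prony series) material and splits it into storage and loss parts; the parameter values are arbitrary placeholders, and this is textbook material rather than the identification methods developed in the thesis.

```python
import numpy as np

# Generalized Maxwell (Prony series) model of the complex modulus:
#   E*(omega) = E_inf + sum_k E_k * (i*omega*tau_k) / (1 + i*omega*tau_k)
# Placeholder parameters (assumed for illustration):
E_inf = 2.0e9                          # equilibrium modulus, Pa
E_k   = np.array([1.0e9, 0.5e9])       # relaxation strengths, Pa
tau_k = np.array([1.0e-3, 1.0e-1])     # relaxation times, s

def complex_modulus(omega):
    iwt = 1j * np.outer(omega, tau_k)
    return E_inf + (E_k * iwt / (1.0 + iwt)).sum(axis=1)

freq = np.logspace(0, 4, 5)            # Hz
E_star = complex_modulus(2 * np.pi * freq)
for f, E in zip(freq, E_star):
    # storage modulus E', loss modulus E'', loss factor tan(delta) = E''/E'
    print(f"{f:8.1f} Hz:  E' = {E.real/1e9:5.2f} GPa,  E'' = {E.imag/1e9:5.2f} GPa,"
          f"  tan(delta) = {E.imag/E.real:6.3f}")
```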
19

Contribuições para métodos de controle baseados em dados obtidos em apenas um experimento

Campestrini, Lucíola January 2010 (has links)
This work presents contributions to data-driven control methods in which the data are obtained in a single experiment, in order to make such methods more attractive for application to industrial processes. From data obtained in experiments on the process, data-driven methods estimate the parameters of a fixed-structure controller by minimizing the error between the closed-loop output of the real system and a desired output given by a reference model. The Virtual Reference Feedback Tuning (VRFT) method is the most prominent method in the literature that estimates the controller parameters using only one batch of data, but it has some drawbacks in its formulation that limit its application. In this work, the VRFT method is modified into a flexible VRFT method, which minimizes a flexible criterion through which both the controller parameters and the parameters of the reference model numerator are estimated; thus, if the plant to be controlled is non-minimum phase, the criterion is able to estimate these zeros, which must then be included in the reference model used in the controller design. Moreover, for noisy systems the VRFT method requires an instrumental variable for the controller parameter estimate to be unbiased. To eliminate the need for instrumental variables, a new data-driven control method is proposed, described from an identification viewpoint. This method can be seen as the identification of a system in which the process transfer function is reparameterized as a function of the ideal controller and the reference model. In addition, the theory of experiment design with solutions based on LMI constraints is extended to the identification of the optimal controller. All these contributions are illustrated through simulations.
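The one-shot idea behind VRFT can be sketched as follows: from a single batch of plant data (u, y) and a chosen reference model M(q), form the virtual reference r = M⁻¹ y and the virtual error e = r − y, then fit the controller parameters by least squares so that C(q, θ) applied to e reproduces u. The plant, reference model and PI controller class below are invented for illustration, and the standard prefilter and the instrumental-variable correction for noise (discussed in the abstract) are omitted.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)

# "Unknown" plant, used only to generate one batch of open-loop data:
#   G(q) = 0.2 q^-1 / (1 - 0.8 q^-1)
N = 500
u = np.sign(rng.standard_normal(N))          # PRBS-like excitation
y = lfilter([0.0, 0.2], [1.0, -0.8], u)      # noise-free for this sketch

# Reference model chosen by the designer: M(q) = (1 - m) q^-1 / (1 - m q^-1)
m = 0.6
# Virtual reference r_v defined by y = M r_v, i.e. (1 - m) r_v[t-1] = y[t] - m y[t-1]
r_v = np.empty_like(y)
r_v[:-1] = (y[1:] - m * y[:-1]) / (1.0 - m)
r_v[-1] = r_v[-2]                            # last sample is excluded from the fit
e_v = r_v - y                                # virtual tracking error

# Controller class: discrete PI, u[t] = kp * e[t] + ki * sum(e[0..t])
Phi = np.column_stack([e_v, np.cumsum(e_v)])
theta, *_ = np.linalg.lstsq(Phi[:-1], u[:-1], rcond=None)
kp, ki = theta
print(f"fitted PI gains: kp = {kp:.3f}, ki = {ki:.3f}")
```

In this noise-free example the ideal controller happens to lie in the PI class, so the least-squares fit recovers it exactly; with measurement noise, the abstract's point about instrumental variables (or the thesis's alternative identification-based formulation) becomes relevant.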
