  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Application of a Network Model for Complex Fenestration Systems

Rogalsky, Christine Jane January 2011 (has links)
In the fight to reduce carbon emissions, it is easy to see the necessity of reducing energy consumption. Buildings consume a large amount of energy, and have significant potential for energy savings. One tool for realising these potential savings is building simulation. To be able to use building simulation, accurate models for windows are needed. The models include individual layer models, to determine the solar and longwave radiative behaviours, as well as whole-system models to determine heat flows through the various layers of fenestration systems. This thesis looks at both kinds of models for incorporating windows into building simulations. A new network whole-system model is implemented, and integrated into the California Simulation Engine building simulation software. This model is also used as the calculation engine for a stand-alone rating tool. Additionally, a measurement technique used to measure off-normal solar properties of drapery materials, as part of developing shading layer models, is investigated using a Monte Carlo simulation. The network model uses a very general resistance network, allowing heat transfer between any two layers in a complex fenestration system (CFS), whether they are adjacent or not, between any layer and the indoor or outdoor side, or between the indoor and outdoor sides, although this last case is unlikely. Convective and radiative heat transfer are treated using the same format, resulting in increased stability. This general resistance network is used to calculate indices of merit for the CFS using numerical experiments. This approach requires fewer iterations to solve than previous solution methods, and is more flexible. The off-normal measurement technique which was investigated used a sample holder inserted into an integrating sphere. This is a non-standard way of using an integrating sphere, and early analyses did not provide conclusive information as to the effect of the sample holder. 
A Monte Carlo analysis confirmed the amount of beam attenuation as being 20% for the sample holder used in the experiments. It also confirmed the effectiveness of dual-beam integrating spheres in correcting for the presence of a sample holder. The stand-alone rating tool, which uses the general network framework, incorporates an easy-to-use visual interface. This tool models multiple types of shading layers with no restrictions on how they are combined. Users can easily change any one layer to see the effects of different arrangements. Users may specify any combination of indoor and outdoor ambient and mean radiant temperatures, insolation, and beam/diffuse split.
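The general resistance network described above (conductances allowed between any two nodes, with convection and radiation treated in the same format) amounts to a linear energy balance at each layer. The following is a minimal sketch of that idea, not the thesis's actual implementation; the function name, the conductance-matrix layout, and all values are assumptions for illustration.

```python
import numpy as np

def solve_layer_temperatures(U, T_out, T_in, S):
    """Steady-state energy balance for N glazing/shading layers.

    U[i][j] : conductance between nodes; node 0 is outdoor,
              node N+1 is indoor, nodes 1..N are layers.
              Any pair may be coupled, adjacent or not.
    S[k]    : absorbed solar flux at layer k+1.
    Returns the N layer temperatures.
    """
    N = len(S)
    A = np.zeros((N, N))
    b = np.array(S, dtype=float)
    for k in range(1, N + 1):        # one balance equation per layer
        row = k - 1
        for j in range(N + 2):
            if j == k:
                continue
            g = U[k][j]
            A[row, row] += g         # heat leaving node k
            if j == 0:
                b[row] += g * T_out  # coupling to outdoor node
            elif j == N + 1:
                b[row] += g * T_in   # coupling to indoor node
            else:
                A[row, j - 1] -= g   # coupling to another layer
    return np.linalg.solve(A, b)
```

For a single layer coupled equally to both sides with no absorbed flux, the solved temperature is simply the mean of the indoor and outdoor temperatures, as expected.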
62

Analysis And Design Of Microstrip Patch Antennas With Arbitrary Slot Shapes

Sener, Goker 01 April 2011 (has links) (PDF)
A new method is proposed that provides a simple and efficient design and analysis algorithm for microstrip antennas with arbitrary patch shapes. The proposed procedure uses the multiport network model (MNM), in which the antenna is considered a cavity bounded by perfect electric conductors on the top and bottom surfaces and a perfect magnetic conductor on the side surfaces. Ports are defined along the periphery of the patch, and the impedance matrix, representing the voltage induced at one port due to a current source at another port, is obtained through the 2-D Green's function corresponding to the cavity. For the MNM analysis of patches with irregular shapes, such as slotted structures, segmentation/desegmentation methods are utilized, since Green's function expressions are available only for regularly shaped cavities. To speed up the analysis and to develop a design procedure, vector Padé approximation is used to approximate the antenna impedance matrix as a rational function of two polynomials. When the approximation is performed with respect to frequency, the roots of the denominator polynomial provide the resonant frequencies of the antenna. The design algorithm applies when the approximation variable is changed to one of the patch dimensions to be optimized; in that case, the roots of the denominator polynomial correspond to the optimum dimensions at which the antenna resonates.
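The final step of the procedure above (resonances from the denominator of a rational fit) can be sketched as follows. This illustrates only the root-finding step on an already-fitted denominator; the vector Padé fit of the full impedance matrix is not reproduced here, and the band limits and coefficients are assumed values.

```python
import numpy as np

def resonant_frequencies(denom_coeffs, f_lo, f_hi):
    """Real roots of the denominator polynomial of a rational
    (Pade-type) fit to the antenna impedance, restricted to the
    frequency band of interest, taken as resonant frequencies."""
    roots = np.roots(denom_coeffs)
    real = roots[np.abs(roots.imag) < 1e-6].real
    return sorted(f for f in real if f_lo <= f <= f_hi)
```

For example, a fitted denominator (f - 2.4)(f - 5.8) yields resonances at 2.4 and 5.8 (in whatever frequency unit the fit used).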
63

Calibration Of Water Distribution Networks

Ar, Kerem 01 January 2012 (has links) (PDF)
Water distribution network models are used for different purposes. This study concerns a model used for daily operational issues. Model results should be consistent with actual conditions to support sound decisions during operational studies. Adjusting model parameters according to site measurements, so that the model yields realistic results, is known as calibration. Researchers have carried out numerous studies on calibration and developed various methods. In this study, an actual network (the N8.3 pressure zone, Ankara) has been calibrated by two classical methods, developed by Walski (1983) and Bhave (1988). The network parameter calibrated is the Hazen-Williams roughness coefficient (C-factor); the effects of other parameters have been lumped into the C-factor. The analysis showed that the calibrated C-factors fall within a wide range.
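The core relation being calibrated is the Hazen-Williams head-loss formula. The sketch below shows the SI form of the formula and a single-pipe back-calculation of C from a measured head loss; this is an illustrative simplification, not Walski's or Bhave's full network procedures, and the function names and values are assumed.

```python
def hazen_williams_headloss(Q, L, D, C):
    """Head loss (m) for flow Q (m^3/s) in a pipe of length L (m),
    diameter D (m), Hazen-Williams coefficient C (SI form)."""
    return 10.67 * L * Q**1.852 / (C**1.852 * D**4.87)

def calibrate_c_factor(Q, L, D, h_measured):
    """Back-calculate the single-pipe C-factor that reproduces a
    measured head loss (a minimal stand-in for field calibration)."""
    return (10.67 * L * Q**1.852 / (h_measured * D**4.87)) ** (1 / 1.852)
```

Because the formula is algebraically invertible for one pipe, calibrating against a head loss computed from a known C recovers that C exactly; real calibrations reconcile many pipes and measurements at once.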
64

Stochastic Modeling and Bayesian Inference with Applications in Biophysics

Du, Chao January 2012 (has links)
This thesis explores stochastic modeling and Bayesian inference strategies in the context of the following three problems: 1) modeling the complex interactions between and within molecules; 2) extracting information from the stepwise signals commonly found in biophysical experiments; 3) improving the computational efficiency of a non-parametric Bayesian inference algorithm. Chapter 1 studies the data from a recent single-molecule biophysical experiment on enzyme kinetics. Using a stochastic network model, we analyze the autocorrelation of experimental fluorescence intensity and the autocorrelation of enzymatic reaction times. This chapter shows that the stochastic network model is capable of explaining the experimental data in depth, and it further explains why the enzyme molecules behave fundamentally differently from what the classical model predicts. Modern knowledge of molecular kinetics is often derived from information extracted from stepwise signals in experiments utilizing fluorescence spectroscopy. Chapter 2 proposes a new Bayesian method to estimate the change-points in stepwise signals. This approach uses marginal likelihood as the tool of inference. The chapter illustrates the impact of the choice of prior on the estimator and provides guidelines for setting the prior. In simulation studies, this method outperforms several existing change-point estimators under certain settings. Furthermore, DNA array CGH data and single-molecule data are analyzed with this approach. Chapter 3 focuses on the optional Pólya tree, a recently established non-parametric Bayesian approach (Wong and Li 2010). While existing work shows that the optional Pólya tree is promising for analyzing high-dimensional data, its applications are hindered by high computational costs. A heuristic algorithm is proposed in this chapter in an attempt to speed up optional Pólya tree inference.
This study demonstrates that the new algorithm can reduce the running time significantly with a negligible loss of precision. / Statistics
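The change-point problem of Chapter 2 can be illustrated with a much simpler stand-in: scoring every candidate split of a piecewise-constant signal and keeping the best one. The sketch below uses a residual-sum-of-squares (maximum-likelihood) score rather than the thesis's marginal-likelihood criterion, so it is only a conceptual illustration; the function name and data are assumed.

```python
import numpy as np

def best_single_changepoint(y):
    """Most likely single change-point in a piecewise-constant
    signal with Gaussian noise: try every split index k and keep
    the one minimizing the total residual sum of squares."""
    y = np.asarray(y, dtype=float)
    best_k, best_rss = None, np.inf
    for k in range(1, len(y)):
        left, right = y[:k], y[k:]
        rss = ((left - left.mean()) ** 2).sum() \
            + ((right - right.mean()) ** 2).sum()
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k
```

A marginal-likelihood version would integrate out the segment means and noise variance under a prior instead of plugging in their estimates, which is what makes the choice of prior matter.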
66

3D imaging and modeling of carbonate core at multiple scales

Ghous, Abid, Petroleum Engineering, Faculty of Engineering, UNSW January 2010 (has links)
The understanding of multiphase flow properties is essential for the exploitation of hydrocarbon reserves in a reservoir; these properties in turn depend on the geometric properties and connectivity of the pore space. Determining the pore size distribution in carbonate reservoirs remains challenging; carbonates exhibit complex pore structures with length scales from nanometers to several centimeters. A major challenge to the accurate evaluation of these reservoirs is accounting for pore-scale heterogeneity at multiple scales. This is the topic of this thesis. Conventionally, this micron-scale information is obtained either by building stochastic models from 2D images or by combining log and laboratory data to classify pore types and their behaviour. Neither captures the true 3D connectivity vital for flow characterisation. We present here an approach to build realistic 3D network models across a range of scales to improve property estimation, employing X-ray micro-computed tomography (μCT) and focussed ion beam tomography (FIBT). The submicron, or microporous, regions are delineated through a differential imaging technique on the X-ray CT data, providing a qualitative description of microporosity. Various three-phase segmentation methods are then applied for quantitative characterisation of those regions, utilising the attenuation coefficient values from the 3D tomographic images. X-ray micro-CT is resolution limited and cannot resolve the detailed geometrical features of submicron pores. FIB tomography is used to image the 3D pore structure of submicron pores down to a scale of tens of nanometers. We describe the experimental development and subsequent image processing, including issues and difficulties resolved at various stages. The developed methodology is applied to cores from producing wackestone and grainstone reservoirs. Pore network models are generated to characterise the 3D interconnectivity of pores.
We perform simulations of petrophysical properties (permeability and formation resistivity) directly on the submicron-scale image data. Simulated drainage capillary pressure curves are matched with the experimental data. We also present some preliminary results for the integration of multiscale pore information to build dual-scale network models. The integration of multiscale data allows one to select appropriate effective medium theories to incorporate sub-micron structure into property calculations at the macro scale, giving a more realistic estimation of properties.
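A pore network model reduces single-phase permeability estimation to a linear problem: impose pressures at inlet and outlet pores, enforce mass balance at interior pores, and read off the total flux. The sketch below shows that calculation on an abstract network; it is a generic illustration, not the thesis's code, and the node names, conductances and boundary pressures are assumed.

```python
import numpy as np

def network_flux(conductance, inlet, outlet, p_in=1.0, p_out=0.0):
    """Single-phase flow through a pore network.

    conductance: dict {(i, j): g} of symmetric throat conductances.
    Solves mass balance at interior pores, then returns the net
    flux leaving the inlet pore (proportional to permeability).
    """
    nodes = sorted({n for edge in conductance for n in edge})
    interior = [n for n in nodes if n not in (inlet, outlet)]
    idx = {n: i for i, n in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros(len(interior))
    fixed = {inlet: p_in, outlet: p_out}
    for (i, j), g in conductance.items():
        for a, c in ((i, j), (j, i)):      # each edge, both directions
            if a in idx:
                A[idx[a], idx[a]] += g
                if c in idx:
                    A[idx[a], idx[c]] -= g
                else:
                    b[idx[a]] += g * fixed[c]
    p = np.linalg.solve(A, b)
    pressure = {**fixed, **{n: p[idx[n]] for n in interior}}
    return sum(g * (pressure[i] - pressure[j])
               for (i, j), g in conductance.items() if i == inlet) \
         + sum(g * (pressure[j] - pressure[i])
               for (i, j), g in conductance.items() if j == inlet)
```

Two throats of conductance 2 in series give an effective conductance of 1, so a unit pressure drop drives a unit flux — a quick check that the balance equations are assembled correctly.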
67

Image Compression and Channel Error Correction using Neurally-Inspired Network Models

Watkins, Yijing Zhang 01 May 2018 (has links)
Every day, an enormous amount of information is stored, processed and transmitted digitally around the world. Neurally-inspired compression models have been rapidly developed and researched as a solution to image processing tasks and channel error-correction control. This dissertation presents a deep neural network (DNN) for grayscale high-resolution image compression and a fault-tolerant transmission system with channel error-correction capabilities. A feed-forward DNN trained with the Levenberg-Marquardt learning algorithm is proposed and implemented for image compression. I demonstrate experimentally that the DNN not only provides better-quality reconstructed images but also requires less computational capacity than DCT zonal coding, DCT threshold coding, Set Partitioning in Hierarchical Trees (SPIHT) and the Gaussian pyramid. An artificial neural network (ANN) with an improved channel error-correction rate is also proposed. The experimental results indicate that the implemented artificial neural network provides superior error-correction ability when transmitting binary images over a noisy channel using Hamming and Repeat-Accumulate coding, while the network's storage requirement is 64 times less than Hamming coding and 62 times less than Repeat-Accumulate coding. Thumbnail images contain higher frequencies and much less redundancy, which makes them more difficult to compress than high-resolution images. Bottleneck autoencoders have been actively researched as a solution to image compression tasks. However, I observed that thumbnail images compressed at a 2:1 ratio through bottleneck autoencoders often exhibit subjectively low visual quality. In this dissertation, I compared bottleneck autoencoders with two sparse coding approaches: either 50% of the pixels are randomly removed or every other pixel is removed, each achieving a 2:1 compression ratio.
In the subsequent decompression step, a sparse inference algorithm is used to in-paint the missing pixel values. Compared to bottleneck autoencoders, I observed that sparse coding with a random dropout mask yields decompressed images that are superior based on subjective human perception yet inferior according to pixel-wise metrics of reconstruction quality, such as PSNR and SSIM. With a regular checkerboard mask, decompressed images were superior as assessed by both subjective and pixel-wise measures. I hypothesized that alternative feature-based measures of reconstruction quality would better support my subjective observations. To test this hypothesis, I fed thumbnail images processed using either a bottleneck autoencoder or sparse coding with either checkerboard or random masks into a deep convolutional neural network (DCNN) classifier. Consistent with my subjective observations, I discovered that sparse coding with checkerboard and random masks supports on average 2.7% and 1.6% higher classification accuracy and 18.06% and 3.74% lower feature perceptual loss compared to bottleneck autoencoders, implying that sparse coding preserves more feature-based information. The optic nerve transmits visual information to the brain as trains of discrete events, a low-power, low-bandwidth communication channel also exploited by silicon retina cameras. Extracting high-fidelity visual input from retinal event trains is thus a key challenge for both computational neuroscience and neuromorphic engineering. Here, we investigate whether sparse coding can enable the reconstruction of high-fidelity images and video from retinal event trains. Our approach is analogous to compressive sensing, in which only a random subset of pixels is transmitted and the missing information is estimated via inference.
We employed a variant of the Locally Competitive Algorithm to infer sparse representations from retinal event trains, using a dictionary of convolutional features optimized via stochastic gradient descent and trained in an unsupervised manner using a local Hebbian learning rule with momentum. Static images, drawn from the CIFAR10 dataset, were passed to the input layer of an anatomically realistic retinal model and encoded as arrays of output spike trains arising from separate layers of integrate-and-fire neurons representing ON and OFF retinal ganglion cells. The spikes from each model ganglion cell were summed over a 32 ms time window, yielding a noisy rate-coded image. Analogous to how the primary visual cortex is postulated to infer features from noisy spike trains in the optic nerve, we inferred a higher-fidelity sparse reconstruction from the noisy rate-coded image using a convolutional dictionary trained on the original CIFAR10 database. Using a similar approach, we analyzed the asynchronous event trains from a silicon retina camera produced by self-motion through a laboratory environment. By training a dictionary of convolutional spatiotemporal features for simultaneously reconstructing differences of video frames (recorded at 22 Hz and 5.56 Hz) as well as discrete events generated by the silicon retina (binned at 484 Hz and 278 Hz), we were able to estimate high-frame-rate video from a low-power, low-bandwidth silicon retina camera.
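The two 2:1 compression masks compared above (random dropout vs. regular checkerboard) are easy to make concrete. The sketch below shows only the mask construction and the "transmit the kept pixels" step; the sparse-inference in-painting that reconstructs the missing pixels (the LCA step) is not reproduced here, and the function names are assumed.

```python
import numpy as np

def checkerboard_mask(h, w):
    """Keep every other pixel on a regular grid (2:1 compression)."""
    yy, xx = np.mgrid[:h, :w]
    return (yy + xx) % 2 == 0

def random_mask(h, w, keep=0.5, seed=0):
    """Keep a random subset of pixels (~2:1 compression at keep=0.5)."""
    rng = np.random.default_rng(seed)
    return rng.random((h, w)) < keep

def compress(image, mask):
    """'Compression' here is simply transmitting the kept pixel
    values; the decoder in-paints the rest via sparse inference."""
    return image[mask]
```

The checkerboard keeps exactly half the pixels in a regular pattern, which is why it behaves differently from random dropout under both pixel-wise and feature-based quality measures.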
68

Pore network modelling of condensation in gas diffusion layers of proton exchange membrane fuel cell

Straubhaar, Benjamin 30 November 2015 (has links)
A proton exchange membrane fuel cell (PEMFC) is a device converting hydrogen into electricity through an electrochemical reaction called reverse electrolysis. Like every fuel cell or battery, PEMFCs are made of a series of layers. We are interested in the gas diffusion layer (GDL) on the cathode side. The GDL is made of carbon fibers treated to be hydrophobic. It can be seen as a thin porous medium with a mean pore size of a few tens of microns. 
A key question in this system is the management of the water produced by the reaction. In this context, the main objective of the thesis is the development of a numerical tool for simulating liquid water formation within the GDL. A pore network approach is used. We concentrate on a scenario where liquid water forms in the GDL by condensation. Comparisons between simulations and experiments performed with a two-dimensional microfluidic device are first presented for different wettability conditions, temperature distributions and inlet relative humidities, in order to validate the model. A sensitivity study is then performed to better characterize the parameters controlling the water invasion. Finally, simulations are compared with in situ experimental water distributions obtained by X-ray micro-tomography, as well as with experimental distributions from the literature obtained by neutron imaging.
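The condensation scenario hinges on a local comparison: water condenses at a pore when the local vapour pressure reaches the saturation pressure at that pore's temperature. The sketch below illustrates this criterion using a Magnus-type saturation-pressure correlation; the correlation choice, function names and values are assumptions, not taken from the thesis.

```python
import math

def p_sat(T_celsius):
    """Saturation vapour pressure of water (Pa), Magnus-type
    correlation (valid roughly over 0-60 degC)."""
    return 610.94 * math.exp(17.625 * T_celsius / (T_celsius + 243.04))

def condensation_nodes(temps, p_vapour):
    """Indices of pore nodes where the local vapour pressure meets
    or exceeds saturation, i.e. where condensation would start."""
    return [i for i, T in enumerate(temps)
            if p_vapour[i] >= p_sat(T)]
```

In a GDL with a through-thickness temperature gradient, the same vapour pressure that is undersaturated on the warm side can exceed saturation on the cold side, which is what localizes the condensed water.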
69

Multiphase Fluid Flow through Porous Media: Conductivity and Geomechanics

January 2016 (has links)
The understanding of multiphase fluid flow in porous media is of great importance in many fields, such as enhanced oil recovery, hydrology, CO2 sequestration, contaminant cleanup, and natural gas production from hydrate-bearing sediments. In this study, first, the water retention curve (WRC) and relative permeability in hydrate-bearing sediments are explored to obtain fitting parameters for semi-empirical equations. Second, immiscible fluid invasion into porous media is investigated to identify the fluid displacement patterns and displacement efficiency that are affected by pore size distribution and connectivity. Finally, fluid flow through granular media is studied to characterize fluid-particle interaction. This study utilizes the combined techniques of discrete element method simulation, micro-focus X-ray computed tomography (CT), pore-network model simulation algorithms for gas invasion, gas expansion, and relative permeability calculation, transparent micromodels, and water retention curve measurement equipment modified for hydrate-bearing sediments. In addition, a photoelastic disk set-up is fabricated, and an image processing technique is developed to correlate the force chains to the applied contact forces. The results show that the gas entry pressure and the capillary pressure increase with increasing hydrate saturation. Fitting parameters are suggested for different hydrate saturation conditions and morphologies, and a new model for immiscible fluid invasion and displacement is suggested in which the boundaries of the displacement patterns depend on the pore size distribution and connectivity. Finally, the fluid-particle interaction study shows that fluid flow increases the contact forces between photoelastic disks in the direction parallel to the flow. / Dissertation/Thesis / Doctoral Dissertation Civil and Environmental Engineering 2016
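One common semi-empirical WRC form to which fitting parameters of the kind mentioned above are attached is the van Genuchten equation. The sketch below shows that form only as an illustration of what "fitting parameters for semi-empirical equations" means; the thesis does not necessarily use this exact equation, and the parameter values are assumptions.

```python
def van_genuchten_sw(psi, alpha, n):
    """Effective water saturation as a function of capillary
    pressure head psi (van Genuchten form).

    alpha : inverse of a characteristic entry-pressure head.
    n     : pore-size-distribution shape parameter (n > 1).
    Higher hydrate saturation would typically be reflected in a
    smaller alpha (higher gas entry pressure).
    """
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * psi) ** n) ** (-m)
```

The curve starts fully saturated at zero capillary pressure and drains monotonically as psi grows, which is the qualitative behaviour a fitted WRC must reproduce.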
70

Normal mode analysis in proteins

Matheus Rodrigues de Mendonça 26 April 2010 (has links)
The low-frequency normal mode approach to describing the conformational fluctuations of the native states of globular proteins has helped to characterize their biological functions. Various theoretical and experimental methods have been employed to determine the magnitudes of these internal motions. The motions can be characterized by the Debye-Waller factor (B-factor), corresponding to the local mobility of the residue at the atomic level. Normal mode analysis using elastic network models (ENM) has proven to be a robust technique: experimental B-factors are reproduced theoretically in relatively short computational times, making the approach competitive with more sophisticated techniques. The ENM is a coarse-grained approach in which the protein in its folded state is represented by a three-dimensional elastic network of alpha-carbon atoms connected by springs; the springs represent bonded and non-bonded interactions between the alpha carbons. In this work, we first study the elastic network models known in the literature and then perform a comparative study between them. We show that the pfGNM and pfANM models present better correlation with experimental B-factors than the traditional GNM and ANM models. We also develop a new approach, which we call the anisotropic weighted contact number (AWCN), and show that it performs significantly better than the traditional anisotropic elastic network model. Finally, we investigate the behavior of the weighting of the interactions between residues. 
This study revealed that, for the WCN and AWCN models, the correlation reaches its maximum value for weighted interactions $1/R^p$ between residues $i$ and $j$ for values of $p$ around 2. In the pfGNM and pfANM models the correlation is maximized for two values of $p$, the first around 2 and the second around 4.75. This indicates that weighting by the reciprocal of the square of the distance, as usually employed in the literature, may not be appropriate to obtain the best correlation.
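The basic GNM calculation behind B-factor predictions like those compared above can be sketched in a few lines: build the Kirchhoff (connectivity) matrix from alpha-carbon contacts within a cutoff, and take the diagonal of its pseudo-inverse as the relative residue mobility. This is the standard GNM recipe, not the thesis's pfGNM/AWCN variants (those replace the 0/1 contact rule with distance-dependent $1/R^p$ weights); the cutoff value is an assumption.

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    """Gaussian Network Model relative B-factors.

    coords : (N, 3) alpha-carbon coordinates.
    Builds the Kirchhoff matrix (off-diagonal -1 for contacts
    within the cutoff, diagonal = contact count) and returns the
    diagonal of its pseudo-inverse, proportional to mobility.
    """
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                K[i, j] = K[j, i] = -1.0
    K -= np.diag(K.sum(axis=1))      # diagonal = node degree
    return np.diag(np.linalg.pinv(K))
```

For a straight chain of four residues with only nearest-neighbour contacts, the terminal residues come out roughly twice as mobile as the interior ones, matching the familiar observation that chain ends show the highest B-factors.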
