151

Volumetric measurements of the transitional backward facing step flow

Kitzhofer, Jens 08 August 2011 (has links)
The thesis describes state-of-the-art volumetric measurement techniques and applies a 3D measurement technique, 3D Scanning Particle Tracking Velocimetry, to the transitional backward-facing step flow. The measurement technique allows the spatial and temporal analysis of the coherent structures apparent at the backward-facing step. The thesis focuses on the extraction and interaction of coherent flow structures such as shear layers and vortical structures.
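Particle Tracking Velocimetry reconstructs trajectories by matching particle positions across consecutive frames. A minimal sketch of the frame-to-frame matching step is shown below; the greedy nearest-neighbour scheme, the `match_particles` name and the distance threshold are illustrative assumptions, not the thesis's actual algorithm:

```python
import math

def match_particles(frame_a, frame_b, max_dist=1.0):
    """Greedy nearest-neighbour matching of 3D particle positions
    between two consecutive frames. Returns index pairs (i, j)."""
    matches, used = [], set()
    for i, p in enumerate(frame_a):
        best_j, best_d = None, max_dist
        for j, q in enumerate(frame_b):
            if j in used:
                continue
            d = math.dist(p, q)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
            used.add(best_j)
    return matches
```

Chaining such matches over many frames yields the long trajectories from which coherent structures are extracted; real PTV codes use more robust (e.g. predictive or relaxation-based) matching.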
152

DEVELOPMENT OF IMAGE-BASED DENSITY DIAGNOSTICS WITH BACKGROUND-ORIENTED SCHLIEREN AND APPLICATION TO PLASMA INDUCED FLOW

Lalit Rajendran (8960978) 07 May 2021 (has links)
There is growing interest in the use of nanosecond surface dielectric barrier discharge (ns-SDBD) actuators for high-speed (supersonic/hypersonic) flow control. A plasma discharge is created using a nanosecond-duration pulse of several kilovolts and leads to a rapid heat release and a complex three-dimensional flow field. Past work has been limited to qualitative visualizations such as schlieren imaging, and detailed measurements of the induced flow are required to develop a mechanistic model of the actuator performance.

Background-Oriented Schlieren (BOS) is a quantitative variant of schlieren imaging that measures density gradients in a flow field by tracking the apparent distortion of a target dot pattern. The distortion is estimated by cross-correlation, and the density gradients can be integrated spatially to obtain the density field. Owing to its simple setup and ease of use, BOS has been applied widely and is becoming the preferred density measurement technique. However, it has several unaddressed limitations with potential for improvement, especially for application to complex flow fields such as those induced by plasma actuators.

This thesis presents a series of developments aimed at improving the various aspects of the BOS measurement chain to provide an overall improvement in accuracy, precision, spatial resolution and dynamic range. In brief, the contributions are:

1) a synthetic image generation methodology to perform error and uncertainty analysis for PIV/BOS experiments;
2) an uncertainty quantification methodology to report local, instantaneous, a-posteriori uncertainty bounds on the density field, by propagating displacement uncertainties through the measurement chain;
3) an improved displacement uncertainty estimation method using a meta-uncertainty framework, whereby uncertainties estimated by different methods are combined based on their sensitivities to image perturbations;
4) a Weighted Least Squares-based density integration methodology to reduce the sensitivity of the density estimation procedure to measurement noise;
5) a tracking-based processing algorithm to improve the accuracy, precision and spatial resolution of the measurements;
6) a theoretical model of the measurement process to demonstrate the effect of density gradients on the position uncertainty, together with an uncertainty quantification methodology for tracking-based BOS.

These improvements are then applied to perform a detailed characterization of the flow induced by a filamentary surface plasma discharge, and to develop a reduced-order model for the length and time scales of the induced flow. The measurements show that the induced flow consists of a hot gas kernel filled with vorticity in a vortex ring that expands and cools over time. A reduced-order model is developed to describe the induced flow, and applying it to the experimental data reveals that the vortex ring's properties govern the time scale associated with the kernel dynamics. The model predictions for the actuator-induced flow length and time scales can guide the choice of filament spacing and pulse frequencies for practical multi-pulse ns-SDBD configurations.
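In BOS, the measured quantity is a density-gradient field that must be integrated spatially to recover density. A deliberately minimal 1D sketch of that integration step follows; the thesis's actual method is a 2D Weighted Least Squares integration, and the `integrate_density` helper here is illustrative only:

```python
def integrate_density(grad, rho_ref, dx):
    """Integrate a 1D density-gradient profile g = d(rho)/dx
    from a known reference density at the left boundary,
    by cumulative summation (forward Euler)."""
    rho = [rho_ref]
    for g in grad:
        rho.append(rho[-1] + g * dx)
    return rho
```

This simple marching scheme accumulates measurement noise along the integration path, which is precisely the sensitivity the Weighted Least Squares formulation in contribution 4) is designed to reduce.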
153

Neural Network Based Model Predictive Control of Turbulent Gas-Solid Corner Flow

Wredh, Simon January 2020 (has links)
Over the past decades, attention has been drawn to the importance of indoor air quality and the serious threat of bio-aerosol contamination to human health. A novel idea to transport hazardous particles away from sensitive areas is to automatically control bio-aerosol concentrations by utilising airflows from ventilation systems. To this end, computational fluid dynamics (CFD) may be employed to investigate the dynamical behaviour of airborne particles, and data-driven methods may be used to estimate and control the complex flow simulations. This thesis presents a methodology for machine-learning-based control of particle concentrations in turbulent gas-solid flow. The aim is to reduce concentration levels at a 90 degree corner through systematic manipulation of the underlying two-phase flow dynamics, with an energy-constrained inlet airflow rate as the control variable. A CFD experiment of turbulent gas-solid flow in a two-dimensional corner geometry is simulated using the SST k-omega turbulence model for the gas phase and a drag-force-based discrete random walk for the solid phase. The two-phase methodology is validated against a backward-facing step experiment, with a 12.2% error in the maximum negative particle velocity downstream of the step. Based on simulation data from the CFD experiment, a linear auto-regressive model with exogenous inputs (ARX) and a non-linear ARX-based neural network (NN) are used to identify the temporal relationship between inlet flow rate and corner particle concentration. The results suggest that the NN is the preferred approach for output predictions of the two-phase system, with roughly four times higher simulation accuracy than the ARX model. The identified NN model is used in a model predictive control (MPC) framework with linearisation in each time step.
It is found that the output concentration can be minimised together with the input energy consumption, by means of tracking specified target trajectories. Control signals from NN-MPC also show good performance in controlling the full CFD model, with improved particle removal capabilities, compared to randomly generated signals. In terms of maximal reduction of particle concentration, the NN-MPC scheme is however outperformed by a manually constructed sine signal. In conclusion, CFD based NN-MPC is a feasible methodology for efficient reduction of particle concentrations in a corner area; particularly, a novel application for removal of indoor bio-aerosols is presented. More generally, the results show that NN-MPC may be a promising approach to turbulent multi-phase flow control.
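The linear ARX identification step can be sketched as a least-squares fit of a one-step-ahead model y[k+1] = a·y[k] + b·u[k]. This first-order form, and the `fit_arx` helper, are deliberately minimal illustrations; the thesis's actual model orders and NN architecture are not reproduced here:

```python
def fit_arx(y, u):
    """Least-squares fit of y[k+1] = a*y[k] + b*u[k] by solving
    the 2x2 normal equations in pure Python (first-order ARX)."""
    syy = suu = syu = sy1y = sy1u = 0.0
    for k in range(len(y) - 1):
        syy += y[k] * y[k]          # sum y[k]^2
        suu += u[k] * u[k]          # sum u[k]^2
        syu += y[k] * u[k]          # cross term
        sy1y += y[k + 1] * y[k]     # output vs. lagged output
        sy1u += y[k + 1] * u[k]     # output vs. input
    det = syy * suu - syu * syu
    a = (sy1y * suu - sy1u * syu) / det
    b = (sy1u * syy - sy1y * syu) / det
    return a, b
```

With noiseless data generated by a known (a, b), the fit recovers the parameters exactly; in the thesis, such an identified model is then embedded in the MPC optimisation loop.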
154

Revealing the transport mechanisms from a single trajectory in living cells

Lanoiselée, Yann 01 October 2018 (has links)
This thesis is dedicated to the analysis and modeling of experiments where the position of a tracer in the cellular medium is recorded over time. The goal is to extract as much information as possible from a single experimentally observed trajectory. The main challenge is to identify the transport mechanisms underlying the observed motion; the difficulty lies in the analysis of individual trajectories, which requires the development of new statistical analysis tools. In the first chapter, an overview is given of the wide variety of dynamics that can be observed in the cellular medium; in particular, different models of anomalous and non-Gaussian diffusion are reviewed. In the second chapter, a test is proposed to reveal weak ergodicity breaking from a single trajectory. It generalizes the approach of M. Magdziarz and A. Weron based on the time-averaged characteristic function of the process. This new estimator can identify the ergodicity breaking of a continuous-time random walk with power-law-distributed waiting times. By calculating the average of the estimator for several typical subdiffusion models, the applicability of the method is demonstrated. In the third chapter, an algorithm is proposed to recognize, from a single trajectory, the different phases of an intermittent process (e.g. active/passive transport inside cells). The test assumes that the process alternates between two distinct phases but requires no hypothesis on the dynamics within each phase. Phase changes are captured by quantities associated with the local convex hull (volume, diameter) evaluated along the trajectory. This algorithm is shown to be effective in distinguishing the states of a large class of intermittent processes (six models tested), and it is robust to high noise levels owing to the integral nature of the convex hull. In the fourth chapter, a model of diffusion in a heterogeneous medium, in which the diffusion coefficient evolves randomly, is introduced and solved analytically. The probability density of displacements exhibits exponential tails and converges to a Gaussian at long times. This model generalizes previous approaches and makes it possible to study dynamic heterogeneities in detail; in particular, it is shown that these heterogeneities can drastically affect the accuracy of time-averaged measurements along a trajectory. In the last chapter, these single-trajectory methods are applied to two experiments. The first analysis shows that, for tracers exploring the cytoplasm, the probability density of displacements has exponential tails on time scales longer than one second, independently of the presence of microtubules or of the actin network in the cell. The observed trajectories therefore exhibit diffusivity fluctuations, providing the first evidence of dynamic heterogeneities within the cytoplasm. The second analysis deals with an experiment in which a set of 4 mm diameter disks was vibrated vertically on a plate, inducing random motion of the disks. An in-depth statistical analysis demonstrates that this experiment is close to a macroscopic realization of Brownian motion; however, the probability densities of the disks' displacements deviate from a Gaussian, which is interpreted as the result of inter-disk collisions. In the conclusion, the limits of the adopted approaches, as well as the future research directions opened by this work, are discussed in detail.
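The local convex-hull diagnostic can be sketched in 2D, where the hull diameter over a sliding window of positions reduces to the maximum pairwise distance between the window's points. This is a minimal illustrative sketch, not the thesis's estimator; the window size and any threshold applied to the output are assumptions:

```python
import math
from itertools import combinations

def local_hull_diameter(track, window=5):
    """For each sliding window of 2D positions, return the diameter
    of the local convex hull, i.e. the maximum pairwise distance."""
    diam = []
    for k in range(len(track) - window + 1):
        pts = track[k:k + window]
        diam.append(max(math.dist(p, q) for p, q in combinations(pts, 2)))
    return diam
```

A confined (passive) phase produces small diameters and an active transport phase produces large ones, so thresholding `diam` segments the trajectory into phases; because each value aggregates a whole window, isolated noisy positions perturb it little.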
155

Parallel algorithms for tracking of particles

Bonnier, Florent 12 December 2018 (has links)
Particle tracking methods are widely used in fluid mechanics for their unique ability to reconstruct long trajectories with high spatial and temporal resolution. Indeed, many industrial applications involving gas-particle flows, such as aeronautical turbines, use an Euler-Lagrange formalism. The rapid growth in the computing power of massively parallel machines, and the arrival of petaflops-scale systems, open a new path for simulations that were prohibitive only a decade ago. Implementing an efficient parallel code that maintains good performance on a large number of processors must be studied, with particular attention to load balancing across processors. Moreover, careful attention to data structures is needed to preserve the simplicity, portability and adaptability of the code across different architectures and different problems using a Lagrangian approach; some algorithms therefore have to be redesigned to take these constraints into account. The computing power required to solve these problems is offered by new distributed architectures with a large number of cores. However, exploiting these architectures efficiently is a delicate task that requires mastery of the target architectures, of the associated programming models and of the intended applications. The complexity of these new generations of distributed architectures is essentially due to a very large number of multi-core nodes; these nodes, or some of them, can be heterogeneous and sometimes remote.
The approach of most parallel libraries (PBLAS, ScaLAPACK, P_ARPACK) is to implement distributed versions of their basic operations, which means that the subroutines of these libraries cannot adapt their behaviour to the data types: they must be defined once for sequential use and again for the parallel case. A component-based approach provides modularity and extensibility for some numerical libraries (such as PETSc) while offering reuse of sequential and parallel code. This recent approach to modeling sequential/parallel numerical libraries is very promising thanks to its reuse possibilities and lower maintenance cost. In industrial applications, the need for software-engineering techniques in scientific computing, of which reusability is one of the most important elements, is increasingly evident; however, these techniques are not yet mastered and the models are not yet well defined. The search for methodologies for designing and building reusable libraries is motivated, among other things, by the needs of industry in this field. The main objective of this thesis is to define strategies for designing a parallel numerical library for Lagrangian particle tracking using a component-based approach. These strategies should allow reuse of the sequential code in the parallel versions while allowing performance optimization. The study should be based on a separation between control flow and data-flow management, and should extend to parallelism models that exploit a large number of cores in shared and distributed memory.
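The reuse strategy, a sequential kernel left unchanged inside a parallel driver, can be sketched as follows. This is illustrative only: the thesis targets MPI-style distributed memory, which is mimicked here by a plain partition loop, and all names are hypothetical:

```python
def advect(positions, velocity, dt):
    """Sequential kernel: explicit Euler step for a batch of particles.
    This is the piece a component approach reuses unchanged in parallel."""
    return [x + velocity(x) * dt for x in positions]

def advect_partitioned(positions, velocity, dt, nparts):
    """Parallel-driver sketch: split particles into balanced blocks
    (load balancing), apply the same sequential kernel per block,
    then gather the results in order."""
    n = len(positions)
    bounds = [n * i // nparts for i in range(nparts + 1)]
    out = []
    for i in range(nparts):
        out.extend(advect(positions[bounds[i]:bounds[i + 1]], velocity, dt))
    return out
```

The point of the component approach is that `advect` is written once; only the driver (partitioning, communication, gathering) changes between the sequential and parallel versions.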
156

The Hydrodynamic Interaction of Two Small Freely-moving Particles in a Couette Flow of a Yield Stress Fluid

Firouznia, Mohammadhossein January 2017 (has links)
No description available.
157

A Combined Microscopy and Spectroscopy Approach to Study Membrane Biophysics

Kohram, Maryam 15 September 2015 (has links)
No description available.
158

MAGNETIC TWEEZERS: ACTUATION, MEASUREMENT, AND CONTROL AT NANOMETER SCALE

Zhang, Zhipeng 03 September 2009 (has links)
No description available.
159

Three-dimensional hydrodynamic models coupled with GIS-based neuro-fuzzy classification for assessing environmental vulnerability of marine cage aquaculture

Navas, Juan Moreno January 2010 (has links)
There is considerable opportunity to develop new modelling techniques within a Geographic Information Systems (GIS) framework for the development of sustainable marine cage culture. However, the spatial data sets are often uncertain and incomplete, so new spatial models employing "soft computing" methods such as fuzzy logic may be more suitable. The aim of this study is to develop a model using neuro-fuzzy techniques in a 3D GIS (ArcView 3.2) to predict coastal environmental vulnerability for Atlantic salmon cage aquaculture. A 3D hydrodynamic model (3DMOHID) coupled to a particle-tracking model is applied to study the circulation patterns, dispersion processes and residence time in Mulroy Bay, Co. Donegal, Ireland, an Irish fjard (shallow fjordic system): a geometrically complex area of restricted exchange with important aquaculture activities. The hydrodynamic model was calibrated and validated by comparison with sea surface and water flow measurements. The model provided spatial and temporal information on circulation and renewal time, helping to determine the influence of winds on circulation patterns and, in particular, to assess the hydrographic conditions that strongly influence the management of fish cage culture. The particle-tracking model was used to study the transport and flushing processes. Instantaneous massive releases of particles from key boxes are modelled to analyse the ocean-fjord exchange characteristics and, by emulating discharge from finfish cages, to show the behaviour of waste in terms of water circulation and water exchange. In this study the results from the hydrodynamic model have been incorporated into GIS to provide an easy-to-use graphical user interface for 2D (maps), 3D and temporal visualization (animations), and for interrogation of results.
Data on the physical environment and aquaculture suitability were derived from the 3-dimensional hydrodynamic model and GIS for incorporation into the final model framework, and included mean and maximum current velocities, current flow quiescence time, water column stratification, sediment granulometry, particulate waste dispersion distance, oxygen depletion, water depth, coastal protection zones, and slope. The neuro-fuzzy classification model NEFCLASS-J was used to develop learning algorithms to create the structure (rule base) and the parameters (fuzzy sets) of a fuzzy classifier from a set of classified training data. A total of 42 training sites were sampled using stratified random sampling from the GIS raster data layers, and the vulnerability categories for each were manually classified into four categories based on the opinions of experts with field experience and specific knowledge of the environmental problems investigated. The final products, GIS-based neuro-fuzzy maps, were achieved by combining modelled and real environmental parameters relevant to marine finfish aquaculture. Environmental vulnerability models based on neuro-fuzzy techniques showed sensitivity to the membership shapes of the fuzzy sets, the nature of the weightings applied to the model rules, and the validation techniques used during the learning and validation process. The accuracy of the final classifier selected was R = 85.71% (estimated error value of ±16.5% from cross validation, N = 10), with a Kappa coefficient of agreement of 81%. Unclassified cells in the whole spatial domain (of 1623 GIS cells) ranged from 0% to 24.18%. A statistical comparison between vulnerability scores and a significant product of aquaculture waste (nitrogen concentrations in sediment under the salmon cages) showed that the final model gave a good correlation between predicted environmental vulnerability and sediment nitrogen levels, highlighting a number of areas with variable sensitivity to aquaculture.
Further evaluation and analysis of the quality of the classification were carried out, and the applicability of separability indexes was also studied. The inter-class separability estimations were performed on two different training data sets to assess the difficulty of the class separation problem under investigation. The neuro-fuzzy classifier for a supervised and hard classification of coastal environmental vulnerability has demonstrated an ability to derive an accurate and reliable classification into areas of different levels of environmental vulnerability using a minimal number of training sets. The output is an environmental spatial model for application in coastal areas, intended to facilitate policy decisions and to allow input into wider-ranging spatial modelling projects, such as coastal zone management systems and effective environmental management of fish cage aquaculture.
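The neuro-fuzzy classification stage rests on fuzzy membership functions combined by rules. A minimal sketch with triangular memberships follows; it is purely illustrative, as the variable names, breakpoints and the single rule below are invented rather than NEFCLASS-J's learned parameters:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, rising to 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def vulnerability(current_speed, depth):
    """Toy rule: vulnerability is high when currents are slow AND the
    water is shallow; min() acts as the fuzzy AND operator."""
    slow = tri(current_speed, 0.0, 0.05, 0.15)   # m/s, assumed breakpoints
    shallow = tri(depth, 0.0, 10.0, 30.0)        # m, assumed breakpoints
    return min(slow, shallow)
```

In NEFCLASS-style training, both the rule base and the membership breakpoints are learned from the classified training sites rather than fixed by hand as here.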
160

Particle diffusion in protein gels and at interfaces

Balakrishnan Nair, Gireeshkumar 14 March 2012 (has links)
The objective of the thesis was to investigate the mobility of tracer particles in complex media by Confocal Laser Scanning Microscopy (CLSM) combined with multiple particle tracking (MPT) and fluorescence recovery after photobleaching (FRAP). First, we investigated the diffusion of tracer particles in gels formed by globular proteins. Gels with a variety of structures were prepared by varying the protein and salt concentrations. The structure was characterized by analysis of the CLSM images in terms of the pair correlation function. The mobility of particles with a broad range of sizes (2 nm - 1 μm) was investigated both in homogeneous and heterogeneous gels and related to the gel structure. Second, we studied water-in-water emulsions prepared by mixing aqueous solutions of PEO and dextran. It is shown that when colloidal particles are added, they become trapped at the water-water interface because they reduce the interfacial tension. The structure and the displacement of the particles at the interface were determined using CLSM combined with MPT.
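MPT analyses typically reduce each tracked trajectory to a time-averaged mean-squared displacement (MSD) curve, whose scaling with lag time distinguishes free diffusion from hindered or directed motion. A minimal sketch is given below; the `msd` helper is illustrative, not the software used in the thesis:

```python
def msd(track, max_lag):
    """Time-averaged mean-squared displacement of one 2D trajectory:
    MSD(lag) = <|r(t+lag) - r(t)|^2>, averaged over all start times t."""
    out = []
    for lag in range(1, max_lag + 1):
        d2 = [(track[t + lag][0] - track[t][0]) ** 2 +
              (track[t + lag][1] - track[t][1]) ** 2
              for t in range(len(track) - lag)]
        out.append(sum(d2) / len(d2))
    return out
```

For free diffusion the MSD grows linearly with lag, for particles trapped in gel pores it plateaus, and for ballistic motion it grows as lag squared.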
