About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Network capacity sharing with QoS as a financial derivative pricing problem : algorithms and network design

Rasmusson, Lars January 2002 (has links)
A design of an automatic network capacity market, often referred to as a bandwidth market, is presented. Three topics are investigated. First, a network model is proposed. The proposed model is based upon a trisection of the participant roles into network users, network owners, and market middlemen. The network capacity is defined in a way that allows it to be traded and to have a well-defined price. The network devices are modeled as core nodes, access nodes, and border nodes. Requirements on these are given, and it is shown how their functionalities can be implemented in a network. Second, a simulated capacity market is presented, and a statistical method for estimating the price dynamics in the market is proposed. A method for pricing network services based on shared capacity is proposed, in which the price of a service is equivalent to that of a financial derivative contract on a number of simple capacity shares. Third, protocols for the interaction between the participants are proposed. The market participants need to commit to contracts with an auditable protocol with a small overhead. The proposed protocol is based on a public-key infrastructure and on known protocols for multi-party contract signing. The proposed model allows network capacity to be traded in a manner that utilizes the network efficiently. A new feature of this market model, compared to other network capacity markets, is that the prices are not controlled by the network owners. It is the end-users who, via middlemen, trade capacity among each other. Therefore, financial, rather than control-theoretic, methods are used for the pricing of capacity.

Keywords: Computer network architecture, bandwidth trading, inter-domain Quality-of-Service, pricing, combinatorial allocation, financial derivative pricing, stochastic modeling
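The thesis prices a shared-capacity service as a derivative contract on simple capacity shares. As a rough illustration of that style of computation — not the author's estimated price dynamics — here is a minimal Monte Carlo sketch that prices a European-style claim on a single capacity share, assuming, purely for illustration, that the share price follows geometric Brownian motion; the strike, rate, and volatility are placeholders.

```python
import numpy as np

def mc_capacity_claim_price(s0, strike, r, sigma, T, n_paths=100_000, seed=0):
    """Monte Carlo price of a European call on one capacity share.

    Assumes the share price follows geometric Brownian motion, an
    illustrative stand-in for the estimated capacity-price dynamics.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal share price under risk-neutral GBM dynamics.
    s_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    # Discounted expected payoff of the capacity claim.
    payoff = np.maximum(s_T - strike, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Example: a claim on a share quoted at 10 units, strike 12,
# 5% rate, 40% volatility, 30-day horizon.
print(mc_capacity_claim_price(s0=10.0, strike=12.0, r=0.05, sigma=0.4, T=30/365))
```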
62

Stochastic modelling of large cavities : random and coherent field applications

Cozza, Andrea 28 September 2012 (has links) (PDF)
Although often presented as atypical configurations, diffusive media make up a large share of the media in which both electromagnetic and acoustic waves propagate. Since large cavities can closely approximate these same characteristics, they are widely used in metrology to emulate a large number of practical configurations and to assess certain properties of electronic, acoustic, and optical devices. We focus on the statistical modeling of wave propagation in large cavities. The common practice of modeling the fields in a cavity as diffuse is analyzed first, showing that this assumption is unrealistic at low frequency, together with the consequences that follow from it. The importance of modal overlap and its random nature are discussed, showing that the diffuse-field assumption cannot be treated as a deterministic property. In a second step, we study applications of time reversal to large cavities, which leads to the introduction of a generalized technique capable of reproducing coherent wavefronts in a diffusive environment, typically regarded as capable of supporting only random propagation.
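The low-frequency breakdown of the diffuse-field assumption mentioned above is usually quantified through the modal overlap factor. The sketch below evaluates it from the standard Weyl estimate of the electromagnetic modal density of a cavity, giving M(f) = 8πVf³/(c³Q); the cavity volume and composite quality factor are illustrative placeholders, not values from the thesis.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def modal_overlap(f, volume, q_factor):
    """Modal overlap factor of an electromagnetic cavity.

    Uses the Weyl estimate of the modal density, n(f) = 8*pi*V*f^2/c^3,
    multiplied by the average modal bandwidth f/Q.
    """
    return 8.0 * np.pi * volume * f**3 / (C0**3 * q_factor)

# Illustrative 100 m^3 reverberation chamber with Q = 10_000: the
# overlap is well below 1 (isolated modes) at 100 MHz and well
# above 1 (diffuse regime) at 1 GHz.
for f in (100e6, 300e6, 1e9):
    print(f"{f/1e6:6.0f} MHz -> M = {modal_overlap(f, 100.0, 1e4):.2f}")
```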
63

Linear and non-linear mechanistic modeling and simulation of the formation of carbon adsorbents

Argoti Caicedo, Alvaro Andres January 1900 (has links)
Doctor of Philosophy / Department of Chemical Engineering / Liang T. Fan / Walter P. Walawender Jr / Carbon adsorbents, namely activated carbons and carbon molecular sieves, can be applied in various ways to the purification and separation of gaseous and liquid mixtures, e.g., in the separation of nitrogen or oxygen from air; often, carbon adsorbents also serve as catalysts or catalyst supports. The formation of carbon adsorbents entails modifying the original internal surfaces of carbonaceous substrates by a variety of chemical or physical methods, thereby augmenting the substrates' adsorbing capacity. The formation of carbon adsorbents proceeds randomly, which is mainly attributable to the discrete nature, mesoscopic sizes, and irregular shapes of the substrates utilized, as well as to their intricate internal surface configuration. Moreover, the fluctuations of any carbon-adsorbent formation process can grow increasingly severe with time. It is desirable that such a process, involving discrete and mesoscopic entities undergoing complex motion and behavior, be explored within a statistical framework, i.e., a probabilistic paradigm. This work aims at the probabilistic analysis, modeling, and simulation of the formation of carbon adsorbents on the basis of mechanistic rate expressions. Specifically, the current work has formulated a set of linear and non-linear models of varied complexity; derived the governing equations of the models formulated; obtained analytical solutions of the governing equations whenever possible; simulated one of the models by the Monte Carlo method; and validated the results of solution and simulation against the available experimental data for carbon-adsorbent formation from carbonaceous substrates, e.g., biomass or coal, or against simulated data sampled from a probability distribution. The results from this work are expected to be useful in establishing manufacturing processes for carbon adsorbents: for instance, they can be adopted in planning bench-scale or pilot-scale experiments, in the preliminary design and economic analysis of production facilities, and in devising strategies for operating and controlling such facilities.
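As a minimal sketch of the kind of Monte Carlo simulation mentioned above — under assumptions of our own, not the dissertation's mechanistic models — active sites on a carbonaceous substrate can be created as a pure-birth Markov process with a linear mechanistic rate; repeated realizations expose the mean trajectory and the fluctuations around it. The rate law and parameters are placeholders.

```python
import numpy as np

def simulate_site_formation(n_max, k, t_end, seed):
    """One realization of a pure-birth process for active-site formation.

    Sites form at rate k * (n_max - n): a linear mechanistic rate that
    saturates as the substrate's capacity n_max is exhausted.
    """
    rng = np.random.default_rng(seed)
    t, n = 0.0, 0
    times, counts = [0.0], [0]
    while n < n_max:
        rate = k * (n_max - n)
        t += rng.exponential(1.0 / rate)   # waiting time to the next event
        if t > t_end:
            break
        n += 1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

# Mean and spread of the final conversion over 50 realizations,
# illustrating the fluctuations that accompany the mean trajectory.
finals = [simulate_site_formation(1000, 0.05, 20.0, s)[1][-1] for s in range(50)]
print(np.mean(finals), np.std(finals))
```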
64

Générateurs de suites binaires vraiment aléatoires : modélisation et implantation dans des cibles FPGA / True random numbers generators : modelisation and implementation in FPGA

Valtchanov, Boyan 14 December 2010 (has links)
This thesis addresses the generation of random binary sequences in FPGAs, and especially sequences whose randomness is physical rather than algorithmic in origin. Such sequences are used abundantly in most cryptographic protocols. A state of the art covering the various methods for generating true randomness in programmable logic is presented as a critical analysis of the scientific literature, together with a synthesis of the different trends in randomness extraction and generation. A campaign of experiments and measurements is presented that characterizes the different sources of random signals available inside an FPGA. Interesting phenomena are analyzed, such as the locking of several ring oscillators, the dependence of the randomness source on the surrounding logic activity, and the jitter measurement methodology. Several new methods for generating random binary sequences are described. Finally, a new methodology for simulating complete generators in VHDL and a mathematical model of a ring oscillator as a source of randomness are presented.
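As a toy model of the jitter-based randomness extraction studied above — a sketch under simplified assumptions, not the thesis's generator — a ring oscillator's phase can be advanced between samples by a fixed number of cycles plus Gaussian noise, with the sampled bit taken from the oscillator's state at each sampling instant. All timing parameters are invented.

```python
import numpy as np

def ro_trng_bits(n_bits, cycles_per_sample, accum_jitter_cycles, seed=1):
    """Sample bits from a jittery ring oscillator (toy phase model).

    Between samples the oscillator advances a fixed number of cycles
    plus Gaussian phase noise (in cycles); the bit is the half-period
    the oscillator is in at the sampling instant. Accumulated jitter,
    not the nominal frequency, makes the sampled bit unpredictable.
    """
    rng = np.random.default_rng(seed)
    noise = accum_jitter_cycles * rng.standard_normal(n_bits)
    phase = np.cumsum(cycles_per_sample + noise)   # phase in cycles
    return (np.mod(phase, 1.0) < 0.5).astype(int)

# ~333.4 oscillator cycles per sampling period, with ~2 cycles of
# accumulated jitter between consecutive samples:
bits = ro_trng_bits(10_000, cycles_per_sample=333.4, accum_jitter_cycles=2.0)
print(bits.mean())  # hovers near 0.5 once jitter spans several cycles
```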
65

Explicit algebraic subgrid-scale stress and passive scalar flux modeling in large eddy simulation

Rasam, Amin January 2011 (has links)
The present thesis deals with a number of challenges in the field of large eddy simulation (LES). These include the performance of subgrid-scale (SGS) models at fairly high Reynolds numbers and coarse resolutions, as well as passive scalar and stochastic modeling in LES. Fully developed turbulent channel flow is used as the test case for these investigations. The advantage of this particular test case is that highly accurate pseudo-spectral methods can be used for the discretization of the governing equations; in the absence of discretization errors, a better understanding of the SGS model performance can be achieved. Moreover, turbulent channel flow is a challenging test case for LES, since it shares the important features common to all wall-bounded turbulent flows. Most commonly used eddy-viscosity-type models are suitable for moderately to highly resolved LES cases, where the unresolved scales are approximately isotropic; this, however, makes simulations of high-Reynolds-number wall-bounded flows computationally expensive. In contrast, the explicit algebraic (EA) model takes into account the anisotropy of the SGS motions and performs well in predicting the flow statistics in coarse-grid LES cases. LES of high-Reynolds-number wall-bounded flows can therefore be performed with a much smaller number of grid points than other models require. A demonstration of the resolution requirements for the EA model, in comparison with the dynamic Smagorinsky model and its high-pass filtered version, is given in this thesis for a fairly high Reynolds number. One of the shortcomings of the commonly used eddy diffusivity model arises from its assumption that the SGS scalar flux vector is aligned with the resolved scalar gradients; SGS scalar flux models that overcome this issue are few. Using the same methodology that led to the EA SGS stress model, a new explicit algebraic SGS scalar flux model is developed, which allows the SGS scalar fluxes to be partially independent of the resolved scalar gradient. The model predictions are verified and found to improve the scalar statistics in comparison with the eddy diffusivity model. The intermittent nature of the energy transfer between the large and small scales of turbulence is often not fully taken into account in the formulation of SGS models, for both velocity and scalar. Using the Langevin stochastic differential equation, the EA models are extended to incorporate random variations in their predictions, which lead to a reasonable amount of backscatter of energy from the SGS to the resolved scales. The stochastic EA models improve the predictions of the SGS dissipation by decreasing its length scale and improving the shape of its probability density function. / QC 20110615
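The backscatter mechanism described above — random fluctuations that occasionally reverse the sign of the modeled SGS dissipation — can be illustrated with a generic Langevin (Ornstein-Uhlenbeck) toy of our own, not the stochastic EA model itself; mean, time scale, and noise level are arbitrary.

```python
import numpy as np

def ou_dissipation(n_steps, dt, mean_eps, tau, noise_sd, seed=0):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck process:

        d(eps) = -(eps - mean_eps)/tau * dt + noise_sd * dW

    Fluctuations around the mean SGS dissipation occasionally drive it
    negative, i.e. backscatter from subgrid to resolved scales.
    """
    rng = np.random.default_rng(seed)
    eps = np.empty(n_steps)
    eps[0] = mean_eps
    for i in range(1, n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()
        eps[i] = eps[i-1] - (eps[i-1] - mean_eps) / tau * dt + noise_sd * dw
    return eps

eps = ou_dissipation(n_steps=50_000, dt=1e-3, mean_eps=1.0, tau=0.1, noise_sd=5.0)
print("backscatter fraction:", np.mean(eps < 0.0))
```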
66

Floodplain Risk Analysis Using Flood Probability and Annual Exceedance Probability Maps

Smemoe, Christopher M. 18 March 2004 (has links) (PDF)
This research presents two approaches to determining the effects of natural variability and model uncertainty on the extents of computed floodplain boundaries. The first approach represents the floodplain boundary as a spatial map of flood probabilities -- with values between 0 and 100%. Instead of representing the floodplain boundary at a certain recurrence interval as a single line, this approach creates a spatial map that shows the probability of flooding at each point in the floodplain. This flood probability map is a useful tool for visualizing the uncertainty of a floodplain boundary. However, engineers are still required to determine a single line showing the boundary of a floodplain for flood insurance and other floodplain studies. The second approach to determining the effects of uncertainty on a floodplain boundary computes the annual exceedance probability (AEP) at each point on the floodplain. This spatial map of AEP values represents the flood inundation probability for any point on the floodplain in any given year. One can determine the floodplain boundary at any recurrence interval from this AEP map. These floodplain boundaries include natural variability and model uncertainty inherent in the modeling process. The boundary at any recurrence interval from the AEP map gives a single, definite boundary that considers uncertainty. This research performed case studies using data from Leith Creek in North Carolina and the Virgin River in southern Utah. These case studies compared a flood probability map for a certain recurrence interval with an AEP map and demonstrated the consistency of the results from these two methods. Engineers and planners can use floodplain probability maps for viewing the uncertainty of a floodplain boundary at a certain recurrence interval. They can also use AEP maps for determining a single boundary for a certain recurrence interval that considers all the natural variability and model uncertainty inherent in the modeling process.
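A minimal sketch of the AEP-map idea, with made-up inputs rather than the thesis's hydraulic models: each Monte Carlo trial draws one annual peak stage plus a model-uncertainty perturbation, a grid cell is flooded when the stage exceeds its elevation, and the per-cell flooded fraction estimates the annual exceedance probability. A boundary at any recurrence interval is then a contour of that probability field.

```python
import numpy as np

def aep_map(terrain, n_years=20_000, seed=0):
    """Monte Carlo annual exceedance probability per grid cell.

    Each trial draws an annual peak stage (lognormal, illustrative)
    plus a model-uncertainty perturbation; a cell is inundated when
    the stage exceeds its elevation. AEP is the flooded fraction.
    """
    rng = np.random.default_rng(seed)
    flooded = np.zeros_like(terrain, dtype=float)
    for _ in range(n_years):
        stage = rng.lognormal(mean=0.3, sigma=0.4)   # annual peak stage, m
        model_err = rng.normal(0.0, 0.15)            # hydraulic model error, m
        flooded += (terrain < stage + model_err)
    return flooded / n_years

# Toy 1-D valley cross-section: elevations rising away from the channel.
terrain = np.abs(np.linspace(-4.0, 4.0, 41))
aep = aep_map(terrain)
print(np.round(aep, 3))  # the 1% (100-year) boundary is the aep == 0.01 contour
```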
67

Engineering Healthcare Delivery: A Systems Engineering Approach to Improving Trauma Center Nursing Efficacy

Myers, Robert A. January 2016 (has links)
No description available.
68

Hybrid Modeling and Simulation of Stochastic Effects on Biochemical Regulatory Networks

Ahmadian, Mansooreh 04 August 2020 (has links)
A complex network of genes and proteins governs the robust progression through cell cycles in the presence of inevitable noise. Stochastic modeling is viewed as a key paradigm to study the effects of intrinsic and extrinsic noise on the dynamics of biochemical networks. A detailed quantitative description of such complex and multiscale networks via stochastic modeling poses several challenges. First, stochastic models generally require extensive computations, particularly when applied to large networks. Second, the accuracy of stochastic models is highly dependent on the quality of the parameter estimation based on experimental observations. The goal of this dissertation is to address these problems by developing new, efficient methods for the modeling and simulation of stochastic effects in biochemical systems. In particular, a hybrid stochastic model is developed to represent a detailed molecular mechanism of cell cycle control in budding yeast cells. In a single multiscale model, the proposed hybrid approach combines the advantages of two regimes: 1) the computational efficiency of a deterministic approach, and 2) the accuracy of stochastic simulations. The results show that this hybrid stochastic model achieves high computational efficiency while generating simulation results that match published experimental measurements very well. Furthermore, a new hierarchical deep classification (HDC) algorithm is developed to address the parameter estimation problem in a monomolecular system. The HDC algorithm adopts a neural network that, via multiple hierarchical search steps, finds reasonably accurate ranges for the model parameters. To train the neural network despite the scarcity of experimental data, the proposed method leverages domain knowledge from stochastic simulations to generate labeled training data. The results show that the proposed HDC algorithm yields accurate ranges for the model parameters and highlight the potential of model-free learning for parameter estimation in the stochastic modeling of complex biochemical networks. / Doctor of Philosophy / The cell cycle is a process in which a growing cell replicates its DNA and divides into two cells. Progression through the cell cycle is regulated by complex interactions between networks of genes, transcripts, and proteins. These interactions inside the confined volume of a cell are subject to inherent noise. To provide a quantitative description of the cell cycle, several deterministic and stochastic models have been developed. However, deterministic models cannot capture the intrinsic noise, and stochastic modeling poses challenges of its own: stochastic models generally require extensive computations, particularly when applied to large networks, and their accuracy depends strongly on the accuracy of the estimated model parameters. The goal of this dissertation is to address these challenges by developing new, efficient methods for the modeling and simulation of stochastic effects in biochemical networks. The results show that the proposed hybrid model, which combines stochastic and deterministic modeling approaches, achieves high computational efficiency while generating accurate simulation results. Moreover, a new machine-learning-based method is developed to address the parameter estimation problem in biochemical systems. The results show that the proposed method yields accurate ranges for the model parameters and highlight the potential of model-free learning for parameter estimation in the stochastic modeling of complex biochemical networks.
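A sketch of the stochastic half of such a hybrid scheme, with an invented two-reaction gene network rather than the dissertation's budding-yeast model: Gillespie's stochastic simulation algorithm (SSA) advances the low-copy-number species event by event; in a hybrid approach, the abundant species would instead be integrated deterministically between these events.

```python
import numpy as np

def gillespie_gene(k_on, k_off, k_syn, k_deg, t_end, seed=0):
    """Gillespie SSA for a two-state gene producing a low-copy protein.

    Reactions: gene switching on/off, protein synthesis (only while the
    gene is on), and protein degradation. In a hybrid scheme, abundant
    species would be advanced by ODEs between these stochastic events.
    """
    rng = np.random.default_rng(seed)
    t, gene_on, protein = 0.0, 0, 0
    while True:
        rates = np.array([
            k_on * (1 - gene_on),   # gene switches on
            k_off * gene_on,        # gene switches off
            k_syn * gene_on,        # protein synthesis
            k_deg * protein,        # protein degradation
        ])
        total = rates.sum()
        if total == 0.0:
            break
        dt = rng.exponential(1.0 / total)
        if t + dt > t_end:
            break
        t += dt
        event = rng.choice(4, p=rates / total)
        if event == 0:
            gene_on = 1
        elif event == 1:
            gene_on = 0
        elif event == 2:
            protein += 1
        else:
            protein -= 1
    return protein

print([gillespie_gene(0.1, 0.1, 5.0, 0.5, t_end=100.0, seed=s) for s in range(5)])
```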
69

Lagrangian stochastic modeling of turbulent gas-solid flows with two-way coupling in homogeneous isotropic turbulence / Modélisation lagrangienne stochastique des écoulements gaz-solides turbulents avec couplage inverse en turbulence homogène isotrope stationnaire

Zeren, Zafer 29 October 2010 (has links)
In this thesis, carried out at IMFT, we are interested in turbulent gas-solid flows and, more specifically, in turbulence modulation: the modification of the structure of the turbulence by the solid particles. This mechanism is crucial in flows with high particle mass loadings. We consider a homogeneous isotropic turbulence without gravity, kept stationary by stochastic forcing. Discrete particles are tracked individually in a Lagrangian manner, and the turbulence of the carrier phase is obtained by DNS. The particles are spherical, rigid, and of a diameter smaller than the smallest scales of the turbulence; their density is very large compared with that of the fluid. In this configuration the only force acting on the particles is drag. The volume fraction of the particles is very small, and inter-particle interactions are not considered. To model this type of flow, a stochastic approach is used in which the fluid-element acceleration is modeled by a stochastic Langevin equation. The originality of this work is an additional term in the stochastic equation that integrates the effect of the particles on the trajectory of the fluid elements. Two models for this term are proposed: a mean drag force, defined from the mean velocities in the mean transport equations of both phases, and an instantaneous drag force, written with the help of the Mesoscopic Eulerian Formalism. The closure of the models is based on the Lagrangian autocorrelation function of the fluid velocity and on the transport equation of the fluid kinetic energy. The models are tested on their predictions of the fluid-particle correlations and the fluid-particle turbulent drift velocity. The results show that the mean model, though simpler, captures the principal physical mechanism of turbulence modulation; the practical closure problem, however, is shifted to modeling the Lagrangian integral scale and the kinetic energy of the fluid turbulence viewed by the particles.
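A heavily simplified, one-dimensional sketch of the modeling idea: the fluid velocity seen by a particle follows a Langevin equation, the particle responds through linear drag, and a mean-drag feedback term of our own invention returns the momentum exchange to the fluid equation. The parameters and closure here are illustrative, not those derived in the thesis.

```python
import numpy as np

def langevin_two_way(n_steps, dt, t_lag, sigma_u, tau_p, phi, seed=0):
    """1-D Langevin model of the fluid velocity seen by a particle,
    with a mean-drag feedback term (two-way coupling), integrated by
    Euler-Maruyama.

    u   : fluid velocity seen by the particle (OU / Langevin process)
    v   : particle velocity, relaxing to u with response time tau_p
    phi : mass loading weighting the momentum exchange fed back to u
    """
    rng = np.random.default_rng(seed)
    u = np.zeros(n_steps)
    v = np.zeros(n_steps)
    for i in range(1, n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()
        drag = (u[i-1] - v[i-1]) / tau_p
        # Langevin equation plus mean-drag back-reaction on the fluid.
        u[i] = u[i-1] + (-u[i-1] / t_lag - phi * drag) * dt \
               + sigma_u * np.sqrt(2.0 / t_lag) * dw
        v[i] = v[i-1] + drag * dt
    return u, v

u, v = langevin_two_way(100_000, dt=1e-3, t_lag=0.2, sigma_u=1.0,
                        tau_p=0.05, phi=0.3)
print("fluid-particle velocity correlation:", np.corrcoef(u, v)[0, 1])
```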
70

Générateurs de nombres véritablement aléatoires à base d'anneaux asynchrones : conception, caractérisation et sécurisation / Ring oscillator based true random number generators : design, characterization and security

Cherkaoui, Abdelkarim 16 June 2014 (has links)
True random number generators (TRNGs) are crucial components in sensitive cryptographic applications (generation of encryption keys, DSA signatures, etc.). Because they are very low-level primitives, a flaw in the TRNG can jeopardize the security of the whole cryptographic system that uses it. While many TRNG principles exist in the literature, few works rigorously analyze these architectures in terms of security. The objective of this thesis was to study the advantages of asynchronous design techniques for building true random number generators that are secure and robust. We focused in particular on digital oscillators called self-timed rings (STRs), which use a request-acknowledgement handshake protocol to organize the propagation of data. Exploiting some unique properties of STRs, we propose a new TRNG principle, with a detailed theoretical study of its behavior and an evaluation of the TRNG core in ASIC and FPGA targets. We demonstrate that this new principle not only generates random bit sequences of very high quality at a very high throughput (> 100 Mbit/s), but also enables realistic modeling of the entropy of the output bits (this entropy level can be tuned via the entropy extractor parameters). This work also proposes a complete methodology to design the TRNG, to dimension it with respect to the level of noise in the circuit, and to secure it against attacks and failures.
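The entropy-per-bit claim above can be sanity-checked on an idealized output stream with a plain block-entropy estimator. The sketch below estimates Shannon entropy per bit from block frequencies of a simulated, slightly biased bitstream; it is a generic estimator, not the thesis's stochastic model of the self-timed ring.

```python
import numpy as np

def entropy_per_bit(bits, block=8):
    """Shannon entropy per bit estimated from block frequencies.

    Chops the stream into non-overlapping `block`-bit words, builds
    their empirical distribution, and divides the word entropy by the
    block length. Biased low for short streams; use plenty of data.
    """
    n_blocks = len(bits) // block
    words = np.reshape(bits[: n_blocks * block], (n_blocks, block))
    values = words @ (1 << np.arange(block))          # word -> integer
    counts = np.bincount(values, minlength=1 << block)
    p = counts[counts > 0] / n_blocks
    return -(p * np.log2(p)).sum() / block

rng = np.random.default_rng(0)
biased = (rng.random(1_000_000) < 0.52).astype(np.uint8)  # 52% ones
print(entropy_per_bit(biased))  # close to, but below, 1 bit per bit
```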
