51

Stochastic multi-market modeling with "efficient quadratures"

Oreamuno, Marco Antonio Artavia 17 February 2014 (has links)
Recently, stochastic applications of large-scale applied simulation models of agricultural markets have become more common. However, stochastic modeling with large market models incurs high computational and management costs for data storage, analysis and manipulation. Gaussian Quadratures (GQ) are efficient sampling methods requiring few points to approximate the central moments of the joint probability distribution of stochastic variables, and therefore reduce computational costs. For symmetric regions of integration, the vertices of Stroud's n-octahedron (Stroud 1957) are degree-3 formulas with a minimal number of points, which can make stochastic modeling with large economic models manageable. However, the conjecture exists that rotations of Stroud's n-octahedron may affect the accuracy with which the model results are approximated. To address this, eight different rotations (quadrature formulas) were tested using the European Simulation Model (ESIM). It was found that the choice between the formulas of Artavia et al. (2009) and Arndt (1996) in the generation of the quadratures is crucial, and that Arndt's formula yields higher accuracy. With the rotation obtained from Arndt's formula, and in models or markets with strong asymmetries, as is the case for soft wheat in ESIM, the arrangement of the stochastic variables (A1 or A2) in the covariance matrix or the method selected to induce the covariance matrix (via Cholesky decomposition, C, or via the diagonalization method, D) may have a significant effect on the accuracy of the quadratures. With Arndt's formula and less asymmetric markets, as is the case for rapeseed in ESIM, the choice of arrangement A1 or A2 and of the covariance-induction method C or D might not have a significant effect on the accuracy of the quadratures.
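A minimal sketch, assuming a random orthogonal rotation and a made-up 3x3 covariance matrix rather than the ESIM covariance structure or the specific Artavia/Arndt rotation formulas, of how degree-3 quadrature points based on the vertices of Stroud's n-octahedron can be generated and mapped onto correlated stochastic variables via a Cholesky decomposition:

    import numpy as np

    def octahedron_points(n):
        # Vertices of the scaled n-octahedron: 2n equally weighted points that
        # integrate all polynomials of degree <= 3 exactly under N(0, I_n).
        points = np.vstack([np.sqrt(n) * np.eye(n), -np.sqrt(n) * np.eye(n)])
        weights = np.full(2 * n, 1.0 / (2 * n))
        return points, weights

    def rotate(points, seed=0):
        # Any orthogonal rotation preserves degree-3 exactness for the Gaussian
        # weight; different rotations simply give different point sets.
        rng = np.random.default_rng(seed)
        n = points.shape[1]
        q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        return points @ q.T

    def induce_covariance(points, mean, cov):
        # Map N(0, I) quadrature points to N(mean, cov) via the Cholesky factor.
        chol = np.linalg.cholesky(cov)
        return mean + points @ chol.T

    # Illustrative 3-variable example (hypothetical mean and covariance)
    mean = np.array([100.0, 50.0, 80.0])
    cov = np.array([[25.0, 6.0, 2.0],
                    [6.0, 16.0, 1.5],
                    [2.0, 1.5, 9.0]])

    x, w = octahedron_points(3)
    x = rotate(x)                        # one of many admissible rotations
    y = induce_covariance(x, mean, cov)

    # The weighted points reproduce the target mean and covariance exactly
    print(np.allclose(w @ y, mean))
    print(np.allclose(((y - mean).T * w) @ (y - mean), cov))

Because every rotation reproduces the first two moments exactly, differences between rotations can only show up through the model's nonlinearity and higher-order structure, which is the effect the study examines.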
52

Travel time modelling for pumping tests in heterogeneous soil : a tracer experiment in a glaciofluvial deposit

Lönnerholm, Björn January 2006 (has links)
When protection zones for wells are delineated, it is important to have good knowledge of the possible travel times from different points in the catchment area to the well. Often, simple analytical methods are used to estimate travel times, and the assumption is made that the hydraulic conductivity is relatively homogeneous within the aquifer. Nevertheless, many aquifers are strongly heterogeneous, which may lead to differences between estimated and actual travel times. As part of the process of developing improved methods for delineating protection zones for groundwater supply wells, a tracer experiment was performed in a glaciofluvial esker formation at Järlåsa. On the basis of the experiment, a numerical flow model was created for the test site.

The purpose of this master's thesis was to apply the flow model to an aquifer in which the hydraulic conductivity shows such great variability that it is best described by a stochastic distribution. A further purpose was to determine the statistical properties of the hydraulic conductivity and to simulate the travel times, and their variation, from different locations in the aquifer to the pumping well.

The hydraulic conductivity was estimated from grain size distributions in soil samples taken at various locations within the test site. The analysis of the hydraulic conductivity showed large variation and confirmed the hypothesis that the aquifer is heterogeneous. Using these statistics, a large number of stochastic conductivity fields were generated and a flow simulation was performed for each realization. From the simulation results, frequency distributions of the travel times were produced, describing the probability of the transit time of a water particle between a given location in the aquifer and the pumping well. A comparison with the tracer experiment shows somewhat higher simulated travel times, implying that the flow model needs better calibration against field measurements. The conclusion is that the method used in this project is suitable for glaciofluvial esker aquifers; when protection zones are delineated, stochastic modeling can be used to express the zone boundaries in statistical terms.
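As a rough illustration of the Monte Carlo idea described above, the following sketch generates lognormal conductivity realizations and builds a frequency distribution of advective travel times to the well; the parameters are entirely hypothetical, and a simplified 1-D streamline calculation stands in for the numerical flow model used in the thesis:

    import numpy as np

    # Illustrative parameter values (hypothetical, not the Järlåsa field data)
    rng = np.random.default_rng(1)
    n_real = 5000        # number of stochastic conductivity realizations
    n_cells = 40         # cells along a 1-D streamline toward the well
    dx = 5.0             # cell length [m]
    gradient = 0.005     # hydraulic gradient [-]
    porosity = 0.25      # effective porosity [-]
    ln_k_mean = np.log(1e-3)   # mean of ln K, with K in m/s
    ln_k_std = 1.0             # std of ln K (controls the heterogeneity)

    travel_times = np.empty(n_real)
    for j in range(n_real):
        # Lognormal conductivity field along the streamline (uncorrelated cells
        # for simplicity; a real study would impose spatial correlation)
        k = rng.lognormal(mean=ln_k_mean, sigma=ln_k_std, size=n_cells)
        # Seepage velocity per cell from Darcy's law, then advective travel time
        v = k * gradient / porosity            # [m/s]
        travel_times[j] = np.sum(dx / v)       # [s]

    days = travel_times / 86400.0
    for p in (5, 50, 95):
        print(f"{p}th percentile travel time: {np.percentile(days, p):.0f} days")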
53

Stochastic modeling of intracellular processes : bidirectional transport and microtubule dynamics

Ebbinghaus, Maximilian 21 April 2011 (has links)
This thesis uses methods and models from non-equilibrium statistical physics to describe intracellular processes. Bidirectional microtubule-based transport within axons is modeled as a quasi-one-dimensional stochastic lattice gas with two particle species moving in opposite directions under a mutual exclusion interaction. The particle clusters that generically occur in current models of intracellular transport can be dissolved by additionally considering the dynamics of the transport lattice, i.e., the microtubule. An idealized model of the lattice dynamics produces a phase transition toward a homogeneous state with efficient transport in both directions. In the thermodynamic limit, a steady-state property of the dynamic lattice limits the maximal size of clusters. Lane-formation mechanisms based on specific particle-particle interactions turn out to be very sensitive to the model assumptions. Furthermore, even when such particle-particle interactions are included, taking the lattice dynamics into account almost always improves transport. The lattice dynamics therefore seems to be the key aspect in understanding how nature regulates intracellular traffic. The last part introduces a model for the dynamics of a microtubule whose growth is limited by the cell boundary. The action of a rescue-enhancing protein, which is added to the growing tip of the microtubule and then slowly dissociates, leads to interesting aging effects that should be experimentally observable.
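The following is a bare-bones sketch of the kind of two-species exclusion lattice gas referred to above, on a periodic lattice with illustrative parameters; the lattice dynamics, attachment/detachment kinetics and open boundaries that are central to the thesis are deliberately left out:

    import numpy as np

    rng = np.random.default_rng(2)
    L = 200             # number of lattice sites on a ring
    density = 0.2       # density of each particle species
    steps = 200_000     # individual update attempts

    lattice = np.zeros(L, dtype=int)   # 0 empty, +1 right-mover, -1 left-mover
    occupied = rng.choice(L, size=int(2 * density * L), replace=False)
    lattice[occupied[: len(occupied) // 2]] = +1
    lattice[occupied[len(occupied) // 2:]] = -1

    hops = 0
    attempts = 0
    for _ in range(steps):
        i = rng.integers(L)
        s = lattice[i]
        if s == 0:
            continue
        attempts += 1
        target = (i + s) % L           # +1 species hops right, -1 hops left
        if lattice[target] == 0:       # mutual exclusion: hop only into empty sites
            lattice[target], lattice[i] = s, 0
            hops += 1

    # Fraction of attempted hops that succeed: a crude measure of how strongly
    # oppositely moving particles block each other on a static lattice
    print("successful hop fraction:", hops / attempts)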
54

Uncertainty Evaluation in Large-scale Dynamical Systems: Theory and Applications

Zhou, Yi (Software engineer) 12 1900 (has links)
Significant research efforts have been devoted to large-scale dynamical systems, with the aim of understanding their complicated behaviors and managing their responses in real time. One pivotal technological obstacle in this process is the existence of uncertainty. Although many of these large-scale dynamical systems function well at the design stage, they may easily fail when operating in realistic environments, where environmental uncertainties modulate system dynamics and complicate real-time prediction and management tasks. This dissertation aims to develop systematic methodologies to evaluate the performance of large-scale dynamical systems under uncertainty, as a step toward real-time decision support. Two uncertainty evaluation approaches are pursued: the analytical approach and the effective simulation approach. The analytical approach abstracts the dynamics of the original stochastic systems and develops tractable analysis (e.g., jump-linear analysis) for the approximated systems. Despite the potential bias introduced in the approximation process, the analytical approach provides rich insights valuable for evaluating and managing the performance of large-scale dynamical systems under uncertainty. When a system's complexity and scale are beyond tractable analysis, the effective simulation approach becomes very useful. The effective simulation approach aims to use a few smartly selected simulations to quickly evaluate a complex system's statistical performance. This approach was originally developed to evaluate a single uncertain variable; this dissertation extends it to be scalable and effective for evaluating large-scale systems under a large number of uncertain variables. While a large portion of this dissertation focuses on the development of generic methods and theoretical analysis applicable to broad classes of large-scale dynamical systems, many results are illustrated through a representative large-scale application in strategic air traffic management, which is concerned with designing robust management plans subject to a wide range of weather possibilities at 2-15 hours of look-ahead time.
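As an illustration of the jump-linear analysis mentioned above, the following sketch propagates the first moment of a hypothetical two-mode Markov jump-linear system analytically and checks it against Monte Carlo simulation; the matrices and transition probabilities are invented for the example and are not taken from the dissertation:

    import numpy as np

    # Hypothetical two-mode Markov jump-linear system x_{k+1} = A[mode_k] @ x_k
    A = [np.array([[0.95, 0.10], [0.00, 0.90]]),     # e.g., nominal conditions
         np.array([[0.80, 0.00], [0.05, 0.70]])]     # e.g., disrupted conditions
    P = np.array([[0.9, 0.1],                        # mode transition matrix
                  [0.3, 0.7]])
    x0 = np.array([1.0, 1.0])
    pi0 = np.array([1.0, 0.0])                       # initial mode distribution
    horizon = 30

    # Analytical first-moment recursion: q_j(k) = E[x_k * 1{mode_k = j}]
    q = [pi0[i] * x0 for i in range(2)]
    for _ in range(horizon):
        q = [sum(P[i, j] * (A[i] @ q[i]) for i in range(2)) for j in range(2)]
    mean_analytic = q[0] + q[1]

    # Rough Monte Carlo check of the same expectation
    rng = np.random.default_rng(3)
    acc = np.zeros(2)
    n_runs = 5000
    for _ in range(n_runs):
        x, mode = x0.copy(), rng.choice(2, p=pi0)
        for _ in range(horizon):
            x = A[mode] @ x
            mode = rng.choice(2, p=P[mode])
        acc += x
    print("analytic mean :", mean_analytic)
    print("monte carlo   :", acc / n_runs)

The analytical recursion needs only a handful of matrix products per step, which is the kind of tractability the dissertation contrasts with brute-force simulation.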
55

Modeling Electric Vehicle Energy Demand and Regional Electricity Generation Dispatch for New England and New York

Howerter, Sarah E 01 January 2019 (has links)
The transportation sector is the largest emitter of greenhouse gases in the U.S., accounting for 28.6% of all 2016 emissions, the majority of which come from the passenger vehicle fleet [1,2]. One major technology being investigated by researchers, planners, and policy makers to help lower emissions from the transportation sector is the plug-in electric vehicle (PEV). The focus of this work is to investigate and model the impacts of increased levels of PEVs on the regional electric power grid and on the net change in CO2 emissions due to the decrease in tailpipe emissions and the increase in electricity generation under current emissions caps. The study scope includes all of New England and New York State, modeled as one system of electricity supply and demand, which includes the estimated 2030 baseline demand and the current generation capacity plus increased renewable capacity to meet state Renewable Portfolio Standard targets for 2030. The models presented here include fully electric vehicles and plug-in hybrids, public charging infrastructure scenarios, hourly charging demand, solar and wind generation and capacity factors, and real-world travel derived from the 2016-2017 National Household Travel Survey. We make certain assumptions, informed by the literature, with the goal of creating a modeling methodology to improve the estimation of hourly PEV charging demand for input into regional electric sector dispatch models. The methodology included novel stochastic processes, considered seasonal and weekday-versus-weekend differences in travel, and did not force the PEV battery state of charge to be full at any specific time of day. The results support the need for public charging infrastructure, specifically at workplaces, with the "work" infrastructure scenario shifting more of the unmanaged charging demand to daylight hours when solar generation could be utilized. Workplace charging accounted for 40% of all non-home charging demand in the scenario where charging infrastructure was "universally" available. Under the increased renewable portfolio, the reduction in average CO2 emissions ranged from 90 to 92% for the vehicles converted from ICEV to PEV. The total emissions reduction for 15% PEV penetration and universally available charging infrastructure was 5.85 million metric tons, or 5.27% of system-wide emissions. The results support the premise that plug-in electric vehicles are an important strategy for the reduction of CO2 emissions in our study region. Future investigation into the extent of reductions possible with both the optimization of charging schedules, through pricing or other mechanisms, and the modeling of grid-level energy storage is warranted. Additional model development should include a sensitivity analysis of the PEV charging demand model parameters and better data on the charging behavior of PEV owners as they continue to penetrate the market at higher rates.
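A toy sketch of the general approach of building hourly charging demand from stochastic travel behavior; the distributions, charger power and battery size below are assumptions made for illustration and do not reproduce the NHTS-derived methodology of the thesis:

    import numpy as np

    # Toy model of unmanaged home-charging demand for a PEV fleet
    rng = np.random.default_rng(4)
    n_vehicles = 10_000
    kwh_per_mile = 0.30        # assumed fleet-average consumption
    charger_kw = 7.2           # assumed Level-2 home charger power
    hours = 24

    demand = np.zeros(hours)   # aggregate charging load by hour of day [kW]
    for _ in range(n_vehicles):
        miles = rng.lognormal(mean=3.2, sigma=0.8)          # daily miles driven
        energy = min(miles * kwh_per_mile, 60.0)            # cap at battery size [kWh]
        arrival = int(np.clip(rng.normal(18, 2), 0, 23))    # home arrival hour
        hour = arrival
        while energy > 0:
            delivered = min(charger_kw, energy)             # charge at full power
            demand[hour % hours] += delivered               # add to that hour's load
            energy -= delivered
            hour += 1

    peak = int(np.argmax(demand))
    print(f"peak aggregate charging load: {demand[peak] / 1000:.1f} MW at hour {peak}")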
56

Stochastic Modeling and Statistical Inference of Geological Fault Populations and Patterns

Borgos, Hilde Grude January 2000 (has links)
The focus of this work is on faults, and the main issue is statistical analysis and stochastic modeling of faults and fault patterns in petroleum reservoirs. The thesis consists of Parts I-V and Appendices A-C. The units can be read independently. Part III is written for a geophysical audience, and its topic is fault and fracture size-frequency distributions. The remaining parts are written for a statistical audience, but can also be read by people with an interest in quantitative geology. The topic of Parts I and II is statistical model choice for fault size distributions, with a sampling algorithm for estimating the Bayes factor. Part IV describes work on spatial modeling of fault geometry, and Part V is a short note on line partitioning. Parts I, II and III constitute the main part of the thesis. The appendices are conference abstracts and papers based on Parts I and IV. / Paper III: reprinted with kind permission of the American Geophysical Union. An edited version of this paper was published by AGU. Copyright [2000] American Geophysical Union
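As a hedged illustration of statistical model choice for size distributions, the sketch below estimates a Bayes factor between two candidate fault-size models (exponential vs. Pareto) by crude prior sampling on synthetic data; the priors, data and estimator are placeholders and not the sampling algorithm developed in the thesis:

    import numpy as np
    from scipy import stats
    from scipy.special import logsumexp

    rng = np.random.default_rng(5)
    x_min = 10.0                                      # observation threshold [m]
    sizes = x_min * (1 + rng.pareto(1.5, size=300))   # synthetic fault sizes

    def log_marginal(loglik, prior_draws):
        # log m(y|M) = log E_prior[p(y | theta)], estimated by Monte Carlo
        lls = np.array([loglik(theta) for theta in prior_draws])
        return logsumexp(lls) - np.log(len(prior_draws))

    n_draws = 5000
    # Model 1: shifted exponential above x_min, rate lambda ~ Gamma(2, scale=0.05)
    lam_draws = rng.gamma(2.0, 0.05, n_draws)
    log_m1 = log_marginal(
        lambda lam: stats.expon.logpdf(sizes, loc=x_min, scale=1.0 / lam).sum(),
        lam_draws)
    # Model 2: Pareto above x_min, shape alpha ~ Gamma(2, scale=1)
    alpha_draws = rng.gamma(2.0, 1.0, n_draws)
    log_m2 = log_marginal(
        lambda a: stats.pareto.logpdf(sizes, b=a, scale=x_min).sum(),
        alpha_draws)

    print("log Bayes factor, Pareto vs. exponential:", log_m2 - log_m1)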
57

Modeling Collective Decision-Making in Animal Groups

Granovskiy, Boris January 2012 (has links)
Many animal groups benefit from making decisions collectively. For example, colonies of many ant species are able to select the best possible nest to move into without every ant needing to visit each available nest site. Similarly, honey bee colonies can focus their foraging resources on the best possible food sources in their environment by sharing information with each other. In the same way, groups of human individuals are often able to make better decisions together than each individual group member can on his or her own. This phenomenon is known as "collective intelligence", or "wisdom of crowds." What unites all these examples is the fact that there is no centralized organization dictating how animal groups make their decisions. Instead, these successful decisions emerge from interactions and information transfer between individual members of the group and between individuals and their environment. In this thesis, I apply mathematical modeling techniques in order to better understand how groups of social animals make important decisions in situations where no single individual has complete information. This thesis consists of five papers, in which I collaborate with biologists and sociologists to simulate the results of their experiments on group decision-making in animals. The goal of the modeling process is to better understand the underlying mechanisms of interaction that allow animal groups to make accurate decisions that are vital to their survival. Mathematical models also allow us to make predictions about collective decisions made by animal groups that have not yet been studied experimentally or that cannot be easily studied. The combination of mathematical modeling and experimentation gives us a better insight into the benefits and drawbacks of collective decision making, and into the variety of mechanisms that are responsible for collective intelligence in animals. The models that I use in the thesis include differential equation models, agent-based models, stochastic models, and spatially explicit models. The biological systems studied included foraging honey bee colonies, house-hunting ants, and humans answering trivia questions.
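For readers unfamiliar with this class of models, the sketch below integrates a generic cross-inhibition model of a two-option collective decision (e.g., two candidate nest sites); the functional form and parameter values are textbook-style assumptions, not the specific models developed in the thesis:

    import numpy as np
    from scipy.integrate import solve_ivp

    q = np.array([0.10, 0.08])   # spontaneous discovery rates for options A, B
    r = np.array([0.80, 0.60])   # recruitment rates (A is the better site)
    k = np.array([0.05, 0.05])   # spontaneous abandonment rates
    sigma = 0.5                  # cross-inhibition strength

    def rhs(t, y):
        a, b = y
        u = 1.0 - a - b                        # uncommitted fraction of the group
        da = (q[0] + r[0] * a) * u - k[0] * a - sigma * a * b
        db = (q[1] + r[1] * b) * u - k[1] * b - sigma * a * b
        return [da, db]

    sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0])
    a_end, b_end = sol.y[:, -1]
    print(f"final commitment: option A {a_end:.2f}, option B {b_end:.2f}")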
59

Network capacity sharing with QoS as a financial derivative pricing problem : algorithms and network design

Rasmusson, Lars January 2002 (has links)
A design of an automatic network capacity market, often referred to as a bandwidth market, is presented. Three topics are investigated. First, a network model is proposed. The proposed model is based upon a trisection of the participant roles into network users, network owners, and market middlemen. The network capacity is defined in a way that allows it to be traded and to have a well-defined price. The network devices are modeled as core nodes, access nodes, and border nodes. Requirements on these are given. It is shown how their functionalities can be implemented in a network. Second, a simulated capacity market is presented, and a statistical method for estimating the price dynamics in the market is proposed. A method for pricing network services based on shared capacity is proposed, in which the price of a service is equivalent to that of a financial derivative contract on a number of simple capacity shares. Third, protocols for the interaction between the participants are proposed. The market participants need to commit to contracts with an auditable protocol with small overhead. The proposed protocol is based on a public key infrastructure and on known protocols for multi-party contract signing. The proposed model allows network capacity to be traded in a manner that utilizes the network efficiently. A new feature of this market model, compared to other network capacity markets, is that the prices are not controlled by the network owners. It is the end-users who, through middlemen, trade capacity among each other. Therefore, financial, rather than control-theoretic, methods are used for the pricing of capacity. Keywords: computer network architecture, bandwidth trading, inter-domain Quality-of-Service, pricing, combinatorial allocation, financial derivative pricing, stochastic modeling
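To make the pricing idea concrete, the following sketch prices a simple call-style contract on a capacity share by Monte Carlo under an assumed geometric Brownian motion for the share price; the price process and all parameters are hypothetical, whereas the thesis estimates the price dynamics from its simulated capacity market:

    import numpy as np

    rng = np.random.default_rng(6)
    s0 = 1.0               # current price of one simple capacity share
    sigma = 0.4            # assumed volatility of the capacity-share price
    rate = 0.03            # discount rate
    T = 30.0 / 365.0       # contract horizon: 30 days
    strike = 1.05          # contract pays the excess of the price above this level
    n_paths = 200_000

    # Terminal share price under risk-neutral geometric Brownian motion
    z = rng.standard_normal(n_paths)
    s_T = s0 * np.exp((rate - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(s_T - strike, 0.0)            # call-style payoff on capacity
    price = np.exp(-rate * T) * payoff.mean()
    print(f"estimated contract price: {price:.4f} per unit of capacity")

A buyer of such a contract obtains quality-of-service protection against capacity shortage, which is the sense in which shared capacity is priced like a derivative on the underlying shares.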
