571 |
Functional inequalities related to Dirichlet forms. From isoperimetry to Sobolev inequalities.Fougères, Pierre 18 October 2002 (has links) (PDF)
Ergodic Markov semigroups make it possible to approach probability measures by means of functional inequalities. The aim of the thesis is the study of some of these inequalities, from Gaussian isoperimetry to Sobolev inequalities. We essentially seek to establish links between them, to determine their optimal constants, and to obtain criteria ensuring their existence. The work is divided into three parts. In the first, we are interested in the links between logarithmic Sobolev inequalities (LS) and Bobkov's Gaussian isoperimetric inequalities (GIB). We show that a semigroup with curvature bounded from below (possibly negative) which satisfies (LS) also satisfies a (GIB) inequality. We thus obtain a (GIB) inequality for certain spin systems. In the second part, we show that the Poincaré constant of a log-concave probability measure on the real line is universally comparable to the square of the mean distance to the median. The proof relies on a calculus of variations in the set of convex functions. The last part is devoted to new criteria leading to Sobolev inequalities when the curvature-dimension criterion (CD) of Bakry and Emery fails. The technique used relies on the construction (by means of conformal changes of metric and tensorization) of a Dirichlet structure in higher dimension which satisfies a (CD) criterion and projects onto the initial structure.
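For orientation, the three families of inequalities involved can be written in a standard generic form, with Dirichlet form \mathcal{E} and invariant probability measure \mu (these are the usual textbook statements, not the exact formulations of the thesis):
\[ \mathrm{Ent}_\mu(f^2) \le C\,\mathcal{E}(f,f) \quad \text{(logarithmic Sobolev)}, \]
\[ \mathcal{I}\Big(\int f\,d\mu\Big) \le \int \sqrt{\mathcal{I}(f)^2 + c\,|\nabla f|^2}\;d\mu, \quad f : E \to [0,1] \quad \text{(Bobkov's Gaussian isoperimetric inequality)}, \]
\[ \mathrm{Var}_\mu(f) \le C_P\,\mathcal{E}(f,f) \quad \text{(Poincaré)}, \]
where \mathcal{I} = \varphi \circ \Phi^{-1} is the Gaussian isoperimetric function. The result of the second part states that, for a log-concave probability measure \mu on \mathbb{R} with median m, the Poincaré constant satisfies C_P \asymp \big(\int |x - m|\,d\mu\big)^2 up to universal constants.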
|
572 |
A Model for Multiperiod Route Planning and a Tabu Search Method for Daily Log Truck SchedulingHolm, Christer, Larsson, Andreas January 2004 (has links)
The transportation cost of logs from forest to customers is a large part of the overall cost for the Swedish forestry industry. Finding good routes from harvesting points to saw and pulp mills is a complex task, where the total number of feasible routes is extremely high. In this thesis we present two methods for log truck scheduling.
The first is to find, from a given set of routes, the most valuable subset that fulfils the customers' demand. We use a model that is similar to the set partitioning problem and a method referred to as composite pricing coupled with Branch and Bound. The composite pricing based method prices the routes (columns) and chooses the most valuable ones, which are then added to the LP relaxation. Once an LP optimum is found, the Branch and Bound method is used to find an integer optimal solution. We have tested this on a case of realistic size.
The second method is a tabu search heuristic. Here, the purpose is to create efficient, high-quality routes from a given number of trips (referred to as predefined trips). From a start solution, tabu search systematically generates new solutions. This method was tested on a small problem and on a five times larger problem to study how the size of the problem affected the result. It was also tested and compared on two cases in which the backhauling possibilities (i.e. instead of traveling empty the truck picks up another load on the return trip) had and had not been studied. The composite pricing based method and the tabu search method proved to be very useful for this kind of scheduling.
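The following Python sketch illustrates a generic tabu search loop of the kind used by the second method; it is only an illustration, not the authors' implementation, and the neighborhood function, cost function, and move representation are placeholders to be supplied by the scheduling model.

def tabu_search(initial_solution, neighbors, cost, n_iter=1000, tabu_len=50):
    """Generic tabu search: repeatedly move to the best neighboring solution,
    keep recently used moves in a fixed-length tabu list to avoid cycling, and
    remember the best solution seen (aspiration lets a tabu move through if it
    beats the best cost so far)."""
    current = initial_solution
    best, best_cost = current, cost(current)
    tabu = []
    for _ in range(n_iter):
        candidates = [(move, sol) for move, sol in neighbors(current)
                      if move not in tabu or cost(sol) < best_cost]
        if not candidates:
            break
        move, current = min(candidates, key=lambda ms: cost(ms[1]))
        tabu.append(move)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Usage: neighbors(solution) yields (move, new_solution) pairs, e.g. swapping or
# reassigning predefined trips between trucks; cost(solution) returns total cost.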
|
573 |
Enterprise Users and Web Search BehaviorLewis, April Ann 01 May 2010 (has links)
This thesis describes an analysis of user web query behavior associated with Oak Ridge National Laboratory's (ORNL) Enterprise Search System (hereafter, the ORNL Intranet). The ORNL Intranet provides users a means to search all kinds of data stores for relevant business and research information using a single query. The Global Intranet Trends for 2010 Report suggests the biggest current obstacle for corporate intranets is “findability and Siloed content”. Intranets differ from the public internet in the way they create, control, and share content, which can make it difficult and sometimes impossible for users to find information. Stenmark (2006) noted that studies of corporate internal search behavior are lacking and appealed for more published research on the subject.
This study applies mature Web query transaction log analysis (TLA) methods, developed in internet search research, to examine how corporate intranet users at ORNL search for information. The focus of the study is to better understand general search behaviors and to identify unique trends associated with query composition and vocabulary. The results are compared to published intranet studies. A literature review suggests that only a handful of intranet-based web search studies exist, each focusing largely on a single aspect of intranet search. This implies that the ORNL study is the first to comprehensively analyze a corporate intranet user web query corpus and publish the results.
This study analyzes 65,000 user queries submitted to the ORNL intranet from September 17, 2007 through December 31, 2007. A granular relational data model, first introduced by Wang, Berry, and Yang (2003) for Web query analysis, was adopted and modified for data mining and analysis of the ORNL query corpus. The ORNL query corpus is characterized using Zipf distributions, descriptive word statistics, and mutual information. User search vocabulary is analyzed using frequency distributions and probability statistics.
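A minimal sketch of the kind of corpus characterization described here, computing rank-frequency counts (for a Zipf plot) and pointwise mutual information between query terms; the example queries are invented, and the relational data model of Wang, Berry, and Yang (2003) is not reproduced.

from collections import Counter
from itertools import combinations
import math

def query_log_stats(queries):
    """Toy transaction-log analysis: term rank-frequency counts and pointwise
    mutual information (PMI) of term pairs that co-occur within a query."""
    term_freq = Counter()      # raw term frequencies
    doc_freq = Counter()       # number of queries containing a term
    pair_freq = Counter()      # number of queries containing both terms of a pair
    for q in queries:
        terms = q.lower().split()
        term_freq.update(terms)
        doc_freq.update(set(terms))
        pair_freq.update(combinations(sorted(set(terms)), 2))
    n = len(queries)
    ranked = term_freq.most_common()   # Zipf: frequency roughly proportional to 1/rank
    pmi = {(a, b): math.log2((c / n) / ((doc_freq[a] / n) * (doc_freq[b] / n)))
           for (a, b), c in pair_freq.items()}
    return ranked, pmi

ranked, pmi = query_log_stats(["travel request form", "travel office", "benefits enrollment form"])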
The results showed that ORNL users searched for unique types of information. ORNL users were uncertain of how best to formulate queries and did not use search interface tools to narrow the search scope. Specialized domain language comprised 38% of the queries. The average number of results returned per query was too high, and 16.34% of queries returned no hits.
|
574 |
Modelli neokeynesiani per l'analisi delle politiche monetarie ottimali con prezzi vischiosi: un approccio non lineare. / New Keynesian models for monetary policy analysis under sticky prices: a nonlinear approachCORNARO, ALESSANDRA 09 June 2009 (has links)
The thesis is organized as follows: in the first part, the mathematical background needed to study the model is presented, introducing the notion of a dynamical system (Chapter 2). The economic background is then introduced, with the concept of determinacy treated both from the technical point of view, with a worked example, and from the point of view of the models in the literature in which it is used. In the second part, the analytical framework is presented, based on Woodford's model. The model is then studied by means of log-linearization, providing the equilibrium relations around the steady state. Next, the model is specified in a particular case, and a nonlinear version of the model is obtained by introducing new assumptions compatible with the analytical framework, so that the determinacy of the equilibrium can be studied. / The thesis is organized as follows: we start by presenting, in the first part, the mathematical background we need to study our model, introducing first the notion of a dynamical system, more specifically in discrete time, as our model requires (Chapter 2).
Afterwards, for the economic background (Chapters 3-4), the concept of determinacy of the equilibrium is analyzed from the technical point of view for linear models, providing the analytical conditions that yield a unique and determinate equilibrium.
Having explored these techniques, we give a detailed example that clarifies, from the mathematical point of view, the concept of determinacy and how it is linked to the concept of stability. After that, a brief survey of the models that involve the study of determinacy is presented, showing the several fields of application.
Then, since our model of monetary policy employs different interest-rate policy rules in order to study the stability of the macroeconomic system, we provide a short preamble on the rules for the operating target interest rate set by the central bank.
In the second part the analytical framework is presented. The starting point is a model of price level determination in a cashless economy, in which nominal rigidities are introduced, based on Woodford's work, and we give the equilibrium relations of the model in implicit form (Chapter 5).
Afterwards, we build the model in a particular case (Chapter 6) by specifying the functions involved: a utility function of C.R.R.A. type and a linear production function, compatible with the analytical characterization. In this way we obtain the components of a general economic equilibrium model, consistent with the optimizing behavior of households and firms. The results obtained after this specification are the same as those found in the analysis proposed by Walsh.
At this point, we obtain the log-linearized version of the model, which is the starting point for the study of the stability of the system in the linear case.
This procedure yields a two-equation forward-looking rational-expectations model for inflation and the output gap.
Then we briefly present the different policy regimes used in the analysis within our framework, providing the interest-rate relations that close the model.
Since our intention is to find a nonlinear version of the model, the log-linearization step is essential: it shows how this tool is useful not only for studying the equations around the steady state but also for making these relations more tractable mathematically, and it highlights the obstacles we faced in building the nonlinear model and the solutions proposed in this work.
In the third part (Chapter 7), in order to go beyond the log-linearized and simplified version of the model, we try, under some assumptions compatible with the behavior of the agents, to provide nonlinear conditions for this model. This avoids the loss of information caused by restricting the analysis to a neighborhood of the steady state.
Having illustrated the nonlinear model and the equilibrium relations, we study the determinacy of the equilibrium, using the techniques shown in the first part, under two different interest-rate specifications that close the model. Chapter 8 concludes.
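For reference, the log-linearized two-equation system mentioned above, closed by a simple contemporaneous interest-rate rule, is usually written in the textbook form
\[ x_t = \mathbb{E}_t x_{t+1} - \sigma^{-1}\big(i_t - \mathbb{E}_t \pi_{t+1} - r^n_t\big), \qquad \pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\,x_t, \qquad i_t = \phi_\pi \pi_t + \phi_x x_t, \]
where x_t is the output gap, \pi_t inflation, i_t the policy rate and r^n_t the natural rate of interest; with this rule the rational-expectations equilibrium is determinate (unique and bounded) when \kappa(\phi_\pi - 1) + (1 - \beta)\phi_x > 0, the familiar Taylor principle. This is the generic New Keynesian benchmark rather than the exact specification derived in the thesis.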
|
575 |
On the Resilience of Network Coding in Peer-to-Peer Networks and its ApplicationsNiu, Di 14 July 2009 (has links)
Most current-generation P2P content distribution protocols use fine-granularity blocks to distribute content in a decentralized fashion. Such systems often suffer from a significant variation in block distributions, such that certain blocks become rare or even unavailable, adversely affecting content availability and download efficiency. This phenomenon is further aggravated by peer dynamics, which are inherent in P2P networks.
In this thesis, we quantitatively analyze how network coding may improve block availability and introduce resilience to peer dynamics.
Since in reality, network coding can only be performed within segments, each containing a subset of blocks, we explore the fundamental tradeoff between the resilience gain of network coding and its inherent coding complexity, as the number of blocks in a segment varies.
As another application of the resilience of network coding, we also devise an indirect data collection scheme based on network coding for the purpose of large-scale network measurements.
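A minimal sketch of segment-restricted random linear network coding, the operation analyzed above; practical systems typically code over a larger field such as GF(2^8), but GF(2), where coding reduces to XOR, keeps the illustration short.

import random

def encode_segment(blocks, n_coded):
    """Produce coded blocks for one segment: each coded block is the XOR of a
    random subset of the segment's k original blocks, tagged with its coefficient
    vector stored as a k-bit mask."""
    k, size = len(blocks), len(blocks[0])
    coded = []
    for _ in range(n_coded):
        mask = random.getrandbits(k) or 1            # skip the useless all-zero vector
        payload = bytes(size)
        for i, block in enumerate(blocks):
            if (mask >> i) & 1:
                payload = bytes(a ^ b for a, b in zip(payload, block))
        coded.append((mask, payload))
    return coded

def segment_decodable(coded, k):
    """The segment can be recovered once the received coefficient vectors span
    GF(2)^k; Gaussian elimination on the bit masks checks their rank."""
    pivots = {}                                       # highest set bit -> basis vector
    for mask, _ in coded:
        while mask:
            top = mask.bit_length() - 1
            if top not in pivots:
                pivots[top] = mask
                break
            mask ^= pivots[top]
    return len(pivots) >= k

blocks = [bytes([i] * 16) for i in range(8)]          # one segment of 8 toy blocks
print(segment_decodable(encode_segment(blocks, 12), len(blocks)))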
|
578 |
Well-log based determination of rock thermal conductivity in the North German BasinFuchs, Sven January 2013 (has links)
In sedimentary basins, rock thermal conductivity can vary both laterally and vertically, thus altering the basin's thermal structure locally and regionally. Knowledge of the thermal conductivity of geological formations and its spatial variations is essential, not only for quantifying basin evolution and hydrocarbon maturation processes, but also for understanding geothermal conditions in a geological setting. In conjunction with the temperature gradient, thermal conductivity represents the basic input parameter for the determination of the heat-flow density, which, in turn, is applied as a major input parameter in thermal modeling at different scales. Drill-core samples, which are necessary to determine thermal properties by laboratory measurements, are rarely available and often limited to previously explored reservoir formations. Thus, thermal conductivities of Mesozoic rocks in the North German Basin (NGB) are largely unknown. In contrast, geophysical borehole measurements are often available for the entire drilled sequence. Therefore, prediction equations to determine thermal conductivity based on well-log data are desirable. In this study rock thermal conductivity was investigated on different scales by (1) providing thermal-conductivity measurements on Mesozoic rocks, (2) evaluating and improving commonly applied mixing models used to estimate matrix and pore-filled rock thermal conductivities, and (3) developing new well-log based equations to predict thermal conductivity in boreholes without core control.
Laboratory measurements are performed on sedimentary rocks of major geothermal reservoirs in the Northeast German Basin (NEGB) (Aalenian, Rhaetian-Liassic, Stuttgart Fm., and Middle Buntsandstein). Samples are obtained from eight deep geothermal wells that reach depths of up to 2,500 m. Bulk thermal conductivities of Mesozoic sandstones range between 2.1 and 3.9 W/(m∙K), while matrix thermal conductivity ranges between 3.4 and 7.4 W/(m∙K). Local heat flow for the Stralsund location averages 76 mW/m², which is in good agreement with values reported previously for the NEGB. For the first time, in-situ bulk thermal conductivity is indirectly calculated for entire borehole profiles in the NEGB using the determined surface heat flow and measured temperature data. Average bulk thermal conductivity, derived for geological formations within the Mesozoic section, ranges between 1.5 and 3.1 W/(m∙K).
The measurement of both dry- and water-saturated thermal conductivities allows further evaluation of the different two-component mixing models often applied in geothermal calculations (e.g., arithmetic mean, geometric mean, harmonic mean, Hashin-Shtrikman mean, and effective-medium theory mean). It is found that the geometric-mean model shows the best correlation between calculated and measured bulk thermal conductivity. However, by applying new model-dependent correction equations, the quality of fit could be significantly improved and the error spread of each model reduced. The ‘corrected’ geometric mean provides the most satisfying results and constitutes a universally applicable model for sedimentary rocks. Furthermore, lithotype-specific and model-independent conversion equations are developed, permitting the calculation of water-saturated thermal conductivity from dry-measured thermal conductivity and porosity within an error range of 5 to 10%.
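A sketch of the two-component geometric-mean model discussed above, before any of the model-dependent corrections developed in the study are applied; the numerical values in the example are illustrative assumptions, not measured data.

def geometric_mean_conductivity(lam_matrix, lam_fluid, porosity):
    """Two-component geometric-mean mixing model: bulk thermal conductivity of a
    porous rock from matrix and pore-fluid conductivities, all in W/(m*K)."""
    return lam_matrix ** (1.0 - porosity) * lam_fluid ** porosity

# Illustrative sandstone: matrix 5.0 W/(m*K), water-filled pores (about 0.6 W/(m*K)),
# porosity 0.20 -> roughly 3.3 W/(m*K) before any correction is applied.
print(geometric_mean_conductivity(5.0, 0.6, 0.20))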
The limited availability of core samples and the expense of core-based laboratory measurements make it worthwhile to use petrophysical well logs to determine the thermal conductivity of sedimentary rocks. The approach followed in this study is based on detailed analyses of the relationships between the thermal conductivity of the rock-forming minerals most abundant in sedimentary rocks and the properties measured by standard logging tools. Using multivariate statistics separately for clastic, carbonate, and evaporite rocks, these analyses allow the development, from large artificial data sets, of prediction equations that predict matrix thermal conductivity within an error of 4 to 11%. These equations are validated successfully on a comprehensive subsurface data set from the NGB. In comparison to earlier published, formation-dependent approaches developed for certain areas, the newly developed equations show a significant error reduction of up to 50%.
These results are used to infer rock thermal conductivity for entire borehole profiles. By inversion of corrected in-situ thermal-conductivity profiles, temperature profiles are calculated and compared to measured high-precision temperature logs. The resulting uncertainty in temperature prediction averages < 5%, which demonstrates the excellent temperature-prediction capability of the presented approach.
In conclusion, data and methods are provided to achieve a much more detailed parameterization of thermal models. / Thermal modeling of the geological subsurface is an important tool in the exploration and assessment of deep-seated resources of sedimentary basins (e.g., hydrocarbons, heat). The lateral and vertical temperature distribution in the subsurface is determined, alongside heat-flow density and radiogenic heat production, mainly by the thermal conductivity of the deposited rock layers. These parameters constitute the essential input quantities for thermal models. The present dissertation deals with the determination of rock thermal conductivity on different scales. This comprises (1) laboratory thermal-conductivity measurements on Mesozoic drill-core samples, (2) the evaluation and improvement of the predictive power of mixing laws used to calculate matrix and bulk thermal conductivities of sedimentary rocks, and (3) the development of new prediction equations for the NGB using borehole geophysical measurements and multivariate analysis methods.
In the Northeast German Basin (NEGB), drill cores from deep geothermal wells (down to 2,500 m depth) were analyzed for their thermal and petrophysical properties for the most important Mesozoic geothermal reservoirs (Aalenian, Rhaetian-Liassic complex, Stuttgart Formation, Middle Buntsandstein). The thermal conductivity of Mesozoic sandstones varies on average between 2.1 and 3.9 W/(m∙K), while that of the rock matrix varies on average between 3.4 and 7.4 W/(m∙K). Newly calculated values of surface heat-flow density (e.g., 76 mW/m² at Stralsund) are consistent with the results of earlier studies in the NEGB. For the first time in the NGB, an in-situ thermal-conductivity profile was calculated for the Mesozoic/Cenozoic interval at the Stralsund site. In-situ formation thermal conductivities, for stratigraphic intervals of interest as potential model layers, vary on average between 1.5 and 3.1 W/(m∙K) and form a good basis for small-scale (local) thermal models.
Because drill-core samples are generally available only to a limited extent, and because laboratory determination of thermal conductivity is costly, alternative methods were sought. The interpretation of petrophysical borehole measurements by mathematical-statistical methods is a long-used and proven approach; its applicability, however, is limited to the drilled rock sections (genesis, geology, stratigraphy, etc.). A slightly modified approach was therefore developed. The thermophysical properties of the 15 most important rock-forming minerals (in sedimentary rocks) were analyzed statistically, and an extensive synthetic data set was generated from variable mixtures of these basic minerals. This data set was processed by multivariate statistics, yielding regression equations for the prediction of matrix thermal conductivity for three rock groups (clastic, carbonate, evaporite). In a second step, empirical prediction equations for calculating bulk thermal conductivity were developed for a real data set (laboratory-measured thermal conductivities and standard borehole logs). The calculated thermal conductivities show errors of between 5% and 11% compared with measured values. Applying both the newly developed procedures and procedures published in the literature to the NGB data set shows that the newly derived equations always achieve the smallest prediction error. Inversion of the newly calculated thermal-conductivity profiles allows the derivation of synthetic temperature profiles, whose comparison with measured rock temperatures results in a mean error of < 5%.
In geothermal calculations, two-component mixing models are frequently used to convert between matrix and bulk thermal conductivity (arithmetic mean, harmonic mean, geometric mean, Hashin-Shtrikman mean, effective-medium mean). An extensive data set of dry- and saturated-measured thermal conductivities and porosities allows these models to be evaluated with respect to their predictive power. This varies strongly between the models examined (errors: 5-53%), with the geometric mean showing the best, though quantitatively still unsatisfactory, agreement. The development and application of mixing-model-specific correction equations leads to clearly reduced errors. The corrected geometric mean again shows the best agreement between calculated and measured values, with a clearly reduced error spread, and appears to be a universally applicable mixing model for sedimentary rocks. The development of model-independent, lithotype-specific conversion equations permits estimation of the water-saturated bulk thermal conductivity from dry-measured thermal conductivity and porosity with a mean error of < 9%.
The data presented and the newly developed methods will allow a considerably more detailed and more precise parameterization of thermal models of sedimentary basins.
|
579 |
Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate DistributionsKarawatzki, Roman, Leydold, Josef January 2005 (has links) (PDF)
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited for an algorithm to generate points from quite arbitrary distributions, including all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with dimension. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
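The sketch below shows the mechanics of the proposed combination for a one-dimensional target density, so that the ratio-of-uniforms region is two-dimensional; it assumes the region is convex (true for log-concave densities) and fits inside a fixed search radius, and it is meant as an illustration of the idea rather than a reproduction of the authors' algorithm.

import math, random

def rou_region(density, mode):
    """Membership oracle for the ratio-of-uniforms set
    A = {(u, v): 0 < u <= sqrt(f(mode + v/u))}; if (u, v) is uniform on A,
    then x = mode + v/u has density proportional to f."""
    def inside(u, v):
        return 0.0 < u and u * u <= density(mode + v / u)
    return inside

def hit_and_run_rou(density, mode, n_samples, burn_in=500, radius=10.0, bisect=40):
    """Hit-and-run over the (assumed convex and bounded) RoU region: pick a random
    direction, locate the chord through the current point by bisection on the
    membership oracle, jump to a uniform point on that chord, and map back to x."""
    inside = rou_region(density, mode)
    u, v = 0.5 * math.sqrt(density(mode)), 0.0        # a point strictly inside A
    samples = []
    for step in range(burn_in + n_samples):
        theta = random.uniform(0.0, 2.0 * math.pi)
        du, dv = math.cos(theta), math.sin(theta)
        extent = []
        for sign in (1.0, -1.0):                      # chord extent in both directions
            lo, hi = 0.0, radius                      # assumes A lies within this radius
            for _ in range(bisect):
                mid = 0.5 * (lo + hi)
                if inside(u + sign * mid * du, v + sign * mid * dv):
                    lo = mid
                else:
                    hi = mid
            extent.append(sign * lo)
        t = random.uniform(extent[1], extent[0])      # uniform point on the chord
        u, v = u + t * du, v + t * dv
        if step >= burn_in:
            samples.append(mode + v / u)
    return samples

xs = hit_and_run_rou(lambda x: math.exp(-0.5 * x * x), 0.0, 1000)   # assumed test density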
|
580 |
A Simple Universal Generator for Continuous and Discrete Univariate T-concave DistributionsLeydold, Josef January 2000 (has links) (PDF)
We use inequalities to design short universal algorithms that can be used to generate random variates from large classes of univariate continuous or discrete distributions (including all log-concave distributions). The expected time is uniformly bounded over all these distributions. The algorithms can be implemented in a few lines of high-level language code. In contrast to other black-box algorithms, hardly any setup step is required, and thus the method is superior in the changing-parameter case. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
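In contrast to a Markov chain approach, the algorithm here is an exact rejection method; the sketch below shows plain ratio-of-uniforms rejection from an enclosing rectangle. The paper derives universal bounding and squeeze regions from T-concavity using only the mode and the density oracle, whereas this sketch finds the rectangle by a crude numerical scan, so it illustrates the principle only under that assumption.

import math, random

def rou_rejection(density, mode, search_width=50.0, grid=20001):
    """Ratio-of-uniforms rejection: draw (u, v) uniformly from a rectangle that
    encloses A = {(u, v): 0 < u <= sqrt(f(mode + v/u))}; accepted pairs give
    x = mode + v/u with density proportional to f."""
    xs = [mode - search_width + 2.0 * search_width * i / (grid - 1) for i in range(grid)]
    u_max = math.sqrt(density(mode))
    vs = [(x - mode) * math.sqrt(density(x)) for x in xs]
    v_min, v_max = min(vs), max(vs)                   # crude numerical bounds on v

    def draw():
        while True:
            u = random.uniform(0.0, u_max)
            v = random.uniform(v_min, v_max)
            if u > 0.0 and u * u <= density(mode + v / u):
                return mode + v / u
    return draw

draw = rou_rejection(lambda x: math.exp(-0.5 * x * x), 0.0)   # assumed test density
sample = [draw() for _ in range(1000)]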
|