91

Continuum diffusion on networks

Christophe Haynes Unknown Date (has links)
In this thesis we develop and use a continuum random walk framework to solve problems that are usually studied using a discrete random walk on a discrete lattice. Problems studied include: the time it takes for a random walker to be absorbed at a trap on a fractal lattice, the calculation of the spectral dimension for several different classes of networks, the calculation of the density of states for a multi-layered Bethe lattice, and the relationship between diffusion exponents and a resistivity exponent that occur in relevant power laws. The majority of the results are obtained by deriving an expression for a Laplace-transformed Green's function or first passage time, and then using Tauberian theorems to find the relevant asymptotic behaviour. The continuum framework is established by studying the diffusion equation on a 1-d bar with non-homogeneous boundary conditions. The result is extended to model diffusion on networks through linear algebra. We derive the transformation linking the Green's functions and first passage time results in the continuum and discrete settings. The continuum method is used in conjunction with renormalization techniques to calculate the time taken for a random walker to be absorbed at a trap on a fractal lattice and also to find the spectral dimension of new classes of networks. Although these networks can be embedded in d-dimensional Euclidean space, they do not have a spectral dimension equal to twice the ratio of the fractal dimension to the random walk dimension when the random walk on the network is transient. The networks therefore violate the Alexander-Orbach law. The fractal Einstein relationship (relating a diffusion exponent to a resistivity exponent) also does not hold on these networks. Through a suitable scaling argument, we derive a generalised fractal Einstein relationship which holds for our lattices and explains anomalous results concerning transport on diffusion-limited aggregates and Eden trees.
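The thesis obtains first-passage quantities analytically, via Laplace transforms and Tauberian theorems in a continuum setting. As an illustration only, the simplest discrete analogue of the trapping problem — a walker on a 1-d lattice with one absorbing trap — can be simulated and checked against the exact answer. All names and parameters below are illustrative, not taken from the thesis:

```python
import random

def mfpt_to_trap(n_sites, start, trials, seed=0):
    """Monte Carlo mean first-passage time for a symmetric random walk
    on sites 0..n_sites with an absorbing trap at 0 and a reflecting
    wall at n_sites."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, steps = start, 0
        while pos != 0:
            if pos == n_sites:              # reflecting boundary
                pos -= 1
            else:
                pos += rng.choice((-1, 1))  # unbiased step
            steps += 1
        total += steps
    return total / trials

# Exact discrete answer for this geometry: E[T] = k * (2N - k).
N, k = 10, 3
est = mfpt_to_trap(N, k, trials=20000)
exact = k * (2 * N - k)   # 51
```

On fractal lattices the same quantity scales anomalously with system size, which is where the renormalization analysis in the thesis comes in.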
92

Implementation of a Manycast Protocol in a Partitionable Mobile Ad hoc Network

Nykvist, Gustav January 2009 (has links)
Wireless communication has grown very popular, and communication is the key to success in many situations. However, most of the common technologies today rely on infrastructure, and in disaster situations infrastructure might be lost or severely overloaded. This master's thesis concerns intermittently connected mobile ad hoc networks, in which devices may move freely in any direction and still be able to communicate. To demonstrate a network protocol called random-walk gossip-based manycast (RWG), my assignment has been to implement this protocol using off-the-shelf hardware and software.

RWG is a multi-hop and partition-tolerant mobile ad hoc manycast network protocol. Multi-hop refers to information being able to hop between more than two nodes in a network, and partition-tolerant means that the protocol works even though a network is partitioned. Manycast means that the information should be successfully delivered to K of all the potential nodes in the area. The RWG protocol makes use of four different packet types: request to forward (REQF), acknowledgement (ACK), ok to forward (OKTF) and be silent (BS). The actual data being sent is carried by REQFs and is referred to as messages. When a message is sent it takes what could be described as a random walk among the nodes in the network, hence the name.

The implementation of the RWG protocol resides in user space and depends on the IEEE 802.11b standard and the raw socket specified in the BSD socket API. It is written in C and was developed on a machine running Ubuntu. It runs on systems that use Linux 2.6 kernels and supports cross-compiling for ARM-based devices such as the Nokia N810 internet tablet and the Android Dev Phone 1.

To be able to demonstrate the protocol I developed my own client application. Moreover, an already existing application for Android, Portable Open Search and Identification Tool (POSIT), was successfully extended to run on top of the RWG implementation. The extension was developed by people in the POSIT project and tested in a physical experiment covering five devices.

The report covers the RWG protocol, the system choice, the implementation and the testing of the implementation.
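The actual implementation described above is in C against raw 802.11 sockets. Purely as a toy sketch of the manycast idea — a message random-walking until K nodes hold it — the core loop can be mimicked in a few lines; the graph, node names, and termination rule here are invented for illustration and simplify away the REQF/ACK/OKTF/BS handshake:

```python
import random
from enum import Enum

class Packet(Enum):
    """The four RWG packet types named in the abstract."""
    REQF = "request to forward"
    ACK = "acknowledgement"
    OKTF = "ok to forward"
    BS = "be silent"

def manycast_random_walk(adjacency, source, k, seed=0):
    """Toy manycast: the message (a REQF in RWG terms) random-walks
    over the graph, informing each node it visits, until at least
    k nodes hold it."""
    rng = random.Random(seed)
    informed = {source}
    current = source
    while len(informed) < k:
        current = rng.choice(adjacency[current])  # forward to a random neighbour
        informed.add(current)
    return informed

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
reached = manycast_random_walk(graph, source=0, k=3)
```

In the real protocol, partitions are handled by nodes buffering messages and resuming the walk when connectivity returns, which this sketch does not model.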
93

Upscaling of solute transport in heterogeneous media : theories and experiments to compare and validate Fickian and non-Fickian approaches

Frippiat, Christophe 29 May 2006 (has links)
The classical Fickian model for solute transport in porous media cannot correctly predict the spreading (the dispersion) of contaminant plumes in a heterogeneous subsoil unless its structure is completely characterized. Although the required precision is beyond the reach of current field characterization methods, the classical Fickian model remains the most widely used model among practitioners. Two approaches can be adopted to account for the effect of physical heterogeneity on transport. First, upscaling methods allow one to compute "apparent" scale-dependent parameters to be used in the classical Fickian model. In the second approach, upscaled (non-Fickian) transport equations with scale-independent parameters are used. This research aims at comparing upscaling methods for Fickian transport parameters with non-Fickian upscaled transport equations, and at evaluating their capabilities to predict solute transport in heterogeneous media. The models were tested using simplified numerical examples (perfectly stratified aquifers and two-dimensional heterogeneous media). Hypothetical lognormal permeability fields were investigated for different values of variance, correlation length and anisotropy ratio. Examples exhibiting discrete and multimodal permeability distributions were also investigated, using both numerical examples and a physical laboratory experiment. It was found that non-Fickian transport equations involving fractional derivatives have higher upscaling capabilities regarding the prediction of contaminant plume migration and spreading, although their key parameters can only be inferred from inverse modelling of test data.
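The Fickian baseline that the thesis tests against has a simple stochastic counterpart: random-walk particle tracking, where each particle takes an advective step plus a Gaussian dispersive step, and the plume variance grows linearly as 2Dt. The sketch below is a generic homogeneous-medium illustration of that prediction, not the heterogeneous simulations of the study; all parameter values are invented:

```python
import math
import random

def track_particles(n, v, D, t, dt, seed=0):
    """Random-walk particle tracking for 1-d advection-dispersion:
    dx = v*dt + sqrt(2*D*dt)*xi, with xi ~ N(0, 1).  Fickian theory
    predicts plume mean v*t and plume variance 2*D*t."""
    rng = random.Random(seed)
    steps = int(t / dt)
    jump = math.sqrt(2.0 * D * dt)
    xs = []
    for _ in range(n):
        x = 0.0
        for _ in range(steps):
            x += v * dt + jump * rng.gauss(0.0, 1.0)
        xs.append(x)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var

mean, var = track_particles(n=4000, v=1.0, D=0.5, t=1.0, dt=0.01)
# Fickian prediction here: mean ~ 1.0, variance ~ 2*D*t = 1.0
```

In heterogeneous media the observed plume variance typically grows faster than linearly (scale-dependent dispersion), which is exactly the failure of this model that the upscaled and fractional-derivative formulations address.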
94

Green Functions on Self-Similar Graphs and Bounds for the Spectrum of the Laplacian

kroen@finanz.math.tu-graz.ac.at 26 September 2001 (has links)
No description available.
95

Growth Series and Random Walks on Some Hyperbolic Graphs

Laurent@math.berkeley.edu 26 September 2001 (has links)
No description available.
96

Random Loewner Chains

Johansson, Carl Fredrik January 2010 (has links)
This thesis contains four papers and two introductory chapters. It is mainly devoted to problems concerning random growth models related to the Loewner differential equation. In Paper I we derive a rate of convergence of the Loewner driving function for loop-erased random walk to Brownian motion with speed 2 on the unit circle, the Loewner driving function for radial SLE(2). Thereby we provide the first instance of a formal derivation of a rate of convergence for any of the discrete models known to converge to SLE. In Paper II we use the known convergence of (radial) loop-erased random walk to radial SLE(2) to prove that the scaling limit of loop-erased random walk excursion in the upper half plane is chordal SLE(2). Our proof relies on a version of Wilson’s algorithm for weighted graphs together with a Beurling-type hitting estimate for random walk excursion. We also establish and use the convergence of the radial SLE path to the chordal SLE path as the bulk point tends to a boundary point. In the final section we sketch how to extend our results to more general domains. In Paper III we prove an upper bound on the optimal Hölder exponent for the chordal SLE path parameterized by capacity and thereby establish the optimal exponent as conjectured by J. Lind. We also give a new proof of the lower bound. Our proofs are based on sharp estimates of moments of the derivative of the inverse SLE map. In particular, we improve an estimate of G. F. Lawler. In Paper IV we consider radial Loewner evolutions driven by unimodular Lévy processes. We rescale the hulls of the evolution by capacity, and prove that the weak limit of the rescaled hulls exists. We then study a random growth model obtained by driving the Loewner equation with a compound Poisson process with two real parameters: the intensity of the underlying Poisson process and a localization parameter of the Poisson kernel which determines the jumps. 
A particular choice of parameters yields a growth process similar to the Hastings-Levitov HL(0) model. We describe the asymptotic behavior of the hulls with respect to the parameters, showing that growth tends to become localized as the jump parameter increases. We obtain deterministic evolutions in one limiting case, and Loewner evolution driven by a unimodular Cauchy process in another. We also show that the Hausdorff dimension of the limiting rescaled hulls is equal to 1.
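At the centre of all four papers is the Loewner differential equation; driving it with Brownian motion gives SLE, with a Lévy process gives the models of Paper IV. As a hedged numerical illustration (not anything from the thesis itself), one can Euler-integrate the chordal equation and check it against the one closed-form case, zero driving, where the hull is a straight slit:

```python
import cmath

def chordal_loewner(z0, driving, t_max, dt):
    """Euler integration of the chordal Loewner ODE
        dg/dt = 2 / (g - U_t)
    for a point z0 in the upper half plane; `driving` maps t to U_t."""
    g, t = complex(z0), 0.0
    while t < t_max:
        g += dt * 2.0 / (g - driving(t))
        t += dt
    return g

# Sanity check with zero driving: the exact solution is
# g_t(z) = sqrt(z^2 + 4t).  SLE(kappa) replaces U_t by sqrt(kappa)*B_t,
# and Paper IV replaces it by a Lévy process.
z0 = 1 + 1j
g_num = chordal_loewner(z0, lambda t: 0.0, t_max=1.0, dt=1e-4)
g_exact = cmath.sqrt(z0 * z0 + 4.0)
```

The interesting behaviour (loop-erased walk convergence, Hölder regularity, compound-Poisson hulls) all comes from the choice of driving function, which this deterministic check deliberately keeps trivial.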
97

Financial Analysts' Forecast Precision : Swedish Evidence

Personne, Karl, Pääjärvi, Sandra January 2013 (has links)
The future is uncertain. We therefore make predictions and forecasts of the future in order to be able to plan for and react to future events. For this purpose, financial analysts are argued to have a responsibility towards investors and the market in helping to keep the market efficient. Given that financial analysts act in a rational way, we argue that analysts should strive to maximize forecast accuracy. The purpose of this study is to investigate how accurate financial analysts' forecasts of Swedish firms' future values are, and what information used by analysts significantly affects their forecast accuracy. To investigate this we first examine whether financial analysts contribute value to investors by comparing their forecast precision against a simple time-series model. Our findings show that financial analysts produce significantly more accurate forecasts than a time-series model in the short term. Furthermore, given that rational analysts act in their own best interest while making accurate forecasts, we argue that analysts will incorporate and use the information that is available to them for the purpose of maximizing forecast accuracy. We investigate this by testing whether the analysts' forecast accuracy is affected by the forecast horizon, the number of analysts following a firm, the firm size, the corporate visibility, the predictability of earnings, and the trading volume. We find that forecast accuracy is better when the number of analysts following a firm is high, the firm is larger, the forecasted company's corporate visibility in the news is more frequent, and the predictability of earnings is higher. The trading volume does not have a significant effect on analysts' forecast accuracy. To conclude, we question the value of financial analysts' forecasts for longer forecast horizons.
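The benchmark comparison described above — analyst forecasts versus a simple time-series model — can be made concrete with a naive "random walk" forecast (next value equals last observed value) and a mean absolute error. The figures below are hypothetical, purely for illustration, and are not data from the study:

```python
# Hypothetical quarterly EPS figures and analyst consensus forecasts.
actual = [10.0, 12.0, 11.0, 13.0]
analyst_forecast = [11.5, 11.2, 12.5]   # forecasts for quarters 2..4
naive_forecast = actual[:-1]            # random-walk model: next = last observed

def mae(forecasts, actuals):
    """Mean absolute forecast error, a simple accuracy measure."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(forecasts)

analyst_err = mae(analyst_forecast, actual[1:])   # 0.4
naive_err = mae(naive_forecast, actual[1:])       # 5/3
```

In this toy series the analyst beats the naive model, mirroring the study's short-horizon finding; the study's point is that this advantage shrinks as the forecast horizon lengthens.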
98

Color Range Determination and Alpha Matting for Color Images

Luo, Zhenyi 02 November 2011 (has links)
This thesis proposes a new chroma keying method that can automatically detect background, foreground, and unknown regions. For background color detection, we use K-means clustering in color space to calculate a limited number of clusters of background colors. We use spatial information to clean the background regions and minimize the unknown regions. Our method needs only minimal input from the user. For unknown regions, we compute the alpha matte based on Wang's robust matting algorithm, which is considered one of the best algorithms in the literature, if not the best. Wang's algorithm is based on a modified random walk. We propose a better color selection method, which improves matting results in the experiments. In the thesis, a detailed implementation of robust matting is provided. The experimental results demonstrate that our proposed method can handle images with one background color, images with gridded backgrounds, and images with difficult regions such as complex hair strands and semi-transparent clothing.
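The background-color detection step rests on standard K-means clustering in RGB space. A minimal sketch of Lloyd's algorithm on a handful of invented "pixel" values shows the clustering the thesis relies on; the data, initialization, and cluster count are illustrative, not the thesis's actual pipeline:

```python
def kmeans(points, centroids, iters=10):
    """Plain Lloyd's algorithm on RGB triples: assign each point to its
    nearest centroid, then recompute centroids as cluster means."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        centroids = [
            tuple(sum(channel) / len(cl) for channel in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    labels = []
    for p in points:
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        labels.append(dists.index(min(dists)))
    return centroids, labels

# Two obvious colour groups: greenish and bluish background pixels.
pixels = [(0, 250, 0), (5, 245, 10), (0, 255, 5),
          (0, 0, 250), (10, 5, 255), (5, 0, 245)]
centroids, labels = kmeans(pixels, centroids=[pixels[0], pixels[3]])
```

In the thesis, the resulting cluster centres define the background color ranges, and spatial cleanup plus the random-walk-based matting handle everything the clustering leaves uncertain.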
99

Autonomic Core Network Management System

Tizghadam, Ali 11 December 2009 (has links)
This thesis presents an approach to the design and management of core networks where the packet transport is the main service and the backbone should be able to respond to unforeseen changes in network parameters in order to provide smooth and reliable service for the customers. Inspired by Darwin's seminal work describing the long-term processes in life, and with the help of graph theoretic metrics, in particular the "random-walk betweenness", we assign a survival value, the network criticality, to a communication network to quantify its robustness. We show that the random-walk betweenness of a node (link) consists of the product of two terms, a global measure which is fixed for all the nodes (links) and a local graph measure which is in fact the weight of the node (link). The network criticality is defined as the global part of the betweenness of a node (link). We show that the network criticality is a monotone decreasing, and strictly convex function of the weight matrix of the network graph. We argue that any communication network can be modeled as a topology that evolves based on survivability and performance requirements. The evolution should be in the direction of decreasing the network criticality, which in turn increases the network robustness. We use network criticality as the main control parameter and we propose a network management system, AutoNet, to guide the network evolution in real time. AutoNet consists of two autonomic loops, the slow loop to control the long-term evolution of robustness throughout the whole network, and the fast loop to account for short-term performance and robustness issues. We investigate the dynamics of network criticality and we develop a convex optimization problem to minimize the network criticality. We propose a network design procedure based on the optimization problem which can be used to develop the long-term autonomic loop for AutoNet. 
Furthermore, we use the properties of the duality gap of the optimization problem to develop traffic engineering methods to manage the transport of packets in a network. This provides the short-term autonomic loop of the AutoNet architecture. Network criticality can also be used to rank alternative networks based on their robustness to unpredicted changes in network conditions. This can help find the best network structure, under some pre-specified constraints, to deal with robustness issues.
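The thesis defines random-walk betweenness analytically through the graph Laplacian. One ingredient of that quantity — the expected number of visits a walker pays to each node on its way from a source to a sink — is easy to estimate by Monte Carlo, and on a small path graph the estimate can be checked against the fundamental-matrix calculation done by hand. This is an illustrative sketch, not the thesis's computation:

```python
import random

def expected_visits(adjacency, source, sink, trials, seed=0):
    """Monte Carlo estimate of the expected number of visits a random
    walker starting at `source` pays to each node before absorption at
    `sink`.  Random-walk betweenness aggregates such visit counts over
    source/sink pairs."""
    rng = random.Random(seed)
    visits = {v: 0 for v in adjacency}
    for _ in range(trials):
        node = source
        while node != sink:
            visits[node] += 1
            node = rng.choice(adjacency[node])  # uniform step to a neighbour
    return {v: c / trials for v, c in visits.items()}

# Path 0-1-2-3, walker starts at 0, absorbed at 3.  Solving the
# fundamental matrix (I - Q)^(-1) by hand gives expected visits
# (3, 4, 2) to nodes 0, 1, 2.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
v = expected_visits(path, source=0, sink=3, trials=20000)
```

In the thesis, the global factor common to all such visit counts is the network criticality, and minimizing it over edge weights is what drives AutoNet's slow control loop.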
