11

Small-world network models and their average path length

Taha, Samah M. Osman, 2014
Thesis (MSc)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: Socially based networks are of particular interest amongst the variety of communication networks arising in reality. They are distinguished by a small average path length and a high clustering coefficient, and so are examples of small-world networks. This thesis studies both real examples and theoretical models of small-world networks, with particular attention to average path length. Existing models of small-world networks, due to Watts and Strogatz (1998) and Newman and Watts (1999a), impose boundary conditions on a one-dimensional lattice, and either rewire links locally and probabilistically (the former) or probabilistically add extra links (the latter). These models are investigated and compared with real-world networks. We consider a model in which randomness is provided by the Erdős–Rényi random network model superposed on a deterministic one-dimensional structured network, and we reason about this model using tools and results from random graph theory. Given a disordered network C(n, p), formed by adding links randomly with probability p to a one-dimensional network C(n), we improve the analytical result regarding the average path length by showing that the onset of small-world behaviour occurs if pn is bounded away from zero. Furthermore, we show that when pn tends to zero, C(n, p) is no longer small-world: the average path length in this case grows to infinity with the network order. We deduce that at least εn random links (for some constant ε > 0) must be added to a one-dimensional lattice to ensure an average path length of order log n.
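The C(n, p) construction described in this abstract, a one-dimensional cycle plus independently added random chords, is straightforward to simulate. The sketch below is a toy illustration of the claimed behaviour, not code from the thesis; the network size and shortcut probability are invented for the example:

```python
import random
from collections import deque

def ring_with_shortcuts(n, p, seed=0):
    """Cycle C(n) plus each possible chord added independently with probability p."""
    rng = random.Random(seed)
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        for j in range(i + 2, n):
            if (i, j) != (0, n - 1) and rng.random() < p:  # skip existing cycle edges
                adj[i].add(j)
                adj[j].add(i)
    return adj

def average_path_length(adj):
    """Mean shortest-path distance over ordered pairs, by BFS from every node."""
    n, total = len(adj), 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

n = 200
bare = average_path_length(ring_with_shortcuts(n, 0.0))       # plain cycle: about n/4
small = average_path_length(ring_with_shortcuts(n, 5.0 / n))  # pn = 5, bounded away from zero
```

On the bare cycle the mean distance grows linearly with n (exactly n²/(4(n − 1)) for even n), while keeping pn bounded away from zero collapses it to a few hops, consistent with the thesis's threshold result.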
12

Stochastické síťové modely / Stochastic activity networks

Sůva, Pavel January 2011
In the present work, stochastic network models representing a project as a set of activities are studied, along with different approaches to these models: the critical path method, stochastic network models with probability constraints, finding a reference project duration, worst-case analysis in stochastic networks, and optimization of the parameters of the probability distributions of the activity durations. The use of stochastic network models in telecommunications networks is also briefly presented. In a numerical study, some of these models are implemented and the related numerical results are analyzed.
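The critical path method mentioned in this abstract computes the minimum project duration from activity durations and precedence constraints. A minimal sketch (the activity data is invented for illustration, not taken from the thesis):

```python
def critical_path(acts):
    """acts: {name: (duration, [predecessor names])}; assumed acyclic.
    Returns (project duration, one critical chain of activities)."""
    ef = {}  # memoised earliest-finish times

    def finish(a):
        if a not in ef:
            dur, preds = acts[a]
            ef[a] = dur + max((finish(p) for p in preds), default=0)
        return ef[a]

    duration = max(finish(a) for a in acts)
    # Recover one critical chain by walking back through binding predecessors.
    path, a = [], max(acts, key=finish)
    while a is not None:
        path.append(a)
        preds = acts[a][1]
        a = max(preds, key=finish) if preds else None
    return duration, list(reversed(path))

# Invented example: E waits on C and D; the chain A -> C -> E is binding.
acts = {"A": (3, []), "B": (2, []), "C": (4, ["A"]),
        "D": (1, ["A", "B"]), "E": (2, ["C", "D"])}
duration, chain = critical_path(acts)
```

The stochastic variants the thesis studies replace the fixed durations with random variables, so the project duration itself becomes a random quantity.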
13

On the economic costs of value at risk forecasts

Miazhynskaia, Tatiana, Dockner, Engelbert J., Dorffner, Georg January 2003
We specify a class of non-linear and non-Gaussian models for which we estimate and forecast the conditional distributions at daily frequency. We use these forecasts to calculate VaR measures for three different equity markets (US, GB and Japan). These forecasts are evaluated on the basis of different statistical performance measures as well as on the basis of the economic costs that go along with the forecasted capital requirements. The results indicate that different performance measures generate different rankings of the models, even within one financial market. We also find that for the three markets the improvement in the forecast by non-linear models over linear ones is negligible, while non-Gaussian models significantly dominate the Gaussian models. / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
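As a hedged illustration of the calculation these forecasts feed into (not the authors' actual models): under a Gaussian return forecast, VaR at level α is the negated α-quantile of the predicted distribution, while a non-Gaussian alternative can be read off an empirical quantile. All numbers below are invented:

```python
from statistics import NormalDist

def gaussian_var(mu, sigma, alpha=0.01):
    """VaR at level alpha under a N(mu, sigma^2) one-day return forecast,
    reported as a positive loss."""
    z = NormalDist().inv_cdf(alpha)  # left-tail quantile of the standard normal
    return -(mu + sigma * z)

def empirical_var(returns, alpha=0.01):
    """Non-parametric alternative: negate the empirical alpha-quantile."""
    xs = sorted(returns)
    k = max(int(alpha * len(xs)) - 1, 0)
    return -xs[k]

# Invented forecast: zero mean, 1% daily volatility.
var_99 = gaussian_var(0.0, 0.01, alpha=0.01)
```

The capital requirement, and hence the economic cost the paper measures, scales with the forecast VaR, which is why fatter-tailed (non-Gaussian) forecasts can change the model ranking.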
14

Evaluation de précision et vitesse de simulation pour des systèmes de calcul distribué à large échelle / Accurate and Fast Simulations of Large-Scale Distributed Computing Systems

Madeira de Campos Velho, Pedro Antonio 04 July 2011
Large-Scale Distributed Computing (LSDC) systems are in production today to solve problems that require huge amounts of computational power or storage. Such systems are composed of a set of computational resources sharing a communication infrastructure. In such systems, as in any computing environment, specialists need to conduct experiments to validate alternatives and compare solutions. However, due to the distributed nature of the resources, performing experiments in LSDC environments is hard and costly. The execution flow depends on the order of events, which is likely to change from one execution to another; consequently, experiments are hard to reproduce, which hinders the development process. Moreover, resources are very likely to fail or go off-line, and since LSDC architectures are shared, interference among different applications, or even among processes of the same application, affects the overall application behavior. Finally, LSDC applications are time consuming, so conducting many experiments with several parameter settings is often unfeasible. For all these reasons, experiments in LSDC often rely on simulation. Many simulation approaches for LSDC exist today, most of them targeting specific architectures such as cluster, grid or volunteer computing, and each claiming to be better adapted to a particular research purpose. Nevertheless, these simulators must address the same problems, modeling the network and managing computing resources, and must satisfy the same requirements: fast, accurate, scalable and repeatable simulations. To meet these requirements, LSDC simulation uses models that approximate system behavior, neglecting some aspects to focus on the phenomena of interest. However, models may be wrong, and when that is the case, trusting them leads to unreliable conclusions. In other words, we need evidence that the models are accurate before accepting the conclusions supported by simulated results. Although many simulators exist for LSDC, studies of their accuracy are rarely found. In this thesis, we are particularly interested in analyzing and proposing accurate models that respect the requirements of LSDC research. To this end, we conduct an accuracy evaluation of common and new simulation models. Throughout this document, we propose model improvements to mitigate the simulation error of LSDC simulation, using SimGrid as a case study, and we evaluate the effect of these improvements on scalability and speed. As a main contribution, we show that intuitive models can achieve better accuracy, speed and scalability than other state-of-the-art models. These results are obtained through a thorough and systematic analysis of problematic situations, which reveals that many small yet common phenomena had been neglected in previous models and must be accounted for to design sound models.
15

Estudo dos fenômenos de transporte em cromatografia através da aplicação de modelos de rede e estocásticos / Study of transport phenomena in chromatography by application of network and stochastic models

Flávio de Matos Silva 17 May 2011
Conselho Nacional de Desenvolvimento Científico e Tecnológico / The study of the different phenomena of separation has become increasingly important for many areas of industry and science. Thanks to the computational capacity now available, chromatographic phenomena can be modeled and analyzed at the microscopic level. Network models are increasingly used to represent separation processes in chromatography, because they can capture the topological and morphological aspects of the different adsorbent materials available on the market. In this work we develop a three-dimensional network model of a chromatographic column, at the microscopic level, in which the phenomena of adsorption, desorption and axial dispersion are modeled by a stochastic method; different treatments of steric hindrance are also considered, and the results are compared with experimental data. A two-dimensional network model is then used to represent a batch adsorption system, keeping the modeling of adsorption and desorption, and is subsequently compared with real systems. In both modeled systems the equilibrium constants, a fundamental parameter of adsorption systems, are analyzed, and finally adsorption isotherms are obtained and analyzed. We conclude that, for the network models, the phenomena of adsorption and desorption suffice to obtain output profiles similar to those seen experimentally, and that axial dispersion influences the results less than the kinetic phenomena in question.
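A minimal stochastic sketch of the adsorption/desorption kinetics this abstract describes, on independent sites rather than the thesis's full network model; the rates are invented, and detailed balance predicts an equilibrium coverage of p_ads / (p_ads + p_des):

```python
import random

def simulate_adsorption(n_sites=500, p_ads=0.3, p_des=0.1, steps=2000, seed=1):
    """Independent adsorption sites: each step an empty site adsorbs with
    probability p_ads and an occupied site desorbs with probability p_des.
    Returns the final fractional surface coverage."""
    rng = random.Random(seed)
    occupied = [False] * n_sites
    for _ in range(steps):
        for i in range(n_sites):
            if occupied[i]:
                if rng.random() < p_des:
                    occupied[i] = False
            elif rng.random() < p_ads:
                occupied[i] = True
    return sum(occupied) / n_sites

coverage = simulate_adsorption()
# With p_ads = 0.3 and p_des = 0.1, equilibrium coverage is 0.3 / 0.4 = 0.75.
```

The ratio p_ads/p_des plays the role of the equilibrium constant analyzed in the thesis; a network model additionally couples the sites through the pore topology.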
16

Verification of Solutions to the Sensor Location Problem

May, Chandler 01 May 2011
Traffic congestion is a serious problem with large economic and environmental impacts. To reduce congestion (as a city planner) or simply to avoid congested channels (as a road user), one might like to accurately know the flow on roads in the traffic network. This information can be obtained from traffic sensors, devices that can be installed on roads or intersections to measure traffic flow. The sensor location problem is the problem of efficiently locating traffic sensors on intersections such that the flow on the entire network can be extrapolated from the readings of those sensors. I build on current research concerning the sensor location problem to develop conditions on a traffic network and sensor configuration such that the flow can be uniquely extrapolated from the sensors. Specifically, I partition the network by a method described by Morrison and Martonosi (2010) and establish a necessary and sufficient condition for uniquely extrapolatable flow on a part of that network that has certain flow characteristics. I also state a different sufficient but not necessary condition and include a novel proof thereof. Finally, I present several results illustrating the relationship between the inputs to a general network and the flow solution.
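The uniqueness question in the sensor location problem can be phrased linearly: flow conservation gives A x = b, sensors fix some components of x, and the remaining flows are uniquely determined exactly when the columns of A for unsensed arcs are linearly independent. A small sketch under that formulation (not Morrison and Martonosi's partition method; the example network is invented):

```python
from fractions import Fraction

def rank(mat):
    """Row-reduce over the rationals and count pivots."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def flow_uniquely_determined(node_arc, sensed):
    """Conservation A x = b determines the unsensed flows uniquely iff the
    columns of A for unsensed arcs are linearly independent."""
    cols = [j for j in range(len(node_arc[0])) if j not in sensed]
    sub = [[row[j] for j in cols] for row in node_arc]
    return rank(sub) == len(cols)

# Path s -> 1 -> 2 -> t with arcs e0, e1, e2; conservation at nodes 1 and 2.
A = [[1, -1, 0],
     [0, 1, -1]]
```

Sensing arc e0 pins down the whole path, while with no sensors the single degree of freedom (total throughput) remains undetermined.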
17

Analytical models to evaluate system performance measures for vehicle based material-handling systems under various dispatching policies

Lee, Moonsu 29 August 2005
Queueing network-based approximation models were developed to evaluate the performance of fixed-route material-handling systems supporting a multiple-workcenter manufacturing facility. In this research, we develop analytical models for fixed-route material-handling systems from two different perspectives: the workcenters' point of view and the transporters' point of view. The state-dependent nature of the transportation time is considered here to obtain more accurate analytical approximation models for material-handling systems. An analytical methodology is also developed to describe the impact of several different vehicle-dispatching policies for material-handling systems. Two types of vehicle-dispatching policies are considered: workcenter-initiated and vehicle-initiated rules. For the workcenter-initiated rule, the Closest Transporter Allocation Rule (CTAR) was used to assign empty transporters to jobs needing to be moved between workcenters. On the other hand, four different vehicle-initiated dispatching rules, the Shortest Distance Dispatching Rule (SDR), the Time Limit/Shortest Distance Dispatching Rule (TL/SDR), the First-Come First-Serve Dispatching Rule (FCFSR) and the Longest Distance Dispatching Rule (LDR), are used to select job requests from workcenters when a transporter becomes available. From the models with a queue space limit of one at each workcenter and one transporter, two extensions are considered: first, the queue space limit at each workcenter is increased from one to two while the number of transporters remains one; second, the number of transporters is increased from one to two while maintaining the queue space limit of one at each workcenter. Finally, using a simulation approach, we modified the Nearest Neighbor (NN) heuristic dispatching procedure for multi-load transporters proposed by Tanchoco and Co (1994) and tested it for a fixed-route material-handling system. The effects of the modified NN and the original NN dispatching procedures on system performance measures such as WIP and cycle time were investigated, and we demonstrated that the modified NN heuristic performs better than the original NN procedure in terms of these measures.
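Two of the dispatching rules named above admit one-line formulations: CTAR sends the idle transporter closest to the requesting workcenter, and SDR has an idle vehicle serve the nearest waiting request. A toy sketch with invented one-dimensional locations (the real models work on a fixed route network):

```python
def closest_transporter(request_loc, transporters, dist):
    """CTAR: among idle transporters, pick the one closest to the request."""
    idle = [t for t in transporters if t["idle"]]
    return min(idle, key=lambda t: dist(t["loc"], request_loc), default=None)

def shortest_distance_request(vehicle_loc, requests, dist):
    """SDR: an idle vehicle serves the nearest waiting request."""
    return min(requests, key=lambda r: dist(vehicle_loc, r), default=None)

fleet = [{"id": 1, "loc": 0, "idle": False},
         {"id": 2, "loc": 5, "idle": True},
         {"id": 3, "loc": 9, "idle": True}]
d = lambda a, b: abs(a - b)

chosen = closest_transporter(8, fleet, d)      # transporter 3 (distance 1 vs 3)
nearest = shortest_distance_request(2, [7, 3, 10], d)  # request at 3
```

FCFSR and LDR differ only in the key used (arrival time, or negated distance), which is what makes the comparative study across rules tractable.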
18

Digital control networks for virtual creatures

Bainbridge, Christopher James January 2010
Robot control systems evolved with genetic algorithms traditionally take the form of floating-point neural network models. This thesis proposes that digital control systems, such as quantised neural networks and logical networks, may also be used for the task of robot control. The inspiration for this is the observation that the dynamics of discrete networks may contain cyclic attractors which generate rhythmic behaviour, and that rhythmic behaviour underlies the central pattern generators which drive low-level motor activity in the biological world. To investigate this, a series of experiments was carried out in a simulated, physically realistic 3D world. The performance of evolved controllers was evaluated on two well-known control tasks: pole balancing, and locomotion of evolved morphologies. The performance of evolved digital controllers was compared with that of evolved floating-point neural networks. The results show that the digital implementations are competitive with floating-point designs on both benchmark problems. In addition, the first reported evolution from scratch of a biped walker is presented, demonstrating that when all parameters are left open to evolutionary optimisation, complex behaviour can result from simple components.
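The cyclic attractors mentioned here are easy to exhibit: iterating a deterministic synchronous update of a Boolean network must eventually revisit a state, and the repeated segment of the trajectory is the attractor. A sketch (my own toy example, not the thesis's controllers) using a three-node ring of inverters, whose odd negative feedback loop admits no fixed point and so must oscillate:

```python
def find_attractor(update, state):
    """Iterate a deterministic synchronous update until a state repeats;
    the repeated segment of the trajectory is the (cyclic) attractor."""
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = update(state)
    return trajectory[seen[state]:]

# Three Boolean nodes in a ring, each negating the node feeding into it:
# a reads c, b reads a, c reads b.
def ring_update(s):
    a, b, c = s
    return (not c, not a, not b)

cycle = find_attractor(ring_update, (True, False, False))
```

An attractor like this, driving actuators through its repeating states, is the discrete analogue of the central pattern generators that motivate the thesis.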
19

Decomposition of general queueing network models : an investigation into the implementation of hierarchical decomposition schemes of general closed queueing network models using the principle of minimum relative entropy subject to fully decomposable constraints

Tomaras, Panagiotis J. January 1989
Decomposition methods based on the hierarchical partitioning of the state space of queueing network models offer powerful evaluation tools for the performance analysis of computer systems and communication networks. As conventionally implemented, these methods capture the exact solution of separable queueing network models, but their credibility differs when applied to general queueing networks. This thesis provides a universal information-theoretic framework for the implementation of hierarchical decomposition schemes, based on the principle of minimum relative entropy given fully decomposable subset and aggregate utilization, mean queue length and flow-balance constraints. This principle is used, in conjunction with asymptotic connections to infinite-capacity queues, to derive new closed-form approximations for the conditional and marginal state probabilities of general queueing network models. The minimum relative entropy solutions are implemented iteratively at each decomposition level, with the generalized exponential (GE) distributional model approximating the general service and asymptotic flow processes in the network. It is shown that the minimum relative entropy joint state probability, subject to mean queue length and flow-balance constraints, is identical to the exact product-form solution obtained as if the network were separable. An investigation into the effect of different couplings of the resource units on the relative accuracy of the approximation is carried out, based on extensive experimentation. The credibility of the method is demonstrated with illustrative examples involving first-come first-served general queueing networks with single and multiple servers, and favourable comparisons against exact solutions and other approximations are made.
20

General queueing network models for computer system performance analysis : a maximum entropy method of analysis and aggregation of general queueing network models with application to computer systems

El-Affendi, Mohamed Ahmed January 1983
In this study the maximum entropy formalism [JAYN 57] is suggested as an alternative theory for the general queueing systems of computer performance analysis. The motivation is to overcome some of the problems arising in this field and to extend the scope of the results derived in the context of Markovian queueing theory. For the M/G/1 model a unique maximum entropy solution satisfying local balance is derived, independent of any assumptions about the service-time distribution. However, it is shown that this solution is identical to the steady-state solution of the underlying Markov process when the service-time distribution is of the generalised exponential (GE) type. (The GE-type distribution is a mixture of an exponential term and a unit impulse function at the origin.) For the G/M/1 model the maximum entropy solution is identical in form to that of the underlying Markov process, but a GE-type distribution still produces the maximum entropy over all similar distributions. For the G/G/1 model there are three main achievements: (i) the spectral methods are extended to give exact formulae for the average number of customers in the system for any G/G/1 queue with rational Laplace transform, results previously obtainable only through simulation and approximation methods; (ii) a maximum entropy model is developed and used to obtain unique solutions for some types of the G/G/1 queue, and it is discussed how these solutions can be related to the corresponding stochastic processes; (iii) the importance of the G/GE/1 and the GE/GE/1 queues for the analysis of general networks is discussed, and some flow processes for these systems are characterised. For general queueing networks it is shown that the maximum entropy solution is a product of the maximum entropy solutions of the individual nodes. Accordingly, existing computational algorithms are extended to cover general networks with FCFS disciplines, some implementations are suggested, and a flow algorithm is derived. Finally, these results are used to improve existing aggregation methods. In addition, the study includes a number of examples, comparisons, surveys, useful comments and conclusions.
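As a hedged aside on the M/G/1 results above: the mean number in system is given by the Pollaczek-Khinchine formula, which depends on the service-time distribution only through its squared coefficient of variation c²; the GE-type distribution is one way to realise a given c² ≥ 1 as an exponential term plus an impulse at the origin. A small sanity-check sketch (standard queueing theory, not the thesis's maximum entropy derivation):

```python
def mg1_mean_customers(lam, mu, c2):
    """Pollaczek-Khinchine mean number in an M/G/1 system: arrival rate lam,
    service rate mu, squared coefficient of variation c2 of service time."""
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    return rho + rho * rho * (1 + c2) / (2 * (1 - rho))

# Sanity checks at rho = 0.5: c2 = 1 recovers M/M/1 (rho / (1 - rho));
# c2 = 0 is M/D/1, whose queue is half as long beyond the server term.
mm1 = mg1_mean_customers(1.0, 2.0, 1.0)
md1 = mg1_mean_customers(1.0, 2.0, 0.0)
```

This dependence on c² alone is what lets a two-parameter family like the GE distribution stand in for an arbitrary general service process at the level of mean queue lengths.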
