About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Semantically annotated multi-protocol adapter nodes: a new approach to implementing network-based information systems using ontologies.

Falkner, Nickolas John Gowland January 2007 (has links)
Network-based information systems are an important class of distributed systems that serve large and diverse user communities with information and essential network services. Centrally defined standards for interoperation and information exchange ensure that any required functionality is provided, but do so at the expense of flexibility and ease of system evolution. This thesis presents a novel approach to implementing network-based information systems in a knowledge-representation-based format using an ontological description of the service. Our approach allows us to provide flexible distributed systems that conform to global standards while still allowing local developments and protocol extensions. We can share data between systems if we provide an explicit specification of the relationship between the knowledge in the system and the structure and nature of the values shared between systems. Existing distributed systems may share data based on the values and structures of that data, but we go beyond syntax-based value exchange to introduce a semantically based exchange of knowledge. Stating the semantics and syntax of the system explicitly, in a machine-interpretable form, enables the automated integration of different systems through the use of adapter nodes: nodes that are members of more than one system and seamlessly transport data between them. We develop a multi-tier software architecture that characterises the values held inside the system according to an ontological classification of their structure and context, allowing values to be defined in terms of the knowledge that they represent. Initially, received values are viewed as data, with no structural information. Structural and type information, and the context of the value, can then be associated with it through the use of ontologies, leading to a value-form referred to as knowledge: a value that is structurally and contextually rich. This is demonstrated through an implementation process employing RDF, OWL and SPARQL to develop an ontological description of a network-based information system. The implementation provides evidence for the benefits and costs of representing a system in such a manner, including a complexity-based analysis of system performance, and demonstrates the ability of such a representation to separate global standards-based requirements from local user requirements. This allows behaviour specific to local needs to be added to otherwise global systems in a way that does not compromise the global standards. Our contribution is in providing a means for network-based information systems to retain the benefits of their global interaction while still allowing local customisation to meet user expectations. This thesis presents a novel use of ontologically based representation and tools to demonstrate the benefits of the multi-tier software architecture, with a separation of the contents of the system into data, information and knowledge. Our approach increases the ease of interoperation for large-scale distributed systems and facilitates the development of systems that can adapt to local requirements while retaining their wider interoperability. Further, it provides a strong contextual framework to ground concepts in the system and supports the amalgamation of data from many sources to provide rich and extensible network-based information systems. / http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1295234 / Thesis (Ph.D.) -- School of Computer Science, 2007
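
To make the data-to-knowledge promotion concrete, here is a minimal sketch in Python (using the rdflib library) of the general idea: a raw value arrives as bare data, ontological assertions add structural type and protocol context, and a SPARQL query retrieves only values that have been promoted to knowledge. The namespace, class names (Data, Knowledge, IPv4Address), and properties are illustrative assumptions, not the thesis's actual ontology.

```python
# A minimal sketch (not the thesis's implementation) of promoting a raw value
# to "knowledge" by attaching structural and contextual assertions in RDF,
# then retrieving it with SPARQL. All ontology names here are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/nis#")  # hypothetical ontology namespace
g = Graph()
g.bind("ex", EX)

# Tier 1: a bare value received from the network -- just data.
g.add((EX.value42, RDF.type, EX.Data))
g.add((EX.value42, EX.rawValue, Literal("192.0.2.17")))

# Higher tiers: ontological assertions add structure and context,
# yielding a value that is structurally and contextually rich.
g.add((EX.value42, RDF.type, EX.IPv4Address))        # structural type
g.add((EX.value42, EX.definedByProtocol, EX.DHCP))   # contextual grounding
g.add((EX.value42, RDF.type, EX.Knowledge))

# An adapter node could query for values it knows how to translate.
results = g.query("""
    PREFIX ex: <http://example.org/nis#>
    SELECT ?v ?raw WHERE {
        ?v a ex:Knowledge ;
           a ex:IPv4Address ;
           ex:rawValue ?raw .
    }""")
for row in results:
    print(row.v, row.raw)
```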
12

Overlapping community detection exploiting direct dependency structures in complex networks

Liang, Fengfeng 30 August 2017 (has links)
Many important applications in the social, ecological, epidemiological, and biological sciences can be modeled as complex systems in which nodes or variables interact with one another via the edges of a network. Community detection is known to be important for gaining insight into the structural characteristics of these complex systems. Existing community detection methods often assume that pairwise interaction data between nodes are already available and simply apply detection algorithms to the given network. However, the predefined network might contain inaccurate structures as a result of indirect effects stemming from the nodes' higher-order interactions, which poses challenges for the detection algorithms built on top of it. Meanwhile, existing methods for inferring direct interaction relationships struggle to identify the cut-off value that differentiates direct interactions from indirect ones. In this thesis, we consider the overlapping community detection problem with the determination and integration of structural information about direct dependency interactions. We propose a new overlapping community detection model, named direct-dependency-based nonnegative matrix factorization (DNMF), which exploits the Bayesian framework for pairwise ordering to incorporate the structural information of the underlying network. To evaluate the effectiveness and efficiency of the proposed method, we compare it with state-of-the-art methods on benchmark datasets collected from different domains. Our empirical results show that incorporating a direct dependency network yields significant improvement in community detection performance in networks with homophilic effects.
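
As a point of reference for the factorization machinery involved, the following is a minimal symmetric-NMF sketch in Python for overlapping community detection. It is a simplified stand-in, not the DNMF model itself: the direct-dependency inference and the Bayesian pairwise-ordering prior described above are omitted, and the membership threshold is an arbitrary choice.

```python
# Generic symmetric NMF baseline for overlapping community detection.
# A is a symmetric adjacency matrix; H[i, c] scores node i's membership
# in community c, so a node can belong to several communities at once.
import numpy as np

def symmetric_nmf(A, k, iters=500, eps=1e-9, seed=0):
    """Factor A ~ H @ H.T with H >= 0 via multiplicative updates."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))
    for _ in range(iters):
        H *= (A @ H) / (H @ (H.T @ H) + eps)  # update keeps H nonnegative
    return H

# Toy graph: two triangles sharing node 2.
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

H = symmetric_nmf(A, k=2)
communities = [set(np.where(H[:, c] > H.mean())[0]) for c in range(2)]
print(communities)  # node 2 is expected to appear in both communities
```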
13

Design and performance evaluation of a high-speed fiber optic integrated computer network for imaging communication systems.

Nematbakhsh, Mohammed Ali. January 1988 (has links)
In recent years, a growing number of diagnostic examinations in hospitals have been generated by digitally formatted imaging modalities. The evolution of these systems has led to the development of a totally digitized imaging system called the Picture Archiving and Communication System (PACS). A high-speed computer network plays a very important role in the design of a PACS: the network must not only offer a high data rate but must also be structured to satisfy PACS requirements efficiently. In this dissertation, a computer network called PACnet is proposed for PACS. PACnet is designed to carry image, voice, image pointing overlay, and intermittent data over a 200 Mbps dual fiber optic ring network. It provides a data packet channel and image and voice channels based on a Time Division Multiple Access (TDMA) technique. Intermittent data is transmitted over the data packet channel using a modified token passing scheme. Voice and image pointing overlay are transferred between two stations in real time, using circuit switching techniques, to support the consultative nature of a radiology department. Typical 50-megabit images are transmitted over the image channel in less than a second. A technique called adaptive variable frame size is developed for PACnet to achieve high network utilization and short response time: it allows data packet traffic to use any residual voice or image capacity momentarily available due to variation in voice traffic or absence of images. To achieve optimal design parameters for the network and interfaces, PACnet is also simulated under different conditions.
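
A quick back-of-envelope check of the sub-second claim, using only the figures given in the abstract (a 200 Mbps ring and 50-megabit images). The 50% channel allocation used for comparison is a hypothetical assumption, not PACnet's actual TDMA split:

```python
# Sanity-check the sub-second image delivery claim.
RING_RATE_BPS   = 200e6   # dual fiber optic ring, 200 Mbps
IMAGE_SIZE_BITS = 50e6    # typical radiology image, ~50 megabits

# Even if the image channel were allotted only half the ring's capacity:
print(IMAGE_SIZE_BITS / (0.5 * RING_RATE_BPS))  # 0.5 s -- under a second
# At the full ring rate the transfer takes 50e6 / 200e6 = 0.25 s.
```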
14

A reference architecture for cloud computing and its security applications

Unknown Date (has links)
A central concern in Cloud Computing is security. In complex systems such as Cloud Computing, parts of the system are secured using specific products, but there is rarely a global security analysis of the complete system. We describe how to add security to cloud systems and evaluate their security levels using a reference architecture. A reference architecture provides a framework for relating threats to the structure of the system and makes their enumeration more systematic and complete. In order to secure a cloud framework, we have enumerated cloud threats by combining several methods, because it is not possible to prove that we have covered all of them. We performed a systematic enumeration of cloud threats by first identifying them in the literature and then analyzing the activities of each use case to find further possible threats. These threats are expressed as misuse cases in order to understand how an attack happens from the point of view of an attacker. The reference architecture is used as a framework to determine where to add security in order to stop or mitigate these threats. This approach also entails developing security patterns, which are added to the reference architecture to design a secure framework for clouds. We finally evaluate its security level by using misuse patterns and considering the threat coverage of the models. / by Keiko Hashizume. / Thesis (Ph.D.)--Florida Atlantic University, 2013. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
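
The bookkeeping implied by this process — threats enumerated from use-case activities, mapped to the reference-architecture components they target, and checked for coverage by security patterns — can be sketched as a small data structure. All threat, component, and pattern names below are illustrative assumptions, not taken from the dissertation:

```python
# Toy sketch of a threat catalog keyed to reference-architecture components,
# with a coverage measure over the applied security patterns.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Threat:
    name: str                     # threat realized as a misuse case
    use_case: str                 # activity where it was identified
    component: str                # reference-architecture element targeted
    mitigated_by: Optional[str]   # security pattern applied, if any

catalog = [
    Threat("credential theft", "open account", "identity manager", "authenticator"),
    Threat("VM escape", "run instance", "hypervisor", None),
    Threat("data scavenging", "delete volume", "block storage", "secure deletion"),
]

covered = [t for t in catalog if t.mitigated_by]
print(f"coverage: {len(covered)}/{len(catalog)} threats mitigated")
for t in catalog:
    print(f"{t.component:>16}: {t.name} -> {t.mitigated_by or 'UNMITIGATED'}")
```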
15

Performance Evaluation Tools for Interconnection Network Design

Kolinska, Anna 08 April 1994 (has links)
A methodology is proposed for designing performance-optimized computer systems. The methodology uses software tools created for performance monitoring and evaluation of parallel programs, replacing actual hardware with a simulator that models the hardware under development. We claim that a software environment can help hardware designers make decisions at the architectural design level. A simulator executes real programs and provides access to performance monitors from the user's code. The performance monitoring system collects data traces while the simulator runs, and the performance analysis module extracts the performance data of interest, which are later displayed with visualization tools. Key features of our methodology are "plug and play" simulation and the modeling of hardware/software interaction during the hardware design process. The ability to use different simulators gives the user the flexibility to configure the system for the required functionality, accuracy, and simulation performance. Evaluating hardware performance from the results of modeling hardware/software interaction is crucial for designing performance-optimized computer systems. We have developed a software system based on our design methodology for performance evaluation of multicomputer interconnection networks. The system, called the Parsim Common Environment (PCE), consists of an instrumented network simulator that executes assembly language instructions, together with performance analysis and visualization modules. Using PCE, we investigated a specific network design example. The system helped us spot performance problems, explain why they happened, and find ways to solve them. The results agreed with observations presented in the literature, validating our design methodology and the correctness of the software performance evaluation system for hardware designs. Using software tools, a designer can easily check different design options and evaluate the resulting performance without the overhead of building expensive prototypes. With our system, data analysis that required 10 man-hours to complete manually took just a couple of seconds on a Sparc-4 workstation. Without experimentation with the simulator and the performance evaluation environment, one might build an expensive hardware prototype expecting improved performance and then be disappointed with poorer results than expected. Our tools help designers spot and solve performance problems at early stages of the hardware design process.
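
The instrumentation idea — a simulator exposing performance monitors to user code, with a separate analysis pass over the collected trace — can be sketched as follows. This is a hypothetical miniature, not PCE's actual interface:

```python
# Minimal sketch: a monitor records timestamped events during simulation;
# an analysis pass reduces the raw trace afterwards. Names are hypothetical.
import time
from collections import defaultdict

class PerfMonitor:
    def __init__(self):
        self.trace = []                        # (timestamp, event, payload)

    def record(self, event, **payload):
        self.trace.append((time.perf_counter(), event, payload))

class SimulatedRouter:
    def __init__(self, monitor):
        self.mon = monitor

    def forward(self, packet_id, hops):
        self.mon.record("pkt_in", pkt=packet_id)
        for hop in range(hops):                # stand-in for per-hop simulation
            self.mon.record("hop", pkt=packet_id, hop=hop)
        self.mon.record("pkt_out", pkt=packet_id)

# Run the "simulator", then analyze the trace it produced.
mon = PerfMonitor()
router = SimulatedRouter(mon)
for pkt in range(3):
    router.forward(pkt, hops=pkt + 1)

hop_counts = defaultdict(int)
for _, event, payload in mon.trace:
    if event == "hop":
        hop_counts[payload["pkt"]] += 1
print(dict(hop_counts))                        # {0: 1, 1: 2, 2: 3}
```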
16

Evolutionary algorithms for some problems in telecommunications = Algoritmos evolutivos para alguns problemas em telecomunicações

Andrade, Carlos Eduardo de, 1981- 03 May 2015 (has links)
Advisors: Flavio Keidi Miyazawa, Mauricio Guilherme de Carvalho Resende / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação / Abstract: In the last twenty years, telecommunication networks have experienced a huge increase in data utilization. From massive on-demand video to uncountable mobile devices exchanging text and video, traffic has reached scales that overwhelm network capacities. Therefore, telecommunication companies around the world have been forced to increase the capacity of their networks to serve this growing demand. As the cost of deploying network infrastructure is usually very large, network design relies heavily on optimization tools to keep costs as low as possible. In this thesis, we analyze several aspects of the design and deployment of communication networks. First, we present a new network design problem used to serve wireless demands from mobile devices and route the traffic to the core network. Such access networks are based on modern wireless technologies such as Wi-Fi, LTE, and HSPA. This problem has several real-world constraints and is hard to solve. We study real cases from the vicinity of a large city in the United States. Next, we present a variation of the hub-location problem used to model these core networks; this problem is also suitable for modeling transportation networks. We also study the overlapping correlation clustering problem, used to model users' behavior on their mobile devices. In this problem, one can label an object with multiple labels and analyze the connections between them, making it suitable for analyzing device mobility, which in turn can be used to estimate traffic in geographical regions. Finally, we analyze spectrum licensing from a governmental perspective: a government agency wants to sell rights for telecommunication companies to operate over a given spectrum range, a process usually conducted through combinatorial auctions. For all problems we propose biased random-key genetic algorithms and mixed-integer linear programming models (except for the overlapping correlation clustering problem, due to its non-linear nature). Our algorithms were able to outperform state-of-the-art algorithms for all problems. / Doctorate / Computer Science / Doctor of Computer Science
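
For reference, here is a compact sketch of the biased random-key genetic algorithm (BRKGA) family the thesis builds on: chromosomes are vectors of keys in [0, 1], elites survive unchanged, mutants are drawn fresh, and offspring take each gene from the elite parent with probability rho. The toy decoder and fitness below are placeholders; real decoders map keys to network designs:

```python
# Minimal BRKGA sketch: random-key chromosomes with elitism, fresh mutants,
# and biased uniform crossover favoring the elite parent.
import numpy as np

def brkga(fitness, n_genes, pop=50, n_elite=10, n_mutant=10,
          rho=0.7, generations=100, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.random((pop, n_genes))                  # random-key chromosomes
    for _ in range(generations):
        P = P[np.argsort([fitness(c) for c in P])]  # best (lowest) first
        elite = P[:n_elite]                         # 1) elites survive
        mutant = rng.random((n_mutant, n_genes))    # 2) fresh random mutants
        n_off = pop - n_elite - n_mutant
        e = elite[rng.integers(0, n_elite, n_off)]  # elite parents
        o = P[rng.integers(n_elite, pop, n_off)]    # non-elite parents
        mask = rng.random((n_off, n_genes)) < rho   # 3) biased crossover:
        offspring = np.where(mask, e, o)            #    elite gene w.p. rho
        P = np.vstack([elite, mutant, offspring])
    return min(P, key=fitness)

# Toy decoder: the permutation that sorts the keys. Target: sort by weight.
weights = np.array([3.0, 1.0, 2.0])
target = np.argsort(weights)                        # [1, 2, 0]
best = brkga(lambda c: float(np.sum(np.argsort(c) != target)), n_genes=3)
print(np.argsort(best))                             # expected: [1 2 0]
```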
17

RRMP: rate-based reliable multicast protocol

Kondapalli, Naveen 01 April 2002 (has links)
No description available.
18

Performing under overload

Macpherson, Luke, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
This dissertation argues that admission control should be applied as early as possible within a system. To that end, this dissertation examines the benefits and trade-offs involved in applying admission control to a networked computer system at the level of the network interface hardware. Admission control has traditionally been applied in software, after significant resources have already been expended on processing a request. This design decision leads to systems whose algorithmic cost is a function of the load applied to the system, rather than the load admitted to the system. By performing admission control at the network interface, it is possible to develop systems whose algorithmic cost is a function of load admitted to the system, rather than load applied to the system. Such systems are able to deal with excessive applied loads without exhibiting performance degradation. This dissertation first examines existing admission control approaches, focussing on the cost of admission control within those systems. It then goes on to develop a model of system behaviour under overload, and the impact of admission control on that behaviour. A new class of admission control mechanisms which are able to perform load rejection using the network interface hardware are then described, along with a prototype implementation using commodity hardware. A prototype implementation in the FreeBSD operating system is evaluated for a variety of network protocols and performance is compared to the standard FreeBSD implementation. Performance and scalability under overload is significantly improved.
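
A minimal software analogue of the core argument — make the rejection decision with a constant-cost check before any per-request work, so total cost tracks admitted rather than applied load — might look like the sketch below. The thesis pushes this filter into NIC hardware; the token bucket here is only a stand-in for that mechanism:

```python
# Early admission control: reject with an O(1) check before any expensive
# per-request processing, so work scales with admitted load, not applied load.
import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def admit(self):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                       # rejected before any expensive work

def expensive_processing(req):
    return 1                               # placeholder for real request handling

def serve(requests, bucket):
    done = 0
    for req in requests:
        if not bucket.admit():             # constant-cost early rejection
            continue
        done += expensive_processing(req)  # cost incurred only if admitted
    return done

# Overload: 10,000 applied requests; only roughly `burst` are admitted.
print(serve(range(10_000), TokenBucket(rate=1_000, burst=100)))
```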
20

An Exploration Of Heterogeneous Networks On Chip

Grimm, Allen Gary 01 January 2011 (has links)
As the number of cores on a single chip continues to grow, communication increasingly becomes the bottleneck to performance. Networks on Chip (NoC) are an interconnection paradigm that shows promise for allowing system size to increase while maintaining acceptable performance. One of the challenges of this paradigm is constructing the network of inter-core connections. Using traditional wires as long-range links is proving insufficient due to their increasing relative delay as miniaturization progresses, while novel link types are capable of delivering single-hop long-range communication. We investigate the potential benefits of constructing networks with many link types, applied to heterogeneous NoCs, and hypothesize that a network with many available link types can achieve higher performance at a given cost than its homogeneous counterpart. To investigate NoCs with heterogeneous links, a multiobjective evolutionary algorithm is given a heterogeneous set of links and optimizes the number and placement of those links in an NoC, using cost, throughput, and energy as a representative set of objectives for a NoC's quality. The types of links used and their topology are explored as a consequence of the properties of the available links and the preferences set on the objectives. As the experimental platform, the Complex Network Evolutionary Algorithm (CNEA) and the associated Complex Network Framework (CNF) are developed. CNEA is a multiobjective evolutionary algorithm built on the ParadisEO framework to facilitate the construction of optimized networks. CNF is designed and used to model and evaluate networks according to the cost of a given topology; performance in terms of the network's throughput and energy consumption; and graph-theoretic metrics including average distance and degree, length, and link distributions. It is shown that optimizing complex networks for cost as a function of total link length and average distance creates a power-law link-length distribution. This offers a way to decrease the average distance of a network for a given cost when compared to random networks or the standard mesh network. We then explore the use of several types of constrained-length links in the same optimization problem and find that, when given access to all link types, we obtain networks with the same or smaller average distance for a given cost than any network produced with access to only one link type. We then introduce traffic on the networks with an interconnect-based, packet-level, shortest-path-routed traffic model, and find that heterogeneous networks can achieve throughput as good as or better than their homogeneous counterparts using the same total link length. Finally, these results are confirmed by augmenting a wire-based mesh network with non-traditional link types, which significantly increases the overall performance of that network.
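
The kind of objective evaluation CNF performs can be illustrated with a small sketch: score a candidate topology on total link length (a cost proxy) and average hop distance, then compare candidates by Pareto dominance. The grid layout, Manhattan wire-length metric, and two-objective comparison are simplifications of the thesis's cost/throughput/energy objectives (this sketch uses the networkx library):

```python
# Evaluate candidate NoC topologies on two objectives and compare by
# Pareto dominance, as a multiobjective optimizer would.
import networkx as nx

def evaluate(nodes, links):
    """nodes: {id: (x, y)} grid positions; links: iterable of (u, v)."""
    G = nx.Graph()
    G.add_nodes_from(nodes)
    G.add_edges_from(links)
    cost = sum(abs(nodes[u][0] - nodes[v][0]) + abs(nodes[u][1] - nodes[v][1])
               for u, v in G.edges)                 # total Manhattan wire length
    avg_dist = nx.average_shortest_path_length(G)   # mean hop count
    return cost, avg_dist

def dominates(a, b):
    """True if vector a is no worse than b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# 4-node example: a ring versus the same ring plus one long-range link.
nodes = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
augmented = ring + [(0, 2)]

print(evaluate(nodes, ring))       # lower cost, higher average distance
print(evaluate(nodes, augmented))  # higher cost, lower average distance
print(dominates(evaluate(nodes, ring), evaluate(nodes, augmented)))  # False
```

Neither topology dominates the other here, which is exactly the trade-off surface a multiobjective evolutionary algorithm explores when choosing among link types.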
