About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
291

New and Provable Results for Network Inference Problems and Multi-agent Optimization Algorithms

January 2017
abstract: Our ability to understand networks is important to many applications, from the analysis and modeling of biological networks to the study of social networks. Unveiling network dynamics allows us to make predictions and decisions, and network dynamics models have inspired new computational methods based on multi-agent cooperation that offer effective solutions to optimization tasks. This dissertation presents new theoretical results on network inference and multi-agent optimization, split into two parts. The first part deals with modeling and identification of network dynamics. I study two types of network dynamics, arising from social and gene networks. Based on these dynamics, the proposed identification method works like a 'network RADAR': interaction strengths between agents are inferred by injecting a 'signal' into the network and observing the resulting reverberation. In social networks, this is accomplished by stubborn agents whose opinions do not change throughout a discussion; in gene networks, genes are suppressed to create the desired perturbations. The steady states under these perturbations are characterized. In contrast to the common assumption of full-rank input, I adopt the weaker assumption of low-rank input, which better models empirical network data. Importantly, a network is proven to be identifiable from low-rank data whose rank grows in proportion to the network's sparsity. The proposed method is applied to synthetic and empirical data and is shown to outperform prior work. The second part is concerned with algorithms on networks. I develop three consensus-based algorithms for multi-agent optimization. The first is a decentralized Frank-Wolfe (DeFW) algorithm. The main advantage of DeFW lies in its projection-free nature: the costly projection step in traditional algorithms is replaced by a low-cost linear optimization step.
I prove convergence rates of DeFW for convex and non-convex problems. I also develop two consensus-based alternating optimization algorithms: one for least-squares problems and one for non-convex problems. These algorithms exploit problem structure for faster convergence, and their efficacy is demonstrated by numerical simulations. I conclude the dissertation by describing future research directions. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2017
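The projection-free idea behind decentralized Frank-Wolfe can be conveyed with a small sketch. This is not the dissertation's implementation: the ring network, quadratic local objectives, l1-ball constraint, and step size below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, radius = 5, 8, 2.0

# Hypothetical local least-squares objectives f_i(x) = 0.5 * ||A_i x - b_i||^2
A = [rng.normal(size=(10, dim)) for _ in range(n_agents)]
b = [rng.normal(size=10) for _ in range(n_agents)]

# Doubly stochastic mixing matrix for a ring network (uniform weights)
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

def lmo_l1(g, r):
    """Linear minimization oracle over the l1-ball: argmin_{||s||_1 <= r} <g, s>."""
    k = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[k] = -r * np.sign(g[k])
    return s

x = np.zeros((n_agents, dim))          # one iterate per agent
for t in range(1, 201):
    x = W @ x                          # consensus (gossip) step on the iterates
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_agents)])
    grads = W @ grads                  # mix gradient information as well
    gamma = 2.0 / (t + 2)              # standard Frank-Wolfe step size
    for i in range(n_agents):
        # projection-free update: convex combination with an LMO vertex
        x[i] = (1 - gamma) * x[i] + gamma * lmo_l1(grads[i], radius)
```

Each agent's iterate stays feasible for free, because it is always a convex combination of points inside the l1-ball; no projection is ever computed, which is the advantage the abstract describes.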
292

XML to facilitate management of multi-vendor networks

Halse, G A, Wells, George, Terzoli, A January 2001
Many standards aimed at managing networks currently exist, and yet networks remain notoriously difficult to maintain. Template-based management systems go a long way towards solving this problem. By developing an XML based language to describe network elements, as well as the topology of a network, we can create tools that are free from vendor specific idiosyncrasies, and are capable of managing both today’s networks and those of the future.
293

Test and fault-tolerance for network-on-chip infrastructures

Grecu, Cristian 05 1900
The demands of future computing, as well as the challenges of nanometer-era VLSI design, will require new design techniques and design styles that are simultaneously high performance, energy-efficient, and robust to noise and process variation. One of the emerging problems concerns the communication mechanisms between the increasing number of blocks, or cores, that can be integrated onto a single chip. The bus-based systems and point-to-point interconnection strategies in use today cannot be easily scaled to accommodate the large numbers of cores projected in the near future. Network-on-chip (NoC) interconnect infrastructures are one of the key technologies that will enable the emergence of many-core processors and systems-on-chip with increased computing power and energy efficiency. This dissertation is focused on testing, yield improvement, and fault tolerance of such NoC infrastructures. A fast, efficient test method is developed for NoCs that exploits their inherent parallelism to reduce test time, transporting test data on multiple paths and testing multiple NoC components concurrently. The improvement in test time varies from 2X to 34X compared to current NoC test methods, depending on the NoC architecture and test transport protocol. This test mechanism is subsequently used to detect permanent faults on NoC links, which are then repaired by an on-chip mechanism that replaces faulty signal lines with fault-free ones, increasing yield while maintaining the same wire-delay characteristics. The solution described in this dissertation significantly improves the achievable yield of NoC inter-switch channels: from a 4% improvement for an 8-bit-wide channel to a 71% improvement for a 128-bit-wide channel. The direct benefit is improved fault tolerance and increased yield and long-term reliability of NoC-based multicore systems. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
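The spare-line repair idea can be sketched abstractly: given a channel with a few spare wires and a set of wires diagnosed as faulty, logical signals are remapped onto the remaining fault-free wires. This is a hypothetical software model, not the on-chip circuit the thesis describes; widths, spare counts, and the fault set are invented.

```python
def repair_channel(width, spares, faulty):
    """Map each logical signal line to a fault-free physical wire, if possible.

    width:  number of logical signal lines the channel must carry
    spares: number of redundant physical wires available for repair
    faulty: indices of physical wires diagnosed as permanently faulty
    """
    good = [w for w in range(width + spares) if w not in set(faulty)]
    if len(good) < width:
        return None                    # more faults than the spare capacity covers
    # assign logical lines to fault-free wires in order
    return {logical: good[logical] for logical in range(width)}

# 8-bit channel with 2 spares; wires 3 and 9 found faulty by the test mechanism
mapping = repair_channel(width=8, spares=2, faulty=[3, 9])
```

With two spares, up to two faulty wires per channel can be tolerated; a third fault (e.g. `faulty=[1, 2, 3]`) makes `repair_channel` return `None`, i.e. the channel is unrepairable.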
294

A network aware adaptable application using a novel content scheduling and delivery scheme

Abdul Rahman, Abdul Muin January 2006
The aim of this research is to investigate techniques that allow networked applications to adapt to network conditions between end nodes, in order to maintain a reasonable quality of service, and to design, develop, and test techniques for achieving such adaptability through a novel content scheduling and delivery scheme. To achieve this adaptation, information about network conditions, both static and dynamic, must first be gathered. Since substantial research already exists in this area, the task was to review existing network measurement techniques and adopt a suitable one for the subsequent research. The research is therefore concerned more with how to realize these techniques in practical terms and make the network parameters accessible to the applications that adapt based on them. A network measurement service utilizing a standard measurement tool was proposed, developed, tested, and used throughout the project. Its implementation helped in understanding the impact of network measurement on overall system performance and which network metrics are essential for making better adaptation decisions. The project then developed and showcased an adaptable network application using a novel scheme in which content was restructured and its delivery rescheduled, taking account of the available bandwidth, the content's structure, size, and order of importance, and user-specified deadlines, all informed by the network measurement service. In doing so, the project sought to show how and when adaptation can be applied, and its potential benefits compared to conventional applications based on best-effort systems.
The project demonstrated that by adapting according to this scheme under poor network conditions, user-specified deadlines can be met by reducing the load: content of high importance is delivered first, while content of lesser importance is delivered during idle time or the user's reading time, or dropped if the deadline cannot be met. In most cases, content of high importance is delivered faster in the adaptable system than in a conventional best-effort system.
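The scheduling policy described above, deliver by importance and defer what would miss the deadline, can be sketched as follows. The item names, sizes, importance levels, bandwidth, and deadline are invented for illustration.

```python
# Hypothetical content items: (name, size_kB, importance); higher importance first
items = [("headline", 40, 3), ("body_text", 120, 3), ("hero_image", 900, 2),
         ("thumbnails", 400, 1), ("related_links", 60, 1)]

def schedule(items, bandwidth_kBps, deadline_s):
    """Deliver items in order of importance; defer any item that would overrun
    the user-specified deadline at the measured bandwidth."""
    delivered, deferred, elapsed = [], [], 0.0
    for name, size, importance in sorted(items, key=lambda it: -it[2]):
        transfer_time = size / bandwidth_kBps
        if elapsed + transfer_time <= deadline_s:
            delivered.append(name)
            elapsed += transfer_time
        else:
            deferred.append(name)   # sent later, during idle/reading time
    return delivered, deferred

# 100 kB/s measured bandwidth, 6-second deadline
delivered, deferred = schedule(items, bandwidth_kBps=100.0, deadline_s=6.0)
```

Note that deferring the large `hero_image` frees enough of the deadline budget to deliver the smaller but less important `thumbnails`, which is exactly the load-reduction behaviour the abstract describes.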
295

Configuration planning on an ICL computer utilizing a stochastic network analysis package

Kingon, Ian Grenville Douglas 14 May 2014
M.Sc. (Computer Science) / This dissertation details the implementation of SNAP, a stochastic network analysis package, as the basis of an in-house computer configuration planning facility. The work was performed at the Head Office of Gold Fields of South Africa Limited (GFSA), Johannesburg, South Africa, between April 1980 and December 1981. SNAP was developed by the Institute of Applied Computer Science at the University of Stellenbosch, Stellenbosch, South Africa. The implementation of SNAP at GFSA was the first in-house SNAP facility and the first SNAP implementation on an ICL computer (although an implementation had been in progress at another ICL site since 1979). Although this dissertation is very specific in nature, it is intended to provide insight into the methodology employed in planning and implementing an in-house configuration planning facility. An overview of multiclass queueing network models and the SNAP package is provided, although no attempt is made to explain the stochastic theory of queueing networks in detail. Attention then turns to the various phases of the project. Problems were encountered in monitoring performance data, and these are examined in some depth. The question of workload characterization and the difficulty of producing a satisfactory GFSA classification strategy are then presented. The model design, calibration, and validation stages are explained using the GFSA model, after which use of the model for prediction is illustrated by means of a number of examples. Finally, the memory management model is discussed: main memory does not form part of the SNAP model and has to be dealt with as a separate issue.
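SNAP's internals are not described in the abstract, but the flavour of the queueing-network models it solves can be conveyed by the classic exact Mean Value Analysis recursion for a single-class closed network. This is an illustrative sketch only; the two service demands and the population size below are invented, and SNAP's multiclass solver is more general.

```python
def mva(service_demands, n_customers):
    """Exact Mean Value Analysis for a closed, single-class queueing network.

    service_demands: total service demand D_k (visit ratio x service time) at
    each queueing centre, in seconds. Returns per-centre residence times, the
    system throughput, and per-centre utilizations.
    """
    K = len(service_demands)
    q = [0.0] * K                       # mean queue length at each centre
    for n in range(1, n_customers + 1):
        # arrival theorem: an arriving customer sees q[k] others at centre k
        r = [service_demands[k] * (1 + q[k]) for k in range(K)]
        x = n / sum(r)                  # system throughput, by Little's law
        q = [x * r[k] for k in range(K)]
    util = [x * d for d in service_demands]
    return r, x, util

# Hypothetical two-centre model: CPU demand 0.05 s, disk demand 0.08 s, 10 users
r, throughput, util = mva([0.05, 0.08], 10)
```

The disk, with the larger demand, is the bottleneck: its utilization approaches 1 and caps the throughput at 1/0.08 = 12.5 jobs per second, the kind of saturation analysis a configuration planner uses to size hardware.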
296

An Epistemic Event-based Correlation Scheme for Pervasive Network Management

Ganapathy, Vinayak 26 March 2008
Computer networks produce tremendous amounts of event-based data that can be collected and managed to support a growing number of new classes of pervasive applications, such as network monitoring and crisis management. Although the problem of distributed event-based management has been addressed in non-pervasive settings such as the Internet, the domain of pervasive networks has its own characteristics that make those results inapplicable. Many of these applications are based on time-series data in the form of time-ordered series of events. Such applications must also handle large volumes of unexpected events, often modified on the fly and containing conflicting information, while dealing with rapidly changing contexts and producing results with low latency. Correlating events across contextual dimensions holds the key to expanding the capabilities and improving the performance of these applications. This dissertation addresses this critical challenge by establishing an effective scheme for complex-event semantic correlation. The scheme handles epistemic uncertainty in computer networks by fusing event synchronization concepts with belief theory. Because event detection is distributed, time delays are considered: events are no longer instantaneous but have an associated duration. Existing time-synchronization algorithms are split into two classes, one of which is argued to converge faster and hence to be better suited to pervasive network management. Besides the temporal dimension, the scheme considers imprecision and uncertainty in event detection: a belief value is associated with the semantics and the detection of composite events, generated by a consensus among the participating entities in a computer network.
The scheme taps into in-network processing capabilities of pervasive computer networks and can withstand missing or conflicting information gathered from multiple participating entities. Thus, this dissertation advances knowledge in the field of network management by facilitating the full utilization of characteristics offered by pervasive, distributed and wireless technologies in contemporary and future computer networks.
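One standard way to fuse belief values from multiple entities, in the spirit of the belief-theoretic fusion described above, is Dempster's rule of combination. This is a generic sketch, not the dissertation's scheme; the two monitors and their mass assignments over the hypotheses {fault, overload} are invented.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions keyed by frozensets
    of hypotheses. Conflicting mass is discarded and the rest renormalized."""
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two monitors report (possibly conflicting) beliefs about a composite event
FAULT, OVERLOAD = frozenset({"fault"}), frozenset({"overload"})
EITHER = FAULT | OVERLOAD               # mass left on the full frame = ignorance
m1 = {FAULT: 0.6, EITHER: 0.4}
m2 = {FAULT: 0.5, OVERLOAD: 0.3, EITHER: 0.2}
fused = dempster_combine(m1, m2)
```

The 0.18 of mass where the monitors flatly disagree (one says fault, the other overload) is treated as conflict and renormalized away, so the fused belief in `fault` rises to 0.62/0.82, illustrating how consensus can strengthen a belief value despite conflicting reports.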
297

Understanding project managers at work

Blackburn, Sarah Kaye January 2001
No description available.
298

Feedforward neural network design with application to image subsampling

Dumitras, Adriana January 1999
Feedforward artificial neural networks (FANNs), which have been successfully applied to various image processing tasks, are particularly suitable for image subsampling due to their high processing speed. However, the performance of FANNs in image subsampling, which depends on both the FANN topology and the FANN training algorithm, has not been acceptable so far. High performance image subsampling is important in many systems, such as subband decomposition systems, and scalable image and video processing systems. This thesis addresses the design of FANNs with application to image subsampling. More specifically, we focus on both the topological design of FANNs and the training algorithm, so that efficient FANN structures, yielding good performance in image subsampling, are obtained. That is, we aim at obtaining compact FANNs that yield good subsampled versions of the original images, such that if reconstructed, they are as close as possible to the original images. Moreover, we aim at obtaining better performance-speed tradeoffs than those of the traditional lowpass filtering and subsampling methods. First, we propose a design method for FANNs, which leads to compact tridiagonally symmetrical feedforward neural networks (TS-FANNs). Next, in order to address the problem of artifacts that generally appear in the reconstructed images after FANN-based subsampling, we propose a training method for FANNs. When applied to first-order (FOS) and multi-stage first-order (MFOS) image subsampling, the FANNs trained using our method outperform the traditional lowpass filtering and subsampling (LPFS) method, without requiring pre- or post-processing stages. Motivated by our observation that the computational demands of the MFOS process increase approximately linearly with the image size, we then combine the proposed methods and evaluate the performance-complexity tradeoffs of the resulting TS-FANNs in FOS and MFOS.
We show that our TS-FANN-based subsampling has important advantages over subsampling methods based on fully connected FANNs (FC-FANNs) and LPFS, such as significantly reduced computational demands and the same, or better, quality of the resulting images. The main contributions of this thesis are a method for FANN design with tridiagonal symmetry constraints, a training algorithm for FANNs applied to image subsampling, the design and evaluation of the performance-speed tradeoffs of FC-FANNs in image subsampling, and the design and evaluation of the performance-speed tradeoffs of TS-FANNs in image subsampling. The FANN performance in image subsampling is evaluated objectively (using peak signal-to-noise ratios), subjectively (by visual examination of the subsampled and reconstructed images), and in the context of a video coding application. The speed and memory demands of the designed FANN structures are evaluated in terms of the subsampling time and the number of FANN parameters, respectively. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
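For orientation, the task and the objective metric can be sketched with the simplest baseline, plain 2x2 averaging followed by PSNR evaluation, which is the kind of traditional subsampling the thesis compares against. This is not the thesis's FANN method, and the 8x8 random test image is an invented stand-in.

```python
import numpy as np

def subsample_avg(img):
    """2x2 average-pool subsampling (a simple traditional baseline)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, the objective quality measure."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8)).astype(float)
small = subsample_avg(img)                 # 8x8 -> 4x4 subsampled version
recon = np.kron(small, np.ones((2, 2)))    # naive nearest-neighbour reconstruction
quality = psnr(img, recon)
```

A learned subsampler is judged the same way: subsample, reconstruct, and compare the PSNR (and visual quality) of the reconstruction against the original, trading it off against the subsampling time and parameter count.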
299

Network Decontamination Using Cellular Automata

Rakotomalala, Livaniaina Hary January 2016
We consider the problem of decontaminating a network in which every node is infected by a virus. The decontamination strategy is performed using a Cellular Automata (CA) model in which each node of the network is represented by an automaton cell, so each host's status maps to a CA state (contaminated, decontaminating, or decontaminated). All hosts are assumed to be initially contaminated, and the status of each cell is updated synchronously according to a set of local rules based on the state of its neighbourhood. Our goal is to find the set of local rules that accomplishes the decontamination optimally, where optimality means minimizing three quantities: the maximum number of decontaminating cells at each step, the required immunity time of each cell, and the number of steps needed to complete the sanitization. In our research, we explore the design of these local decontamination rules by refining the CA neighbourhood-radius concept with two new dimensions: visibility hop and contamination distance. Additionally, a research tool that helps us manage our study has been developed.
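A minimal sketch of such a synchronous rule set, a toy von Neumann-neighbourhood rule on a torus grid with an invented support threshold and immunity time, not the rules derived in the thesis:

```python
import numpy as np

CONTAMINATED, CLEANING, CLEAN = 0, 1, 2

def step(grid, immunity, threshold=1, immunity_time=2):
    """One synchronous CA update on a torus grid (a simplified rule set)."""
    n = grid.shape[0]
    new, new_imm = grid.copy(), immunity.copy()
    for i in range(n):
        for j in range(n):
            # count non-contaminated cells in the von Neumann neighbourhood
            neigh = [grid[(i - 1) % n, j], grid[(i + 1) % n, j],
                     grid[i, (j - 1) % n], grid[i, (j + 1) % n]]
            support = sum(s != CONTAMINATED for s in neigh)
            if grid[i, j] == CONTAMINATED and support >= threshold:
                new[i, j], new_imm[i, j] = CLEANING, immunity_time
            elif grid[i, j] == CLEANING:
                new_imm[i, j] -= 1           # immunity counts down while cleaning
                if new_imm[i, j] <= 0:
                    new[i, j] = CLEAN
            elif grid[i, j] == CLEAN and support < 1:
                new[i, j] = CONTAMINATED     # reinfected if fully surrounded
    return new, new_imm

n = 9
grid = np.full((n, n), CONTAMINATED)
immunity = np.zeros((n, n), dtype=int)
grid[0, :] = CLEAN                           # decontaminated agents seed one row
steps = 0
while (grid == CONTAMINATED).any():
    grid, immunity = step(grid, immunity)
    steps += 1
```

The decontamination front spreads one row per step from both sides of the seed row (the grid wraps around), so the three metrics of interest, cleaning cells per step, immunity time, and total steps, can all be read off such a simulation for a candidate rule set.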
300

Human Footpaths in the Outer Suburbs of Ottawa: Distribution, Network Connectivity, and Walkability

Saboui, Karine January 2016
This research has three objectives: 1) describe the distribution of footpaths in the outer suburbs of Ottawa; 2) quantify the impact footpaths have on network connectivity in the outer western suburban neighborhoods of Ottawa; 3) quantify the impact of footpaths on destination-based walkability measures in the same neighborhoods. The distribution of footpaths is assessed using a principal component analysis of 86 observations (footpaths) and 11 variables (land use, transit connection, income, population density). Network connectivity is measured using the link-node ratio, the gamma index, and the alpha index, as well as node betweenness centrality. Walkability is measured in ArcGIS through an origin-destination cost matrix. The results show that the distribution of footpaths cannot be explained by the selected variables. Footpaths slightly decrease overall network connectivity and alter node betweenness centrality, and they have no impact on destination-based walkability. Footpaths may therefore serve as better pedestrian routes, but not necessarily faster routes, through the outer western suburbs of Ottawa.
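The three connectivity indices named above have standard planar-network definitions; a sketch follows, with invented node and link counts chosen to mirror the finding that adding footpath links (which bring new dead-end nodes with them) can lower the indices:

```python
def connectivity_indices(v, e):
    """Classic planar-network connectivity measures for v nodes and e links."""
    link_node = e / v                      # links per node
    gamma = e / (3 * (v - 2))              # observed / maximum possible links
    alpha = (e - v + 1) / (2 * v - 5)      # observed / maximum possible circuits
    return link_node, gamma, alpha

# Hypothetical neighbourhood: 40 intersections, 52 street links, no footpaths
base = connectivity_indices(40, 52)
# Adding 6 footpath links also adds 8 new nodes (path endpoints, connections)
with_paths = connectivity_indices(48, 58)
```

Because the footpaths add nodes faster than they add circuits, all three indices drop slightly in this example, which is the direction of effect the study reports.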
