521 |
Inferring Network Status from Partial Observations
Rangudu, Venkata Pavan Kumar, 09 February 2017
In many network applications, such as the Internet and infrastructure networks, nodes fail or get congested dynamically, and tracking this information about all the nodes in a network where some dynamical process is taking place is a fundamental problem. In this work, we study the problem of inferring the complete set of failed nodes when only a sample of the node failures is known; we refer to this problem as NetStateInf. We consider the setting in which node failures in the network are correlated, which has been studied in the case of many infrastructure networks. We formalize the NetStateInf problem using the Minimum Description Length (MDL) principle, show that finding solutions that minimize the MDL cost is in general hard, and develop efficient algorithms with rigorous performance guarantees for finding near-optimal MDL-cost solutions. We evaluate our methods on both synthetic and real-world datasets, including one from WAZE, a crowd-sourced road navigation tool that collects and presents traffic incident reports. We found that the proposed greedy algorithm recovers, on average, 80% of the failed nodes in a network from a given partial sample of input failures, where the sample is drawn from the true set of failures at some predefined rate. Furthermore, we proved that this algorithm finds a solution whose MDL cost is within an additive log(n) of the optimal. / Master of Science / In many real-world networks, such as the Internet and transportation networks, some dynamical process is taking place, and due to its activity some elements of the network may fail at random; service node failures in the Internet and traffic congestion in road networks are two such scenarios. Identifying the complete state of such networks is a fundamental problem. In this work, we study the problem of identifying unknown node failures in a network from partial observations; we refer to this problem as NetStateInf. As in some previous studies in this area, we assume a setting where node failures are correlated. We approach this problem using the Minimum Description Length (MDL) principle, which states that what can be learned from given data is maximized by compressing it, i.e., by identifying as many patterns in the data as possible. Using these concepts, we develop a mathematical formulation of the NetStateInf problem and propose efficient algorithms, with rigorous performance guarantees, for finding the set of failed nodes that best explains the observed failures. We evaluate our algorithms against both synthetic data (an artificial network with failures generated from a predefined mathematical model) and real-world data, for example traffic alert data collected by WAZE, a crowd-sourced navigation tool, for the Boston road network. Using this approach we are able to recover around 80% of the failed nodes in the network from the given partial failure data. Furthermore, we proved that our algorithm finds a solution whose cost, measured as the MDL description length of the solution, differs from that of the optimal solution by at most log(n).
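As a concrete illustration of the flavor of algorithm this abstract describes, here is a minimal greedy sketch of our own, not the thesis's method: it assumes failures cluster on the graph, charges log2(n) bits per cluster "seed" plus a cheaper per-member cost in the model term, and a unit data-cost penalty for every hypothesized node that was never observed. All constants and the cost function are illustrative assumptions.

```python
# Toy sketch of greedy MDL-style failure recovery (illustrative, not the
# thesis's algorithm).  Assumes correlated failures cluster on the graph.
import math

def components(nodes, adj):
    """Connected components of the subgraph induced by `nodes`."""
    nodes, seen, comps = set(nodes), set(), []
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in adj[u] if v in nodes)
        seen |= comp
        comps.append(comp)
    return comps

def mdl_cost(S, observed, adj):
    n = len(adj)
    dmax = max(len(nbrs) for nbrs in adj.values())
    k = len(components(S, adj))
    # Model cost: log2(n) bits per cluster seed, a neighbor choice per member.
    model = k * math.log2(n) + (len(S) - k) * math.log2(max(2, dmax))
    data = float(len(S - observed))   # assumed unit penalty per unobserved add
    return model + data

def greedy_infer(observed, adj):
    """Grow the failure set outward from the observations while cost drops."""
    S = set(observed)
    while True:
        frontier = {v for u in S for v in adj[u]} - S
        best = min(frontier, key=lambda v: mdl_cost(S | {v}, observed, adj),
                   default=None)
        if best is None or mdl_cost(S | {best}, observed, adj) >= mdl_cost(S, observed, adj):
            return S
        S.add(best)

# A path 0-1-2-3-4 with fringe nodes; node 2 failed but was not observed.
adj = {0: {1, 5}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 6},
       5: {0}, 6: {4}, 7: {8}, 8: {7, 9}, 9: {8}}
print(greedy_infer({0, 1, 3, 4}, adj))   # recovers the unobserved bridge node 2
```

On the toy graph, node 2 is added because merging the two observed clusters into one saves a log2(n) seed cost that outweighs the penalty for hypothesizing an unobserved failure.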
|
522 |
On Programmable Control and Optimization for Multi-Hop Wireless Networks
Jalaian, Brian Alexander, 24 October 2016
Traditionally, achieving good performance in a multi-hop wireless network has been known to be difficult. The main approach to controlling the operation of such a network relies on a distributed paradigm, on the assumption that a centralized approach is not feasible. Relying on a distributed paradigm could be justified when the basic technical building blocks (e.g., node computational power, communication technology, positioning technology) were the bottlenecks. Recent advances and breakthroughs in these technical areas, along with the emergence of programmable networks with softwarized control-plane intelligence, allow us to consider employing a centralized optimization paradigm to control and manage the operation of a multi-hop wireless network. Programmable control provides a platform on which this centralized global network optimization paradigm can be supported. The benefit of centralized network optimization lies especially in that the network can be configured to offer optimal performance, which is hardly possible for a network relying on distributed operation.
The objectives of this dissertation are to fully understand the potential benefits of a centralized control plane for a multi-hop wireless network, to identify any new challenges under this new paradigm, and to devise innovative solutions for optimal performance via a centralized control plane. Given that the performance of a wireless network depends heavily on its physical-layer capabilities, we consider a number of advanced wireless technologies at the physical layer, including MIMO, full duplex, and interference cancellation. The focus is on building tractable computational models for these wireless technologies that can be used for modeling, analysis, and optimization in the centralized control plane. Problem formulations and efficient solution procedures are developed for various centralized optimization problems across multiple layers. End-to-end throughput maximization is a key objective among these optimization problems and is used to demonstrate the advantage of this paradigm; a toy formulation is sketched after this entry. We study several problems:
• Integration of SIC and MIMO DoF IC.
We propose to integrate MIMO Degree-of-Freedom (DoF) based interference cancellation (IC) and Successive Interference Cancellation (SIC) in a MIMO multi-hop network under the DoF protocol model. We show that DoF-based IC and SIC can be jointly integrated to combat interference more effectively and improve end-to-end throughput significantly. We develop the necessary mathematical models to realize this idea in a multi-hop wireless network.
• Full-Duplex MIMO Wireless Networks Throughput.
We investigate the performance of MIMO full-duplex (FD) in a multi-hop network.
We show that if IC is exploited, MIMO FD can achieve a significant throughput gain over MIMO half-duplex (HD) in a multi-hop network, which is contrary to recent literature suggesting an unexpectedly marginal gain. Our proposed model handles the additional network interference by joint efficient link scheduling and interference cancellation.
• PCP in Tactical Wireless Networking.
We propose the idea of the Programmable Control Plane (PCP) for the tactical wireless network under the protocol model. The PCP decouples the control and data planes and allows network-control-layer functionality to be dynamically configured to adapt to specific wireless channel conditions, customized applications, and/or certain tactical situations. The proposed PCP functionalities are cast into a centralized optimization problem, which can be updated as needed and provides centralized intelligence to manage the operation of a wireless MIMO multi-hop network under the protocol model.
• UPCP in Heterogeneous Wireless Networks.
We propose the idea of the Unified Programmable Control Plane (UPCP) for tactical heterogeneous wireless networks with interference-management capabilities under the SINR model. The UPCP abstracts the complexity of the underlying network, comprised of heterogeneous wireless technologies, and provides centralized intelligence over the network resources. We develop the necessary mathematical models to realize the UPCP. / Ph. D. / In the past decades, wireless ad hoc communication networks have found a number of applications in both civilian and military environments. Such networks are comprised of smart nodes that can organize themselves into a multi-hop network (communicating from source nodes to destination nodes across multiple intermediate relay nodes) to provide various services such as unattended, real-time surveillance. Their self-forming and self-healing capabilities make them attractive for network deployment and maintenance, especially in scenarios where infrastructure is hard to establish. Because of their ease of deployment and independence from infrastructure, wireless ad hoc networks have motivated more and more research efforts to sustain their continued growth and well-being. Nevertheless, with rapidly increasing demand for data rates from various applications, we find ourselves still very much in the infancy of the development of such networks, which have the potential to offer orders-of-magnitude higher network-level throughput.
Traditionally, the main approach to controlling the operation of a wireless ad hoc network relies on a distributed paradigm, on the assumption that a centralized approach is not feasible. Relying on a distributed paradigm could be justified when the basic technical building blocks (e.g., node computational power, communication technology, positioning technology) were the bottlenecks. Recent advances and breakthroughs in these technical areas, along with the emergence of programmable networks with softwarized control-plane intelligence, allow us to consider employing a centralized optimization paradigm to control and manage the operation of a multi-hop wireless network. The objectives of this dissertation are to fully understand the potential benefits of a centralized optimization paradigm in multi-hop wireless networks, to identify any new challenges under this new paradigm, and to devise innovative solutions for optimal performance.
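As a highly simplified illustration of the centralized end-to-end throughput maximization referenced above, the sketch below casts a tiny max-flow instance as a linear program. The topology, capacities, and solver are our own assumptions; the dissertation's actual models add MIMO DoF, interference-cancellation, and scheduling constraints on top of such a flow core.

```python
# Toy sketch: centralized end-to-end throughput maximization as a linear
# program (a plain max-flow), solved with scipy's linprog.
import numpy as np
from scipy.optimize import linprog

# Directed links (u, v) with capacities; node 0 = source, node 3 = sink.
links = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
cap = [10.0, 8.0, 3.0, 6.0, 9.0]
n_nodes, src, dst = 4, 0, 3

m = len(links)                        # variables: per-link flows x, then rate f
c = np.zeros(m + 1)
c[-1] = -1.0                          # linprog minimizes, so minimize -f

rows = []
for v in range(n_nodes):
    if v in (src, dst):
        continue
    row = np.zeros(m + 1)             # inflow - outflow = 0 at relay nodes
    for i, (a, b) in enumerate(links):
        row[i] = (b == v) - (a == v)
    rows.append(row)
row = np.zeros(m + 1)                 # source net outflow equals the rate f
for i, (a, b) in enumerate(links):
    row[i] = (a == src) - (b == src)
row[-1] = -1.0
rows.append(row)

bounds = [(0.0, u) for u in cap] + [(0.0, None)]
res = linprog(c, A_eq=np.vstack(rows), b_eq=np.zeros(len(rows)), bounds=bounds)
print("max end-to-end throughput:", res.x[-1])   # 15.0 for this instance
```

A centralized controller with global knowledge can solve such a program directly and push the resulting rates to the nodes, which is the operational advantage the dissertation argues for.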
|
523 |
Partitioning Techniques for Reducing Computational Effort of Routing in Large Networks
Woodward, Mike E., Al-Fawaz, M.M., January 2004
A new scheme is presented for partitioning a network having a specific number of nodes and degree of connectivity such that the number of operations required to find a constrained path between a source node and a destination node, averaged over all source-destination pairs, is minimised. The scheme can speed up the routing function, possibly by orders of magnitude under favourable conditions, at the cost of a sub-optimal solution.
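A back-of-the-envelope way to see where such savings can come from (our illustration under simplifying assumptions, not the paper's model): a flat route search over n nodes costs about n operations, while a two-level scheme searches p partition entries plus the n/p nodes of one partition, and the combined cost p + n/p is minimized near p = sqrt(n).

```python
# Two-level partitioned search cost p + n/p, minimized near p = sqrt(n),
# turning O(n) work into O(sqrt(n)).  Illustrative assumption, not the
# paper's cost model.
import math

def two_level_cost(n, p):
    return p + n / p

n = 10_000
best_p = min(range(1, n + 1), key=lambda p: two_level_cost(n, p))
print(best_p, math.isqrt(n), two_level_cost(n, best_p))   # 100 100 200.0
```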
|
524 |
Small-Scale Dual Path Network for Image Classification and Machine Learning Applications to Color Quantization
Murrell, Ethan Davis, 05 1900
This thesis consists of two projects in the field of machine learning. Previous research in the OSCAR UNT lab based on K-means color quantization is further developed and applied to individual color channels and segmented input images, exploring compression rates while still maintaining high output image quality. The second project implements a small-scale dual path network for image classification using the CIFAR-10 dataset, which contains 60,000 32x32-pixel images across ten categories.
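A minimal sketch of per-channel K-means color quantization, in the spirit of the experiments described above; the lab's actual pipeline, image data, and parameters are not shown here, and the stand-in channel below is an assumption.

```python
# Illustrative per-channel K-means color quantization: cluster pixel
# intensities into k levels, then replace each pixel with its centroid.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
channel = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in channel

k = 8                                          # quantize 256 levels down to 8
km = KMeans(n_clusters=k, n_init=10, random_state=0)
labels = km.fit_predict(channel.reshape(-1, 1))
quantized = km.cluster_centers_[labels].reshape(channel.shape)

print("levels used:", len(np.unique(quantized)))      # 8
print("bits per pixel:", np.log2(k), "vs. 8 raw")     # 3.0 vs. 8
```

Quantizing each channel separately, as the abstract describes, lets the palette size (and thus bits per pixel) be tuned per channel against the output quality.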
|
525 |
Network Security Tool for a Novice
Ganduri, Rajasekhar, 08 1900
Network security is a complex field, handled by security professionals who need a certain expertise and experience to configure security systems. With the ever-increasing size of networks, managing them is becoming a daunting task. What kind of solution can generate effective security configurations for security professionals and non-professionals alike? In this thesis, a web tool is developed to simplify the process of configuring security systems by translating direct human-language input into meaningful, working security rules. These human-language inputs express the security rules the individual wants to implement in their network, and can be as simple as "Block Facebook to my son's PC". The tool translates such inputs into specific security rules and installs the translated rules into security equipment such as a virtualized Cisco FWSM network firewall, the Netfilter host-based firewall, and the Snort network intrusion detection system. The tool was implemented and tested in both a traditional network and a cloud environment. To analyze the tool's performance, one thousand input policies were collected from various users, such as staff from UNT departments and health science, including individuals with a network security background as well as students from non-computer-science backgrounds. The tool achieved 91% accuracy in generating a security rule, and 86% accuracy of the translated rule compared to a standard rule written by security professionals. The tool has shown promise for both experienced and inexperienced people in the network security field by simplifying the provisioning process into accurate and effective network security rules.
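A toy sketch of the kind of translation step described above, not the thesis's tool: keyword-driven mapping of a plain-English policy to an iptables-style rule. The device and service maps are invented for illustration; the real tool targets virtualized Cisco FWSM, Netfilter, and Snort back ends.

```python
# Toy natural-language-to-firewall-rule translation (illustrative only).
import re

DEVICES = {"son's pc": "192.168.1.20", "my laptop": "192.168.1.10"}   # assumed inventory
SERVICES = {"facebook": "facebook.com", "youtube": "youtube.com"}     # assumed domains

def translate(policy: str) -> str:
    p = policy.lower()
    action = "DROP" if re.search(r"\b(block|deny|stop)\b", p) else "ACCEPT"
    service = next((dom for k, dom in SERVICES.items() if k in p), None)
    host = next((ip for k, ip in DEVICES.items() if k in p), None)
    if not (service and host):
        raise ValueError(f"could not ground policy: {policy}")
    # Matching a domain with Netfilter's string module is a simplification;
    # a production rule set would resolve addresses or use DNS filtering.
    return (f"iptables -A FORWARD -s {host} -m string "
            f"--string {service!r} --algo bm -j {action}")

print(translate("Block Facebook to my son's PC"))
# iptables -A FORWARD -s 192.168.1.20 -m string --string 'facebook.com' --algo bm -j DROP
```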
|
526 |
Analysis of Resource Isolation and Resource Management in Network Virtualization
Lindholm, Rickard, January 2016
Context. Virtualized networks are considered a major advancement in today's technology, offering many functional benefits compared to today's dedicated networking elements. Virtualization allows network designers to separate networks and adapt resources to actual loads, in other words, load balancing. Virtual networks would enable minimized downtime for deploying updates and similar tasks: one performs a simple migration, then updates the linking after properly testing and preparing the virtual machine with the new software. Once this technology is proven efficient, or evaluated and adapted to its existing flaws, virtualized networks will claim the tasks of today's dedicated networking elements. But the technology still has unknown behaviors and effects, for example how the scheduler or hypervisor handles the virtual separation, since the virtual machines share the same physical transmission resources.

Objectives. By performing the experiments in this thesis, the hope is to learn about the effects of virtualization and how it performs under stress, which in turn increases knowledge about the efficiency of network virtualization. The experiments are conducted by creating scripts, using already-written programs and systems, adding different loads, and measuring the effects; this is documented so that other students and researchers can benefit from the research in this thesis.

Methods. Five methodologies are used in this thesis: experimental validation, statistical comparative analysis, resource sharing, control theory, and literature review. Two systems are compared against previous research by evaluating and analyzing the statistical results. As mentioned earlier, the investigation focuses on how the scheduler executes resource sharing under stress. The first experiment, the control test, is designed without any interference: a 5 Mbit/s UDP stream passes through the system under test and is timestamped at measurement points on both the ingress and the egress. The second experiment adds an interfering load of a 5 Mbit/s UDP stream on the same system under test. Since it is a complex system, a fair amount of literature review was done, mostly to gain an understanding and an overview of the different parts of the system so that some obstacles could be avoided.

Results. The statistical comparative analysis of the experiments produced two graphs and two tables containing the coefficient of variation (CoV) of the two experiments. The control test produced a graph with a fairly even distribution over the time intervals, with a CoV difference on the order of 10⁻³, increasing somewhat over the larger time intervals. The second experiment, with two virtual machines and an interfering packet stream, is more concentrated in the 0.0025-second and 0.005-second intervals, with a larger difference, on the order of 10⁻², showing signs of a bottleneck in the system.

Conclusions. Since performing the experiments and the statistical handling of the data took longer than expected, the choice was made not to deploy the system using Open Virtual Switch instead of Linux Bridge; hence there are no other experiments to compare the performance against.
But from research referred to under related works, the researcher concluded that the difference between Open Virtual Switch and Linux Bridge is small when compared without any load, which is also confirmed on the Open Virtual Switch website, which states that it uses the same base as Linux Bridge. Linux Bridge performs according to expectations: it is a simple yet powerful tool, and the results confirm the previous research claiming that there are bottlenecks in the system. The pre-set validity requirement for this experiment was that the difference in CoV be greater than 10⁻⁵; the measured difference was on the order of 10⁻², which supports the theory that there are bottlenecks in the system. In the future it would be interesting to examine the effects of different hypervisors, virtualization techniques, packet generators, et cetera, to tackle these problems. One company that has taken countermeasures is Intel, whose DPDK confronts these efficiency problems by tailoring the scheduler towards specific tasks. The downside of Intel's DPDK is that it ties the user to Intel processors, removing one of the most important benefits of virtualization, the independence, although Intel has tried to keep DPDK as independent as possible by maintaining it as open source.
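A minimal sketch of the statistical comparison described above: the coefficient of variation (standard deviation over mean) of one-way delays computed from paired ingress/egress timestamps. The synthetic delay distributions below are stand-ins for the thesis's 5 Mbit/s UDP measurements.

```python
# CoV comparison of one-way delays, control run vs. interfered run.
# Delay values are assumed for illustration, not measured data.
import random
import statistics

random.seed(1)
delays_control = [0.0025 + random.gauss(0.0, 1.0e-4) for _ in range(10_000)]
delays_loaded = [0.0030 + random.gauss(0.0, 1.5e-4) for _ in range(10_000)]

def cov(xs):
    return statistics.stdev(xs) / statistics.mean(xs)

diff = abs(cov(delays_loaded) - cov(delays_control))
print(f"CoV control={cov(delays_control):.4f}  loaded={cov(delays_loaded):.4f}  "
      f"diff={diff:.4f}")
# A diff on the order of 1e-2, far above the thesis's 1e-5 validity
# threshold, is read as evidence of a bottleneck under the interfering load.
```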
|
527 |
INSTRUMENTING AN AIRBORNE NETWORK TELEMETRY LINK
Laird, Daniel, Temple, Kip, 10 1900
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / The Central Test and Evaluation Investment Program (CTEIP) Integrated Network Enhanced Telemetry (iNET) program is currently testing a wireless local area network (WLAN) in an L-band telemetry (TM) channel to evaluate the feasibility and capabilities of enhancing traditional TM methods within a seamless wide area network (WAN). Several advantages of networking are real-time command and control of instrumentation formats, quick-look acquisition, data retransmission and recovery (gapless TM), and real-time test-point verification. These networking functions, and all others, need to be tested and evaluated. The iNET team is developing a WLAN based on 802.x technologies to test the feasibility of the enhanced telemetry implementation for flight testing.
|
528 |
A Cost Effective Residential Telemetry Network
Byland, Sean, Clarke, Craig, Gegg, Matt, Schumacher, Ryan, Strehl, Chris, 10 1900
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California / As the cost and power consumption of wireless devices decrease, it becomes increasingly practical to use wireless communication and control in residential settings. These networks share some of the same challenges and constraints as conventional telemetry networks. This particular project focused on using a commercial off-the-shelf router to implement a residential automation system using Z-Wave wireless devices. The router can communicate status and accept commands over a conventional 802.11 network, but does not require a remote host to operate the network. The router was reprogrammed with open-source software so it could issue commands, collect data, and monitor the Z-Wave network.
|
529 |
ACHIEVING HIGH-ACCURACY TIME DISTRIBUTION IN NETWORK-CENTRIC DATA ACQUISITION AND TELEMETRY SYSTEMS WITH IEEE 1588
Grim, Evan T., 10 1900
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / Network-centric data acquisition and telemetry systems continue to gain momentum and adoption. However, inherent non-deterministic network delays hinder these systems' suitability for use where high-accuracy timing information is required. The emerging IEEE 1588 standard for time distribution offers the potential for real-time data acquisition system development using cost-effective, standards-based network technologies such as Ethernet and IP multicast. This paper discusses the challenges, realities, lessons, and triumphs experienced using IEEE 1588 in the development and implementation of such a large-scale network-centric data acquisition and telemetry system. IEEE 1588 clears a major hurdle in moving the network-centric buzz from theory to realization.
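For readers unfamiliar with the standard, the core of IEEE 1588 (PTP) is a four-timestamp exchange from which the slave estimates its offset from the master; a minimal sketch, with made-up example numbers, follows.

```python
# Core IEEE 1588 (PTP) offset/delay estimation from four timestamps:
# t1: master sends Sync, t2: slave receives it,
# t3: slave sends Delay_Req, t4: master receives it.
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # assumed-symmetric path delay
    return offset, delay

# Example: slave clock runs 50 us ahead; true one-way delay is 100 us.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=150e-6, t3=300e-6, t4=350e-6)
print(f"offset={offset*1e6:.1f} us, delay={delay*1e6:.1f} us")
```

The estimate assumes a symmetric path delay, which is why the non-deterministic network delays mentioned in the abstract are the central engineering challenge.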
|
530 |
Dynamic network flow with uncertain arc capacities
Glockner, Gregory D., 05 1900
No description available.
|