521 |
Extensions for Multicast in Mobile Ad-hoc Networks (XMMAN): The Reduction of Data Overhead in Wireless Multicast Trees / Christman, Michael Edward, 22 August 2002
Mobile Ad hoc Network (MANET) routing protocols are designed to provide connectivity between wireless mobile nodes that do not have access to high-speed backbone networks. While many unicast MANET protocols have been explored, research involving multicast protocols has been limited. Existing multicast algorithms attempt to reduce routing overhead, but few, if any, attempt to reduce data overhead.
The broadcast nature of wireless communication creates a unique environment in which overlaps in coverage are common. When designed properly, a multicast algorithm can take advantage of these overlaps and reduce data overhead. Unlike a unicast route, in which there is one path between a sender and receiver, a multicast tree can have multiple branches between the sender and its multiple receivers. Some of these paths can be combined to reduce redundant data rebroadcasts.
The extensions presented in this thesis are a combination of existing and original routing techniques designed to reduce data rebroadcasts by aggregating multicast data flows. One such optimization takes advantage of the multipoint relay (MPR) nodes used by the Optimized Link State Routing (OLSR) unicast protocol. These nodes are used in unicast routing to reduce broadcast overhead, but they can also help create efficient multicast data flows. Additionally, by listening to routing messages meant for other nodes, a host can learn about the surrounding topology and may be able to make routing changes that improve the multicast tree.
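To make the MPR idea concrete, here is a minimal Python sketch of the greedy relay-selection heuristic popularized by OLSR (RFC 3626): cover every strict two-hop neighbor through as few one-hop neighbors as possible. The function and the example topology are illustrative, not taken from the thesis.

    def select_mprs(one_hop, two_hop_via):
        # Illustrative OLSR-style MPR selection; not code from the thesis.
        # two_hop_via: 1-hop neighbor -> set of strict 2-hop neighbors heard via it.
        two_hop = set().union(*two_hop_via.values())
        mprs, covered = set(), set()
        for n in one_hop:  # Rule 1: keep any sole gateway to some 2-hop node
            via_others = set().union(*(two_hop_via[m] for m in one_hop if m != n))
            if two_hop_via[n] - via_others:
                mprs.add(n)
                covered |= two_hop_via[n]
        while covered < two_hop:  # Rule 2: greedily add the best remaining cover
            best = max(one_hop - mprs, key=lambda n: len(two_hop_via[n] - covered))
            mprs.add(best)
            covered |= two_hop_via[best]
        return mprs

    # Hypothetical topology: D alone reaches F, so D must relay; D also covers E.
    print(select_mprs({'B', 'C', 'D'}, {'B': {'E'}, 'C': {'E'}, 'D': {'E', 'F'}}))
    # -> {'D'}: rebroadcasts by B and C would be redundant coverage overlap,
    #    the kind of redundancy the extensions above aim to eliminate.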
This protocol was implemented as a software router in Linux. It should be emphasized that this is a real implementation and not a simulation. Experiments showed that the number of data packets in the network could be reduced by as much as 19 percent. These improvements were accomplished while using only a small amount of routing overhead. / Master of Science
|
522 |
Salience and Frontoparietal Network Patterns in Children with Autism Spectrum Disorder and Attention-Deficit/Hyperactivity Disorder / Antezana, Ligia, 18 April 2018
Autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD) have been difficult to differentiate in clinical settings, as these two disorders are phenotypically similar and both exhibit atypical attention and executive functioning. Mischaracterizations between these two disorders can lead to inappropriate medication regimens, significant delays in special services, and personal distress to families and caregivers. There is evidence that ASD and ADHD differ biologically in attentional and executive functioning mechanisms, as only half of individuals with co-occurring ASD and ADHD respond to stimulant medication. Further, neurobehavioral work has supported these biological differences, finding both shared and distinct functional connectivity in ASD and ADHD. Specifically, two brain networks have been implicated in these disorders: the salience network (SN) and the frontoparietal network (FPN). The SN is anchored by the bilateral anterior insula and the dorsal anterior cingulate cortex and has been implicated in “bottom-up” attentional processes for both internal and external events. The FPN is anchored by lateral prefrontal cortex areas and the parietal lobe and plays a role in “top-down” executive processes. Functional connectivity subgroups differentiated ASD from ADHD by between-network SN-FPN connectivity patterns, but not by within-SN or within-FPN connectivity patterns. Further, subgroup differences in ASD+ADHD comorbidity vs. ASD only were found for within-FPN connectivity. / Master of Science / Autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD) have been difficult to differentiate in clinical settings, as these two disorders are similar and both exhibit attention and executive functioning difficulties. ASD and ADHD show both shared and distinct functional brain network connectivity related to attention and executive functioning. Two brain networks have been implicated in these disorders: the salience network (SN) and the frontoparietal network (FPN). The SN has been implicated in “bottom-up” attentional processes for both internal and external events. The FPN plays a role in “top-down” executive processes. This study found that functional connectivity patterns between the SN and FPN differentiated ASD from ADHD. Further, connectivity patterns in children with co-occurring ASD and ADHD were characterized by within-FPN connectivity.
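For readers unfamiliar with the method, between-network functional connectivity of the kind analyzed here is typically computed as Pearson correlations between region-of-interest (ROI) time series. A minimal sketch, with hypothetical array shapes standing in for the study's actual preprocessing pipeline:

    import numpy as np

    def between_network_fc(sn_ts, fpn_ts):
        # sn_ts: (timepoints, SN ROIs); fpn_ts: (timepoints, FPN ROIs).
        # Returns the SN-by-FPN block of the full correlation matrix.
        n_sn = sn_ts.shape[1]
        full = np.corrcoef(np.hstack([sn_ts, fpn_ts]), rowvar=False)
        return full[:n_sn, n_sn:]

    rng = np.random.default_rng(0)  # placeholder data, not BOLD signal
    fc = between_network_fc(rng.standard_normal((200, 4)),
                            rng.standard_normal((200, 6)))
    print(fc.shape)  # (4, 6): one connectivity value per SN-FPN region pair

Subgrouping would then cluster participants on these between-network values; within-SN or within-FPN analyses use the diagonal blocks of the same matrix instead.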
|
523 |
Partitioning Techniques for Reducing Computational Effort of Routing in Large Networks / Woodward, Mike E.; Al-Fawaz, M.M., January 2004
A new scheme is presented for partitioning a network having a specific number of nodes and degree of connectivity such that the number of operations required to find a constrained path between a source node and destination node, averaged over all source-destination pairs, is minimised. The scheme can speed up the routing function, possibly by orders of magnitude under favourable conditions, at the cost of a sub-optimal solution.
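To illustrate why partitioning pays off, here is a minimal back-of-envelope model in Python. The O(n^2) search cost and the two-level split are simplifying assumptions for illustration, not the paper's actual scheme:

    # A flat shortest-path search over n nodes costs about n^2 operations;
    # with k partitions a query searches one partition (~n/k nodes) plus
    # the inter-partition graph (~k supernodes).

    def ops_flat(n):
        return n ** 2

    def ops_partitioned(n, k):
        return (n / k) ** 2 + k ** 2

    n = 10_000
    best_k = min(range(1, n), key=lambda k: ops_partitioned(n, k))
    print(best_k, ops_flat(n) / ops_partitioned(n, best_k))
    # best_k is about sqrt(n) = 100, roughly a 5000x reduction in this model,
    # consistent with the "orders of magnitude" speed-up claimed above.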
|
524 |
Inferring Network Status from Partial Observations / Rangudu, Venkata Pavan Kumar, 09 February 2017
In many network applications, such as the Internet and infrastructure networks, nodes fail or get congested dynamically, and tracking this information about all the nodes in a network where some dynamical process is taking place is a fundamental problem. In this work, we study the problem of inferring the complete set of failed nodes when only a sample of the node failures is known---we will refer to this problem as NetStateInf. We consider the setting in which there exist correlations between node failures, which has been studied in the case of many infrastructure networks. We formalize the NetStateInf problem using the Minimum Description Length (MDL) principle and show that, in general, finding solutions that minimize the MDL cost is hard, and we develop efficient algorithms with rigorous performance guarantees for finding near-optimal MDL cost solutions. We evaluate our methods on both synthetic and real-world datasets, including one from WAZE, a crowd-sourced road navigation tool that collects and presents traffic incident reports. We found that the proposed greedy algorithm is able to recover, on average, 80% of the failed nodes in a network for a given partial sample of input failures, which are sampled from the true set of failures at some predefined rate. Furthermore, we proved that this algorithm finds a solution whose MDL cost is within an additive log(n) approximation of the optimal. / Master of Science / In many real-world networks, such as Internet and transportation networks, there are dynamical processes taking place. Due to the activity of these processes, some of the elements in these networks may fail at random; service node failures in the Internet and traffic congestion in road networks are two such scenarios. Identifying the complete state information of such networks is a fundamental problem. In this work, we study the problem of identifying unknown node failures in a network based on partial observations -- we refer to this problem as NetStateInf. Similar to previous studies in this area, we assume settings where node failures in these networks are correlated. We approached this problem using the Minimum Description Length (MDL) principle, which states that the information learned from given data can be maximized by compressing it, i.e., by identifying the maximum number of patterns in the data. Using these concepts we developed a mathematical representation of the NetStateInf problem and proposed efficient algorithms with rigorous performance guarantees for finding the set of failed nodes that best explains the observed failures. We evaluated our algorithms against both synthetic data -- an artificial network with failures generated from a predefined mathematical model -- and real-world data, for example traffic alerts collected by WAZE, a crowd-sourced navigation tool, for the Boston road network. Using this approach we are able to recover around 80% of the failed nodes in the network from the given partial failure data. Furthermore, we proved that our algorithm finds a solution whose MDL cost differs from the optimal by at most an additive log(n).
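The flavor of such a greedy MDL algorithm can be sketched as follows: starting from the observed failures, keep adding the node that most reduces a description-length cost that rewards spatially clustered failure sets. The cost function below is a simplified stand-in for the thesis' actual MDL formulation, and all names are hypothetical.

    import math
    import networkx as nx

    def mdl_cost(G, S):
        # Encode S as connected clusters: one log2(n) "seed" per component,
        # plus cheaper per-node bits to grow each component from a neighbor.
        n = G.number_of_nodes()
        d = max(deg for _, deg in G.degree())
        comps = nx.number_connected_components(G.subgraph(S)) if S else 0
        return comps * math.log2(n) + (len(S) - comps) * math.log2(max(d, 2))

    def greedy_recover(G, observed):
        S = set(observed)  # observed failures are known to be real
        while True:
            gains = {v: mdl_cost(G, S) - mdl_cost(G, S | {v})
                     for v in set(G) - S}
            if not gains or max(gains.values()) <= 0:
                return S
            S.add(max(gains, key=gains.get))

    # Observed failures at 2 and 4 on a path: the cheapest description also
    # includes the unobserved node 3 between them.
    print(greedy_recover(nx.path_graph(10), {2, 4}))  # -> {2, 3, 4}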
|
525 |
On Programmable Control and Optimization for Multi-Hop Wireless Networks / Jalaian, Brian Alexander, 24 October 2016
Traditionally, achieving good performance for a multi-hop wireless network is known to be difficult. The main approach to control the operation of such a network relies on a distributed paradigm, assuming that a centralized approach is not feasible. Relying on a distributed paradigm could be justified at the time when the basic technical building blocks (e.g., node computational power, communication technology, positioning technology) were the bottlenecks. Recent advances and breakthroughs in these technical areas along with the emergence of programmable networks with softwarized control plane intelligence allow us to consider employing a centralized optimization paradigm to control and manage the operation of a multi-hop wireless network. The programmable control provides a platform on which the centralized global network optimization paradigm can be supported. The benefits of centralized network optimization lie especially in that a network may be configured in a way that offers optimal performance, which is hardly possible for a network relying on distributed operation.
The objectives of this dissertation are to fully understand the potential benefits of a centralized control plane for a multi-hop wireless network, to identify any new challenges under this new paradigm, and to devise innovative solutions for optimal performance via a centralized control plane. Given that the performance of a wireless network heavily depends on its physical layer capabilities, we will consider a number of advanced wireless technologies, including MIMO, full duplex, and interference cancellation at the physical layer. The focus is on building tractable computational models for these wireless technologies that can be used for modeling, analysis and optimization in the centralized control plane. Problem formulation and efficient solution procedures are developed for various centralized optimization problems across multiple layers. End-to-end throughput maximization is a key objective among these optimization problems on the centralized control plane and is used to demonstrate the superior advantage of this paradigm. We study several problems:
• Integration of SIC and MIMO DoF IC.
We propose to integrate MIMO degree-of-freedom (DoF) based interference cancellation (IC) and successive interference cancellation (SIC) in a MIMO multi-hop network under the DoF protocol model. We show that DoF-based IC and SIC can be jointly integrated to combat interference more effectively and improve end-to-end throughput significantly. We develop the necessary mathematical models to realize the idea in a multi-hop wireless network.
• Full-Duplex MIMO Wireless Networks Throughput.
We investigate the performance of MIMO full-duplex (FD) in a multi-hop network.
We show that if IC is exploited, MIMO FD can achieve significant throughput gain over MIMO half-duplex (HD) in a multi-hop network, which is contrary to recent literature suggesting an unexpectedly marginal gain. Our proposed model handles the additional network interference through joint link scheduling and interference cancellation.
• PCP in Tactical Wireless Networking.
We propose the idea of the Programmable Control Plane (PCP) for the tactical wireless network under the protocol model. PCP decouples the control and data plane and allows the network control layer functionalities to be dynamically configured to adapt to specific wireless channel conditions, customized applications and/or certain tactical situations. The proposed PCP functionalities are cast into a centralized optimization problem, which can be updated as needed and provide a centralized intelligence to manage the operation of a wireless MIMO multi-hop network under the protocol model.
• UPCP in Heterogeneous Wireless Networks.
We propose the idea of the Unified Programmable Control Plane (UPCP) for tactical heterogeneous wireless networks with interference management capabilities under the SINR model. The UPCP abstracts the complexity of the underlying network comprised of heterogeneous wireless technologies and provides centralized intelligence over the network resources. We develop the necessary mathematical models to realize the UPCP. / Ph. D. / In the past decades, wireless ad hoc communication networks have found a number of applications in both civilian and military environments. Such networks are comprised of a set of smart nodes able to organize themselves into a multi-hop network (communicating from source nodes to destination nodes across multiple intermediary relay nodes) to provide various services such as unattended and real-time surveillance. Their abilities to self-form and self-heal make them attractive for network deployment and maintenance, especially in scenarios where infrastructure is hard to establish. Because of their ease of deployment and independence from infrastructure, wireless ad hoc networks have motivated more and more research efforts to sustain their continued growth and well-being. Nevertheless, with rapidly increasing demand for data rate from various applications, we find ourselves still very much in the infancy of the development of such networks, which have the potential to offer orders-of-magnitude higher network-level throughput.
Traditionally, the main approach to control the operation of a wireless ad hoc network relies on a distributed paradigm, assuming that a centralized approach is not feasible. Relying on a distributed paradigm could be justified at the time when the basic technical building blocks (e.g., node computational power, communication technology, positioning technology) were the bottlenecks. Recent advances and breakthroughs in these technical areas, along with the emergence of programmable networks with softwarized control plane intelligence, allow us to consider employing a centralized optimization paradigm to control and manage the operation of a multi-hop wireless network. The objectives of this dissertation are to fully understand the potential benefits of a centralized optimization paradigm in a multi-hop wireless network, to identify any new challenges under this new paradigm, and to devise innovative solutions for optimal performance.
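Abstracting away the physical-layer detail, the centralized end-to-end throughput objective can be illustrated as a small linear program: maximize a session rate subject to flow conservation and link capacities. The topology, capacity numbers, and use of scipy below are hypothetical illustrations; the dissertation's formulations additionally encode the MIMO DoF, scheduling, and interference constraints.

    from scipy.optimize import linprog

    links = [('s','a'), ('s','b'), ('a','b'), ('a','d'), ('b','d')]
    cap   = [10, 6, 4, 5, 9]        # stand-ins for physical-layer constraints
    relays = ['a', 'b']             # 's' is the source, 'd' the destination

    # Variables: one flow per link, plus the session rate r (last variable).
    n = len(links) + 1
    c = [0.0] * len(links) + [-1.0]     # maximize r == minimize -r

    A_eq, b_eq = [], []
    for v in relays:                    # flow conservation at each relay
        row = [0.0] * n
        for i, (u, w) in enumerate(links):
            if w == v: row[i] = 1.0     # inflow
            if u == v: row[i] = -1.0    # outflow
        A_eq.append(row); b_eq.append(0.0)
    row = [0.0] * n                     # the source must emit exactly r
    for i, (u, w) in enumerate(links):
        if u == 's': row[i] = -1.0
        if w == 's': row[i] = 1.0
    row[-1] = 1.0
    A_eq.append(row); b_eq.append(0.0)

    bounds = [(0, cap[i]) for i in range(len(links))] + [(0, None)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x[-1])                    # optimal end-to-end rate (14 here)

A centralized controller can solve such a program globally and push the resulting rates and routes to the nodes, which is precisely what a distributed protocol cannot guarantee.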
|
526 |
Analysis of Resource Isolation and Resource Management in Network Virtualization / Lindholm, Rickard, January 2016
Context. Virtualized networks are considered a major technological advancement, offering plenty of functional benefits compared to today's dedicated networking elements. Virtualization allows network designers to separate networks and adapt resources to the actual loads, in other words, load balancing. Virtual networks would enable minimized downtime for deployment of updates and similar tasks: a simple migration, then updating the linking after properly testing and preparing the virtual machine with the new software. Once this technology is proven efficient, or evaluated and adapted to address its existing flaws, virtualized networks will claim the tasks of today's dedicated networking elements. But there are still unknown behaviors and effects of the technology, for example how the scheduler or hypervisor handles the virtual separation, since the virtual machines share the same physical transmission resources.

Objectives. By performing the experiments in this thesis, the hope is to learn about the effects of virtualization and how it performs under stress, and thereby about the efficiency of network virtualization. The experiments are conducted by creating scripts, using already-written programs and systems, adding different loads and measuring the effects; this is documented so that other students and researchers can benefit from the research done in this thesis.

Methods. In this thesis five different methodologies are employed: experimental validation, statistical comparative analysis, resource sharing, control theory and literature review. Two systems are compared to previous research by evaluating and analyzing the statistical results. As mentioned earlier, the investigation focuses on how the scheduler executes resource sharing under stress. The first experiment, the control test, is designed without any interference: a 5 Mbit/s UDP stream passes through the system under test and is timestamped at measurement points on both the ingress and the egress. The second experiment adds an interfering load of a 5 Mbit/s UDP stream on the same system under test. Since it is a complex system, a fair amount of literature review was done, mostly to gain an understanding and an overview of the different parts of the system so that some obstacles could be avoided.

Results. The statistical comparative analysis of the experiments produced two graphs and two tables containing the coefficient of variance of the two experiments. The control test produced a graph with a fairly even distribution over the time intervals, with a coefficient-of-variance difference on the order of 10^-3, increasing somewhat over the larger time intervals. The second experiment, with two virtual machines and an interfering packet stream, is more concentrated in the 0.0025 s and 0.005 s intervals, with a larger difference than the control test, on the order of 10^-2, showing some signs of a bottleneck in the system.

Conclusions. Since the performance of the experiments and the statistical handling of the data took longer than expected, the choice was made not to redeploy the system using Open Virtual Switch instead of Linux Bridge; hence there are no further experiments to compare the performance with.
But from research referenced under related works, it was concluded that the difference between Open Virtual Switch and Linux Bridge is small when compared without introducing any load. This is also confirmed on the Open Virtual Switch website, which states that Open Virtual Switch uses the same base as Linux Bridge. Linux Bridge performs according to expectations; it is a simple yet powerful tool, and the results confirm previous research claiming that there are bottlenecks in the system. The pre-set validity requirement for this experiment was a CoV difference greater than 10^-5; the measured difference was on the order of 10^-2, which supports the theory that there are bottlenecks in the system. In the future it would be interesting to examine the effects of different hypervisors, virtualization techniques, packet generators, etcetera, to tackle these problems. A company that has taken countermeasures is Intel, which has developed DPDK to confront these efficiency problems by tailoring the scheduler towards the specific tasks. The downside of Intel's DPDK is that it limits the user to Intel processors and removes one of the most important benefits of virtualization: independence. Intel has, however, tried to keep it as independent as possible by maintaining DPDK as open source.
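The statistic at the heart of this comparison, the coefficient of variance (CoV) of per-packet delays, is straightforward to compute from the ingress/egress timestamps. A minimal sketch, with hypothetical timestamp values standing in for the thesis' packet captures:

    import statistics

    def delay_cov(ingress_ts, egress_ts):
        # Both lists hold per-packet timestamps in seconds, in packet order.
        delays = [e - i for i, e in zip(ingress_ts, egress_ts)]
        return statistics.stdev(delays) / statistics.mean(delays)

    # Two runs: an undisturbed control and a run with an interfering stream.
    control = delay_cov([0.000, 0.001, 0.002], [0.0021, 0.0030, 0.0041])
    loaded  = delay_cov([0.000, 0.001, 0.002], [0.0025, 0.0048, 0.0052])
    print(abs(control - loaded))  # CoV difference, compared to the 10^-5 bound

A difference well above the pre-set 10^-5 threshold, as measured in the thesis, indicates that the interfering stream perturbs delay variability, i.e., imperfect resource isolation.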
|
527 |
INSTRUMENTING AN AIRBORNE NETWORK TELEMETRY LINK / Laird, Daniel; Temple, Kip, October 2006
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / The Central Test and Evaluation Investment Program (CTEIP) Integrated Network Enhanced Telemetry (iNET) program is currently testing a wireless local area network (WLAN) in an L-band telemetry (TM) channel to evaluate the feasibility and capabilities of enhancing traditional TM methods in a seamless wide area network (WAN). Several advantages of networking are real-time command and control of instrumentation formats, quick-look acquisition, data retransmission and recovery (gapless TM), and real-time test point verification. These networking functions, and all others, need to be tested and evaluated. The iNET team is developing a WLAN based on 802.x technologies to test the feasibility of the enhanced telemetry implementation for flight testing.
|
528 |
A Cost Effective Residential Telemetry Network / Byland, Sean; Clarke, Craig; Gegg, Matt; Schumacher, Ryan; Strehl, Chris, October 2008
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California / As cost and power consumption of wireless devices decrease, it becomes increasingly practical to use wireless communications and control in residential settings. These networks share some of the same challenges and constraints as conventional telemetry networks. This particular project focused on using a commercial, off-the-shelf router to implement a residential automation system using Z-Wave wireless devices. The router can communicate status and accept commands over a conventional 802.11 network, but does not require a remote host to operate the network. The router was reprogrammed using open-source software so that it could issue commands, collect data, and monitor the Z-Wave network.
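The division of labor described above can be sketched as follows: the router answers status queries and accepts commands on its IP side while driving the Z-Wave network locally, so no remote host is required. The ZWaveSerial placeholder and the URL scheme below are hypothetical, not the project's actual firmware interface.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ZWaveSerial:                  # placeholder for the real Z-Wave bridge
        def __init__(self):
            self.state = {}             # node id -> last commanded level
        def set_level(self, node, level):
            self.state[node] = level    # real firmware would write a serial frame

    zwave = ZWaveSerial()

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith('/set/'):        # e.g. GET /set/3/255
                _, _, node, level = self.path.split('/')
                zwave.set_level(int(node), int(level))
            self.send_response(200)
            self.end_headers()
            self.wfile.write(repr(zwave.state).encode())  # report status

    HTTPServer(('', 8080), Handler).serve_forever()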
|
529 |
ACHIEVING HIGH-ACCURACY TIME DISTRIBUTION IN NETWORK-CENTRIC DATA ACQUISITION AND TELEMETRY SYSTEMS WITH IEEE 1588 / Grim, Evan T., October 2006
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / Network-centric data acquisition and telemetry systems continue to gain momentum and adoption. However, inherent non-deterministic network delays hinder these systems’ suitability for use where high-accuracy timing information is required. The emerging IEEE 1588 standard for time distribution offers the potential for real-time data acquisition system development using cost-effective, standards-based network technologies such as Ethernet and IP multicast. This paper discusses the challenges, realities, lessons, and triumphs experienced using IEEE 1588 in the development and implementation of such a large-scale network-centric data acquisition and telemetry system. IEEE 1588 clears a major hurdle in moving the network-centric buzz from theory to realization.
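The heart of IEEE 1588 (PTP) is a two-way timestamp exchange from which a slave estimates its offset from the master clock, assuming a symmetric network path. A minimal sketch with hypothetical timestamps:

    def ptp_offset_and_delay(t1, t2, t3, t4):
        # t1: master sends Sync; t2: slave receives Sync;
        # t3: slave sends Delay_Req; t4: master receives Delay_Req.
        offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
        delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay estimate
        return offset, delay

    # Example: slave clock runs 50 us ahead; true one-way delay is 100 us.
    offset, delay = ptp_offset_and_delay(10.000000, 10.000150,
                                         10.000200, 10.000250)
    print(offset, delay)   # ~5e-05 and ~1e-04; the slave then subtracts the offset

The non-deterministic delays mentioned above enter through the symmetry assumption, which is why hardware timestamping close to the wire is what makes IEEE 1588 accurate in practice.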
|
530 |
Dynamic network flow with uncertain arc capacities / Glockner, Gregory D.
No description available.
|