Web-Dinar: Web Based Diagnosis of Network and Application Resources in Disaster Response Systems

Deshpande, Kartik 01 January 2010 (has links) (PDF)
Disaster management and emergency response mechanisms have come of age since 9/11. Paper-based triage and evacuation are slowly being replaced by far more advanced mechanisms built on remote clients (laptops, thin clients, PDAs), RFID tags, and similar technologies, reflecting a modern trend of deploying Information Technology (IT) in disaster management. IT elements provide a great deal of flexibility and seamlessness in the communication of information. The information flowing through such a system is so critical that loss of data is unacceptable: lost data means lost medical information describing the disaster scene, painting a wrong picture of the incident. This observation motivated DiNAR (Diagnosis of Network and Application Resources). DiNAR remotely monitors all components of the deployed system infrastructure (remote clients, servers); when a fault occurs in the infrastructure (hardware, software, or communication), DiNAR captures the fault alarm and performs event correlation to find the source of the problem. The biggest challenge is that the monitored entities are scattered across the Internet. Traditional network management techniques assume that the network is within administrative control and that every monitored device is reachable on demand; the ad-hoc deployment of disaster management systems makes this assumption untenable and the task nontrivial. DiNAR is designed to work with any application whose infrastructure elements are scattered across the Internet. DIORAMA (a real-time disaster management system) represents a new class of applications (especially in the medical field) whose network infrastructure is scattered, with the Internet as the backbone connector; another example is the Intel® Health Guide PHS6000 [1], used for in-home patient monitoring. This thesis uses DIORAMA as a case study to prove the concept of DiNAR.
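
As a rough illustration of the event-correlation step, the sketch below groups fault alarms that arrive within one time window and blames the most upstream alarmed component. It is a minimal Python sketch under stated assumptions: the component names, the dependency map, and the window length are all hypothetical, since the abstract does not disclose DiNAR's actual correlation algorithm.

```python
# Hypothetical dependency map: component -> components it relies on.
DEPENDS_ON = {
    "responder_pda_7": ["field_ap_2"],
    "field_ap_2": ["uplink_router"],
    "triage_server": [],
}

def correlate(alarms, window_s=30):
    """Group alarms that arrive within one time window, then keep as
    root-cause candidates only the components whose own dependencies
    did not alarm in that window (i.e. the most upstream failures)."""
    alarms = sorted(alarms, key=lambda a: a["ts"])
    groups, current = [], []
    for alarm in alarms:
        if current and alarm["ts"] - current[0]["ts"] > window_s:
            groups.append(current)
            current = []
        current.append(alarm)
    if current:
        groups.append(current)

    roots = []
    for group in groups:
        names = {a["component"] for a in group}
        roots.append([n for n in names
                      if not any(d in names for d in DEPENDS_ON.get(n, []))])
    return roots

alarms = [{"ts": 100, "component": "responder_pda_7"},
          {"ts": 104, "component": "field_ap_2"}]
print(correlate(alarms))  # [['field_ap_2']] -> the access point, not the PDA
```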

Simulation of Packet Pacing in Small-Buffer Networks

Misra, Anindya 01 January 2010 (has links) (PDF)
The growing use of the Internet and the wide variety of applications that run on it place considerable demand on high-bandwidth networks. All-optical core networks are one candidate for meeting this demand. The only practical optical buffers today are fiber delay lines (FDLs): long optical fiber lines, convoluted and folded to provide the necessary transmission delay, which yield only small buffers capable of storing a few packets; building optical buffers of high capacity is not feasible. Consider a single TCP source sending an unbounded stream of constant-size packets through a single router: if the sender's access link is much faster than the bottleneck link toward the receiver, packets queue up at the router. We propose a mechanism to pace traffic in the network based on the queue length of the buffer at the output port: the transmission of each packet is delayed in proportion to the instantaneous queue length. A prototype of this model was simulated in a network simulator and its performance metrics were measured.
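
The pacing principle reduces to a very small function: map the output port's instantaneous queue occupancy to an added per-packet delay. The sketch below is illustrative only; the buffer size and maximum delay are assumed constants, not values from the thesis.

```python
def pacing_delay(queue_len, buffer_size, max_delay_s=0.001):
    """Pace packets in proportion to instantaneous queue occupancy:
    an empty queue adds no delay, a full queue adds max_delay_s."""
    occupancy = min(queue_len / buffer_size, 1.0)
    return occupancy * max_delay_s

# Example: a 20-packet FDL buffer that is currently half full.
print(pacing_delay(queue_len=10, buffer_size=20))  # 0.0005 s
```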

Leveraging Multi-radio Communication for Mobile Wireless Sensor Networks

Gummeson, Jeremy J 01 January 2011 (has links) (PDF)
An important challenge in mobile sensor networks is to enable energy-efficient communication over a diversity of distances while being robust to wireless effects caused by node mobility. In this thesis, we argue that the pairing of two complementary radios with heterogeneous range characteristics enables greater range and interference diversity at lower energy cost than a single radio. We make three contributions towards the design of such multi-radio mobile sensor systems. First, we present the design of a novel reinforcement learning-based link layer algorithm that continually learns channel characteristics and dynamically decides when to switch between radios. Second, we describe a simple protocol that translates the benefits of the adaptive link layer into practice in an energy-efficient manner. Third, we present the design of Arthropod, a mote-class sensor platform that combines two such heterogeneous radios (XE1205 and CC2420) and our implementation of the Q-learning based switching protocol in TinyOS 2.0. Using experiments conducted in a variety of urban and forested environments, we show that our system achieves up to 52% energy gains over a single radio system while handling node mobility. Our results also show that our system can handle short, medium and long-term wireless interference in such environments.
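
A minimal sketch of the Q-learning switching idea appears below. The state encoding, reward, and learning constants are illustrative assumptions, not the thesis's actual formulation: a state might be a quantized RSSI or recent loss rate, and the reward the negative energy spent per delivered byte, so the learner gravitates toward the cheaper radio whenever it suffices.

```python
import random

RADIOS = ["xe1205", "cc2420"]  # long-range radio vs. low-power 802.15.4 radio

class RadioSwitcher:
    """Tabular Q-learning over which radio to use for the next packet."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}  # (state, radio) -> estimated long-run value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:  # occasionally explore
            return random.choice(RADIOS)
        return max(RADIOS, key=lambda r: self.q.get((state, r), 0.0))

    def update(self, state, radio, reward, next_state):
        best_next = max(self.q.get((next_state, r), 0.0) for r in RADIOS)
        old = self.q.get((state, radio), 0.0)
        self.q[(state, radio)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```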

Virtual Network Mapping with Traffic Matrices

Wang, Cong 01 January 2011 (has links) (PDF)
Network virtualization offers a new way to run multiple, relatively independent applications on the same physical network (the substrate network) using shared substrate resources, lowering the barrier for researchers to experiment in the networking field. Within network virtualization, the Virtual Network Mapping (VNM) problem is one of the most important topics of investigation. Years of intensive research have produced several efficient algorithms for the VNM problem; however, most existing mapping algorithms assume that the virtual network request topology is known or given by the customer. In this thesis, a new VNM formulation based on traffic matrices is proposed. Using existing VNM benchmarks, we evaluate mapping performance on various metrics, and by comparing the new traffic-matrix-based VNM algorithm with existing ones we identify its advantages and shortcomings and propose optimizations.
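
One way to picture a traffic-matrix-driven mapping is the deliberately naive greedy baseline below (a sketch, not the thesis's algorithm): virtual nodes that source or sink the most traffic are placed on the substrate nodes with the most spare capacity.

```python
import numpy as np

def greedy_map(traffic, substrate_cap):
    """Map each virtual node to a substrate node, busiest virtual nodes
    (largest row+column sums of the traffic matrix) claiming the
    substrate nodes with the most spare capacity first."""
    demand = traffic.sum(axis=0) + traffic.sum(axis=1)   # per-node load
    v_order = np.argsort(-demand)                        # busiest first
    s_order = np.argsort(-np.asarray(substrate_cap))     # roomiest first
    return {int(v): int(s) for v, s in zip(v_order, s_order)}

# Example: 3 virtual nodes mapped onto 4 substrate nodes.
T = np.array([[0, 5, 1],
              [5, 0, 2],
              [1, 2, 0]])
print(greedy_map(T, substrate_cap=[10, 40, 25, 5]))
```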

Simulating a Universal Geocast Scheme for Vehicular Ad Hoc Networks

Bovee, Benjamin L 01 January 2011 (has links) (PDF)
Recently a number of communication schemes have been proposed for Vehicular Ad hoc Networks (VANETs). One of these, the Universal Geocast Scheme (UGS) proposed by Hossein Pishro-Nik and Mohammad Nekoui, supports a variety of VANET-specific characteristics such as time-varying topology, protocol variation based on road congestion, and non-line-of-sight communication. In this research, the UGS protocol is extended to consider inter-vehicle multi-hop connections at intersections with surrounding obstructions, along with single-hop communications in an open-road scenario. Since UGS is a probabilistic, repetition-based scheme, it supports the capacity-delay tradeoffs crucial for periodic safety-message exchange. The approach is shown to support both vehicle-to-vehicle and vehicle-to-infrastructure communication. This research evaluates the scheme using the NS-2 network simulator and the SUMO mobility simulator, verifying two elements crucial to successful VANETs: received packet ratio and message delay. A contemporary wireless radio propagation model is used to improve accuracy. Results show a 6% improvement in received packet ratio in intersection simulations, combined with a decrease in average packet delay, versus a previous well-known inter-vehicle communication protocol.
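
The capacity-delay tradeoff of a repetition-based scheme can be sketched in one line: if a single transmission succeeds with probability p and a safety message is repeated k times, then under a simplifying independence assumption (not UGS's full model) the delivery probability is 1 - (1 - p)^k.

```python
def delivery_prob(p_single, repetitions):
    """Probability that at least one of k independent repetitions of a
    safety message is received; more repetitions buy reliability at the
    cost of channel capacity and delay."""
    return 1.0 - (1.0 - p_single) ** repetitions

for k in (1, 3, 5):
    print(k, delivery_prob(0.6, k))  # ~0.6, 0.936, 0.98976
```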

Addressing/Exploiting Transceiver Imperfections in Wireless Communication Systems

Wang, Lihao 01 January 2011 (has links) (PDF)
This thesis consists of two research projects on wireless communication systems. In the first project, we propose a fast in-phase and quadrature (I/Q) imbalance compensation technique for the analog quadrature modulators in direct-conversion transmitters. The method needs no training sequence, no extra background data-gathering process, and no prior perfect knowledge of the envelope detector characteristics. In contrast to previous approaches, it uses points from both the linear and the predictable nonlinear regions of the envelope detector to hasten convergence. We provide a least-mean-square (LMS) version and demonstrate that the quadrature modulator compensator converges. In the second project, we propose a technique that deceives the automatic gain control (AGC) block in an eavesdropper's receiver to increase wireless physical-layer data transmission secrecy. By sharing a key with the legitimate receiver and fluctuating the transmitted signal power level at the transmitter, a positive average secrecy capacity can be achieved even when the eavesdropper has the same or better additive white Gaussian noise (AWGN) channel conditions. We then discuss and analyze the countermeasures an eavesdropper might adopt against our technique, propose approaches to eliminate them, and demonstrate that a positive average secrecy capacity can still be achieved when an eavesdropper uses these countermeasures.
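
For flavor, the sketch below shows a textbook one-tap widely-linear compensator with a circularity-restoring LMS update. This is a standard blind receiver-side formulation shown only to illustrate the LMS adaptation idea; the thesis's transmitter-side, envelope-detector-driven method differs.

```python
import numpy as np

def blind_iq_lms(x, mu=1e-4):
    """Cancel the conjugate image caused by I/Q gain/phase imbalance:
    y[n] = x[n] + w * conj(x[n]), with w adapted by the stochastic rule
    w <- w - mu * y[n]^2, which drives E[y^2] toward zero (properness).
    x is a complex numpy array of baseband samples."""
    w = 0.0 + 0.0j
    y = np.empty_like(x)
    for n, xn in enumerate(x):
        y[n] = xn + w * np.conj(xn)
        w -= mu * y[n] ** 2
    return y, w
```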

Internet-Scale Reactive Routing and Mobility

Nelson, Daniel B 01 June 2009 (has links) (PDF)
Since its commercialization, the Internet has grown exponentially. A large variety of devices can communicate, creating advanced services for a diverse ecosystem of applications. However, as the number of Internet hosts has grown, the size of the routing tables required to route data correctly between them has also increased exponentially. This growth rate necessitates increasingly frequent upgrades to routing hardware to provide additional memory for fast-access storage of route information. These upgrades are both physically and fiscally untenable, and a new Internet routing solution is necessary for future growth. This research focuses on an incrementally deployable, reactive routing system that scales to projected Internet growth. It requires no hardware or software updates to Internet routers, and offloads processing to end hosts and the network's edge. Within this framework, routers can make accurate decisions about optimal data paths, incurring no increase in path length over the current routing system. A new architecture for IP mobility is considered as a case study within this routing system and compared with existing standards and implementations. The new architecture eliminates the triangle routing problem while providing legacy hosts with connectivity to mobile devices. This mobility solution can integrate with a variety of hierarchical reactive routing systems with little overhead.

Reliable Ethernet

Movsesyan, Aleksandr 01 August 2011 (has links) (PDF)
Networks within data centers, such as connections between servers and disk arrays, need lossless flow control that allows all packets to move quickly through the network to their destinations. This paper proposes a new congestion control algorithm to satisfy the needs of such networks and to answer the question: is it possible to provide circuit-less reliability and flow control in an Ethernet network? TCP uses an end-to-end congestion control algorithm based on end-to-end round-trip time (RTT), so its flow control and error detection/correction depend on end-to-end RTT. Other approaches rely on specialized data-link-layer networks such as InfiniBand and Fibre Channel to provide network reliability. The algorithm proposed in this thesis builds on the ubiquitous Ethernet protocol to provide reliability at the data link layer without the overhead and cost of specialized networks or the delay induced by TCP's end-to-end approach. It requires modifications to Ethernet switches to implement a back-pressure-based flow control algorithm, which uses a modified version of the Random Early Detection (RED) algorithm to detect congestion. Our simulation results show that the algorithm recovers quickly from congestion and that the average latency of the network stays close to the congestion-free average. With appropriate threshold and alpha values, buffer sizes in the network and on the source nodes can be kept small, so little additional hardware is needed to implement the system.
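
A compact sketch of the detection step appears below: an EWMA of queue length is compared against two thresholds and, instead of dropping packets as classic RED does, the switch signals PAUSE upstream. The thresholds and alpha are illustrative assumptions; the thesis's modified RED and its back-pressure signaling are more involved.

```python
import random

class REDBackPressure:
    """RED-style congestion detector for a switch output queue."""

    def __init__(self, min_th=20, max_th=60, alpha=0.1):
        self.min_th, self.max_th, self.alpha = min_th, max_th, alpha
        self.avg = 0.0  # exponentially weighted average queue length

    def on_enqueue(self, queue_len):
        self.avg = (1 - self.alpha) * self.avg + self.alpha * queue_len
        if self.avg < self.min_th:
            return None            # no congestion detected
        if self.avg >= self.max_th:
            return "PAUSE"         # hard back pressure upstream
        # Between thresholds, pause with linearly increasing probability.
        p = (self.avg - self.min_th) / (self.max_th - self.min_th)
        return "PAUSE" if random.random() < p else None
```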

User Interface Design And Forensic Analysis For DIORAMA, Decision Support System For Mass Casualty Incidents

Yi, Jun 23 November 2015 (has links)
This thesis introduces the user interface design and the forensic analysis tool for the DIORAMA system. With an Android device, DIORAMA gives emergency personnel the ability to collect information in real time and to track and manage resources, and it allows responders and commanders to manage multiple incidents simultaneously. The thesis also describes the implementations of the commander app and the responder app, as well as the two different communication strategies used in DIORAMA. Several trials and simulated mass casualty incidents were conducted to test the functionality and performance of the DIORAMA system. The responders who participated in the trials were very satisfied with it, and DIORAMA reduced evacuation time by up to 43% compared to paper-based triage systems.

CUDA Enhanced Filtering In a Pipelined Video Processing Framework

Dworaczyk Wiltshire, Austin Aaron 01 June 2013 (has links) (PDF)
The processing of digital video has long been a significant computational task for modern x86 processors. With every video frame composed of one to three planes, each consisting of a two-dimensional array of pixel data, and a video clip comprising thousands of such frames, the sheer volume of data is significant. With the introduction of new high-definition video formats such as 4K and stereoscopic 3D, the volume of uncompressed frame data is growing ever larger. Modern CPUs offer performance enhancements for processing digital video through SIMD instruction sets such as SSE2 or AVX. However, even with these instruction sets, CPUs are limited by their inherently sequential design and can only operate on a handful of bytes in parallel; even processors with many cores achieve only a modest level of parallelism. GPUs provide an alternative, massively parallel architecture, offering thousands of throughput-oriented cores instead of at most tens of generalized “good enough at everything” x86 cores. The GPU’s throughput-oriented cores are far more adept at handling large arrays of pixel data, since many video filtering operations can be performed independently per pixel. This computational independence allows pixel processing to scale across hundreds or even thousands of device cores. This thesis explores the utilization of GPUs for video processing and evaluates the advantages and caveats of porting the modern video filtering framework Vapoursynth to run entirely on the GPU. Compute-heavy GPU-enabled video processing achieves up to a 108% speedup over an SSE2-optimized, multithreaded CPU implementation.
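
As a rough illustration of the per-pixel independence the thesis exploits, here is a minimal brightness filter written as a GPU kernel with Numba's CUDA bindings. The toolchain is an assumption for illustration only (Vapoursynth filters are not written this way, and running the snippet requires a CUDA-capable GPU); the point is that each thread owns exactly one pixel, which is what lets the work spread across thousands of device cores.

```python
import numpy as np
from numba import cuda

@cuda.jit
def brighten(frame, gain, out):
    # One thread per pixel: no pixel depends on any other,
    # so the kernel scales across all available GPU cores.
    y, x = cuda.grid(2)
    if y < frame.shape[0] and x < frame.shape[1]:
        v = frame[y, x] * gain
        out[y, x] = 1.0 if v > 1.0 else v

frame = np.random.rand(1080, 1920).astype(np.float32)  # one 1080p plane
out = np.zeros_like(frame)
threads = (16, 16)
blocks = ((frame.shape[0] + 15) // 16, (frame.shape[1] + 15) // 16)
brighten[blocks, threads](frame, np.float32(1.2), out)
```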
