About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Reduced Energy Consumption and Improved Accuracy for Distributed Speech Recognition in Wireless Environments

Delaney, Brian William 04 October 2004 (has links)
The central theme of this dissertation is the study of a multimedia client for pervasive wireless multimedia applications. Speech recognition is considered as one such application, where computational demands have hindered its use on wireless mobile devices. Our analysis considers distributed speech recognition on hardware platforms with PDA-like functionality (i.e., wireless LAN networking, high-quality audio input/output, a low-power general-purpose processing core, and limited amounts of flash and working memory). We focus on quality of service for the end-user (i.e., ASR accuracy and delay) and on reduced energy consumption with increased battery lifetime, and we investigate quality-of-service and energy trade-offs in this context. We present software optimizations of a speech recognition front-end that reduce its energy consumption by over 80% compared to the original implementation. A power on/off scheduling algorithm for the wireless interface is presented; this scheduling can increase the battery lifetime by an order of magnitude. We study the effects of wireless networking and fading-channel characteristics on distributed speech recognition over Bluetooth and IEEE 802.11b networks. Viewed as a whole, the optimized distributed speech recognition system can reduce total energy consumption by over 95% compared to a client-side ASR implementation. We present an interleaving and loss-concealment algorithm that increases the robustness of distributed speech recognition over a burst-error channel. This improvement allows a decreased reliance on error-protection overhead, which can reduce transmit energy by up to 46% on a Bluetooth wireless network. The findings presented in this dissertation stress the importance of energy-aware design and optimization at all levels for battery-powered wireless devices.
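The gain from powering the radio off between bursts can be illustrated with a back-of-envelope energy model. All power figures below are hypothetical stand-ins for a PDA-class 802.11b interface; the dissertation's measured values are not reproduced here.

```python
# Hypothetical power figures (watts) for a PDA-class 802.11b interface.
P_IDLE = 0.8    # interface awake but not transmitting
P_TX = 1.4      # actively transmitting
P_SLEEP = 0.05  # interface powered down by the scheduler

def session_energy(t_total, t_tx, duty_cycled):
    """Energy (joules) for one recognition session lasting t_total seconds,
    of which t_tx seconds are spent transmitting speech features."""
    t_rest = t_total - t_tx
    if duty_cycled:
        # The on/off scheduler powers the radio down between feature bursts.
        return P_TX * t_tx + P_SLEEP * t_rest
    return P_TX * t_tx + P_IDLE * t_rest

always_on = session_energy(10.0, 0.5, duty_cycled=False)
scheduled = session_energy(10.0, 0.5, duty_cycled=True)
print(round(always_on / scheduled, 1))  # battery lifetime scales with this ratio
```

With these illustrative numbers the scheduled radio uses roughly a seventh of the energy; with a lower sleep power or a sparser transmit schedule the ratio approaches the order-of-magnitude lifetime gain the abstract reports.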
122

AODV-ABR:Adaptive Backup Route in Ad-hoc Networks

Hsiao, Sheng-Yu 06 September 2004 (has links)
An ad-hoc network operates without a central entity or infrastructure and is composed of highly mobile hosts. In ad-hoc networks, routing protocols must cope with host mobility and bandwidth constraints, and many routing protocols have been proposed recently. A recent trend in ad-hoc network routing is the reactive, on-demand philosophy, in which routes are established only when required. AODV (Ad-hoc On-demand Distance Vector routing) evaluates routes only on an as-needed basis, and routes are maintained only as long as they are necessary. Because the network topology changes frequently in ad-hoc networks, several on-demand protocols with multiple paths or backup routes have been proposed. Sung-Ju Lee and Mario Gerla proposed the AODV-BR scheme, which improves existing on-demand routing protocols by creating a mesh and providing multiple alternate routes. The algorithm establishes the mesh and the multiple paths using the RREP (Route Reply) messages of AODV, without transmitting any extra control message. In this paper, we propose two schemes, AODV-ABR (Adaptive Backup Route) and AODV-ABL (Adaptive Backup Route and Local repair), which modify AODV-BR to increase the routing protocol's adaptation to topology changes. In AODV-ABR, an alternative route can be created by overhearing not only RREP packets but also data packets. AODV-ABL combines the benefits of AODV-ABR and local repair. Finally, we evaluate the performance improvement by simulation.
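The distinction between AODV-BR and AODV-ABR can be sketched as a table of overheard backup routes. The class below is our illustration of the idea, not the authors' implementation; names and packet types are assumptions.

```python
class AltRouteTable:
    """Sketch of adaptive backup routes: a node that overhears a RREP *or*
    a data packet records the overheard neighbour as an alternate next hop
    toward the destination. AODV-BR learns only from RREPs; the ABR
    extension also learns from data packets."""

    def __init__(self):
        self.alternates = {}  # destination -> set of alternate next hops

    def overhear(self, pkt_type, dest, next_hop):
        # Learning from "DATA" packets is what ABR adds over BR.
        if pkt_type in ("RREP", "DATA"):
            self.alternates.setdefault(dest, set()).add(next_hop)

    def recover(self, dest):
        """On a broken primary link, fall back to any known alternate hop."""
        hops = self.alternates.get(dest)
        return next(iter(sorted(hops))) if hops else None

t = AltRouteTable()
t.overhear("RREP", "D", "B")
t.overhear("DATA", "D", "C")  # overheard data packet: ABR-only learning
print(t.recover("D"))         # prints "B"
```

Because data packets flow continuously while RREPs appear only at route setup, overhearing data keeps the backup table fresh as the topology changes, which is the adaptation the abstract describes.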
123

FTCP, Csnoop - Two Novel Strategies for TCP over Wired and Wireless Network

Shiu, Jia-Ching 03 July 2002 (has links)
Abstract: The throughput of a TCP connection is determined by the size of the congestion window (cwnd), which increases each time an acknowledgement arrives. As a result, TCP is biased against connections with long round-trip times (RTTs). To improve the fairness of TCP, we propose a new scheme, FTCP (Fair TCP). Unlike TCP, in the congestion-avoidance state FTCP compares its RTT with a standard RTT to adjust the amount by which cwnd increases when an ACK arrives at the sender, so FTCP can keep the throughput growth rate the same across connections with different RTTs. When FTCP enters the timeout state, it sets an appropriate slow-start threshold by computing the difference between cwnd/2 and the cwnd a standard connection would have on reaching ssthresh, so that FTCP eliminates the throughput differences between connections with different RTTs when leaving the slow-start state. FTCP significantly improves the unfair bandwidth distribution between connections with different RTTs.

TCP connections over wireless links perform badly because of unnecessary congestion control, inefficiency under bursty packet loss, and the long delay in recovering cwnd. Among previously proposed schemes, Snoop uses the base station (BS) as a pivot point to cache unacknowledged TCP packets. When errors occur on the wireless link, Snoop retransmits the packets locally from the BS instead of having the sender retransmit them, and it shields the sender from the duplicate ACKs caused by wireless errors, avoiding unnecessary congestion control. However, Snoop adopts the same retransmission style as TCP: it retransmits only one packet per run of duplicate ACKs. Snoop recovers lost packets more quickly and tolerates a higher bit error rate than TCP, but it does not really solve TCP's degraded performance under multiple losses; when the channel quality is very bad, Snoop still performs poorly. We propose a new scheme, Csnoop (continuous Snoop), extended from Snoop.

When bursty errors occur on the wireless link, Csnoop retransmits one lost packet from the BS in the first RTT and counts the ACKs arriving at the BS to infer the number of lost packets; it then retransmits those lost packets back to back. When a local timeout occurs, Csnoop infers that all packets were dropped and retransmits every packet cached in the buffer. Simulations show that Csnoop achieves better throughput than Snoop and TCP, especially over bad-quality wireless links. Furthermore, Csnoop needs less buffer space than Snoop to cache unacknowledged packets at the base station.
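FTCP's fairness idea can be sketched numerically. The thesis does not quote its formula here; scaling the per-ACK increment by the squared RTT ratio is our reading of "keeping the throughput growth rate the same", since standard TCP's throughput grows at a rate proportional to 1/RTT². All names below are illustrative.

```python
def ftcp_increment(cwnd, rtt, std_rtt):
    """Per-ACK congestion-avoidance increase. Standard TCP adds 1/cwnd;
    this sketch scales that by (rtt/std_rtt)**2 so long-RTT flows grow
    their throughput at the same wall-clock rate as the reference flow.
    The quadratic factor is an assumption, not a quoted formula."""
    return (1.0 / cwnd) * (rtt / std_rtt) ** 2

def throughput_growth_per_sec(cwnd, rtt, std_rtt):
    """Rate of change of throughput (= cwnd/rtt) per second of real time."""
    acks_per_sec = cwnd / rtt
    cwnd_growth = acks_per_sec * ftcp_increment(cwnd, rtt, std_rtt)
    return cwnd_growth / rtt

a = throughput_growth_per_sec(10, 0.05, 0.05)  # short-RTT flow
b = throughput_growth_per_sec(10, 0.20, 0.05)  # 4x longer RTT
print(abs(a - b) < 1e-9)  # prints "True": growth rate is RTT-independent
```

Under standard TCP the second flow would ramp up 16 times more slowly; the scaled increment removes exactly that bias.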
124

Concurrent Search for Digital Content in Wireless Mobile Networks

Wu, Cheng-Lin 06 August 2008 (has links)
With state-of-the-art IC technology, we can share a variety of digital content stored on our mobile devices via wireless communications. When a request for digital content arrives, the base stations have to search for at least one copy of that content. We extend the concurrent search approach to locate digital content efficiently. In addition, we propose an opportunistic concurrent search scheme in which a base station can use a single channel to page a number of mobile stations simultaneously. We use computer simulations to evaluate the performance and justify the use of the proposed schemes.
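The benefit of paging several stations on one channel can be sketched with a simple round count. This is a loose illustration of the idea under assumed parameters, not the paper's protocol or its analytical model.

```python
import math

def paging_rounds(num_candidates, channels, concurrency):
    """Rounds of paging needed to probe num_candidates mobile stations
    that might hold a copy of the content. With the opportunistic scheme,
    one channel pages `concurrency` stations at once, so each round
    covers channels * concurrency stations."""
    per_round = channels * concurrency
    return math.ceil(num_candidates / per_round)

print(paging_rounds(12, 2, 1))  # one station per channel: 6 rounds
print(paging_rounds(12, 2, 3))  # page 3 stations per channel: 2 rounds
```

Fewer rounds translate directly into lower search delay for the requested content, at the cost of resolving contention when several paged stations respond.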
125

Adaptive Personal Mobile Communication, Service Architecture and Protocols.

Kanter, Theo January 2001 (has links)
No description available.
126

Robust optimization and machine learning tools for adaptive transmission in wireless networks

Yun, Sung-Ho 01 February 2012 (has links)
Current and emerging wireless systems require adaptive transmission to improve throughput, meet QoS requirements, and maintain robust performance. However, finding the optimal transmit parameters is becoming more difficult due to the growing number of wireless devices sharing the wireless medium and the increasing dimensionality of the transmit parameters, e.g., the frequency, time, and spatial domains. The performance of adaptive transmission policies derived from a given set of measurements degrades when the environment changes; the policies need either to build in protection against those changes or to tune themselves accordingly. Moreover, adaptation for systems that exploit transmit diversity with fine-grained resource allocation is hard to derive, owing to the prohibitively large number of explicit and implicit environmental variables involved, and solutions to simplified versions of the problem often fail due to incorrect assumptions and approximations. In this dissertation, we propose two tools for adaptive transmission in changing, complex environments. We show that adjustable robust optimization adds protection to adaptive resource allocation in interference-limited cellular broadband systems, yet retains the flexibility to tune the allocation according to temporally changing demand. The second tool is based on a data-driven approach, support vector machines. We develop adaptive transmission policies that select the right set of transmit parameters in MIMO-OFDM wireless systems. While we do not explicitly consider all the relevant parameters, the learning-based algorithms implicitly take them all into account and yield adaptation policies that fit the given environment. We extend the result to multicast traffic and show that the distributed algorithm, combined with the data-driven approach, increases system performance while keeping the required overhead for information exchange bounded.
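The data-driven policy can be caricatured as a classifier mapping a channel measurement to a transmit choice. A tiny nearest-centroid rule stands in for the support-vector machinery here; the training data, labels, and MCS names are all hypothetical.

```python
def nearest_centroid(samples):
    """Toy stand-in for a learned transmit-parameter policy: summarise
    labelled channel measurements by per-class mean SNR, then classify a
    new measurement by the nearer centroid.
    samples: list of (snr_db, mcs_label)."""
    by_class = {}
    for snr, mcs in samples:
        by_class.setdefault(mcs, []).append(snr)
    centroids = {mcs: sum(v) / len(v) for mcs, v in by_class.items()}

    def policy(snr):
        return min(centroids, key=lambda m: abs(snr - centroids[m]))
    return policy

# Hypothetical training data: past transmissions and the MCS that
# maximised goodput at each measured SNR.
policy = nearest_centroid([(5, "QPSK"), (8, "QPSK"), (12, "QPSK"),
                           (18, "64QAM"), (22, "64QAM"), (25, "64QAM")])
print(policy(23), policy(6))  # prints "64QAM QPSK"
```

The appeal of the learning approach described in the abstract is that the decision boundary is fit to whatever hidden environmental variables shaped the training measurements, rather than to an explicit channel model.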
127

Low-overhead cooperation to mitigate interference in wireless networks

Peters, Steven Wayne 23 October 2013 (has links)
Wireless cellular networks, which serve a large area by geographically partitioning users, suffer from interference from adjacent cells transmitting in the same frequency band. This interference can theoretically be completely mitigated via transceiver cooperation in both the uplink and downlink. Optimally, the base stations serving the users can utilize high-capacity backbones to jointly transmit and receive all the data in the network across all the base stations. In reality, the backbone connecting the base stations has finite capacity, limiting joint processing to localized clusters. Even with joint processing on a small scale, the overhead involved in sharing data between multiple base stations is large and time-sensitive. Other forms of cooperation have been shown to require less overhead while providing much of the performance benefit of interference mitigation. One particular strategy, called interference alignment (IA), has been shown to exploit all the spatial degrees of freedom in the channel when data cannot be shared among base stations. Interference alignment was developed for the multi-user interference channel to exploit independent channel observations when all of the links in the network have high signal-to-noise ratio (SNR), and it assumes that all the nodes utilizing the physical resources participate in the cooperative protocol. When some or all of the links are at moderate SNR, or when there are non-cooperating users, IA is suboptimal. In this dissertation, I take three approaches to addressing these drawbacks of IA. First, I develop cooperative transmission strategies that outperform IA in various operating regimes, including at low-to-moderate SNR and in the presence of non-cooperating users; these strategies have the same complexity and overhead as IA. I then develop algorithms for network partitioning that directly consider the overhead of cooperative strategies.
Partitioning balances the capacity gains of cooperation against the overhead required to achieve them. Finally, I develop the shared relaying model, which is equivalent to the interference channel but with a single multi-antenna relay mediating communications between transceivers. The shared relay requires less overhead and cooperation than interference alignment but requires added infrastructure. It is shown to outperform conventional relaying strategies in cellular networks with a fixed total number of relay antennas.
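The degrees-of-freedom claim for IA can be made concrete with a back-of-envelope comparison against orthogonal sharing. The K/2 figure is the Cadambe-Jafar result for the K-user single-antenna interference channel; this is an idealised high-SNR statement, not a claim about any finite-SNR system.

```python
def sum_dof(num_users, strategy):
    """Total spatial degrees of freedom (capacity pre-log factor) for a
    K-user single-antenna interference channel under idealised
    assumptions: orthogonal sharing (e.g., TDMA) yields 1 in total,
    while interference alignment achieves K/2 (Cadambe-Jafar)."""
    if strategy == "tdma":
        return 1.0
    if strategy == "ia":
        return num_users / 2.0
    raise ValueError("unknown strategy: " + strategy)

print(sum_dof(4, "ia"), sum_dof(4, "tdma"))  # prints "2.0 1.0"
```

The gap widens linearly with the number of users, which is why IA is attractive, and why its breakdown at moderate SNR or with non-cooperating users, as discussed above, matters so much in practice.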
128

Tattle - "Here's How I See It" : Crowd-Sourced Monitoring and Estimation of Cellular Performance Through Local Area Measurement Exchange

Liang, Huiguang 01 May 2015 (has links)
The operating environment of cellular networks can be in a constant state of change due to variations and evolutions in technology, subscriber load, and physical infrastructure. One cellular operator we interviewed described two key difficulties. First, they are unable to monitor the performance of their network in a scalable, fine-grained manner. Second, they find it difficult to monitor the service quality experienced by each user equipment (UE); consequently, they are unable to effectively diagnose performance impairments on a per-UE basis. They currently expend considerable manual effort monitoring their network through controlled, small-scale drive testing. If this is not performed satisfactorily, they risk losing subscribers, as well as possible penalties from regulators. In this dissertation, we propose Tattle, a distributed, low-cost participatory sensing framework for the collection and processing of UE measurements. (To tattle, as a verb, is to gossip idly: by letting devices communicate their observations with one another, we explore the kinds of insights that can be elicited from this peer-to-peer exchange.) Tattle is designed to solve three problems: coverage monitoring (CM), service-quality monitoring (QM), and per-device service-quality estimation and classification (QEC). In Tattle, co-located UEs exchange uncertain location information and measurements using local-area broadcasts. This preserves the co-location context of the measurements, and it allows us to develop U-CURE, together with a delay-adjusted variant, to discard erroneously-localized samples and to reduce localization errors, respectively. It allows operators to generate timely, high-resolution, accurate monitoring maps, and thereby to make informed, expedient network-management decisions, from adjusting base-station parameters to making long-term infrastructure investments. We also propose a comprehensive statistical framework that allows an individual UE to estimate and classify its own network performance. In our approach, each UE monitors its recent measurements together with those reported by co-located UEs.
Through our framework, a UE can then automatically determine whether an observed impairment is endemic among other co-located devices. Subscribers experiencing isolated impairments can then take limited remedial steps, such as rebooting their devices. We demonstrate Tattle's effectiveness by presenting key results using up to millions of real-world measurements, collected systematically with current generations of commercial off-the-shelf (COTS) mobile devices. For CM, we show that in urban built-up areas, GPS locations reported by UEs may have significant uncertainties and can sometimes be several kilometers away from the true locations. We describe how U-CURE takes into account reported location uncertainty and the knowledge of measurement co-location to remove erroneously-localized readings. This allows us to retain measurements with very high location accuracy and, in turn, to derive accurate, fine-grained coverage information, so that operators can respond to specific areas with coverage issues in a timely manner. Using our approach, we showcase high-resolution results of actual coverage conditions in selected areas of Singapore. For QM, we show that localization performance in COTS devices may exhibit non-negligible correlation with network round-trip delay, resulting in localization errors of up to 605.32 m per 1,000 ms of delay. Naïve approaches that blindly accept measurements with their reported locations will therefore admit grossly mis-localized data points, degrading the fidelity of any geo-spatial monitoring information derived from such data sets. We demonstrate that the popular localization approach of combining the Global Positioning System with network-assisted localization may increase the median root-mean-square (rms) error by over 60%, compared to using the Global Positioning System on its own.
We propose a network-delay-adjusted variant of U-CURE to cooperatively improve the localization performance of COTS devices, and show improvements of up to 70% in median rms location error, even under uncertain real-world network delay conditions, with just 3 participating UEs. This allows us to refine the purported locations of delay measurements and, as a result, derive accurate, fine-grained, actionable cellular quality information. Using this approach, we present cellular network delay maps of much higher spatial resolution than those naively derived from raw data. For QEC, we report on the delay performance of co-located devices subscribed to 2 cellular network operators in Singapore, and describe the results of applying our proposed approach to the QEC problem on real-world measurements of over 443,500 data points. We illustrate examples where "normal" and "abnormal" performance occurs in real networks, and report instances where a device experiences a complete outage while none of its neighbors are affected. We give quantitative results on how well our algorithm detects an "abnormal" time series, with effectiveness increasing with the number of co-located UEs: with just 3 UEs, we achieve a median detection accuracy of just under 70%; with 7 UEs, just under 90%.
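The QEC idea — deciding whether an impairment is isolated or endemic by comparing against co-located peers — can be sketched as follows. The median comparison and the factor-of-3 threshold are illustrative assumptions, not the dissertation's statistical test.

```python
from statistics import median

def is_abnormal(own_delays_ms, neighbour_delays_ms, factor=3.0):
    """Sketch of per-device classification: a UE flags its own
    performance as abnormal when its recent median delay far exceeds
    the median of measurements reported by co-located UEs."""
    if not neighbour_delays_ms:
        return False  # no co-located evidence either way
    return median(own_delays_ms) > factor * median(neighbour_delays_ms)

neigh = [45, 50, 60, 55, 48]                 # ms, reported by co-located UEs
print(is_abnormal([52, 58, 49], neigh))      # prints "False": in line with peers
print(is_abnormal([900, 1200, 850], neigh))  # prints "True": isolated impairment
```

A device that trips this test while its neighbours do not — the "complete outage" case reported above — knows the problem is local and can try a limited remedy such as rebooting, rather than blaming the network.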
129

An enhanced cross-layer routing protocol for wireless mesh networks based on received signal strength

Amusa, Ebenezer Olukayode January 2010 (has links)
This research presents an enhanced cross-layer routing solution for Wireless Mesh Networks (WMNs) based on received signal strength. WMN is an emerging technology with varied applications, owing to inherent advantages ranging from self-organisation to auto-configuration. Routing in WMNs is fundamentally achieved by hop counts, which have been shown to be deficient in terms of network performance. The need for a better link-quality metric to improve network performance has been a growing concern in recent times, and cross-layer routing is one of the identified methods of improving the routing process in wireless technology. This work presents an RSSI-aware routing metric implemented on Optimized Link State Routing (OLSR) for WMNs. The Received Signal Strength Indication (RSSI) reported by the mesh nodes is extracted, processed, transformed, and incorporated into the routing process, in order to estimate link quality efficiently for network path selection and improved network performance. The measured RSSI data is filtered by an Exponentially Weighted Moving Average (EWMA) filter. The resulting routing metric is called RSSI-aware ETT (rETT). The performance of rETT is then optimised and the results are compared with the fundamental hop-count metric and with the link-quality metric based on Expected Transmission Count (ETX). Analysis of the statistical data reveals characteristics of the RSSI samples and link conditions: the variability of the samples is a function of interference and multi-path effects on the link. The implementation results show that rETT is more intelligent at choosing good network paths for packets than hop-count and ETX estimations. rETT more than doubles network throughput (a 120% gain) compared to hop counts, and improves it by 21% compared to ETX.
An improvement of 33% in network delay was also achieved compared to hop counts, 28% better than ETX. This work brings another perspective to link-quality metric solutions for WMNs by using RSSI to drive the metric of the wireless routing protocol. It was carried out on test-beds, so the results obtained are realistic and practical. The proposed metric has shown improved performance over the classical hop-count metric and the ETX link-quality metric.
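The EWMA filtering step named above is standard and easy to sketch. The weight alpha = 0.2 is an illustrative choice; the thesis tunes its own smoothing weight.

```python
def ewma(samples, alpha=0.2):
    """Exponentially Weighted Moving Average, as used to smooth the raw
    RSSI stream before it feeds a link-quality metric such as rETT.
    Each output blends the new sample with the running average:
    avg <- alpha * sample + (1 - alpha) * avg."""
    smoothed = []
    avg = samples[0]
    for s in samples:
        avg = alpha * s + (1 - alpha) * avg
        smoothed.append(avg)
    return smoothed

# Noisy RSSI readings (dBm): the filter damps the transient -95 dBm dip,
# so one deep fade does not swing the routing metric.
print(ewma([-70, -72, -95, -71, -70]))
```

A small alpha favours stability (route flapping is expensive in OLSR, since topology changes are flooded network-wide), while a large alpha tracks genuine link degradation faster; the variability analysis in the abstract is exactly what informs that trade-off.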
130

Improving the performance of wireless networks using frame aggregation and rate adaptation

Kim, Won Soo, 1975- 09 February 2011 (has links)
As the data rates supported by the physical layer increase, overheads increasingly dominate the throughput of wireless networks. A promising approach to reducing overheads is to group a number of frames together into one transmission, which reduces the impact of overheads by sharing headers and the time spent waiting to gain access to the transmission floor. Traditional aggregation schemes require that all aggregated frames be destined for the same receiver. These approaches neglect the fact that transmissions are broadcast, so a single transmission will potentially be received by many receivers. Thus, by taking advantage of the broadcast nature of wireless transmissions, overheads can be amortized over more data, achieving a greater performance gain. To show this, we design a series of MAC-based aggregation protocols that take advantage of rate adaptation and the broadcast nature of wireless transmissions. We first present the design of a system that can aggregate both unicast and broadcast frames; the system can also classify TCP ACK segments so that they can be aggregated with TCP data flowing in the opposite direction. Second, we develop a rate-adaptive frame aggregation scheme that finds the best aggregation size by tracking it from received data frames and the data rate chosen by rate adaptation. Third, we develop a multi-destination frame aggregation scheme that aggregates broadcast frames and unicast frames destined for different receivers using delayed ACKs; the delayed-ACK scheme allows multiple receivers to control the transmission times of their ACKs. Finally, we extend multi-destination rate-adaptive frame aggregation to allow piggybacking of various types of metadata with user packets, which promises to lower the impact of metadata-based control protocols on data transport.
A novel aspect of our work is that we implement and validate the designs not through simulation but on our wireless node prototype, Hydra, which supports a high-performance PHY based on 802.11n. To validate our designs, we conduct extensive experiments on both real and emulator-based channels and measure system performance.
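Why aggregation size should track the rate chosen by rate adaptation can be shown with a rough sizing rule: the faster the PHY rate, the more payload fits into a fixed airtime budget, so per-transmission overhead is amortised over more data. The airtime and overhead figures below are illustrative, not 802.11n or Hydra constants.

```python
def best_aggregate_bytes(phy_rate_mbps, max_txop_us=1000, overhead_us=50):
    """Rough rate-adaptive aggregation sizing: fill a fixed transmission
    opportunity (max_txop_us) after subtracting fixed per-transmission
    overhead (preamble, headers, contention), at the PHY rate chosen by
    rate adaptation. Mbps * us / 8 gives bytes."""
    payload_us = max_txop_us - overhead_us
    return int(payload_us * phy_rate_mbps / 8)

print(best_aggregate_bytes(6))    # low legacy rate: small aggregate
print(best_aggregate_bytes(130))  # high 802.11n rate: ~20x larger aggregate
```

This is why a fixed aggregation size is wasteful: sized for a low rate it leaves airtime unused at high rates, and sized for a high rate it monopolises the channel at low rates.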
