41

ON THE USE OF NATURAL LANGUAGE PROCESSING FOR AUTOMATED CONCEPTUAL DATA MODELING

Du, Siqing (13 August 2008)
This research involved the development of a natural language processing (NLP) architecture for the extraction of entity-relationship diagrams (ERDs) from natural language requirements specifications. Conceptual data modeling plays an important role in database and software design, and many approaches to automating this process and developing software tools for it have been attempted. NLP approaches to this problem appear plausible because, compared to general free text, natural language requirements documents are relatively formal and exhibit special regularities that reduce the complexity of the problem. The approach taken here involves a loose integration of several linguistic components. Outputs from syntactic parsing are fed to a set of heuristic rules developed for this particular domain to produce tuples representing the underlying meanings of the propositions in the documents, and semantic resources are used to distinguish between correct and incorrect tuples. Finally, the tuples are integrated into full ERD representations. The major challenge addressed in this research is how to bring the various resources to bear on the translation of the natural language documents into the formal language. This system is taken to be representative of a potential class of similar systems designed to translate documents in other restricted domains into corresponding formalisms. The system is incorporated into a tool that presents the final ERDs to users, who can modify them in the attempt to produce an accurate ERD for the requirements document. An experiment demonstrated that users with limited experience in ERD specification could produce better representations of requirements documents with the system than without it, and could do so in less time.
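A minimal sketch of the tuple-extraction step described above, assuming a toy pattern-matching stand-in for the syntactic parser and heuristic rules (the sentence pattern, function names, and example sentences are invented for illustration):

```python
import re

# Toy heuristic: in restricted requirements prose, a sentence such as
# "A customer places one or more orders" yields a candidate tuple
# (entity, relationship, entity). The dissertation uses a full syntactic
# parser plus domain heuristics; this regex stand-in is illustrative only.
PATTERN = re.compile(
    r"^(?:A|An|Each|Every)\s+(\w+)\s+(\w+s)\s+(?:one or more|a|an|many)\s+(\w+)",
    re.IGNORECASE,
)

def extract_tuples(sentences):
    tuples = []
    for s in sentences:
        m = PATTERN.match(s.strip())
        if m:
            subject, verb, obj = m.groups()
            # Naive singularization of the object noun.
            tuples.append((subject.lower(), verb.lower(), obj.lower().rstrip("s")))
    return tuples

requirements = [
    "A customer places one or more orders.",
    "Each order contains many items.",
]
print(extract_tuples(requirements))
# [('customer', 'places', 'order'), ('order', 'contains', 'item')]
```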
42

Risk-based Survivable Network Design

Vajanapoom, Korn (25 September 2008)
Communication networks are part of the critical infrastructure upon which society and the economy depend; it is therefore crucial for communication networks to survive failures and physical attacks in order to provide critical services. Survivability techniques are deployed to ensure the functionality of communication networks in the face of failures. The basic approach to designing survivable networks is that, given a survivability technique (e.g., link protection or path protection), the network is designed to survive a set of predefined failures (e.g., all single-link failures) at minimum cost. A hidden assumption in this approach, however, is that sufficient monetary funds are available to protect against all predefined failures, which might not be the case in practice, as network operators may have a limited budget for improving network survivability. To overcome this limitation, this dissertation proposes a new approach to designing survivable networks, namely risk-based survivable network design, which integrates risk analysis techniques into an incremental network design procedure with budget constraints. In the risk-based design approach, the basic design problem is: given a working network and a fixed budget, how best to allocate the budget for deploying a survivability technique in different parts of the network based on risk. The term risk captures two related quantities: the likelihood of a failure or attack, and the amount of damage it causes. Various designs with different risk-based objectives are considered, for example minimizing the expected damage, minimizing the maximum damage, and minimizing a measure of the variability of damage that could occur in the network. A design methodology for the proposed risk-based survivable network design approach is presented. The design problems are formulated as Integer Programming (InP) models, and, to make the solution of these models scale, greedy heuristic algorithms are developed. Numerical results and analysis illustrating different risk-based designs are presented.
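As a rough illustration of the budget-constrained allocation problem (the greedy rule, costs, and probabilities below are invented; the dissertation formulates the problems as integer programs and develops its own heuristics), one could spend a fixed budget on the protections offering the best expected-damage reduction per unit cost, where risk = P(failure) * damage:

```python
# A minimal greedy sketch of risk-based budget allocation (illustrative,
# not the dissertation's integer-programming formulation). Each candidate
# protection has a cost and removes the risk of one failure scenario.

def greedy_allocate(candidates, budget):
    """candidates: list of (name, cost, failure_prob, damage)."""
    # Rank by expected-damage reduction per unit of cost.
    ranked = sorted(candidates, key=lambda c: (c[2] * c[3]) / c[1], reverse=True)
    chosen, spent = [], 0.0
    for name, cost, prob, damage in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

links = [
    ("link-A", 10.0, 0.02, 5000.0),   # high damage, cheap to protect
    ("link-B", 25.0, 0.01, 3000.0),
    ("link-C", 15.0, 0.05, 1000.0),
]
print(greedy_allocate(links, budget=30.0))
# (['link-A', 'link-C'], 25.0)
```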
43

On Detection Mechanisms and Their Performance for Packet Dropping Attack in Ad Hoc Networks

Anusas-amornkul, Tanapat (11 September 2008)
Ad hoc networking has received considerable attention in the research community as a way to provide seamless communication without an existing infrastructure network. However, such networks are not designed with security protection in mind, and they are prone to several security attacks. One simple attack is the packet dropping attack, in which a malicious node drops all data packets while participating normally in routing information exchange. This attack is easy to deploy and can significantly reduce the throughput of an ad hoc network. In this dissertation, we study this problem through analysis and simulation. The packet dropping attack can result from the behavior of a selfish node or from malicious nodes launching blackhole or wormhole attacks; we are interested only in detecting the attack, not in its causes. For simple static ad hoc networks, we analyze the throughput drop due to this attack along with its improvement when the attack is mitigated. A watchdog mechanism and a newly proposed "cop" mechanism are studied for mitigating the throughput degradation after the attack is detected. The watchdog mechanism is a detection mechanism that typically has to be implemented in every node in the network. The cop mechanism is similar, but only a few nodes opportunistically detect malicious nodes instead of all nodes performing this function. For multiple flows in static and mobile ad hoc networks, simulations are used to study and compare both mechanisms. The study shows that the cop mechanism can improve the throughput of the network while reducing the detection load and complexity for other nodes.
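A minimal sketch of the watchdog bookkeeping idea, with assumed thresholds (the dissertation's mechanisms and parameters differ): a node counts packets handed to each next hop and how many of them it subsequently overhears being forwarded, flagging neighbors whose forwarding ratio is too low.

```python
from collections import defaultdict

# Illustrative watchdog state kept by one node; threshold and sample
# minimum are arbitrary assumptions, not the dissertation's values.
class Watchdog:
    def __init__(self, threshold=0.5, min_samples=20):
        self.sent = defaultdict(int)       # packets handed to each neighbor
        self.forwarded = defaultdict(int)  # packets overheard being relayed
        self.threshold = threshold
        self.min_samples = min_samples

    def record(self, neighbor, overheard):
        self.sent[neighbor] += 1
        if overheard:
            self.forwarded[neighbor] += 1

    def suspects(self):
        return [n for n, s in self.sent.items()
                if s >= self.min_samples
                and self.forwarded[n] / s < self.threshold]

wd = Watchdog()
for _ in range(30):
    wd.record("node-7", overheard=False)   # node-7 silently drops packets
    wd.record("node-3", overheard=True)
print(wd.suspects())  # ['node-7']
```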
44

Channel Access Management in Data Intensive Sensor Networks

Lin, Chih-kuang (11 September 2008)
There are considerable challenges for channel access in Data Intensive Sensor Networks (DISNs) supporting data-intensive applications such as Structural Health Monitoring (SHM). As the data load increases, the key performance parameters of such sensor networks degrade considerably: the successful packet delivery ratio drops due to frequent collisions and retransmissions, and the data glut increases overall latency and energy consumption. Given the severe limits on sensor node resources such as battery power, excessive transmissions in response to sensor queries can lead to premature network death. Beyond a certain load threshold, the performance characteristics of traditional WSNs become unacceptable. Prior research indicates that the successful packet delivery ratio in 802.15.4 networks can drop from 95% to 55% as the offered network load increases from 1 packet/sec to 10 packets/sec. This result, together with the fact that sensors in an SHM system commonly generate 6-8 packets/sec of vibration data, makes it important to design appropriate channel access schemes for such data-intensive applications. In this work, we address the problem of significant performance degradation in a special-purpose DISN. Our specific focus is on the medium access control (MAC) layer, since it gives fine-grained control over channel access and over reducing energy waste. The goal of this dissertation is to design and evaluate a suite of channel access schemes that ensure graceful performance degradation in special-purpose DISNs as the network traffic load increases. First, we present a case study that investigates two distinct MAC proposals, one based on random access and one on scheduled access. The results of the case study motivate the development of hybrid access schemes. Next, we introduce novel hybrid channel access protocols for DISNs, ranging from a simple randomized transmission scheme that is robust under channel and topology dynamics to one that uses limited topological information about neighboring sensors to minimize collisions and energy waste. The protocols combine randomized transmission with heuristic scheduling to alleviate network performance degradation due to excessive collisions and retransmissions. We then propose a grid-based access scheduling protocol for a mobile DISN that is scalable and decentralized; it handles sensor mobility efficiently, with acceptable data loss and limited overhead. Finally, we extend the randomized transmission protocol from the hybrid approaches into an adaptable probability-based data transmission method. This work combines probabilistic transmission with heuristics, namely Latin squares and a grid network, to tune the transmission probabilities of sensors and thus meet specific performance objectives in DISNs. All of the proposed protocols are evaluated analytically and through simulation.
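As a toy illustration of the randomized-transmission ingredient in the hybrid schemes (the slotted model and numbers are assumptions, not the dissertation's protocols), a slot succeeds only when exactly one of n contenders transmits, which is why tuning the per-sensor transmission probability matters:

```python
import random

# Monte Carlo estimate of the per-slot success rate when each of
# n_sensors transmits independently with probability p; a slot succeeds
# only if exactly one sensor transmits (otherwise idle or collision).
def success_rate(n_sensors, p, slots=100_000, seed=1):
    rng = random.Random(seed)
    successes = 0
    for _ in range(slots):
        transmitters = sum(rng.random() < p for _ in range(n_sensors))
        successes += transmitters == 1
    return successes / slots

for p in (0.05, 0.10, 0.30):
    print(f"n=10, p={p:.2f}: success rate = {success_rate(10, p):.3f}")
# p = 1/n maximizes the single-transmitter probability (about 0.387 for n=10).
```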
45

Sync & Sense Enabled Adaptive Packetization VoIP

Ngamwongwattana, Boonchai (29 June 2007)
The quality and reliability problems of VoIP stem from the fact that VoIP relies on the network to transport voice packets. The inherent problem is a mismatch between VoIP and the network: VoIP has strict requirements on bandwidth, delay, and loss, but the network (particularly a best-effort network) cannot guarantee them. One solution is to enhance VoIP with adaptive-rate control, called adaptive-rate VoIP, which detects the state of the network and adjusts the transmission accordingly. This gives VoIP the intelligence to optimize its performance and makes it resilient and robust to the service offered by the network. The objective of this dissertation is to develop an adaptive-rate VoIP system, and we take a comprehensive approach to its study and development. Adaptive-rate VoIP is generally composed of three components: rate adaptation, network state detection, and adaptive-rate control. For the rate adaptation component, we study optimizing packetization, which can serve as an alternative means of rate adaptation. An advantage is that rate adaptation becomes independent of the speech coder, so an adaptive-rate VoIP system can be based on any constant-bitrate speech coder. The study shows that VoIP performance is primarily affected by three factors: packetization, network load, and the significance of VoIP traffic; optimizing packetization allows us to ensure the highest possible performance. For the network state detection component, we propose a novel measurement methodology called Sync & Sense of a periodic stream. Sync & Sense is unique in that it can virtually synchronize the transmission and reception timing of a VoIP session without requiring synchronized clocks. Simulation results show that Sync & Sense can accurately measure one-way network delay; it can also estimate the available network bandwidth and the full spectrum of delays of the VoIP session. For the adaptive-rate control component, we consider the design choices and develop an adaptive-rate control that makes use of the first two components. The integration of the three components is a novel adaptive-rate VoIP system called Sync & Sense Enabled Adaptive Packetization VoIP. Simulation results show that our adaptive VoIP can optimize performance under any given network condition and delivers better performance than traditional VoIP. They also demonstrate that it possesses desirable properties, including fast response to network conditions, aggressiveness in competing for its needed share of bandwidth, TCP-friendliness, and fair bandwidth allocation.
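A sketch of the general idea behind measuring delay variation in a periodic stream without synchronized clocks (the actual Sync & Sense mechanism differs in detail; the timestamps here are invented): since packet i leaves every T seconds, arrival_i - i*T equals the one-way delay plus an unknown clock offset, and subtracting the minimum over the stream removes the offset.

```python
# Relative one-way delay recovery from a periodic stream. Packet i is
# sent at time i*T on the sender's clock; arrival_i - i*T is therefore
# delay plus a constant sender/receiver clock offset, which cancels when
# we subtract the minimum. Timestamps below are invented.
T = 0.020  # 20 ms packetization interval

arrivals = [0.105, 0.127, 0.151, 0.166, 0.189]  # receiver clock (seconds)
raw = [a - i * T for i, a in enumerate(arrivals)]
base = min(raw)
relative_delay_ms = [round((r - base) * 1000, 1) for r in raw]
print(relative_delay_ms)  # [0.0, 2.0, 6.0, 1.0, 4.0]
```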
46

A Location Fingerprint Framework Towards Efficient Wireless Indoor Positioning Systems

Swangmuang, Nattapong (08 January 2009)
The location of mobile computers, potentially indoors, is essential information for enabling location-aware applications in wireless pervasive computing. The popularity of wireless local area networks (WLANs) inside and around buildings makes positioning systems based on readily available received signal strength (RSS) from access points (APs) desirable. The fingerprinting technique associates location-dependent characteristics, such as the RSS values from multiple APs, with a location (a location fingerprint) and uses these characteristics to infer the location. The collection of RSS fingerprints from different locations is stored in a database called a radio map, which is later compared against an observed RSS sample vector to estimate the mobile station's (MS's) location. An important challenge for location fingerprinting is how to efficiently collect fingerprints and construct an effective radio map for different indoor environments. In addition, analytical models for evaluating and predicting the precision of indoor positioning systems based on location fingerprinting are lacking. In this dissertation, we provide a location fingerprint framework that enables the construction of efficient wireless indoor positioning systems. We develop a new analytical model that employs a proximity graph to predict the performance of indoor positioning systems based on location fingerprinting. The model approximates the probability distribution of error distance given an RSS location fingerprint database and its associated statistics, and it allows a system designer to analyze the internal structure of location fingerprints. The analytical model is employed to identify and eliminate unnecessary location fingerprints stored in the radio map, thereby saving computation during location estimation. Using location fingerprint properties such as clustering is also shown to reduce computational effort and yield a more scalable design. Finally, by comparing actual measurements with the analytical results, a useful guideline for collecting fingerprints is given.
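A minimal nearest-neighbor fingerprint matcher over a toy radio map (locations and RSS values are invented; real radio maps store per-AP statistics rather than single samples):

```python
import math

# Radio map: location -> RSS vector (dBm) observed from three APs.
radio_map = {
    "room-101": (-45.0, -60.0, -72.0),
    "room-102": (-58.0, -48.0, -70.0),
    "hallway":  (-52.0, -55.0, -61.0),
}

def locate(observed):
    # Return the stored fingerprint nearest to the observed RSS sample
    # in Euclidean distance.
    return min(radio_map, key=lambda loc: math.dist(observed, radio_map[loc]))

print(locate((-50.0, -54.0, -63.0)))  # 'hallway'
```

Eliminating fingerprints whose removal barely changes such nearest-neighbor decisions is, in spirit, what the proximity-graph analysis above enables.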
47

SIGNALING OVERLOAD CONTROL FOR WIRELESS CELLULAR NETWORKS

Sasanus, Saowaphak (16 January 2009)
As the worldwide cellular phone market grows, many subscribers have come to rely on cellular phone services. In catastrophes or mass call-in situations, the load can exceed what the cellular network can support, and the entire network may become completely non-functional. This raises serious concerns about the survivability of wireless cellular networks and their ability to provide essential services, such as 911 calls, in those circumstances. Under high load, overload control must be deployed to reserve network resources for emergency traffic and maintenance services. Over the past several years, many catastrophes have revealed the deficiencies of the existing overload control mechanisms in cellular networks, and improvements are needed to cope with unexpected situations. Because a key to the survivability of wireless cellular networks lies in the signaling services from database servers that support a call connection throughout its duration (e.g., monitoring users' locations and supplying authentication codes for secure communications), this dissertation focuses on overload control at the database servers. Since the loss of different signaling services affects a user's perception differently, the proposed overload control provides differentiated and guaranteed classes of signaling service. Specifically, multi-class token rate controls are proposed, owing to their flexibility across network configurations and their advantages over other controls such as percentage blocking and call gapping. The concept of adaptive control decisions is used so that the proposed controls react quickly to changes in load. A simulation-based performance evaluation of the proposed controls is conducted and compared with existing controls. The proposed controls are shown to outperform existing multi-class token-based controls for two reasons. First, they use adaptive resource sharing that guarantees a lower bound, with the percentage of resource sharing among classes set adaptively; existing token rate controls either distribute resources among classes using static ratios, which guarantees per-class quality of service but lowers the total utilization of the server, or share resources completely among classes, which may cause large load fluctuations in each class. Second, the proposed controls use the novel concept of integrating information on the availability of radio resources into the control decision, allowing servers to avoid spending resources on signaling that might later be dropped because radio resources are unavailable.
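A toy multi-class token-rate control with static guaranteed shares (rates, shares, and the burst cap are assumptions; the dissertation's controls additionally adapt the shares and consult radio-resource availability):

```python
# Each signaling class gets a guaranteed token rate; an admitted request
# consumes one token, and requests without tokens are throttled.
class TokenRateControl:
    def __init__(self, shares, total_rate):
        # shares: class name -> guaranteed fraction of server capacity
        self.rates = {c: f * total_rate for c, f in shares.items()}
        self.tokens = {c: 0.0 for c in shares}
        self.burst = 10.0  # token cap per class

    def refill(self, dt):
        for c, r in self.rates.items():
            self.tokens[c] = min(self.burst, self.tokens[c] + r * dt)

    def admit(self, cls):
        if self.tokens[cls] >= 1.0:
            self.tokens[cls] -= 1.0
            return True
        return False  # request throttled

ctrl = TokenRateControl({"emergency": 0.5, "normal": 0.5}, total_rate=100.0)
ctrl.refill(dt=0.1)  # 0.1 s elapsed -> 5 tokens per class
print([ctrl.admit("emergency") for _ in range(6)])
# [True, True, True, True, True, False]
```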
48

COMPLEMENTING THE GSP ROUTING PROTOCOL IN WIRELESS SENSOR NETWORKS

Calle Torres, Maria Gabriela (11 May 2009)
The Gossip-Based Sleep Protocol (GSP) is a routing protocol in the flooding family whose overhead comes from duplicate packets. GSP has no other sources of overhead and none of the additional information requirements common in routing protocols, such as routing packets, geographical information, addressing, or explicit route computation. Because of this simple functionality, GSP is a candidate routing protocol for wireless sensor networks. However, previous research showed that GSP consumes the majority of the network's energy by keeping nodes' radios on, ready to receive even when there are no transmissions, a situation known as idle listening. Complementing GSP means creating additional protocols that exploit GSP's particular characteristics to improve performance without additional overhead. This research analyzes the performance of GSP for different topologies, numbers of hops from source to destination, and node densities, and presents an alternative protocol that complements GSP by decreasing idle listening, the number of duplicate packets in the network, and overall energy consumption. The study compares the results of this alternative protocol, MACGSP6, to a protocol stack proposed for wireless sensor networks, Sensor MAC (S-MAC) with Dynamic Source Routing (DSR), showing the advantages and disadvantages of the two approaches.
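A toy gossip-style forwarding decision in the spirit of GSP's flooding family (the gossip probability is an arbitrary assumption, and GSP's actual sleep behavior is not modeled): each node rebroadcasts a newly seen packet with some probability, so duplicates, the protocol's main overhead, are traded against delivery reliability.

```python
import random

# Rebroadcast a packet not seen before with probability GOSSIP_P and stay
# silent otherwise; duplicates are never re-forwarded. Fewer rebroadcasts
# mean fewer duplicate packets at the cost of some delivery risk.
GOSSIP_P = 0.7
seen = set()
rng = random.Random(42)

def on_receive(packet_id):
    if packet_id in seen:
        return "drop-duplicate"
    seen.add(packet_id)
    return "forward" if rng.random() < GOSSIP_P else "sleep"

print([on_receive(i) for i in (1, 2, 1, 3)])
# deterministic for the fixed seed:
# ['forward', 'forward', 'drop-duplicate', 'forward']
```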
49

Towards An Optimal Core Optical Network Using Overflow Channels

Menon, Pratibha (12 May 2009)
This dissertation is based on a traditional circuit-switched core WDM network supplemented by a pool of wavelengths that carry optical burst switched overflow data. These overflow channels absorb channel overflows from the traditional circuit-switched network and also provide wavelengths for newer, high-bandwidth applications. The channel overflows that appear at the overflow layer as optical bursts are carried either over a permanently configured, primary light path or over a burst-switched, best-effort path while traversing the core network. At every successive hop along the best-effort path, an optical burst attempts to enter a primary light path to its destination. Thus, each node in the network is a Hybrid Node that provides entry for optical bursts to a hybrid path made up of a point-to-point, pre-provisioned light path or a burst-switched path. The dissertation's main outcomes are to determine the cost optimality of a Hybrid Route, to analyze the cost-effectiveness of a Hybrid Node, and to compare them to a route and a node performing non-hybrid operation, respectively. Finally, an example network consisting of several Hybrid Routes and Hybrid Nodes is analyzed for its cost-effectiveness. The cost-effectiveness and optimality of a Hybrid Route are tested for their dependency on the mean and variance of the channel demands offered to the route, the number of sources sharing the route, and the relative cost of a primary and an overflow path, called the path cost ratio. An optimality condition relating the effect of traffic statistics to the path cost ratio is analytically derived and tested. The cost-effectiveness of a Hybrid Node is compared across the different switching fabric architectures used to construct it: Broadcast-Select, Benes, and Clos architectures are each considered with different degrees of chip integration. An example Hybrid Network consisting of several Hybrid Routes and Hybrid Nodes is found to be cost-effective, with the result depending on the ratio of switching to transport costs.
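A back-of-the-envelope version of the Hybrid Route trade-off (the cost model and numbers are invented; the dissertation derives its optimality condition analytically): n dedicated primary channels are paid for whether used or not, while overflow capacity is paid per expected use at a higher unit price.

```python
# Illustrative hybrid-route cost: n primary channels cost c_p each;
# demand beyond n overflows onto burst-switched channels charged c_o per
# expected overflowed channel. All values are assumptions.
def expected_overflow(n, demands):
    """demands: dict mapping a demanded channel count to its probability."""
    return sum(p * max(d - n, 0) for d, p in demands.items())

def hybrid_cost(n, demands, c_p=1.0, c_o=2.5):
    return n * c_p + c_o * expected_overflow(n, demands)

# Bursty demand: usually 2 channels, occasionally 8.
demands = {2: 0.9, 8: 0.1}
for n in range(9):
    print(n, round(hybrid_cost(n, demands), 2))
# Cost is minimized at n = 2: provision for the typical demand and let
# overflow channels absorb the bursts.
```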
50

Technical Architectures and Economic Conditions for Viable Spectrum Trading Markets

Caicedo Bastidas, Carlos Enrique (24 July 2009)
The growing interest of telecommunication service providers in offering wireless services has spurred a correspondingly growing demand for wireless spectrum. This has made the tasks related to spectrum management more complicated, especially those related to the allocation of spectrum among competing uses and users. Economically efficient spectrum allocation and assignment requires up-to-date information on the value of spectrum. Consequently, many spectrum management authorities have been developing regulations to increase the use of market-based mechanisms for spectrum management, reducing their emphasis on command-and-control methods. Spectrum trading (ST) is a market-based mechanism in which buyers and sellers determine the assignment of spectrum and its uses; that is, it can address both the allocation and the assignment aspects of spectrum use. Assigning spectrum licenses through spectrum trading markets can grant access to spectrum to those who value it most and can use it most efficiently. For spectrum trading to be optimally effective, a secondary market must exist that allows spectrum users to choose between capital investment and spectrum use on a continuous basis, not just at the time of initial assignment. This research identifies the different technical architectures for ST markets and studies the possible behaviors and interactions in spectrum trading markets using agent-based computational economics (ACE). The research objective is to understand and determine the conditions that lead to viable spectrum trading markets. This analysis is valuable because it can help regulators prepare for plausible future scenarios and create policy instruments that promote these markets. It is also of value to wireless service providers, who can use the results of this work to understand the economic behavior of different ST market implementations and prepare strategies for participating in these markets.
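A minimal agent-based trading round in the ACE spirit (valuations, the matching rule, and prices are invented for illustration; the dissertation's ACE models are far richer): sellers post asks, buyers post bids, and a trade clears whenever the best remaining bid meets the best remaining ask.

```python
import random

# Sellers' asks and buyers' bids are drawn from invented valuation
# ranges; matching pairs the lowest asks with the highest bids and
# clears each trade at a split-the-difference price.
rng = random.Random(7)
asks = sorted(rng.uniform(1.0, 5.0) for _ in range(5))
bids = sorted((rng.uniform(1.0, 5.0) for _ in range(5)), reverse=True)

trades = []
for ask, bid in zip(asks, bids):
    if bid >= ask:
        trades.append(round((ask + bid) / 2, 2))
print(f"{len(trades)} licenses traded at prices {trades}")
```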
