141.
Efficient Information Access in Data-Intensive Sensor Networks / Sharma, Divyasheel, 23 December 2010
Recent advances in wireless communications and microelectronics have enabled the wide deployment of smart sensor networks. Such networks naturally apply to a broad range of applications that involve system monitoring and information tracking (e.g., fine-grained weather/environmental monitoring, structural health monitoring, urban-scale traffic or parking monitoring, gunshot detection, monitoring volcanic eruptions, measuring the rate of glacier melt, forest fire detection, emergency medical care, disaster response, airport security infrastructure, monitoring of children in metropolitan areas, product transition in warehouse networks, etc.).
Meanwhile, existing wireless sensor networks (WSNs) perform poorly when applications have high bandwidth needs for data transmission and stringent delay constraints on network communication. Such requirements are common for Data-Intensive Sensor Networks (DISNs) implementing Mission-Critical Monitoring (MCM) applications. We propose to enhance existing wireless network standards with flexible query optimization strategies that take into account network constraints and application-specific data delivery patterns in order to meet the high performance requirements of MCM applications.
In this respect, this dissertation makes two major contributions. First, we have developed an algebraic framework called the Data Transmission Algebra (DTA) for collision-aware concurrent data transmissions, merging the serialization concept from databases with knowledge of wireless network characteristics. We have developed an optimizer that uses the DTA framework and generates an optimal data transmission schedule with respect to latency, throughput, and energy usage. We have extended the DTA framework to handle location-based trust and sensor mobility, and we improved DTA scalability with the Whirlpool data delivery mechanism, which takes advantage of partitioning the network. Second, we propose a relaxed optimization strategy and develop an adaptive approach to deliver data in data-intensive wireless sensor networks. In particular, we have shown that local actions at nodes help the network adapt to worsening network conditions and perform better, and that local decisions at the nodes can converge towards desirable global network properties, e.g., a high packet success ratio for the network. We have also developed a network monitoring tool to assess the state and dynamic convergence of the WSN and force it towards better performance.
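The collision-aware scheduling idea above can be sketched in miniature. The DTA operators themselves are not reproduced here; this toy (all names and the conflict model are invented for illustration) treats two (sender, receiver) transmissions as conflicting when they share a node, and greedily packs transmissions into concurrent time slots:

```python
# Illustrative sketch only: a primary-conflict model where two transmissions
# cannot run concurrently if they share a node. A DTA-style optimizer would
# additionally weigh latency, throughput, and energy when ordering slots.

def conflicts(t1, t2):
    """Two (sender, receiver) transmissions conflict if they share a node."""
    return bool({t1[0], t1[1]} & {t2[0], t2[1]})

def schedule(transmissions):
    """Greedily pack transmissions into slots of mutually conflict-free sets."""
    slots = []
    for t in transmissions:
        for slot in slots:
            if all(not conflicts(t, u) for u in slot):
                slot.append(t)
                break
        else:
            slots.append([t])
    return slots

# Four transmissions; A->B and C->D share no node and can run together,
# the remaining two serialize into their own slots.
plan = schedule([("A", "B"), ("C", "D"), ("B", "C"), ("A", "C")])
```

Under this model the serialization question becomes a graph-coloring-style packing problem, which is the flavor of reasoning an algebra over transmissions makes explicit.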
142.
Cross-Layer Resilience Based on Critical Points in MANETs / Kim, Tae-Hoon, 06 January 2011
A fundamental problem in mobile ad hoc and unstructured sensor networks is maintaining connectivity. A network is connected if all nodes have a communication route (typically multi-hop) to each other. Maintaining connectivity is a challenge due to the unstructured nature of the network topology and the frequent occurrence of link and node failures caused by interference, mobility, radio channel effects, and battery limitations. In order to effectively deploy techniques that improve the resilience of sensor and mobile ad hoc networks against failures or attacks, one must be able to identify all the weak points of a network topology. Here we define the weak, or critical, points of the topology as those links and nodes whose failure results in partitioning of the network. In this dissertation, we propose a set of algorithms to identify the critical points of a network topology. Utilizing these algorithms, we study the behavior of critical points and the effect of using only local information in identifying global critical points. Then, we propose both locally and globally based resilience techniques that can improve the wireless network connectivity around critical points to lessen their importance and improve network resilience. Next, we extend the work to examine connectivity in heterogeneous wireless networks, which can result from factors such as variations in transmission power and signal propagation environments, and propose an algorithm to identify the connectivity of such a network. We also propose two schemes for constructing additional links to enhance network connectivity and evaluate network performance when random interference occurs. Lastly, we implement our resilience techniques and evaluate the resulting performance improvement.
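The critical-point definition above (links and nodes whose failure partitions the network) corresponds to articulation points and bridges in graph theory. The dissertation's algorithms are more efficient and can also work from local information only; this brute-force sketch, on an invented example topology, just makes the definition concrete:

```python
# Brute force: remove each node (or link) and test connectivity. O(V*(V+E))
# per element, so only for illustration of what "critical" means here.

def is_connected(nodes, edges):
    """DFS connectivity check over an undirected edge list."""
    nodes = set(nodes)
    if len(nodes) <= 1:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == nodes

def critical_nodes(nodes, edges):
    """Nodes whose removal partitions the network (articulation points)."""
    return [n for n in nodes
            if not is_connected([m for m in nodes if m != n],
                                [e for e in edges if n not in e])]

def critical_links(nodes, edges):
    """Links whose removal partitions the network (bridges)."""
    return [e for e in edges
            if not is_connected(nodes, [f for f in edges if f != e])]

# Triangle A-B-C with a pendant node D hanging off C: C and link C-D are
# critical, nothing else is.
nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
```

Adding one link from D to A or B would eliminate both critical points, which is exactly the kind of topology repair the resilience techniques above aim at.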
143.
Generation of Classificatory Metadata for Web Resources Using Social Tags / Syn, Sue Yeon, 23 December 2010
With the increasing popularity of social tagging systems, the potential for using social tags as a source of metadata is being explored. Social tagging systems can simplify the involvement of a large number of users and improve the metadata generation process, especially for semantic metadata. This research aims to find a method to categorize web resources using social tags as metadata. In this research, social tagging systems are a mechanism to allow non-professional catalogers to participate in metadata generation. Because social tags are not from a controlled vocabulary, there are issues that have to be addressed in finding quality terms to represent the content of a resource. This research examines ways to deal with those issues to obtain a set of tags representing the resource from the tags provided by users.
Two measures of a tag's importance are introduced. Annotation Dominance (AD) measures the extent to which users agree on a tag term. The other, Cross Resources Annotation Discrimination (CRAD), is designed to discriminate among tags across the collection and to remove tags used too broadly or too narrowly. Further, the study suggests a process to identify and manage compound tags.
The research aims to select important annotations (meta-terms) and remove meaningless ones (noise) from the tag set. This study therefore suggests two main measurements for obtaining a subset of tags with classification potential. To evaluate the proposed approach to finding classificatory metadata candidates, we rely on users' relevance judgments comparing suggested tag terms with expert metadata terms. Human judges rate the relevance of each term for the given resource on an n-point scale.
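The precise AD and CRAD formulas live in the dissertation; the stand-ins below only illustrate their stated intent, and the thresholds and example data are invented. The AD stand-in rewards user agreement on a tag within one resource; the CRAD stand-in is an IDF-style score that goes to zero for tags used either too broadly or too narrowly across the collection:

```python
import math

def annotation_dominance(tag, resource_tags):
    """Fraction of a resource's annotations that use this tag (agreement)."""
    return resource_tags.count(tag) / len(resource_tags)

def crad(tag, collection, low=0.05, high=0.5):
    """IDF-like discrimination score; 0 outside a usage-frequency band."""
    n = sum(1 for tags in collection if tag in tags)
    frac = n / len(collection)
    if frac < low or frac > high:        # too narrow or too broad to classify
        return 0.0
    return math.log(len(collection) / n)

# Toy collection: each inner list is one resource's tag set.
collection = [["python", "tutorial"], ["python", "web"],
              ["cooking"], ["python", "tutorial", "web"]]
ad = annotation_dominance("python", ["python", "python", "tutorial"])
```

Here "python" appears on three of four resources, so the CRAD stand-in discards it as too broad, while "tutorial" survives with a positive score; the AD value is 2/3 because two of three annotations agree on the tag.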
144.
An Access Control and Trust Management Framework for Loosely-Coupled Multidomain Environment / Zhang, Yue, 06 May 2011
Multidomain environments where multiple organizations interoperate with each other are becoming a reality, as can be seen in emerging Internet-based enterprise applications. Access control to ensure secure interoperation in such an environment is a crucial challenge. A multidomain environment can be categorized as tightly-coupled or loosely-coupled. The access control challenges of the loosely-coupled environment have not been studied adequately in the literature.
In a loosely-coupled environment, different domains do not know each other before they interoperate. Therefore, traditional approaches based on users' identities cannot be applied directly. Motivated by this, researchers have developed several attribute-based authorization approaches to dynamically build trust between previously unknown domains. However, these approaches all focus on building trust between individual requesting users and the resource-providing domain. We demonstrate that such approaches are inefficient when the requests are issued by a set of users assigned to a functional role in the organization. Moreover, preserving the principle of security has long been recognized as a challenging problem when facilitating interoperation. Existing research has mainly focused on solving this problem in a tightly-coupled environment, where a global policy is used to preserve the principle of security.
In this thesis, we propose a role-based access control and trust management framework for loosely-coupled environments. In particular, we allow users to specify interoperation requests in terms of requested permissions and propose several role mapping algorithms to map the requested permissions onto roles in the resource-providing domain. Then, we propose a Simplify algorithm to simplify the distributed proof procedures when a set of requests is issued according to the functions of some roles in the requesting domain. Our experiments show that our Simplify algorithm significantly simplifies such procedures when the total number of credentials in the environment is sufficiently large, which is quite common in practical applications. Finally, we propose a novel policy integration approach that uses the special semantics of hybrid role hierarchies to preserve the principle of security. The dissertation ends with a brief discussion of the implemented prototype of our framework.
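The dissertation's role mapping algorithms are not reproduced here, but the underlying problem has a simple flavor: cover a set of requested permissions with as few roles in the resource-providing domain as possible. This sketch uses a greedy set-cover heuristic on an invented role catalog:

```python
# Greedy cover sketch (illustrative only): repeatedly pick the role that
# covers the most still-uncovered requested permissions. Real role mapping
# must also respect role hierarchies and security constraints.

def map_permissions_to_roles(requested, roles):
    """Return a small list of roles jointly covering `requested`, or None."""
    uncovered, chosen = set(requested), []
    while uncovered:
        role, perms = max(roles.items(),
                          key=lambda kv: len(uncovered & set(kv[1])))
        gained = uncovered & set(perms)
        if not gained:                 # no role grants any remaining permission
            return None
        chosen.append(role)
        uncovered -= gained
    return chosen

roles = {"auditor": ["read_log"],
         "editor": ["read_doc", "write_doc"],
         "admin": ["read_doc", "write_doc", "read_log"]}
mapping = map_permissions_to_roles(["read_doc", "read_log"], roles)
```

A single "admin" role covers this request, whereas a per-user trust negotiation would repeat the same proof work for every member of the requesting role, which is the inefficiency the Simplify algorithm targets.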
145.
The Real Options Approach to Technology Management: Strategic Options for 3G Wireless Network Architecture and Technologies / Kim, Hak Ju, 17 February 2011
The increasing demands for high-quality multimedia services have challenged the wireless industry to rapidly develop wireless network architecture and technologies. These demands have led wireless service providers to struggle with the current network migration dilemma of how to best deliver high-quality multimedia services. Currently, there are many alternative wireless network technologies, such as TDMA, GSM, GPRS, EDGE, WCDMA, cdma2000, etc. These wireless technology choices require close examination when making strategic decisions involving network evolution.
This study assesses the technology options for wireless networks to establish next generation networks (i.e., 3G), based on the real options approach (ROA), and discusses wireless network operators' technology migration strategies. The goal of this study is to develop a theoretical framework for wireless network operators to support their strategic decision-making process when considering technology choices. The study begins by tracing the evolution of technologies in wireless networks to place them in the proper context, and continues by developing the strategic technology option model (STOM) using ROA as an assessment tool. Finally, STOM is applied to the world and US wireless industries for the formulation of technology migration strategies.
Consequently, this study will help wireless network service providers make strategic decisions when upgrading or migrating towards the next generation network architecture by showing the possible network migration paths and their relative value. Through this study, network operators can begin to think in terms of the available network options and to maximize overall gain in networks. Since the migration issues concerning the next generation wireless network architecture and technologies remain the subject of debate, with no substantial implementation in progress now, this study will help the industry to decide where best to focus its efforts and can be expanded for further research.
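STOM itself is not reproduced here, but the real-options calculation it builds on can be shown with a textbook binomial lattice: the option to defer a network upgrade has value even when investing immediately would have negative NPV. All numbers below are invented for illustration (V0 is the present value of the upgrade's cash flows, I the investment cost, u/d the up/down moves per period):

```python
# Cox-Ross-Rubinstein-style valuation of a deferral option. European-style
# rollback: the operator invests only at the horizon, and only if the
# project's value then exceeds the investment cost.

def defer_option_value(V0, I, r, u, d, periods):
    """Value today of the right (not obligation) to invest at the horizon."""
    p = ((1 + r) - d) / (u - d)                 # risk-neutral up probability
    # payoff max(V - I, 0) at each terminal lattice node
    opts = [max(V0 * u**k * d**(periods - k) - I, 0.0)
            for k in range(periods + 1)]
    for _ in range(periods):                    # discount back period by period
        opts = [(p * opts[k + 1] + (1 - p) * opts[k]) / (1 + r)
                for k in range(len(opts) - 1)]
    return opts[0]

# Investing now: NPV = 100 - 110 = -10. The option to wait is still valuable.
flexible = defer_option_value(V0=100.0, I=110.0, r=0.05, u=1.4, d=0.7, periods=2)
```

This is the sense in which a migration path "holds options": uncertainty about demand for 3G services gives waiting a quantifiable value that a static NPV comparison of TDMA/GSM/WCDMA/cdma2000 paths would miss.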
146.
Providing Fairness Through Detection and Preferential Dropping of High Bandwidth Unresponsive Flows / Chatranon, Gwyn, 17 February 2011
Stability of the Internet today depends largely on cooperation between end hosts that employ the TCP (Transmission Control Protocol) in the transport layer and network routers along an end-to-end path. However, in the past several years, various types of traffic, including streaming media applications, have been increasingly deployed over the Internet. Such traffic is mostly based on UDP (User Datagram Protocol) and usually employs either no end-to-end congestion and flow control mechanisms or only very limited ones. Such applications can unfairly consume a greater share of bandwidth than competing responsive flows such as TCP traffic, leading to unfairness and even congestion collapse. To avoid substantial memory requirements and complexity, fair Active Queue Management (AQM) schemes utilizing no or only partial flow state information have been proposed in the past several years to solve these problems. These schemes, however, exhibit several problems under different circumstances.
This dissertation presents two fair AQM mechanisms, BLACK and AFC, that overcome the problems and limitations of the existing schemes. Both BLACK and AFC need to store only a small amount of state information to maintain and exercise their fairness mechanisms. Extensive simulation studies show that both schemes outperform the other schemes in terms of throughput fairness under a large number of scenarios. Not only can they handle multiple unresponsive flows, but they also improve fairness among TCP connections with different round-trip delays. AFC, with a little more overhead than BLACK, provides additional advantages: it achieves good fairness in scenarios with traffic of different packet sizes and bursty traffic, and it provides smoother transfer rates for the unresponsive flows that usually carry real-time traffic.
This research also includes a comparative study of existing techniques for estimating the number of active flows, a crucial component of some fair AQM schemes, including BLACK and AFC. A further contribution of this dissertation is the first comprehensive evaluation of fair AQM schemes in the presence of various types of TCP-friendly traffic.
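BLACK's and AFC's exact mechanisms are in the dissertation; the toy below only mimics the general detect-and-preferentially-drop idea with a single sample window, which is the partial-state approach such schemes take to avoid per-flow accounting. All parameters and flow names are invented:

```python
# Partial-state sketch: estimate each flow's share of recent arrivals from a
# fixed-size sample window and drop packets of any flow whose share exceeds
# the fair share by some factor. (Toy simplification: dropped packets still
# enter the sample window, so detection keeps tracking the heavy flow.)

from collections import Counter, deque

class PreferentialDropper:
    def __init__(self, window=100, factor=2.0):
        self.window = deque(maxlen=window)   # recent arrivals, by flow id
        self.factor = factor

    def arrival(self, flow_id):
        """Record one packet arrival; return True if it should be dropped."""
        self.window.append(flow_id)
        counts = Counter(self.window)
        fair_share = len(self.window) / len(counts)
        return counts[flow_id] > self.factor * fair_share

# One unresponsive flow sends every other packet; many TCP flows send one each.
drop = PreferentialDropper(window=50, factor=2.0)
decisions = [drop.arrival("udp-blast" if i % 2 else f"tcp-{i}")
             for i in range(50)]
```

The heavy "udp-blast" flow starts getting dropped once its measured share clearly exceeds the fair share, while the well-behaved single-packet flows are never penalized, which is the fairness property the schemes above formalize.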
147.
Topological Design of Multiple Virtual Private Networks Utilizing Sink-Tree Paths / Srikitja, Anotai, 17 February 2011
With the deployment of MultiProtocol Label Switching (MPLS) over core backbone networks, it is possible for a service provider to build Virtual Private Networks (VPNs) supporting various classes of service with QoS guarantees. Efficiently mapping the logical layout of multiple VPNs over a service provider network is a challenging traffic engineering problem. The use of sink-tree (multipoint-to-point) routing paths in an MPLS network makes the VPN design problem different from traditional design approaches, where a full mesh of point-to-point paths is often the choice. The clear benefits of using sink-tree paths are a reduction in the number of label-switched paths and bandwidth savings due to larger granularities of bandwidth aggregation within the network.
In this thesis, the design of multiple VPNs over an MPLS-like infrastructure network using sink-tree routing is formulated as a mixed integer programming problem that simultaneously finds a set of VPN logical topologies and their dimensions to carry multi-service, multi-hour traffic from various customers. This formulation is NP-hard. A heuristic path selection algorithm is proposed to scale the VPN design problem by choosing a small but good candidate set of feasible sink-tree paths over which the optimal routes and capacity assignments are determined. The proposed heuristic has been shown to clearly speed up the optimization process, and a solution can be obtained within a reasonable time for a realistic-size network.
Nevertheless, when a large number of VPNs are laid out simultaneously, a standard optimization approach has limited scalability. Here, a heuristic termed the Minimum-Capacity Sink-Tree Assignment (MCSTA) algorithm is proposed to approximate the optimal bandwidth and sink-tree route assignment for multiple VPNs in polynomial time. Numerical results demonstrate that the MCSTA algorithm yields a good solution within a small error, and sometimes the exact solution. Lastly, the proposed VPN design models and solution algorithms are extended to multipoint traffic demands, including multipoint-to-point and broadcast connections.
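The first benefit claimed above, fewer label-switched paths, can be made concrete with a simple count (the MIP formulation and MCSTA heuristic themselves are not reproduced here): a full mesh needs one point-to-point LSP per ordered pair of VPN sites, while sink-tree routing needs only one multipoint-to-point tree per egress site.

```python
# LSP-count comparison for an N-site VPN. The quadratic-vs-linear gap is
# what makes sink-tree layouts attractive as VPNs grow; bandwidth
# aggregation on shared tree links is a further saving not modeled here.

def full_mesh_lsps(n_sites):
    """One unidirectional point-to-point LSP per ordered site pair."""
    return n_sites * (n_sites - 1)

def sink_tree_lsps(n_sites):
    """One multipoint-to-point sink tree rooted at each egress site."""
    return n_sites

savings = {n: (full_mesh_lsps(n), sink_tree_lsps(n)) for n in (4, 8, 16)}
```

At 16 sites the full mesh already needs 240 LSPs against 16 sink trees, which also shrinks the label and state burden on core routers.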
148.
A Neural Network Approach to Treatment Optimization / Sanguansintukul, Siripun, 17 February 2011
An approach for optimizing medical treatment as a function of measurable patient data is analyzed using a two-network system. The two-network approach is inspired by the field of control systems: one network, called the patient model (PM), is used to predict the outcome of the treatment, while the other, called the treatment network (TN), is used to optimize the predicted outcome. The system is tested with a variety of functions: a single objective criterion (with and without interaction between treatments) and multi-objective criteria (with and without interaction between treatments). Data are generated using a simple Gaussian function for some studies and with functions derived from the medical literature for others. The experimental results can be summarized as follows: 1) The relative importance of symptoms can be adjusted by applying different coefficient weights in the PM objective functions; different coefficients modulate the tradeoffs among symptoms, and higher coefficients in the cost function result in higher accuracy. 2) When different coefficients are applied to the objective functions of the TN and both objective functions are quadratic, the results suggest that the higher the coefficient, the better the corresponding symptom outcome. 3) Training the TN with a quadratic cost function and a quartic cost function reveals threshold-like behavior in the quartic case when the dose is in the neighborhood of the threshold. 4) In general, the network performs better than the quadratic model, although it can encounter local minima. Ultimately, the results provide a proof of concept that this framework might be a useful tool for augmenting a clinician's decisions when selecting dose strengths for an individual patient's needs.
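The two-network control idea can be sketched numerically with plain functions standing in for trained networks (the quadratic model and all constants below are invented for illustration): a fixed "patient model" maps dose to predicted symptom cost, and the "treatment" side searches for the dose minimizing that prediction by gradient descent through the model.

```python
# Minimal stand-in for the PM/TN loop: the PM is frozen and differentiated
# numerically; the TN's job reduces to following that gradient downhill.
# In the dissertation both sides are neural networks trained on data.

def patient_model(dose):
    """Predicted symptom cost; this toy quadratic is minimized at dose 3.0."""
    return (dose - 3.0) ** 2 + 1.0

def optimize_dose(model, dose=0.0, lr=0.1, steps=200, h=1e-5):
    """Finite-difference gradient descent on the model's predicted cost."""
    for _ in range(steps):
        grad = (model(dose + h) - model(dose - h)) / (2 * h)
        dose -= lr * grad
    return dose

best = optimize_dose(patient_model)
```

With a learned, non-convex PM in place of this quadratic, the same descent can stall in local minima, which is exactly the limitation noted in point 4 above.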
149.
On Proximity-Based Sub-Area Localization / Korkmaz, Aylin, 24 August 2011
A localization system can save lives in the aftermath of an earthquake, position people or valuable assets during a building fire, or track airplanes, among many other attractive applications. The Global Positioning System (GPS) is the most popular localization system and can provide 7-10 meter accuracy for outdoor users; however, it has certain drawbacks in indoor environments. Meanwhile, wireless networks are becoming pervasive and are densely deployed indoors for the communication of various types of devices, so exploiting them for the localization of people or other assets is convenient. Proximity-based localization, which estimates locations based on closeness to known reference points, coupled with a widely deployed wireless technology, can reduce the cost and effort of localization in local and indoor areas. In this dissertation, we propose a proximity-based localization algorithm that exploits knowledge of the overlapping coverage of known monitoring stations. We call this algorithm Sub-Area Localization (SAL). We present a systematic study of proximity-based localization, defining the factors and parameters that affect localization performance in terms of metrics such as accuracy and efficiency. Then, we demonstrate that SAL can be used in multi-floor buildings, taking advantage of infrastructure elements deployed across floors to reduce the overall cost (in terms of the number of monitoring stations required) without harming accuracy. Finally, we present a case study of how SAL can be used for spatial spectrum detection in cognitive wireless networks.
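The overlapping-coverage idea can be sketched as a set computation (my reading of the abstract, not the dissertation's exact algorithm; stations, sub-areas, and coverage sets are invented): each monitoring station covers a known set of sub-areas, and the pattern of which stations do and do not hear the target narrows its location.

```python
# Coverage-signature localization sketch: intersect the coverage of hearing
# stations and subtract the coverage of silent ones. The result is the set
# of sub-areas consistent with the observed proximity pattern.

def locate(coverage, heard):
    """Return the sub-areas consistent with which stations heard the target."""
    areas = set.union(*(set(a) for a in coverage.values()))
    for station, covered in coverage.items():
        if station in heard:
            areas &= set(covered)    # target must lie inside this coverage
        else:
            areas -= set(covered)    # and outside every silent station's
    return areas

# Three stations with overlapping coverage over four sub-areas.
coverage = {"S1": ["A1", "A2"], "S2": ["A2", "A3"], "S3": ["A3", "A4"]}
where = locate(coverage, heard={"S1", "S2"})
```

More overlap means finer sub-areas from the same stations, which is why sharing infrastructure across floors can cut station count without hurting accuracy.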
150.
La communication gouvernementale en Europe : analyse comparative (Government Communication in Europe: A Comparative Analysis) / Sellier, Dominique, January 1900
Revised text of a DEA thesis, Paris, Celsa. / Bibliography pp. 97-98.