About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

A model-driven approach to design pattern visualization and evolutions

Yang, Sheng, January 2006 (has links)
Thesis (Ph.D.)--University of Texas at Dallas, 2006. Includes vita. Includes bibliographical references (leaves 182-189).
82

GAME THEORETIC APPROACH TO RADIO RESOURCE MANAGEMENT ON THE REVERSE LINK FOR MULTI-RATE CDMA WIRELESS DATA NETWORKS

Teerapabkajorndet, Wiklom 28 January 2005 (has links)
This work deals with efficient power and rate assignment to mobile stations (MSs) involved in bursty data transmission in cellular CDMA networks. Power control in the current CDMA standards is based on a fixed target signal quality called the signal to interference ratio (SIR). The target SIR represents a predefined frame error rate (FER). This approach is inefficient for data-MSs because a fixed target SIR can limit an MS's throughput. Power control should thus provide dynamic target SIRs instead of a fixed target SIR. In the research literature, the power control problem has been modeled using game theory. A limitation of the current literature is that in order to implement the algorithms, each MS needs to know information such as the path gains and transmission rates of all other MSs. Fast rate control schemes in evolving cellular data systems such as cdma2000-1x-EV assign transmission rates to MSs using a probabilistic approach. The limitation here is that the radio resources can be either under- or over-utilized. Further, not all MSs are assigned the same rates: in the schemes proposed in the literature, only a few MSs, those with the best channel conditions, obtain all the radio resources. In this dissertation, we address the power control issue by moving the computation of the Nash equilibrium from each MS to the base station (BS). We also propose equal radio resource allocation for all MSs under the constraint that only the maximum allowable radio resources are used in a cell. This dissertation addresses the problem of how to efficiently assign power and rate to MSs based on dynamic target SIRs for bursty transmissions. The proposed schemes maximize the throughput of each data-MS while still providing equal allocation of radio resources to all MSs and achieving full radio resource utilization in each cell. The proposed schemes result in power and rate control algorithms that, however, require some assistance from the BS.
The performance evaluation and comparisons with cdma2000-1x-Evolution Data Only (1x-EV-DO) show that the proposed schemes can provide better effective rates (rates after error) than the existing schemes.
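The dissertation's own equilibrium computation is not given in the abstract. Purely as a generic illustration of SIR-based power control toward per-MS target SIRs, a minimal Python sketch (the path gains, noise level, and targets are hypothetical, and the update is the classic fixed-point iteration rather than necessarily the scheme proposed here):

```python
def sir(p, gains, i, noise=1e-3):
    """Signal-to-interference ratio of MS i at the base station."""
    interference = sum(gains[j] * p[j] for j in range(len(p)) if j != i) + noise
    return gains[i] * p[i] / interference

def power_control(gains, targets, iters=200):
    """Fixed-point power update toward per-MS target SIRs: each MS scales
    its power by (target / achieved) SIR.  Converges when targets are feasible."""
    p = [1.0] * len(gains)
    for _ in range(iters):
        p = [p[i] * targets[i] / sir(p, gains, i) for i in range(len(gains))]
    return p
```

With feasible targets the iteration settles at powers whose achieved SIRs match the targets; infeasible targets make the powers diverge, which is one reason dynamic (rather than fixed) targets matter for bursty data traffic.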
83

GEOINTERPRET: AN ONTOLOGICAL ENGINEERING METHODOLOGY FOR AUTOMATED INTERPRETATION OF GEOSPATIAL QUERIES

Peachavanish, Ratchata 28 January 2005 (has links)
Despite advances in GIS technology, solving geospatial problems using current GIS platforms involves complex tasks requiring specialized skills and knowledge that are attainable through formal training and experience in implementing GIS projects. These requisite skills and knowledge include: understanding domain-specific geospatial problems; understanding GIS representation of real-world objects, concepts, and activities; knowing how to identify, locate, retrieve, and integrate geospatial data sets into GIS projects; knowing the specific geoprocessing capabilities available on specific GIS platforms; and skill in utilizing geoprocessing tools in GIS with appropriate data sets to solve problems effectively and efficiently. Users interested in solving application-domain problems often lack such skills and knowledge and must resort to GIS experts (this is especially true for applications dealing with diverse geospatial data sets and complex problems). Therefore, there is a gap between users' knowledge of geoprocessing and GIS tools and the GIS knowledge and skills needed to solve geospatial problems. To fill this gap, a new approach that automates the tasks involved in geospatial problem solving is needed. Of these tasks, the most important is the interpretation of geospatial queries (usually expressed in application-specific concepts and terminologies) and their mapping to geoprocessing operations implementable by GIS. The goal of this research is to develop an ontological engineering methodology, called GeoInterpret, to automate the task of geospatial query interpretation and mapping. This methodology encompasses: a conceptualization of geospatial queries; a multiple-ontology approach for representing the knowledge needed to solve geospatial queries; a set of techniques for mapping elements between different ontologies; and a set of algorithms for geospatial query interpretation, mapping, and geoprocessing workflow composition.
A proof of concept was developed to demonstrate the working of GeoInterpret.
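The abstract does not specify GeoInterpret's mapping techniques. Purely as a toy illustration of the general idea of interpreting domain-level query concepts as geoprocessing operations via a mapping between ontologies, a Python sketch in which every term and operation name is hypothetical:

```python
# Hypothetical mapping from application-domain query concepts to GIS
# geoprocessing operations (a stand-in for real inter-ontology mappings).
domain_to_gis = {
    "near": "buffer+intersect",
    "inside": "clip",
    "elevation": "raster_lookup",
}

def interpret(query_terms):
    """Interpret a domain-level query as an ordered geoprocessing workflow,
    keeping only the terms the ontology mapping knows about."""
    return [domain_to_gis[t] for t in query_terms if t in domain_to_gis]
```

A real methodology would of course resolve terms against formal ontologies and compose a workflow with data dependencies, not a flat list.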
84

A Framework for the Organization and Discovery of Information Resources in a WWW Environment Using Association, Classification and Deduction

Buranarach, Marut 28 January 2005 (has links)
The Semantic Web is envisioned as a next-generation WWW environment in which information is given well-defined meaning. Although the standards for the Semantic Web are being established, it is as yet unclear how the Semantic Web will allow information resources to be effectively organized and discovered in an automated fashion. This dissertation research explores the organization and discovery of resources for the Semantic Web. It assumes that resources on the Semantic Web will be retrieved based on metadata and ontologies that will provide an effective basis for automated deduction. An integrated deduction system based on the Resource Description Framework (RDF), the DARPA Agent Markup Language (DAML), and description logic (DL) was built. A case study was conducted to assess the system's effectiveness in retrieving resources from a large Web resource collection. The results showed that deduction has an overall positive impact on retrieval over the defined queries. The greatest positive impact occurred when precision was perfect with no decrease in recall. A sensitivity analysis was conducted over properties of resources, subject categories, query expressions, and relevance judgments to observe their relationships with retrieval performance. The results highlight both the potential of and various issues in applying deduction over metadata and ontologies. Further investigation will be required for additional improvement. The factors that can contribute to degraded performance were identified and addressed. Some guidelines for the development of Semantic Web data and systems were developed based on the lessons learned from the case study.
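As a toy illustration of how deduction over ontologies can expand retrieval (here, RDFS-style subclass reasoning only, far simpler than the RDF/DAML/DL system described), a Python sketch with a hypothetical class hierarchy and resource typings:

```python
# Hypothetical toy ontology: class -> direct superclass, and resource typings.
subclass_of = {"Tutorial": "Document", "Thesis": "Document", "Document": "Resource"}
resources = {"r1": "Tutorial", "r2": "Thesis", "r3": "Resource"}

def subsumers(cls):
    """All classes that subsume a class, including itself (transitive closure)."""
    seen = [cls]
    while cls in subclass_of:
        cls = subclass_of[cls]
        seen.append(cls)
    return seen

def retrieve(query_class):
    """Deductive retrieval: a resource matches if its type is subsumed
    by the query class, not only if it matches it literally."""
    return sorted(r for r, t in resources.items() if query_class in subsumers(t))
```

A purely literal match for "Document" would return nothing here; deduction recovers the tutorial and the thesis, which is the kind of recall improvement the case study measures.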
85

Design of Indoor Positioning Systems Based on Location Fingerprinting Technique

Kaemarungsi, Kamol 17 June 2005 (has links)
Positioning systems enable location-awareness for mobile computers in ubiquitous and pervasive wireless computing. By utilizing location information, location-aware computers can render location-based services possible for mobile users. Indoor positioning systems based on location fingerprints of wireless local area networks have been suggested as a viable solution where the global positioning system does not work well. Instead of depending on accurate estimations of angle or distance in order to derive the location with geometry, the fingerprinting technique associates location-dependent characteristics such as received signal strength to a location and uses these characteristics to infer the location. The advantage of this technique is that it is simple to deploy with no specialized hardware required at the mobile station except the wireless network interface card. Any existing wireless local area network infrastructure can be reused for this kind of positioning system. While empirical results and performance studies of such positioning systems are presented in the literature, analytical models that can be used as a framework for efficiently designing the positioning systems are not available. This dissertation develops an analytical model as a design tool and recommends a design guideline for such positioning systems in order to expedite the deployment process. A system designer can use this framework to strike a balance between the accuracy, the precision, the location granularity, the number of access points, and the location spacing. A systematic study is used to analyze the location fingerprint and discover its unique properties. The location fingerprint based on the received signal strength is investigated. Both deterministic and probabilistic approaches of location fingerprint representations are considered. 
The main objectives of this work are to predict the performance of such systems using a suitable model and perform sensitivity analyses that are useful for selecting proper system parameters such as number of access points and minimum spacing between any two different locations.
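A minimal sketch of the deterministic (nearest-neighbor) flavor of location fingerprinting described above, assuming a small hypothetical radio map of mean received signal strengths; the dissertation's analytical model itself is not reproduced here:

```python
import math

# Hypothetical radio map: known location -> mean RSS (dBm) from three access points.
radio_map = {
    (0, 0): [-40, -70, -80],
    (0, 5): [-55, -60, -75],
    (5, 0): [-60, -75, -50],
}

def estimate_location(observed):
    """Deterministic fingerprinting: return the surveyed location whose stored
    fingerprint is closest to the observed RSS vector (Euclidean distance
    in signal space, not physical space)."""
    return min(radio_map, key=lambda loc: math.dist(radio_map[loc], observed))
```

Design parameters the dissertation's framework trades off, such as the number of access points (vector length) and the spacing between surveyed locations, directly control how separable these fingerprints are.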
86

Quality of Service Support in IEEE 802.11 Wireless LAN

Pattara-atikom, Wasan 17 June 2005 (has links)
Wireless Local Area Networks (WLANs) are gaining popularity at an unprecedented rate, at home, at work, and in public hot spot locations. As these networks become ubiquitous and an integral part of the infrastructure, they will be increasingly used for multimedia applications. The heart of current 802.11 WLANs is the Distributed Coordination Function (DCF), which does not have any Quality of Service (QoS) support. The emergence of multimedia applications, such as local services in WLAN hotspots and the distribution of entertainment in residential WLANs, has prompted research in QoS support for WLANs. The absence of QoS support results in applications with drastically different requirements receiving the same (yet potentially unsatisfactory) service. Without absolute throughput support, the performance of applications with stringent throughput requirements will not be met. Without relative throughput support, heterogeneous types of applications will be treated unfairly and their performance will be poor. Without delay constraint support, time-sensitive applications will not even be possible. The objective of this dissertation is, therefore, to develop a comprehensive and integrated solution to provide effective and efficient QoS support in WLANs in a distributed, fair, scalable, and robust manner. In this dissertation, we present a novel distributed QoS mechanism called Distributed Relative/Absolute Fair Throughput with Delay Support (DRAFT+D). DRAFT+D is designed specifically to provide integrated QoS support in IEEE 802.11 WLANs. Unlike any other distributed QoS mechanism, DRAFT+D supports two QoS metrics (throughput and delay) with two QoS models (absolute and relative) under two fairness constraints (utilitarian and temporal fairness) in the same mechanism, at the same time, in a fully distributed manner.
DRAFT+D is also equipped with safeguards against excessive traffic injection. DRAFT+D operates as a fair-queuing mechanism that controls packet transmissions (a) by using a distributed deficit round-robin mechanism and (b) by modifying the way Backoff Intervals (BIs) are calculated for packets of different traffic classes. Fair relative throughput support is achieved by calculating BIs based on the throughput requirements. Absolute throughput and delay support are achieved by allocating sufficient shares of bandwidth to these types of traffic.
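As a rough illustration of weight-dependent backoff intervals (the general idea behind calculating BIs from throughput requirements; the base window size and the exact DRAFT+D formula are hypothetical, not taken from the dissertation):

```python
import random

def contention_window(weight, base=32):
    """Window size shrinks as a traffic class's throughput weight grows."""
    return max(1, round(base / weight))

def backoff_interval(weight, rng=random):
    """Draw a random backoff interval; higher-weight classes draw from a
    smaller window and therefore tend to seize the channel sooner on average."""
    return rng.randint(0, contention_window(weight) - 1)
```

Over many contention rounds, a class with twice the weight wins the channel roughly twice as often, which is how BI manipulation yields relative throughput differentiation in distributed schemes of this kind.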
87

Demand-based Network Planning for WLANs

Prommak, Chutima 17 June 2005 (has links)
The explosive recent growth in Wireless Local Area Network (WLAN) deployment has generated considerable interest among network designers. Previous design approaches have mostly focused on coverage-based optimization or the application of trial-and-error strategies. These only ensure that adequate signal strength is maintained in the intended service area. WLAN service environments, however, require a network designed to provide not only radio coverage but also adequate capacity (data rate) across the service area so that it can carry the traffic load of a large number of users with certain Quality of Service (QoS) requirements. Thus, current design techniques are insufficient to provide data communication services to WLAN users. In this dissertation, a novel approach to the WLAN design problem is proposed that takes into account user population density in the service area, traffic demand characteristics, and the structure of the service area. The resulting demand-based WLAN design yields a network that provides adequate radio signal coverage and the required data rate capacity to serve expected user traffic demand in the service region. The demand-based WLAN design model is formulated as a Constraint Satisfaction Problem (CSP). An efficient heuristic solution technique is developed to solve the CSP network design problem in reasonable computational time. The solution provides the number of access points required and the parameters of each access point, including location, frequency channel, and power level. Extensive numerical studies are reported for various service scenarios, ranging from single floors with small and large service areas, to a multiple-floor design, to a design that includes outdoor areas. The results of these studies illustrate that the demand-based WLAN design approach is more appropriate for the design of WLAN systems than existing coverage-based design approaches.
Additionally, extensive sensitivity analysis was conducted to study the effects of user activity level (traffic load), shadow fading, and the use of different path loss models in network design.
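A toy illustration of a CSP-style feasibility check combining coverage and capacity constraints, with hypothetical demand points, range, and capacity values standing in for real path-loss and traffic models (the dissertation's actual formulation also assigns channels and power levels):

```python
# Hypothetical demand points: (x, y, offered load in Mbps).
demand = [(1, 1, 2.0), (4, 1, 3.0), (2, 4, 1.5)]

def covered(ap, point, max_range=5.0):
    """Coverage constraint: demand point within the AP's usable radio range
    (a crude stand-in for a real path-loss / signal-strength model)."""
    (ax, ay), (x, y, _) = ap, point
    return ((ax - x) ** 2 + (ay - y) ** 2) ** 0.5 <= max_range

def feasible(aps, capacity=5.5):
    """CSP-style check: every demand point is covered, and no AP's assigned
    load (each point attaches to its nearest covering AP) exceeds capacity."""
    load = {ap: 0.0 for ap in aps}
    for pt in demand:
        covering = [ap for ap in aps if covered(ap, pt)]
        if not covering:
            return False
        best = min(covering, key=lambda ap: (ap[0] - pt[0]) ** 2 + (ap[1] - pt[1]) ** 2)
        load[best] += pt[2]
    return all(l <= capacity for l in load.values())
```

A coverage-only design would accept a single central AP here even though its 6.5 Mbps of attached demand exceeds the 5.5 Mbps capacity; the demand-based check rejects it and forces a second AP.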
88

Energy Efficient Security Framework for Wireless Local Area Networks

Kiratiwintakorn, Phongsak 28 July 2005 (has links)
Wireless networks are susceptible to network attacks due to their inherent vulnerabilities. The radio signal used in wireless transmission can arbitrarily propagate through walls and windows; thus a wireless network perimeter is not exactly known. This makes wireless networks more vulnerable than wired networks to attacks such as eavesdropping, message interception, and modification. Security services have been used as countermeasures to prevent such attacks, but they come at the expense of resources that are scarce, especially where wireless devices have a very limited power budget. Hence, there is a need to provide security services that are energy efficient. In this dissertation, we propose an energy efficient security framework. The framework aims at providing security services that take energy consumption into account. We suggest three approaches to reduce the energy consumption of security protocols: replacing standard security protocol primitives that consume high energy while maintaining the same security level, modifying standard security protocols appropriately, and designing a totally new security protocol where energy efficiency is the main focus. From our observation and study, we hypothesize that a higher level of energy savings is achievable if security services are provided in an adjustable manner. We propose an example tunable security system, TuneSec, which allows reasonably fine-grained security tuning to provide security services at the wireless link level in an adjustable manner. We apply the framework to several standard security protocols in wireless local area networks and evaluate their energy consumption performance. The first and second methods show improvements of up to 70% and 57%, respectively, in energy consumption compared to plain standard security protocols. The standard protocols can only offer fixed-level security services, and the methods applied do not change the security level.
The third method shows further improvement compared to fixed-level security by reducing (about 6% to 40%) the energy consumed. This amount of energy saving can be varied depending on the configuration and security requirements.
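As a cartoon of the tunable-security idea (choosing the strongest protection that fits an energy budget), with entirely hypothetical configuration names, security levels, and per-MB energy costs, none of which come from TuneSec itself:

```python
# Hypothetical configurations: name -> (security level, energy cost in mJ/MB).
configs = {
    "rc4_40": (1, 2.0),
    "aes128": (3, 5.0),
    "aes256_hmac": (5, 9.0),
}

def tune(budget_mj_per_mb):
    """Tunable security: pick the highest-security configuration whose
    per-MB energy cost fits the budget; None if nothing fits."""
    ok = [(lvl, name) for name, (lvl, cost) in configs.items() if cost <= budget_mj_per_mb]
    return max(ok)[1] if ok else None
```

The point of tunability is exactly this degree of freedom: a fixed-level protocol always pays the full cost, while an adjustable one can trade protection for battery life when the application allows it.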
89

A COLLABORATIVE FILTERING APPROACH TO PREDICT WEB PAGES OF INTEREST FROM NAVIGATION PATTERNS OF PAST USERS WITHIN AN ACADEMIC WEBSITE

Nkweteyim, Denis Lemongew 30 September 2005 (has links)
This dissertation is a simulation study of factors and techniques involved in designing hyperlink recommender systems that recommend to users web pages that past users with similar navigation behaviors found interesting. The methodology involves identification of pertinent factors or techniques and, for each one, addresses the following questions: (a) is there room for improvement; (b) is there a better approach; and (c) what are the performance characteristics of the technique in the environments that hyperlink recommender systems operate in. The following four problems are addressed:

Web Page Classification. A new metric (PageRank × Inverse Links-to-Word count ratio) is proposed for classifying web pages as content or navigation, to help in the discovery of user navigation behaviors from web user access logs. Results of a small user study suggest that this metric leads to desirable results.

Data Mining. A new apriori algorithm for mining association rules from large databases is proposed. The new algorithm addresses the scaling problem of the classical apriori algorithm by eliminating an expensive join step and applying the apriori property to every row of the database. In this study, association rules show the correlation between user navigation behaviors and the web pages users find interesting. The new algorithm has better space complexity than the classical one, better time efficiency under some conditions, and comparable time efficiency under others.

Prediction Models for User Interests. We demonstrate that association rules showing the correlation between user navigation patterns and the web pages users find interesting can be transformed into collaborative filtering data. We investigate collaborative filtering prediction models based on two approaches for computing prediction scores: simple averages and weighted averages. Our findings suggest that the weighted-averages scheme computes predictions of user interests more accurately than the simple-averages scheme.

Clustering. Clustering techniques are frequently applied in the design of personalization systems. We studied the performance of the CLARANS clustering algorithm in high-dimensional space relative to the PAM and CLARA clustering algorithms. While CLARA had the best time performance, CLARANS produced clusters with the lowest intra-cluster dissimilarities and so was the most effective in this regard.
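A minimal sketch of a weighted-averages prediction scheme using Pearson similarity, a common collaborative filtering formulation; the toy ratings are hypothetical and this is not necessarily the exact model studied in the dissertation:

```python
# Hypothetical user -> {page: interest score} data derived from navigation patterns.
ratings = {
    "a": {"p1": 5, "p2": 3},
    "b": {"p1": 5, "p2": 3, "p3": 4},
    "c": {"p1": 1, "p2": 5, "p3": 2},
}

def pearson(u, v):
    """Pearson correlation between two users over their commonly rated pages."""
    common = [i for i in ratings[u] if i in ratings[v]]
    if len(common) < 2:
        return 0.0
    mu = sum(ratings[u][i] for i in common) / len(common)
    mv = sum(ratings[v][i] for i in common) / len(common)
    num = sum((ratings[u][i] - mu) * (ratings[v][i] - mv) for i in common)
    du = sum((ratings[u][i] - mu) ** 2 for i in common) ** 0.5
    dv = sum((ratings[v][i] - mv) ** 2 for i in common) ** 0.5
    return num / (du * dv) if du and dv else 0.0

def predict(user, page):
    """Weighted-averages prediction: peers' scores weighted by similarity.
    A simple-averages scheme would instead weight every peer equally."""
    peers = [v for v in ratings if v != user and page in ratings[v]]
    num = sum(pearson(user, v) * ratings[v][page] for v in peers)
    den = sum(abs(pearson(user, v)) for v in peers)
    return num / den if den else 0.0
```

Here user "a" agrees with "b" and disagrees with "c", so the weighted prediction for page "p3" is pulled toward "b"'s rating and away from "c"'s, which a simple average would not do.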
90

Geoprocessing Optimization in Grids

Liu, Shuo 30 September 2005 (has links)
Geoprocessing is commonly used in solving problems across disciplines that feature geospatial data and/or phenomena. Geoprocessing requires specialized algorithms and, more recently, due to large volumes of geospatial data and complex geoprocessing operations, it has become data- and/or compute-intensive. The conventional approach, which is predominantly based on centralized computing solutions, is unable to handle geoprocessing efficiently. To that end, there is a need to develop distributed geoprocessing solutions that take advantage of existing and emerging advanced techniques and high-performance computing and communications resources. As an emerging computing paradigm, grid computing offers a novel approach to integrating distributed computing resources and supporting collaboration across networks, making it suitable for geoprocessing. Although there have been research efforts applying grid computing in the geospatial domain, there is currently a void in the literature concerning general geoprocessing optimization. In this research, a new optimization technique for geoprocessing in grid systems, Geoprocessing Optimization in Grids (GOG), is designed and developed. The objective of GOG is to reduce overall response time at a reasonable cost. To meet this objective, GOG contains a set of algorithms, including a resource selection algorithm and a parallelism processing algorithm, to speed up query execution. GOG is validated by comparing its optimization time and the estimated costs of generated execution plans with those of two existing optimization techniques. A proof of concept based on an application in air quality control is developed to demonstrate the advantages of GOG.
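A toy sketch of the kind of response-time-minimizing resource selection GOG's objective implies, with entirely hypothetical resource parameters (queue wait, compute rate, bandwidth to the data); GOG's actual cost model and algorithms are not given in the abstract:

```python
# Hypothetical grid resources: name -> (queue wait s, compute rate MB/s, bandwidth MB/s).
resources = {
    "cluster_a": (0.0, 50.0, 10.0),
    "cluster_b": (30.0, 20.0, 100.0),
}

def response_time(data_mb, wait, compute_rate, bandwidth):
    """Estimated response time: wait in the queue, move the data, process it."""
    return wait + data_mb / bandwidth + data_mb / compute_rate

def select_resource(data_mb):
    """Resource selection: choose the resource with the smallest
    estimated response time for this job."""
    return min(resources, key=lambda r: response_time(data_mb, *resources[r]))
```

Note that the best choice depends on the data volume: small jobs favor the idle but poorly connected cluster, while large data- and compute-intensive jobs amortize the queue wait on the well-connected one, which is why geoprocessing in grids needs an explicit cost model rather than a fixed assignment.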
