111 |
ANCIENT ARCHITECTURE IN VIRTUAL REALITY: DOES IMMERSION REALLY AID LEARNING? Jacobson, Jeffrey. 07 July 2008.
This study explored whether students benefited from an immersive panoramic display while studying subject matter that is visually complex and information-rich. Specifically, middle-school students learned about ancient Egyptian art and society using an educational learning game, Gates of Horus, which is based on a simplified three-dimensional computer model of an Egyptian temple. First, we demonstrated that the game is an effective learning tool by comparing written post-test results from students who played the game with results from students in a no-treatment control group. Next, we compared the learning results of two groups of students who used the same mechanical controls to navigate through the computer model of the temple and to interact with its features. One group saw the temple on a standard desktop computer monitor, while the other saw it in a visually immersive display (a partial dome). The major difference in the test results between the two groups appeared when the students gave a verbal show-and-tell presentation about the temple and the facts and concepts related to it. During that exercise, the students had no cognitive scaffolding other than the Virtual Egyptian Temple itself, which was projected on a wall. Each student navigated through the temple and described its major features. Students who had used the visually immersive display volunteered notably more information than those who had used a computer monitor. The other major tests were questionnaires, which by their nature provide a great deal of scaffolding for the task of recalling the required information. For these tests, we believe that this scaffolding aided students' recall to the point where it overwhelmed any differences produced by the display. We conclude that the immersive display provides better support for students' learning activities with this material.
To our knowledge, this is the first formal study to show concrete evidence that visual immersion can improve learning for a non-science topic.
|
112 |
Credibility-based Binary Feedback Model for Grid Resource Planning. Chokesatean, Parasak. 31 July 2008.
Grid service providers (GSPs) in commercial grids improve their profitability by maintaining the smallest possible set of resources that meets client demand. Their goal is to maximize profits by optimizing resource planning. To achieve this goal, they require feedback from clients to estimate demand for their services. The objective of this research is to develop an approach for building a useful value profile for a collection of heterogeneous grid clients. We use binary feedback as the theoretical framework for building the value profile, which can serve as a proxy for a demand function representing clients' willingness to pay for grid resources. However, clients may require incentives to provide feedback, as well as deterrents from selfish behavior such as misrepresenting their true preferences to obtain superior service at lower cost. To address this concern, we use credibility mechanisms to detect untruthful feedback and penalize insincere or biased clients. We also use game theory to study how cooperation can emerge.
In this dissertation, we propose the use of credibility-based binary feedback to build value profiles, which GSPs can use to plan their resources economically. The use of value profiles aims to benefit both GSPs and clients, and helps accelerate the adoption of commercial grids.
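As a minimal sketch of the kind of mechanism described, the fragment below aggregates binary (willing/unwilling-to-pay) votes into a price-indexed value profile, weighting each vote by the client's credibility, and nudges credibility toward agreement with the consensus. All names, rates, and data are illustrative assumptions, not the dissertation's actual model.

```python
def update_credibility(cred, vote, consensus, rate=0.1):
    # Move credibility toward 1 when the vote matches the consensus,
    # toward 0 when it contradicts it (illustrative update rule).
    agree = 1.0 if vote == consensus else 0.0
    return (1 - rate) * cred + rate * agree

def value_profile(feedback, credibility):
    # Credibility-weighted fraction of positive votes per price point;
    # serves as a proxy demand function for the GSP.
    profile = {}
    for price, votes in feedback.items():
        total = sum(credibility[c] for c, _ in votes)
        positive = sum(credibility[c] for c, v in votes if v == 1)
        profile[price] = positive / total if total else 0.0
    return profile

# Hypothetical feedback: at each price, (client_id, vote) pairs.
feedback = {
    1.0: [("a", 1), ("b", 1), ("c", 1)],
    2.0: [("a", 1), ("b", 0), ("c", 1)],
    4.0: [("a", 0), ("b", 0), ("c", 1)],  # "c" may be misreporting
}
credibility = {"a": 1.0, "b": 1.0, "c": 0.2}  # "c" has low credibility
profile = value_profile(feedback, credibility)
```

With this weighting, the low-credibility client's outlier vote at the high price barely moves the profile.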
|
113 |
CAN CITYWIDE MUNICIPAL WIFI BE A FEASIBLE SOLUTION FOR LOCAL BROADBAND ACCESS IN THE US? AN EMPIRICAL EVALUATION OF A TECHNO-ECONOMIC MODEL. Huang, Kuang Chiu. 24 July 2008.
Citywide wireless fidelity (WiFi) offers an opportunity for municipalities and broadband Internet service providers (BISPs) to break through the duopoly broadband market structure that is prevalent in the US. Although municipal WiFi offers low deployment cost, short build-out time, high capacity, and wide coverage, competition in the local broadband market makes it difficult for such networks to sustain themselves on public Internet access revenues alone. It is therefore interesting and useful not only to examine the demographic features of existing WiFi projects but also to evaluate what is necessary for them to be economically sustainable. We study these questions by building a techno-economic model to determine the features, sustainability, and necessary subsidy of citywide WiFi for local broadband access, and we evaluate this model with data from several existing projects.
To gain insight from previous experience and to evaluate the feasibility of citywide WiFi, we carried out this research in three steps. First, we undertook a systematic study of all existing and operating citywide WiFi projects in the US, identifying the key geo-demographic differences between WiFi cities and non-WiFi cities and examining how private ISPs and municipalities implemented citywide projects under various business models and strategies. Next, we built a model linking access point density and network coverage, and used it to construct a techno-economic model of municipal WiFi. Finally, we evaluated the effectiveness of the model against the existing projects identified in the empirical study and determined how large a municipal subsidy would be reasonable to make WiFi projects sustainable. The outcome of this research is intended to assist policy makers, municipalities, and WiFi ISPs in evaluating, designing, and implementing sustainable projects.
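The skeleton of a techno-economic model of this kind can be sketched in a few lines: estimate access point (AP) count from coverage area, turn that into annualized cost, and compare against access revenues to get the required subsidy. Every figure below (AP radius, overlap factor, capex/opex, amortization) is an illustrative assumption, not a value calibrated in this study.

```python
import math

def aps_needed(area_km2, ap_radius_km=0.1, overlap=1.3):
    # APs required to cover an area; the overlap factor crudely accounts
    # for real-world propagation losses (illustrative, not calibrated).
    cell = math.pi * ap_radius_km ** 2
    return math.ceil(overlap * area_km2 / cell)

def annual_subsidy(area_km2, subscribers, price_per_month,
                   ap_capex=500.0, ap_opex_yr=150.0, amort_years=5):
    # Subsidy needed to break even: annualized costs minus access revenue.
    n = aps_needed(area_km2)
    cost = n * (ap_capex / amort_years + ap_opex_yr)
    revenue = subscribers * price_per_month * 12
    return max(0.0, cost - revenue)

# A hypothetical 10 km^2 deployment with 200 subscribers at $15/month.
gap = annual_subsidy(10.0, 200, 15.0)
```

Under these assumed numbers the deployment runs a funding gap, which is the quantity the dissertation's model estimates from real project data.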
|
114 |
ON THE USE OF NATURAL LANGUAGE PROCESSING FOR AUTOMATED CONCEPTUAL DATA MODELING. Du, Siqing. 13 August 2008.
This research involved the development of a natural language processing (NLP) architecture for the extraction of entity-relationship diagrams (ERDs) from natural language requirements specifications. Conceptual data modeling plays an important role in database and software design, and many approaches to automating this process and developing software tools for it have been attempted. NLP approaches appear plausible because, compared to general free text, natural language requirements documents are relatively formal and exhibit special regularities that reduce the complexity of the problem. The approach taken here involves a loose integration of several linguistic components. Outputs from syntactic parsing are processed by a set of heuristic rules developed for this particular domain to produce tuples representing the underlying meanings of the propositions in the documents, and semantic resources are used to distinguish between correct and incorrect tuples. Finally, the tuples are integrated into full ERD representations. The major challenge addressed in this research is how to bring the various resources to bear on the translation of natural language documents into the formal language. This system is taken to be representative of a potential class of similar systems designed to translate documents in other restricted domains into corresponding formalisms. The system is incorporated into a tool that presents the final ERDs to users, who can modify them in an attempt to produce an accurate ERD for the requirements document. An experiment demonstrated that users with limited experience in ERD specification could produce better representations of requirements documents with the system than without it, and could do so in less time.
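To make the idea of domain-specific heuristic rules concrete, here is a deliberately toy rule that maps one regular sentence pattern in requirements text ("Each A has one/many B") to a relationship tuple. The pattern, the tuple shape, and the naive plural stripping are all assumptions for illustration; the dissertation's rules operate on full syntactic parses, not regular expressions.

```python
import re

# One toy rule: "Each <A> has one|many <B>" -> (A, "has", B, cardinality).
RULE = re.compile(r"Each (\w+) has (one|many) (\w+)", re.IGNORECASE)

def extract_tuples(text):
    tuples = []
    for m in RULE.finditer(text):
        a, card, b = m.groups()
        if card == "many" and b.endswith("s"):
            b = b[:-1]  # naive singularization for illustration only
        tuples.append((a.capitalize(), "has", b.capitalize(),
                       "1:N" if card == "many" else "1:1"))
    return tuples

spec = ("Each department has many employees. "
        "Each employee has one badge.")
tuples = extract_tuples(spec)
```

Each tuple is then the raw material that semantic filtering and ERD integration would operate on.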
|
115 |
Risk-based Survivable Network Design. Vajanapoom, Korn. 25 September 2008.
Communication networks are part of the critical infrastructure upon which society and the economy depend; it is therefore crucial for communication networks to survive failures and physical attacks in order to provide critical services. Survivability techniques are deployed to ensure the functionality of communication networks in the face of failures. The basic approach to designing survivable networks is that, given a survivability technique (e.g., link protection or path protection), the network is designed to survive a set of predefined failures (e.g., all single-link failures) at minimum cost. A hidden assumption in this design approach, however, is that sufficient funds are available to protect against all predefined failures, which might not be the case in practice, as network operators may have a limited budget for improving network survivability. To overcome this limitation, this dissertation proposes a new approach to designing survivable networks, namely risk-based survivable network design, which integrates risk analysis techniques into an incremental network design procedure with budget constraints.
In the risk-based design approach, the basic design problem is the following: given a working network and a fixed budget, how best to allocate the budget for deploying a survivability technique in different parts of the network based on risk. The term risk captures two related quantities: the likelihood of a failure or attack, and the amount of damage it causes. Various designs with different risk-based objectives are considered, for example minimizing the expected damage, minimizing the maximum damage, and minimizing a measure of the variability of the damage that could occur in the network.
In this dissertation, a design methodology for the proposed risk-based survivable network design approach is presented. The design problems are formulated as Integer Programming (IP) models, and to scale the solution of these models, greedy heuristic algorithms are developed. Numerical results and analysis illustrating different risk-based designs are presented.
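A greedy heuristic in the spirit described (minimizing expected damage under a budget) can be sketched as follows: rank candidate links by expected-damage reduction per unit protection cost, and buy protection down the list until the budget runs out. The link data and cost figures are hypothetical; this is an illustration of the heuristic style, not the dissertation's algorithm.

```python
def greedy_protect(links, budget):
    # links: {name: (p_fail, damage, protect_cost)}.
    # Rank by expected-damage reduction per dollar of protection cost.
    ranked = sorted(links.items(),
                    key=lambda kv: kv[1][0] * kv[1][1] / kv[1][2],
                    reverse=True)
    protected, spent = [], 0.0
    for name, (p, dmg, cost) in ranked:
        if spent + cost <= budget:
            protected.append(name)
            spent += cost
    # Residual risk = expected damage over the still-unprotected links.
    residual = sum(p * dmg for name, (p, dmg, _) in links.items()
                   if name not in protected)
    return protected, residual

links = {  # hypothetical links: (failure prob, damage, protection cost)
    "A-B": (0.02, 1000.0, 10.0),
    "B-C": (0.01, 5000.0, 40.0),
    "C-D": (0.05, 200.0, 30.0),
}
protected, residual_risk = greedy_protect(links, budget=50.0)
```

With a budget of 50, the heuristic protects A-B and B-C and leaves C-D's expected damage as residual risk.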
|
116 |
On Detection Mechanisms and Their Performance for Packet Dropping Attack in Ad Hoc Networks. Anusas-amornkul, Tanapat. 11 September 2008.
Ad hoc networking has received considerable attention in the research community for enabling seamless communications without an existing network infrastructure. However, such networks are not designed with security protection in mind, and they are prone to several security attacks. One simple attack is the packet dropping attack, in which a malicious node drops all data packets while participating normally in routing information exchange. This attack is easy to deploy and can significantly reduce the throughput of ad hoc networks.
In this dissertation, we study this problem through analysis and simulation. The packet dropping attack can result from the behavior of a selfish node or from malicious nodes launching blackhole or wormhole attacks; we are interested only in detecting the attack, not in its causes. For simple static ad hoc networks, we present an analysis of the throughput drop due to this attack, along with the improvement obtained when it is mitigated. A watchdog mechanism and a newly proposed "cop" mechanism are studied for mitigating the throughput degradation after the attack is detected. The watchdog mechanism typically has to be implemented in every node in the network. The cop mechanism is similar, except that only a few nodes opportunistically detect malicious nodes instead of all nodes performing this function. For multiple flows in static and mobile ad hoc networks, simulations are used to study and compare both mechanisms. The study shows that the cop mechanism can improve the throughput of the network while reducing the detection load and complexity for the other nodes.
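The watchdog idea can be sketched in a few lines: after handing a packet to a neighbor, a node listens for the neighbor's retransmission and flags the neighbor once its observed drop ratio exceeds a threshold. The threshold, sample minimum, and class shape below are illustrative assumptions, not the dissertation's parameters.

```python
class Watchdog:
    """Per-neighbor forwarding monitor (simplified sketch)."""
    def __init__(self, threshold=0.5, min_samples=10):
        self.threshold = threshold
        self.min_samples = min_samples
        self.sent = {}       # packets handed to each neighbor
        self.forwarded = {}  # packets overheard being retransmitted

    def observe(self, neighbor, overheard_forwarding):
        self.sent[neighbor] = self.sent.get(neighbor, 0) + 1
        if overheard_forwarding:
            self.forwarded[neighbor] = self.forwarded.get(neighbor, 0) + 1

    def is_malicious(self, neighbor):
        n = self.sent.get(neighbor, 0)
        if n < self.min_samples:
            return False  # not enough evidence yet
        drop_ratio = 1 - self.forwarded.get(neighbor, 0) / n
        return drop_ratio > self.threshold

wd = Watchdog()
for _ in range(10):
    wd.observe("node7", overheard_forwarding=False)  # node7 drops everything
for _ in range(10):
    wd.observe("node3", overheard_forwarding=True)   # node3 behaves
```

The cop variant differs mainly in deployment: only a few opportunistic nodes run this monitor instead of every node.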
|
117 |
Channel Access Management in Data Intensive Sensor Networks. Lin, Chih-kuang. 11 September 2008.
There are considerable challenges for channel access in Data Intensive Sensor Networks (DISNs), which support data-intensive applications such as Structural Health Monitoring (SHM). As the data load increases, considerable degradation of the key performance parameters of such sensor networks is observed: the successful packet delivery ratio drops due to frequent collisions and retransmissions, and the data glut increases latency and overall energy consumption. Given the severe limitations on sensor node resources such as battery power, excessive transmissions in response to sensor queries can lead to premature network death, and beyond a certain load threshold the performance characteristics of traditional WSNs become unacceptable. Prior research indicates that the successful packet delivery ratio in 802.15.4 networks can drop from 95% to 55% as the offered load increases from 1 packet/sec to 10 packets/sec. This result, together with the fact that sensors in an SHM system commonly generate 6-8 packets/sec of vibration data, makes it important to design appropriate channel access schemes for such data-intensive applications.
In this work, we address the problem of significant performance degradation in a special-purpose DISN. Our specific focus is on the medium access control (MAC) layer, since it gives fine-grained control over channel access and over reducing energy waste. The goal of this dissertation is to design and evaluate a suite of channel access schemes that ensure graceful performance degradation in special-purpose DISNs as the network traffic load increases.
First, we present a case study that investigates two distinct MAC proposals, one based on random access and one on scheduled access. The results of the case study motivate the development of hybrid access schemes. Next, we introduce novel hybrid channel access protocols for DISNs, ranging from a simple randomized transmission scheme that is robust under channel and topology dynamics to one that utilizes limited topological information about neighboring sensors to minimize collisions and energy waste. These protocols combine randomized transmission with heuristic scheduling to alleviate the network performance degradation caused by excessive collisions and retransmissions. We then propose a grid-based access scheduling protocol for mobile DISNs that is scalable and decentralized and that handles sensor mobility efficiently, with acceptable data loss and limited overhead. Finally, we extend the randomized transmission protocol from the hybrid approaches into an adaptable probability-based data transmission method. This work combines probabilistic transmission with heuristics, i.e., Latin squares and a grid network, to tune the transmission probabilities of sensors and thus meet specific performance objectives in DISNs. We evaluate all of the proposed protocols analytically and through simulation.
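The core trade-off behind randomized transmission can be shown with a tiny slotted-access simulation: each sensor transmits in a slot with probability p, a slot succeeds only when exactly one sensor transmits, so throughput n·p·(1-p)^(n-1) peaks near p = 1/n. The simulation parameters are illustrative and unrelated to the dissertation's protocols, which additionally use scheduling heuristics.

```python
import random

def success_rate(n_sensors, p, slots=20000, seed=1):
    # Fraction of slots in which exactly one sensor transmits (a success);
    # two or more simultaneous senders count as a collision.
    rng = random.Random(seed)
    ok = 0
    for _ in range(slots):
        senders = sum(rng.random() < p for _ in range(n_sensors))
        if senders == 1:
            ok += 1
    return ok / slots

# With 10 sensors, p near 1/n maximizes successes; larger p collides.
near_optimal = success_rate(10, 0.10)   # expect about 10*0.1*0.9^9 ~ 0.39
too_aggressive = success_rate(10, 0.50)
```

This collision penalty is exactly what the hybrid schemes mitigate by layering heuristic scheduling on top of the randomized transmissions.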
|
118 |
Sync & Sense Enabled Adaptive Packetization VoIP. Ngamwongwattana, Boonchai. 29 June 2007.
The quality and reliability problems of VoIP stem from the fact that VoIP relies on the network to transport voice packets. The inherent problem is a mismatch between VoIP and the network: VoIP has strict bandwidth, delay, and loss requirements, but the network (particularly a best-effort service network) cannot guarantee them. One solution is to enhance VoIP with adaptive-rate control, called adaptive-rate VoIP. Adaptive-rate VoIP can detect the state of the network and adjust its transmission accordingly; this gives VoIP the intelligence to optimize its performance and makes it resilient and robust to the service offered by the network. The objective of this dissertation is to develop an adaptive-rate VoIP system, and we take a comprehensive approach to its study and development. Adaptive-rate VoIP is generally composed of three components: rate adaptation, network state detection, and adaptive-rate control. In the rate adaptation component, we study optimized packetization, which can serve as an alternative means of rate adaptation. An advantage is that rate adaptation becomes independent of the speech coder, so an adaptive-rate VoIP system can be based on any constant-bitrate speech coder. The study shows that VoIP performance is primarily affected by three factors: packetization, network load, and the significance of VoIP traffic; optimizing packetization allows us to ensure the highest possible performance. In the network state detection component, we propose a novel measurement methodology called Sync & Sense of periodic streams. Sync & Sense is unique in that it can virtually synchronize the transmission and reception timing of a VoIP session without requiring synchronized clocks. Simulation results show that Sync & Sense can accurately measure one-way network delay.
Other benefits of Sync & Sense include the ability to estimate the available network bandwidth and the full spectrum of delays of the VoIP session. In the adaptive-rate control component, we consider the design choices and develop an adaptive-rate control that makes use of the first two components. The integration of the three components yields a novel adaptive-rate VoIP system called Sync & Sense Enabled Adaptive Packetization VoIP. Simulation results show that our adaptive VoIP can optimize performance under any given network condition and delivers better performance than traditional VoIP. They also demonstrate that it possesses several desirable properties: fast response to changing network conditions, aggressiveness in competing for its needed share of bandwidth, TCP-friendliness, and fair bandwidth allocation.
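A back-of-envelope calculation shows why packetization works as a rate-adaptation knob: a longer packetization interval amortizes the fixed per-packet header over more voice payload, lowering the wire bitrate of a constant-bitrate coder without touching the coder itself. The 40-byte header is the standard IPv4+UDP+RTP overhead; the coder rate and intervals are illustrative, and this arithmetic is background for the abstract rather than the dissertation's control law.

```python
HEADER_BYTES = 40  # IPv4 (20) + UDP (8) + RTP (12)

def wire_bitrate_kbps(codec_kbps, packet_interval_ms):
    # Total on-the-wire bitrate for a constant-bitrate coder whose output
    # is packetized every packet_interval_ms milliseconds.
    payload_bytes = codec_kbps * 1000 / 8 * packet_interval_ms / 1000
    packets_per_sec = 1000 / packet_interval_ms
    return (payload_bytes + HEADER_BYTES) * 8 * packets_per_sec / 1000

# A 64 kbps coder (e.g., G.711) at two packetization intervals:
r20 = wire_bitrate_kbps(64, 20)  # 20 ms frames -> 80 kbps on the wire
r60 = wire_bitrate_kbps(64, 60)  # 60 ms frames -> fewer headers per second
```

Stretching the interval from 20 ms to 60 ms cuts the wire rate by roughly 10 kbps at the cost of added packetization delay, which is the trade-off an adaptive controller navigates.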
|
119 |
A Location Fingerprint Framework Towards Efficient Wireless Indoor Positioning Systems. Swangmuang, Nattapong. 08 January 2009.
The location of mobile computers, potentially indoors, is essential information for enabling location-aware applications in wireless pervasive computing. The popularity of wireless local area networks (WLANs) inside and around buildings makes positioning systems based on readily available received signal strength (RSS) from access points (APs) desirable. The fingerprinting technique associates location-dependent characteristics, such as RSS values from multiple APs, with a location (a location fingerprint) and uses these characteristics to infer the location. The collection of RSS fingerprints from different locations is stored in a database called a radio map, which is later compared against an observed RSS sample vector to estimate the mobile station's (MS's) location.
An important challenge for location fingerprinting is how to efficiently collect fingerprints and construct an effective radio map for different indoor environments. In addition, analytical models for evaluating and predicting the precision of indoor positioning systems based on location fingerprinting are lacking. In this dissertation, we provide a location fingerprint framework that enables the construction of efficient wireless indoor positioning systems. We develop a new analytical model that employs a proximity graph to predict the performance of such systems. The model approximates the probability distribution of the error distance given an RSS location fingerprint database and its associated statistics, and it allows a system designer to analyze the internal structure of location fingerprints. The analytical model is employed to identify and eliminate unnecessary location fingerprints stored in the radio map, thereby saving computation during location estimation. Using location fingerprint properties such as clustering is also shown to reduce computational effort and yield a more scalable model. Finally, by comparing actual measurements with the analytical results, we give a useful guideline for collecting fingerprints.
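The estimation step that the radio map supports can be sketched as a nearest-neighbor search in RSS signal space: return the location of the stored fingerprint closest to the observed RSS vector. The radio-map values below are hypothetical, and this minimal estimator stands in for the richer analysis (proximity graphs, clustering) the dissertation develops on top of it.

```python
import math

def locate(radio_map, observed):
    # Nearest-neighbor location estimate: pick the stored fingerprint
    # with the smallest Euclidean distance to the observed RSS vector.
    # radio_map: {location: (rss_ap1, rss_ap2, ...)} in dBm.
    return min(radio_map, key=lambda loc: math.dist(radio_map[loc], observed))

radio_map = {  # hypothetical fingerprints from three APs (dBm)
    "room101": (-40, -70, -80),
    "room102": (-65, -45, -75),
    "hallway": (-55, -55, -60),
}
estimate = locate(radio_map, observed=(-42, -68, -79))
```

Pruning fingerprints that are never the nearest neighbor of any plausible observation is precisely what makes the radio map smaller and the search cheaper.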
|
120 |
SIGNALING OVERLOAD CONTROL FOR WIRELESS CELLULAR NETWORKS. Sasanus, Saowaphak. 16 January 2009.
As the worldwide cellular phone market grows, many subscribers have come to rely on cellular phone services. In catastrophes or mass call-in situations, the load can exceed what the cellular network can support, and the entire network may become completely non-functional. This raises serious concerns about the survivability of wireless cellular networks and their ability to provide necessary services, such as 911 calls, in those circumstances. Under high load, overload control must be deployed to reserve network resources for emergency traffic and maintenance services. Over the past several years, many catastrophes have revealed the deficiencies of the existing overload control mechanisms in cellular networks, and improvements are needed to cope with unexpected situations. Because a key to the survivability of wireless cellular networks lies in the signaling services from database servers that support a call connection throughout its duration (e.g., for monitoring users' locations and supplying authentication codes for secure communications), this dissertation focuses on overload control at the database servers.
Since the loss of different signaling services affects a user's perception differently, the proposed overload control provides differentiated and guaranteed classes of signaling service. Specifically, multi-class token rate controls are proposed because of their flexibility across network configurations and their advantages over other controls such as percentage blocking and call gapping. The concept of adaptive control decisions is used so that the proposed controls react quickly to changes in load. A simulation-based performance evaluation of the proposed controls is conducted and compared with existing controls, and it shows that the proposed controls outperform the existing multi-class token-based controls for two reasons. First, the proposed controls use adaptive resource sharing that guarantees a lower bound, with the percentage of resource sharing among classes set adaptively. Existing token rate controls either distribute resources among classes in static ratios or share them completely: static ratios guarantee the quality of service within each class but lower the total utilization of the server, whereas complete resource sharing among classes may cause large load fluctuations in each class. Second, the proposed controls use the novel concept of integrating information about the availability of radio resources into the control decision, allowing servers to avoid spending their resources on signaling that might later be dropped because radio resources are unavailable.
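The design point described, a guaranteed per-class floor plus an adaptively shared remainder, can be sketched as a two-class token bucket. The rates, class names, and share split below are assumptions for illustration, not the dissertation's tuned control.

```python
class MultiClassTokenBucket:
    """Each class gets a guaranteed token rate (its lower bound);
    leftover server capacity is split in adaptive proportions."""
    def __init__(self, total_rate, floors, shares):
        self.total_rate = total_rate
        self.floors = floors   # guaranteed token rate per class
        self.shares = shares   # adaptive split of the spare capacity
        self.tokens = {c: 0.0 for c in floors}

    def refill(self, dt=1.0):
        spare = self.total_rate - sum(self.floors.values())
        for c in self.floors:
            self.tokens[c] += (self.floors[c] + self.shares[c] * spare) * dt

    def admit(self, cls):
        # One token admits one signaling message; otherwise throttle.
        if self.tokens[cls] >= 1.0:
            self.tokens[cls] -= 1.0
            return True
        return False

tb = MultiClassTokenBucket(
    total_rate=100.0,
    floors={"emergency": 30.0, "normal": 20.0},
    shares={"emergency": 0.8, "normal": 0.2},  # shares adapted from load
)
tb.refill()
admitted = sum(tb.admit("emergency") for _ in range(100))
```

Here emergency traffic gets its 30-token floor plus 80% of the 50 spare tokens, so 70 of 100 offered messages are admitted in the interval, while the normal class keeps its guaranteed 20 plus its share; re-tuning `shares` each interval is the adaptive part.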
|