121
COMPLEMENTING THE GSP ROUTING PROTOCOL IN WIRELESS SENSOR NETWORKS
Calle Torres, Maria Gabriela, 11 May 2009
Gossip-Based Sleep Protocol (GSP) is a routing protocol in the flooding family whose only overhead is generated by duplicate packets. GSP has no other sources of overhead or additional information requirements common in routing protocols, such as routing packets, geographical information, addressing, or explicit route computation. Because of its simple functionality, GSP is a candidate routing protocol for Wireless Sensor Networks. However, previous research showed that GSP consumes the majority of the network's energy by keeping the nodes' radios on and ready to receive even when there are no transmissions, a situation known as Idle Listening. Complementing GSP means creating additional protocols that make use of GSP's particular characteristics to improve performance without additional overhead. The research analyzes the performance of GSP with different topologies, numbers of hops from source to destination, and node densities, and presents an alternative protocol that complements GSP by decreasing idle listening, the number of duplicate packets in the network, and overall energy consumption. The study compared the results of this alternative protocol, MACGSP6, to a protocol stack proposed for Wireless Sensor Networks, Sensor MAC (S-MAC) with Dynamic Source Routing (DSR), showing the advantages and disadvantages of the different approaches.
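As background, here is a minimal sketch of the basic gossip forwarding rule that GSP-style protocols build on (not the dissertation's MACGSP6 protocol): a node that hears a packet for the first time rebroadcasts it with probability p and otherwise stays silent, which is why duplicate packets are the only routing overhead. The grid topology and the value of p are illustrative assumptions.

```python
import random

def gossip_broadcast(neighbors, source, p=0.65, seed=1):
    """Simulate one gossip-style flood: each node that receives the packet
    for the first time rebroadcasts it with probability p.

    neighbors: dict mapping node -> list of neighboring nodes.
    Returns (set of nodes reached, number of transmissions).
    """
    rng = random.Random(seed)
    received = {source}
    frontier = [source]          # nodes that will transmit this round
    transmissions = 0
    while frontier:
        next_frontier = []
        for node in frontier:
            transmissions += 1   # this node broadcasts once
            for nb in neighbors[node]:
                if nb not in received:
                    received.add(nb)
                    # a fresh receiver forwards with probability p,
                    # otherwise it stays quiet (saving energy)
                    if rng.random() < p:
                        next_frontier.append(nb)
        frontier = next_frontier
    return received, transmissions

# Illustrative 4x4 grid topology (an assumption, not from the dissertation)
grid = {(x, y): [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                 if 0 <= x + dx < 4 and 0 <= y + dy < 4]
        for x in range(4) for y in range(4)}
reached, tx = gossip_broadcast(grid, source=(0, 0))
print(f"reached {len(reached)}/16 nodes with {tx} transmissions")
```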
122
Towards An Optimal Core Optical Network Using Overflow Channels
Menon, Pratibha, 12 May 2009
This dissertation is based on a traditional circuit-switched core WDM network that is supplemented by a pool of wavelengths carrying optical burst-switched overflow data. These overflow channels absorb channel overflows from the traditional circuit-switched network and also provide wavelengths for newer, high-bandwidth applications. The channel overflows that appear at the overflow layer as optical bursts are carried either over a permanently configured primary light path or over a burst-switched, best-effort path while traversing the core network.
At every successive hop along the best-effort path, the optical bursts attempt to enter a primary light path to their destination. Thus, each node in the network is a Hybrid Node that provides entry for optical bursts to a hybrid path made of a point-to-point, pre-provisioned light path or a burst-switched path. The dissertation's main outcome is to determine the cost optimality of a Hybrid Route, to analyze the cost-effectiveness of a Hybrid Node, and to compare them to a route and a node performing non-hybrid operation, respectively. Finally, an example network that consists of several Hybrid Routes and Hybrid Nodes is analyzed for its cost-effectiveness.
Cost-effectiveness and optimality of a Hybrid Route are tested for their dependency on the mean and variance of the channel demands offered to the route, the number of sources sharing the route, and the relative cost of a primary and an overflow path, called the path cost ratio. An optimality condition that relates the effect of traffic statistics to the path cost ratio is analytically derived and tested. Cost-effectiveness of a Hybrid Node is compared among the different switching fabric architectures used to construct it; Broadcast-Select, Benes, and Clos architectures are each considered with different degrees of chip integration. An example Hybrid Network consisting of several Hybrid Routes and Hybrid Nodes is found to be cost-effective, with the result dependent on the ratio of switching to transport costs.
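The dissertation derives its optimality condition analytically; as a purely illustrative stand-in (my own simplified model, not the dissertation's), the sketch below dimensions permanent primary wavelengths against a normally distributed channel demand, charges the expected overflow to best-effort burst capacity at a higher per-unit cost, and sweeps the split. The cheapest split lands roughly where the overflow probability equals the inverse of the path cost ratio. The demand distribution and all cost numbers are assumptions.

```python
from math import sqrt, pi, exp, erf

def expected_overflow(c, mu, sigma):
    """E[max(D - c, 0)] for demand D ~ Normal(mu, sigma^2)."""
    z = (c - mu) / sigma
    phi = exp(-z * z / 2) / sqrt(2 * pi)      # standard normal pdf
    Phi = 0.5 * (1 + erf(z / sqrt(2)))        # standard normal cdf
    return sigma * (phi - z * (1 - Phi))

def hybrid_route_cost(c_primary, mu, sigma, cost_primary=1.0, cost_overflow=1.6):
    """Cost of provisioning c_primary permanent wavelengths plus paying for the
    expected overflow carried on best-effort burst channels.
    At the optimum, adding one more primary wavelength stops paying off when
    cost_overflow * P(D > c) drops below cost_primary, i.e. roughly when
    P(D > c) = 1 / path_cost_ratio."""
    return c_primary * cost_primary + expected_overflow(c_primary, mu, sigma) * cost_overflow

# Sweep the number of primary wavelengths and pick the cheapest split.
mu, sigma = 40, 8                              # illustrative demand statistics
costs = {c: hybrid_route_cost(c, mu, sigma) for c in range(20, 61)}
best = min(costs, key=costs.get)
print(f"optimal primary wavelengths: {best}, total cost: {costs[best]:.2f}")
```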
123
Technical Architectures and Economic Conditions for Viable Spectrum Trading Markets
Caicedo Bastidas, Carlos Enrique, 24 July 2009
The growing interest of telecommunication service providers in offering wireless-based services has spurred a correspondingly growing demand for wireless spectrum. This has made the tasks related to spectrum management more complicated, especially those related to the allocation of spectrum between competing uses and users. Economically efficient spectrum allocation and assignment requires up-to-date information on the value of spectrum. Consequently, many spectrum management authorities are developing, or have developed, regulations to increase the use of market-based mechanisms for spectrum management, thus reducing their emphasis on command-and-control methods.
Spectrum trading (ST) is a market-based mechanism in which buyers and sellers determine the assignment of spectrum and its uses. That is, it can address both the allocation and assignment aspects of spectrum use. The assignment of spectrum licenses through spectrum trading markets can be used as a mechanism to grant access to spectrum to those who value it most and can use it most efficiently. For it to be optimally effective, a secondary market must exist that allows spectrum users to choose optimally between capital investment and spectrum use on a continuous basis, not just at the time of initial assignment.
This research identifies the different technical architectures for ST markets and studies the possible behaviors and interactions in spectrum trading markets with the use of Agent-based Computational Economics (ACE). The research objective is to understand and determine the conditions that lead to viable spectrum trading markets. This analysis is valuable because it can help regulators prepare for plausible future scenarios and create policy instruments that promote these markets. It is also of value to wireless service providers, as they can use the results of this work to understand the economic behavior of different ST market implementations and prepare strategies to participate in these markets.
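The abstract does not describe the agent models themselves, so the following is only a generic illustrative sketch of the agent-based computational economics style of experiment: license holders and prospective entrants with private valuations trade licenses through a simple double-auction clearing rule over several rounds. The agent names, valuations, and clearing rule are assumptions for illustration, not the dissertation's market architectures.

```python
import random

class SpectrumAgent:
    """A trader with a private per-block valuation of spectrum."""
    def __init__(self, name, valuation, holds_license):
        self.name = name
        self.valuation = valuation
        self.holds_license = holds_license

def clear_market(agents, rng):
    """One trading round of a simple double auction: sellers ask slightly above
    their valuation, buyers bid slightly below, and compatible bid/ask pairs
    trade at the midpoint price."""
    sellers = sorted((a for a in agents if a.holds_license), key=lambda a: a.valuation)
    buyers = sorted((a for a in agents if not a.holds_license),
                    key=lambda a: a.valuation, reverse=True)
    trades = []
    for s, b in zip(sellers, buyers):
        ask = s.valuation * (1 + 0.05 * rng.random())
        bid = b.valuation * (1 - 0.05 * rng.random())
        if bid >= ask:
            price = (ask + bid) / 2
            s.holds_license, b.holds_license = False, True   # license changes hands
            trades.append((s.name, b.name, round(price, 2)))
    return trades

rng = random.Random(0)
agents = [SpectrumAgent(f"incumbent{i}", rng.uniform(10, 30), True) for i in range(5)] + \
         [SpectrumAgent(f"entrant{i}", rng.uniform(15, 45), False) for i in range(5)]
for round_no in range(3):
    print(f"round {round_no}:", clear_market(agents, rng))
```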
124
Protocols for Detection and Removal of Wormholes for Secure Routing and Neighborhood Creation in Wireless Ad Hoc Networks
Hayajneh, Thaier, 26 August 2009
Wireless ad hoc networks are suitable, and sometimes the only, solution for several applications. Many applications, particularly those in military and critical civilian domains (such as battlefield surveillance and emergency rescue), require that ad hoc networks be secure and stable. In fact, security is one of the main barriers to the extensive use of ad hoc networks in many operations. The primary objective of this dissertation is to propose protocols which will protect ad hoc networks from wormhole attacks, one of the most devastating security attacks, and to improve network stability. Protocols that depend solely on cryptography techniques such as authentication and encryption can prevent or detect several types of security attacks; however, they will not be able to detect or prevent a wormhole attack. This attack on routing in ad hoc networks is also considered to be the main threat against neighborhood discovery protocols. Most of the proposed mechanisms designed to defend against this type of attack are based on location information or time measurements, or require additional hardware or a central entity. Other protocols that rely on connectivity or neighborhood information cannot successfully detect all of the various types and cases of wormhole attacks. In the first part of this dissertation, we present a simple, yet effective protocol to detect wormhole attacks along routes in ad hoc networks. The protocol is evaluated using analysis and simulations. In the second part, we present a secure neighbor creation protocol that can securely discover the neighbors of a node in ad hoc networks, and detect and remove wormhole links, if they exist. The proposed protocols do not require any location information, time synchronization, or special hardware to detect wormhole attacks. To the best of our knowledge, this is the first protocol that makes use of cooperation rules between honest nodes. Use of such rules will reduce the overhead associated with the number of checks to be performed in order to detect wormholes and to create a secure neighborhood. This is also the first protocol, to our knowledge, that addresses the complete removal of bogus links without removing legal links.
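The dissertation's detection protocols rely on cooperation rules between honest nodes, which the abstract does not detail; the sketch below therefore shows only a generic connectivity-based heuristic of the kind the text contrasts with: flag claimed links whose endpoints share suspiciously few common neighbors, since a wormhole typically glues together two nodes that are actually far apart. The topology and threshold are illustrative assumptions, and this is not the proposed protocol.

```python
def suspicious_links(adjacency, min_common=1):
    """Flag claimed neighbor links whose endpoints share fewer than
    `min_common` other common neighbors. In a radio-range (geometric)
    topology, genuine neighbors usually share several neighbors, while a
    wormhole connects two distant nodes that share none.

    adjacency: dict node -> set of claimed neighbors.
    Returns a list of (u, v) links to double-check with further tests.
    """
    flagged = []
    for u, nbrs in adjacency.items():
        for v in nbrs:
            if u < v:  # examine each undirected link once
                common = (adjacency[u] & adjacency[v]) - {u, v}
                if len(common) < min_common:
                    flagged.append((u, v))
    return flagged

# Two dense clusters joined by a wormhole link ("a2", "b2"), illustrative only.
adj = {
    "a1": {"a2", "a3"}, "a2": {"a1", "a3", "b2"}, "a3": {"a1", "a2"},
    "b1": {"b2", "b3"}, "b2": {"b1", "b3", "a2"}, "b3": {"b1", "b2"},
}
print(suspicious_links(adj))   # [('a2', 'b2')] under these assumptions
```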
125
HYBRID MODELING OF THE DYNAMIC BEHAVIOR OF MOBILE AD-HOC NETWORKS
Tipmongkonsilp, Siriluck, 02 September 2009
The performance of mobile ad-hoc networks is normally studied via simulation over a fixed time horizon using a steady-state type of statistical analysis procedure. However, due to the dynamic nature of the network topology, such an approach may be inappropriate in many cases, as the network may spend most of the time in a transient or nonstationary state. The objective of this dissertation is to develop a performance modeling framework and detailed techniques for analyzing the time-varying performance of mobile ad-hoc networks.
Our approach is a performance modeling tool for queueing analysis using a hybrid of discrete event simulation and numerical techniques. Network queues are modeled using fluid-flow based differential equations, which can be solved with any standard numerical integration method, while node connectivity, which represents topology changes, is incorporated into the model using either discrete event simulation techniques or stochastic modeling of adjacency matrix elements. The hybrid fluid-based approach is believed to be an alternative that can resolve certain issues in current simulators and provide flexibility in modeling more complex networks by integrating additional nonstationary effects to add a higher level of fidelity to the proposed model. Numerical and simulation experiments show that the new approach can provide reasonably accurate results without sacrificing a large amount of computational resources.
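As a rough illustration of this hybrid idea (a minimal sketch under assumed rates and topology, not the dissertation's model), the code below integrates fluid-flow queue equations with a simple Euler step while a time-varying adjacency matrix stands in for the discrete-event or stochastic connectivity component.

```python
import numpy as np

def simulate_fluid_queues(adjacency_schedule, arrival_rate, service_rate,
                          routing=0.5, t_end=20.0, dt=0.01):
    """Minimal fluid-flow queue model.

    Each node i has a fluid queue q_i with
        dq_i/dt = lambda_i + inflow_i - r_i,
    where r_i is the departure rate (the full service rate when the queue is
    non-empty, otherwise just enough to match the input), and inflow_i is the
    fraction `routing` of each connected neighbor's departures forwarded to i.
    adjacency_schedule(t) supplies the time-varying connectivity matrix,
    standing in for the discrete-event / stochastic topology component.
    """
    n = len(arrival_rate)
    q = np.zeros(n)
    depart = np.minimum(service_rate, arrival_rate)   # initial departure rates
    history = []
    for step in range(int(t_end / dt)):
        A = adjacency_schedule(step * dt)
        inflow = A.T @ (routing * depart)             # traffic forwarded by neighbors
        total_in = arrival_rate + inflow
        # busy queues drain at full service rate; empty queues pass input through
        depart = np.where(q > 0, service_rate, np.minimum(service_rate, total_in))
        q = np.maximum(q + (total_in - depart) * dt, 0.0)
        history.append(q.copy())
    return np.array(history)

# Illustrative 3-node chain whose middle link drops at t = 10 (node mobility).
def chain_with_link_break(t):
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
    if t > 10.0:
        A[1, 2] = A[2, 1] = 0.0                       # link 1-2 breaks at t = 10
    return A

hist = simulate_fluid_queues(chain_with_link_break,
                             arrival_rate=np.array([0.8, 0.3, 0.1]),
                             service_rate=np.array([1.0, 1.0, 1.0]))
print("queue lengths at t = 20:", np.round(hist[-1], 3))
```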
126
TOKEN-BASED APPROACH FOR SCALABLE TEAM COORDINATION
Xu, Yang, 31 January 2008
To form a cooperative multiagent team, autonomous agents are required to harmonize their activities and make the best use of exclusive resources to achieve their common goal. In addition, to handle uncertainty and quickly respond to external environmental events, they should share knowledge and sensor information. Unlike agents in small teams, agents in a scalable team must limit the amount of their communication while maximizing team performance. Communication decisions are critical to scalable-team coordination because agents should target their communications, but these decisions cannot be supported by a precise model or by complete team knowledge.
The hypothesis of my thesis is that local routing of tokens encapsulating discrete elements of control, based only on decentralized local probability decision models, will lead to efficient scalable coordination among several hundred agents. In my research, coordination controls, including all domain knowledge, tasks, and exclusive resources, are encapsulated into tokens. By passing tokens around, agents transfer the team controls encapsulated in the tokens. The team benefits when a token is passed to an agent who can make use of it, but communications incur costs. Hence, no single agent has sole responsibility for any shared decision. The key problem lies in how agents make the correct decisions to target communications and pass tokens so that they will potentially benefit the team most when communication costs are considered.
My research on the token-based coordination algorithm starts with an investigation of the random walk of token movement. I found that even a small increase in the probability that agents make the right decision when passing a token can greatly enhance the overall efficiency of token movement. Moreover, by modeling token movements as a Markov chain, I found that the efficiency of passing tokens varies significantly across network topologies.
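To make that observation concrete, here is a minimal simulation (my own illustrative toy, not the dissertation's model) of a single token doing a biased random walk toward the agent who needs it: raising the per-hop probability of a correct pass from 0.5 to 0.6 or 0.7 cuts the expected number of hops, and hence communications, dramatically. The line topology and probability values are assumptions.

```python
import random

def hops_until_delivered(n_agents, p_right, rng, max_hops=10_000):
    """Random walk of one token over a line of agents: at each hop the current
    holder passes the token one step closer to the agent who needs it with
    probability p_right, otherwise one step away (reflecting at the start).
    Returns the number of hops until delivery (capped at max_hops)."""
    target = n_agents - 1
    position = 0
    for hop in range(1, max_hops + 1):
        if rng.random() < p_right:
            position += 1                    # informed decision: move toward target
        else:
            position = max(position - 1, 0)  # uninformed decision: drift away
        if position == target:
            return hop
    return max_hops

rng = random.Random(42)
for p in (0.50, 0.55, 0.60, 0.70):
    trials = [hops_until_delivered(50, p, rng) for _ in range(200)]
    print(f"p_right = {p:.2f}: average hops = {sum(trials) / len(trials):.0f}")
```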
My token-based algorithm starts with an investigation of each individual decision-theoretic agent. Although agents cannot act optimally under the uncertainties that exist in large multiagent teams, it is still feasible to build a probability model for each agent to pass tokens rationally. Specifically, this decision only allows an agent to pass tokens over an associate network in which only a few team members are considered as token receivers.
My proposed algorithm will build each agent's individual decision model based on all of its previously received tokens. This model will not require the complete knowledge of the team. The key idea is that I will make use of the domain relationships between pairs of coordination controls. Previously received tokens will help the receiver to infer whether the sender could benefit the team if a related token is received. Therefore, each token is used to improve the routing of other tokens, leading to a dramatic performance improvement when more tokens are added. By exploring the relationships between different types of coordination controls, an integrated coordination algorithm will be built, and an improvement of one aspect of coordination will enhance the performance of the others.
127
Causal Discovery of Dynamic Systems
Voortman, Mark Johannes, 25 January 2010
Recently, several philosophical and computational approaches to causality have used an interventionist framework to clarify the concept of causality [Spirtes et al., 2000, Pearl, 2000, Woodward, 2005]. The characteristic feature of the interventionist approach is that causal models are potentially useful in predicting the effects of manipulations. One of the main motivations of such an undertaking comes from humans, who seem to create sophisticated mental causal models that they use to achieve their goals by manipulating the world.
Several algorithms have been developed to learn static causal models from data that can be used to predict the effects of interventions [e.g., Spirtes et al., 2000]. However, Dash [2003, 2005] argued that when such equilibrium models do not satisfy what he calls the Equilibration-Manipulation Commutability (EMC) condition, causal reasoning with these models will be incorrect, making dynamic models indispensable. It is shown that existing approaches to learning dynamic models [e.g., Granger, 1969, Swanson and Granger, 1997] are unsatisfactory, because they do not perform a necessary search for hidden variables.
The main contribution of this dissertation is, to the best of my knowledge, the first provably correct learning algorithm that discovers dynamic causal models from data, which can then be used for causal reasoning even if the EMC condition is violated. The representation that is used for dynamic causal models is called Difference-Based Causal Models (DBCMs) and is based on Iwasaki and Simon [1994]. A comparison will be made to other approaches and the algorithm, called DBCM Learner, is empirically tested by learning physical systems from artificially generated data. The approach is also used to gain insights into the intricate workings of the brain by learning DBCMs from EEG data and MEG data.
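The abstract does not spell out the DBCM representation, so the sketch below is only an illustrative example of the kind of system involved: a damped spring written as difference equations, generating time-series data in which each variable is driven by lagged variables, and which settles to an equilibrium where that lagged structure is no longer visible in the data, the situation the EMC argument warns about. All parameters and the noise level are assumptions.

```python
import numpy as np

def simulate_spring(n_steps=500, dt=0.05, k=2.0, damping=0.8, force=1.0, seed=0):
    """Difference-equation dynamic model of a damped spring (illustrative only):
        x[t+1] = x[t] + v[t] * dt
        v[t+1] = v[t] + (force - k * x[t] - damping * v[t]) * dt + noise
    Position is caused by lagged velocity, and velocity by lagged position and
    velocity -- the kind of lagged structure a dynamic causal learner recovers.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    v = np.zeros(n_steps)
    for t in range(n_steps - 1):
        x[t + 1] = x[t] + v[t] * dt
        v[t + 1] = v[t] + (force - k * x[t] - damping * v[t]) * dt + rng.normal(0, 0.01)
    return x, v

x, v = simulate_spring()
print("equilibrium position ~", round(x[-100:].mean(), 3))   # settles near force/k = 0.5
print("equilibrium velocity ~", round(v[-100:].mean(), 3))   # settles near 0
# Once equilibrated, x is pinned near force/k and v near 0, so data sampled only
# at equilibrium hides the lagged causal structure -- the motivation for DBCMs.
```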
128
USING SOCIAL ANNOTATIONS TO IMPROVE WEB SEARCH
Choochaiwattana, Worasit, 03 June 2008
Web-based tagging systems, which include social bookmarking systems such as Delicious, have become increasingly popular. These systems allow participants to annotate or tag web resources. This research examined the use of social annotations to improve the quality of web searches. The research involved three components. First, social annotations were used to index resources. Two annotation-based indexing methods were proposed: annotation-based indexing and full text with annotation indexing. Second, social annotations were used to improve search result ranking. Six annotation-based ranking methods were proposed: Popularity Count, Propagate Popularity Count, Query Weighted Popularity Count, Query Weighted Propagate Popularity Count, Match Tag Count, and Normalized Match Tag Count. Third, social annotations were used to both index and rank resources. The results from the first experiment suggested that both static features and similarity features should be considered when using social annotations to re-rank search results. The results of the second experiment showed that using only annotations as a resource index may not be a good idea. Since social annotations can be viewed as high-level concepts of the content, combining them with the content of a resource can add important concepts to the resource. Finally, the results from the third experiment confirmed that combining annotation-based ranking of search results with annotation-based augmentation of the resource index produced a promising ranking of search results. It showed that social annotations could benefit web search.
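The abstract names the six ranking methods but not their formulas, so the following sketch gives one plausible reading (my assumption, not the dissertation's definition) of two of them, Match Tag Count and Normalized Match Tag Count: count how many tag occurrences on a resource match the query terms, optionally normalized by the resource's total number of tag occurrences.

```python
from collections import Counter

def match_tag_count(query_terms, resource_tags):
    """Number of tag occurrences on the resource that match a query term."""
    tags = Counter(t.lower() for t in resource_tags)
    return sum(count for tag, count in tags.items() if tag in query_terms)

def normalized_match_tag_count(query_terms, resource_tags):
    """Matching tag occurrences as a fraction of all tag occurrences."""
    total = len(resource_tags)
    return match_tag_count(query_terms, resource_tags) / total if total else 0.0

# Illustrative bookmarked resources with user-supplied tags (placeholder data).
resources = {
    "python.org":     ["python", "programming", "language", "python", "reference"],
    "webmd.com":      ["health", "medicine", "reference"],
    "realpython.com": ["python", "tutorial", "programming"],
}
query = {"python", "tutorial"}
ranked = sorted(resources,
                key=lambda r: normalized_match_tag_count(query, resources[r]),
                reverse=True)
for r in ranked:
    print(r, round(normalized_match_tag_count(query, resources[r]), 2))
```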
129
Simulated Ecological Environments for Education: A Tripartite Model Framework of HCI Design Parameters for Situational Learning in Virtual Environments
Harrington, Maria C.R., 11 September 2008
While there are many studies on collaborative or guided scientific inquiry in real, virtual, and simulated environments, there are few that study the interplay between the design of the simulation and the user interface. The main research aim was to decompose the simulation and user interface into the design parameters that influence attention, curiosity, inquiry, and learning of scientific material and acts of creation for children. The research design investigates what tools support independent exploration of a space, enhance deep learning, and motivate scientific or creative inquiry. A major interest is in the role that ecological context plays in the perception of spatial information.
None of the prior work on learning in virtual environments considered a child-centric computer interaction framing, independent of pedagogy and focused on the impact of user interface parameters, such as image quality and navigational freedom. A major contribution of this research is the construction of the Virtual Trillium Trail, as it represents one square mile of biologically accurate scientific plot study data. It is a virtual environment based on statistical data visualization, not fantasy. It allowed for a highly realistic simulation and scientifically true-to-life visualization, as well as for a planned orthogonal contrast with exceptionally high internal validity in both system and statistical research design.
Of critical importance is evidence from the pilot study that virtual reality field trips may be used to prime students before, and to reinforce learning after, a real field trip. This research also showed transfer effects on in-situ learning activity, in both directions. Thus, it supports the claim that virtual environments may augment educational practices, not replace them, to maximize the overall learning impact. The other large contribution was in the activity analysis of the real field trip, where the Salamander Effect was observed as an environmental event, which opened a Teachable Moment event for the teacher, and which was then translated into a system design feature, a Salient Event in the user interface. A main part of this research is the importance of such events as ways to support intrinsic learning activity and leverage episodic memory.
The main empirical contribution to the design of educational virtual environments was produced by the 2 x 2 ANOVA with the factors of Visual Fidelity and Navigational Freedom, set to high and low levels, and the evidence of different effects on Knowledge Gained. The tool has an impact on intrinsic learning, which is measured here by a pre-test and a post-test on facts and concepts. A two-factor analysis of variance showed a significant effect of Visual Fidelity on Knowledge Gained, F(1,60) = 10.54, p = 0.0019. The High Visual Fidelity condition had a greater impact on Knowledge Gained (M = 30.95, SD = 14.76) than the Low Visual Fidelity condition (M = 19.99, SD = 13.39): photorealistic environments have a stronger impact on learning than cartoon versions. There was a significant interaction between Visual Fidelity and Navigational Freedom, F(1,60) = 4.85, p = 0.0315, with the largest impact on Knowledge Gained in the combined condition of High Visual Fidelity and High Navigational Freedom (M = 37.44, SD = 13.88). Thus, photorealistic, free-navigation virtual environments double learning when compared to cartoon versions, ceteris paribus.
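For readers who want to run this kind of analysis themselves, the sketch below shows how a 2 x 2 between-subjects ANOVA on Knowledge Gained could be computed with statsmodels; the data are randomly generated placeholders whose cell means only loosely echo the reported pattern, and are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder data: 16 participants per cell of the 2 x 2 design (64 total).
# Cell means are illustrative stand-ins, NOT the dissertation's data.
rng = np.random.default_rng(0)
cells = {("High", "High"): 37, ("High", "Low"): 25,
         ("Low", "High"): 20, ("Low", "Low"): 20}
rows = [{"fidelity": f, "freedom": n, "knowledge_gained": rng.normal(mean, 14)}
        for (f, n), mean in cells.items() for _ in range(16)]
df = pd.DataFrame(rows)

# Two-factor ANOVA with interaction: knowledge_gained ~ fidelity * freedom
model = ols("knowledge_gained ~ C(fidelity) * C(freedom)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```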
The next major contribution to the design of the user interface in educational virtual environments is the design and use of Salient Events as components to augment the virtual environment and to facilitate intrinsic inquiry into facts and concepts. A two-factor analysis of variance showed a significant effect of Visual Fidelity on Salient Event counts, F(1,60) = 4.35, p = 0.00413. The High Visual Fidelity condition had a greater impact on Salient Event counts (M = 14.46, SD = 6) than the Low Visual Fidelity condition (M = 11.31, SD = 6.37). Using High Visual Fidelity with High Navigational Freedom (showing a strong trend of F(1,60) = 3.23, p = 0.0773) to increase Salient Event counts is a critical design feature for educational virtual environments, especially since Salient Events are moderately positively correlated with Knowledge Gained (r = 0.455, N = 64, p = 0.000).
Emotional, affective, aesthetic, and subjective attitudes were investigated in the post-experience assessment of the main study on system and learning experience. Total Attitude is strongly positively and significantly correlated with Awe and Wonder (r = 0.727, N = 64, p = 0.000). Also important is the strong, positive, and significant correlation of Beauty with Awe and Wonder (r = 0.506, N = 64, p = 0.000). The only subjective emotion or attitude variable significantly correlated with Knowledge Gained was Awe and Wonder, with a slightly positive statistic (r = 0.273, N = 64, p = 0.000).
Future research will investigate the complexity and causality of such interactions between the child's mental model, the virtual environment, and the user interface in the form of regression equations, partial differential equations, and Markov models.
130
Competitive Learning Neural Network Ensemble Weighted by Predicted Performance
Ye, Qiang, 12 May 2010
Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic of Neural Network classifiers. Introducing a secondary output unit that receives different training signals for base networks in an ensemble can effectively promote diversity and improve ensemble performance. Here, a Competitive Learning Neural Network Ensemble is proposed in which a secondary output unit predicts the classification performance of the primary output unit in each base network. The networks compete with each other on the basis of classification performance and partition the stimulus space. The secondary units adaptively receive different training signals depending on the competition. As a result, each base network develops a "preference" over different regions of the stimulus space, as indicated by its secondary unit outputs. To form an ensemble decision, all base networks' primary unit outputs are combined and weighted according to the secondary unit outputs. The effectiveness of the proposed approach is demonstrated with experiments on one real-world and four artificial classification problems.
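The abstract describes the combination rule only at a high level, so the following is a minimal numpy sketch (my reading, with illustrative numbers) of the final decision step: each base network contributes class scores from its primary output unit plus a predicted-performance score from its secondary unit, and the ensemble weights the primary outputs by the normalized secondary outputs.

```python
import numpy as np

def ensemble_decision(primary_outputs, secondary_outputs):
    """Combine base-network predictions weighted by predicted performance.

    primary_outputs:   array of shape (n_networks, n_classes), class scores
                       from each base network's primary output unit.
    secondary_outputs: array of shape (n_networks,), each network's predicted
                       classification performance for this input (secondary unit).
    Returns the index of the winning class.
    """
    weights = np.asarray(secondary_outputs, dtype=float)
    weights = weights / weights.sum()                  # normalize predicted performance
    combined = weights @ np.asarray(primary_outputs)   # weighted sum over networks
    return int(np.argmax(combined))

# Three illustrative base networks scoring one input over three classes.
primary = [[0.70, 0.20, 0.10],    # network 0: confident in class 0
           [0.30, 0.60, 0.10],    # network 1: prefers class 1
           [0.25, 0.55, 0.20]]    # network 2: prefers class 1
secondary = [0.9, 0.3, 0.2]       # network 0 predicts it performs best here
print("ensemble picks class", ensemble_decision(primary, secondary))  # class 0
```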