671 |
Reliable Safety Broadcasting in Vehicular Ad hoc Networks using Network Coding
Hassanabadi, Behnam 09 January 2014 (has links)
We study the application of network coding to periodic safety broadcasting in Vehicular Ad hoc Networks. We design a sub-layer in the application layer of the WAVE architecture. Our design uses rebroadcasting of network-coded safety messages, which considerably improves the overall reliability. It also tackles the synchronized collision problem stated in the IEEE 1609.4 standard, as well as the congestion problem and vehicle-to-vehicle channel loss. We study how message repetition can be used to optimize reliability in combination with a simple congestion control algorithm. We analytically evaluate the application of network coding using a sequence of discrete phase-type distributions. Based on this model, a tight upper bound on the safety message loss probability is derived. Completion delay is defined as the delay until a node receives the messages of all of its neighbouring nodes. We provide asymptotic delay analysis and prove a general and a restricted, tighter asymptotic upper bound for the completion delay of random linear network coding.
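For reference, a discrete phase-type distribution is the distribution of the time to absorption of a finite Markov chain with one absorbing state; in standard generic notation (not necessarily the thesis's exact parameterization), its probability mass function can be written as

% Standard discrete phase-type PMF: \boldsymbol{\tau} is the initial probability vector over
% the transient states and T is the sub-stochastic matrix of transitions among them.
P(N = k) = \boldsymbol{\tau}\, T^{k-1}\, \mathbf{t}, \qquad k \ge 1, \qquad \mathbf{t} = (\mathbf{I} - T)\,\mathbf{1},

where \mathbf{1} is the all-ones column vector, so \mathbf{t} collects the one-step absorption probabilities.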
For some safety applications, the average vehicle-to-vehicle reception delay is of interest. An instantly decodable network coding scheme, based on heuristics for the index coding problem, is proposed. At each transmission opportunity, each node tries to XOR some of its received original messages. The decision is made in a greedy manner, based on the side information provided by the feedback matrix. A distributed feedback mechanism is also introduced to piggyback the side information on the safety messages. We also construct a Tanner graph based on the feedback information and use the Belief Propagation algorithm as an efficient heuristic similar to LDPC decoding. Layered BP is shown to be an effective algorithm for our application.
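To make the greedy decision concrete, the sketch below shows one way an instantly decodable XOR combination could be selected from a feedback matrix. It is an illustrative assumption rather than the thesis's exact algorithm; the feedback matrix, message indices, and tie-breaking rule are hypothetical.

import numpy as np

def greedy_idnc_selection(feedback, own_msgs):
    # feedback[i][j] = 1 if neighbour i is known (via piggybacked side
    # information) to already hold original message j.
    # own_msgs: indices of the original messages this node holds and can combine.
    # Returns indices whose XOR every neighbour can either decode instantly
    # (it misses exactly one of them) or ignore (it misses none of them).
    selected = []
    # Consider the most-needed messages first: those missing at the most neighbours.
    demand = sorted(((feedback[:, m] == 0).sum(), m) for m in own_msgs)[::-1]
    for _, m in demand:
        candidate = selected + [m]
        ok = True
        for i in range(feedback.shape[0]):
            missing = sum(1 for c in candidate if feedback[i][c] == 0)
            if missing > 1:          # neighbour i could no longer decode instantly
                ok = False
                break
        if ok:
            selected = candidate
    return selected

# Hypothetical feedback matrix: rows are neighbours, columns are messages.
F = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])
print(greedy_idnc_selection(F, [0, 1, 2]))   # each neighbour misses exactly one message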
Lastly, we present a simple experimental framework to evaluate the performance of repetition-based MAC protocols. We conduct an experiment to compare the POC-based MAC protocol with a random repetition-based MAC.
|
672 |
Incorporating semantic integrity constraints in a database schema
Yang, Heng-li 11 1900 (has links)
A database schema should consist of structures and semantic integrity constraints. Semantic integrity constraints (SICs) are invariant restrictions on the static states of the stored data and the state transitions caused by the primitive operations: insertion, deletion, or update. Traditionally, database design has been carried out on an ad hoc basis and has focused on structure and efficiency. Although the E-R model is the popular conceptual modelling tool, it contains few inherent SICs. Also, although the relational database model is the popular logical data model, a relational database in fourth or fifth normal form may still represent little of the data semantics. Most integrity checking is distributed to the application programs or transactions. This approach to enforcing integrity via the application software causes a number of problems.
Recently, a number of systems have been developed for assisting the database design process. However, only a few of those systems try to help a database designer incorporate SICs in a database schema. Furthermore, current SIC representation languages in the literature cannot be used to represent precisely the necessary features for specifying declarative and operational semantics of a SIC, and no modelling tool is available to incorporate SICs.
This research solves the above problems by presenting two models and one subsystem. The E-R-SIC model is a comprehensive modelling tool for helping a database designer incorporate SICs in a database schema. It is application domain-independent and suitable for implementation as part of an automated database design system. The SIC Representation model is used to represent these SICs precisely. The SIC elicitation subsystem would verify these general SICs to a certain extent, decompose them into sub-SICs if necessary, and transform them into corresponding ones in the relational model.
A database designer using these two modelling tools can describe more data semantics than with the widely used relational model. The proposed SIC elicitation subsystem can provide more modelling assistance for the designer than current automated database design systems.
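As a toy illustration of the distinction between static-state SICs and state-transition SICs described above (this is not the thesis's representation language or its elicitation subsystem, and the table, columns, and rules are hypothetical), one constraint of each kind can be declared once and checked against a proposed update:

# Toy SIC checker: static SICs restrict every stored state, while transition
# SICs restrict how a primitive operation (here, update) may change the state.
static_sics = [
    ("salary must be non-negative", lambda row: row["salary"] >= 0),
]
transition_sics = [
    ("salary may not decrease on update", lambda old, new: new["salary"] >= old["salary"]),
]

def check_update(old_row, new_row):
    violations = [msg for msg, ok in static_sics if not ok(new_row)]
    violations += [msg for msg, ok in transition_sics if not ok(old_row, new_row)]
    return violations

print(check_update({"salary": 50000}, {"salary": 48000}))
# -> ['salary may not decrease on update']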
|
673 |
Reconstructing multicultural counselling competency : construct explication approach
Minami, Masahiro 05 1900 (has links)
This conceptual study aimed at refining the conceptual rigor of D. W. Sue's tricomponential model of multicultural counselling competency (MCC) and at enhancing it with the addition of a new attitude component. The study anchored its theoretical basis in the concept of a nomological network (Cronbach & Meehl, 1955). A construct explication approach (Murphy & Davidshofer, 1998) was taken to develop a full explication of a four-componential model of MCC containing attitude, awareness, knowledge, and skills components. A comprehensive literature review was conducted in the area of multicultural counselling competency to develop working definitions of the awareness, knowledge, and skills components. Another review was conducted to develop a working definition and a conceptual model of attitude. Under the four-componential framework, a total of 284 characteristic descriptions previously developed under the tricomponential model were conceptually re-examined and re-categorized. The analyses revealed a total of 13 subcategories under the four components. A full construct explication of the four-componential model was developed. Research implications of the new model for MCC measurement studies and practical applications to training models are discussed.
|
674 |
Improving network quality-of-service with unreserved backup paths
Chen, Ing-Wher 11 1900 (has links)
To be effective, applications such as streaming multimedia require a more stable and more reliable service than the default best-effort service of the underlying computer network. To guarantee steady data transmission despite the unpredictability of the network, a single reserved path is used for each traffic flow. However, a single dedicated path is vulnerable to single link failures. To allow for continuous service inexpensively, this thesis uses unreserved backup paths. While unreserved backup paths waste no resources, recovery from a failure may not be perfect. Thus, a goal of this approach is to design algorithms that compute backup paths to mask the failure for all traffic and, failing that, to maximize the number of flows that are unaffected by the failure. Although the algorithms are carefully designed with the goal of perfect recovery, when only unreserved backup paths are used it may not be possible to re-route all affected flows at the same service quality as before the failure, particularly when the network was already fully loaded prior to the failure. Alternate strategies that trade off service quality for continuous traffic flow should therefore be considered to minimize the effects of the failure on traffic. In addition, the actual backup path calculation can be problematic, because finding backup paths that provide good service often requires so much information about the traffic present in the network that the overhead can be prohibitive. Thus, algorithms are developed that trade off performance against communication overhead. In this thesis, a family of algorithms is designed so that, as a whole, inexpensive, scalable, and effective performance can be obtained after a failure. Simulations are done to study the trade-offs between performance and scalability and between soft and hard service guarantees. Simulation results show that some of the algorithms in this thesis yield competitive or better performance even at lower overhead. The more reliable service provided by unreserved backup paths allows current applications to perform better inexpensively, and provides the groundwork to expand the computer network for future services and applications.
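As a minimal illustration of the unreserved-backup-path idea (not the algorithms developed in the thesis, which also weigh traffic information and communication overhead), a link-disjoint backup can be obtained by removing the primary path's links and re-running a shortest-path search; the topology, node names, and weights below are hypothetical.

import networkx as nx

def primary_and_backup(graph, src, dst, weight="weight"):
    # Primary path: ordinary shortest path on the full topology.
    primary = nx.shortest_path(graph, src, dst, weight=weight)
    # Backup path: recompute after removing the primary's links, so no single
    # link failure on the primary path can take down both paths at once.
    reduced = graph.copy()
    reduced.remove_edges_from(zip(primary, primary[1:]))
    try:
        backup = nx.shortest_path(reduced, src, dst, weight=weight)
    except nx.NetworkXNoPath:
        backup = None  # no link-disjoint backup exists in this topology
    return primary, backup

# Hypothetical topology: a ring of five routers plus one chord.
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 1), ("c", "d", 1),
                           ("d", "e", 1), ("e", "a", 1), ("b", "d", 3)])
print(primary_and_backup(G, "a", "d"))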
|
675 |
An Investigation of a Multi-Objective Genetic Algorithm applied to Encrypted Traffic Identification
Bacquet, Carlos 10 August 2010 (has links)
This work explores the use of a Multi-Objective Genetic Algorithm (MOGA) for both feature selection and cluster-count optimization for an unsupervised machine learning technique, K-Means, applied to encrypted traffic identification (SSH). The performance of the proposed model is benchmarked against other unsupervised learning techniques in the literature: basic K-Means, semi-supervised K-Means, DBSCAN, and EM. Results show that the proposed MOGA not only outperforms the other models but also provides a good trade-off in terms of detection rate, false positive rate, and time to build and run the model. A hierarchical version of the proposed model is also implemented, to observe the gains, if any, obtained by increasing cluster purity by means of a second layer of clusters. Results show that with the hierarchical MOGA, significant gains are observed in terms of the classification performance of the system.
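A minimal sketch of the kind of chromosome decoding and fitness evaluation such a MOGA might use is shown below. The flow statistics, labels, and objectives are assumptions for illustration; a complete multi-objective GA (e.g., NSGA-II style) would add selection, crossover, and mutation on top of this evaluation.

import numpy as np
from sklearn.cluster import KMeans

def evaluate(chromosome, flows, labels):
    # chromosome: binary feature mask followed by an integer cluster count k.
    # flows: NumPy array of per-flow statistics (rows = flows, columns = features).
    # labels: NumPy array, 1 for SSH flows and 0 otherwise (used only to score).
    # Returns the two objectives to trade off: detection rate and false positive rate.
    mask = np.array(chromosome[:-1], dtype=bool)
    k = int(chromosome[-1])
    if mask.sum() == 0 or k < 2:
        return 0.0, 1.0                      # degenerate individual: worst objectives
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(flows[:, mask])
    # Label each cluster SSH / non-SSH by majority vote of its members.
    predicted = np.zeros_like(labels)
    for c in range(k):
        members = clusters == c
        if members.any() and labels[members].mean() > 0.5:
            predicted[members] = 1
    detection_rate = (predicted[labels == 1] == 1).mean()
    false_positive_rate = (predicted[labels == 0] == 1).mean()
    return detection_rate, false_positive_rate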
|
676 |
A Neural Network Growth and Yield Model for Nova Scotia Forests
Higgins, Jenna 09 June 2011 (has links)
Forest growth models are important to the forestry community because they provide means for predicting future yields and exploring different forest management practices. The purpose of this thesis is to develop an individual-tree forest growth model applicable to the province of Nova Scotia. The Acadian forest of Nova Scotia is a prime example of a mixed-species forest, which is best modelled with individual-tree models. Individual-tree models also permit modelling variable-density management regimes, which are important as the Province investigates new silviculture options. Rather than use conventional regression techniques, our individual-tree growth and yield model was developed using neural networks. The growth and yield model comprised three different neural networks: one each for survivability, diameter increment, and height increment. In general, the neural network modelling approach fit the provincial data reasonably well. In order to have a model applicable to each species in the Province, species was included as a model input; the models were able to distinguish between species and to perform nearly as well as species-specific models. It was also found that including site and stocking-level indicators as model inputs improved the model. Furthermore, it was found that the GIS-based site quality index developed at UNB could be used as a site indicator rather than land capability. Finally, the trained neural networks were used to create a growth and yield model, which would be limited to shorter prediction periods and a larger scale.
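As an illustration of the general approach (the thesis's actual architectures, input variables, and provincial data are not reproduced here), a small feed-forward network for one of the three components, say diameter increment, could be fit as follows; the feature list and the synthetic data are assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical per-tree inputs: DBH, height, four one-hot species indicators,
# and a GIS-based site quality index; the target is annual diameter increment.
rng = np.random.default_rng(0)
X = rng.random((500, 7))
y = 0.3 * X[:, 0] + 0.1 * X[:, 6] + rng.normal(0.0, 0.02, 500)   # synthetic increments

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), y)
print(model.predict(scaler.transform(X[:3])))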
|
677 |
Networks with additional structured constraints
Trick, Michael Alan 08 1900 (has links)
No description available.
|
678 |
Decomposition of large-scale single-commodity network flow problems
Tüfekçi, Süleyman 08 1900 (has links)
No description available.
|
679 |
A dynamic programming approach to planning with decision networks
Sipper, Daniel 12 1900 (has links)
No description available.
|
680 |
Integer programming for imbedded unimodular constraint matrices with application to a course-time scheduling problem
Swart, William Walter 08 1900 (has links)
No description available.
|