351 |
Motion planning for multi-link robots with artificial potential fields and modified simulated annealing. Yagnik, Deval, 01 December 2010
In this thesis we present a hybrid control methodology that integrates Artificial Potential Fields (APF) with a modified Simulated Annealing (SA) optimization algorithm for motion planning of a team of multi-link robots. The approach is inspired by the locomotion of a snake, where each subsequent link follows the trace of the head. The proposed algorithm uses the APF method, which provides simple, efficient and effective path planning, and applies the modified SA to let the robots escape local minima. The modifications to the SA algorithm improve its performance and reduce convergence time.
Validation on a three-link snake robot shows that the control laws derived from the motion planning algorithm, which combines APF and SA, can successfully navigate the robot to its destination while avoiding collisions with multiple obstacles and other robots in its path, and can recover from local minima. To improve performance, the gradient descent method is replaced by Newton's method, which reduces the zigzagging that gradient descent exhibits when the robot moves in the vicinity of an obstacle. / UOIT
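The idea of combining an APF gradient step with an SA-style random move to escape local minima can be sketched as follows. This is a minimal 2-D point-robot illustration; all gains, step sizes, and the cooling schedule are invented values, not the thesis's control laws.

```python
import math
import random

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Attractive pull toward the goal plus repulsive push from nearby obstacles."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        d = math.hypot(pos[0] - ox, pos[1] - oy)
        if 1e-9 < d < d0:  # repulsion only inside the influence radius d0
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            fx += mag * (pos[0] - ox) / d
            fy += mag * (pos[1] - oy) / d
    return fx, fy

def plan(start, goal, obstacles, step=0.05, max_iter=5000, temp=1.0, cooling=0.99):
    """Follow the APF gradient; when the net force vanishes (a local minimum),
    take an annealed random step whose size shrinks as the temperature cools."""
    pos = list(start)
    path = [tuple(pos)]
    for _ in range(max_iter):
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < step:
            break  # close enough to the goal
        fx, fy = apf_force(pos, goal, obstacles)
        norm = math.hypot(fx, fy)
        if norm < 1e-3:  # stuck in a local minimum: SA-style random kick
            ang = random.uniform(0, 2 * math.pi)
            pos[0] += temp * step * math.cos(ang)
            pos[1] += temp * step * math.sin(ang)
            temp *= cooling
        else:
            pos[0] += step * fx / norm
            pos[1] += step * fy / norm
        path.append(tuple(pos))
    return path
```

In an obstacle-free field the planner simply walks down the attractive gradient; the random kick only fires when attraction and repulsion cancel.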
|
352 |
Realistic Multi-Cell Interference Coordination in 4G/LTE. Örn, Sara, January 2012
In the LTE mobile system, all cells use the same set of frequencies, which means that a user can experience interference from other cells. A method studied to reduce this interference, and thereby increase data rates or system throughput, is to coordinate scheduling between cells. Good results have been reported in several studies; however, the interference is generally assumed to be known. Studies using estimated interference and simulating more than one cluster of cells have found almost no gain. This thesis focuses on how to use information from coordinated scheduling, together with other traffic estimates, to perform better interference estimation and link adaptation. The suggested method is to coordinate larger clusters and use the coordination information, as well as estimates of which cells will be transmitting, to estimate the interference from other cells. The additional information from interference estimation is then used in the link adaptation. Limitations in the bandwidth of the backhaul needed to send data between cells are considered, as well as the delay it may introduce. A limitation of the scope is that MIMO and HetNet scenarios have not been simulated. The suggested method for interference estimation and link adaptation has been implemented and simulated in a system simulator. The method gives a less biased estimate of the SINR, but there are no gains in user bit rate. The reduced bias arises because the method predicts high SINR better than the base estimate does. The lack of gains in user bit rate may result from the fact that, in the studied scenarios, users were not able to make use of the higher estimated SINR since the base estimate was already high. The conclusion is that the method might be useful in scenarios without full load where the users either have bad channel quality or are able to make use of very high SINR; such scenarios could be HetNet or MIMO scenarios, respectively.
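The core estimation idea, counting only the interference of cells the coordinated scheduler expects to transmit rather than all neighbours, can be sketched numerically. All received powers below are made-up linear-scale values for illustration, not simulator output.

```python
import math

def sinr_db(signal_mw, noise_mw, interferers_mw):
    """SINR in dB: desired signal over noise plus summed interference (linear mW)."""
    return 10 * math.log10(signal_mw / (noise_mw + sum(interferers_mw)))

# Invented example: received interference power from three neighbouring cells.
neighbours = {"A": 0.02, "B": 0.05, "C": 0.01}
# Coordination information says only cell B will actually transmit.
active = {"B"}

# Base estimate: pessimistically assume every neighbouring cell transmits.
base = sinr_db(1.0, 0.001, neighbours.values())
# Coordinated estimate: include only the cells predicted to be active.
coordinated = sinr_db(1.0, 0.001,
                      [p for cell, p in neighbours.items() if cell in active])
```

When few neighbours are scheduled, the coordinated estimate is higher and less biased than the base estimate, which is the effect the thesis exploits in link adaptation.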
|
353 |
Design and Evaluation of Primitives for Passive Link Assessment and Route Selection in Static Wireless Networks. Miskovic, Stanislav, 06 September 2012
Communication in wireless networks fundamentally consists of packet exchanges over individual wireless links and over the routes formed by these links. Two problems are therefore fundamental: assessment of link quality and identification of least-cost (optimal) routes. However, little is known about achieving these goals without incurring additional overhead in IEEE 802.11 networks. In this thesis, I design and experimentally evaluate two frameworks that enable individual 802.11 nodes to characterize their wireless links and routes using only local and passively collected information.
First, I enable 802.11 nodes to assess their links by characterizing packet delivery failures and their causes. The key problem is that nodes cannot individually observe many of the factors that affect packet delivery at both ends of their links and in both directions of 802.11 communication. Instead of relying on the assistance of other nodes, I design the first practical framework that extrapolates the missing information locally from a node's overhearing, from the observable causal relationships of 802.11 operation, and from characterization of corrupted and undecodable packets. The proposed framework employs only packet-level information generally reported by commodity 802.11 wireless cards.
Next, I design and evaluate routing primitives that enable individual nodes to suppress poor route selections. I refer to a route selection as poor whenever the employed routing protocol fails to establish the existing least-cost path according to the employed routing metric. This thesis shows that an entire family of state-of-the-art on-demand distance-vector routing protocols, including the standards-proposed protocol for IEEE 802.11s mesh networks, suffers from frequent and long-lasting poor selections with arbitrary path costs. Such selections generally induce severe throughput degradations for network users. To address this problem, I design mechanisms that identify optimal paths locally, employing only the information readily available to the affected nodes. The proposed mechanisms largely suppress the occurrence of inferior routes; even when such routes are selected, their durations are reduced by several orders of magnitude, often to sub-second time scales.
My work has implications for several key areas of wireless networking: it removes systematic failures from wireless routing and serves as a source of information for a wide range of protocols, including protocols for network management and diagnostics.
|
354 |
Feature Ranking for Text Classifiers. Makrehchi, Masoud, January 2007
Feature selection based on feature ranking has received much attention from researchers in the field of text classification. The major reasons are scalability, ease of use, and fast computation. However, compared to search-based feature selection methods such as wrappers and filters, ranking methods suffer from poor performance. This is linked to their major deficiencies: (i) feature ranking is problem-dependent; (ii) they ignore term dependencies, including redundancies and correlation; and (iii) they usually fail on unbalanced data.
When using feature ranking methods for dimensionality reduction, we should be aware of these drawbacks, which arise from the way feature ranking methods work. In this thesis, a set of solutions is proposed to handle the drawbacks of feature ranking and boost its performance. First, an evaluation framework called feature meta-ranking is proposed to evaluate ranking measures. The framework is based on a newly proposed Differential Filter Level Performance (DFLP) measure. It is proved that, in ideal cases, the performance of a text classifier is a monotonic, non-decreasing function of the number of features. We then theoretically and empirically validate the effectiveness of DFLP as a meta-ranking measure to evaluate and compare feature ranking methods. The meta-ranking framework is also examined on a stopword extraction problem, where we use it to select an appropriate feature ranking measure for building domain-specific stoplists. The proposed framework is evaluated with SVM and Rocchio text classifiers on six benchmark data sets. The meta-ranking method suggests that, in searching for a proper feature ranking measure, backward feature ranking is as important as forward ranking.
Second, we show that the destructive effect of term redundancy gets worse as the feature ranking threshold is decreased. This implies that for aggressive feature selection, effective redundancy reduction should be performed alongside feature ranking. An algorithm based on extracting term dependency links using an information-theoretic inclusion index is proposed to detect and handle term dependencies. The dependency links are visualized by a tree structure called a term dependency tree. By grouping the nodes of the tree into two categories, hub nodes and link nodes, a heuristic algorithm is proposed that handles the term dependencies by merging or removing the link nodes. The proposed method of redundancy reduction is evaluated with SVM and Rocchio classifiers on four benchmark data sets. According to the results, redundancy reduction is more effective for weak classifiers, since they are more sensitive to term redundancies. The results also suggest that for feature ranking methods that compact the information into a small number of features, aggressive feature selection is not recommended.
Finally, to deal with class imbalance at the feature level using ranking methods, a local feature ranking scheme called the reverse discrimination approach is proposed. The proposed method is applied to a highly unbalanced social network discovery problem. In this case study, the problem of learning a social network is translated into a text classification problem using newly proposed actor and relationship modeling. Since social networks are usually sparse structures, the corresponding text classification problems become highly unbalanced. Experimental assessment of the reverse discrimination approach validates the effectiveness of the local feature ranking method in improving classifier performance when dealing with unbalanced data. The application itself suggests a new approach to learning social structures from textual data.
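As background to the ranking measures discussed above, a minimal feature ranker can score each term by its information gain over binary term-presence features. This is a generic textbook illustration, not the DFLP measure or any code from the thesis; the toy documents below are invented.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    if not labels:
        return 0.0
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(docs, labels, term):
    """Reduction in class entropy from splitting documents on presence of `term`."""
    with_t = [l for d, l in zip(docs, labels) if term in d]
    without = [l for d, l in zip(docs, labels) if term not in d]
    n = len(labels)
    cond = (len(with_t) / n) * entropy(with_t) + (len(without) / n) * entropy(without)
    return entropy(labels) - cond

def rank_features(docs, labels):
    """Return the vocabulary sorted by information gain, best feature first."""
    vocab = set().union(*docs)
    return sorted(vocab, key=lambda t: information_gain(docs, labels, t),
                  reverse=True)
```

Ranking every term independently like this is exactly what makes the method fast, and also why it ignores redundancy between terms, which is the deficiency the thesis's dependency-tree algorithm targets.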
|
355 |
QoS Routing in Wireless Mesh Networks. Abdelkader, Tamer Ahmed Mostafa Mohammed, January 2008
Wireless mesh networking is envisioned as an economically viable paradigm and a promising technology for providing wireless broadband services. The wireless mesh backbone consists of fixed mesh routers that interconnect mesh clients with one another and with the wireline backbone network. In order to approach the wireline service level and provide the same or nearly the same QoS guarantees to different traffic flows, the wireless mesh backbone should be quality-of-service (QoS) aware. A key factor in designing protocols for a wireless mesh network (WMN) is to exploit its distinct characteristics, mainly the immobility of mesh routers and their less-constrained power consumption.
In this work, we study the effect of varying the transmission power to achieve the required signal-to-interference-plus-noise ratio for each link while, at the same time, maximizing the number of simultaneously active links. We propose a QoS-aware routing framework based on transmission power control. The framework addresses both the link scheduling and QoS routing problems with a cross-layer design that takes into consideration the spatial reuse of the network bandwidth. We formulate an optimization problem to find the optimal link schedule and use it as a fitness function in a genetic algorithm to find candidate routes. Using computer simulations, we show that with optimal power allocation the QoS constraints of the different traffic flows are met with more efficient bandwidth utilization than with minimum power allocation.
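The interplay between per-link transmission power and SINR that such a framework exploits can be illustrated with a classic distributed power-control iteration (a Foschini-Miljanic-style update, shown here only for intuition; it is not the thesis's formulation, and the gain matrix, noise level, and SINR target below are invented numbers).

```python
def sinr(i, powers, gain, noise):
    """SINR of link i: own-channel gain over noise plus cross-link interference."""
    interference = sum(gain[i][j] * powers[j]
                       for j in range(len(powers)) if j != i)
    return gain[i][i] * powers[i] / (noise + interference)

def power_control(gain, noise, target, iters=200):
    """Each link scales its power by target/SINR. When the target is feasible,
    this converges to the minimum powers meeting the SINR target on every link."""
    n = len(gain)
    p = [1.0] * n
    for _ in range(iters):
        # capped update to avoid divergence if the target is infeasible
        p = [min(target / sinr(i, p, gain, noise) * p[i], 1e6) for i in range(n)]
    return p
```

For a feasible two-link example the iteration settles on powers at which both links sit exactly at the target SINR, which is the "required SINR at minimum power" operating point the abstract compares against.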
|
356 |
Lower order solvability of links. Martin, Taylor, 16 September 2013
The n-solvable filtration of the link concordance group, defined by Cochran, Orr, and Teichner in 2003, is a tool for studying smooth knot and link concordance that yields important results in low-dimensional topology. We focus on the first two stages of the n-solvable filtration: the class of 0-solvable links and the class of 0.5-solvable links. We introduce a new equivalence relation on links called 0-solve equivalence and establish both an algebraic and a geometric characterization of 0-solve equivalent links. As a result, we completely characterize 0-solvable links and give a classification of links up to 0-solve equivalence. We relate 0-solvable links to known results about links bounding gropes and Whitney towers in the 4-ball. We then establish a sufficient condition for a link to be 0.5-solvable and show that 0.5-solvable links must have vanishing Sato-Levine invariants.
|
357 |
The Eastern Link: A sustainable discourse? Niskanen, Johan; Gröndal Andersson, Joakim, January 2009
Local newspapers in Sweden are often used as an arena where groups of different political leanings try to frame current events to suit their purposes. How the news media present an issue, and how that presentation relates to sustainable development, is therefore important for the democratic process. The Eastern Link is currently one of the largest infrastructure projects in Sweden, and there are many economic, social and ecological concerns when constructing such a large project. It is therefore important to look at how sustainable development is represented in the local news media covering this project. The aim of this thesis is to study how the local media present the Eastern Link project in relation to sustainable development and how this affects democracy. The thesis critically discusses the different parts of sustainable development, that is, the impact of and on economic, social and ecological issues, in relation to the study material. Both quantitative and qualitative methods are used. The thesis also links the results of this study to previous research on communication and to theories of sustainable development. The results show that neither Folkbladet nor NT presents the Eastern Link in a balanced way from a sustainability perspective. A majority of the articles focus on the social discourse; this differs from previous research, where the focus is on the economic discourse.
|
358 |
Automated Performance Optimization of GSM/EDGE Network Parameters / Automatiserad prestandaoptimering av GSM/EDGE-nätverksparametrar. Gustavsson, Jonas, January 2009
The GSM network technology has been developed and improved over several years, which has led to increased complexity. This complexity results in more network parameters, which together with different scenarios and situations form a complex set of configurations. The network parameters are generally defined through a manual process, using static values during test execution. This practice can be costly, difficult and laborious, and as the network complexity continues to increase, this problem will continue to grow. This thesis presents an implementation of an automated performance optimization algorithm that utilizes genetic algorithms for optimizing the network parameters. The implementation has been used to demonstrate that the concept of automated optimization works, and most of the work has been carried out in order to use it in practice. The implementation has been applied to the Link Quality Control algorithm and the Improved ACK/NACK feature, which is a part of GSM EDGE Evolution.
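The genetic-algorithm core of such an optimizer can be sketched generically: selection, crossover, and mutation over a real-valued parameter vector. The toy fitness function (a peak at invented parameter values) and all hyperparameters below are assumptions for illustration, not the thesis's Link Quality Control parameters or fitness measure.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=60,
                     mutation_rate=0.2, seed=1):
    """Toy GA: keep the fitter half, breed children by uniform crossover,
    and occasionally perturb one gene with bounded Gaussian mutation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(dim)]
            if rng.random() < mutation_rate:
                i = rng.randrange(dim)
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = elite + children  # elitism: best candidates always survive
    return max(pop, key=fitness)

# Stand-in objective: "throughput" peaks at parameter values (3, 7) (made up).
best = genetic_optimize(lambda p: -((p[0] - 3) ** 2 + (p[1] - 7) ** 2),
                        bounds=[(0, 10), (0, 10)])
```

In an automated test setup, the fitness evaluation would instead run a network test with the candidate parameter values and return the measured performance.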
|
359 |
Time-efficient Computation with Near-optimal Solutions for Maximum Link Activation in Wireless Communication Systems. Geng, Qifeng, January 2012
In a generic wireless network where the activation of a transmission link is subject to a signal-to-interference-plus-noise ratio (SINR) constraint, one of the most fundamental and yet challenging problems is to find the maximum number of simultaneous transmissions. In this thesis, we consider and study in detail the problem of maximum link activation in wireless networks based on the SINR model. Integer linear programming is used as the main tool for the design of algorithms, and fast algorithms are proposed to deliver near-optimal results time-efficiently. Using the state-of-the-art Gurobi optimization solver, both the conventional approach, which includes all the SINR constraints explicitly, and a recently developed exact algorithm using cutting planes have been implemented. Based on these implementations, new solution algorithms are proposed for the fast delivery of solutions. Instead of considering interference from all other links, an interference range is proposed. Two scenarios are considered: an optimistic case, which ignores interference from outside the interference range, and a pessimistic case, which treats the interference from outside the range as a common large value. Together with the algorithms, further enhancement procedures based on data analysis are proposed to facilitate the computation in the solver.
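The SINR-constrained activation problem can be made concrete with a simple greedy heuristic: add links one at a time and keep only those that leave every active link above its SINR threshold. This is a baseline for intuition only, not the thesis's ILP or cutting-plane methods, and the gain matrix in the example is invented.

```python
def feasible(active, gain, power, noise, gamma):
    """Check that every link in `active` meets the SINR threshold gamma
    when all active links transmit simultaneously at the given power."""
    for i in active:
        interference = sum(gain[i][j] * power for j in active if j != i)
        if gain[i][i] * power / (noise + interference) < gamma:
            return False
    return True

def greedy_max_activation(gain, power=1.0, noise=0.01, gamma=1.5):
    """Greedily activate links, strongest direct channel first, keeping the
    joint SINR constraints satisfied. Returns the indices of active links."""
    n = len(gain)
    order = sorted(range(n), key=lambda i: gain[i][i], reverse=True)
    active = []
    for i in order:
        if feasible(active + [i], gain, power, noise, gamma):
            active.append(i)
    return active

# Invented 3-link example: links 0 and 1 interfere strongly with each other,
# while link 2 is nearly isolated, so 0 and 2 can be active together.
gain = [[1.0, 0.8, 0.01], [0.8, 1.0, 0.01], [0.01, 0.01, 0.8]]
active = greedy_max_activation(gain)
```

An exact formulation would instead introduce a binary activation variable per link and linearize the SINR constraints, which is where the explicit-constraint and cutting-plane ILP approaches of the thesis come in.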
|