  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Sensor-enhanced imaging

Assam, Aieat January 2013 (has links)
Most approaches to spatial image management involve GPS or image processing. In this thesis, a sensor-focused alternative is explored. It requires user and camera tracking, which is particularly challenging in indoor environments. Possible indoor tracking methods are evaluated and pedestrian dead reckoning is selected. A study is conducted to evaluate sensors and choose a combination for pedestrian and camera tracking. The gyroscope and accelerometer offer comparable step detection performance, with the gyroscope and tilt-compensated compass providing heading data. Images taken from the same viewpoint are successfully arranged using panorama stitching without any image processing. The results compare favourably to conventional methods. While lacking the visual definition of image processing methods, they can complement them if used in tandem. Sensor-based compositing and pedestrian tracking are implemented in a unified system. Several methods for fusing compass and gyroscope data are compared, but do not produce a statistically significant improvement over using the compass alone. The system achieves a loop closure accuracy of 91% of path length and performs consistently across multiple participants. The final system can be used in GPS-denied locations and presents an image-content-independent way of managing photographs. It contributes to the pedestrian tracking and image compositing fields and has potential commercial uses (illustrated by an example Android app).
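The pedestrian dead-reckoning pipeline described above — step detection from inertial peaks plus heading-driven position updates — can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the detection threshold and step length are illustrative assumptions.

```python
import math

def detect_steps(accel_magnitudes, threshold=11.0):
    """Count steps as upward crossings of an accelerometer-magnitude
    threshold (a crude stand-in for gyroscope/accelerometer detectors)."""
    steps = 0
    above = False
    for a in accel_magnitudes:
        if a > threshold and not above:
            steps += 1
            above = True
        elif a <= threshold:
            above = False
    return steps

def dead_reckon(start, headings_deg, step_length=0.7):
    """Advance one fixed-length step per detected-step heading sample
    (heading from compass, gyroscope, or a fusion of both)."""
    x, y = start
    path = [(x, y)]
    for h in headings_deg:
        x += step_length * math.sin(math.radians(h))
        y += step_length * math.cos(math.radians(h))
        path.append((x, y))
    return path
```

With a heading source of comparable quality, the loop-closure error of such a tracker is dominated by heading drift, which is why the thesis compares compass-only against fused compass/gyroscope headings.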
162

Complex signal embedding and photonic reservoir computing in time series prediction

Marquez Alfonzo, Bicky 27 March 2018 (has links)
Artificial neural networks are systems prominently used in computation and in investigations of biological neural systems. They provide state-of-the-art performance in challenging problems such as the prediction of chaotic signals. Yet the understanding of how neural networks actually solve problems like prediction remains vague; the black-box analogy is often employed. Merging nonlinear dynamical systems theory with machine learning, we develop a new concept which describes neural networks and prediction within the same framework. Profiting from the obtained insight, we design a priori a hybrid computer, which extends a neural network by an external memory. Furthermore, we identify mechanisms based on spatio-temporal synchronization with which random recurrent neural networks operated beyond their fixed point can reduce the negative impact of regular spontaneous dynamics on their computational performance. Finally, we build a recurrent delay network in an electro-optical setup inspired by the Ikeda system, which is first investigated in a nonlinear dynamics framework. We then use it to implement a neuromorphic processor dedicated to a prediction task.
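The random-recurrent-network predictor described above can be illustrated with a generic software echo state network — a common reservoir computing formulation — trained by ridge regression for one-step-ahead prediction. This is a minimal sketch: the reservoir size, spectral radius and the sine test signal are illustrative, not the thesis's photonic setup or its chaotic benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

def reservoir_predict(signal, n_res=100, washout=50, ridge=1e-6):
    """Drive a fixed random recurrent reservoir with a scalar series and
    train only a linear readout to predict the next sample."""
    w_in = rng.uniform(-0.5, 0.5, n_res)
    w = rng.normal(0, 1, (n_res, n_res))
    w *= 0.9 / max(abs(np.linalg.eigvals(w)))  # spectral radius < 1: fixed-point regime
    x = np.zeros(n_res)
    states = []
    for u in signal[:-1]:
        x = np.tanh(w @ x + w_in * u)
        states.append(x.copy())
    X = np.array(states[washout:])             # discard transient states
    y = signal[1 + washout:]                   # one-step-ahead targets
    w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    return X @ w_out, y                        # predictions and targets

t = np.linspace(0, 40, 800)
pred, target = reservoir_predict(np.sin(t))
```

Only the readout weights are learned; the recurrent weights stay random, which is what makes the approach attractive for hardware (e.g. electro-optical) implementations.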
163

QoS-aware joint power and subchannel allocation algorithms for wireless network virtualization

Wei, Junyi January 2017 (has links)
Wireless network virtualization (WNV) is a promising technology which aims to overcome the network redundancy problems of the current Internet. WNV involves the abstraction and sharing of resources among different parties. It has been considered a long-term solution for the future Internet due to its flexibility and feasibility. WNV separates the traditional Internet service provider's role into the infrastructure provider (InP) and the service provider (SP). The InP owns all physical resources, while SPs borrow such resources to create their own virtual networks in order to provide services to end users. Because radio resources are finite, it is sensible to introduce WNV to improve resource efficiency. This thesis proposes three resource allocation algorithms for an orthogonal frequency division multiple access (OFDMA)-based WNV transmission system, aiming to improve resource utilisation. The aim of the first algorithm is to maximize the total throughput of the InP and the virtual network operators (VNOs) by means of subchannel allocation. The second is a power allocation algorithm which aims to improve the VNOs' energy efficiency; in addition, this algorithm balances the competition across VNOs. Finally, a joint power and subchannel allocation algorithm is proposed, which aims to optimize the overall transmission rate. Moreover, all of the above algorithms consider the InP's quality of service (QoS) requirement in terms of data rate. The evaluation results indicate that the joint resource allocation algorithm performs better than the others, and the results can also serve as a guideline for WNV performance guarantees.
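A QoS-aware subchannel allocation of the kind described can be sketched with a simple greedy rule: assign each subchannel to the VNO with the best gain on it, but give priority to VNOs still below their QoS data-rate target. This is an illustrative heuristic under assumed unit power and noise, not the thesis's actual algorithms.

```python
import math

def allocate_subchannels(gains, rates_required, noise=1.0, power=1.0):
    """Greedy QoS-aware subchannel assignment.
    gains[v][k] is VNO v's channel gain on subchannel k."""
    n_vno = len(gains)
    n_sub = len(gains[0])
    rate = lambda g: math.log2(1 + power * g / noise)  # Shannon rate per subchannel
    achieved = [0.0] * n_vno
    assignment = [None] * n_sub
    for k in range(n_sub):
        # prefer VNOs still below their QoS data-rate requirement
        needy = [v for v in range(n_vno) if achieved[v] < rates_required[v]]
        pool = needy if needy else range(n_vno)
        best = max(pool, key=lambda v: gains[v][k])
        assignment[k] = best
        achieved[best] += rate(gains[best][k])
    return assignment, achieved
```

A joint power-and-subchannel scheme would additionally redistribute `power` across the assigned subchannels (e.g. by water-filling) rather than fixing it per subchannel.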
164

Wireless channel characterization of modern wireless networks with application to tunnel coverage

Βερανούδης, Παναγιώτης 10 March 2014 (has links)
The subject of this thesis is the characterization of the wireless channel of modern wireless networks, with applications to coverage in tunnels. The wireless channel inside tunnel environments is studied and modelled statistically. While analytical and numerical solutions of the problem are possible, the stochastic nature of the parameters makes the statistical approach preferable. The characterization of a wireless channel aims to identify three of its characteristics. The first goal is the calculation of the power loss along the path between transmitter and receiver, and hence of the power received at the receiver's antenna. The second goal is the calculation of the fading, i.e. the fluctuations of the received signal: the amplitude of the received signal is not constant but varies as the receiver moves over short distances comparable to the wavelength of the signal. Finally, the third goal is to compare the received power with the sensitivity threshold of the receiver's communication device, in order to calculate the probability that the receiver has a signal of acceptable quality, i.e. that communication between transmitter and receiver is possible. Software simulations are used to perform these calculations.
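The three goals above — path loss, amplitude fluctuations, and the probability of exceeding the receiver's sensitivity — can be sketched with a standard log-distance model plus log-normal shadowing. The exponent, reference loss and shadowing deviation below are generic illustrative values, not the tunnel-specific parameters from the simulations.

```python
import math

def received_power_dbm(tx_dbm, d, d0=1.0, pl0_db=40.0, n=2.2):
    """Log-distance path-loss model: mean received power at distance d,
    given loss pl0_db at reference distance d0 and exponent n."""
    return tx_dbm - (pl0_db + 10 * n * math.log10(d / d0))

def coverage_probability(tx_dbm, d, sensitivity_dbm, sigma_db=6.0, **kw):
    """P(received power > sensitivity) under log-normal shadowing with
    standard deviation sigma_db, via the Gaussian tail (erfc)."""
    margin = received_power_dbm(tx_dbm, d, **kw) - sensitivity_dbm
    return 0.5 * math.erfc(-margin / (sigma_db * math.sqrt(2)))
```

When the mean received power equals the sensitivity threshold the link margin is zero, so the coverage probability is exactly 0.5; tunnel-specific exponents and fading statistics shift this curve.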
165

Distributed and intelligent routing algorithm

Tekiner, Firat January 2006 (has links)
A network's topology and its routing algorithm are the key factors in determining the network's performance. Therefore, in this thesis a generic model for implementing logical interconnection topologies in the software domain is proposed, in order to investigate the performance of logical topologies and their routing algorithms for packet-switched synchronous networks. A number of topologies are investigated using this model, and a simple priority rule is developed to improve the utilisation of the asymmetric 2 x 2 optical node. Although logical topologies are ideal for optical (or any other) networks because of their relatively simple routing algorithms, there is a requirement for much more flexible algorithms that can be applied to arbitrary network topologies. AntNet is a software-agent-based routing algorithm inspired by the emergent behaviour of unsophisticated individual ants. In this work a modified AntNet algorithm for packet-switched networks is proposed that improves packet throughput and average delay. Link usage information known as "evaporation" is also introduced as an additional feedback signal to the algorithm to prevent stagnation within the network, for the first time in the literature to the best of our knowledge. Results show that, with evaporation, the average delay experienced by data packets is reduced by nearly 30% compared to the original AntNet routing algorithm in all cases when a non-uniform traffic model is employed. The multiple-ant-colonies concept is also introduced and applied to packet-switched networks for the first time, which increases packet throughput; however, no improvement in average packet delay is observed in this case. Furthermore, an extensive analysis of the effect of a confidence parameter is produced here for the first time. A novel scheme which provides a more realistic implementation of the algorithms and flexibility to the programmer for simulating communication networks is proposed and used to implement these algorithms.
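The AntNet-style probabilistic routing table update, with evaporation added as an anti-stagnation feedback, can be sketched as follows. The update and evaporation rules are a simplified illustration of the mechanism; the reinforcement and evaporation constants are assumptions, not the thesis's tuned values.

```python
def update_pheromone(table, neighbour, reinforcement, evaporation=0.05):
    """AntNet-style update: reinforce the neighbour the backward ant
    arrived from, renormalise the rest, then evaporate every entry
    toward the uniform distribution so no route monopolises traffic.
    `table` maps neighbour -> selection probability (sums to 1)."""
    table[neighbour] += reinforcement * (1 - table[neighbour])
    for n in table:
        if n != neighbour:
            table[n] -= reinforcement * table[n]
    uniform = 1 / len(table)
    for n in table:
        table[n] = (1 - evaporation) * table[n] + evaporation * uniform
    return table
```

Without the evaporation step the table can saturate at a single neighbour ("stagnation"), which is the failure mode the added feedback signal is designed to prevent.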
166

A study of simulated annealing techniques for multi-objective optimisation

Smith, Kevin I. January 2006 (has links)
Many areas in which computational optimisation may be applied give rise to multi-objective optimisation problems: those in which multiple objectives must be minimised (for minimisation problems) or maximised (for maximisation problems) simultaneously. Where, as is usually the case, these objectives compete, the optimisation involves the discovery of a set of solutions whose quality cannot be distinguished without further preference information regarding the objectives. A large body of literature exists documenting the study and application of evolutionary algorithms to multi-objective optimisation, with particular focus given to evolutionary strategy techniques, which demonstrate the ability to converge rapidly to desired solutions on many problems. Simulated annealing is a single-objective optimisation technique which is provably convergent, making it a tempting technique for extension to multi-objective optimisation. Previous proposals for extending simulated annealing to the multi-objective case have mostly taken the form of a traditional single-objective simulated annealer optimising a composite (often summed) function of the objectives. The first part of this thesis introduces an alternative method for multi-objective simulated annealing, based on the dominance relation, which operates without assigning preference information to the objectives. Non-generic improvements to this algorithm are presented, providing methods for generating more desirable suggestions for new solutions. This new method is shown to exhibit rapid convergence to the desired set, dependent upon the properties of the problem, with empirical results on a range of popular test problems compared to the popular NSGA-II genetic algorithm and a leading multi-objective simulated annealer from the literature. The new algorithm is applied to the commercial optimisation of CDMA mobile telecommunication networks and is shown to perform well on this problem. The second part of this thesis investigates the effects of a range of optimiser properties upon convergence. New algorithms possessing the properties under investigation are proposed. The relationship between evolutionary strategies and simulated annealing techniques is illustrated, and an explanation of the differing performance of the previously proposed algorithms across a standard test suite is given. The properties of problems on which simulated annealing approaches are desirable are investigated, and new problems are proposed to best provide comparisons between different simulated annealing techniques.
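A dominance-based multi-objective simulated annealer of the kind introduced above can be sketched as follows: non-dominated moves are always accepted, dominated moves are accepted with a Boltzmann probability, and an archive of mutually non-dominated solutions is maintained. The "energy" here (the number of archive members dominating the candidate) is a simplified stand-in, not the thesis's exact measure.

```python
import math, random

def dominates(a, b):
    """Pareto dominance for minimisation: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mosa(objectives, neighbour, x0, temps):
    """Dominance-based multi-objective simulated annealing sketch."""
    random.seed(1)
    x, archive = x0, [x0]
    for t in temps:
        y = neighbour(x)
        worse = sum(dominates(objectives(a), objectives(y)) for a in archive)
        if worse == 0 or random.random() < math.exp(-worse / t):
            x = y
            if not any(dominates(objectives(a), objectives(y)) for a in archive):
                # keep the archive mutually non-dominated
                archive = [a for a in archive
                           if not dominates(objectives(y), objectives(a))] + [y]
    return archive

# toy bi-objective problem: minimise (x^2, (x-2)^2); Pareto set is [0, 2]
obj = lambda x: (x * x, (x - 2) ** 2)
archive = mosa(obj, lambda x: x + random.uniform(-0.5, 0.5),
               5.0, [1.0 / (1 + i) for i in range(400)])
```

Because acceptance depends only on the dominance relation, no weighting or other preference information over the objectives is required, which is the key contrast with composite-function annealers.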
167

Query routing in cooperative semi-structured peer-to-peer information retrieval networks

Alkhawaldeh, Rami Suleiman Ayed January 2016 (has links)
Conventional web search engines are centralised: a single entity crawls and indexes the documents selected for future retrieval, and controls the relevance models used to determine which documents are relevant to a given user query. As a result, these search engines suffer from several technical drawbacks, such as handling scale, timeliness and reliability, in addition to ethical concerns such as commercial manipulation and information censorship. Alleviating the need to rely entirely on a single entity, Peer-to-Peer (P2P) Information Retrieval (IR) has been proposed as a solution, as it distributes the functional components of a web search engine, from crawling and indexing documents to query processing, across the network of users (or peers) who use the search engine. This strategy for constructing an IR system poses several efficiency and effectiveness challenges which have been identified in past work. Accordingly, this thesis makes several contributions towards advancing the state of the art in P2P-IR effectiveness by improving the query processing and relevance scoring aspects of P2P web search. Federated search systems are a form of distributed information retrieval that route the user's information need, formulated as a query, to distributed resources and merge the retrieved result lists into a final list. P2P-IR networks are one form of federated search, routing queries and merging results among participating peers. The query is propagated through the network to reach the peers most likely to contain relevant documents, and the retrieved result lists are then merged at different points along the path from the relevant peers back to the query initiator (the customer). However, query routing is one of the major challenges and a critical part of P2P-IR networks: relevant peers may be lost through low-quality peer selection during query routing, inevitably leading to less effective retrieval results. This motivates this thesis to study and propose query routing techniques that improve retrieval quality in such networks. Cluster-based semi-structured P2P-IR networks exploit the cluster hypothesis to organise the peers into semantically similar clusters, each managed by super-peers. In this thesis, I construct three semi-structured P2P-IR models and examine their retrieval effectiveness. I also leverage the cluster centroids at the super-peer level, as content representations gathered from cooperative peers, to propose a query routing approach called the Inverted PeerCluster Index (IPI), which mimics the conventional inverted index of a centralised corpus in organising the statistics of peers' terms. The results show competitive retrieval quality in comparison to baseline approaches. Furthermore, I study the applicability of conventional information retrieval models as peer selection approaches, where each peer can be considered a large document composed of documents. The experimental evaluation shows competitive and significant results, and indicates that document retrieval methods are very effective for peer selection, reinforcing the analogy between documents and peers. Additionally, Learning to Rank (LtR) algorithms are exploited to build a learned classifier for peer ranking at the super-peer level. The experiments show significant results against state-of-the-art resource selection methods and competitive results against corresponding classification-based approaches. Finally, I propose reputation-based query routing approaches that exploit the idea, familiar from social community networks, of providing feedback on a specific item and retaining it for future decision-making. The system monitors users' behaviour as implicit feedback when they click on or download documents from the final ranked list, and mines this information to build a reputation-based data structure, which is used to score peers and then rank them for query routing. I conduct a set of experiments covering various scenarios, including noisy feedback (i.e., positive feedback on non-relevant documents), to examine the robustness of the reputation-based approaches. The empirical evaluation shows significant results on almost all measurement metrics, with an approximate improvement of more than 56% over baseline approaches. Thus, based on these results, if one were to choose a single technique, reputation-based approaches are clearly the natural choice, and they can also be deployed on any P2P network.
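The idea of an inverted index over peers rather than documents — the core of the IPI approach — can be sketched with a TF-IDF-style peer ranking at the super-peer level. The class, its scoring formula and the example terms are illustrative assumptions, not the thesis's exact data structure.

```python
import math
from collections import defaultdict

class InvertedPeerIndex:
    """Sketch of an inverted index whose postings map terms to peers:
    each cooperating peer contributes the term statistics of its
    centroid, and queries are routed to the top-scoring peers."""

    def __init__(self):
        self.postings = defaultdict(dict)   # term -> {peer: term frequency}
        self.peer_len = defaultdict(int)    # peer -> total term count

    def add_peer(self, peer, term_counts):
        for term, tf in term_counts.items():
            self.postings[term][peer] = tf
            self.peer_len[peer] += tf

    def route(self, query_terms, k=2):
        """Rank peers by a TF-IDF-like score, treating each peer as one
        large document; return the k best peers for query routing."""
        n_peers = len(self.peer_len)
        scores = defaultdict(float)
        for term in query_terms:
            posting = self.postings.get(term, {})
            idf = math.log((n_peers + 1) / (len(posting) + 1))
            for peer, tf in posting.items():
                scores[peer] += (tf / self.peer_len[peer]) * idf
        return sorted(scores, key=scores.get, reverse=True)[:k]
```

This makes concrete the "peer as a big document of documents" analogy: any document-level retrieval model can score peers once their term statistics are aggregated this way.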
168

Analysis of bandwidth attacks in a BitTorrent swarm

Adamsky, Florian January 2016 (has links)
The beginning of the 21st century saw a widely publicized lawsuit against Napster, the first Peer-to-Peer software that allowed its users to search for and share digital music with other users. At the height of its popularity, Napster boasted 80 million registered users. This marked the beginning of a Peer-to-Peer paradigm and the end of older methods of distributing cultural possessions. But Napster was not entirely rooted in the Peer-to-Peer paradigm: only the download of a file was based on Peer-to-Peer interactions, while the search process still relied on a central server. It was thus easy to shut down Napster. Shortly after the shutdown, Bram Cohen developed a new Peer-to-Peer protocol called BitTorrent. The main principle behind BitTorrent is an incentive mechanism, called a choking algorithm, which rewards peers that share. Currently, BitTorrent is one of the most widely used protocols on the Internet, so it is important to investigate the security of this protocol. While significant progress has been made in understanding the BitTorrent choking mechanism, its security vulnerabilities have not yet been thoroughly investigated. This dissertation provides a security analysis of the Peer-to-Peer protocol BitTorrent on the application and transport layers. The dissertation begins with an experimental analysis of bandwidth attacks against different choking algorithms in the BitTorrent seed state. I reveal a simple exploit that allows malicious peers to receive a considerably higher download rate than contributing leechers, thereby causing a significant loss of efficiency for benign peers. I show the damage caused by the proposed attack in two different environments: a lab testbed comprising 32 peers and a global testbed called PlanetLab with 300 peers. Our results show that three malicious peers can degrade the download rate by up to 414.99% for all peers. Combined with a Sybil attack with as many attackers as leechers, it is possible to degrade the download rate by more than 1000%. I propose a novel choking algorithm that is immune to bandwidth attacks, as well as a countermeasure against the revealed attack. This thesis also includes a security analysis of the transport layer. To make BitTorrent friendlier to Internet Service Providers, BitTorrent Inc. invented the Micro Transport Protocol. It is based on the User Datagram Protocol with a novel congestion control called Low Extra Delay Background Transport. This protocol assumes that the receiver always provides correct feedback; otherwise throughput deteriorates or the data is corrupted. I show through experimental evaluation that a misbehaving Micro Transport Protocol receiver which is not interested in data integrity can increase the bandwidth of the sender by up to five times. This can cause a congestion collapse and steal a large share of a victim's bandwidth. I present three attacks which increase bandwidth usage significantly. I have tested these attacks in real-world environments and demonstrate their severity both in terms of the number of packets and the total traffic generated. I also present a countermeasure for protecting against these attacks and evaluate the performance of this defensive strategy. In the last section, I demonstrate that the BitTorrent protocol family is vulnerable to Distributed Reflective Denial-of-Service attacks. Specifically, I show that an attacker can exploit BitTorrent protocols (Micro Transport Protocol, Distributed Hash Table, Message Stream Encryption and BitTorrent Sync) to reflect and amplify traffic from BitTorrent peers to any target on the Internet. I validate the efficiency, robustness, and difficulty of defence of the exposed BitTorrent vulnerabilities in a Peer-to-Peer lab testbed. I further substantiate the lab results by crawling more than 2.1 million IP addresses over the Mainline Distributed Hash Table and analyzing more than 10,000 BitTorrent handshakes. The experiments suggest that an attacker is able to exploit BitTorrent peers to amplify traffic by a factor of 50, and in the case of BitTorrent Sync by a factor of 120. Additionally, I observe that the most popular BitTorrent clients are the most vulnerable ones.
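The choking algorithm the dissertation attacks can be sketched in its classic tit-for-tat form: periodically unchoke the peers that uploaded the most to us, plus one rotating "optimistic" unchoke. This is the textbook mechanism, not the thesis's novel attack-resistant variant; the seed-state ranking step is exactly what the described bandwidth exploit games.

```python
def choke_round(upload_rates, optimistic, unchoke_slots=3):
    """One tit-for-tat choking round: unchoke the top uploaders plus
    one optimistically unchoked peer (so newcomers can bootstrap).
    upload_rates maps peer -> rate observed over the last interval."""
    ranked = sorted(upload_rates, key=upload_rates.get, reverse=True)
    unchoked = set(ranked[:unchoke_slots])
    unchoked.add(optimistic)
    return unchoked
```

A malicious peer that manipulates how it appears in `upload_rates` (or exploits seed-state variants that rank by download rate instead) can capture unchoke slots without contributing, which is the efficiency loss the measurements quantify.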
169

Noncoherent fusion detection in wireless sensor networks

Yang, Fucheng January 2013 (has links)
The main motivation of this thesis is to design low-complexity, high-efficiency noncoherent fusion rules for parallel triple-layer wireless sensor networks (WSNs) based on frequency-hopping M-ary frequency shift keying (FH/MFSK) techniques, which are hence referred to as FH/MFSK WSNs. The FH/MFSK WSNs may be employed to monitor single or multiple source events (SEs), with each SE having multiple states. In FH/MFSK WSNs, local decisions made by local sensor nodes (LSNs) are transmitted to a fusion center (FC) with the aid of FH/MFSK techniques. At the FC, various noncoherent fusion rules may be invoked for the final detection (classification) of the SEs' states. Specifically, in the context of FH/MFSK WSNs monitoring a single M-ary SE, three noncoherent fusion rules are considered for fusion detection: the benchmark equal gain combining (EGC), the proposed erasure-supported EGC (ES-EGC), and the optimum posterior fusion rule. Our studies demonstrate that the ES-EGC fusion rule may significantly outperform the EGC fusion rule in the cases when the LSNs' detection is unreliable and when the channel signal-to-noise ratio (SNR) is relatively high. For FH/MFSK WSNs monitoring multiple SEs, six noncoherent fusion rules are investigated: the EGC, ES-EGC, EGC-assisted N-order IIC (EGC-NIIC), ES-EGC-assisted N-order IIC (ES-EGC-NIIC), EGC-assisted r-order IIC (EGC-rIIC) and ES-EGC-assisted r-order IIC (ES-EGC-rIIC). The complexity, characteristics and detection performance of these fusion rules are investigated. Our studies show that the ES-EGC-related fusion rules are highly efficient: they have similar complexity to the corresponding EGC-related fusion rules, but usually achieve better detection performance. Although the ES-EGC is a single-user fusion rule, it is nevertheless capable of mitigating the multiple event interference (MEI) generated by multiple SEs. Furthermore, in some of the considered fusion rules, the embedded parameters may be optimized for the FH/MFSK WSNs to achieve the best detection performance. As soft sensing is often more reliable than hard sensing, FH/MFSK WSNs whose LSNs use soft sensing are investigated in this thesis in association with the EGC and ES-EGC fusion rules. Our studies reveal that the ES-EGC becomes highly efficient when the sensing at the LSNs is not very reliable. Furthermore, as one application, our FH/MFSK WSN is applied to cognitive spectrum sensing of a primary radio (PR) system constituted by the interleaved frequency-division multiple access (IFDMA) scheme, which supports multiple uplink users. In association with our cognitive spectrum sensing system, three types of energy-detection-based sensing schemes are addressed, and four synchronization scenarios are considered, covering the synchronization between the received PR IFDMA signals and the sampling operations at the cognitive radio spectrum sensing nodes (CRSNs). The performance of the FH/MFSK WSN-assisted spectrum sensing system with the EGC or ES-EGC fusion rule is investigated. Our studies show that the proposed spectrum sensing system constitutes a highly reliable spectrum sensing scheme, capable of exploiting the space diversity provided by the CRSNs and the frequency diversity provided by the IFDMA systems. Finally, the thesis summarises our discoveries and discusses possible future research issues.
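The contrast between EGC and ES-EGC fusion can be illustrated with a toy energy-domain sketch: EGC sums every LSN's per-state energy vector and picks the largest total, while ES-EGC first erases LSNs whose energy profile looks unreliable. The erasure rule below (peak versus mean) and all numbers are illustrative assumptions, not the thesis's detectors.

```python
def egc_fusion(energy_vectors):
    """Equal gain combining at the fusion centre: sum each LSN's
    per-state energy vector and decide on the state with the
    largest combined energy."""
    totals = [sum(col) for col in zip(*energy_vectors)]
    return max(range(len(totals)), key=totals.__getitem__)

def es_egc_fusion(energy_vectors, erasure_ratio=1.5):
    """Erasure-supported EGC sketch: ignore an LSN whose largest energy
    does not exceed `erasure_ratio` times its mean, i.e. whose local
    observation carries no confident decision."""
    kept = [e for e in energy_vectors
            if max(e) > erasure_ratio * (sum(e) / len(e))]
    return egc_fusion(kept if kept else energy_vectors)
```

In the example below, two flat (unreliable) vectors drag plain EGC to the wrong state, while ES-EGC erases them and recovers the correct one, mirroring the regime in which ES-EGC outperforms EGC.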
170

Hybrid automatic-repeat-reQuest systems for cooperative wireless communications

Ngô, Hoàng Anh January 2012 (has links)
As a benefit of achieving a diversity gain and/or a multiplexing gain, MIMO techniques are capable of significantly increasing the achievable throughput and/or the network coverage without additional bandwidth or transmit power. For the sake of striking an attractive trade-off between the attainable diversity and multiplexing gains, in this thesis the novel Space-Time-Frequency Shift Keying (STFSK) concept is proposed for the family of MIMO systems. More specifically, in order to generate space-time-frequency-domain codewords, the STFSK encoding schemes activate one out of Q dispersion matrices, selected by the associated address bits, which are then combined with a classic time-domain and frequency-domain modulation scheme. The resultant arrangements impose no inter-symbol interference and are capable of eliminating the inter-antenna interference, hence offering a range of benefits over other classic MIMO arrangements. Additionally, a soft-output STFSK demodulator is designed for iterative detection, and the complexity of both the hard- and soft-decision demodulators is quantified. Furthermore, the STFSK performance is studied in both single-user and multiple-user multi-cell environments in order to investigate the effects of these techniques on the performance of holistically optimized systems. We also study the H-ARQ systems advocated in the context of cooperation-aided wireless networks, where the MIMO elements are constituted by the individual elements of separate network nodes. Both perfect and imperfect coherent detection as well as non-coherent detection aided cooperative H-ARQ schemes are considered. In the perfect-coherent-detection-based pilot symbol assisted scheme, a novel relay-switching-aided H-ARQ scheme is proposed for mitigating the effects of correlation in fading wireless channels, followed by an H-ARQ scheme employing systematic Luby transform codes. In contrast to unrealistic perfect coherent detection, realistic imperfect coherent schemes are studied, where the channel impulse responses are imperfectly estimated. Furthermore, non-coherent differential detection aided cooperative H-ARQ schemes are proposed and compared to their coherent-detection-assisted counterparts. Finally, a novel cooperative H-ARQ arrangement based on distributed space-time codes is proposed for the sake of improving the attainable system throughput, while reducing the system's complexity.
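The codeword-generation step described above — address bits selecting one of Q dispersion matrices, combined with a classically modulated symbol — can be sketched in a toy form. This is a 2x2, Q = 2, BPSK instance with made-up matrices for illustration only; actual STFSK codewords span the space, time and frequency domains with richer parameters.

```python
import numpy as np

def stfsk_encode(bits, dispersion, constellation):
    """Toy shift-keying encoder in the STFSK spirit: the first log2(Q)
    address bits pick a dispersion matrix, the remaining bits pick a
    constellation symbol, and the codeword is the symbol-scaled matrix."""
    q_bits = int(np.log2(len(dispersion)))
    q = int("".join(map(str, bits[:q_bits])), 2)           # matrix index
    s = constellation[int("".join(map(str, bits[q_bits:])), 2)]
    return s * dispersion[q]

# Q = 2 dispersion matrices and a BPSK constellation (illustrative values)
DISPERSION = [np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]])]
BPSK = [1.0, -1.0]
```

Since only one dispersion matrix is active per codeword, part of the information is carried by *which* matrix was chosen rather than by the symbol itself, which is what allows the scheme to avoid inter-antenna interference.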
