21

Observator för frontlinjen på surfplatta / Target Based Forward Observer

Bergstedt, Martin, Gillström, Tobias January 2017 (has links)
This project was carried out at Saab Dynamics. Its purpose was to develop an application, TBFO, for reporting information on how the missile GLSDB should strike a target. TBFO is intended to be used in the proximity of the target, and the information is sent to the planning system GLSDB MPS. The application was built around the Vricon 3D engine and is adapted for ease of use on touch devices. The main part of the work concerned the development of a touch-oriented user interface and of the system's application protocol. This report describes the process of developing the system, including the tools and methods used, and takes an in-depth look at the development of applications designed for touch devices. The conclusion drawn from the project's results is that the system as developed is useful in the process of planning an assault with GLSDB MPS.
22

Long-Range High-Throughput Wireless Communication Using Microwave Radiation Across Agricultural Fields

Paul Christian Thieme (8151186) 19 December 2019 (has links)
Over the past three decades, agricultural machinery has made the transition from purely mechanical systems to hybrid machines, reliant on both mechanical and electronic systems. As this transformation continues, the most modern agricultural machinery uses networked systems that require a network connection to function to their full potential. In rural areas, providing this network connection has proven difficult: obstacles, distance from access points, and incomplete cellular coverage are all challenges to be overcome. "Off the shelf" commercial-grade Wi-Fi equipment, including several Ubiquiti products such as the Bullet M2 transceiver and the PowerBeam point-to-point linking system, as well as antennas by Terrawave, Crane, and Hawking, was installed in a purpose-built system that could be implemented on a production farm. The system consisted of a tower-mounted access point using an antenna with a 65° beamwidth, and the tests covered distances up to 1150 meters in an agricultural setting with corn and soybeans. Some sensor platforms were stationary, while another was a tractor following a path around the farm, equipped with both 8 dBi and 15 dBi gain antennas. Through all tests, throughput never dropped below 5 Mb/s, and the latency of successful connections never exceeded 20 ms. Packets were rarely dropped and never accounted for a significant portion of all transmission attempts. Environmental factors such as immediate precipitation, crop heights, recent rainfall, and ambient temperature had little or no effect on wireless network characteristics. The tests therefore showed that, as long as line of sight was maintained, reliable wireless connectivity could be achieved under varying conditions using microwave radiation.
Network throughput was marginally affected by the increase in free-space path loss with distance between the access point and the client, as well as by travel of the mobile client outside the beamwidth of the access point. By enabling this coverage, it is hoped that the implementation of new agricultural technology utilizing a live network connection will progress more rapidly.
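The free-space path loss the abstract refers to can be illustrated with the standard FSPL formula, 20·log10(4πdf/c). This is a generic sketch, not code from the thesis; the 2.4 GHz frequency matches the Bullet M2 hardware mentioned, but the distances are illustrative:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss grows by ~6 dB per doubling of distance; at the 1150 m test range
# on 2.4 GHz the loss is only ~1.2 dB above the 1 km value, consistent
# with the "marginal" throughput effect reported.
loss_1km = fspl_db(1000, 2.4e9)   # ~100 dB
loss_test = fspl_db(1150, 2.4e9)  # slightly higher
```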
23

Providing quality of service for realtime traffic in heterogeneous wireless infrastructure networks

Teh, Anselm January 2009 (has links)
In recent years, there has been a rapid growth in deployment and usage of realtime network applications, such as Voice-over-IP, video calls/video conferencing, live network seminars, and networked gaming. The continued increase in the popularity of realtime applications requires a more intense focus on the provision of strict guarantees for Quality of Service (QoS) parameters such as delay, jitter and packet loss in access networks. At the same time, wireless networking technologies have become increasingly popular with a wide array of devices such as laptop computers, Personal Digital Assistants (PDAs), and cellular phones being sold with built-in WiFi and WiMAX interfaces. For realtime applications to be popular over wireless networks, simple, robust and effective QoS mechanisms suited for a variety of heterogeneous wireless networks must be devised. Implementing the same QoS mechanisms across multiple neighbouring networks aids seamless handover by ensuring that a flow will be treated in the same way, both before and after handover. To provide guaranteed QoS, an access network should limit load using an admission control algorithm. In this research, we propose a method to provide effective admission control for variable bit rate realtime flows, based on the Central Limit Theorem. Our objective is to estimate the percentage of packets that will be delayed beyond a predefined delay threshold, based on the mean and variance of all the flows in the system. Any flow that will increase the percentage of delayed packets beyond an acceptable threshold can then be rejected. Using simulations we have shown that the proposed method provides a very effective control of the total system load, guaranteeing the QoS for a set of accepted flows with negligible reductions in the system throughput. To ensure that flow data is transmitted according to the QoS requirements of a flow, a scheduling algorithm must handle data intelligently. 
We propose methods to allow more efficient scheduling by utilising existing Medium Access Control mechanisms to exchange flow information. We also propose a method to determine the delay-dependent "value" of a packet based on the QoS requirements of the flow. Using this value in scheduling is shown to increase the number of packets sent before a predetermined deadline. We propose a measure of fairness in scheduling that is calculated according to how well each flow's QoS requirements are met. We then introduce a novel scheduling paradigm, Delay Loss Controlled-Earliest Deadline First (DLC-EDF), which is shown to provide better QoS for all flows compared to other scheduling mechanisms studied. We then study the performance of our admission control and scheduling methods working together, and propose a feedback mechanism that allows the admission control threshold to be tuned to maximise the efficient usage of available bandwidth in the network, while ensuring that the QoS requirements of all realtime flows are met. We also examine heterogeneous/vertical handover, providing an overview of the technologies supporting seamless handover. The issues studied in this area include a method of using the Signal to Noise Ratio to trigger handover in heterogeneous networks and QoS Mapping between heterogeneous networks. Our proposed method of QoS mapping establishes the minimum set of QoS parameters applicable to individual flows, and then maps these parameters into system parameter formats for both 802.11e and 802.16e networks.
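The CLT-based admission test described above can be sketched as follows. This is an illustrative reconstruction, not the thesis's implementation: the load model (per-flow mean and variance of offered load, compared against link capacity) and all parameter names are assumptions.

```python
import math

def tail_probability(mean: float, var: float, threshold: float) -> float:
    """P(X > threshold) for X ~ Normal(mean, var) -- the CLT approximation
    of the aggregate load of many independent flows."""
    if var <= 0:
        return 0.0 if mean <= threshold else 1.0
    z = (threshold - mean) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2))

def admit(flows, new_flow, capacity, max_delayed_fraction):
    """Accept new_flow only if the estimated fraction of packets delayed
    (aggregate load exceeding capacity) stays within the target.
    flows: list of (mean_rate, variance) tuples for already-accepted flows."""
    total_mean = sum(m for m, _ in flows) + new_flow[0]
    total_var = sum(v for _, v in flows) + new_flow[1]
    return tail_probability(total_mean, total_var, capacity) <= max_delayed_fraction
```

A flow whose addition would push the estimated delayed-packet fraction past the threshold is rejected; all others are admitted, which is what keeps throughput loss negligible.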
24

Channel based medium access control for ad hoc wireless networks

Ashraf, Manzur January 2009 (has links)
Opportunistic communication techniques have been shown to provide significant performance improvements in centralised random access wireless networks. The key mechanism of opportunistic communication is to send back-to-back data packets whenever the channel quality is deemed "good". Recently there have been attempts to introduce opportunistic communication techniques in distributed wireless networks such as wireless ad hoc networks. In line with this research, we propose a new medium access control paradigm, called Channel MAC, based on channel randomness and opportunistic communication principles. Scheduling in Channel MAC depends on the instant at which the channel quality improves beyond a threshold while neighbouring nodes are deemed to be silent. Once a node starts transmitting, it keeps transmitting until the channel becomes "bad". We derive an analytical throughput equation for the proposed MAC in a multiple access environment and validate it by simulations. It is observed that Channel MAC outperforms IEEE 802.11 for all probabilities of good channel condition and all numbers of nodes. For larger numbers of nodes, Channel MAC achieves higher throughput at lower probabilities of good channel condition, increasing the operating range. Furthermore, the total throughput of the network grows with an increasing number of nodes, assuming negligible propagation delay in the network. A scalable channel prediction scheme is required to implement Channel MAC in practice. We propose a mean-value based channel prediction scheme, which provides predictions accurate enough to be used in the Channel MAC protocol. NS2 simulation results show that the Channel MAC protocol outperforms IEEE 802.11 in throughput, thanks to its channel diversity mechanism, in spite of prediction errors and packet collisions. Next, we extend the Channel MAC protocol to support multi-rate communications.
At present, two prominent multi-rate mechanisms, Opportunistic Auto Rate (OAR) and Receiver Based Auto Rate (RBAR), are unable to adapt to short-term changes in channel conditions during transmission, or to use optimal power and throughput during packet transmissions. In contrast, using channel predictions, each source-destination pair in Channel MAC can fully utilise the non-fade durations. We combine the scheduling of Channel MAC with rate adaptive transmission based on channel state information to design the 'Rate Adaptive Channel MAC' protocol. To implement Rate Adaptive Channel MAC, we need a channel prediction scheme to identify transmission opportunities, as well as an auto rate adaptation mechanism to select the rates and the number of packets to transmit during those opportunities. For channel prediction, we apply the scheme proposed for the practical implementation of Channel MAC. For auto rate adaptation, we propose a "safety margin" based technique. Simulation results show that Rate Adaptive Channel MAC achieves a significant performance improvement over existing rate adaptive protocols such as OAR.
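The threshold rule at the heart of Channel MAC, transmit when the channel turns "good" and hold the medium until it turns "bad", can be sketched with a toy slotted simulation. This is illustrative only: the thesis models continuous fading and contention, not the i.i.d. per-slot channel and idealised tie-breaking assumed here.

```python
import random

def channel_mac_throughput(n_nodes: int, p_good: float, slots: int, seed: int = 0) -> float:
    """Toy slotted model of the Channel MAC rule: a node may seize the idle
    medium in a slot where its channel is 'good' (prob. p_good, i.i.d. per
    slot here for simplicity) and holds it until its own channel turns bad.
    Returns the fraction of slots carrying a transmission."""
    rng = random.Random(seed)
    holder = None  # node currently holding the medium, if any
    useful = 0
    for _ in range(slots):
        good = [rng.random() < p_good for _ in range(n_nodes)]
        if holder is not None and not good[holder]:
            holder = None  # channel went bad: release the medium
        if holder is None:
            ready = [i for i in range(n_nodes) if good[i]]
            if ready:
                holder = rng.choice(ready)  # idealised: one contender wins
        if holder is not None:
            useful += 1
    return useful / slots
```

With more nodes or a higher probability of a good channel, some node almost always has a usable channel, which mirrors the multiuser-diversity gain the abstract reports over IEEE 802.11.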
26

Learning-based Attack and Defense on Recommender Systems

Agnideven Palanisamy Sundar (11190282) 06 August 2021 (has links)
The internet is home to massive volumes of valuable data constantly being created, making it difficult for users to find information relevant to them. In recent times, online users have been relying on the recommendations made by websites to narrow down their options, and online reviews have become an increasingly important factor in a customer's final choice. Unfortunately, attackers have found ways to manipulate both reviews and recommendations to mislead users. A recommendation system is a special type of information filtering system adopted by online vendors to provide suggestions to their customers based on their requirements. Collaborative filtering is one of the most widely used recommendation techniques; unfortunately, it is prone to shilling/profile injection attacks, which alter the recommendation process to promote or demote a particular product. Meanwhile, many spammers write deceptive reviews to change the perceived credibility of a product or service. This work addresses these issues by treating the review manipulation and shilling attack scenarios independently. For shilling attacks, we build an efficient Reinforcement Learning-based attack method. It reduces the uncertainty associated with the item selection process and finds the optimal items to enhance attack reach while treating the recommender system as a black box. Such practical online attacks open new avenues for research in building more robust recommender systems. For review manipulation, we introduce a deep structure embedding approach that preserves highly nonlinear structural information and the dynamic aspects of user reviews to identify and cluster the spam users. Notably, in experiments with real datasets, our method captures about 92% of all spam reviewers using an unsupervised learning approach.
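For context, the classical "average attack" that shilling research builds on can be sketched as below. This is the simple baseline the literature starts from, not the Reinforcement Learning method the thesis proposes, and all names and data are illustrative.

```python
def average_attack_profiles(ratings, target, n_fake, r_max=5.0):
    """Classical 'average attack': each injected fake profile rates the
    target item with the maximum rating and every filler item at that
    item's current mean, so the profile blends in with genuine users.
    ratings: dict item -> list of ratings. Returns the fake profiles."""
    fillers = [i for i in ratings if i != target]
    profiles = []
    for _ in range(n_fake):
        p = {target: r_max}
        for item in fillers:
            rs = ratings[item]
            p[item] = sum(rs) / len(rs)  # filler rated at its item mean
        profiles.append(p)
    return profiles
```

Injecting such profiles drags the target item's aggregate rating toward the maximum, which is exactly the promotion effect a collaborative filter is vulnerable to; the thesis's RL attacker instead learns which items to rate by probing the black-box recommender.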
27

EXPLOITING THE SPATIAL DIMENSION OF BIG DATA JOBS FOR EFFICIENT CLUSTER JOB SCHEDULING

Akshay Jajoo (9530630) 16 December 2020 (has links)
With the growing business impact of distributed big data analytics jobs, it has become crucial to optimize their execution and resource consumption. In most cases, such jobs consist of multiple sub-entities called tasks and are executed online in a large shared distributed computing system. The ability to accurately estimate runtime properties and coordinate execution of the sub-entities of a job allows a scheduler to schedule jobs efficiently. This thesis presents the first study that highlights the spatial dimension, an inherent property of distributed jobs, and underscores its importance in efficient cluster job scheduling. We develop two new classes of spatial-dimension-based algorithms to address the two primary challenges of cluster scheduling.

First, we propose, validate, and design two complete systems that employ learning algorithms exploiting the spatial dimension. We demonstrate high similarity in runtime properties between sub-entities of the same job by detailed trace analysis of four different industrial cluster traces. We identify design challenges and propose principles for a sampling-based learning system for two examples: first a coflow scheduler, and second a cluster job scheduler.

We also propose, design, and demonstrate the effectiveness of new multi-task scheduling algorithms based on effective synchronization across the spatial dimension. We underline, and validate by experimental analysis, the importance of synchronization between the sub-entities (flows, tasks) of a distributed entity (coflow, data analytics job) for its efficient execution, and highlight that scheduling a sub-entity without considering its siblings can lead to sub-optimal overall cluster performance. We propose, design, and implement a full coflow scheduler based on these assertions.
28

Bootstrapping a Private Cloud

Deepika Kaushal (9034865) 29 June 2020 (has links)
Cloud computing allows on-demand provision, configuration, and assignment of computing resources with minimum cost and effort for users and administrators. Managing the physical infrastructure that underlies cloud computing services relies on the ability to provision and manage bare-metal computer hardware; hence there is a need for quick loading of operating systems onto bare-metal and virtual machines to service user demand. The focus of this study is on developing a technique to load these machines remotely, which is complicated by the fact that the machines can be in different Ethernet broadcast domains, physically distant from the provisioning server. Using the available bare-metal provisioning frameworks requires significant skill and time, and there is no easily implementable standard method of booting across separate and different Ethernet broadcast domains. This study proposes a new framework to provision bare-metal hardware remotely and securely using layer 2 services, assembled from existing tools.
29

Community Detection of Anomaly in Large-Scale Network

Adefolarin Alaba Bolaji (10723926) 29 April 2021 (has links)
The detection of anomalies in real-world networks is applicable in different domains, including, but not limited to, credit card fraud detection, malware identification and classification, cancer detection from diagnostic reports, abnormal traffic detection, and identification of fake media posts. Much ongoing research provides tools for analyzing labeled and unlabeled data; however, the challenge of finding anomalies and patterns in large-scale datasets persists because of rapid changes in the threat landscape.

In this study, I implemented a novel and robust solution that combines data science and cybersecurity to solve complex network security problems. I used a Long Short-Term Memory (LSTM) model, the Louvain algorithm, and the PageRank algorithm to identify and group anomalies in a large-scale real-world network comprising billions of packets. The developed model used different visualization techniques to provide further insight into how the anomalies in the network are related.

Mean absolute error (MAE) and root mean square error (RMSE) were used to validate the anomaly detection models; the results obtained were 5.1813e-04 and 1e-03 respectively. The low loss from the training phase confirmed the low RMSE (loss: 5.1812e-04, mean absolute error: 5.1813e-04, validation loss: 3.9858e-04, validation mean absolute error: 3.9858e-04). The community detection shows an overall modularity value of 0.914, which indicates very strong communities among the anomalies. The largest sub-community of the anomalies connects 10.42% of their total nodes.

The broader aim and impact of this study was to provide sophisticated, AI-assisted countermeasures to cyber-threats in large-scale networks. To close the gaps created by the shortage of skilled and experienced cybersecurity specialists and analysts, solutions based on out-of-the-box thinking are inevitable; this research aimed to yield one such solution. It was built to detect specific and collaborating threat actors in large networks, and to help curtail the activities of anomalies in any given large-scale network in time.
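The modularity value of 0.914 reported above is the standard Newman modularity, Q = Σ_c [e_c/m − (d_c/2m)²], where e_c is the number of edges inside community c, d_c the sum of degrees in c, and m the total edge count. A minimal sketch of its computation on a toy graph (illustrative only, unrelated to the thesis's billion-packet dataset):

```python
def modularity(edges, communities):
    """Newman modularity Q of a partition of an undirected graph.
    edges: list of (u, v) pairs; communities: dict node -> community id."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    e_in = {}  # edges fully inside each community
    for u, v in edges:
        if communities[u] == communities[v]:
            c = communities[u]
            e_in[c] = e_in.get(c, 0) + 1
    deg_c = {}  # sum of degrees per community
    for node, d in deg.items():
        c = communities[node]
        deg_c[c] = deg_c.get(c, 0) + d
    return sum(e_in.get(c, 0) / m - (deg_c[c] / (2 * m)) ** 2
               for c in deg_c)
```

Q near 0 means the partition is no better than random edge placement; values approaching 1 (such as the 0.914 above) mean almost all edges fall inside communities, i.e. very strong community structure. Louvain greedily moves nodes between communities to maximise exactly this quantity.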
30

Ny generation av GPS-transponder / New Generation of GPS-transponder

Lind, Hampus, Flenéus, Lukas January 2016 (has links)
This project was carried out on behalf of Saab Dynamics. Its purpose was to create a system to replace the existing equipment used to simulate radar when testing certain weapon systems. The system was built using GPRS, GPS, and the transport protocols TCP and UDP; the main part of the work concerned GPS and GPRS. This report describes the system's design and the tools and methods used to create it, takes an in-depth look at GPS, GPRS, and their various protocols, and briefly discusses alternative solutions to data calls. The conclusion that can be drawn from the results of this project is that the system works and, with further development, can be useful in the future.
