1

Automatic robot path planning with constraints

Sanders, David Adrian January 1990 (has links)
In a complex and flexible manufacturing environment, tasks may be dynamically reconfigured. In this situation a robot needs to plan paths automatically to avoid obstacles and rendezvous with changing target points. A novel path planning system is presented which takes into account both kinematic and dynamic constraints. The main part of the system comprises a robot "Path Planner" and "Path Adapter", both using a dynamic "World Model" updated by a vision system. The Path Planner contains a geometric model of the static environment and the robot. Given a task, the Path Planner calculates an efficient collision-free path. This is passed to the control computer, where a trajectory is generated. Pre-determination of optimum paths using established techniques frequently involves unacceptably high time penalties. To overcome this problem, the automatic path refinement techniques employed avoid the necessity for optimality before beginning a movement. Repeated improvements to the sub-optimal paths initially generated by the Path Planner are made until the robot is ready to begin the new path. Algorithms are presented which give a rapid solution for simplified obstacle models. The algorithms are robust and are especially suitable for repetitive robot tasks. Within the Path Planner, the robot structure is modelled as connected cylinders and spheres and the range of robot motion is quantised. The robot path calculated initially takes account only of geometric, kinematic and obstacle constraints. Although this path is sub-optimal, the calculation time is short. The path avoids obstacles and seeks the "shortest" path in terms of total actuator movement. Several of the new path planning methods presented employ a local method, taking a "best guess" at a path through a 2-D space for two joints and then calculating a path for the third joint such that obstacles are avoided. A different approach is global and depends on searching a 3-D graph of quantised joint space. The Path Planner works in real time. If there is enough time available, a "Path Adapter" modifies the planned path in an effort to improve it subject to selected criteria. The Path Adapter considers dynamic constraints. The first path improvement method relies on sensing the joint motor currents in order to minimise changes in joint direction; the second uses a set of adaptive rules derived from simplified dynamic software models of the robot stored within the planning computer. The adapted path is passed to the control computer. The static model of the robot work-cell is held in computer memory as several solid polyhedra. With the aid of a vision system, this model is updated as new obstacles enter or leave the workplace. Overlapping spheres and 2-D slices in joint space are used to model obstacles. In this form the vision system can be updated quickly and the obstacle data can be accessed efficiently by the path planning and path improvement algorithms.
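A minimal sketch of the global approach mentioned above — a shortest-path search over a quantised 3-joint configuration space, with cost measured as total actuator movement and colliding configurations excluded. This is not the thesis implementation: the quantisation step, joint limits and the placeholder collision test are all assumptions; the thesis models the robot with cylinders and spheres and obstacles with spheres and 2-D joint-space slices.

```python
import heapq
import itertools

STEP_DEG = 5          # joint-space quantisation step (assumed)
LIMITS = (-180, 180)  # identical limits for all three joints (assumed)

def in_collision(q):
    """Placeholder collision test for a quantised configuration q = (j1, j2, j3)."""
    return False  # replace with a geometric model of the robot and obstacles

def neighbours(q):
    """All configurations one quantisation step away in any subset of joints."""
    for delta in itertools.product((-STEP_DEG, 0, STEP_DEG), repeat=3):
        if delta == (0, 0, 0):
            continue
        cand = tuple(q[i] + delta[i] for i in range(3))
        if all(LIMITS[0] <= a <= LIMITS[1] for a in cand) and not in_collision(cand):
            # step cost = total actuator movement
            yield cand, sum(abs(d) for d in delta)

def plan(start, goal):
    """Dijkstra search returning a joint-space path of minimum total movement."""
    dist, prev = {start: 0}, {}
    queue = [(0, start)]
    while queue:
        d, q = heapq.heappop(queue)
        if q == goal:
            path = [q]
            while q in prev:
                q = prev[q]
                path.append(q)
            return list(reversed(path))
        if d > dist.get(q, float("inf")):
            continue
        for nxt, cost in neighbours(q):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, q
                heapq.heappush(queue, (nd, nxt))
    return None

print(plan((0, 0, 0), (15, -10, 5)))
```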
2

The use of measures of modularity in electronic design

Rogers, Ian Armstrong January 2012 (has links)
This dissertation describes the creation of novel electronic devices using a modular design process, and novel algorithms to evaluate the modularity of the electronic designs. Among the advantages of modularity are the potential to save time, reduce risk and therefore save money during the development process. Existing algorithms were investigated and five were identified in the literature that had aspects suitable for electronic designs. A range of different features was extracted from them for use in the research. As a start to the research, the modularity of electronic designs created at a collaborating company was evaluated by applying the two most promising algorithms to some existing products. From that work, a new modular design methodology was introduced at the collaborating company and five new products were designed. The modularity of these new products was evaluated using three algorithms selected from the five identified in the literature. The design process was then revised to improve the measures of modularity, and another six new products were designed and their modularity evaluated using all five algorithms identified in the literature. No single algorithm appeared suitable for evaluating the modularity of electronic designs in the different situations identified. Instead, five new and novel algorithms were created. Two of the new algorithms were for the two different types of electronic design created at the collaborating company: bespoke design and design for mass manufacture. The other three new algorithms were more general algorithms for: new bespoke design, new products closely based on an existing product (product progression), and design for mass manufacture. The five new algorithms were applied to the two products created prior to the start of the research and to the other eleven new products created during the research. The results are presented, discussed and compared with results from the five algorithms selected from the literature.
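The abstract does not disclose the algorithms themselves, so the following is only an illustrative sketch of one generic dependency-matrix-style modularity measure — the fraction of component-to-component dependencies that stay inside a declared module — and should not be read as any of the thesis's five algorithms. The example design is hypothetical.

```python
import numpy as np

def intra_module_ratio(dsm, modules):
    """dsm: square 0/1 dependency matrix; modules: list of component index lists."""
    dsm = np.asarray(dsm)
    total = dsm.sum() - np.trace(dsm)            # ignore self-dependencies
    if total == 0:
        return 1.0
    intra = sum(dsm[np.ix_(m, m)].sum() - np.trace(dsm[np.ix_(m, m)])
                for m in modules)
    return intra / total

# Hypothetical 4-component design split into two modules.
dsm = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 1, 1, 0]]
print(intra_module_ratio(dsm, modules=[[0, 1], [2, 3]]))  # ~0.67: two cross-module links
```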
3

Bandwidth estimation and optimisation in rain faded DVB-RCS networks

Al-Mosawi, Mohamed A. January 2014 (has links)
Broadband satellite communication networks operating in the Ka band (20-30 GHz) play a very important role in today’s worldwide telecommunication infrastructure. The problem, however, is that rain can be the most dominant impairment factor for radio propagation in these frequency bands. Allocating frequency bandwidth based on worst-case rain fading wastes spectrum through over-reservation, as actual rain levels vary. Therefore, it is essential that satellite systems include adaptive radio resource allocation combined with fade mitigation techniques to counteract rain impairments efficiently in real time. This thesis studies the radio resource management problem for rain-faded Digital Video Broadcast-Return Channel via Satellite (DVB-RCS) networks. The research stems from taking two aspects into account in the bandwidth estimation and allocation process: the consideration of multiple rain fading levels, and the size of the geographical area over which users are distributed. The thesis investigates how using multiple rain fading levels in time slot allocation can improve bandwidth utilisation in DVB-RCS return links. The thesis presents a mathematical model to calculate bandwidth on demand. The radio resource allocation is formulated as an optimisation problem, and a novel algorithm for dynamic carrier bandwidth and time slot allocation is proposed, which works with constant bit rate traffic. The research provides a theoretical analysis of the time slot allocation problem and shows that the proposed algorithm achieves optimal results. This thesis also studies the effect of the geographical distribution of Return Channel Satellite Terminals (RCSTs) on bandwidth demand and presents a novel mathematical model to estimate the maximum instantaneous bandwidth demand for RCSTs randomly distributed over a geographical area in a satellite spot beam. All the proposed algorithms have been evaluated using a novel simulation with historical rain data.
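A hedged sketch of the fade-aware time-slot demand idea described above, not the thesis model: each terminal's return-link spectral efficiency is looked up from its current rain fade level instead of a single worst-case value, and the number of slots per frame is sized accordingly. The fade-to-efficiency table, symbol rate, frame length and slot count are all assumed figures standing in for a real DVB-RCS configuration.

```python
FADE_TO_EFFICIENCY = [      # (max fade in dB, usable bits per symbol) - assumed
    (3.0, 3.0),
    (6.0, 2.0),
    (10.0, 1.0),
    (float("inf"), 0.5),
]

SYMBOL_RATE = 540e3         # symbols/s per carrier (assumed)
SLOTS_PER_FRAME = 256       # time slots in one superframe (assumed)
FRAME_DURATION = 0.0265     # seconds per superframe (assumed)

def slots_needed(bit_rate, fade_db):
    """Time slots per frame needed for a constant-bit-rate flow at a given fade level."""
    efficiency = next(e for limit, e in FADE_TO_EFFICIENCY if fade_db <= limit)
    slot_capacity = SYMBOL_RATE * efficiency * FRAME_DURATION / SLOTS_PER_FRAME
    return -(-bit_rate * FRAME_DURATION // slot_capacity)  # ceiling division

# Terminals with the same 64 kbit/s demand but different instantaneous fades.
for fade in (1.0, 5.0, 12.0):
    print(fade, "dB ->", int(slots_needed(64e3, fade)), "slots/frame")
```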
4

Investigation of spatially and temporally varying broadband wireless urban channels

Toautachone, Somboon January 2015 (has links)
The use of wireless communication systems is expanding rapidly due to their flexibility, mobility, and low installation and maintenance costs. However, as the number of services over wireless networks increases, users expect the same quality of service as on wired networks. Compared with wired channels, a wireless channel is influenced by static and mobile objects in its surroundings. A detailed understanding of these influences is required in order to develop adequate models that can support high data rate services, especially in urban environments where network antennas are frequently installed below the height of surrounding objects (buildings and trees). The research reported in this thesis was carried out to study the impact of mobile objects (vehicles) on wideband channels in urban environments and the signal variations in a picocell. To conduct this study, a wideband channel sounder that transmits a flat-spectrum Pseudo-Random Gaussian Noise (PRGN) signal over a bandwidth of 200 MHz at a carrier frequency of 2 GHz was used. The implemented data acquisition strategy enabled channel measurements to be made every 184.32 μs (equivalent to a channel sampling rate of 5.4 kHz). This speed is indispensable for studying fast-varying channels. A series of controlled experiments was conducted and reported in this thesis to test the capabilities of the channel sounder and to gain an understanding of the impact of passing vehicles on wideband channels. A number of experimental measurements were then carried out to study static and dynamic channels and the results are reported in this thesis. To gain a detailed understanding of the channel variations, an algorithm that can resolve closely spaced multipath components was required. A Singular Value Decomposition Prony (SVDP) algorithm was developed and tested. Test results show that it achieves a time delay resolution of up to 1 ns, a factor of 5 better than the Fast Fourier Transform (FFT) algorithm. Analysis of the signal variations with distance shows that none of the proposed path loss models can be used to predict signal variation with distance in a picocell environment for path lengths up to 30 m. A linear model is found to provide the best prediction and is proposed for picocell channels. In addition, the Rice probability density function is found to provide the best fit to temporal signal variation, with the standard deviation increasing as the path length increases. The increase in standard deviation is also reflected in an increase in root mean square delay spread and a reduction in channel coherence bandwidth with increasing path length. Overall, this research has shown that greater attention needs to be paid to the impact of mobile traffic in urban environments than was initially thought. Passing vehicles have been shown to cause severe fading of up to 40 dB within the bandwidth, but this manifests as only a 1 dB to 4 dB fade in the signal power averaged across the bandwidth. This type of fade will introduce error bursts in digital communication systems and will vary in time and space, especially if either the receiver or the object is moving. In addition to the results mentioned, one of the key contributions of this research is that it shows that greater attention has to be paid to moving objects in the channel irrespective of their positions relative to the transmitter-to-receiver path. The sounder used employs technology which allows measurement speeds that have, up until now, not been possible over large bandwidths. Together with the SVDP algorithm developed, this presents an opportunity for more detailed studies to be carried out, and this research has laid the foundation for them.
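A minimal sketch of one quantity reported above: the root mean square delay spread of a power delay profile, together with the common 1/(5·τ_rms) rule of thumb for approximate coherence bandwidth. The profile values below are invented for illustration; they are not measurements from the thesis.

```python
import numpy as np

def rms_delay_spread(delays_s, powers_linear):
    """RMS delay spread of a power delay profile (delays in seconds, linear power)."""
    p = np.asarray(powers_linear, dtype=float)
    t = np.asarray(delays_s, dtype=float)
    mean_delay = np.sum(p * t) / np.sum(p)
    mean_sq = np.sum(p * t**2) / np.sum(p)
    return np.sqrt(mean_sq - mean_delay**2)

delays = np.array([0, 50, 120, 310]) * 1e-9           # hypothetical path delays
powers = 10 ** (np.array([0, -3, -9, -15]) / 10)      # hypothetical path gains (dB -> linear)

tau_rms = rms_delay_spread(delays, powers)
print(f"RMS delay spread: {tau_rms * 1e9:.1f} ns")
print(f"Approx. coherence bandwidth: {1 / (5 * tau_rms) / 1e6:.1f} MHz")
```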
5

Intelligent performance assessment in a virtual electronic laboratory

Achumba, Ifeyinwa Eucharia January 2011 (has links)
Laboratory work in the undergraduate engineering course is aimed at enhancing students’ understanding of taught concepts and integrating theory and practice. This demands that laboratory work be synchronised with lectures in order to maximise its derivable learning outcomes, measurable through assessment. The typically high costs of the traditional engineering laboratory, which often militate against its increased use and the synchronisation of laboratory work and lectures, have, in addition to other factors, catalysed the increased adoption of virtual laboratories as a complement to the traditional engineering laboratory. In extreme cases, virtual laboratories could serve as an alternative means of providing, albeit simulated, meaningful practical experiences. A Virtual Electronic Laboratory (VEL), which can be used to undertake a range of undergraduate electronic engineering curriculum-based laboratory activities in a realistic manner, has been implemented as part of the work presented in this thesis. The VEL incorporates a Bayesian Network (BN)-based model for the performance assessment of students’ laboratory work in the VEL. Detailed descriptions of the VEL and the assessment model are given. The evaluation of the entire system is in two phases: evaluation of the VEL as a tool for facilitating students’ deeper understanding of fundamental engineering concepts taught in lectures, and evaluation of the assessment model within the context of the VEL environment. The VEL is evaluated at two different engineering faculties in two separate universities. Results from the evaluation show the effectiveness of the VEL in enhancing students’ learning, in the light of appropriate learning scenarios, and provide evidence and support for the use of virtual laboratories in the engineering educational context. Performance data extracted from students’ behaviour logs (captured and recorded during the evaluation of the VEL) are used to evaluate the assessment model. Results of the evaluation demonstrate the effectiveness of the model as an assessment tool, and the practicability of assessing students’ laboratory performance from their observed behaviour in a virtual learning environment.
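A hedged sketch of the idea behind a BN-based assessment model, not the thesis's actual network: a single latent "competence" node with two conditionally independent observed laboratory behaviours, updated by exact enumeration. The node names and all probabilities are invented for illustration.

```python
PRIOR = {"high": 0.5, "low": 0.5}                 # P(competence)
P_WIRED_CORRECTLY = {"high": 0.9, "low": 0.3}     # P(circuit wired correctly | competence)
P_USED_SCOPE_WELL = {"high": 0.8, "low": 0.4}     # P(oscilloscope used correctly | competence)

def posterior(wired_ok: bool, scope_ok: bool) -> dict:
    """P(competence | observed laboratory behaviours), by enumeration."""
    unnorm = {}
    for level, prior in PRIOR.items():
        p = prior
        p *= P_WIRED_CORRECTLY[level] if wired_ok else 1 - P_WIRED_CORRECTLY[level]
        p *= P_USED_SCOPE_WELL[level] if scope_ok else 1 - P_USED_SCOPE_WELL[level]
        unnorm[level] = p
    total = sum(unnorm.values())
    return {level: p / total for level, p in unnorm.items()}

print(posterior(wired_ok=True, scope_ok=False))   # mixed evidence
print(posterior(wired_ok=True, scope_ok=True))    # consistent good evidence
```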
6

A study of application level information from the volatile memory of Windows computer systems

Olajide, Funminiyi Akanfe January 2011 (has links)
The purpose of this research was to investigate the seven most commonly used applications in order to uncover information that may have been hidden from forensic investigators, by extracting application level information from the volatile memory of a Windows system and analysing that volatile memory. The aim was to formulate how the extracted application level information can be reconstructed to describe what user activities had taken place in the application under investigation. After reviewing the relevant literature on volatile memory analysis and forensically relevant data from Windows applications, this thesis confines its research to a study of application level information and the volatile memory analysis of Windows applications. Quantitative and qualitative results were produced in this study. The quantitative assessment consists of four metrics that were used to investigate the quantity of user input in the applications, while the qualitative measures were formulated to infer what the user is doing in the application, what they have been doing, and what they are using the application for. The reconstruction of user input activities was carried out by using some commonly used English words to search for user input, and by pattern matching techniques when the user input is known in the investigation. The analysis of user input was discussed based on four scenarios developed for this research. The results show that different amounts of user input can be recovered from various applications. The result in scenario 1 indicates that user input can be recovered easily from Word, PowerPoint, Outlook Email and Internet Explorer 7.0, and that little user input can be found in Excel, MS Access and Adobe Reader 8.0. In scenario 2, a significant amount of user input was recovered in the memory allocated to all the applications except MS Access, where little user input was found. In scenario 3, only Outlook Email and Internet Explorer 7.0 resulted in a large amount of user input being recovered; the rest of the applications retain little user input in memory. In scenario 4, a greatly reduced amount of information was found for all the applications, but some user input was found for Outlook Email and Internet Explorer 7.0, which shows that user input can be retained for some time in memory. After the analysis of user input, the importance of application level information in volatile memory was discussed. A procedure has been formalised for the extraction and analysis of application level information, and its use in a court of law has been discussed with respect to the five Daubert tests of the scientific method of gathering digital evidence. Three of the Daubert tests have been completed, while the other two form the unique contribution of the research project to the digital forensic community. The author recommends that the research theory of application level information should be extended to other operating systems using the scenarios formulated in this research project.
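A simplified sketch of the kind of search described above, not the thesis's tooling: scan a raw memory dump for common English words and for a known user-input pattern, recording the offsets where they occur. The word list and the dump fragment are illustrative stand-ins for a real process image.

```python
import re

COMMON_WORDS = [b"the", b"and", b"meeting", b"report"]   # illustrative word list

def find_fragments(dump: bytes, patterns):
    """Return {pattern: [offsets]} for every pattern found in the dump."""
    hits = {}
    for pat in patterns:
        hits[pat.decode()] = [m.start() for m in re.finditer(re.escape(pat), dump)]
    return hits

# Hypothetical fragment of process memory with interleaved noise bytes.
dump = b"\x00\x17draft report for the meeting\x00\xff...and the agenda\x00"
print(find_fragments(dump, COMMON_WORDS))
```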
7

Energy aware routing protocols in ad hoc wireless networks

Kanakaris, Venetis January 2012 (has links)
In a Mobile Ad hoc Network, communication between mobile nodes can be achieved using multi-hop wireless links. The architecture of such a network is based not on a centralised base station but on each node acting as a router to forward data packets to other nodes in the network. The aim of each protocol in an ad hoc network is to find valid routes between two communicating nodes. These protocols must be able to handle high node mobility, which often causes changes in the network topology. Every ad hoc network protocol uses some form of routing algorithm to transmit between nodes, based on a mechanism that forwards packets from one node to another in the network. These algorithms have their own way of finding a new route or modifying an existing one when there are changes in the network. The novel area of this research is a proposed routing algorithm which improves routing and limits redundant packet forwarding, especially in dense networks. It reduces the routing messages and consequently the power consumption, which increases the average remaining power and the lifetime of the network. The first aim of this research was to evaluate various routing algorithms in terms of power. The next step was to modify an existing ad hoc routing protocol in order to improve its power consumption. This resulted in the implementation of a dynamic probabilistic algorithm in the route request mechanism of the Ad hoc On-Demand Distance Vector (AODV) protocol, which led to a 3.0% improvement in energy consumption. A further extension of the approach using Bayesian theory led to a 3.3% improvement in energy consumption as a consequence of a reduction in MAC load for all network sizes, up to 100 nodes.
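A hedged sketch of a dynamic probabilistic forwarding rule of the kind described above, not the thesis's exact formula: a node rebroadcasts an AODV-style route request with a probability that shrinks as its neighbourhood gets denser, so dense regions generate fewer redundant RREQ copies. The probability bounds and density threshold are assumed values.

```python
import random

P_MAX, P_MIN = 1.0, 0.4      # forwarding probability bounds (assumed)
DENSE_THRESHOLD = 10         # neighbour count treated as "dense" (assumed)

def forward_probability(neighbour_count: int) -> float:
    """Linearly reduce the rebroadcast probability with local density."""
    if neighbour_count <= 1:
        return P_MAX
    scale = min(neighbour_count, DENSE_THRESHOLD) / DENSE_THRESHOLD
    return P_MAX - (P_MAX - P_MIN) * scale

def should_rebroadcast(neighbour_count: int, rng=random) -> bool:
    return rng.random() < forward_probability(neighbour_count)

for n in (1, 5, 10, 30):
    print(f"{n:2d} neighbours -> rebroadcast probability {forward_probability(n):.2f}")
```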
8

Artificial immune system for the detection of abnormal activity in ambient assisted living

Bersch, Sebastian Dominik January 2014 (has links)
This thesis is concerned with the use of an Artificial Immune System (AIS) in the area of Ambient Assisted Living (AAL). The hypothesis for the work presented herein is that the AIS features of self-learning and adaptability address the complex problem of improving the detection of unknown abnormal behaviour in the long-term monitoring of the elderly. The work presents an affordable Open Hardware Data Acquisition Device that, in combination with a Markov chain-based software simulation environment, can be used for the collection of human activity data and the generation of the information necessary for long-term simulation. The main contributions from the work presented herein relate to the design and use of AIS-based solutions, and the selection of appropriate parameter combinations for supervised classifiers. Firstly, a novel seeding technique for AIS is presented that improves the placement of detectors in the search space. Secondly, a novel AIS-based monitoring algorithm, inspired by the Hierarchical Temporal Memory architecture, is designed to learn and approximate sensor data in order to detect and report activity abnormalities. Thirdly, an empirical analysis is carried out to provide a clear understanding of how sampling frequency, segmentation method and window size affect classification accuracy and computational load in the area of AAL. Fourthly, a Pareto curve based technique has been devised and demonstrated as a useful tool for the informed selection of parameter combinations to achieve the best possible classification accuracy and computational load. The evaluation of the AIS-based algorithm showed that its detection rate for abnormal activity outperformed the results of supervised classifiers with parameter combinations selected based on the Pareto curve. The results are encouraging and support the decision to introduce the use of AIS for the detection of abnormal activity in AAL environments.
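A minimal sketch of the Pareto-front idea used for parameter selection: keep only the parameter combinations for which no other combination is both at least as accurate and at least as cheap. The accuracy and load figures below are invented, not results from the thesis.

```python
def pareto_front(candidates):
    """candidates: list of (label, accuracy, load); higher accuracy and lower load are better."""
    front = []
    for label, acc, load in candidates:
        dominated = any(
            (acc2 >= acc and load2 <= load) and (acc2 > acc or load2 < load)
            for _, acc2, load2 in candidates
        )
        if not dominated:
            front.append((label, acc, load))
    return front

# Hypothetical (sampling rate / window size) combinations.
combos = [
    ("50 Hz / 2 s", 0.93, 1.00),
    ("20 Hz / 2 s", 0.91, 0.45),
    ("20 Hz / 4 s", 0.92, 0.60),
    ("10 Hz / 4 s", 0.85, 0.20),
    ("10 Hz / 2 s", 0.80, 0.25),   # dominated by "10 Hz / 4 s"
]
print(pareto_front(combos))
```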
9

Energy efficient cluster based algorithm for underwater sensor networks

Ovaliadis, Kyriakos January 2015 (has links)
In this thesis, an innovative and evolving Cluster Based Routing Algorithm (CBRA) is proposed to provide an improved energy-efficient cluster system which is also capable of handling cluster-head and mobile sensor node connectivity failures. In addition, to develop, implement and test the CBRA, a new simulator called USNeT (Underwater Sensor Network simulation Tool) has been designed and implemented. The USNeT simulator follows an object-oriented design style: all network entities are implemented as C++ classes encapsulating thread mechanisms. Initially, significant adjustments were made in order for the algorithm to become more energy efficient. Some of these alterations are: transmission range management, re-cluster process activation for each group separately, sensor node sleeping mode, and unwanted information rejection. The simulation results, compared against the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol, indicate a small but significant improvement in the performance of the CBRA, especially in energy efficiency. This study also suggests that Cluster Head (CH) failures could be further minimised when a primary CH and a backup CH are selected simultaneously. Thus, when a primary CH fails due to an irreparable fault, the backup CH takes its place and operates as the head node. Therefore, the CBRA is redefined and optimised to handle this issue and also to diminish any interruptions in establishing communication links. The analysis of the simulation results shows that the redefined CBRA (r-CBRA) is more energy efficient and can effectively enhance the network survivability in the event of cluster-head failures, compared with the non-optimised CBRA and the LEACH protocol. Thereafter, the r-CBRA is used again to address sensor node connectivity failures. In the case of a mobile sensor node that is close to a cluster but not within range of a CH, the r-CBRA changes the status of the nearest sensor node to a CH and then establishes a communication link between them. Simulation results show once more that the new cluster based routing algorithm ensures the connectivity of the network without sacrificing its energy efficiency.
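A hedged sketch of the primary/backup cluster-head idea described above, not the r-CBRA selection rule itself: pick the two cluster members with the most residual energy as primary and backup heads, and promote the backup when the primary becomes unreachable. Node names and energy values are hypothetical.

```python
class Cluster:
    def __init__(self, members):
        """members: dict of node_id -> residual energy (assumed units)."""
        self.members = dict(members)
        self.elect_heads()

    def elect_heads(self):
        """Rank members by residual energy; top two become primary and backup heads."""
        ranked = sorted(self.members, key=self.members.get, reverse=True)
        self.primary, self.backup = ranked[0], ranked[1]

    def on_primary_failure(self):
        """Promote the backup and elect a fresh backup from the survivors."""
        del self.members[self.primary]
        self.elect_heads()

cluster = Cluster({"n1": 4.2, "n2": 5.0, "n3": 3.1, "n4": 4.9})
print("primary:", cluster.primary, "backup:", cluster.backup)   # n2, n4
cluster.on_primary_failure()
print("primary:", cluster.primary, "backup:", cluster.backup)   # n4, n1
```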
10

Biometric identification using user interaction with virtual worlds

Al-Khazzar, Ahmed M. A. January 2012 (has links)
A virtual world is an interactive 3D virtual environment that visually resembles complex physical spaces and provides an online community through which users can connect, shop, work, learn, establish emotional relations, and explore different virtual environments. The use of virtual worlds is becoming popular in many fields such as education, economics, space, and games. With the widespread use of virtual worlds, establishing the security of these systems becomes more important. To date, there is no mechanism to identify users of virtual worlds based on their interactions with the virtual world. Current virtual worlds use knowledge-based authentication mechanisms such as passwords to authenticate users; however, these are not capable of distinguishing between genuine users and imposters who possess the knowledge needed to gain access to the virtual world. The aim of the research reported in this thesis is to develop a behavioural biometric system to identify the users of a virtual world based on their behaviour inside these environments. In this thesis, three unique virtual worlds are designed and implemented with different 3D environments and avatars, simulating the different environments of virtual worlds. Two experiments were conducted to collect data from user interactions with the virtual worlds: 53 users participated in the first experiment and, a year later, 66 different users participated in the second. This research also studies the parameters of user behaviour inside virtual worlds and presents novel feature extraction methods to extract four main biometric features from the collected data, namely action, time, speed, and entropy features. A sample classification methodology is formulated, and, using distance measure algorithms on the collected data, users are identified inside the virtual worlds. The application of biometric fusion in enhancing the performance of the behavioural biometric system is also studied. The average equal error rates (EERs) achieved in this research were between 26% and 33%, depending on the virtual world environment and the freedom of movement inside it. It was found that avatar actions inside virtual worlds carry more identifying attributes than parameters such as the avatar position inside the virtual world. It was also found that virtual worlds with very open environments with respect to avatar movement showed higher EERs when using the biometric system implemented in this research.
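A minimal sketch of distance-based identification as described above: each enrolled user has a template vector of behavioural features (for example, action, time, speed and entropy statistics), and a test sample is attributed to the user with the nearest template. The feature vectors are invented for illustration.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(sample, templates):
    """templates: dict of user -> feature vector; returns (best_user, distance)."""
    return min(((user, euclidean(sample, vec)) for user, vec in templates.items()),
               key=lambda pair: pair[1])

templates = {
    "user_a": [0.42, 1.8, 0.95, 3.1],   # hypothetical action/time/speed/entropy features
    "user_b": [0.10, 2.6, 0.40, 2.2],
    "user_c": [0.75, 1.1, 1.30, 3.8],
}
print(identify([0.40, 1.9, 1.00, 3.0], templates))   # closest to user_a
```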
