  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Regions-of-interest-driven medical image compression

Firoozbakht, Mohsen January 2014 (has links)
Advances in medical imaging technologies, particularly magnetic resonance imaging and multi-detector computed tomography (CT), have resulted in a substantial increase in the size of datasets. In order to reduce the cost of storage, diagnostic analysis and transmission time without significant reduction in image quality, a state-of-the-art image compression technique is required. We propose here a context-based, region-of-interest (ROI) approach for the compression of 3D CT images, and in particular vascular images, where high spatial resolution and contrast sensitivity are required in specific areas. The methodology is developed on the JPEG2000 standard to provide a variable level of compression in the (x,y) plane as well as along the z axis. The proposed lossy-to-lossless method compresses multiple ROIs depending on their degree of clinical interest. High-priority areas are assigned a higher precision (up to lossless compression) than other areas such as the background. ROIs are annotated automatically. The method has been optimized and applied to vascular images from CT angiography of peripheral arteries and compared with standard medical image codecs on 10 datasets with regard to image quality and diagnostic performance. The average size of the compressed images can be reduced to 61, 60, 66, and 89 percent with respect to lossless JP2K, lossless JP3D, lossless H.264, and the original image respectively, with no remarkable impairment of diagnostic accuracy based on the visual judgement of two radiologists.
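The core idea of the abstract — keep ROI pixels at (near-)lossless precision while coarsening everything else — can be sketched in a few lines. This is a toy illustration of variable-precision quantisation, not the thesis's JPEG2000-based codec; the function and parameter names are invented for the example.

```python
import numpy as np

def roi_quantise(image, roi_mask, roi_step=1, bg_step=16):
    """Quantise background pixels coarsely while keeping ROI pixels at
    full precision: a toy analogue of multi-ROI lossy-to-lossless coding."""
    out = image.astype(np.int32).copy()
    bg = ~roi_mask
    out[bg] = (out[bg] // bg_step) * bg_step            # coarse outside ROIs
    out[roi_mask] = (out[roi_mask] // roi_step) * roi_step  # fine/lossless inside
    return out

image = np.arange(64, dtype=np.int32).reshape(8, 8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True                     # pretend this region is a vessel
q = roi_quantise(image, mask)
assert np.array_equal(q[mask], image[mask])   # ROI preserved exactly
assert np.all(q[~mask] % 16 == 0)             # background coarsened
```

In the real codec the background retains a (lower) bit budget rather than a fixed quantisation step, and the ROI/background split is driven by automatic annotation.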
62

Crowd control and management enterprise modelling (CCMEM) utilising the MECCA (mega event coordination and control architecture) framework

Alsubaie, Hatab January 2014 (has links)
Crowds are often an integral part of an event or activity and may easily be overlooked, yet they present a substantial threat to the health and safety of all those attending such an event. In the majority of crowd control situations, the importance of managing the event will not rest simply with the event managers themselves, but is likely to involve creating efficient enterprise-wide systems with which several third parties would need to interact in order to deal with difficulties should they arise, such as the need to liaise with the police or fire service, as appropriate. This research focuses on the practices of crowd management and the way in which those involved in crowd management should potentially change their approach, in order to enhance safety, but also to enhance the efficiency of managing and controlling the crowd, something which is becoming increasingly important given the economic impact that large-scale events can have on a region. To enable the above, a Crowd Control and Management Enterprise Modelling (CCMEM) framework was developed. The first stage of this was the synthesis of the appropriate components within various existing crowd management models found in the literature. This synthesis formed the basis of the theoretical components from which the Mega Event Command and Control Architecture (MECCA) framework was developed. This framework was evaluated with two case studies involving very large or mega events, namely the Hajj to Mecca and the London Olympics 2012. A research study using both qualitative and quantitative methods to collect primary data was designed, which further developed and validated the CCMEM and MECCA frameworks. The application of the MECCA framework to the two case studies was evaluated using the Crowd Management Evaluation Components (CMEC).
When looking at the results of the data collected and the case studies in this research, it became apparent that an enterprise-wide understanding of mega event management enabled the effective mapping and development of associated integrated systems for each of the components of the framework. This in turn leads to more efficient and effective crowd management. This better understanding also enables officials to react much more effectively and quickly to changes in crowd dynamics. Further work can be carried out to develop the various integrated information systems that will be required, based on the enterprise-wide CCMEM-MECCA framework.
63

Investigation of tracking processes applicable to adjacent non-overlapping RGB-D sensors

Almazan, Emilio J. January 2014 (has links)
The work presented in this thesis provides a framework for monitoring wide-area indoor spaces built from multiple Microsoft Kinect sensors. A large field of coverage is achieved by placing the sensors in a non-overlapping configuration to reduce the interference between the projected structured patterns. A novel procedure is proposed for estimating the geometric calibration between sensors, which enables a common representation for all data by providing many corresponding planes in the view volume of each sensor using a “paddle”. Within this framework, an investigation is conducted of different depth-based spaces for people detection and tracking purposes. Kinect v.1 sensors bring a multitude of benefits to surveillance applications, mainly for occlusion reasoning. However, this sensor has important limitations in terms of resolution, noise and range. In particular, data become more scattered with distance along the optical axis of the camera, resulting in non-homogeneous representations throughout the range. Furthermore, when considering the aggregated view, each camera produces a different orientation of data. A polar coordinate representation of the common ground plane is proposed that mitigates these limitations and effectively aggregates the data from all sensors. The use of discriminative appearance models is a chief aspect in properly distinguishing people from each other, especially where the density of people is high. A multi-part appearance model is presented in this work, the chromogram, which combines colour with the height dimension, offering high discriminative capabilities especially during occlusion periods. A critical stage for multi-target tracking systems is establishing the correct association between targets and measurements, also known as the data association problem. In this context, the data association stage is investigated by evaluating different well-known data association methodologies.
An alternative tracking approach which does not require a data association process is also analysed: the Mean-Shift tracker. A modified version of the Mean-Shift tracker is proposed for tracking on the ground plane that integrates chromograms to reduce distraction from the background and other targets. A new challenging dataset is proposed for the evaluation of multi-target tracking algorithms. The tracking methodologies proposed in this work are compared quantitatively within this framework.
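The Mean-Shift idea mentioned above — repeatedly moving a window to the mean of the points it covers until it settles on a mode — can be sketched with a flat kernel. This is a minimal sketch only: the thesis version additionally weights points by the chromogram appearance model, which is omitted here, and all numbers below are invented.

```python
import numpy as np

def mean_shift(points, start, bandwidth=1.0, iters=20):
    """Flat-kernel Mean-Shift mode seeking on 2D points (e.g. ground-plane
    detections): move the centre to the mean of points within `bandwidth`."""
    centre = np.asarray(start, dtype=float)
    for _ in range(iters):
        dist = np.linalg.norm(points - centre, axis=1)
        inside = points[dist < bandwidth]
        if len(inside) == 0:
            break
        new_centre = inside.mean(axis=0)
        if np.allclose(new_centre, centre):
            break
        centre = new_centre
    return centre

rng = np.random.default_rng(0)
cluster = rng.normal([5.0, 5.0], 0.2, size=(100, 2))  # a 'person' on the ground plane
mode = mean_shift(cluster, start=[4.0, 4.0], bandwidth=2.0)
assert np.linalg.norm(mode - [5.0, 5.0]) < 0.3
```

The appeal for tracking is that no explicit target-to-measurement assignment is needed: the window simply follows the local density peak from frame to frame.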
64

Analysing learning behaviour to inform the pedagogical design of e-learning resources : a case study method applied to computer programming courses

Campos Hebrero, A. M. January 2015 (has links)
The work presented in this thesis is motivated by the need to develop practical guidelines to inform the pedagogical design of learning objects and the instructional contexts in which they are used. The difficulty is that there is no standard definition for pedagogical design or appropriate guidelines, in contrast with technical guidelines. Researchers and academic practitioners hold different understandings of the pedagogical values in the design of learning objects that determine their quality and effectiveness as educational software. Traditionally, empirical studies for the evaluation of learning objects gather rating data from the main consumers (i.e. instructional designers, teachers, and students) to assess a variety of design aspects. In this research, it is argued that, in order to evaluate and improve pedagogical design, valuable information can be extracted by analysing existing differences between students and how they use learning objects in real instructional contexts. Given this scenario, investigating the pedagogical aspects of the design of learning objects, and how the study of students' behaviour with them can serve to inform such design, became the main research interest of this thesis. The exploratory research presents a review of standard technical guidelines and seven evaluation frameworks for learning objects that emerged in the period from 2000 to 2013, revealing a wide spectrum of criteria used to assess their quality and effectiveness. The review explores the advantages and faults of well-known methodologies and instruments for the evaluation of learning materials and presents a selection of 12 pedagogical attributes of design, with a detailed analysis of their meanings and implications for the development of learning objects. The 12 pedagogical attributes of design are: Learning Objective, Integration, Context, Multimedia Richness, Previous Knowledge, Support, Feedback, Self-direction, Interactivity, Navigation, Assessment, and Alignment.
The empirical research is based on two case studies where blended learning techniques are used as a new teaching approach for first-year Computer Programming courses at the Austral University of Chile. A virtual learning environment was customized and used in these courses to deliver different types of learning contents and assignments. Three studies were carried out for each course: the first study shows the relationships between students' interactions with different materials; the second study demonstrates the influence that learning styles exert upon these interactions; and the third study collects students' ratings of the twelve pedagogical attributes of the learning resources used during the course. The results demonstrate that a relationship exists between the pedagogical attributes of the design of different learning resources and students' interactions with them. Regardless of the learning style preferences of individuals in both cohorts, the design attributes that have the greatest effect on students' behaviour with learning objects and with the whole instructional context are Interactivity, Support, Feedback, and Assessment. Of the three sources of data, only a combination of two of them, behavioural data and students' ratings, proved to be valuable sources of empirical evidence to inform pedagogical design aspects of learning resources. However, it is necessary to establish a direct mapping between design attributes and expected behavioural indicators to facilitate the identification of improvements in the pedagogical design of learning resources.
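Relating a design attribute's rating to observed interaction counts, as the studies above do, comes down to computing a correlation between two per-resource series. A minimal Pearson correlation sketch, with invented data purely for illustration:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

interactivity_rating = [3, 4, 2, 5, 4, 1]       # per-resource attribute rating (invented)
interaction_count    = [30, 44, 18, 60, 41, 9]  # interactions logged in the VLE (invented)
r = pearson(interactivity_rating, interaction_count)
assert r > 0.9   # strongly positive in this toy data
```

The thesis's point is precisely that such behavioural/rating relationships, rather than ratings alone, are what make the data informative for design.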
65

Real time predictive monitoring system for urban transport

Khan, Nauman Ahmed January 2017 (has links)
Ubiquitous access to mobile and internet technology has driven a significant increase in the amount of data produced, communicated and stored by corporations as well as by individual users in recent years. The research presented in this thesis proposes an architectural framework to acquire, store, manipulate and integrate data and information within an urban transport environment, to optimise its operations in real time. The deployed architecture is based on the integration of a number of technologies and tailor-made algorithms implemented to provide a management tool to aid traffic monitoring, using intelligent decision-making processes. A creative combination of data mining techniques and machine learning algorithms was used to implement predictive analytics, as a key component in the process of addressing challenges in monitoring and managing an urban transport network operation in real time. The proposed solution has then been applied to an actual urban transport management system, within a partner company, Mermaid Technology, Copenhagen, to test and evaluate the proposed algorithms and the architectural integration principles used. Various visualization methods have been employed at numerous stages of the project to dynamically interpret the large volume and diversity of data, to effectively aid the monitoring and decision-making process. The deliverables of this project include the system architecture design, as well as software solutions which facilitate predictive analytics and effective visualisation strategies to aid real-time monitoring of a large system, in the context of urban transport. The proposed solutions have been implemented, tested and evaluated in a case study in collaboration with Mermaid Technology. Using live data from their network operations, the case study has aided in evaluating the efficiency of the proposed system.
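The predictive-analytics component described above can be illustrated, at its simplest, by a one-step-ahead forecast over a sliding window of recent observations. This is a hedged stand-in only: the thesis combines richer data mining and machine learning models, and the travel-time figures are invented.

```python
import numpy as np

def forecast_next(series, window=5):
    """Fit a least-squares line to the last `window` observations and
    extrapolate one step ahead (a minimal trend-based predictor)."""
    y = np.asarray(series[-window:], dtype=float)
    x = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    return slope * len(y) + intercept

travel_times = [12.0, 12.5, 13.0, 13.5, 14.0]  # minutes, invented linear trend
pred = forecast_next(travel_times)
assert abs(pred - 14.5) < 1e-6
```

In a real-time monitoring loop such a predictor would be re-fitted as each new observation streams in, and its residuals used to flag anomalous network conditions.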
66

Parallel machine vision for the inspection of surface mount electronic assemblies

Netherwood, Paul January 1993 (has links)
The aim of this thesis is to analyse and evaluate some of the problems associated with developing a parallel machine vision system applied to the problem of inspection of surface mount electronic assemblies. In particular, it analyses the problems associated with 2-D feature and shape extraction. Surface Mount Technology is increasingly being used for manufacturing electronic circuit boards because of its light weight and compactness, allowing the use of high pin count packages and greater component density. However, with this come significant problems with regard to inspection, especially the inspection of solder joints. Existing inspection systems are either prohibitively expensive for most manufacturers and/or have limited functionality. Consequently, a low-cost architecture for automated inspection is proposed that would consist of sophisticated machine vision software, running on a fast computing platform, that captures images from a simple optical system. This thesis addresses a specific part of this overall architecture, namely the machine vision software required for 2-D feature and shape extraction. Six stages are identified in 2-D feature and shape extraction: Canny Edge Detection, Hysteresis Thresholding, Linking, Dropout Correction, Shape Description and Shape Abstraction. To evaluate the performance of each stage, each is fully implemented and tested on examples of synthetic data and real data from the inspection problem. After Canny Edge Detection, significant edge points are isolated using Hysteresis Thresholding, which determines which edge points are important based on thresholds and connectivity. Edge points on their own do not describe the boundary of an object. A linking algorithm is developed in this thesis which groups edge points to describe the outline of a shape. A process of dropout correction is developed to overcome the problem of missing edge points after Canny and Hysteresis.
Connected edges are converted to a more abstract form which facilitates recognition. Shape abstraction is required to remove minor details on a boundary, without removing significant points of interest, in order to extract the underlying shape. Finally, these stages are integrated into a demonstrator system. 2-D feature and shape extraction is computationally expensive, so a parallel processing system based on a network of transputers is used. Transputers can provide the necessary computational power at a relatively low cost. The 2-D feature and shape extraction software is then required to run in parallel, so a distributed form of shape extraction is proposed. This minimises communication overheads and maximises processor usage, which increases execution speed. For this, a generic method for routing data around a transputer network, called Spatial Routing, is proposed.
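The Hysteresis Thresholding stage described above — keep edge points above a high threshold, plus any points above a low threshold that are connected to a kept point — can be sketched directly on a gradient-magnitude array. The thresholds and the tiny array below are illustrative, not from the thesis.

```python
import numpy as np
from collections import deque

def hysteresis(grad, low, high):
    """Keep pixels with gradient >= high, plus any pixels >= low that are
    8-connected (directly or transitively) to a kept pixel."""
    strong = grad >= high
    weak = grad >= low
    keep = strong.copy()
    queue = deque(zip(*np.nonzero(strong)))   # seed from strong edges
    h, w = grad.shape
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and weak[rr, cc] and not keep[rr, cc]:
                    keep[rr, cc] = True
                    queue.append((rr, cc))
    return keep

grad = np.array([[0, 40, 90, 40, 0],
                 [0,  0,  0,  0, 0],
                 [0, 40,  0,  0, 0]])
kept = hysteresis(grad, low=30, high=80)
assert kept[0, 1] and kept[0, 2] and kept[0, 3]  # weak edges linked to the strong one
assert not kept[2, 1]                            # isolated weak edge rejected
```

The subsequent Linking stage then walks these kept points to form ordered boundary chains, which Dropout Correction repairs where points are missing.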
67

Developing quality of service management architecture for delivering multicast applications

Roshanaei, Maryam January 2005 (has links)
Multicast applications have been a topic of intense research and development efforts over the past several years. Both the Internet Engineering Task Force (IETF) and the International Telecommunication Union (ITU) have been heavily involved in providing quality of service to support multicast application requirements. Multicast applications have varying performance requirements; therefore it is necessary to design a framework that serves to guarantee quality of service. However, the existing best-effort service cannot provide the guaranteed service level required by multicast applications. Two solutions have already been proposed to overcome this problem. The first proposed the tree-based functionality approach in the multicast transport protocol, providing reliability and scalability between a sender and a group of receivers. The second proposed end-to-end quality of service (QoS) over the network environment using interoperation of Integrated Services (IntServ) and Differentiated Services (DiffServ) principles. Both QoS architectures, Integrated and Differentiated Services, have their own advantages and disadvantages. With the interoperation of both architectures, it might be possible to build a scalable system which would provide predictable services. This framework has to be supported by a multicast transport protocol to provide reliability and scalability over the nodes. The aim of this research is to develop a framework to provide reliability and scalability on nodes (tree functionality) along with end-to-end resources, dynamic admission control and scalability over the network (interoperation of IntServ and DiffServ) for multicast applications. The "Enhanced Communication Transport Protocol" (ECTP) was chosen for this research. ECTP is a multicast transport protocol with tree-based functionality to support multicast applications.
The ECTP transport protocol is also able to provide QoS management functionality established by Integrated and/or Differentiated Services to support multicast applications. With this QoS management functionality, ECTP could provide reliability and scalability (over nodes) along with end-to-end resources, dynamic admission control and scalability over the network for multicast applications. This research is focused on the further enhancement and implementation of the ECTP transport protocol's QoS management specification. Two models have been proposed to enable the ECTP transport protocol with QoS management functionality established by the IntServ and/or DiffServ principles. Model (I) enables ECTP to negotiate end-to-end resource reservation using the standard RSVP (IntServ) signalling protocol. Model (II) enables ECTP to negotiate end-to-end resource reservation using the standard and aggregated RSVP (IntServ and DiffServ) signalling protocol. The "Optimized Network Engineering Tool 8.1" (OPNET) has been used in this research to implement and investigate the ECTP specifications. The OPNET simulator provides a comprehensive development environment for modelling and evaluating the performance of communications networks. The investigation consists of three case studies. The simulation results have shown that the ECTP transport protocol with tree-based functionality and the QoS management provided by IntServ and DiffServ interoperation produces the best performance for the traffic delay parameter over voice applications.
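At the heart of the IntServ-style resource reservation discussed above is per-link admission control: a reservation is installed only if the requested rate still fits the remaining capacity. The sketch below is purely schematic — real RSVP signalling carries flowspecs hop-by-hop along the multicast tree, and none of that machinery is modelled here; the class and figures are invented.

```python
class AdmissionControl:
    """Toy per-link admission control in the spirit of IntServ/RSVP:
    accept a flow only if its requested rate still fits the link capacity."""

    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.reserved = 0

    def request(self, rate_kbps):
        if self.reserved + rate_kbps <= self.capacity:
            self.reserved += rate_kbps
            return True    # reservation installed
        return False       # refused; flow falls back to best effort

link = AdmissionControl(capacity_kbps=1000)
assert link.request(600)       # first voice aggregate fits
assert link.request(300)       # second fits too
assert not link.request(200)   # would exceed capacity, refused
assert link.reserved == 900
```

Under the aggregated model (II), a DiffServ core would apply a check like this per traffic class rather than per flow, which is what restores scalability.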
68

Robust wide-area multi-camera tracking of people and vehicles to improve CCTV usage

Yin, Fei January 2011 (has links)
This thesis describes work towards a more advanced multiple-camera tracking system. This work was sponsored by BARCO, who had developed a motion tracker (referred to as the BARCO tracker) and wanted to assess its performance, improve the tracker and explore applications, especially for multi-camera systems. The overall requirement gave rise to specific work in this project. Two trackers (the BARCO tracker and the OpenCV 1.0 blobtracker) are tested using a set of datasets with a range of challenges, and their performance is quantitatively evaluated and compared. The BARCO tracker has then been further improved by adding three new modules: ghost elimination, shadow removal and an improved Kalman filter. Afterwards, the improved tracker is used as part of a multi-camera tracking system. Also, automatic camera calibration methods are proposed to effectively calibrate a network of cameras with minimum manual support (drawing line features in the scene image), and a novel scene modelling method is proposed to overcome the limitations of previous methods. The main contributions of this work to knowledge are as follows. A rich set of track-based metrics is proposed which allows the user to quantitatively identify specific strengths and weaknesses of an object tracking system, such as the performance of specific modules of the system or failures under specific conditions. These metrics also allow the user to measure the improvements that have been applied to a tracking system and to compare the performance of different tracking methods. For single-camera tracking, new modules have been added to the BARCO tracker to improve tracking performance and prevent specific tracking failures. A novel method is proposed by the author to identify and remove ghost objects. Another two methods are adopted from others to reduce the effect of shadows and improve the accuracy of tracking.
For multiple-camera tracking, a quick and efficient method is proposed for automatically calibrating multiple cameras into a single-view map based on homography mapping. Then, a vertical-axis-based approach is used to fuse detections from single camera views, and a Kalman filter is employed to track objects on the ground plane. Last but not least, a novel method is proposed to automatically learn a 3D non-coplanar scene model (e.g. multiple levels, stairs, and overpasses) by exploiting the variation of pedestrian heights within the scene. Such a method extends the applicability of the existing multi-camera tracking algorithm to a larger variety of environments, both indoors and outdoors, where objects (pedestrians and/or vehicles) are not constrained to move on a single flat ground plane.
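The homography mapping mentioned above is, operationally, a 3x3 matrix applied to homogeneous image points to place them on the common ground-plane map. A minimal sketch of that projection step; the matrix below is a made-up scale-plus-translation example, not a calibrated one.

```python
import numpy as np

def to_ground_plane(H, pixel):
    """Map an image point (u, v) to ground-plane coordinates via a
    3x3 homography H, dividing out the homogeneous scale."""
    p = H @ np.array([pixel[0], pixel[1], 1.0])
    return p[:2] / p[2]

# Pure scale + translation homography, for illustration only.
H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0],
              [0.0, 0.0,  1.0]])
ground = to_ground_plane(H, (100, 40))
assert np.allclose(ground, [60.0, 40.0])
```

With one such H per camera, detections from every view land in the same coordinate frame, where the ground-plane Kalman filter can track them jointly.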
69

Tracking people across multiple cameras in the presence of clutter and noise

Colombo, Alberto January 2011 (has links)
As video surveillance systems become more and more pervasive in our society, it is evident that simply increasing the number of cameras does not guarantee increased security, since each operator can only attend to a limited number of monitors. To overcome this limit, automatic video surveillance systems (AVSS, computer-based surveillance systems that automate some of the most tedious work of security operators) are being deployed. One such task is tracking, defined by the end users in this project as "keeping a selected passenger always visible on a surveillance monitor". The purpose of this work was to develop a single-person, multi-camera tracker that can be used in real time to follow a manually selected individual. The operation of selecting an individual for tracking is called tagging, and therefore this type of tracker is known as a tag-and-track system. The developed system is conceived to be deployed as part of a large surveillance network, consisting of possibly hundreds of cameras, with possibly large blind regions between cameras. The main contribution of this thesis is a probabilistic framework that can be used to develop a multi-camera tracker by fusing heterogeneous information coming from visual sensors and from prior knowledge about the relative positioning of cameras in the surveillance network. The developed tracker has been demonstrated to work in real time on a standard PC, independently of the number of cameras in the network. Quantitative performance evaluation is carried out using realistic tracking scenarios.
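The probabilistic fusion described above can be caricatured as a normalised Bayes update: a prior over where the tagged person may re-appear (from camera topology and transit times across blind regions) is combined with per-camera appearance likelihoods. This is a schematic sketch of the idea, not the thesis's actual model; camera names and probabilities are invented.

```python
def fuse(prior, likelihood):
    """Normalised Bayes update over candidate cameras:
    posterior(cam) proportional to prior(cam) * likelihood(cam)."""
    post = {cam: prior[cam] * likelihood[cam] for cam in prior}
    z = sum(post.values())
    return {cam: p / z for cam, p in post.items()}

prior      = {"cam_3": 0.6, "cam_7": 0.3, "cam_9": 0.1}  # topology/transit prior
likelihood = {"cam_3": 0.2, "cam_7": 0.7, "cam_9": 0.1}  # appearance match score
post = fuse(prior, likelihood)
assert max(post, key=post.get) == "cam_7"
assert abs(sum(post.values()) - 1.0) < 1e-9
```

Because the update only touches the handful of cameras with non-negligible prior, its cost stays flat as the network grows, consistent with the claim of real-time operation independent of camera count.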
70

Coordination and control mechanisms for embedded swarm-like agents

Mullen, Robert January 2011 (has links)
Observations of the mechanisms of natural systems have given us a wide range of problem-solving tools that can be applied to computational and technology-related challenges. This thesis explores the use of swarm intelligence mechanisms to facilitate group-level cooperative coordination and control of swarm-like agents that are embedded in 2D or 3D environments, and explores how distributed dynamic behaviours can be integrated into the self-organisation process. Specifically, a number of algorithms are developed to facilitate adaptive pattern formation and manipulation for two distinctly different problems. Firstly, large-scale pattern formation is considered using an embedded swarm of software agents. The agents are considered as virtual entities which are embedded into digital images at the pixel level, such that the intensity map of the image corresponds to a landscape within which the swarm of agents move. The agent-agent and agent-environment interactions are then studied in the context of emergent pattern formation, from which a number of ant algorithms are developed to achieve a range of image and video processing solutions by inducing swarm self-organisation in response to user-specified image features. Artificial pheromones are used to reinforce features of interest in the image landscape, and after the swarm has self-organised, the resultant pheromone map reveals the pattern feature to be extracted. The algorithm can be adapted for different types of image features with relative ease, and simultaneous self-organisation of multiple swarms in the same image environment is implemented to achieve distributed multi-feature extraction. The dynamic nature of the self-organising process is exploited to extend the functionality of the algorithm to feature tracking in real-time imagery, where the swarms effectively track features of interest from frame to frame.
An adaptive threshold method is developed which exploits the distributed nature of the swarm approach by allowing individual ant agents to adapt their own feature threshold parameters in response to their local environment. This is both an interesting study with regard to artificial swarm pattern formation, and also provides practical image and video processing solutions which do not require a full image scan or any filtering operations, unlike many traditional methods. The novel adaptive threshold method eliminates the requirement for a user-set threshold and allows for distributed, multi-level thresholding across image environments, as well as adaptive capabilities for dynamic imagery. The second problem focuses on pattern formation and manipulation of a small swarm of hardware agents in a swarm robotics problem setting. Transferring from software agents to hardware agents introduces several difficulties to overcome in order to fully realise the distributed nature of the swarm intelligence approach to multi-robot formation control. The second part of this thesis focuses on designing a control architecture that enables cooperative coordination and control of multiple robots, leading to group-level adaptive pattern formation and manipulation, using a fully distributed algorithm that requires no inter-robot communication and retains robot anonymity. This is achieved using a distributed variation of the virtual forces approach. The use of a genetic algorithm for problem-specific parameter optimisation is investigated to improve performance with respect to pattern formation for area coverage. A multi-behavioural approach is investigated for the problem scenario of locating and monitoring multiple target areas within a partially observable environment, where the self-organising pattern formation behaviours are exploited to provide distributed coverage.
A new mechanism called Virtual Robot Nodes (VRNs) is introduced which improves swarm-level cohesion and allows for more complex formation and pattern management. The VRN method allows individual robots to self-manage their experienced virtual forces in response to their perception of their local environment and neighbouring robots, allowing for distributed dynamic adaptation. Verification of the proposed algorithms is carried out through a range of experiments in 2D simulation, physics- and sensor-based simulation, and embedded simulation on real robots in a laboratory environment, for a range of test scenarios. The application of different nature-inspired control architectures for small to large swarms, and from software entities to hardware entities, promotes a focal point for discussion on the wide-ranging potential for harnessing the knowledge of nature in solving computational problems.
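The virtual forces approach underlying the formation control above can be sketched as spring-like pairwise forces: a robot is repelled by neighbours closer than a desired spacing and attracted to those further away, using only locally sensed positions and no communication. The gain and spacing constants below are illustrative, not thesis parameters.

```python
import numpy as np

def virtual_force(me, neighbours, desired=2.0, gain=0.1):
    """Sum spring-like virtual forces from sensed neighbour positions:
    repulsive inside the desired spacing, attractive beyond it."""
    me = np.asarray(me, dtype=float)
    force = np.zeros(2)
    for n in neighbours:
        d = np.asarray(n, dtype=float) - me
        dist = np.linalg.norm(d)
        if dist > 0:
            force += gain * (dist - desired) * (d / dist)
    return force

# Neighbour too close (dist 1 < desired 2): net force pushes us away.
f_close = virtual_force([0.0, 0.0], [[1.0, 0.0]])
assert f_close[0] < 0
# Neighbour too far (dist 4 > desired 2): we are pulled toward it.
f_far = virtual_force([0.0, 0.0], [[4.0, 0.0]])
assert f_far[0] > 0
```

Each robot integrates its own force vector into a velocity command, and the equilibrium of all these local interactions is the group-level pattern; the VRN mechanism then adds virtual neighbours to shape that equilibrium.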
