  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Detection, tracking and classification of vehicles in urban environments

Chen, Zezhi January 2012 (has links)
The work presented in this dissertation provides a framework for object detection, tracking and vehicle classification in urban environments. The final aim is to produce a system for traffic flow statistics analysis. Based on level set methods and a multi-phase colour model, a general variational formulation is defined which combines the Minkowski-form distances L2 and L3 of each channel and their homogeneous regions in the index. The active segmentation method successfully finds whole object boundaries which include different known colours, even against very complex backgrounds, rather than splitting an object into several regions with different colours. For video data supplied by a nominally stationary camera, an adaptive Gaussian mixture model (GMM), with a multi-dimensional Gaussian kernel spatio-temporal smoothing transform, has been used for modelling the distribution of colour image data. The algorithm improves segmentation performance in adverse imaging conditions. A self-adaptive GMM, with an online dynamical learning rate and a global illumination changing factor, is proposed to address the problem of sudden changes in illumination. The effectiveness of a state-of-the-art classification algorithm to categorise road vehicles for an urban traffic monitoring system, using a set of measurement-based features (MBF) and a multi-shape descriptor, is investigated. Manual vehicle segmentation was used to acquire a large database of labelled vehicles to form a set of MBF in combination with pyramid histogram of orientation gradient (PHOG) and edge-based PHOG features. These are used to classify the objects into four main vehicle categories: car, van (van, minivan, minibus and limousine), bus (single and double decked) and motorcycle (motorcycle and bicycle). Then, an automatic system for vehicle detection, tracking and classification from roadside CCTV is presented. The system counts vehicles and separates them into the four categories mentioned above.
The GMM and shadow removal method have been used to deal with sudden illumination changes and camera vibration. A Kalman filter tracks a vehicle to enable classification by majority voting over several consecutive frames, and a level set method has been used to refine the foreground blob. Finally, a framework for confidence-based active learning for vehicle classification in an urban traffic environment is presented. Only a small number of low-confidence samples need to be identified and annotated according to their confidence. Compared to passive learning, the number of annotated samples needed for the training dataset can be reduced significantly, yielding a high-accuracy classifier with low computational complexity and high efficiency.
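The majority-voting step of the classification stage can be sketched simply: each tracked vehicle accumulates a class label per frame, and the final category is the most frequent one. A minimal illustration only, assuming the four categories above; the function name is our own:

```python
from collections import Counter

def classify_track(frame_labels):
    """Assign one category to a tracked vehicle by majority vote
    over its per-frame classifier outputs."""
    if not frame_labels:
        return None
    # most_common(1) returns [(label, count)] for the winning label
    return Counter(frame_labels).most_common(1)[0][0]

# A track classified over six consecutive frames:
print(classify_track(["car", "van", "car", "car", "bus", "car"]))  # car
```

Voting over several frames makes the final label robust to occasional per-frame misclassifications, which is why the Kalman tracker is needed to maintain identity across frames.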
252

Regions-of-interest-driven medical image compression

Firoozbakht, Mohsen January 2014 (has links)
Advances in medical imaging technologies, particularly magnetic resonance imaging and multi-detector computed tomography (CT), have resulted in a substantial increase in the size of datasets. In order to reduce the costs of storage and diagnostic analysis and the transmission time without significant reduction in image quality, a state-of-the-art image compression technique is required. We propose here a context-based, region-of-interest (ROI) based approach for the compression of 3D CT images, and in particular vascular images, where high spatial resolution and contrast sensitivity are required in specific areas. The methodology is developed on the JPEG2000 standard to provide a variable level of compression in the (x,y) plane as well as along the z axis. The proposed lossy-to-lossless method compresses multiple ROIs depending on their degree of clinical interest. High-priority areas are assigned a higher precision (up to lossless compression) than other areas such as the background. ROIs are annotated automatically. The method has been optimised and applied to vascular images from CT angiography for peripheral arteries and compared with a standard medical image codec on 10 datasets with regard to image quality and diagnostic performance. The average size of the compressed images can be reduced to 61, 60, 66, and 89 percent with respect to lossless JP2K, lossless JP3D, lossless H.264, and the original image respectively, with no notable impairment of diagnostic accuracy based on the visual judgement of two radiologists.
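The variable-precision idea can be illustrated with a toy quantiser: pixels inside a clinically important ROI keep fine precision while the background is quantised coarsely. This is only a sketch of the principle; JPEG2000 realises it through per-region code-block priorities rather than direct requantisation, and the step sizes here are arbitrary:

```python
def quantise(pixels, roi_mask, fine_step=1, coarse_step=16):
    """Quantise each pixel with a step that depends on ROI membership:
    fine inside the ROI (lossless when fine_step == 1), coarse outside."""
    return [
        (p // (fine_step if in_roi else coarse_step))
        * (fine_step if in_roi else coarse_step)
        for p, in_roi in zip(pixels, roi_mask)
    ]

# ROI pixel kept exactly; background pixel rounded down to a multiple of 16.
print(quantise([100, 37], [True, False]))  # [100, 32]
```

Coarser steps discard low-order bits in the background, which is where most of the size reduction comes from while the diagnostically relevant regions stay intact.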
253

Crowd control and management enterprise modelling (CCMEM) utilising the MECCA (mega event coordination and control architecture) framework

Alsubaie, Hatab January 2014 (has links)
Crowds are often an integral part of an event or activity; they may easily be overlooked, yet they present a substantial threat to the health and safety of all those attending. In the majority of crowd control situations, the responsibility for managing the event will not rest simply with the event managers themselves; it is likely to require efficient enterprise-wide systems with which several third parties, such as the police or fire service, can interact in order to deal with difficulties, should they arise. This research focuses on the practices of crowd management and the way in which those involved should change their approach, both to enhance safety and to improve the efficiency of managing and controlling the crowd, something which is becoming increasingly important given the economic impact that large-scale events can have on a region. To enable the above, a Crowd Control and Management Enterprise Modelling (CCMEM) framework was developed. The first stage of this was the synthesis of the appropriate components from various existing crowd management models found in the literature. This synthesis formed the basis of the theoretical components from which the Mega Event Command and Control Architecture (MECCA) framework was developed. This framework was evaluated with two case studies involving very large or mega events, namely the Hajj to Mecca and the London Olympics 2012. A research study using both qualitative and quantitative methods to collect primary data was designed, which further developed and validated the CCMEM and MECCA frameworks. The application of the MECCA framework to the two case studies was evaluated using the Crowd Management Evaluation Components (CMEC).
Looking at the results of the data collected and the case studies in this research, it became apparent that an enterprise-wide understanding of mega event management enabled the effective mapping and development of associated integrated systems for each of the components of the framework. This in turn leads to more efficient and effective crowd management. This better understanding also enables officials to react much more effectively and quickly to changes in crowd dynamics. Further work can be carried out to develop the various integrated information systems that will be required, based on the enterprise-wide CCMEM-MECCA framework.
254

Investigation of tracking processes applicable to adjacent non-overlapping RGB-D sensors

Almazan, Emilio J. January 2014 (has links)
The work presented in this thesis provides a framework for monitoring wide-area indoor spaces built from multiple Microsoft Kinect sensors. A large field of coverage is achieved by placing the sensors in a non-overlapping configuration to reduce the interference between the projected structured patterns. A novel procedure is proposed for estimating the geometric calibration between sensors, which enables a common representation of all data by providing many corresponding planes in the view volume of each sensor using a "paddle". Within this framework, an investigation is conducted of different depth-based spaces for people detection and tracking purposes. Kinect v.1 sensors bring a multitude of benefits to surveillance applications, mainly for occlusion reasoning. However, this sensor has important limitations in terms of resolution, noise and range. In particular, data becomes more scattered with distance along the optical axis of the camera, resulting in non-homogeneous representations throughout the range. Furthermore, when considering the aggregated view, each camera produces a different orientation of data. A polar-coordinate representation of the common ground plane is proposed that mitigates these limitations and effectively aggregates the data from all sensors. The use of discriminative appearance models is a chief aspect of properly distinguishing people from each other, especially where the density of people is high. A multi-part appearance model is presented in this work, the chromogram, which combines colour with the height dimension, offering high discriminative capabilities especially during occlusion periods. A critical stage for multi-target tracking systems is establishing the correct association between targets and measurements, also known as the data association problem. In this context, the data association stage is investigated by evaluating different well-known data association methodologies.
An alternative tracking approach which does not require a data association process is also analysed: the Mean-Shift tracker. A modified version of the Mean-Shift tracker is proposed for tracking on the ground plane, integrating chromograms to reduce distractions from the background and other targets. A new challenging dataset is proposed for the evaluation of multi-target tracking algorithms. The tracking methodologies proposed in this work are compared quantitatively within this framework.
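The ground-plane polar representation mentioned above amounts to a simple change of coordinates: each detection's (x, y) position is re-expressed as a range and bearing about a chosen origin, so the radial scattering of depth data becomes axis-aligned. A minimal sketch, with a hypothetical origin at the sensor's footprint:

```python
import math

def to_polar(x, y, origin=(0.0, 0.0)):
    """Map a ground-plane detection (x, y) to (range, bearing)
    about the given origin."""
    dx, dy = x - origin[0], y - origin[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

r, theta = to_polar(3.0, 4.0)
print(r)  # 5.0
```

In this space the range axis aligns with the direction of growing Kinect noise, which makes the non-homogeneous scatter easier to model uniformly across sensors.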
255

Analysing learning behaviour to inform the pedagogical design of e-learning resources : a case study method applied to computer programming courses

Campos Hebrero, A. M. January 2015 (has links)
The work presented in this thesis is motivated by the need to develop practical guidelines to inform the pedagogical design of learning objects and the instructional contexts in which they are used. The difficulty is that there is no standard definition of pedagogical design or appropriate guidelines, in contrast with technical guidelines. Researchers and academic practitioners hold different understandings of the pedagogical values in the design of learning objects that determine their quality and effectiveness as educational software. Traditionally, empirical studies for the evaluation of learning objects gather rating data from the main consumers (i.e. instructional designers, teachers, and students) to assess a variety of design aspects. In this research, it is argued that, in order to evaluate and improve pedagogical design, valuable information can be extracted by analysing existing differences between students and how they use learning objects in real instructional contexts. Given this scenario, investigating the pedagogical aspects of the design of learning objects, and how the study of students' behaviour with them can serve to inform such design, became the main research interest of this thesis. The exploratory research presents a review of standard technical guidelines and seven evaluation frameworks for learning objects that emerged in the period from 2000 to 2013, revealing a wide spectrum of criteria used to assess their quality and effectiveness. The review explores the advantages and faults of well-known methodologies and instruments for the evaluation of learning materials and presents a selection of 12 pedagogical attributes of design, with a detailed analysis of their meanings and implications for the development of learning objects. The 12 pedagogical attributes of design are: Learning Objective, Integration, Context, Multimedia Richness, Previous Knowledge, Support, Feedback, Self-direction, Interactivity, Navigation, Assessment, and Alignment.
The empirical research is based on two case studies where blended learning techniques are used as a new teaching approach for first-year Computer Programming courses at the Austral University of Chile. A virtual learning environment was customised and used in these courses to deliver different types of learning contents and assignments. Three studies were carried out for each course: the first study shows the relationships between students' interactions with different materials; the second study demonstrates the influence that learning styles exert upon these interactions; and the third study collects students' scores on the 12 pedagogical aspects of the learning resources used during the course. The results demonstrate that a relationship exists between the pedagogical attributes of the design of different learning resources and students' interactions with them. Regardless of the learning style preferences of individuals in both cohorts, the design attributes that have the greatest effect on students' behaviour with learning objects and with the whole instructional context are Interactivity, Support, Feedback, and Assessment. Of the three sources of data, only a combination of two of them, behavioural data and students' scores, provides valuable empirical evidence to inform the pedagogical design of learning resources. However, it is necessary to establish a direct mapping between design attributes and expected behavioural indicators to facilitate the identification of improvements in the pedagogical design of learning resources.
256

Real time predictive monitoring system for urban transport

Khan, Nauman Ahmed January 2017 (has links)
Ubiquitous access to mobile and internet technology has driven a significant increase in the amount of data produced, communicated and stored by corporations as well as by individual users in recent years. The research presented in this thesis proposes an architectural framework to acquire, store, manipulate and integrate data and information within an urban transport environment, to optimise its operations in real time. The deployed architecture is based on the integration of a number of technologies and tailor-made algorithms implemented to provide a management tool to aid traffic monitoring, using intelligent decision-making processes. A creative combination of data mining techniques and machine learning algorithms was used to implement predictive analytics, as a key component in the process of addressing challenges in monitoring and managing an urban transport network operation in real time. The proposed solution was then applied to an actual urban transport management system, within a partner company, Mermaid Technology, Copenhagen, to test and evaluate the proposed algorithms and the architectural integration principles used. Various visualisation methods have been employed at numerous stages of the project to dynamically interpret the large volume and diversity of data and to effectively aid the monitoring and decision-making process. The deliverables of this project include the system architecture design, as well as software solutions which facilitate predictive analytics and effective visualisation strategies to aid real-time monitoring of a large system, in the context of urban transport. The proposed solutions have been implemented, tested and evaluated in a case study in collaboration with Mermaid Technology, using live data from their network operations to assess the efficiency of the proposed system.
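As a hedged illustration of the predictive component, the sketch below forecasts the next value of a traffic measurement series by simple exponential smoothing, a common baseline for short-horizon series; the thesis's actual models and parameters are not specified here, so the function name and smoothing factor are assumptions:

```python
def forecast_next(series, alpha=0.5):
    """One-step-ahead forecast of a traffic measurement series
    (e.g. vehicle counts per interval) by exponential smoothing."""
    level = float(series[0])
    for x in series[1:]:
        # blend each new observation into the running level
        level = alpha * x + (1 - alpha) * level
    return level

print(forecast_next([10, 10, 10]))  # 10.0
```

A real-time system would update `level` incrementally as each new measurement arrives, which keeps the cost per update constant regardless of history length.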
257

Parallel machine vision for the inspection of surface mount electronic assemblies

Netherwood, Paul January 1993 (has links)
The aim of this thesis is to analyse and evaluate some of the problems associated with developing a parallel machine vision system applied to the inspection of surface mount electronic assemblies. In particular it analyses the problems associated with 2-D feature and shape extraction. Surface Mount Technology is increasingly being used for manufacturing electronic circuit boards because of its light weight and compactness, allowing the use of high pin count packages and greater component density. However, with this come significant problems with regard to inspection, especially the inspection of solder joints. Existing inspection systems are either prohibitively expensive for most manufacturers and/or have limited functionality. Consequently, a low-cost architecture for automated inspection is proposed, consisting of sophisticated machine vision software, running on a fast computing platform, that captures images from a simple optical system. This thesis addresses a specific part of this overall architecture, namely the machine vision software required for 2-D feature and shape extraction. Six stages are identified in 2-D feature and shape extraction: Canny Edge Detection, Hysteresis Thresholding, Linking, Dropout Correction, Shape Description and Shape Abstraction. To evaluate the performance of each stage, each is fully implemented and tested on examples of synthetic data and real data from the inspection problem. After Canny edge detection, significant edge points are isolated using hysteresis thresholding, which determines which edge points are important based on thresholds and connectivity. Edge points on their own do not describe the boundary of an object. A linking algorithm is developed in this thesis which groups edge points to describe the outline of a shape. A process of dropout correction is developed to overcome the problem of missing edge points after the Canny and hysteresis stages.
Connected edges are converted to a more abstract form which facilitates recognition. Shape abstraction is required to remove minor details on a boundary, without removing significant points of interest, in order to extract the underlying shape. Finally, these stages are integrated into a demonstrator system. 2-D feature and shape extraction is computationally expensive, so a parallel processing system based on a network of transputers is used. Transputers can provide the necessary computational power at a relatively low cost. The 2-D feature and shape extraction software is then required to run in parallel, so a distributed form of shape extraction is proposed. This minimises communication overheads and maximises processor usage, which increases execution speed. For this, a generic method for routing data around a transputer network, called Spatial Routing, is proposed.
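The hysteresis-thresholding stage described above can be sketched in one dimension: edge responses at or above the high threshold seed edges, and weaker responses survive only if transitively connected to a seed. A simplified sketch only; real Canny hysteresis works on the 2-D image with 8-connectivity:

```python
def hysteresis_1d(strengths, low, high):
    """Keep points >= high, plus any points >= low that are
    transitively adjacent to a kept point."""
    keep = [s >= high for s in strengths]
    stack = [i for i, k in enumerate(keep) if k]
    while stack:
        i = stack.pop()
        for j in (i - 1, i + 1):  # 1-D neighbourhood
            if 0 <= j < len(strengths) and not keep[j] and strengths[j] >= low:
                keep[j] = True
                stack.append(j)
    return keep

# The weak response at index 5 survives (adjacent to the strong seed at 4);
# the isolated medium responses at indices 1-2 are discarded.
print(hysteresis_1d([1, 5, 3, 1, 9, 4, 1], low=3, high=8))
```

The connectivity requirement is what suppresses isolated noise responses while preserving faint continuations of genuine edges.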
258

Developing quality of service management architecture for delivering multicast applications

Roshanaei, Maryam January 2005 (has links)
Multicast applications have been a topic of intense research and development efforts over the past several years. Both the Internet Engineering Task Force (IETF) and the International Telecommunication Union (ITU) have been heavily involved in providing quality of service to support multicast application requirements. Multicast applications have varying performance requirements; it is therefore necessary to design a framework that serves to guarantee quality of service. However, the existing best-effort service cannot provide the guaranteed service level required by multicast applications. Two solutions have already been proposed to overcome this problem. The first proposed a tree-based functionality approach in the multicast transport protocol, providing reliability and scalability between a sender and a group of receivers. The other proposed end-to-end quality of service (QoS) over the network using the interoperation of Integrated Services (IntServ) and Differentiated Services (DiffServ) principles. Both QoS architectures, Integrated and Differentiated Services, have their own advantages and disadvantages. With the interoperation of both architectures, it might be possible to build a scalable system which would provide predictable services. This framework has to be supported by a multicast transport protocol to provide reliability and scalability over the nodes. The aim of this research is to develop a framework that provides reliability and scalability on nodes (tree functionality) along with end-to-end resources, dynamic admission control and scalability over the network (interoperation of IntServ and DiffServ) for multicast applications. The Enhanced Communication Transport Protocol (ECTP) was chosen for this research. ECTP is a multicast transport protocol with tree-based functionality to support multicast applications.
ECTP is also able to provide QoS management functionality established by Integrated and/or Differentiated Services to support multicast applications. With this QoS management functionality, ECTP can provide reliability and scalability (over nodes) along with end-to-end resources, dynamic admission control and scalability over the network for multicast applications. This research focuses on the further enhancement and implementation of the ECTP QoS management specification. Two models have been proposed to enable ECTP with QoS management functionality established by the IntServ and/or DiffServ principles. Model (I) enables ECTP to negotiate end-to-end resource reservation using the standard RSVP (IntServ) signalling protocol. Model (II) enables ECTP to negotiate end-to-end resource reservation using standard and aggregated RSVP (IntServ and DiffServ) signalling. The Optimized Network Engineering Tool 8.1 (OPNET) has been used in this research to implement and investigate the ECTP specifications. The OPNET simulator provides a comprehensive development environment for modelling and evaluating the performance of communication networks. The investigation consists of three case studies. The simulation results show that ECTP with tree-based functionality and QoS management provided by IntServ and DiffServ interoperation produces the best performance for the traffic delay parameter for voice applications.
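The DiffServ half of this interoperation is ultimately visible at the sending host as a DSCP marking on outgoing packets. As an illustrative sketch only (this is not part of ECTP or OPNET), the snippet below marks a UDP socket with the Expedited Forwarding code point commonly used for voice traffic, assuming a Linux/POSIX stack:

```python
import socket

DSCP_EF = 46  # Expedited Forwarding per-hop behaviour (RFC 3246)

def mark_dscp(sock, dscp):
    """Set the DSCP bits of the IPv4 TOS byte; DiffServ routers
    schedule the packet according to this per-hop behaviour."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_dscp(sock, DSCP_EF)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) >> 2)  # 46
sock.close()
```

Marking happens per packet at the edge; the DiffServ domain then aggregates all EF-marked flows into one forwarding class, which is what makes the approach scalable compared with per-flow IntServ reservations.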
259

Robust wide-area multi-camera tracking of people and vehicles to improve CCTV usage

Yin, Fei January 2011 (has links)
This thesis describes work towards a more advanced multiple-camera tracking system. The work was sponsored by BARCO, who had developed a motion tracker (referred to as the BARCO tracker) and wanted to assess its performance, improve the tracker and explore applications, especially for multi-camera systems. This overall requirement gave rise to the specific work in this project. Two trackers (the BARCO tracker and the OpenCV 1.0 blob tracker) are tested on a set of datasets with a range of challenges, and their performance is quantitatively evaluated and compared. The BARCO tracker is then further improved by adding three new modules: ghost elimination, shadow removal and an improved Kalman filter. The improved tracker is then used as part of a multi-camera tracking system. In addition, automatic camera calibration methods are proposed to effectively calibrate a network of cameras with minimal manual support (drawing line features in the scene image), and a novel scene modelling method is proposed to overcome the limitations of previous methods. The main contributions of this work to knowledge are as follows. A rich set of track-based metrics is proposed which allows the user to quantitatively identify specific strengths and weaknesses of an object tracking system, such as the performance of specific modules or failures under specific conditions. These metrics also allow the user to measure the improvements applied to a tracking system and to compare the performance of different tracking methods. For single-camera tracking, new modules have been added to the BARCO tracker to improve tracking performance and prevent specific tracking failures. A novel method is proposed by the author to identify and remove ghost objects. Another two methods are adopted from others to reduce the effect of shadows and improve the accuracy of tracking.
For multiple-camera tracking, a quick and efficient method is proposed for automatically calibrating multiple cameras into a single view map based on homography mapping. Then, a vertical-axis-based approach is used to fuse detections from single camera views, and a Kalman filter is employed to track objects on the ground plane. Last but not least, a novel method is proposed to automatically learn a 3D non-coplanar scene model (e.g. multiple levels, stairs and overpasses) by exploiting the variation of pedestrian heights within the scene. Such a method extends the applicability of the existing multi-camera tracking algorithm to a larger variety of environments, both indoors and outdoors, where objects (pedestrians and/or vehicles) are not constrained to move on a single flat ground plane.
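The homography-based calibration maps each camera's image plane onto the single ground-plane view, so applying it to a detection is a small computation. A sketch with the 3x3 matrix stored as nested lists; the matrix itself would come from the calibration step, and the example matrices here are made up:

```python
def apply_homography(H, x, y):
    """Project an image point (x, y) into ground-plane map
    coordinates using the 3x3 homography H (row-major)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]  # homogeneous scale factor
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# A pure scaling homography doubles the coordinates:
H = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]
print(apply_homography(H, 2.0, 3.0))  # (4.0, 6.0)
```

Once every camera's detections are projected into the same map, fusion and ground-plane Kalman tracking can operate in one common coordinate frame.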
260

Tracking people across multiple cameras in the presence of clutter and noise

Colombo, Alberto January 2011 (has links)
As video surveillance systems become more and more pervasive in our society, it is evident that simply increasing the number of cameras does not guarantee increased security, since each operator can only attend to a limited number of monitors. To overcome this limit, automatic video surveillance systems (AVSS, computer-based surveillance systems that automate some of the most tedious work of security operators) are being deployed. One such task is tracking, defined by the end users in this project as "keeping a selected passenger always visible on a surveillance monitor". The purpose of this work was to develop a single-person, multi-camera tracker that can be used in real time to follow a manually selected individual. The operation of selecting an individual for tracking is called tagging, and therefore this type of tracker is known as a tag-and-track system. The developed system is conceived to be deployed as part of a large surveillance network, consisting of possibly hundreds of cameras, with possibly large blind regions between cameras. The main contribution of this thesis is a probabilistic framework that can be used to develop a multi-camera tracker by fusing heterogeneous information coming from visual sensors and from prior knowledge about the relative positioning of cameras in the surveillance network. The developed tracker has been demonstrated to work in real time on a standard PC, independently of the number of cameras in the network. Quantitative performance evaluation is carried out using realistic tracking scenarios.
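The probabilistic fusion at the heart of such a framework can be sketched as a Bayesian belief update over camera zones: the prior belief about where the tagged person is, multiplied by the likelihood of the current observations in each zone, then renormalised. Zone names and likelihood values here are purely illustrative, not taken from the thesis:

```python
def update_belief(prior, likelihood):
    """Posterior over camera zones: prior x likelihood, renormalised.
    Zones absent from `likelihood` get a small floor probability so
    the belief never collapses to exactly zero."""
    unnorm = {z: p * likelihood.get(z, 1e-9) for z, p in prior.items()}
    total = sum(unnorm.values())
    return {z: p / total for z, p in unnorm.items()}

belief = {"cam_A": 0.5, "cam_B": 0.5}
belief = update_belief(belief, {"cam_A": 0.9, "cam_B": 0.1})
print(round(belief["cam_A"], 2))  # 0.9
```

Keeping a small floor probability for unobserved zones is one way to handle the blind regions between cameras: the belief can spread back into a zone when the person reappears there.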
