
A QoS framework for modeling and simulation of distributed services within a cloud environment

Oppong, Eric Asamoah January 2014 (has links)
Distributed computing paradigms such as Cloud and SOA provide the architecture and medium for service computing, giving organisations flexibility in implementing IT solutions that meet specific business objectives. Advances in internet technology have opened up service computing and broadened its scope into utility computing, where computing solutions are modelled as services that allow consumers to use and pay for solutions comprising both applications and physical devices. This model offers great opportunities for cutting cost and deployment effort, but it also introduces changing user demands that differ from the usual service level agreements in conventional deployments. Service providers must consider different aspects of consumer demand when provisioning services, including non-functional requirements such as Quality of Service (QoS); this relates not only to user expectations but also to managing the effective distribution of resources and applications. The usual model for meeting user requirements is over-stretched and therefore requires additional information gathering and requirements analysis that can be used to determine effective management in service computing by leveraging SOA and Cloud computing on the basis of QoS factors. A model is needed that considers multiple criteria in decision making, enabling proper mapping from the service composition level to the resources provisioned for processing user requests: a framework capable of analysing both service composition and resource requirements. The aim of this thesis is therefore to develop a framework for service allocation in Cloud computing, based on SOA and QoS, that analyses user requirements to ensure effective allocation and performance in a distributed system. The framework handles both the top layer of user requirements, in terms of application development, and the lower layer of resource management, analysing requirements in terms of QoS to identify the common factors that match user requirements to available resources. The framework is evaluated using the CloudSim simulator to test its effectiveness in improving service and resource allocation in a distributed computing environment. The approach offers greater flexibility in overcoming over-provisioning and under-provisioning of resources by maintaining effective provisioning through the Service Oriented QoS Enabled Framework (SOQ-Framework) for requirement analysis of service composition and resource capabilities.
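
The framework's matching step can be pictured as multi-criteria scoring of candidate resources against a service request. The following is a minimal, hypothetical sketch of that idea only; the attribute names, weights and scoring rule are illustrative assumptions, not the SOQ-Framework's actual model.

```python
# Hypothetical sketch of multi-criteria QoS matching: score each candidate
# resource against a service request and pick the best fit. Attribute names
# and weights are illustrative assumptions.

REQUEST = {"cpu_mips": 2000, "ram_mb": 4096, "max_latency_ms": 50}
WEIGHTS = {"cpu": 0.4, "ram": 0.3, "latency": 0.3}

CANDIDATES = {
    "host_a": {"cpu_mips": 2500, "ram_mb": 8192, "latency_ms": 30},
    "host_b": {"cpu_mips": 1800, "ram_mb": 4096, "latency_ms": 20},
}

def score(request, resource, weights):
    """Higher is better; each term is a satisfaction ratio capped at 1.0."""
    cpu = min(resource["cpu_mips"] / request["cpu_mips"], 1.0)
    ram = min(resource["ram_mb"] / request["ram_mb"], 1.0)
    lat = min(request["max_latency_ms"] / max(resource["latency_ms"], 1), 1.0)
    return weights["cpu"] * cpu + weights["ram"] * ram + weights["latency"] * lat

best = max(CANDIDATES, key=lambda name: score(REQUEST, CANDIDATES[name], WEIGHTS))
print(best, round(score(REQUEST, CANDIDATES[best], WEIGHTS), 3))
```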

Video quality and QoS-driven downlink scheduling for 2D and 3D video over LTE networks

Nasralla, Moustafa January 2015 (has links)
In recent years, cellular operators throughout the world have observed a rapid increase in the number of mobile broadband subscribers. Similarly, the amount of traffic per subscriber is growing rapidly, in particular with the emergence of advanced mobile phones, smartphones, and real-time services (such as 2D and 3D video, IP telephony, etc.). On the other hand, Long-Term Evolution (LTE) is a technology capable of providing high data rates for multimedia applications through its IP-based framework. The Third Generation Partnership Project (3GPP) LTE and its subsequent modification, LTE-Advanced (LTE-A), are the latest standards in the series of mobile telecommunication systems, and they have already been deployed in developed countries. The 3GPP standard has left the scheduling approaches unstandardised, which has enabled the proposal of standard-compatible solutions to enhance the Quality of Service (QoS) and Quality of Experience (QoE) performance in multi-user wireless network scenarios. The main objective of the PhD project was the design and evaluation of LTE downlink scheduling strategies for efficient transmission of multi-user 2D and 3D video and multi-traffic classes over error-prone and bandwidth-limited wireless communication channels. The strategies developed are aimed at maximising and balancing the QoS among the users and improving the QoE at the receiver end. Following a review and a novel taxonomy of the existing content-aware and content-unaware downlink scheduling algorithms, and a network-centric and user-centric performance evaluation, this thesis proposes a novel QoS-driven downlink scheduling approach for 2D and 3D video and multi-traffic classes over LTE wireless systems. Moreover, this thesis explores the quality of 3D video over LTE wireless networks through network-centric and user-centric performance evaluation of existing and the proposed scheduling algorithms. Admission control is also proposed, considering the different LTE bandwidth sizes, in order to achieve high system resource utilisation and deliver high 2D and 3D video quality for the LTE users. This thesis introduces the transmission of 3D video over a modelled LTE wireless network. The channel is modelled via Gilbert-Elliot (GE) parameters which represent real statistics of an LTE wireless channel. The results of subjective and objective assessments of the 3D video sequences are provided for different levels of wireless impairments.
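
The Gilbert-Elliot channel referred to above is a two-state Markov model that alternates between a 'good' and a 'bad' state with different packet-loss rates. A minimal sketch follows; the transition and loss probabilities are illustrative assumptions, not the LTE statistics measured in the thesis.

```python
import random

# Two-state Gilbert-Elliot packet-loss model (illustrative parameters only).
P_G2B, P_B2G = 0.05, 0.30          # transition probabilities good->bad, bad->good
LOSS_GOOD, LOSS_BAD = 0.01, 0.40   # per-packet loss rate in each state

def simulate(num_packets, seed=0):
    rng = random.Random(seed)
    state, losses = "good", 0
    for _ in range(num_packets):
        loss_rate = LOSS_GOOD if state == "good" else LOSS_BAD
        losses += rng.random() < loss_rate
        flip = P_G2B if state == "good" else P_B2G
        if rng.random() < flip:
            state = "bad" if state == "good" else "good"
    return losses / num_packets

print(f"simulated packet loss rate: {simulate(100_000):.3f}")
```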

Surveillance video data fusion

Wang, Simi January 2016 (has links)
The overall objective under consideration is the design of a system capable of automatic inference about events occurring in the scene under surveillance. Using established video processing techniques, low-level inferences are relatively straightforward to establish, as they only determine activities of some description. The challenge is to design a system that is capable of higher-level inference, which can be used to notify stakeholders about events having semantic importance. It is argued that re-identification of the entities present in the scene (such as vehicles and pedestrians) is an important intermediate objective, supporting many of the types of higher-level inference required. The input video can be processed in a number of ways to obtain estimates of the attributes of the objects and events in the scene. These attributes can then be analysed, or 'fused', to enable the high-level inference. One particular challenge is the management of the uncertainties associated with the estimates, and hence with the overall inferences. Another challenge is obtaining accurate estimates of prior probabilities, which can have a significant impact on the final inferences. This thesis makes the following contributions. Firstly, a review of the nature of the uncertainties present in a visual surveillance system and quantification of the uncertainties associated with current techniques. Secondly, an investigation into the benefits of using a new high-resolution dataset for the problem of pedestrian re-identification under various scenarios including occlusion. This is done by combining state-of-the-art techniques with low-level fusion techniques. Thirdly, a multi-class classification approach to the classification of vehicle manufacturer logos. The approach uses the Fisher Discriminative classifier and decision fusion techniques to identify and classify logos into their correct categories. Fourthly, two probabilistic fusion frameworks were developed, using Bayesian and evidential Dempster-Shafer methodologies respectively, to allow inferences about multiple objectives and to reduce the uncertainty by combining multiple information sources. Fifthly, an evaluation framework was developed, based on the Kelly Betting Strategy, to effectively accommodate the additional information offered by the Dempster-Shafer approach, hence allowing comparisons with the single probabilistic output provided by a Bayesian analysis.
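
For the evidential fusion framework, the core operation is Dempster's rule of combination, which merges mass functions from independent sources and renormalises away conflicting mass. A small sketch over a two-class frame follows; the mass values are made up for illustration and are not taken from the thesis's surveillance data.

```python
from itertools import product

# Dempster's rule of combination over the frame {vehicle, pedestrian}.
FRAME = frozenset({"vehicle", "pedestrian"})

# Masses from two independent cues (e.g. a shape cue and a motion cue); values are illustrative.
m1 = {frozenset({"vehicle"}): 0.6, frozenset({"pedestrian"}): 0.1, FRAME: 0.3}
m2 = {frozenset({"vehicle"}): 0.5, frozenset({"pedestrian"}): 0.2, FRAME: 0.3}

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    norm = 1.0 - conflict  # Dempster normalisation discards conflicting mass
    return {k: v / norm for k, v in combined.items()}

for subset, mass in combine(m1, m2).items():
    print(sorted(subset), round(mass, 3))
```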

Scene analysis and risk estimation for domestic robots, security and smart homes

Dupre, Rob January 2017 (has links)
The evaluation of risk within a scene is a new and emerging area of research. With the advent of smart-enabled homes and the continued development and implementation of domestic robotics, the platform for automated risk assessment within the home is now a possibility. The aim of this thesis is to explore a subsection of the problems facing the detection and quantification of risk in a domestic setting. A Risk Estimation framework is introduced which provides a flexible and context-aware platform from which measurable elements of risk can be combined to create a final risk score for a scene. To populate this framework, three elements of measurable risk are proposed and evaluated. Firstly, scene stability: assessing the location and stability of objects within an environment through the use of physics simulation techniques. Secondly, hazard feature analysis: using two specifically designed novel feature descriptors (3D Voxel HOG and the Physics Behaviour Feature) to determine whether the objects within a scene have dangerous or risky properties such as blades or points. Finally, environment interaction: using human behaviour simulation to predict human reactions to detected risks and highlight the areas of a scene most likely to be visited. Additionally, methodologies are introduced to support these concepts, including a simulation prediction framework which reduces the computational cost of physics simulation, and a Robust Filter and Complex AdaBoost which aim to improve the robustness and training times of hazard feature classification models. The Human and Group Behaviour Evaluation framework is introduced to provide a platform from which simulation algorithms can be evaluated without the need for extensive ground truth data. Finally, the 3D Risk Scenes (3DRS) dataset is introduced, creating a risk-specific dataset for the evaluation of future domestic risk analysis methodologies.
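
The way measurable risk elements are combined into a final scene score can be pictured as a context-weighted aggregation. The sketch below is an assumption-laden illustration of that idea, not the thesis's actual scoring function; element names, scores and weights are invented.

```python
# Illustrative combination of measurable risk elements into one scene score.
ELEMENT_SCORES = {                   # each in [0, 1], higher = riskier
    "scene_stability": 0.7,          # e.g. an object balanced on a counter edge
    "hazard_features": 0.4,          # sharp/pointed object detections
    "interaction_likelihood": 0.8,   # simulated human traffic near the hazard
}

CONTEXT_WEIGHTS = {                  # context-aware weighting (assumed values)
    "scene_stability": 0.3,
    "hazard_features": 0.3,
    "interaction_likelihood": 0.4,
}

def scene_risk(scores, weights):
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

print(f"scene risk score: {scene_risk(ELEMENT_SCORES, CONTEXT_WEIGHTS):.2f}")
```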

Multiscale analysis for off-line handwriting recognition

Sharma, Sanjeer January 2001 (has links)
The aim of this thesis is to investigate how ‘multiscale analysis’ can help to solve some of the problems associated with achieving reliable automatic off-line handwriting recognition based on feature extraction and model matching. The thesis concentrates on recognising off-line handwriting, in which no explicit dynamic information about the act of writing is present. Image curvature has emerged as an important feature for describing and recognising shapes. However, it is highly susceptible to noise, requiring smoothing of the data. In many systems, smoothing is performed at a pre-determined fixed scale. A distinctive feature of this work is that multiscale analysis is performed by applying Gaussian smoothing over a ‘range’ of octave-separated scales. This process not only eliminates noise and unwanted detail, but also highlights and quantifies those features stable over a ‘range’ of scales. Curvature features are extracted by evaluating the 1st and 2nd order derivative values for the Gaussian kernels, and a method is proposed for automatically selecting the scales of significance at which to perform optimum matching. A set of describing elements (features) is defined and combined into a representation known as "codons" for matching. Handwritten characters are recognised in terms of their constituent codons, following the process of multiscale analysis. This is done by extracting codons from a range of octave-separated scales, and matching the codons at scales of significance against a database of model codons created for the different types of handwritten characters. Other approaches to matching are reviewed and contrasted, including the use of artificial neural networks. The main contribution of this thesis is the investigation into applying multiscale analysis to ascertain the most appropriate scale(s) at which to perform matching, by removing noise and by ascertaining and extracting features that are significant over a range of scales. Importantly, this is performed without having to pre-determine the amount of smoothing required, thereby avoiding arbitrary thresholds for the amount of smoothing performed. The proposed method shows great potential as a robust approach for recognising handwriting.
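
The core multiscale step, Gaussian smoothing at octave-separated scales followed by curvature from first and second derivatives, can be sketched for a parametric contour as below. This is a generic illustration using the standard planar curvature formula; the synthetic contour and scale choices are assumptions, and the codon construction itself is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Curvature of a closed contour (x(t), y(t)) at octave-separated Gaussian scales.
t = np.linspace(0, 2 * np.pi, 400)
x = np.cos(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)  # noisy circle
y = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)

for sigma in [1, 2, 4, 8]:  # octave-separated scales
    xs = gaussian_filter1d(x, sigma, order=1, mode="wrap")   # x'
    ys = gaussian_filter1d(y, sigma, order=1, mode="wrap")   # y'
    xss = gaussian_filter1d(x, sigma, order=2, mode="wrap")  # x''
    yss = gaussian_filter1d(y, sigma, order=2, mode="wrap")  # y''
    kappa = (xs * yss - ys * xss) / (xs**2 + ys**2) ** 1.5   # curvature
    print(f"sigma={sigma}: mean |curvature| = {np.abs(kappa).mean():.3f}")
```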

Innovation nuclei in SMEs involved in Internet B2C e-commerce

Mellor, Robert Brooke January 2006 (has links)
The research carried out aimed to illuminate how innovation arises and spreads within an SME's internal environment. SMEs are an area where innovations can be readily identified, and the company size makes tracking the spread of innovations possible. B2C e-commerce was chosen because the sector is smaller and thus more manageable than B2B. The period chosen (1997-2003) was one in which companies, especially SMEs, had to deal simultaneously with technological change, market change and organisational change, and this called for a good deal of innovation and innovation management. Since IT is used to enable both business and marketing innovations, it provides a good thematic link between the areas of innovation and Internet marketing. Thus innovations, especially in Internet marketing and advertising, were analysed further and compared to popular predictions. An empirical analysis of nineteen innovations from SME case companies in several EU nations revealed the importance of a hitherto underrated type of innovation similar to inspiration, here called 'Diversity Innovation'. It is postulated that 'Diversity Innovation' is the major driving force in SMEs, because SMEs are typically cut off from invention innovation. Furthermore, by using simple algebra, it was seen that the transaction costs associated with communication are the limiting factor for 'Diversity Innovation'. The logical consequence of this is that the major management challenge for growing SMEs occurs at around 50 employees. This is in stark contrast to conventional nomenclature, which ignores this important division and lumps all 10-99 employee companies together as 'small enterprises'. The analyses also showed innovation nuclei - the persons around whom innovations crystallise - to be individuals with multiple specialist backgrounds. This is interpreted as again pointing towards the importance of transaction costs for communication between specialists, because transaction costs are lower when the individual is multiply specialised. Trans-nationals (trans-migrants, 'foreigners', here called CEDs: people culturally and/or ethnically different from the people in the SME's home nation) were especially prominent amongst innovation nuclei, and it is speculated that this group had been exposed to especially high retraining pressures. CEDs in small companies active in immature markets experienced little difficulty in gaining acceptance for their innovations. Conversely, CEDs in companies within mature markets experienced great difficulty in spreading innovations within their environment, and the most likely explanation is the large distance (the 'Innovation Gap') between the CED involved and the leadership/consensus group, as defined by Adaption-Innovation theory. Indeed, in mature markets, initial innovations by CEDs provoked a trickle-down effect, this rebound often taking the form of disenfranchisement of the CED involved, who saw their ideas transformed into a consensus-group concept from which they were excluded, resulting in de-motivation and the consequent restriction of the generation and spread of innovation in the corporate environment. Whilst qualitative and semi-quantitative techniques were used in the research into innovation, research into Internet marketing was analysed by quantitative techniques and showed that many generally assumed popular concepts are misleading.
Results at variance with accepted wisdom included:
• Market transparency on the Internet is quite restricted and open to manipulation by suppliers.
• There was no evidence that URL submissions to web search engines improve sales.
• There was no evidence that communication between the company and those clients requesting information improved sales.
• There was no evidence that 'chat' or other peer-to-peer web facilities improved sales.
• Returning customers are few, and it is their satisfaction with the product, not with the web site, that determines whether they return.
• A very high background rate of random hits, as opposed to customers, makes analysing web statistics a fruitless task. Conversely, sales statistics can be used to prioritise which products are given good web coverage.
• Bulk e-mailing of offers may be a less successful method for achieving sales than a web site is.
• On-line payment is not a great advantage, because third-party payment gateways and even the company bank mostly fail to support the small merchant.
• Intermediation amongst SME partners lacks adequate support, but dis- and re-intermediation is not rapid.
1997-2003 was a time when Internet knowledge was scarce, and popular predictions from this period were chillingly wrong for SMEs. Those companies where such knowledge was part of their core competencies - and which thus may have relied less on popular predictions - succeeded most, but overstepping core competencies, or situations where the leadership/consensus group kept the companies rigidly partitioned from the necessary technical knowledge, resulted in potentially serious negative consequences. To avoid this it is suggested that SME management should include a two-way 'innovation pipeline' for companies with around 120 employees or more.
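
The 'simple algebra' behind the transaction-cost argument is not reproduced in the abstract; one common illustration (an assumption here, not necessarily the thesis's own derivation) is that the number of pairwise communication channels grows quadratically with headcount, which makes communication overhead bite somewhere around the 50-employee mark.

```python
# Illustrative only: pairwise communication channels n*(n-1)/2 as a rough proxy
# for the transaction costs of internal communication (an assumption, not the
# thesis's exact algebra).
for n in (10, 50, 120):
    print(n, "employees ->", n * (n - 1) // 2, "pairwise channels")
```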

An intelligent multi-component distributed architecture for knowledge management

Ong, David C. C. January 2009 (has links)
The aim of this thesis is to propose an integrated generic intelligent decision-making framework that can be employed in the design and construction of computing infrastructures where high flexibility and dynamism are essential. In fact, the main problem with many decision-making systems is that they are designed for specific purposes, which makes them unsuitable for deployment in a complex system where the level of unknowns and uncertainty is high. The proposed framework is generic enough to address this limitation, as it could be deployed across different computing architectures or systems, or redeployed to serve a particular purpose. The research study starts with the proposal of two theoretical concepts as part of an intelligent information management approach for a new integrated intelligent decision-making framework. The first concentrates on the thinking and learning processes needed to achieve the best-effort decision via logical reasoning strategies; it determines the best execution path under particular circumstances in a given computing environment. The second concept focuses on data-capturing techniques using distributed sensing devices which act as sensors for a decision-making unit (i.e. an input/output (IO) interface for the thinking and learning processes). A model to describe perceived sensory perception is proposed, as well as an observation technique to monitor the proposed model. These concepts are then translated into an intelligent decision-making framework, which is capable of interpreting and manipulating available information to offer the best-effort solution based on available resources, rather than relying heavily on additional powerful physical resources to provide a precise solution. Therefore, the accuracy and precision of decision-making depend on the applied logic and learning processes. Indirectly, this framework attempts to solve the problems of integrating 'intelligence' into practical day-to-day problem-solving applications. A working prototype based on the proposed framework was developed and presented for evaluation, to verify the framework's competence in operating with computing infrastructure, whether it is capable of making sensible decisions upon request, and whether it is able to learn from its decisions via the feedback received. To achieve this, the behaviour of the prototype is assessed against the growth in the number of experiences and the amount of knowledge collected during the execution process. Finally, it is concluded that the proposed concepts and framework operate well in terms of decision-making capabilities and reasoning strategies.

Classification of vehicles for urban traffic scenes

Buch, Norbert Erich January 2010 (has links)
An investigation into detection and classification of vehicles and pedestrians from video in urban traffic scenes is presented. The final aim is to produce systems to guide surveillance operators and reduce the human resources needed for observing hundreds of cameras in urban traffic surveillance. Cameras are a well-established means for traffic managers to observe traffic states and improve journey experiences. Firstly, per-frame vehicle detection and classification is performed using 3D models on calibrated cameras. Motion silhouettes (from background estimation) are extracted and compared to a projected model silhouette to identify the ground-plane position and class of vehicles and pedestrians. The system has been evaluated with the reference i-LIDS data sets from the UK Home Office. Performance has been compared for varying numbers of classes, for three different weather conditions and for different video input filters. The full system including detection and classification achieves a recall of 87% at a precision of 85.5%, outperforming similar systems in the literature. To improve robustness, the use of local image patches to incorporate object appearance is investigated for surveillance applications. As an example, a novel texture saliency classifier has been proposed to detect people in a video frame by identifying salient texture regions. The image is classified into foreground and background in real time. No temporal image information is used during the classification. The system is used for the task of detecting people entering a sterile zone, a common scenario for visual surveillance. Testing has been performed on the i-LIDS sterile zone benchmark data set of the UK Home Office. The basic detector is extended by fusing its output with simple motion information, which significantly outperforms standard motion tracking. Lower detection time can be achieved by combining texture classification with Kalman filtering. The fusion approach, running at 10 frames per second, gives the highest result of F1=0.92 for the 24-hour test data set. Based on the good results for local features, a novel classifier has been introduced by combining the concept of 3D models with local features to overcome the limitations of conventional silhouette-based methods and of local features in 2D. The appearance of vehicles varies substantially with the viewing angle, and local features may often be occluded. In this thesis, full 3D models are used for the object categories to be detected, and the feature patches are defined over these models. A calibrated camera allows an affine transformation of the observation into a normalised representation from which '3DHOG' features (3D extended histogram of oriented gradients) are defined. A variable set of interest points is used in the detection and classification processes, depending on which points in the 3D model are visible. The 3DHOG feature is compared with features based on FFT and simple histograms, and also to the motion silhouette baseline on the same data. The results demonstrate that the proposed method achieves comparable performance. In particular, an advantage of the proposed method is that it is robust against mis-shaped motion silhouettes, which can be caused by variable lighting, camera quality and occlusions from other objects. The proposed algorithms are evaluated further on a new data set from a different camera with higher resolution, which demonstrates the portability of the training data to novel camera views.
Kalman filter tracking is introduced to gain trajectory information, which is used for behaviour analysis. Correctly detected tracks of 94% outperform a baseline motion tracker (OpenCV) tested under the same conditions. A demonstrator for bus lane monitoring is introduced using the output of the detection and classification system. The thesis concludes with a critical analysis of the work and the outlook for future research opportunities.
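
The Kalman filter tracking mentioned above can be sketched with a standard constant-velocity model over image-plane detections. This is a generic textbook formulation, not the thesis's exact tracker; the noise settings and measurement sequence are illustrative assumptions.

```python
import numpy as np

# Constant-velocity Kalman filter for tracking an image-plane position.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe (x, y) only
Q = 0.01 * np.eye(4)                                 # process noise (assumed)
R = 4.0 * np.eye(2)                                  # measurement noise (assumed, px^2)

x = np.zeros(4)          # state: [x, y, vx, vy]
P = 100.0 * np.eye(4)    # initial uncertainty

def step(x, P, z):
    # predict
    x, P = F @ x, F @ P @ F.T + Q
    # update with measurement z = (x_px, y_px)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [(10, 5), (12, 7), (14, 9), (16, 11)]:   # detections per frame (toy data)
    x, P = step(x, P, np.array(z, float))
print("estimated position and velocity:", np.round(x, 2))
```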

An investigation into the generation, encoding and retrieval of CCTV-derived knowledge

Annesley, James Alexander Grove January 2008 (has links)
Modern video surveillance systems generate diverse forms of data, and to facilitate the effective exchange of these data a methodical approach is required. This thesis proposes the Video Surveillance Content Description Interface (VSCDI), a component of ISO/IEC 23000-10 - Information technology - Multimedia application format (MPEG-A) - Part 10: Video surveillance application format. The interface is designed to describe content associated with and generated by a surveillance system. In particular, a set of descriptors is included for content-based image retrieval, for user-defined Classification Schemes to impose any required description ontology, and to provide consistent descriptions across multiple sources. The VSCDI is evaluated using comparisons with other metadata frameworks and in terms of the performance of its colour descriptor components. Two new data sets of pedestrians in indoor environments with multiple camera views are created for re-identification experiments. The experiments use a novel application of colour constancy for cross-camera comparisons. Two evaluation measures are used: the Average Normalised Mean Retrieval Rate (ANMRR) for ranked estimates, and the Information Gain metric for probabilistic estimates. Techniques are investigated for using more than one descriptor both to provide the estimate and to represent a person whose image is split into Top and Bottom clothing components. The re-identification of pedestrians is discussed in the context of providing both a coherent description of the overall scene activity and within an embedded system.
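
The ANMRR measure used for the ranked re-identification experiments can be sketched as follows. This follows one common MPEG-7-style formulation (scores range from 0 for perfect retrieval to 1 for complete failure); the exact penalty convention for missed items varies between descriptions, and the person IDs in the toy example are invented.

```python
# ANMRR sketch: per-query normalised retrieval rank, averaged over queries.
def nmrr(ranked_ids, ground_truth, gtm):
    ng = len(ground_truth)
    k = min(4 * ng, 2 * gtm)          # examined depth for this query
    ranks = []
    for item in ground_truth:
        if item in ranked_ids[:k]:
            ranks.append(ranked_ids.index(item) + 1)
        else:
            ranks.append(1.25 * k)    # penalty for a missed ground-truth item
    avr = sum(ranks) / ng
    mrr = avr - 0.5 - ng / 2
    return mrr / (1.25 * k - 0.5 * (1 + ng))

def anmrr(queries):
    """queries: list of (ranked_ids, ground_truth_set) pairs."""
    gtm = max(len(gt) for _, gt in queries)
    return sum(nmrr(r, list(gt), gtm) for r, gt in queries) / len(queries)

# Toy example over hypothetical person IDs.
queries = [
    (["p3", "p1", "p7", "p2"], {"p1", "p2"}),
    (["p5", "p9", "p4"], {"p4"}),
]
print(f"ANMRR = {anmrr(queries):.3f}")
```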

Medical quality of service for optimized ultrasound streaming in wireless robotic tele-ultrasonography system

Philip, Nada Y. January 2008 (has links)
Mobile healthcare (m-health) is a new paradigm that brings together the evolution of emerging wireless communications and network technologies with the concept of 'connected healthcare' anytime and anywhere. There are two critical issues facing the successful deployment of m-health applications from the wireless communications perspective. First, wireless connectivity issues and the mobility requirements of real-time, bandwidth-demanding m-health applications. Second, Quality of Service (QoS) issues from the healthcare perspective and the levels required to guarantee robust and clinically acceptable healthcare services. This thesis considers the concept of medical QoS (m-QoS) for a typical bandwidth-demanding m-health application (tele-ultrasound streaming) in 3G and 3.5G wireless environments. Specifically, this thesis introduces a new concept of m-QoS that provides a sub-category of quality of service from the m-health perspective. A wireless robotic tele-ultrasonography system is adopted in this research as the m-health application. Accordingly, the m-QoS metrics and their functional bounds for this m-health application are defined. To validate this concept, a new optimal video streaming rate control policy is proposed, based on the Q-learning approach, to deliver the ultrasound images over 3G and 3.5G environments. To achieve these objectives, an end-to-end architecture for streaming M-JPEG compressed ultrasound video over 3G and 3.5G communication networks is developed. Through this, a client-server test bed is developed that captures the ultrasound images, adaptively varies their frame rate and quality, and sends them to the other end. The new rate control algorithm was implemented via this test bed. This thesis presents the performance analysis of the proposed rate control algorithm in terms of achieving the defined m-QoS over 3G and 3.5G wireless connections. In collaboration with medical experts in the ultrasonography field, subjective image analyses are also carried out to validate the quality of the processed images.
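
The Q-learning rate control idea can be sketched with a small tabular learner whose states are coarse channel conditions and whose actions adjust the streamed frame rate and quality. Everything below - states, actions, reward shaping and parameters - is an illustrative assumption rather than the policy developed in the thesis.

```python
import random

# Tabular Q-learning sketch for adaptive streaming rate control.
STATES = ["congested", "moderate", "good"]
ACTIONS = ["decrease_rate", "hold", "increase_rate"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
rng = random.Random(42)

def reward(state, action):
    # Reward keeping quality high when the channel allows it, backing off when
    # congested (purely illustrative reward shaping).
    if state == "congested":
        return 1.0 if action == "decrease_rate" else -1.0
    if state == "good":
        return 1.0 if action == "increase_rate" else 0.0
    return 0.5 if action == "hold" else 0.0

def choose(state):
    if rng.random() < EPSILON:                       # epsilon-greedy exploration
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

state = "moderate"
for _ in range(5000):
    action = choose(state)
    r = reward(state, action)
    next_state = rng.choice(STATES)                  # stand-in for channel feedback
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = next_state

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```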
