31

GLOBAL CHANGE REACTIVE BACKGROUND SUBTRACTION

Sathiyamoorthy, Edwin Premkumar 01 January 2011 (has links)
Background subtraction is the technique of segmenting moving foreground objects from stationary or dynamic background scenes. It is a critical step in many computer vision applications, including video surveillance, tracking, and gesture recognition. This thesis addresses the challenges that sudden illumination changes in indoor environments pose to background subtraction systems. Most existing techniques adapt to gradual illumination changes but fail to cope with sudden ones. Here, we introduce Global Change Reactive Background Subtraction, which models these changes as a regression function of the spatial image coordinates. The regression model is learned from highly probable background regions, and the background model is compensated for the illumination change using the estimated parameters. Experiments performed in indoor environments show the effectiveness of our approach in modeling sudden illumination changes with a higher-order regression polynomial. Results of non-linear SVM regression are also presented to show the robustness of our regression model.
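The key step, fitting a low-order polynomial in the pixel coordinates to the intensity change observed at high-confidence background pixels and then applying it across the whole background model, can be sketched as follows. This is a minimal illustration assuming a grayscale background model and a boolean background mask; the function names and the plain least-squares fit are ours, not the thesis's implementation.

```python
import numpy as np

def fit_illumination_change(bg_model, frame, bg_mask, degree=2):
    """Fit a polynomial in (x, y) to the intensity change observed
    at high-confidence background pixels (illustrative sketch)."""
    ys, xs = np.nonzero(bg_mask)
    diff = frame[ys, xs].astype(float) - bg_model[ys, xs].astype(float)
    # Design matrix of monomials x^i * y^j with i + j <= degree.
    cols = [(xs ** i) * (ys ** j)
            for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, diff, rcond=None)
    return coeffs

def compensate_background(bg_model, coeffs, degree=2):
    """Add the fitted global change to every pixel of the model."""
    h, w = bg_model.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cols = [(xs.ravel() ** i) * (ys.ravel() ** j)
            for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1).astype(float)
    return bg_model.astype(float) + (A @ coeffs).reshape(h, w)
```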
32

Towards Scalable Analysis of Images and Videos

Zhao, Bin 01 September 2014 (has links)
With the widespread availability of low-cost devices capable of photo shooting and high-volume video recording, we are facing an explosion of both image and video data. The sheer volume of such visual data poses both challenges and opportunities in machine learning and computer vision research. In image classification, most previous research has focused on small to medium-scale data sets containing objects from dozens of categories. However, we can easily access images spanning thousands of categories. Unfortunately, despite the well-known advantages and recent advancements of multi-class classification techniques in machine learning, complexity concerns have driven most research on such super large-scale data sets back to simple methods such as nearest neighbor search and one-vs-one or one-vs-rest approaches. Facing an image classification problem with such a huge task space, it is no surprise that these classical algorithms, often favored for their simplicity, are brought to their knees, not only because of the training time and storage cost they incur, but also because of the conceptual awkwardness of such algorithms in massive multi-class paradigms. Therefore, our goal is to directly address the bigness of image data: not only the large number of training images and high-dimensional image features, but also the large task space. Specifically, we present algorithms capable of efficiently and effectively training classifiers that can differentiate tens of thousands of image classes. Similar to images, one of the major difficulties in video analysis is also the huge amount of data, in the sense that videos can be hours long or even endless. However, it is often true that only a small portion of a video contains important information. Consequently, algorithms that automatically detect unusual events within streaming or archival video would significantly improve the efficiency of video analysis and save valuable human attention for only the most salient content. Moreover, given lengthy recorded videos, such as those captured by digital cameras on mobile phones or by surveillance cameras, most users do not have the time or energy to edit the video so that only the most salient and interesting part of the original is kept. To this end, we also develop an algorithm for automatic video summarization without human intervention. Finally, we further extend our research on video summarization into a supervised formulation, where users are asked to generate summaries for a subset of a class of videos of similar nature. Given such manually generated summaries, our algorithm learns the preferred storyline within the given class of videos and automatically generates summaries for the remaining videos in the class, capturing the same storyline as the manually summarized ones.
33

Event Analytics on Social Media: Challenges and Solutions

January 2014 (has links)
abstract: Social media platforms such as Twitter, Facebook, and blogs have emerged as valuable - in fact, the de facto - virtual town halls for people to discover, report, share and communicate with others about various types of events. These events range from widely known events such as a U.S. presidential debate to smaller-scale, local events such as a neighborhood Halloween block party. During these events, we often witness a large amount of commentary contributed by crowds on social media. This burst of social media responses surges with "second-screen" behavior, and it greatly enriches both the user experience of interacting with the event and people's awareness of it. Monitoring and analyzing this rich and continuous flow of user-generated content can yield unprecedentedly valuable information about the event, since these responses usually offer far richer and more immediate views of the event than mainstream news can achieve. Despite these benefits, social media also tends to be noisy, chaotic, and overwhelming, posing challenges to users seeking to distill high-quality content from that noise. In this dissertation, I explore ways to leverage social media as a source of information and to analyze events through their collective social media responses. I develop, implement and evaluate EventRadar, an event analysis toolbox able to identify, enrich, and characterize events using massive amounts of social media responses. EventRadar contains three automated, scalable tools that handle three core event analysis tasks: Event Characterization, Event Recognition, and Event Enrichment. More specifically, I develop ET-LDA, a Bayesian model, and SocSent, a matrix factorization framework, for the Event Characterization task, i.e., modeling and characterizing an event in terms of its topics and its audience's response behavior (via ET-LDA) and the sentiments regarding its topics (via SocSent). I also develop DeMa, an unsupervised event detection algorithm, for the Event Recognition task, i.e., detecting trending events from a stream of noisy social media posts. Last, I develop CrowdX, a spatial crowdsourcing system, for the Event Enrichment task, i.e., gathering additional first-hand information (e.g., photos) from the field to enrich the given event's context. Enabled by EventRadar, it becomes feasible to uncover patterns that have not been explored previously and to re-validate existing social theories with new evidence. As a result, I am able to gain deep insights into how people respond to the events they are engaged in. The results reveal several key insights into people's responding behavior over an event's timeline; for example, the topical context of people's tweets does not always correlate with the timeline of the event. In addition, I explore the factors that affect a person's engagement with real-world events on Twitter and find that people engage in an event because they are interested in its topics, and that while engaging, their engagement is largely affected by their friends' behavior. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2014
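The abstract names SocSent as a matrix factorization framework without giving its objective. As a generic illustration of factorizing crowd responses into topics, and explicitly not SocSent itself, here is a sketch using off-the-shelf non-negative matrix factorization on a toy tweet-term matrix (the tweets and component count are invented):

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy tweets about a debate; real input would be thousands of responses.
tweets = [
    "great debate performance on the economy",
    "the economy answers were weak tonight",
    "strong points on foreign policy in the debate",
]
X = TfidfVectorizer().fit_transform(tweets)   # tweet x term matrix
nmf = NMF(n_components=2, init="nndsvda", random_state=0)
W = nmf.fit_transform(X)   # tweet x topic loadings
H = nmf.components_        # topic x term loadings
print(W.round(2))
```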
34

Social Media Analytics for Crisis Response

January 2015 (has links)
abstract: Crises, or large-scale emergencies such as earthquakes and hurricanes, cause massive damage to lives and property. Crisis response is an essential task for mitigating the impact of a crisis, and an effective response necessitates information gathering and analysis. Traditionally, this process has been restricted to information collected by first responders on the ground in the affected region or by official agencies, such as local governments, involved in the response. However, the ubiquity of mobile devices has empowered people to publish information during a crisis through social media, such as damage reports from a hurricane. Social media has thus emerged as an important channel of information which can be leveraged to improve crisis response. Twitter is a popular medium which has been employed in recent crises, but it presents new challenges: the data is noisy and uncurated, and it has high volume and high velocity. In this work, I study four key problems in the use of social media for crisis response: effective monitoring and analysis of high-volume crisis tweets, detecting crisis events automatically in streaming data, identifying users who can be followed to effectively monitor a crisis, and understanding user behavior during a crisis in order to detect tweets posted inside crisis regions. To address these problems, I propose two systems which assist disaster responders and analysts in collaboratively collecting crisis-related tweets and analyzing them using visual analytics to identify interesting regions, topics, and users involved in disaster response. I present a novel approach to detecting crisis events automatically in noisy, high-volume Twitter streams. I also investigate and introduce novel methods to tackle information overload: identifying information leaders in information diffusion, who can be followed for efficient crisis monitoring, and identifying messages originating from crisis regions using user behavior analysis. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2015
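The abstract does not say how information leaders are identified, so the following is only a common baseline for that task, PageRank over a toy retweet graph, rather than the dissertation's method (user names and edges are invented):

```python
import networkx as nx

# Toy retweet graph: an edge u -> v means u retweeted v,
# so influence flows toward v.
retweets = [("a", "leader"), ("b", "leader"), ("c", "leader"),
            ("b", "a"), ("d", "c")]
G = nx.DiGraph(retweets)
scores = nx.pagerank(G, alpha=0.85)
leaders = sorted(scores, key=scores.get, reverse=True)
print(leaders[:3])   # users worth following for monitoring
```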
35

Signal processing techniques for data reduction and event recognition in cough counting

Barton, Antony James January 2013 (has links)
This thesis presents novel techniques for the reduction of audio recordings and signal processing techniques for cough recognition. The evidence collected shows the reduction technique to be effective and the recognition techniques to give consistent performance across different patients. Cough is one of the commonest symptoms reported by patients to GPs. Despite this, it remains a significantly unmet medical need: at present, there exists no practical and validated technique for assessing, on a large enough scale, the efficacy of therapies to treat cough. Research currently undertaken requires fitting a patient with a recording system which records their coughing and all other sound for a predefined period, usually 24 hours or less. This audio is then counted manually by trained cough counters to produce counts for each record, which serve as data for cough studies. Research in this field is relatively new, and although a number of attempts have been made to automate this process, none so far has shown sufficient reliability or precision to be of practical use. The aim of this research is to analyse, from the ground up, signal processing techniques which can aid cough research; specifically, it looks into data minimisation techniques to improve the efficiency of manual counting, and into recognition algorithms. The research has produced a published record reduction system which can reduce 24-hour cough records to around 10% of their original size without compromising the statistics of subsequent manual counts. Additionally, a review of signal processing techniques for cough recognition has produced a robust event detection technique and measurement techniques which have shown remarkable consistency between patients and conditions. Throughout the research, a clear understanding of the limitations and possible solutions is pursued and reported on, to aid further progress in what is a young and developing research field.
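The internals of the published reduction system are not given in the abstract. A minimal stand-in for the idea, discarding stretches of a long record whose short-time energy never rises above a floor and padding whatever is kept so events are not clipped, might look like this (frame size, threshold, and padding are invented):

```python
import numpy as np

def reduce_record(audio, sr, frame_ms=50, keep_db=-35, pad_s=0.5):
    """Keep only frames whose short-time energy rises above a floor
    relative to the record's peak, padding each kept region."""
    frame = int(sr * frame_ms / 1000)
    n = len(audio) // frame
    frames = audio[:n * frame].reshape(n, frame)
    energy = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    keep = energy > (energy.max() + keep_db)  # keep_db is negative
    # Dilate the keep mask by the padding, measured in frames.
    pad = int(pad_s * 1000 / frame_ms)
    mask = np.zeros(n, dtype=bool)
    for i in np.flatnonzero(keep):
        mask[max(0, i - pad):i + pad + 1] = True
    return frames[mask].ravel()
```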
36

Location Estimation and Geo-Correlated Information Trends

Liu, Zhi 12 1900 (has links)
A tremendous amount of information is shared every day on social media sites such as Facebook, Twitter and Google+. However, only a small portion of users provide their location information, which can be helpful in targeted advertising and many other services. Current methods for location estimation using social relationships treat social friendship as a simple binary relationship. However, social closeness between users and the structure of friends have strong implications for geographic distance. In the first task, we introduce new measures to evaluate the social closeness between users and the structure of friends, and we propose models that use them for location estimation. Compared with models that take the friend relation as a binary feature, social closeness can help identify which friends of a user are more important, and friend structure can help determine the significance level of locations, thus improving the accuracy of location estimation. A confidence iteration method is further introduced to improve estimation accuracy and overcome the problem of scarce location information. We evaluate our methods on two datasets, Twitter and Gowalla; the results show that our model improves estimation accuracy by 5-20% compared with state-of-the-art friend-based models. In the second task, we propose a Local Event Discovery and Summarization (LEDS) framework to detect local events on Twitter. Many existing algorithms for event detection focus on larger-scale events and are not sensitive to smaller-scale local events; most of the local events detected by these methods are major events like important sports matches, shows, or big natural disasters. We propose LEDS to detect both bigger and smaller events. LEDS contains three key steps: 1) detecting possible event-related terms by monitoring abnormal distributions across locations and times; 2) clustering tweets based on their key terms and their time and location distributions; and 3) extracting descriptions of local events, including time, location, and key sentences, from the clusters. The model is evaluated on a real-world Twitter dataset with more than 60 million tweets. The analysis of Twitter data can help predict or explain many real-world phenomena, and relationships among events in the real world can be reflected among topics on social media. In the third task, we propose the concept of topic association and the associated mining algorithms. Topics with close temporal and spatial relationships may have a direct or potential association in the real world. Our goal is to mine such topic associations and show their relationships in different time-region frames. We propose the concepts of participation ratio and participation index to measure the closeness among topics, and a spatiotemporal index to calculate them efficiently. With topic filtering and topic combination, we further optimize the mining process and the mining results.
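As a toy illustration of the first task's closeness idea, and not the thesis's actual model, one can score each friend's location by the closeness-weighted support of nearby friends and return the best-supported one (coordinates, weights, and the distance cutoff below are invented):

```python
import numpy as np

def estimate_location(friend_locs, closeness):
    """Pick the friend location with the highest closeness-weighted
    support from nearby friends (toy lat/lon distance in degrees)."""
    locs = np.asarray(friend_locs, dtype=float)   # shape (n, 2)
    w = np.asarray(closeness, dtype=float)
    best, best_score = None, -np.inf
    for candidate in locs:
        d = np.linalg.norm(locs - candidate, axis=1)
        score = np.sum(w * (d < 0.5))   # friends within ~0.5 degrees
        if score > best_score:
            best, best_score = candidate, score
    return best

# Two close friends near New York outweigh one distant friend in LA.
print(estimate_location([(40.7, -74.0), (40.8, -74.1), (34.0, -118.2)],
                        [0.9, 0.7, 0.2]))
```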
37

Event detection and analysis on short text messages

Edouard, Amosse 02 October 2017 (has links)
In recent years, the Web has shifted from a read-only medium, where most users could only consume information, to an interactive medium allowing every user to create, share and comment on information. The downside of social media as an information source is that the texts are often short and informal and lack contextual information. On the other hand, the Web also contains structured Knowledge Bases (KBs) that can be used to enrich user-generated content. This dissertation investigates the potential of exploiting information from Linked Open Data KBs to detect, classify and track events on social media, in particular Twitter. More specifically, we address three research questions: i) How can messages related to events be extracted and classified? ii) How can events be clustered into fine-grained categories? iii) Given an event, to what extent can user-generated content on social media contribute to the creation of a timeline of sub-events? We provide methods that rely on Linked Open Data KBs to enrich the context of social media content; we show that supervised models can achieve good generalisation capabilities through semantic linking, thus mitigating overfitting; and we rely on graph theory to model the relationships between named entities (NEs) and the other terms in tweets in order to cluster fine-grained events. Finally, we use in-domain ontologies and local gazetteers to identify relationships between actors involved in the same event and to create a timeline of sub-events. We show that enriching the NEs in the text with information provided by LOD KBs improves the performance of both supervised and unsupervised machine learning models.
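A minimal sketch of the entity-generalisation idea: replace named entities with their ontology class so that a supervised model sees, say, "City" rather than thousands of distinct city names. The lookup table here is invented for illustration; a real system would query a Linked Open Data KB such as DBpedia.

```python
# Toy ontology lookup; the mapping below is invented for illustration.
ENTITY_TYPES = {
    "paris": "City",
    "france": "Country",
    "olympics": "SportsEvent",
    "seine": "River",
}

def generalise(tokens):
    """Replace known named entities with their ontology class so a
    classifier generalises across mentions instead of overfitting."""
    return [ENTITY_TYPES.get(t.lower(), t) for t in tokens]

print(generalise("Flooding near the Seine in Paris".split()))
# ['Flooding', 'near', 'the', 'River', 'in', 'City']
```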
38

Development of Dropwise Additive Manufacturing with non-Brownian Suspensions: Applications of Computer Vision and Bayesian Modeling to Process Design, Monitoring and Control: Video Files in Chapter 5 and Appendix E

Andrew J. Radcliffe (9080312) 24 July 2020 (has links)
Video files found in Chapter 5: Automated Object Tracking, Event Detection and Recognition for High-Speed Video of Drop Formation Phenomena.

Video files found in Appendix E: Chapter 5, Resource 2.
39

TOUCH EVENT DETECTION AND TEXTURE ANALYSIS FOR VIDEO COMPRESSION

Qingshuang Chen (11198871) 29 July 2021 (has links)
Touch event detection investigates the interaction between two people from video recordings. We are interested in a particular type of interaction, that between a caregiver and an infant, as touch is a key social and emotional signal used by caregivers when interacting with their children. We propose an automatic touch event detection and recognition method to determine the potential moments when the caregiver touches the infant, and to classify each event into one of six touch types based on which body part of the infant has been touched. We leverage deep-learning-based human pose estimation and person segmentation to analyze the spatial relationship between the caregiver's hands and the infant. We demonstrate promising performance on touch event detection and classification, showing great potential for reducing human effort in generating ground-truth annotation.

Recently, artificial-intelligence-powered techniques have shown great potential to increase the efficiency of video compression. In this thesis, we describe a texture analysis pre-processing method that leverages deep-learning-based scene understanding to extract semantic areas for the improvement of a subsequent video coder. Our proposed method generates a pixel-level texture mask by combining semantic segmentation with a simple post-processing strategy, and is integrated into a switchable texture-based video coding method. We demonstrate that for many standard and user-generated test sequences, the proposed method achieves significant data-rate reduction without noticeable visual artifacts.
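The exact post-processing is not specified in the abstract; one plausible sketch of turning a semantic segmentation map into a pixel-level texture mask (keep the texture classes, close small holes, drop small regions) is shown below, with the class IDs and thresholds invented:

```python
import numpy as np
from scipy import ndimage

def texture_mask(seg_labels, texture_classes, min_area=256):
    """Build a pixel-level texture mask from a semantic segmentation
    map via simple morphological post-processing (illustrative)."""
    mask = np.isin(seg_labels, list(texture_classes))
    # Close small holes so texture regions are contiguous.
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
    # Drop connected regions smaller than min_area pixels.
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
    return np.isin(labeled, keep)
```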
40

Classification, detection and prediction of adverse and anomalous events in medical robots

Cao, Feng 24 August 2012 (has links)
No description available.
