1

Optical Flow Features for Event Detection

Afrooz mehr, Mohammad, Haghpanah, Maziar January 2014 (has links)
In this thesis, we employ optical flow features to detect a single rigid or non-rigid object in an input video. For optical flow estimation, we use the Point Line (PL) method [2] (a local method) to estimate the motion across the image sequence generated from the input video stream. Although the Lucas-Kanade (LK) method is a popular local method for optical flow estimation, it handles linearly symmetric images poorly, even with regularization (e.g. Tikhonov). The PL method is more powerful than the LK method and can properly separate line flow from point flow. To deal with rapidly changing data in parts of an image (the high-motion problem), a Gaussian pyramid with five levels (different image resolutions) is employed. The pyramid level must be chosen according to the maximum optical flow expected in each section of the image, without iteration. After estimating the best optical flow vector for every pixel, the algorithm detects an object in the video together with its direction of motion (to the right or left). Using techniques such as segmentation and averaging the magnitudes of the flow vectors, the program can detect and distinguish rigid objects (e.g. a car) from non-rigid objects (e.g. a human). Finally, the algorithm produces a new output video that includes the detected object with its flow vectors, the map of pyramid levels used for optical flow estimation, and the corresponding binary image.
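A minimal sketch of the pyramid-plus-flow-magnitude pipeline described in this abstract, using OpenCV's dense Farneback flow with a five-level pyramid rather than the PL method from the thesis (which has no off-the-shelf implementation); the input file name and the magnitude threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def flow_and_direction(prev_gray, curr_gray, mag_thresh=1.0):
    # Dense flow with a 5-level pyramid (Farneback here, not the thesis's PL method).
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=5, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    moving = mag > mag_thresh                      # binary image of moving pixels
    if moving.any():
        direction = "right" if flow[..., 0][moving].mean() > 0 else "left"
    else:
        direction = "none"
    return flow, moving.astype(np.uint8), direction

cap = cv2.VideoCapture("input.mp4")                # hypothetical input file
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow, mask, direction = flow_and_direction(prev, curr)
    prev = curr
cap.release()
```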
2

Advanced Driving Environment and Intelligent Vehicle Control by Visual Rhythm Analysis

Hsu, Cheng-Jie 05 September 2010 (has links)
The motivation of this paper is to propose a simple and reliable method for identifying on-road vehicle events, particularly in driving situations. A content rhythm is extracted by sampling a virtual line lying at the same position in each frame, giving a simplified representation of a continuous video that records the temporal information of the vehicle's status. Vehicle situations such as lane changes, safe following distance, and speed can then be detected instantly by analyzing the statistical characteristics of the content rhythm. The proposed method can not only help prevent accidents but also improve traffic safety by monitoring on-road vehicle status. Experimental results show that the proposed method is reliable for vehicle event detection.
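A minimal sketch of the visual-rhythm idea: the pixels under a fixed virtual line are sampled in every frame and stacked over time into a single 2-D image whose statistics can then be analysed for vehicle events. The column position and file name are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def visual_rhythm(video_path, column=320):
    cap = cv2.VideoCapture(video_path)
    slices = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        slices.append(gray[:, column])      # one vertical slice per frame
    cap.release()
    return np.stack(slices, axis=1)         # rows: image y, columns: time

rhythm = visual_rhythm("dashcam.mp4")       # hypothetical dash-cam clip
print(rhythm.shape, rhythm.std(axis=0)[:10])  # simple temporal statistics of the rhythm
```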
3

Use of Text Summarization for Supporting Event Detection

Wu, Pao-Feng 12 August 2003 (has links)
Environmental scanning, which acquires and uses information about events, trends, and changes in an organization's external environment, is an important process in the strategic management of an organization and permits the organization to adapt quickly to changes in its external environment. Event detection, which detects the onset of new events from news documents, is essential to facilitating an organization's environmental scanning activity. However, traditional feature-based event detection techniques detect events by comparing the similarity between features of news stories and incur several problems. For example, for illustration and comparison purposes, a news story may contain sentences or paragraphs that are not highly relevant to defining its event. Without removing such less relevant sentences or paragraphs before detection, the effectiveness of traditional event detection techniques may suffer. In this study, we developed a summary-based event detection (SED) technique that filters less relevant sentences or paragraphs from a news story before performing feature-based event detection. Using a traditional feature-based event detection technique (i.e., INCR) as a benchmark, the empirical evaluation results showed that the proposed SED technique could achieve comparable or even better detection effectiveness (measured by miss and false alarm rates) than the INCR technique for data corpora where the percentage of news stories discussing old events is high.
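A minimal sketch of the summarize-then-detect idea: keep only the sentences most similar to the story's own TF-IDF centroid (a crude extractive summary), then flag a story as a new event if its summary is not sufficiently similar to any previously seen story. The 50% summary ratio and the novelty threshold are illustrative assumptions, not the parameters or the INCR baseline used in the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def summarize(sentences, keep_ratio=0.5):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    centroid = np.asarray(tfidf.mean(axis=0))
    scores = cosine_similarity(tfidf, centroid).ravel()
    k = max(1, int(len(sentences) * keep_ratio))
    top = sorted(np.argsort(scores)[-k:])          # keep original sentence order
    return " ".join(sentences[i] for i in top)

def detect_new_events(stories, threshold=0.2):
    summaries, is_new = [], []
    for sents in stories:                          # each story is a list of sentences
        summaries.append(summarize(sents))
        if len(summaries) == 1:
            is_new.append(True)                    # the very first story starts an event
            continue
        tfidf = TfidfVectorizer().fit_transform(summaries)
        sims = cosine_similarity(tfidf[len(summaries) - 1], tfidf[: len(summaries) - 1])
        is_new.append(sims.max() < threshold)      # True -> onset of a new event
    return is_new

stories = [
    ["An earthquake struck the coast.", "Rescue teams are on the way."],
    ["Officials confirm the earthquake damaged the coast.", "In other news, stocks rose."],
    ["A new smartphone was announced today.", "Analysts expect strong sales."],
]
print(detect_new_events(stories))                  # one flag per story
```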
4

Event Modeling in Social Media with Application to Disaster Damage Assessment

Liang, Yuan 16 December 2013 (has links)
This thesis addresses the modeling of events in social media, with an emphasis on the detection, tracking, and analysis of disaster-related events like the 2011 Tohoku Earthquake in Japan. Successful event modeling is critical for many applications including information search, entity extraction, disaster assessment, and emergency monitoring. However, modeling events in social media is challenging since: (i) social media is noisy and oftentimes incomplete, in the sense that users provide only partial evidence of their participation in an event; (ii) messages in social media are usually short, providing little textual narrative (thereby making event detection difficult); and (iii) the size of short-lived events typically changes rapidly, growing and shrinking in sharp bursts. With these challenges in mind, this thesis proposes a framework for event modeling in social media and makes three major contributions: The first contribution is a signal processing-inspired approach for event detection from social media. Concretely, this research proposes an iterative spatial-temporal event mining algorithm for identifying and extracting topics from social media. One of the key aspects of the proposed algorithm is a signal processing-inspired approach for viewing spatial-temporal term occurrences as signals, analyzing the noise contained in the signals, and applying noise filters to improve the quality of event extraction from these signals. The second contribution is a new model of the population dynamics of event-related crowds in social media as they first form, evolve, and eventually dissolve. Toward robust population modeling, a duration model is proposed to predict the time users spend in a particular crowd. Then a time-evolving population model is designed for estimating the number of people departing a crowd, which enables the prediction of the total population remaining in a crowd. The third contribution of this thesis is a set of methods for event analytics that leverage social media in an earthquake damage assessment scenario. First, the difference between text tweets and image tweets is investigated, and then three features – tweet density, re-tweet density, and user tweeting count – are extracted to model the intensity attenuation of earthquakes. The observed relationship between social media activity and loss/damage attenuation suggests that social media following a catastrophic event can provide rapid insight into the extent of damage.
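A minimal sketch of the signal-processing view described above: per-hour counts of a term are treated as a 1-D signal, smoothed to suppress noise, and intervals where the smoothed signal exceeds a burst threshold are reported as candidate event windows. The Savitzky-Golay filter, the threshold rule, and the synthetic data are illustrative assumptions, not the thesis's actual noise model.

```python
import numpy as np
from scipy.signal import savgol_filter

def burst_windows(term_counts, window=11, poly=2, z=2.0):
    signal = np.asarray(term_counts, dtype=float)
    smooth = savgol_filter(signal, window_length=window, polyorder=poly)
    threshold = smooth.mean() + z * smooth.std()
    return np.flatnonzero(smooth > threshold)       # hour indices inside a burst

counts = np.random.poisson(5, 168).astype(float)    # a week of hourly term counts
counts[100:106] += 40                               # injected synthetic burst
print(burst_windows(counts))
```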
5

Spatio-temporal Event Detection and Forecasting in Social Media

Zhao, Liang 01 August 2016 (has links)
Nowadays, knowledge discovery on social media is attracting growing interest. Social media has become more than a communication tool, effectively functioning as a social sensor for our society. This dissertation focuses on the development of methods for social media-based spatiotemporal event detection and forecasting for a variety of event topics and assumptions. Five methods are proposed, namely dynamic query expansion for event detection, a generative framework for event forecasting, multi-task learning for spatiotemporal event forecasting, multi-source spatiotemporal event forecasting, and deep learning-based epidemic modeling for forecasting influenza outbreaks. For the first of these methods, existing solutions for spatiotemporal event detection are mostly supervised and lack the flexibility to handle the dynamic keywords used in social media. The contributions of this work are: (1) Develop an unsupervised framework; (2) Design a novel dynamic query expansion method; and (3) Propose an innovative local modularity spatial scan algorithm. For the second of these methods, traditional solutions are unable to capture the spatiotemporal context, model mixed-type observations, or utilize prior geographical knowledge. The contributions of this work include: (1) Propose a novel generative model for spatial event forecasting; (2) Design an effective algorithm for model parameter inference; and (3) Develop a new sequence likelihood calculation method. For the third method, traditional solutions cannot deal with spatial heterogeneity or handle the dynamics of social media data effectively. This work's contributions include: (1) Formulate a multi-task learning framework for event forecasting; (2) Simultaneously model static and dynamic terms; and (3) Develop efficient parameter optimization algorithms. For the fourth method, traditional multi-source solutions typically fail to consider the geographical hierarchy or cope with incomplete data blocks among different sources. The contributions here are: (1) Design a framework for event forecasting based on hierarchical multi-source indicators; (2) Propose a robust model for geo-hierarchical feature selection; and (3) Develop an efficient algorithm for model parameter optimization. For the last method, existing work on epidemic modeling either cannot ensure timeliness or cannot characterize the underlying epidemic propagation mechanisms. The contributions of this work include: (1) Propose a novel integrated framework for computational epidemiology and social media mining; (2) Develop a semi-supervised multilayer perceptron for mining epidemic features; and (3) Design an online training algorithm. / Ph. D.
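A minimal, generic sketch of dynamic query expansion, one of the techniques named above: starting from a few seed keywords, documents matching the current query are retrieved and the terms that co-occur most often with the query are added for the next iteration. The scoring rule, number of rounds, and toy data are illustrative assumptions, not the dissertation's algorithm.

```python
from collections import Counter

def expand_query(seeds, tweets, rounds=3, top_k=5):
    query = set(seeds)
    for _ in range(rounds):
        matched = [t for t in tweets if query & set(t.split())]
        counts = Counter(w for t in matched for w in t.split() if w not in query)
        query |= {w for w, _ in counts.most_common(top_k)}   # grow the query dynamically
    return query

tweets = ["protest downtown tonight", "big protest march near city hall",
          "traffic jam on highway", "march and rally downtown"]
print(expand_query({"protest"}, tweets))
```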
6

Topics, Events, Stories in Social Media

Hua, Ting 05 February 2018 (has links)
The rise of big data, especially social media data (e.g., Twitter, Facebook, Youtube), gives new opportunities for understanding human behavior. Consequently, novel computing methods for mining patterns in social media data are desired. Through applying these approaches, it has become possible to aggregate publicly available data to capture the triggers underlying events, detect ongoing trends, and forecast future happenings. This thesis focuses on developing methods for social media analysis. Specifically, five directions are proposed here: 1) semi-supervised detection of targeted-domain events, 2) topical interaction study among multiple datasets, 3) discriminative learning for identifying common and distinctive topics, 4) epidemic modeling for flu forecasting with simulation via signals from social media data, 5) storyline generation for massive unorganized documents. / Ph. D.
7

Anomalous Information Detection in Social Media

Tao, Rongrong 10 March 2021 (has links)
This dissertation focuses on identifying various types of anomalous information patterns in social media and news outlets. We focus on three types of anomalous information: (1) media censorship in news outlets, which is information that should be published but is actually missing, (2) fake news in social media, which is unreliable information shown to the public, and (3) media propaganda in news outlets, which is information that is trustworthy but over-populated. For the first problem, existing approaches to censorship detection mostly rely on monitoring posts in social media. However, media censorship in news outlets has not received nearly as much attention, mostly because it is difficult to detect systematically. The contributions of our work include: (1) a hypothesis testing framework to identify and evaluate censored clusters of keywords, (2) a near-linear-time algorithm to identify the highest scoring clusters as indicators of censorship, and (3) extensive experiments on six Latin American countries for performance evaluation. For the second problem, existing approaches studying fake news in social media primarily focus on topic-level modeling or prediction based on a set of aggregated features from a collection of posts. However, the credibility of various information components within the same topic can be quite different. The contributions of our work in this space include: (1) a new benchmark dataset for fake news research, (2) a cluster-based approach to improve instance-level prediction of information credibility, and (3) extensive experiments for performance evaluation. For the last problem, existing approaches to media propaganda detection primarily focus on investigating the pattern of information shared over social media or on evaluation by domain experts. However, these approaches cannot be generalized to a large-scale analysis of media propaganda in news outlets. The contributions of our work include: (1) non-parametric scan statistics to identify clusters of over-populated keywords, (2) a near-linear-time algorithm to identify the highest scoring clusters as indicators of propaganda, and (3) extensive experiments on two Latin American countries for performance evaluation. / Doctor of Philosophy / Nowadays, massive amounts of information are available through a variety of social media platforms. However, the information accessed by the audience may be incorrect in different ways. To help the audience access correct information, we develop various machine learning algorithms to uncover anomalous information patterns in social media and explain the reasons behind this behavior. Our algorithms can be used to learn what different information patterns exist in open data sources.
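A minimal sketch of the non-parametric scan idea mentioned for the propaganda (over-populated keywords) case: each keyword's current count receives an empirical p-value against its own history, and a set of keywords is scored with a Berk-Jones-style statistic that measures how many of its p-values are surprisingly small. The significance level, toy data, and the absence of any subset search are illustrative assumptions, not the dissertation's algorithm; the censorship case would instead look for surprisingly large p-values.

```python
import numpy as np

def empirical_pvalue(current, history):
    history = np.asarray(history, dtype=float)
    return (np.sum(history >= current) + 1) / (len(history) + 1)

def berk_jones(pvalues, alpha=0.05):
    pvalues = np.asarray(pvalues, dtype=float)
    n = len(pvalues)
    obs = max(np.sum(pvalues <= alpha) / n, alpha)   # one-sided: only excesses score
    if obs >= 1.0:
        return n * np.log(1.0 / alpha)
    # KL divergence between observed and expected share of small p-values
    kl = obs * np.log(obs / alpha) + (1 - obs) * np.log((1 - obs) / (1 - alpha))
    return n * kl

history = {"election": [3, 5, 4, 6, 2], "fraud": [0, 1, 0, 2, 1]}
today = {"election": 9, "fraud": 7}
pvals = [empirical_pvalue(today[w], history[w]) for w in today]
print(berk_jones(pvals, alpha=0.2))   # higher score -> more anomalous keyword cluster
```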
8

Stair-specific algorithms for identification of touch-down and foot-off when descending or ascending a non-instrumented staircase.

Foster, Richard J., De Asha, Alan R., Reeves, N.D., Maganaris, C.N., Buckley, John 05 November 2013 (has links)
yes / The present study introduces four event detection algorithms for defining touch-down and foot-off during stair descent and stair ascent using segmental kinematics. For stair descent, the vertical velocity minimum of the whole-body center of mass was used to define touch-down, and foot-off was defined as the instant of trail-limb peak knee flexion. For stair ascent, the local minimum in the vertical velocity of the lead-limb toe was used to define touch-down, and foot-off was defined as the local maximum in vertical displacement between the toe and pelvis. The performance of these algorithms was determined as the agreement between the timings of kinematically derived events and those defined kinetically (from ground reaction forces). Data were recorded while 17 young and 15 older adults completed stair descent and ascent trials over a four-step instrumented staircase. Trials were repeated for three stair riser height conditions (85 mm, 170 mm, and 255 mm). Kinematically derived touch-down and foot-off events showed good agreement (small 95% limits of agreement) with kinetically derived events for both young and older adults, across all riser heights, and for both ascent and descent. In addition, the agreement metrics were better than those returned by existing kinematically derived event detection algorithms developed for overground gait. These results indicate that touch-down and foot-off during stair ascent and descent of non-instrumented staircases can be determined with acceptable precision using segmental kinematic data.
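A minimal sketch of the stair-descent touch-down rule described above: touch-down is taken at local minima of the whole-body center-of-mass vertical velocity, found with a simple peak finder on the negated velocity signal. The sampling rate, minimum step separation, and synthetic trace are illustrative assumptions, not the study's data or full algorithm set.

```python
import numpy as np
from scipy.signal import find_peaks

def touchdown_frames(com_vertical_pos, fs=100.0, min_step_s=0.4):
    # numerical differentiation of center-of-mass height -> vertical velocity
    velocity = np.gradient(np.asarray(com_vertical_pos, dtype=float)) * fs
    # minima of velocity are peaks of the negated velocity
    minima, _ = find_peaks(-velocity, distance=int(min_step_s * fs))
    return minima, velocity

# synthetic descending center-of-mass height trace (illustrative only)
t = np.arange(0, 4, 0.01)
com_z = 1.0 - 0.17 * t + 0.02 * np.sin(2 * np.pi * 1.5 * t)
frames, vel = touchdown_frames(com_z)
print(frames * 0.01)                                 # candidate touch-down times in seconds
```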
9

An Efficient Computation of Convex Closure on Abstract Events

Bedasse, Dwight Samuel January 2005 (has links)
The behaviour of distributed applications can be modeled as the occurrence of events and how these events relate to each other. Event data collected according to this event model can be visualized using process-time diagrams that are constructed from a collection of traces and events. One of the main characteristics of a distributed system is the large number of events that are involved, especially in practical situations. This large number of events, and hence large process-time diagrams, make distributed-system observation difficult for the user. However, event-predicate detection, a search mechanism able to detect and locate arbitrary predicates within a process-time diagram or event collection, can help the user to make sense of this large amount of data. Ping Xie used the convex-abstract event concept, developed by Thomas Kunz, to search for hierarchical event predicates. However, his algorithm for computing convex closure to construct compound events, and especially hierarchical compound events (i.e., compound events that contain other compound events), is inefficient. In one case it took, on average, close to four hours to search the collection of event data for a specific hierarchical event predicate. In another case, it took nearly one hour. This dissertation discusses an efficient algorithm, an extension of Ping Xie's algorithm, that employs a caching scheme to build compound and hierarchical compound events based on matched sub-patterns. In both cases cited above, the new execution times were reduced by over 94%. They now take, on average, less than four minutes.
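A minimal sketch of computing a convex closure over events with a cache, illustrating the general idea of reusing results for repeated sub-patterns. The toy "happened-before" relation, the reachability representation, and the use of memoization are generic illustrative assumptions, not Ping Xie's algorithm or the extension developed in this thesis.

```python
from functools import lru_cache

# toy happened-before relation: event -> set of events reachable from it
REACHES = {
    "a": {"b", "c", "d"},
    "b": {"d"},
    "c": {"d"},
    "d": set(),
}

def _leq(x, y):
    """True if x happened before (or is) y."""
    return x == y or y in REACHES[x]

@lru_cache(maxsize=None)
def convex_closure(events):
    """events: frozenset of event ids; returns the convex closure as a frozenset."""
    closure = {
        e for e in REACHES
        if any(_leq(lo, e) for lo in events) and any(_leq(e, hi) for hi in events)
    }
    return frozenset(closure)

print(sorted(convex_closure(frozenset({"a", "d"}))))   # -> ['a', 'b', 'c', 'd']
```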
10

Real-time event detection in massive streams

Petrovic, Sasa January 2013 (has links)
New event detection, also known as first story detection (FSD), has become very popular in recent years. The task consists of finding previously unseen events from a stream of documents. Despite the apparent simplicity, FSD is very challenging and has applications anywhere timely access to fresh information is crucial: from journalism to stock market trading, homeland security, or emergency response. With the rise of user-generated content and citizen journalism, we have entered an era of big and noisy data, yet traditional approaches for solving FSD are not designed to deal with this new type of data. The amount of information being generated today exceeds previously available datasets by many orders of magnitude, making traditional approaches obsolete for modern event detection. In this thesis, we propose a modern approach to event detection that scales to unbounded streams of text without sacrificing accuracy. This is a crucial property that enables us to detect events from large streams like Twitter, which none of the previous approaches were able to do. One of the major problems in detecting new events is vocabulary mismatch, also known as lexical variation. This problem is characterized by different authors using different words to describe the same event, and it is inherent to human language. We show how to mitigate this problem in FSD by using paraphrases. Our approach that uses paraphrases achieves state-of-the-art results on the FSD task, while still maintaining efficiency and being able to process unbounded streams. Another important property of user-generated content is the high level of noise, and Twitter is no exception. This is another problem that traditional approaches were not designed to deal with, and here we investigate different methods of reducing the amount of noise. We show that by using information from Wikipedia, it is possible to significantly reduce the amount of spurious events detected in Twitter, while maintaining a very small latency in detection. A question is often raised as to whether Twitter is useful at all, especially if one has access to a high-quality stream such as the newswire, or whether it should be considered a sort of poor man’s newswire. In our comparison of these two streams we find that Twitter contains events not present in the newswire, and that it also breaks some events sooner, showing that it is useful for event detection, even in the presence of newswire.
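A minimal sketch of the core first-story-detection decision described above: each incoming document is compared to its nearest neighbour among earlier documents, and a low maximum similarity signals a new event. This exact pairwise comparison is the slow baseline; the thesis's contribution is making this step scale to unbounded streams, which is not shown here. The threshold, tokenisation, and toy tweets are illustrative assumptions.

```python
from collections import Counter
import math

def cosine(a, b):
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def first_story_detection(stream, threshold=0.2):
    seen, flags = [], []
    for doc in stream:
        vec = Counter(doc.lower().split())
        max_sim = max((cosine(vec, old) for old in seen), default=0.0)
        flags.append(max_sim < threshold)       # True -> first story of a new event
        seen.append(vec)
    return flags

tweets = ["earthquake hits city centre", "strong earthquake shakes the city",
          "new phone released today"]
print(first_story_detection(tweets))            # -> [True, False, True]
```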
