1

Experimental Frame Structuring For Automated Model Construction: Application to Simulated Weather Generation

Cheon, Saehoon January 2007 (has links)
The source system is the real or virtual environment that we are interested in modeling. It is viewed as a source of observable data in the form of time-indexed trajectories of variables. The data gathered from observing or experimenting with a system is called the system behavior database. The time-indexed trajectories of variables provide an important clue for composing a DEVS (Discrete Event System Specification) model: once an event set is derived from the trajectories, the DEVS model structure can be extracted from that event set. The process must not be simple model generation but meaningful model structuring driven by a request. The source data and the query, designed with the System Entity Structure (SES), are converted to XML metadata by an XML conversion process. The SES serves as a compact representation for organizing all possible hierarchical compositions of a system, so it plays an important role in designing the structural representation of the query and of the source data to be stored. As a real-data application, model structuring with the US Climate Normals is introduced. Moreover, complex systems can be developed at different levels of resolution. When the large volume of source data in the US Climate Normals is used to build the DEVS model, model complexity is unavoidable. This issue is addressed by creating an equivalent lumped model based on the concept of morphism. Two methods for defining the resolution level are discussed: fixed and dynamic definition. Aggregation is also discussed as one approach to model abstraction. Finally, this paper introduces the process of integrating the DEVSML (DEVS Modeling Language) engine with the DEVS model creation engine for a Web Service Oriented Architecture.
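
As a rough illustration of the first step described above, the sketch below derives a simple event set from a time-indexed trajectory by recording the instants at which the observed variable changes value. It is not from the thesis; the function and variable names are invented for illustration.

```python
# Hypothetical sketch: derive an event set from a time-indexed trajectory.
# An "event" is recorded whenever the observed variable changes value,
# which is one simple way to obtain the event set a DEVS model is built from.

def derive_event_set(trajectory):
    """trajectory: list of (time, value) pairs, sorted by time."""
    events = []
    previous_value = None
    for time, value in trajectory:
        if value != previous_value:          # a state change marks an event
            events.append((time, value))
            previous_value = value
    return events

# Example: a toy precipitation trajectory (time in hours, value in mm)
trajectory = [(0, 0.0), (1, 0.0), (2, 1.2), (3, 1.2), (4, 0.0)]
print(derive_event_set(trajectory))          # [(0, 0.0), (2, 1.2), (4, 0.0)]
```
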
2

Modeling Information Precursors for Event Forecasting

Ning, Yue 02 August 2018 (has links)
This dissertation is focused on the design and evaluation of machine learning algorithms for modeling information precursors for use in event modeling and forecasting. Given an online stream of information (e.g., news articles, social media postings), how can we model and understand how events unfold, how they influence each other, and how they can act as determinants of future events? First, we study information reciprocity in joint news and social media streams to capture how events evolve. We present an online story-chaining algorithm that links related news articles together in a low-complexity manner and a mechanism to classify the interaction between a news article and social media (Twitter) activity into four categories. This is followed by identification of major information sources for a given story chain based on the interaction states of news and Twitter. We demonstrate through this study that Twitter as a social network platform serves as a fast way to draw public attention to many social events such as sports, whereas news media is quicker to report events regarding political, economic, and business issues. In the second problem, we focus on forecasting and understanding large-scale societal events from open source datasets. Our goal here is to develop algorithms that can automatically reconstruct precursors to societal events. We develop a nested framework involving multi-instance learning for mining precursors by harnessing temporal constraints. We evaluate the proposed model for various event categories in multiple geolocations with comprehensive experiments. Next, to reinforce the fact that events are typically interconnected and influenced by events in other locations, we develop an approach that creates personalized models for exploring spatio-temporal event correlations; this approach also helps tackle data/label sparsity problems across geolocations. Finally, this dissertation demonstrates how our algorithms can be used to study key characteristics of mass events such as protests. Some mass gatherings run the risk of turning violent, causing damage to both property and people. We propose a tailored solution for uncovering triggers from both news media and social media for violent event analysis. This work was partially supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center (DoI/NBC) contract number D12PC000337, the Office of Naval Research under contract N00014-16-C-1054, and the U.S. Department of Homeland Security under Grant Award Number 2017-ST-061-CINA01. The US Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF, IARPA, DoI/NBC, or the US Government. / Ph. D. / Today, massive amounts of open source information are widely available through news and social media, but analyzing this information is a complex task. It is imperative to develop algorithms that can automatically reconstruct the clues to societal events that are reported in news or social media. The focus of this dissertation is on simultaneously uncovering precursors to societal events and using such precursors to forecast upcoming events.
We develop various machine learning algorithms that can model event-related data and determine the key happenings prior to an event that are most predictive of such events in the future. We use our algorithms to understand the nature of precursors to civil unrest events (protests, strikes, and ‘occupy’ events) and why some of these events turn violent.
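
A minimal sketch of the multi-instance idea described above, assuming max-pooled bag scoring and hand-set weights (both assumptions for illustration, not the dissertation's actual model): each "bag" collects the documents preceding a day of interest, the bag label records whether an event followed, and the highest-scoring instances in a positive bag are read off as candidate precursors.

```python
# Hypothetical sketch of multi-instance learning for precursor mining.
# Each "bag" is the set of documents published before a (non-)event day;
# instance scores are aggregated with max pooling, so the top-scoring
# documents in a positive bag can be surfaced as candidate precursors.

import math

def instance_score(doc_features, weights):
    """Logistic score for a single document (instance)."""
    z = sum(w * x for w, x in zip(weights, doc_features))
    return 1.0 / (1.0 + math.exp(-z))

def bag_score(bag, weights):
    """Max-pooled bag probability: the bag is as positive as its best instance."""
    return max(instance_score(doc, weights) for doc in bag)

def rank_precursors(bag, weights):
    """Order documents in a bag by how predictive they are."""
    return sorted(range(len(bag)),
                  key=lambda i: instance_score(bag[i], weights),
                  reverse=True)

# Toy usage: three documents with two features each, hand-set weights
bag = [[0.1, 0.2], [0.9, 0.8], [0.4, 0.1]]
weights = [1.5, 2.0]
print(bag_score(bag, weights), rank_precursors(bag, weights))
```
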
3

A comparison of two simulation languages for microcomputer based discrete event modeling

Chikkala, Ramesh January 1989 (has links)
No description available.
4

Towards Semantically Enabled Complex Event Processing

Keskisärkkä, Robin January 2017 (has links)
The Semantic Web provides a framework for semantically annotating data on the web, and the Resource Description Framework (RDF) supports the integration of structured data represented in heterogeneous formats. Traditionally, the Semantic Web has focused primarily on more or less static data, but information on the web today is becoming increasingly dynamic. RDF Stream Processing (RSP) systems address this issue by adding support for streaming data and continuous query processing. To some extent, RSP systems can be used to perform complex event processing (CEP), where meaningful high-level events are generated based on low-level events from multiple sources; however, there are several challenges with respect to using RSP in this context. Event models designed to represent static event information lack several features required for CEP, and are typically not well suited for stream reasoning. The dynamic nature of streaming data also greatly complicates the development and validation of RSP queries. Therefore, reusing queries that have been prepared ahead of time is important to be able to support real-time decision-making. Additionally, there are limitations in existing RSP implementations in terms of both scalability and expressiveness, where some features required in CEP are not supported by any of the current systems. The goal of this thesis work has been to address some of these challenges and the main contributions of the thesis are: (1) an event model ontology targeted at supporting CEP; (2) a model for representing parameterized RSP queries as reusable templates; and (3) an architecture that allows RSP systems to be integrated for use in CEP. The proposed event model tackles issues specifically related to event modeling in CEP that have not been sufficiently covered by other event models, includes support for event encapsulation and event payloads, and can easily be extended to fit specific use-cases. The model for representing RSP query templates was designed as an extension to SPIN, a vocabulary that supports modeling of SPARQL queries as RDF. The extended model supports the current version of the RSP Query Language (RSP-QL) developed by the RDF Stream Processing Community Group, along with some of the most popular RSP query languages. Finally, the proposed architecture views RSP queries as individual event processing agents in a more general CEP framework. Additional event processing components can be integrated to provide support for operations that are not supported in RSP, or to provide more efficient processing for specific tasks. We demonstrate the architecture in implementations for scenarios related to traffic-incident monitoring, criminal-activity monitoring, and electronic healthcare monitoring.
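
As a loose illustration of the query-template idea (the thesis models templates in RDF via a SPIN extension; here a plain Python string template and an approximate RSP-QL-style query stand in, and all names, parameters, and URIs are invented), a parameterized continuous query can be stored once and instantiated with concrete bindings at deployment time:

```python
# Hypothetical sketch of reusable, parameterized stream-query templates.
# A plain string template stands in for the RDF-based template model;
# the RSP-QL-style syntax below is approximate.

from string import Template

RSP_QL_TEMPLATE = Template("""
REGISTER STREAM :HighReadings AS
SELECT ?sensor ?value
FROM NAMED WINDOW :w ON <$stream_uri> [RANGE PT${window_minutes}M STEP PT1M]
WHERE {
  WINDOW :w { ?sensor :hasReading ?value . FILTER(?value > $threshold) }
}
""")

def instantiate(template, **bindings):
    """Fill a prepared query template with concrete parameter values."""
    return template.substitute(**bindings)

# Example: reuse the same template for two monitoring scenarios
print(instantiate(RSP_QL_TEMPLATE,
                  stream_uri="http://example.org/streams/traffic",
                  window_minutes=10, threshold=80))
```
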
5

Reengineering Primary Health Care for Information and Communication Technology

Leung, Gloria Unknown Date
No description available.
6

Intelligent Event Focused Crawling

Farag, Mohamed Magdy Gharib 23 September 2016 (has links)
There is a need for an integrated event focused crawling system to collect Web data about key events. When an event occurs, many users try to locate the most up-to-date information about that event. Yet there is little systematic collecting and archiving of information about events anywhere. We propose intelligent event focused crawling for automatic event tracking and archiving, as well as effective access. We extend traditional focused (topical) crawling techniques in two directions of modeling and representation: events, and webpage source importance. We developed an event model that can capture key event information (topical, spatial, and temporal) and incorporated that model into the focused crawler algorithm. For the focused crawler to leverage the event model in predicting a webpage's relevance, we developed a function that measures the similarity between two event representations based on textual content. Although the textual content provides a rich set of features, we proposed an additional source of evidence that allows the focused crawler to better estimate the importance of a webpage by considering its website. We estimated webpage source importance by the ratio of relevant to non-relevant webpages found while crawling a website, and combined the textual content information and source importance into a single relevance score. For the focused crawler to work well, it needs a diverse set of high-quality seed URLs (URLs of relevant webpages that link to other relevant webpages). Although manual curation of seed URLs guarantees quality, it requires exhaustive manual labor. We proposed an automated approach for curating seed URLs using social media content, leveraging the richness of social media content about events to extract URLs that can be used as seed URLs for further focused crawling. We evaluated our system through four series of experiments, using recent events: the Orlando shooting, Ecuador earthquake, Panama Papers, California shooting, Brussels attack, Paris attack, and Oregon shooting. In the first experiment series, our proposed event model representation, used to predict webpage relevance, outperformed the topic-only approach, showing better results in precision, recall, and F1-score. In the second series, using harvest ratio to measure the ability to collect relevant webpages, our event model-based focused crawler outperformed the state-of-the-art focused crawler (best-first search). The third series evaluated the effectiveness of our proposed webpage source importance for collecting more relevant webpages: the focused crawler with webpage source importance collected roughly the same number of relevant webpages as the crawler without it, but from a smaller set of sources. The fourth series provides guidance to archivists regarding the effectiveness of curating seed URLs from social media content (tweets) using different methods of selection. / Ph. D.
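
A minimal sketch of the two relevance signals described above, using a weighted-sum combination and a bounded variant of the source-importance ratio (the fraction of relevant pages seen on a site); both choices, and all names, are assumptions for illustration rather than the dissertation's exact scoring function.

```python
# Hypothetical sketch: combine textual similarity to an event representation
# with a per-website source-importance estimate built from the relevant /
# non-relevant page counts observed so far during crawling.

from collections import Counter
from urllib.parse import urlparse

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

class SourceImportance:
    """Tracks relevant vs. non-relevant page counts per website."""
    def __init__(self):
        self.relevant = Counter()
        self.non_relevant = Counter()

    def update(self, url, is_relevant):
        host = urlparse(url).netloc
        (self.relevant if is_relevant else self.non_relevant)[host] += 1

    def score(self, url):
        host = urlparse(url).netloc
        r, n = self.relevant[host], self.non_relevant[host]
        return r / (r + n) if (r + n) else 0.5   # neutral prior for unseen hosts

def relevance(page_text, event_description, url, sources, alpha=0.7):
    """Weighted combination of content similarity and source importance."""
    return (alpha * cosine_similarity(page_text, event_description)
            + (1 - alpha) * sources.score(url))
```
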
7

Enhancement and Evaluation of a Rainfall-Runoff Single Event Model

Salazar Mejia, Germania 12 May 2012 (has links)
Planning and design of stormwater facilities (including best management practices and low impact development) involve the calculation of peak flows and runoff volumes. Rainfall-runoff models are frequently utilized to estimate this information. A user-friendly rainfall-runoff tool (LIDIA) was developed using Visual Basic for Applications in Microsoft Office Excel. This research presented comprehensive guidelines on how to set up a model in LIDIA and reported the first evaluation of LIDIA using field data. The LIDIA hydrologic module was tested using 10-minute rainfall, land cover, soil series, land cover management, and runoff data from two small watersheds in North Mississippi. Eleven storm events over a period of seven months were used for the first evaluation site, and 11 storm events were used for the second case study. Overall, the development and results of the LIDIA tool shown in this study support continued enhancement of the model.
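
For context on the kind of estimate event-based rainfall-runoff tools produce, the sketch below applies the SCS curve-number method, a common approach for computing event runoff depth. It is a generic example, not a description of LIDIA's internals.

```python
# Generic illustration of a runoff-volume calculation (SCS curve-number
# method).  Not LIDIA's actual implementation; shown only as an example
# of the runoff estimates such tools provide.

def scs_runoff_depth(rainfall_in, curve_number, initial_abstraction_ratio=0.2):
    """Runoff depth (inches) from event rainfall depth (inches) and CN."""
    s = 1000.0 / curve_number - 10.0            # potential maximum retention
    ia = initial_abstraction_ratio * s          # initial abstraction
    if rainfall_in <= ia:
        return 0.0
    return (rainfall_in - ia) ** 2 / (rainfall_in - ia + s)

# Example: 2.5 in of rain on a watershed with CN = 80
print(round(scs_runoff_depth(2.5, 80), 2))      # runoff depth in inches
```
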
