1 |
Migrating to a real-time distributed parallel simulator architecture
Duvenhage, Bernardt, 23 January 2009
The South African National Defence Force (SANDF) currently requires a system-of-systems simulation capability to support the different phases of a Ground Based Air Defence System (GBADS) acquisition program. A non-distributed, fast-as-possible simulator and its architectural predecessors, developed by the Council for Scientific and Industrial Research (CSIR), were able to provide the required capability during the concept and definition phases of the acquisition life cycle. The non-distributed simulator implements a 100 Hz logical-time Discrete Time System Specification (DTSS) in support of the existing models. However, real-time simulation execution has become a prioritised requirement to support the development phase of the acquisition life cycle. This dissertation is about the ongoing migration of the non-distributed simulator to a practical simulation architecture that supports the real-time requirement. The simulator simulates a synthetic environment inhabited by interacting GBAD systems and hostile airborne targets. The non-distributed simulator was parallelised across multiple Commodity Off-the-Shelf (COTS) PC nodes connected by a commercial Gigabit Ethernet infrastructure. Since model reuse was important for cost effectiveness, it was decided to reuse all the existing models by retaining their 100 Hz logical-time DTSSs. The High Level Architecture (HLA), an IEEE standard for large-scale, event-based distributed simulation interoperability, had been identified as the most suitable distribution and parallelisation technology. However, two categories of risk in migrating directly to the HLA were identified, and the choice was made, with motivations, to mitigate the identified risks by developing a specialised custom distributed architecture.
This dissertation describes and analyses the custom discrete-time, distributed, peer-to-peer, message-passing architecture that the author built to support the parallelised simulator's requirements, and reports on empirical studies of its performance and flexibility. The architecture is shown to be a suitable and cost-effective distributed simulator architecture, supporting a speed-up of three to four times through parallelisation of the 100 Hz logical-time DTSS. This distributed architecture is currently in use and working as expected, but it imposes a parallelisation speed-up ceiling irrespective of the number of distributed processors. In addition, a hybrid discrete-time/discrete-event modelling approach and simulator is proposed that lowers the distributed communication and time-synchronisation overhead, improving the scalability of the discrete-time simulator while still economically reusing the existing models. The proposed hybrid architecture was implemented and its real-time performance analysed. The hybrid architecture is found to support a parallelisation speed-up that is not bounded, but linearly related to the number of distributed processors, up to at least the 11 processing nodes available for experimentation. / Dissertation (MSc)--University of Pretoria, 2009. / Computer Science / unrestricted
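The lock-step advancement that such a discrete-time distributed simulator requires can be sketched as follows. This is an illustrative outline under assumed names (`Peer`, `run_lockstep`, the toy `target` model), not the dissertation's actual architecture; a per-tick barrier stands in for the real system's message-passing time synchronisation.

```python
# Hypothetical sketch: lock-step advancement of a 100 Hz logical-time DTSS.
# Each peer executes its local models for tick t, broadcasts its state, and
# only advances to tick t+1 once every peer's messages for tick t arrived.

DT = 0.01  # 100 Hz logical time step, in seconds

class Peer:
    def __init__(self, name, models):
        self.name = name
        self.models = models       # callables: (state, t) -> state
        self.state = {}
        self.inbox = []            # messages from the other peers for this tick

    def step(self, tick):
        t = tick * DT
        for model in self.models:
            self.state = model(self.state, t)
        return dict(self.state)    # state message broadcast to the other peers

def run_lockstep(peers, ticks):
    """Advance all peers in lock step; the exchange-then-barrier per tick
    models the architecture's discrete-time synchronisation point."""
    for tick in range(ticks):
        outgoing = [p.step(tick) for p in peers]   # parallel in the real system
        for p in peers:                            # exchange, then barrier
            p.inbox = [m for q, m in zip(peers, outgoing) if q is not p]
    return peers

# toy model: a target whose altitude decays slightly each 10 ms tick
def target(state, t):
    state["alt"] = state.get("alt", 1000.0) * 0.999
    return state

peers = run_lockstep([Peer("gbad", [target]), Peer("tracker", [target])], ticks=100)
```

The speed-up ceiling reported above corresponds to the barrier in `run_lockstep`: however many nodes participate, none can advance past the slowest peer plus the per-tick communication cost.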
|
2 |
Learning, Detection, Representation, Indexing and Retrieval of Multi-agent Events in Videos
Hakeem, Asaad, 01 January 2007
The world that we live in is a complex network of agents and their interactions, which are termed events. An instance of an event is composed of directly measurable low-level actions (which I term sub-events) having a temporal order. Also, the agents can act independently (e.g. voting) as well as collectively (e.g. scoring a touchdown in a football game) to perform an event. With the dawn of the new millennium, low-level vision tasks such as segmentation, object classification, and tracking have become fairly robust, but a representational gap still exists between low-level measurements and high-level understanding of video sequences. This dissertation is an effort to bridge that gap: I propose novel learning, detection, representation, indexing and retrieval approaches for multi-agent events in videos. In order to achieve the goal of high-level understanding of videos, firstly, I apply statistical learning techniques to model multiple-agent events. For that purpose, I use the training videos to model the events by estimating the conditional dependencies between sub-events. Given a video sequence, I track the people (head and hand regions) and objects using a Meanshift tracker. An underlying rule-based system detects the sub-events using the tracked trajectories of the people and objects, based on their relative motion. Next, an event model is constructed by estimating the sub-event dependencies, that is, how frequently sub-event B occurs given that sub-event A has occurred. The advantages of such an event model are twofold. First, I do not require prior knowledge of the number of agents involved in an event. Second, no assumptions are made about the length of an event. Secondly, after learning the event models, I detect events in a novel video using graph clustering techniques. To that end, I construct a graph of temporally ordered sub-events occurring in the novel video.
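The sub-event dependency estimation described above can be sketched as follows. The function name and the toy "voting" sub-event labels are illustrative assumptions, not the dissertation's code or data.

```python
# Illustrative sketch: estimate how frequently sub-event B occurs given that
# sub-event A has occurred earlier in the same training event instance.
from collections import defaultdict

def learn_event_model(sequences):
    """sequences: temporally ordered lists of sub-event labels.
    Returns a dict mapping (A, B) -> estimated P(B occurs after A)."""
    pair_counts = defaultdict(int)   # times B followed A
    occ_counts = defaultdict(int)    # times A occurred at all
    for seq in sequences:
        for i, a in enumerate(seq):
            occ_counts[a] += 1
            for b in seq[i + 1:]:
                pair_counts[(a, b)] += 1
    return {pair: c / occ_counts[pair[0]] for pair, c in pair_counts.items()}

# toy training data for a hypothetical "voting" event
model = learn_event_model([
    ["approach_booth", "pick_ballot", "cast_vote"],
    ["approach_booth", "cast_vote"],
])
```

Note that nothing in the estimate fixes the number of agents or the length of an event, matching the two advantages claimed in the abstract.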
Next, using the learnt event model, I estimate a weight matrix of conditional dependencies between the sub-events in the novel video. Applying Normalized Cut (a graph clustering technique) to the estimated weight matrix then detects the events in the novel video. The principal assumption made in this work is that events are composed of highly correlated chains of sub-events that have high conditional dependency (association) within a cluster and relatively low conditional dependency (disassociation) between clusters. Thirdly, in order to represent the detected events, I propose an extension of the CASE representation of natural languages. I extend CASE to allow the representation of temporal structure between sub-events. Also, in order to capture both multi-agent and multi-threaded events, I introduce a hierarchical CASE representation of events in terms of sub-events and case-lists. The essence of the proposition is that, based on the temporal relationships of the agent motions and a description of its state, it is possible to build a formal description of an event. Furthermore, I recognize the importance of representing the variations in the temporal order of sub-events that may occur in an event, and encode the temporal probabilities directly into my event representation. The proposed extended representation with probabilistic temporal encoding is termed P-CASE and provides a plausible means of interface between users and the computer. Using the P-CASE representation, I automatically encode the event ontology from training videos. This offers a significant advantage, since domain experts do not have to go through the tedious task of determining the structure of events by browsing all the videos. Finally, I utilize the event representation for indexing and retrieval of events. Given the different instances of a particular event, I index the events using the P-CASE representation.
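A minimal sketch of a two-way Normalized Cut on such a weight matrix, assuming the standard spectral relaxation (thresholding the second-smallest eigenvector of the normalized Laplacian). The affinity matrix below is a toy example of two strongly associated sub-event chains with weak cross-dependency, not data from the dissertation.

```python
# Hedged sketch: two-way Normalized Cut via the normalized graph Laplacian.
import numpy as np

def normalized_cut(W):
    """W: symmetric non-negative affinity matrix over sub-events.
    Returns one boolean cluster label per node (sign of the Fiedler vector)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt  # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L_sym)              # ascending eigenvalues
    fiedler = eigvecs[:, 1]                               # second-smallest
    return fiedler > 0                                    # sign split = 2 clusters

# two tightly associated triples of sub-events, weakly linked to each other
W = np.array([
    [0.0, 5.0, 5.0, 0.1, 0.0, 0.0],
    [5.0, 0.0, 5.0, 0.0, 0.1, 0.0],
    [5.0, 5.0, 0.0, 0.0, 0.0, 0.1],
    [0.1, 0.0, 0.0, 0.0, 5.0, 5.0],
    [0.0, 0.1, 0.0, 5.0, 0.0, 5.0],
    [0.0, 0.0, 0.1, 5.0, 5.0, 0.0],
])
labels = normalized_cut(W)
```

The cut lands between the two triples because within-cluster association (5.0) dominates between-cluster disassociation (0.1), which is exactly the principal assumption stated in the abstract.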
Next, given a query in the P-CASE representation, event retrieval is performed using a two-level search. At the first level, a maximum likelihood estimate of the query event against the different indexed event models is computed; this provides the best-matching event model. At the second level, a matching score is obtained for all the event instances belonging to that model, using a weighted Jaccard similarity measure. Extensive experimentation was conducted on the detection, representation, indexing and retrieval of multiple-agent events in videos from the meeting, surveillance, and railroad monitoring domains. To that end, the Semoran system was developed, which accepts user input in any of three forms for event retrieval: predefined queries in the P-CASE representation, custom queries in the P-CASE representation, or query by example video. The system then searches the entire database and returns the matched videos to the user. I used seven standard video datasets from the computer vision community as well as my own videos for testing the robustness of the proposed methods.
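One common form of the weighted Jaccard similarity measure can be sketched as follows; the feature sets and weights shown are assumptions for illustration, and the dissertation's exact weighting over P-CASE elements may differ.

```python
# Sketch of a weighted Jaccard similarity: ratio of summed element-wise
# minima to summed element-wise maxima over the union of features.
def weighted_jaccard(a, b):
    """a, b: dicts mapping features (e.g. sub-events) to non-negative weights."""
    keys = set(a) | set(b)
    num = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    den = sum(max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    return num / den if den else 0.0

# hypothetical query and stored event instance, weighted by sub-event salience
query = {"enter": 1.0, "sit": 0.5, "talk": 0.8}
stored = {"enter": 1.0, "sit": 0.2, "leave": 0.3}
score = weighted_jaccard(query, stored)   # 1.2 / 2.6, roughly 0.46
```

The measure degrades gracefully: identical weight vectors score 1.0, disjoint ones 0.0, and partially overlapping instances fall in between, which is what a ranked retrieval list needs.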
|
3 |
Ereigniswissen / Insights into event knowledge
Welke, Tinka, 22 October 2014
Ausgehend von dem Fokus der Ereignisrepräsentation auf die Patiens-Rolle (Personen und Objekte, die der im Ereignis stattfindenden Zustandsveränderung unterliegen) wird untersucht, ob die sich während des Ereignisses verändernden Merkmale des Patiens Bestandteil des Ereigniswissens sind und zur Repräsentation des chronologischen Verlaufs von Ereignissen beitragen. Dies wurde anhand der Bearbeitung von antonymen Adjektiven geprüft, die Anfangs- und Endmerkmale des Patiens eines zuvor dargebotenen Ereignisverbs benennen. Ausgewertet wurden behaviorale Daten und Blickbewegungen. Dabei wurden mit zeit-impliziten und zeit-expliziten Aufgaben folgende Ergebnisse erzielt: (1) Die Ereignisrepräsentation enthält sich verändernde Merkmale des Patiens. (2) Die Merkmale des Patiens werden abhängig von der angewandten Strategie (sprachliche vs. Simulationsstrategie) in einer chronologischen Abfolge mental simuliert. (3) Endmerkmale haben gegenüber Anfangsmerkmalen Priorität in der Ereignisrepräsentation. Sie sind im Ereignisverb impliziert und können so sprachlich bereitgestellt werden. (4) Die Zeiteffekte (Chronologie und Zielpräferenz) treten bereits unter automatischen Bedingungen (SOA 250 ms, zeit-implizite Aufgabe) auf. (5) Antwortstrategien wurden insbesondere durch Blickbewegungen indiziert. Antwortstrategien modifizieren die Zeiteffekte und geben Aufschluss über den Anteil der sprachlichen Verarbeitung und der Simulation. Insgesamt lässt sich aus den Untersuchungen schließen, dass die Veränderung des Patiens und damit Aspekte des zeitlichen Verlaufs von Ereignissen zur Ereignisrepräsentation gehören. Die Befundlage deutet auf ein dynamisches Zusammenspiel von sprachlichen und Simulationsprozessen bei der Repräsentation des zeitlichen Verlaufs hin. / This thesis comprises three investigations into the mental representation of events. 
Proceeding on the assumption that representations of events focus on the role of the patient (the person or object undergoing a change of state during the event), it is investigated whether the changing features of the patient form part of event knowledge and whether or not they contribute to the way in which the temporal progression of events is represented. The study involved time-implicit and time-explicit tasks that required participants to process antonymous adjectives denoting the source and resulting features of the patient involved in an event prime. Behavioural and eye movement data were analysed and the following results obtained: (1) The changing features of the patient form part of the representation of the event. (2) Depending on the strategy adopted (linguistic vs. simulation), patient features can be mentally simulated in chronological order. (3) Resulting features play a more prominent role in event representations than source features. Resulting features are implied by the event verb and can thus be accessed linguistically. (4) Temporal effects (preference for resulting features, effect of chronology) already occur in the automatic condition (SOA 250 ms, time-implicit tasks). (5) Response strategies are indicated by eye movements. Response strategies modify temporal effects and provide an indication of how much linguistic processing is taking place and how much simulation. All in all the investigations show that the change undergone by the patient, i.e. the aspect which expresses the temporal progression of an event, forms part of the representation of that event. The results point to a dynamic interplay of linguistic and simulation processing in the representation of temporal progression.
|
4 |
Digital Fabric
Goshi, Sudheer, 01 January 2012
Continuing advances in VLSI have enabled engineers to build high-performance computer systems to solve complex problems. Yet real-world tasks like pattern recognition and speech recognition still remain elusive to even the most advanced computer systems today. Many advances in the science of computer design and technology are coming together to enable the creation of next-generation computing machines that solve real-world problems the human brain handles with ease. One such engineering advance is the field of neuromorphic engineering, which tries to establish closer links to biology and help us investigate the problem of designing better computing machines. A chip built on the principles of neuromorphic engineering is called a neuromorphic chip. A neuromorphic chip aims to solve real-world problems, but as the complexity of the problem increases, the computational capability of a single chip can become a limitation. In order to improve performance and accomplish a complex real-world task, many such chips need to be integrated into a system; hence, the efficiency of such a system depends on effective inter-chip communication. The work presented here aims at building a message-passing network (Digital Fabric) simulator that integrates many such chips. Each chip represents a binary event-based unit called a spiking analog cortical module. The inter-chip communication protocol employed here is called Address Event Representation. The Digital Fabric is built in three revisions, with a different architecture considered in each revision and the complexity increased at each iteration. The experiments performed in each revision test the performance of these configurations, and the results lay a foundation for further studies. In the future, building a high-level simulation model will assist in scaling and evaluating various network topologies.
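The Address Event Representation protocol mentioned above can be sketched in outline: each spike is transmitted on the shared inter-chip bus as the address of the unit that fired, with event ordering (or a timestamp) carrying the timing. The packing scheme below is an illustrative assumption, not the actual chip's wire format.

```python
# Illustrative AER encode/decode: a spike becomes (time, address), where the
# address packs the source chip and neuron into a single bus word.
def aer_encode(spikes):
    """spikes: list of (time, chip_id, neuron_id) tuples.
    Emit time-ordered address events for the shared bus."""
    events = []
    for t, chip, neuron in sorted(spikes):
        address = (chip << 8) | neuron      # assumed packing: 8-bit neuron field
        events.append((t, address))
    return events

def aer_decode(events):
    """Recover (time, chip_id, neuron_id) tuples from address events."""
    return [(t, addr >> 8, addr & 0xFF) for t, addr in events]

# three spikes from two chips; the bus serialises them in time order
spikes = [(0.002, 1, 7), (0.001, 0, 3), (0.003, 1, 7)]
events = aer_encode(spikes)
assert aer_decode(events) == sorted(spikes)   # lossless round trip
```

Because only active units generate traffic, AER exploits the sparseness of spiking activity: bus bandwidth scales with the event rate rather than with the number of neurons.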
|