1.
Statistical Understanding of Broadcast Baseball Videos from the Perspective of Semantic Shot Distribution. Teng, Chih-chung, 07 September 2009.
Recently, sport video analysis has attracted considerable attention from researchers because of its entertainment applications and potential commercial benefits. Sport video analysis aims to identify what triggered the excitement of audiences. Previous methods rely mainly on video decomposition using domain-specific knowledge. The study and development of suitable and efficient techniques for sport video analysis have been conducted extensively over the last decade. However, several longstanding challenges, such as the semantic gap and commercial detection, remain unsolved. In this work, we apply semantic analysis to the interval between adjacent pitch scenes, which we call the "gap length." Different kinds of baseball games exhibit their own specific gap-length distributions, which reveal the potential significance of each game.
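A minimal sketch of the kind of gap-length statistic described above, assuming pitch-scene start times have already been detected; the timestamps, bin width, and function names are hypothetical, not taken from the thesis:

```python
from collections import Counter

def gap_length_histogram(pitch_starts, bin_width=10.0):
    """Histogram of time gaps (seconds) between adjacent pitch scenes.

    pitch_starts: sorted start times of detected pitch shots.
    bin_width: width of each histogram bin in seconds (assumed value).
    """
    gaps = [b - a for a, b in zip(pitch_starts, pitch_starts[1:])]
    return Counter(int(g // bin_width) for g in gaps)

# Hypothetical detected pitch-shot start times for one half-inning.
starts = [12.0, 45.5, 58.1, 130.7, 142.0]
for bin_idx, count in sorted(gap_length_histogram(starts).items()):
    print(f"{bin_idx * 10:>4.0f}-{(bin_idx + 1) * 10:<4.0f}s: {count}")
```

Comparing such histograms across games (or game types) is one way the "specific distribution for gap length" mentioned above could be made concrete.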
2.
Knowledge driven approaches to e-learning recommendation. Mbipom, Blessing, January 2018.
Learners often have difficulty finding and retrieving relevant learning materials to support their learning goals because of two main challenges. First, the vocabulary learners use to describe their goals differs from that used by domain experts in teaching materials; this mismatch creates a semantic gap. Second, learners lack sufficient knowledge about the domain they are trying to learn about, so they are unable to assemble effective keywords that identify what they wish to learn; this problem creates an intent gap. The work presented in this thesis focuses on addressing the semantic and intent gaps that learners face during an e-Learning recommendation task. The semantic gap is addressed by introducing a method that automatically creates background knowledge in the form of a set of rich, learning-focused concepts related to the selected learning domain. The knowledge of teaching experts contained in e-Books is used as a guide to identify important domain concepts. The concepts represent important topics that learners should be interested in. An approach is developed which leverages this concept vocabulary to represent learning materials, and this representation influences retrieval during the recommendation of new learning materials. The effectiveness of our approach is evaluated on a dataset of Machine Learning and Data Mining papers, and it outperforms benchmark methods. The results confirm that incorporating background knowledge into the representation of learning materials provides a shared vocabulary for experts and learners, and this enables the recommendation of relevant materials. We address the intent gap by developing an approach which leverages the background knowledge to identify important learning concepts that are employed for refining learners' queries. This approach enables us to automatically identify concepts that are similar to queries and to exploit distinctive concept terms when refining learners' queries. Using the refined query allows the search to focus on documents that contain topics relevant to the learner. An e-Learning recommender system is developed to evaluate the success of our approach, using a collection of learner queries and a dataset of Machine Learning and Data Mining learning materials. Users with different levels of expertise are employed for the evaluation. Results from experts, competent users and beginners all showed that our method produced documents that were consistently more relevant to learners than those produced by the standard method. The results show the benefits of using our knowledge-driven approaches to help learners find relevant learning materials.
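A minimal sketch of concept-based query refinement in the spirit of the approach above, using a plain bag-of-words similarity; the concept vocabulary, scoring, and expansion policy are illustrative assumptions, not the thesis's actual method:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def refine_query(query, concepts, k=1):
    """Expand a learner query with terms from its k most similar
    background-knowledge concepts (a simple stand-in for the thesis's
    concept representation)."""
    q = Counter(query.lower().split())
    scored = sorted(concepts.items(),
                    key=lambda kv: cosine(q, Counter(kv[1].lower().split())),
                    reverse=True)
    extra = {t for _, desc in scored[:k] for t in desc.lower().split()} - set(q)
    return query + " " + " ".join(sorted(extra))

# Hypothetical concept vocabulary extracted from e-Books.
concepts = {
    "decision trees": "decision tree split entropy information gain pruning",
    "clustering": "cluster centroid k-means unsupervised distance",
}
print(refine_query("pruning a decision tree", concepts, k=1))
```

The refined query carries distinctive expert terms (here "entropy", "information gain") that a beginner would be unlikely to supply, which is the intuition behind closing the intent gap.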
3.
Mapping the semantic landscape of film: computational extraction of indices through film grammar. Adams, Brett, January 2002.
This thesis presents work aimed at exploiting the grammar of film for the purpose of automated film understanding, and addresses the semantic gap that exists between the simplicity of features that can currently be computed in automated content indexing systems and the richness of semantics in user queries posed for media search and retrieval. The problem is set within the broader context of the need for enabling technologies for multimedia content management, and arises in response to the growing volume of multimedia data made possible by advances in storage, processing, and transmission technologies. The first demonstration of this philosophy uses the attributes of motion and shot length to define and compute a novel measure of film tempo. Tempo flow plots are defined and derived for a number of full-length movies, and edge analysis is performed, leading to the extraction of dramatic story sections and events signaled by their unique tempo. In addition to the development of this computable tempo measure, a study is conducted into the usefulness of biasing it toward either of its constituents, namely motion or shot length. A refinement is also made to the shot-length normalizing mechanism, driven by the peculiar characteristics of shot-length distribution exhibited by movies. The next aspect of film examined is film rhythm. In the rhythm model presented, motion behaviour is classified as nonexistent, fluid or staccato for a given shot. Shot neighbourhoods in movies are then grouped by the proportional makeup of these motion behavioural classes to yield seven high-level rhythmic arrangements that prove adept at indicating likely scene content (e.g., dialogue or chase sequence). The second part of the investigation presents a novel computational model that detects editing patterns as metric, accelerated, decelerated, or free. It is also found that combined motion and editing rhythms allow us to determine that the media content has changed and to hypothesize as to why. Three such categories are presented along with their efficacy for capturing useful film elements (e.g., a scene change precipitated by a plot event). Finally, the first attempt to extract narrative structure, the prevalent 3-Act storytelling paradigm in film, is detailed. The identification of act boundaries in the narrative allows film to be structured at a level far higher than existing segmentation frameworks, which include shot detection and scene identification, and provides a reliable basis for inferences about the semantic content of dramatic events in film. Additionally, the narrative constructs identified have analogues in many other domains, including news, training video, and sitcoms, making these ideas widely applicable. A novel act-boundary posterior function for Acts 1 and 2 is derived using a Bayesian formulation under guidance from film grammar, tested under many configurations, and the results are reported for experiments involving 25 full-length movies. The framework is shown to have a role in both the automatic and semi-interactive settings for semantic analysis of film.
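One plausible formalization of a tempo measure that combines normalized shot length with motion, followed by a crude edge detector over the resulting tempo flow; the exact normalization, coefficients, and threshold are assumptions rather than the thesis's definitions:

```python
import statistics

def tempo_flow(shot_lengths, shot_motion, alpha=1.0, beta=1.0):
    """Tempo per shot: short shots and high motion -> high tempo.

    One plausible form: T(n) = alpha*(mean_len - len(n))/std_len + beta*m(n);
    not necessarily the thesis's exact normalization.
    """
    mu = statistics.mean(shot_lengths)
    sigma = statistics.stdev(shot_lengths) or 1.0
    return [alpha * (mu - l) / sigma + beta * m
            for l, m in zip(shot_lengths, shot_motion)]

def tempo_edges(flow, threshold=0.8):
    """Indices where tempo changes sharply; a crude stand-in for the
    edge analysis used to locate dramatic story sections."""
    return [i for i in range(1, len(flow))
            if abs(flow[i] - flow[i - 1]) > threshold]

lengths = [4.0, 3.5, 1.2, 1.0, 0.9, 5.0]   # seconds per shot (hypothetical)
motion  = [0.1, 0.2, 0.8, 0.9, 0.9, 0.1]   # normalized motion magnitude
print(tempo_edges(tempo_flow(lengths, motion)))
```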
4.
Image Retrieval using Automatic Region Tagging. Awg Iskandar, Dayang Nurfatimah (dnfaiz@fit.unimas.my), January 2008.
The task of automatically tagging, annotating or labelling image content with semantic keywords is a challenging problem. Automatically tagging images semantically, based on the objects that they contain, is essential for image retrieval. In addressing these problems, we explore techniques developed to combine textual descriptions of images with visual features, automatic region tagging and region-based ontology image retrieval. To evaluate the techniques, we use three corpora: Lonely Planet travel guide articles with images, Wikipedia articles with images, and Goats comic strips. In searching for similar images or textual information specified in a query, we explore the unification of textual descriptions and visual features (such as colour and texture) of the images. We compare the effectiveness of different retrieval similarity measures for the textual component. We also analyse the effectiveness of different visual features extracted from the images. We then investigate the best weight combination of textual and visual features. Using the queries from the Multimedia Track of INEX 2005 and 2006, we found that the best weight combination significantly improves the effectiveness of the retrieval system. Our findings suggest that image regions are better at capturing semantics, since we can identify specific regions of interest in an image. In this context, we develop a technique to tag image regions with high-level semantics. This is done by combining several shape feature descriptors and colour, using an equal-weight linear combination. We experimentally compare this technique with more complex machine-learning algorithms, and show that the equal-weight linear combination of shape features is simpler and at least as effective as using a machine-learning algorithm. We focus on the synergy between ontologies and image annotations, with the aim of reducing the gap between image features and high-level semantics. Ontologies ease information retrieval: they are used to mine, interpret, and organise knowledge. An ontology may be seen as a knowledge base that can be used to improve the image retrieval process, and conversely keywords obtained from automatic tagging of image regions may be useful for creating an ontology. We engineer an ontology that acts as a surrogate for concepts derived from image feature descriptors. We test the usability of the constructed ontology by querying it via the Visual Ontology Query Interface, which has a formally specified grammar known as the Visual Ontology Query Language. We show that synergy between ontology and image annotations is possible, and that this method can reduce the gap between image features and high-level semantics by providing the relationships between objects in the image. In this thesis, we conclude that suitable techniques for image retrieval include fusing the text accompanying images with visual features, automatic region tagging, and using an ontology to enrich the semantic meaning of the tagged image regions.
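A minimal sketch of an equal-weight linear combination of per-feature distances for region tagging, as described above; the feature names, the normalization, and the nearest-exemplar rule are illustrative assumptions, not the thesis's exact descriptors:

```python
import math

def equal_weight_distance(region_a, region_b):
    """Equal-weight linear combination of per-feature distances.

    region_*: dict mapping a feature name (e.g. 'colour', 'shape')
    to an equal-length vector; feature names are illustrative only.
    """
    dists = []
    for name in region_a:
        a, b = region_a[name], region_b[name]
        d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        scale = math.sqrt(sum(x * x for x in a)) + math.sqrt(sum(y * y for y in b)) or 1.0
        dists.append(d / scale)         # normalize so features are comparable
    return sum(dists) / len(dists)      # equal weights across features

def tag_region(region, labelled_exemplars):
    """Assign the tag of the nearest labelled exemplar region."""
    return min(labelled_exemplars,
               key=lambda tag: equal_weight_distance(region, labelled_exemplars[tag]))

exemplars = {
    "goat":  {"colour": [0.8, 0.7, 0.6], "shape": [0.2, 0.9]},
    "fence": {"colour": [0.4, 0.3, 0.2], "shape": [0.9, 0.1]},
}
print(tag_region({"colour": [0.7, 0.7, 0.5], "shape": [0.3, 0.8]}, exemplars))
```

The appeal of this scheme, as the abstract notes, is that it needs no training beyond the labelled exemplars yet can match more complex machine-learning baselines.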
5.
The semantic approach as an anti-physicalist renewal of the explanatory gap problem in contemporary philosophy of mind. Canning, Adrienne, 02 January 2014.
The contemporary philosopher Joseph Levine has argued that human phenomenological experience cannot be explained solely through the resources of neuroscience, and that a significant ‘explanatory gap’ exists between the rich features of human experience and scientific explanations of the mind. This thesis examines Giuseppina D’Oro’s novel suggestion that the gap exists, but that it is a semantic rather than an empirical problem. D’Oro argues that the ‘gap’ is a persistent philosophical problem because of its semantic nature, and that advances in neuroscience will fail to resolve it because its source is a conceptual distinction that is not marked by an empirical difference. In the thesis I discuss some virtues and difficulties of D’Oro’s position, and the implications her claim has more broadly for philosophers of mind.
6.
Bridging the Semantic Gap between Sensor Data and Ontological Knowledge. Alirezaie, Marjan, January 2015.
The rapid growth of sensor data can potentially enable better awareness of the environment for humans. For this, the interpretation of data needs to be human-understandable, so data interpretation may include semantic annotations that hold the meaning of numeric data. This thesis is about bridging the gap between quantitative data and qualitative knowledge to enrich the interpretation of data. A number of challenges make the automation of the interpretation process non-trivial, including the complexity of sensor data, the amount of available structured knowledge and the inherent uncertainty in data. Under the premise that high-level knowledge is contained in ontologies, this thesis investigates the use of current techniques in ontological knowledge representation and reasoning to confront these challenges. Our research is divided into three phases. The focus of the first phase is the interpretation of data for domains which are semantically poor in terms of available structured knowledge. During the second phase, we studied publicly available ontological knowledge for the task of annotating multivariate data; our contribution in this phase is the application of a diagnostic reasoning algorithm to available ontologies. Our studies during the last phase focused on the design and development of a domain-independent ontological representation model, equipped with a non-monotonic reasoning approach, for annotating time-series data; our last contribution concerns coupling an OWL-DL ontology with a non-monotonic reasoner. The experimental platforms used for validation consist of a network of sensors, including gas sensors whose generated data is complex; a secondary data set of time-series medical signals representing physiological data; and a number of publicly available ontologies, such as those in the NCBO BioPortal repository.
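A toy sketch of the annotation step only: numeric readings are mapped to symbolic concepts over which an ontology could then reason. The concept names and thresholds are invented for illustration; the thesis's actual pipeline couples a real OWL-DL ontology with a non-monotonic reasoner rather than hand-written rules:

```python
# Each rule pairs a hypothetical ontology concept with a predicate over
# one sensor reading. Real systems would derive these from the ontology.
READING_CONCEPTS = [
    ("ex:HighEthanolResponse", lambda r: r["gas"] == "ethanol" and r["value"] > 0.7),
    ("ex:Baseline",            lambda r: r["value"] < 0.1),
]

def annotate(reading):
    """Return the symbolic concepts that apply to a numeric reading."""
    return [concept for concept, applies in READING_CONCEPTS if applies(reading)]

print(annotate({"gas": "ethanol", "value": 0.85, "t": 12.5}))
# -> ['ex:HighEthanolResponse']
```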
7.
Ontologies dans les images satellitaires : interprétation sémantique des images / Ontologies for semantic interpretation of satellite images. Andrés, Samuel, 13 December 2013.
Given the technological development of sensors carried on board satellites, the volume of available satellite imagery is growing to the point that the question of its most effective exploitation now arises. The purpose of the CARTAM-SAT project is to streamline the processing chain from the satellites to the users of the images; this thesis is part of that framework. Image processing techniques have evolved over the years. Low-resolution images were handled with a so-called pixel approach, while high resolution has enabled the development of a so-called object approach. The latter analyses not isolated pixels but groups of pixels that represent concrete objects on the ground; in principle, these groups of pixels therefore carry semantics specific to the remote-sensing domain. Knowledge representation has evolved alongside satellite imagery. Representation standards have benefited from the expansion of the web, giving rise to standards such as OWL, which is largely based on description logics and therefore supports automated reasoners able to infer implicit knowledge. This thesis sits at the junction of these two fields and proposes an ontological approach to satellite image analysis. The aim is to formalize the different types of knowledge and conceptualizations implicitly used by image processing software and by remote-sensing experts, and then to reason automatically over an image description to obtain a semantic interpretation. This general principle admits many technical variations. The implementation consisted of a prototype combining a satellite image analysis library with an ontology-based reasoner. The implementation proposed in the thesis explores four technical variations of the principle, leading to discussions on the complementarity of the pixel and object analysis paradigms, the representation of certain spatial relations, and the place of knowledge relative to processing.
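A toy illustration of the pixel-versus-object contrast discussed above: pixels are first labelled individually, then grouped into connected regions whose aggregate properties support higher-level assertions. The class names and the size rule are assumptions for illustration, not the thesis's ontology:

```python
def regions(grid, cls):
    """4-connected components of cells whose pixel class equals cls."""
    seen, comps = set(), []
    for i in range(len(grid)):
        for j in range(len(grid[0])):
            if grid[i][j] == cls and (i, j) not in seen:
                stack, comp = [(i, j)], []
                while stack:
                    x, y = stack.pop()
                    if (x, y) in seen \
                       or not (0 <= x < len(grid) and 0 <= y < len(grid[0])) \
                       or grid[x][y] != cls:
                        continue
                    seen.add((x, y))
                    comp.append((x, y))
                    stack += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                comps.append(comp)
    return comps

# Hypothetical per-pixel classification of a tiny image.
grid = [["water", "water", "urban"],
        ["water", "urban", "urban"],
        ["field", "field", "urban"]]
for comp in regions(grid, "urban"):
    # Object-level rule: a large connected urban region is a different
    # concept from an isolated urban pixel (invented rule and names).
    kind = "ex:UrbanBlock" if len(comp) >= 3 else "ex:Building"
    print(kind, comp)
```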
8.
Architectural Introspection and Applications. Litty, Lionel, 30 August 2010.
Widespread adoption of virtualization has resulted in an increased interest in Virtual Machine (VM) introspection. To perform useful analysis of the introspected VMs, hypervisors must deal with the semantic gap between the low-level information available to them and the high-level OS abstractions they need. To bridge this gap, previous systems have relied on assumptions derived from the operating system source code or symbol information. As a consequence, the resulting systems create a tight coupling between the hypervisor and the operating systems run by the introspected VMs. This coupling is undesirable because any change to the internals of the operating system can render the output of the introspection system meaningless. In particular, malicious software can evade detection by making modifications to the introspected OS that break these assumptions.
Instead, in this thesis, we introduce Architectural Introspection, a new introspection approach that does not require information about the internals of the introspected VMs. Our approach restricts itself to leveraging constraints placed on the VM by the hardware and the external environment. To interact with both of these, the VM must use externally specified interfaces that are both stable and not linked with a specific version of an operating system. Therefore, systems that rely on architectural introspection are more versatile and more robust than previous approaches to VM introspection.
To illustrate the increased versatility and robustness of architectural introspection, we describe two systems, Patagonix and P2, that can be used to detect rootkits and unpatched software, respectively. We also detail Attestation Contracts, a new approach to attestation that relies on architectural introspection to improve on existing attestation approaches. We show that because these systems do not make assumptions about the operating systems used by the introspected VMs, they can be used to monitor both Windows and Linux based VMs. We emphasize that this ability to decouple the hypervisor from the introspected VMs is particularly useful in the emerging cloud computing paradigm, where the virtualization infrastructure and the VMs are managed by different entities. Finally, we show that these approaches can be implemented with low overhead, making them practical for real world deployment.
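A minimal sketch of the hardware-level identification idea behind a system like Patagonix: executing code is recognized by hashing the memory pages the MMU marks executable and comparing against hashes of known binaries, without reference to guest OS internals. The page contents and the database here are stand-ins, not the real interfaces:

```python
import hashlib

# Hypothetical database mapping page hashes to the binaries they belong to,
# built offline from trusted copies of the software.
KNOWN_PAGES = {
    hashlib.sha256(b"\x55\x48\x89\xe5" + b"\x00" * 4092).hexdigest(): "libc",
}

def identify_executable_pages(pages):
    """pages: iterable of 4 KiB byte strings, each trapped by the
    hypervisor the first time it is executed."""
    findings = []
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        findings.append(KNOWN_PAGES.get(digest, "UNKNOWN (possible rootkit)"))
    return findings

print(identify_executable_pages([b"\x55\x48\x89\xe5" + b"\x00" * 4092,
                                 b"\x90" * 4096]))
# -> ['libc', 'UNKNOWN (possible rootkit)']
```

Because the check rests only on the architecture's guarantee that code must occupy executable pages, it holds for Windows and Linux guests alike, which is the decoupling the thesis argues for.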
9.
Semantic view re-creation for the secure monitoring of virtual machines. Carbone, Martim, 28 June 2012.
The insecurity of modern-day software has created the need for security monitoring applications. Two serious deficiencies are commonly found in these applications. First, the absence of isolation from the system being monitored allows malicious software to tamper with them. Second, the lack of secure and reliable monitoring primitives in the operating system makes them easy to evade.
A technique known as Virtual Machine Introspection attempts to solve these problems by leveraging the isolation and mediation properties of full-system virtualization. A problem known as semantic gap, however, occurs as a result of the low-level separation enforced by the hypervisor.
This thesis proposes and investigates novel techniques to overcome the semantic gap, advancing the state of the art in syntactic and semantic view re-creation for applications that perform passive and active monitoring of virtual machines.
First, we propose a new technique for reconstructing a syntactic view of the guest OS kernel's heap state by applying a combination of static code analysis and dynamic memory analysis. Our key contribution is the accuracy and completeness of our analysis. We also propose a new technique that allows out-of-VM applications to invoke and securely execute API functions inside the monitored guest's kernel, eliminating the need for the application to know details of the guest's internals. Our key contribution here is the ability to overcome the semantic gap in a robust and secure manner. Finally, we propose a new virtualization-based event monitoring technique built on the interception of kernel data modifications. Our key contribution is the ability to monitor operating system events in a general and secure fashion.
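A schematic sketch of event monitoring via intercepted kernel data writes, in the spirit of the technique above: the hypervisor write-protects the pages holding watched structures and reports which field each trapped write touches. The addresses, field names, and trap callback are invented for illustration:

```python
# Hypothetical guest-physical addresses of watched kernel fields; a real
# system would derive these from the reconstructed view of kernel memory.
WATCHED = {0xffff8800_0000_1040: "task_list.next",
           0xffff8800_0000_2000: "module_list.next"}

def on_write_trap(fault_addr, new_value):
    """Invoked by the hypervisor when a write hits a protected page."""
    field = WATCHED.get(fault_addr)
    if field is not None:
        print(f"kernel event: {field} <- {new_value:#x}")
    # The hypervisor would then single-step the faulting instruction and
    # re-protect the page; that machinery is elided here.

on_write_trap(0xffff8800_0000_1040, 0xffff8800_0000_3000)
# -> kernel event: task_list.next <- 0xffff880000003000
```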