391 |
Simulation-based fault propagation analysis of process industry using process variable interaction analysis
Hosseini, Amir Hossein, 01 January 2013
There are increasing safety concerns in the chemical and petrochemical process industries. The huge explosion of the Nowruz oil field platform in the Persian Gulf, Iran, in 1983, along with other disastrous events, has affected the chemical industry and led to high demand for enhanced safety. Oil and chemical industries involve complex processes and handle hazardous materials that may potentially cause catastrophic consequences in terms of human losses, injuries, asset loss, and environmental stresses. One main reason for such catastrophic events is the lack of the effective control and monitoring approaches required to achieve successful fault diagnosis and accurate hazard identification. Currently, there are aggressive worldwide efforts to develop effective, robust, and highly accurate fault propagation analysis and monitoring techniques that prevent undesired events at an early stage, prior to their occurrence. Among these requirements is the development of an intelligent and automated control and monitoring system that first diagnoses faulty equipment and process variable deviations, and then identifies the hazards associated with those faults and deviations. Research into safety and control issues has become a high priority in all aspects. To support these needs, a predictive control and intelligent monitoring system is under study and development at the Energy Safety and Control Laboratory (ESCL) at the University of Ontario Institute of Technology (UOIT). The purpose of this research is to present a real-time fault propagation analysis method for the chemical/petrochemical process industry through a fault semantic network (FSN) using accurate process variable interactions (PV-PV interactions). The effectiveness, feasibility, and robustness of the proposed method are demonstrated on simulated data from the well-known Tennessee Eastman (TE) chemical process. Unlike most existing probabilistic approaches, the fault propagation analysis module classifies faults and identifies faulty equipment and deviations according to data obtained from the underlying processes. It is an expert system that identifies corresponding causes and consequences and links them together. The FSN is an integrated framework used to link fault propagation scenarios qualitatively and quantitatively. Probability and fuzzy rules are used for reasoning about causes and consequences and for tuning the FSN. / UOIT
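To make the propagation idea concrete, here is a minimal sketch of probabilistic fault propagation over a toy fault semantic network. The node names, edge probabilities, and max-product propagation rule are invented for illustration and are not the thesis's actual model.

```python
# Minimal sketch of fault propagation over a toy fault semantic network.
# Nodes, edge probabilities, and the max-product rule are hypothetical.
fsn = {
    "reactor_pressure_high": [("coolant_valve_stuck", 0.7),
                              ("feed_flow_surge", 0.4)],
    "coolant_valve_stuck":   [("reactor_temp_high", 0.8)],
    "feed_flow_surge":       [("separator_level_high", 0.5)],
    "reactor_temp_high":     [("runaway_reaction", 0.6)],
}

def propagate(root, belief=1.0, seen=None):
    """Propagate a fault belief through the network (max-product rule)."""
    seen = seen if seen is not None else {}
    for effect, p in fsn.get(root, []):
        b = belief * p
        if b > seen.get(effect, 0.0):      # keep the strongest causal path
            seen[effect] = b
            propagate(effect, b, seen)
    return seen

for effect, b in sorted(propagate("reactor_pressure_high").items(),
                        key=lambda kv: -kv[1]):
    print(f"{effect}: {b:.2f}")
```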
|
392 |
A Graph Approach to Measuring Text Distance
Tsang, Vivian, 26 February 2009
Text comparison is a key step in many natural language processing (NLP)
applications in which texts can be classified on the basis of their semantic
distance (how similar or different the texts are). For example, comparing the
local context of an ambiguous word with that of a known word can help identify
the sense of the ambiguous word. Typically, a distributional measure is used
to capture the implicit semantic distance between two pieces of text. In this
thesis, we introduce an alternative method of measuring the semantic distance
between texts as a combination of distributional information and
relational/ontological knowledge. In this work, we propose a novel distance
measure within a network-flow formalism that combines these two distinct
components such that they are not treated as separate, orthogonal
pieces of information. First, we represent each text as a collection of
frequency-weighted concepts within a relational thesaurus. Then, we make use
of a network-flow method which provides an efficient way of measuring the
semantic distance between two texts by taking advantage of the inherently
graphical structure in an ontology. We evaluate our method in a variety of
NLP tasks.
In our task-based evaluation, we find that our method performs well on two of
three tasks. We introduce a novel measure which is intended to capture how
well our network-flow method performs on a dataset (represented as a collection
of frequency-weighted concepts). In our analysis, we find that an integrated
approach, rather than a purely distributional or graphical analysis, is more
effective in explaining the performance inconsistency.
Finally, we address a complexity issue that arises from the overhead
required to incorporate more sophisticated concept-to-concept distances
into the network-flow framework. We propose a graph transformation
method which generates a pared-down network that requires less time to
process. The new method achieves a significant speed improvement, and
does not seriously hamper performance as a result of the transformation,
as indicated in our analysis.
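As a rough illustration of the network-flow distance described above, the sketch below computes an earth-mover-style distance between two frequency-weighted concept collections using a min-cost-flow solver. The concepts, weights, and ontology-derived costs are invented, and the thesis's actual formulation may differ.

```python
# Sketch of a network-flow (earth mover's style) distance between two
# texts represented as frequency-weighted concepts. All data are invented.
import networkx as nx

text_a = {"dog": 3, "cat": 1}                      # concept frequencies
text_b = {"animal": 2, "pet": 2}
dist = {("dog", "animal"): 1, ("dog", "pet"): 1,   # concept-to-concept
        ("cat", "animal"): 1, ("cat", "pet"): 2}   # costs from an ontology

G = nx.DiGraph()
total = sum(text_a.values())
for c, w in text_a.items():
    G.add_node(("a", c), demand=-w)                # negative demand = supply
for c, w in text_b.items():
    G.add_node(("b", c), demand=w)
for (ca, cb), d in dist.items():
    G.add_edge(("a", ca), ("b", cb), weight=d)

cost = nx.min_cost_flow_cost(G)                    # optimal transport cost
print(f"flow distance: {cost / total:.2f}")
```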
|
393 |
External Argument Introducers
Kim, Kyumin, 10 January 2012
This thesis shows that the mapping of semantics to syntax can be more complex than is generally assumed. In general, the mapping of semantics to syntax is thought to be many-to-one; for instance, many types of external argument roles are mapped to a subject position, and a theme or patient role is mapped to an object position. Contrary to this view, I show, by studying the syntax and semantics of external arguments, that one-to-one mapping between syntax and semantics is possible. External arguments are generally assumed to be introduced by a functional head, called Voice or v, regardless of the semantics of the argument, rather than being actual arguments of the verbs. A high Appl head similar to Voice has recently been argued to introduce external arguments as well as arguments of other semantic types. At present, no theory proposes how these heads are distinguished in argument structure. This thesis articulates the differences between the external-argument-introducing heads and explores the consequences of these differences. Moreover, it proposes a new type of event-related applicative, namely peripheral Appl. Like Voice and high Appl, peripheral Appl introduces an argument external to the verb phrase. The key differences among the external-argument-introducing heads lie in their semantics as well as their syntactic position. Semantically, Voice is specified for agentivity, but high and peripheral Appls are specified for non-agentivity. Syntactically, high Appl merges below Voice, not above, while peripheral Appl can merge above Voice. An important result emerging from this thesis is that not all external arguments are treated in the same way in syntax: not only are agent and non-agent external argument roles mapped onto different positions, but different types of non-agent roles are also mapped onto different positions.
|
396 |
Constructions, Semantic Compatibility, and Coercion: An Empirical Usage-based Approach
Yoon, Soyeon, 24 July 2013
This study investigates the nature of semantic compatibility between constructions and the lexical items that occur in them, its relation to language use, and the related concept of coercion, from a usage-based approach to language, in which linguistic knowledge (grammar) is grounded in language use.
This study shows that semantic compatibility between linguistic elements is a gradient phenomenon, and that speakers' knowledge about the degree of semantic compatibility is intimately correlated with language use. To show this, I investigate two constructions of English: the sentential complement construction and the ditransitive construction. I examine speakers' knowledge of the semantic compatibility between the constructions and lexical items and compare it with empirical data obtained from linguistic corpora and from experiments on sentence processing and acceptability judgments. My findings show that the relative semantic compatibility of a lexical item and a construction is significantly correlated with the frequency of their co-occurrence, with processing effort, and with speakers' acceptability judgments of the co-occurrences.
The empirical data show that a lexical item and a construction that are less than fully compatible can actually be used together when the incompatibility is resolved. The resolution of the semantic incompatibility between a lexical item and its host construction has been called coercion. Coercion has been invoked as a theoretical concept without being examined in depth, particularly without regard to language use. By correlating degree of semantic compatibility with empirical data on language use, this study shows that coercion is an actual psychological process that occurs during the composition of linguistic elements. Moreover, by examining in detail how the semantics of a lexical item and a construction interact to reconcile the incompatibility, this study reveals that coercion is semantic integration that involves not only the dynamic interaction of linguistic components but also non-linguistic context.
Investigating semantic compatibility and coercion in detail with empirical data tells us about the processes by which speakers compose linguistic elements into larger units. It also supports the usage-based model's assumption that grammar and usage are not independent, and ultimately sheds light on the dynamic aspect of our linguistic system.
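The following sketch illustrates the kind of compatibility/usage correlation analysis reported above. The verbs, compatibility ratings, corpus counts, and judgment scores are invented for illustration; this is not the study's actual data or procedure.

```python
# Sketch of correlating semantic compatibility with usage data.
# All numbers below are invented for illustration.
from scipy.stats import spearmanr

verbs         = ["give", "send", "throw", "whisper", "sneeze"]
compatibility = [0.95, 0.90, 0.70, 0.40, 0.10]   # fit with the ditransitive
corpus_freq   = [5200, 3100, 480, 35, 2]          # co-occurrence counts
acceptability = [6.8, 6.5, 5.9, 3.7, 1.9]         # 7-point judgment means

rho_f, p_f = spearmanr(compatibility, corpus_freq)
rho_a, p_a = spearmanr(compatibility, acceptability)
print(f"compatibility ~ frequency:     rho={rho_f:.2f} (p={p_f:.3f})")
print(f"compatibility ~ acceptability: rho={rho_a:.2f} (p={p_a:.3f})")
```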
|
398 |
Multiple feature temporal models for the characterization of semantic video contents
Sánchez Secades, Juan María, 11 December 2003
The high-level structure of a video can be obtained once we have knowledge about the domain plus a representation of the contents that provides semantic information. In this context, intermediate-level semantic representations are defined in terms of low-level features and the information they convey about the contents of the video. Intermediate-level representations allow us to obtain semantically meaningful clusterings of shots, which are then used together with high-level domain-specific knowledge in order to obtain the structure of the video. Intermediate-level representations are usually domain-dependent as well. The descriptors involved in the representation are specifically tailored for the application, taking into account the requirements of the domain and the knowledge we have about it. This thesis proposes an intermediate-level representation of video contents that allows us to obtain semantically meaningful clusterings of shots.
This representation does not depend on the domain, but still provides enough information to obtain the high-level structure of the video by combining the contributions of different low-level image features to the intermediate-level semantics.
Intermediate-level semantics are implicitly supplied by low-level features, given that a specific semantic concept generates some particular combination of feature values. The problem is to bridge the gap between observed low-level features and their corresponding hidden intermediate-level semantic concepts. Computer vision and image processing techniques are used to establish relationships between them. Other disciplines such as filmmaking and semiotics also provide important clues to discover how low-level features are used to create semantic concepts. A proper descriptor of low-level features can provide a representation of their corresponding semantic contents. In particular, color summarized as a histogram is used to represent the appearance of objects. When this object is the background, color provides information about location. In the same way, the semantics conveyed by a description of motion have been analyzed in this thesis. A summary of motion features as a temporal co-occurrence matrix provides information about camera operation and the type of shot in terms of the relative distance of the camera to the subject matter.
The main contribution of this thesis is a representation of visual contents in video based on summarizing the dynamic behavior of low-level features as temporal processes described by Markov chains (MCs). The states of the MC are given by the values of an observed low-level feature. Unlike keyframe-based representations of shots, information from all the frames is considered in the MC modeling. Natural similarity measures such as likelihood ratios and Kullback-Leibler divergence are used to compare MCs, and thus the contents of the shots they represent. In this framework, multiple image features can be combined in the same representation by coupling their corresponding MCs. Different ways of coupling MCs are presented, particularly the one called Coupled Markov Chains (CMC). A method to find the optimal coupling structure in terms of minimal cost and minimal loss of information is detailed in this dissertation. The loss of information is directly related to the loss of accuracy of the coupled structure in representing video contents. During the same process of computing shot representations, the boundaries between shots are detected using the same modeling of contents and similarity measures.
When color and motion features are combined, the CMC representation provides an intermediate-level semantic descriptor that implicitly contains information about objects (their identities, sizes, and motion patterns), camera operation, location, type of shot, temporal relationships between elements of the scene, and global activity understood as the amount of action. More complex semantic concepts emerge from the combination of these intermediate-level descriptors, such as a "talking head" that combines a close-up with the skin color of a face. Adding the location component in the News domain, talking heads can be further classified into "anchors" (located in the studio) and "correspondents" (located outdoors). These and many other semantically meaningful categories are discovered when shots represented using the CMC model are clustered in an unsupervised way.
Well-defined concepts are given by compact clusters, which can be determined by a measure of their density. High-level domain knowledge can then be defined by simple rules on these salient concepts, which will establish boundaries in the semantic structure of the video. The CMC modeling of video shots unifies the first steps of the video analysis process, providing an intermediate-level semantically meaningful representation of contents without prior shot boundary detection.
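A minimal sketch of the MC shot representation, assuming quantized low-level feature sequences: transition matrices are estimated per shot and compared with Kullback-Leibler divergence. The sequences and the four-state quantization are invented, and the coupled (CMC) structure is not shown.

```python
# Sketch of the MC shot representation: estimate a transition matrix from
# a quantized feature sequence and compare two shots with KL divergence.
import numpy as np

def transition_matrix(states, n_bins):
    """Row-stochastic transition matrix estimated from a state sequence."""
    T = np.ones((n_bins, n_bins))             # add-one smoothing avoids zeros
    for s, t in zip(states[:-1], states[1:]):
        T[s, t] += 1
    return T / T.sum(axis=1, keepdims=True)

def kl_between_chains(P, Q):
    """Average per-row KL divergence between two transition matrices."""
    return float(np.mean(np.sum(P * np.log(P / Q), axis=1)))

shot_a = [0, 0, 1, 1, 2, 2, 1, 0, 0, 1]       # invented quantized features
shot_b = [3, 2, 3, 3, 2, 1, 2, 3, 3, 2]

P = transition_matrix(shot_a, n_bins=4)
Q = transition_matrix(shot_b, n_bins=4)
print(f"KL(P||Q) = {kl_between_chains(P, Q):.3f}")
```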
|
399 |
Spatial Ontology for the Production Domain of Petroleum Geology
Liadey, Dickson M., 11 May 2012
The availability of useful information for research strongly depends on well-structured relationships between consistently defined concepts (terms) in a domain. This can be achieved through ontologies. Ontologies are models of the knowledge of a specific domain, such as petroleum geology, in a computer-understandable format. Knowledge is a collection of facts, and facts are represented by RDF triples (subject-predicate-object). A domain ontology is therefore a collection of many RDF triples, which represent facts about that domain. The SWEET ontologies are upper, or top-level, ontologies (foundation ontologies) consisting of thousands of very general concepts drawn from Earth system science and related areas. The work in this thesis deals with scientific knowledge representation: the SWEET ontologies are extended to include broader, more specific, and specialized concepts used in petroleum geology. This thesis thus presents a knowledge model for petroleum geology.
|
400 |
Life-long mapping of objects and places in domestic environments
Rogers, John Gilbert, 10 January 2013
In the future, robots will expand from industrial and research applications into the home. Domestic service robots will work in the home to perform useful tasks such as object retrieval, cleaning, organization, and security. The tireless support of these systems will not only enable able-bodied people to avoid mundane chores; it will also enable the elderly to remain independent of institutional care by providing service, safety, and companionship. Robots will need to understand the relationship between objects and their environments to perform some of these tasks. Structured indoor environments are organized according to architectural guidelines and for the convenience of their residents. Utilizing this information makes it possible to predict the location of objects. Conversely, one can also predict the function of a room from the detection of a few objects within a given space.
This thesis introduces a framework for combining object permanence and context called the probabilistic cognitive model. This framework combines reasoning about the spatial extent of places with reasoning about the identity of objects and their relationships to one another and to the locations where they appear. This type of reasoning takes into account the context in which objects appear to determine their identity and purpose. The probabilistic cognitive model combines a mapping system called OmniMapper with a conditional random field probabilistic model for context representation. The conditional random field models the dependencies between location and identity in a real-world domestic environment. This model is used by mobile robot systems to predict the effects of their actions during autonomous object search tasks in unknown environments.
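The toy sketch below illustrates object-place context reasoning of the kind described above, using a simple naive Bayes rule rather than the thesis's conditional random field; the rooms, objects, and probabilities are invented.

```python
# Toy object-place context reasoning (not the thesis's CRF or OmniMapper):
# infer a room's type from detected objects. All probabilities are invented.
import math

p_room = {"kitchen": 0.3, "office": 0.3, "bathroom": 0.4}
p_obj_given_room = {                       # P(object present | room type)
    "kitchen":  {"mug": 0.7,  "keyboard": 0.05, "towel": 0.3},
    "office":   {"mug": 0.4,  "keyboard": 0.9,  "towel": 0.02},
    "bathroom": {"mug": 0.05, "keyboard": 0.01, "towel": 0.9},
}

def room_posterior(detected):
    """P(room | detected objects), assuming conditional independence."""
    scores = {r: math.log(p) for r, p in p_room.items()}
    for obj in detected:
        for r in scores:
            scores[r] += math.log(p_obj_given_room[r][obj])
    z = max(scores.values())                         # for numerical stability
    exp = {r: math.exp(s - z) for r, s in scores.items()}
    total = sum(exp.values())
    return {r: v / total for r, v in exp.items()}

print(room_posterior(["mug", "keyboard"]))           # office should dominate
```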
|