41.
Abstraction, analogy and induction : toward a general account of ampliative inference / Barker, Gillian Abernathy, January 1997
Thesis (Ph. D.)--University of California, San Diego, 1997. / Vita. Includes bibliographical references (leaves 239-255).
42.
L'intellect agent et son rôle d'abstraction (The agent intellect and its role in abstraction) / Cromp, Germaine. January 1900
Fourth part of: Thesis--Philosophy--Montréal--Université de Montréal, 1980. / Dated 1980 according to "Canadiana". Bibliography: pp. 213-228.
43.
Surveying American and Turkish middle school students' existing knowledge of earthquakes by using a systemic network / Oguz, Ayse. January 2005
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Includes bibliographical references (p. 135-145).
44.
Bridging abstraction layers in process mining / Baier, Thomas; Mendling, Jan; Weske, Mathias. 12 1900 (PDF)
While the maturity of process mining algorithms increases and more process mining tools enter the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Current approaches for event log abstraction try to abstract from the events in an automated way that does not capture the required domain knowledge to fit business activities. This can lead to misinterpretation of discovered process models. We developed an approach that aims to abstract an event log to the same abstraction level that is needed by the business. We use domain knowledge extracted from existing process documentation to semi-automatically match events and activities. Our abstraction approach is able to deal with n:m relations between events and activities and also supports concurrency. We evaluated our approach in two case studies with a German IT outsourcing company.
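A minimal sketch of the event-to-activity lifting the abstract describes, under stated assumptions: the EVENT_TO_ACTIVITIES mapping and the incident-management labels below are hypothetical stand-ins for the mapping the authors extract semi-automatically from process documentation, and collapsing consecutive events that map onto the same activity is only one crude way to realise n:m relations between events and activities.

```python
# Hypothetical many-to-many mapping from low-level event names to business
# activities, standing in for knowledge extracted from process documentation.
EVENT_TO_ACTIVITIES = {
    "create_ticket":   ["Register Incident"],
    "assign_engineer": ["Register Incident", "Dispatch"],  # one event, two activities
    "close_ticket":    ["Resolve Incident"],
    "send_report":     ["Resolve Incident"],                # several events, one activity
}

def abstract_trace(trace):
    """Lift a trace of low-level events to the business-activity level.

    Consecutive events that map onto the same activity are collapsed into a
    single high-level step; an event that maps onto several activities
    contributes to each of them.
    """
    abstracted = []
    for event in trace:
        for activity in EVENT_TO_ACTIVITIES.get(event, ["<unmapped: " + event + ">"]):
            if not abstracted or abstracted[-1] != activity:
                abstracted.append(activity)
    return abstracted

print(abstract_trace(["create_ticket", "assign_engineer", "close_ticket", "send_report"]))
# ['Register Incident', 'Dispatch', 'Resolve Incident']
```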
45.
Analyse de systèmes modulaires à l'aide de techniques d'abstractions hiérarchiques / Analysing modular systems with hierarchical abstractions / Le Cornec, Yves-Stan. 11 July 2016
In this thesis, we are interested in limiting the combinatorial explosion that occurs when model-checking modular systems. We use hierarchical abstraction techniques, which allow one to build an abstraction of a modular system by composing abstractions of its parts, while ensuring that this abstraction does not change the temporal properties we are interested in. First, we define the modular regulation networks formalism in order to apply hierarchical abstraction techniques to the study of biological systems. We then use this approach to find the reachable stable states of a multi-cellular model involved in the development of the fruit fly embryo. For this, we use the abstraction called SAFETY, which reduces a system while keeping all of its reachable stable states. This is a classical reduction and quite general in the sense that it also preserves all the safety properties of the model. After this, we define two new reduction operations, which depend on the μ-calculus formula we seek to verify on the global system. We also assume that we know the initial state of the module or sub-system to be reduced. Since these reductions must only preserve the value of one formula over the global system, they can return smaller systems than the general ones.
While computing these reductions on one module or sub-system, we use partial analysis techniques to test whether it contains enough information to conclude about the truth value of the formula on the global system. If it does, we can stop the incremental analysis right away; otherwise, this step is still useful for computing the reduced sub-system. Finally, we use a prototype to test the first of our reduction operations on some simple examples. This lets us observe how large the reductions are and gather data for later tackling the problem of the order of hierarchical analysis. This is a difficult and important question, since the order in which the abstraction is built has a strong influence on the efficiency of these methods.
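As a hedged illustration of the compositional setting described above, the sketch below models modules as small labelled transition systems, builds their synchronous product only over reachable states, and then lists the reachable stable states. The encoding and the helper names (compose, reachable_stable_states) are assumptions made for this example; the SAFETY reduction and the formula-dependent reductions of the thesis are not reproduced here.

```python
def compose(m1, m2, shared):
    """Synchronous product of two modules, synchronising on the `shared` labels.

    A module is encoded as (initial_state, {state: {label: next_state}});
    only states reachable from the joint initial state are ever built.
    """
    (i1, t1), (i2, t2) = m1, m2
    init = (i1, i2)
    trans, frontier, seen = {}, [init], {init}
    while frontier:
        s1, s2 = frontier.pop()
        out = {}
        for a in set(t1.get(s1, {})) | set(t2.get(s2, {})):
            if a in shared:
                if a in t1.get(s1, {}) and a in t2.get(s2, {}):
                    nxt = (t1[s1][a], t2[s2][a])   # both modules move together
                else:
                    continue                        # blocked until both are ready
            elif a in t1.get(s1, {}):
                nxt = (t1[s1][a], s2)               # local move of the first module
            else:
                nxt = (s1, t2[s2][a])               # local move of the second module
            out[a] = nxt
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
        trans[(s1, s2)] = out
    return init, trans

def reachable_stable_states(module):
    """Stable states (no outgoing transition) reachable from the initial state."""
    _, trans = module
    return {s for s in trans if not trans[s]}

# Two toy modules synchronising on the label "sync".
mA = ("a0", {"a0": {"go": "a1"}, "a1": {"sync": "a2"}, "a2": {}})
mB = ("b0", {"b0": {"sync": "b1"}, "b1": {}})
print(reachable_stable_states(compose(mA, mB, shared={"sync"})))   # {('a2', 'b1')}
```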
46.
Mytran: A Programming Language for Data Abstraction / Snider, Timothy West. January 1981
This project is about the design and implementation of a new programming language, Mytran. Two new control statements are implemented in Mytran. Data abstraction is supported through parameterized types or "type constructors". / Thesis / Master of Science (MSc)
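Mytran itself is not readily available, so the following Python analogue only illustrates the general idea of data abstraction through parameterized types ("type constructors"): one generic definition yields a family of concrete abstract data types. It makes no claim about Mytran's actual syntax or semantics.

```python
from typing import Generic, List, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A stack parameterized by its element type: Stack[int], Stack[str], ..."""

    def __init__(self) -> None:
        self._items: List[T] = []   # representation hidden behind the abstraction

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

ints: Stack[int] = Stack()
ints.push(3)
print(ints.pop())   # 3
```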
47.
Empirical Validation Of Requirement Error Abstraction And Classification: A Multidisciplinary Approach / Walia, Gursimran Singh. 05 August 2006
Software quality and reliability are primary concerns for successful development organizations. Over the years, researchers have focused on monitoring and controlling quality throughout the software process by helping developers detect as many faults as possible using different fault-based techniques. This thesis analyzed the software quality problem from a different perspective by taking a step back from faults to abstract their fundamental causes. The first step in this direction is developing a process for abstracting errors from faults throughout the software process. I have described the error abstraction process (EAP) and used it to develop an error taxonomy for the requirement stage. This thesis presents the results of a study that uses techniques based on the error abstraction process and investigates its application to requirement documents. The initial results show promise and provide some useful insights, which are important for our further investigation.
48.
"The Length of Our Vision": Thoreau, Berry, and SustainabilityGibbs, Jared Andrew 12 May 2010 (has links)
The past several years have seen increased awareness of environmental degradation, climate change, and energy concerns—and with good reason; addressing the problem of sustainability is vital if American culture is to both persist and thrive. Because this issue affects all aspects of our lives, it can easily seem overwhelming, encouraging the belief that solutions to these problems lie beyond the scope of individual action. This study seeks to identify legitimate personal responses one can make to issues of sustainability.
I approach this subject with an eye toward answering a simple series of questions: Where are we? How did we get here? Where are we going? Is that where we want to go? I briefly investigate the history of the idea of progress, focusing especially on our culture's fascination with and embrace of technological progress. Following this investigation, I examine two works that offer critiques of progress: Thoreau's classic text, Walden, and Wendell Berry's The Unsettling of America. These texts are chosen for a few reasons. First, a clear tradition of critical inquiry can be traced from Thoreau to Berry. Second, the historical distance between these authors makes a comparison of their work particularly illuminating. Though they are citizens of the same country, speak the same language, and ask similar questions, each author writes in response to different worlds—Thoreau's just beginning to embrace industrial capitalism and technological progress, and Berry's very much the product of that embrace. Most importantly, however, both authors focus on individual action and responsibility. / Master of Arts
49.
APPLICATIONS OF A HARDWARE SPECIFICATION FOR INSTRUMENTATION METADATA / Hamilton, John; Fernandes, Ronald; Graul, Mike; Jones, Charles H. 10 1900
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / In this paper, we discuss the benefits of maintaining a neutral-format hardware specification along with the telemetry metadata specification. We present several reasons and methods for maintaining the hardware specification, as well as several potential uses for it. These uses include cross-validation with the telemetry metadata and automatic generation of both metadata and instrumentation networks.
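A hedged sketch of the cross-validation use mentioned above: the dictionaries below are hypothetical, highly simplified stand-ins for a neutral-format hardware specification and a telemetry metadata specification, and the field names and the cross_validate helper are assumptions for illustration rather than the paper's actual formats.

```python
# Hypothetical, highly simplified records; real specifications are far richer.
hardware_spec = {
    "channels": {"CH01": {"units": "degC"}, "CH02": {"units": "V"}},
}
telemetry_metadata = {
    "measurements": {
        "engine_temp": {"channel": "CH01", "units": "degC"},
        "bus_voltage": {"channel": "CH02", "units": "mV"},    # unit mismatch
        "fuel_flow":   {"channel": "CH07", "units": "kg/s"},  # channel not in hardware spec
    },
}

def cross_validate(hw, tm):
    """Report measurements whose channel or units disagree with the hardware spec."""
    issues = []
    for name, m in tm["measurements"].items():
        channel = hw["channels"].get(m["channel"])
        if channel is None:
            issues.append(f"{name}: channel {m['channel']} is missing from the hardware specification")
        elif channel["units"] != m["units"]:
            issues.append(f"{name}: units {m['units']} differ from hardware units {channel['units']}")
    return issues

for issue in cross_validate(hardware_spec, telemetry_metadata):
    print(issue)
```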
50.
Learning domain abstractions for long lived robots / Rosman, Benjamin Saul. January 2014
Recent trends in robotics have seen more general purpose robots being deployed in unstructured environments for prolonged periods of time. Such robots are expected to adapt to different environmental conditions, and ultimately take on a broader range of responsibilities, the specifications of which may change online after the robot has been deployed. We propose that in order for a robot to be generally capable in an online sense when it encounters a range of unknown tasks, it must have the ability to continually learn from a lifetime of experience. Key to this is the ability to generalise from experiences and form representations which facilitate faster learning of new tasks, as well as the transfer of knowledge between different situations. However, experience cannot be managed naïvely: one does not want constantly expanding tables of data, but instead continually refined abstractions of the data – much like humans seem to abstract and organise knowledge. If this agent is active in the same, or similar, classes of environments for a prolonged period of time, it is provided with the opportunity to build abstract representations in order to simplify the learning of future tasks. The domain is a common structure underlying large families of tasks, and exploiting this affords the agent the potential to not only minimise relearning from scratch, but over time to build better models of the environment. We propose to learn such regularities from the environment, and extract the commonalities between tasks.

This thesis aims to address the major question: what are the domain invariances which should be learnt by a long lived agent which encounters a range of different tasks? This question can be decomposed into three dimensions for learning invariances, based on perception, action and interaction. We present novel algorithms for dealing with each of these three factors.

Firstly, how does the agent learn to represent the structure of the world? We focus here on learning inter-object relationships from depth information as a concise representation of the structure of the domain. To this end we introduce contact point networks as a topological abstraction of a scene, and present an algorithm based on support vector machine decision boundaries for extracting these from three dimensional point clouds obtained from the agent’s experience of a domain. By reducing the specific geometry of an environment into general skeletons based on contact between different objects, we can autonomously learn predicates describing spatial relationships.

Secondly, how does the agent learn to acquire general domain knowledge? While the agent attempts new tasks, it requires a mechanism to control exploration, particularly when it has many courses of action available to it. To this end we draw on the fact that many local behaviours are common to different tasks. Identifying these amounts to learning “common sense” behavioural invariances across multiple tasks. This principle leads to our concept of action priors, which are defined as Dirichlet distributions over the action set of the agent. These are learnt from previous behaviours, and expressed as the prior probability of selecting each action in a state, and are used to guide the learning of novel tasks as an exploration policy within a reinforcement learning framework.

Finally, how can the agent react online with sparse information? There are times when an agent is required to respond fast to some interactive setting, when it may have encountered similar tasks previously. To address this problem, we introduce the notion of types, being a latent class variable describing related problem instances. The agent is required to learn, identify and respond to these different types in online interactive scenarios. We then introduce Bayesian policy reuse as an algorithm that involves maintaining beliefs over the current task instance, updating these from sparse signals, and selecting and instantiating an optimal response from a behaviour library.

This thesis therefore makes the following contributions. We provide the first algorithm for autonomously learning spatial relationships between objects from point cloud data. We then provide an algorithm for extracting action priors from a set of policies, and show that considerable gains in speed can be achieved in learning subsequent tasks over learning from scratch, particularly in reducing the initial losses associated with unguided exploration. Additionally, we demonstrate how these action priors allow for safe exploration, feature selection, and a method for analysing and advising other agents’ movement through a domain. Finally, we introduce Bayesian policy reuse which allows an agent to quickly draw on a library of policies and instantiate the correct one, enabling rapid online responses to adversarial conditions.
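A minimal sketch of the action-prior idea as the abstract describes it, assuming discrete states and actions: counts of how often each action was chosen in each state across previously solved tasks are treated as Dirichlet parameters, and their normalised means bias exploration on a new task. The class and method names below are hypothetical, and the surrounding reinforcement learning loop of the thesis is not shown.

```python
import random
from collections import defaultdict

class ActionPriors:
    """Dirichlet-style action priors learnt from previously solved tasks."""

    def __init__(self, actions, alpha0=1.0):
        self.actions = list(actions)
        self.alpha0 = alpha0                              # uninformative pseudo-count
        self.counts = defaultdict(lambda: defaultdict(float))

    def update_from_policy(self, policy):
        """`policy` maps state -> action chosen in one previously solved task."""
        for state, action in policy.items():
            self.counts[state][action] += 1.0

    def prior(self, state):
        """Mean of the Dirichlet: prior probability of each action in `state`."""
        alphas = [self.alpha0 + self.counts[state][a] for a in self.actions]
        total = sum(alphas)
        return {a: alpha / total for a, alpha in zip(self.actions, alphas)}

    def sample_exploratory_action(self, state):
        """Bias exploration towards actions that were useful in earlier tasks."""
        p = self.prior(state)
        return random.choices(self.actions, weights=[p[a] for a in self.actions])[0]

# Two earlier tasks mostly chose "forward" in the corridor state, so
# exploration in a new task is biased the same way.
priors = ActionPriors(actions=["forward", "left", "right"])
priors.update_from_policy({"corridor": "forward", "junction": "left"})
priors.update_from_policy({"corridor": "forward", "junction": "right"})
print(priors.prior("corridor"))
print(priors.sample_exploratory_action("corridor"))
```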