121. Moral Fallibilism / Spino, Amy / 05 June 2023
No description available.
122. Orientation discrimination in periphery: Surround suppression or crowding? / Gong, Mingliang / 05 May 2015
No description available.
123. Minkowski's Linear Forms Theorem in Elementary Function Arithmetic / Knapp, Greg / 30 August 2017
No description available.
124. Modelling Fault Tolerance using Deontic Logic: a case study / Khan, Ahmed Jamil / 04 1900
Many computer systems in our daily lives run highly available applications (such as medical equipment), and others run in difficult-to-access places (such as satellites). These systems are subject to a variety of potential failures that may degrade their performance, so the ability to reason about faults and their impact on systems is gaining considerable attention. Existing work on fault tolerance has mostly addressed faults at the programming language level. More recently, significant efforts have been made to use formal methods to specify and verify fault tolerant systems in order to provide more reliable software. In this connection, some researchers have pointed out that Deontic Logic is useful for reasoning about fault tolerant systems because of its expressiveness in defining norms, which describe expected behaviour and prescribe what happens when those norms are violated.

In this thesis, we demonstrate how Deontic Logic can be used to model an existing real-world problem concerning fault tolerance mechanisms. We consider different situations that a vehicle faces on the road and the consequent reactions of the driver or vehicle, based on good and bad behaviour. The idea and motivation for this case study come from the SASPENCE sub-project, conducted under the European Integrated Project PReVENT, which focuses on a vehicle's behaviour in maintaining safe speed and safe distance on the road. As a first modelling attempt, we use a Propositional Deontic Logic approach, to gauge how far this logic can be applied to a real-world problem. Subsequently, we use a First Order Deontic Logic approach, as it supports parameters and quantification over them, which is more useful for modelling real-world scenarios.

We state and prove some interesting expected properties of the models using a First Order proof system. Based on these modelling exercises, we draw out engineering ideas and lessons and present them in this thesis in order to aid the modelling of future fault tolerant systems. / Master of Science (MSc)
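The following fragment sketches, purely as background, what a norm and its violation handling can look like in First Order Deontic Logic. The predicates and the specific norm are illustrative assumptions for a safe-distance scenario, not formulas taken from the thesis: O is the obligation operator, and the second formula is a contrary-to-duty norm prescribing the required reaction once the primary norm is violated.

\[
\forall v\,\forall u\;\bigl(\mathit{ahead}(u,v)\rightarrow O\,\mathit{safeDist}(v,u)\bigr)
\]
\[
\forall v\,\forall u\;\bigl(\mathit{ahead}(u,v)\wedge\neg\,\mathit{safeDist}(v,u)\rightarrow O\,\mathit{reduceSpeed}(v)\bigr)
\]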
125. Evaluation of Uncertainty in Hydrodynamic Modeling / Camacho Rincon, Rene Alexander / 17 August 2013
Uncertainty analysis in hydrodynamic modeling is useful for identifying and reporting the limitations of a model caused by different sources of error. In practice, the main sources of error are model structure errors, errors in the input data due to measurement imprecision among other causes, and parametric errors resulting from the difficulty of identifying physically representative parameter values valid at the temporal and spatial scale of the models. This investigation identifies, implements, evaluates, and recommends a set of methods for the evaluation of model structure uncertainty, parametric uncertainty, and input data uncertainty in hydrodynamic modeling studies. A comprehensive review of uncertainty analysis methods is provided, and a set of widely applied methods is implemented in real case studies, identifying the main limitations and benefits of their use in hydrodynamic studies. In particular, the following methods are investigated: First Order Variance Analysis (FOVA), Monte Carlo Uncertainty Analysis (MCUA), Bayesian Monte Carlo (BMC), Markov Chain Monte Carlo (MCMC), and Generalized Likelihood Uncertainty Estimation (GLUE). The results of this investigation indicate that the uncertainty estimates computed with FOVA are consistent with those obtained by MCUA. In addition, the comparison of BMC, MCMC, and GLUE indicates that BMC and MCMC provide similar estimates of the posterior parameter probability distributions, single-point parameter values, and uncertainty bounds, mainly because they use the same likelihood function and because few parameters are involved in the inference process. However, the implementation of MCMC is substantially more complex than that of BMC, since its sampling algorithm requires a careful definition of auxiliary proposal probability distributions, along with their variances, to obtain parameter samples that effectively belong to the posterior parameter distribution. The analysis also suggests that the results of GLUE are inconsistent with those of BMC and MCMC. It is concluded that BMC is a powerful and parsimonious strategy for evaluating all the sources of uncertainty in hydrodynamic modeling. Despite its computational requirements, BMC can be easily implemented in most practical applications.
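To make the MCUA idea concrete, here is a minimal sketch of Monte Carlo uncertainty propagation in Python. The toy stage-discharge relation, the assumed parameter distribution, and all names are illustrative assumptions and do not come from the case studies in the dissertation:

```python
import numpy as np

# Toy stand-in for a hydrodynamic model: water depth as a function of a
# Manning-style roughness parameter (hypothetical relation for illustration).
def model(roughness, discharge=100.0):
    return (roughness * discharge) ** 0.6

rng = np.random.default_rng(seed=0)

# Parametric uncertainty: sample the roughness coefficient from an assumed
# normal distribution, discarding non-physical (negative) draws.
samples = rng.normal(loc=0.035, scale=0.005, size=10_000)
samples = samples[samples > 0]

depths = model(samples)                       # propagate samples through the model
lo, hi = np.percentile(depths, [2.5, 97.5])   # empirical 95% uncertainty bounds
print(f"mean depth: {depths.mean():.2f}, 95% interval: [{lo:.2f}, {hi:.2f}]")
```

FOVA, by contrast, approximates the output variance from a first-order Taylor expansion of the model around the mean parameter values, avoiding sampling at the cost of a linearity assumption.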
126. Validating reasoning heuristics using next generation theorem provers / Steyn, Paul Stephanes / 31 January 2009
The specification of enterprise information systems using formal specification languages enables the formal verification of these systems. Reasoning about the properties of a formal specification is a tedious task that can be greatly facilitated by an automated reasoner. However, set theory is a cornerstone of many formal specification languages and poses demanding challenges to automated reasoners. To this end, a number of heuristics have been developed to aid the Otter theorem prover in finding short proofs for set-theoretic problems. This dissertation investigates the applicability of these heuristics to next generation theorem provers. / Computing / M.Sc. (Computer Science)
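To illustrate why set theory strains automated reasoners (the example is ours, not taken from the dissertation): even the transitivity of inclusion, trivial for a human, forces a prover to unfold the subset relation into its first-order definition and reason under nested quantifiers.

\[
\forall A\,\forall B\,\forall C\;\bigl((A\subseteq B \wedge B\subseteq C)\rightarrow A\subseteq C\bigr),
\qquad\text{where}\qquad
A\subseteq B \;\equiv\; \forall x\,(x\in A \rightarrow x\in B).
\]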
127. Towards both-and land: a journey from answers to questions about the therapeutic self / Zagnoev, Joanne / 01 1900
This thesis constitutes a narrative description of the evolution of my therapeutic self during my training as a clinical psychologist. In telling this story, I review the ways in which I was perturbed by the mix of theories and contexts encountered during the years of my postgraduate training. I describe and critically compare my responses to the following models: psychoanalytic, psychodynamic, first-order cybernetic, and second-order cybernetic (covering the first, second and third movements). Throughout, I attempt to track the development of a congruent, personal therapeutic self while simultaneously assuming that this self is constantly coming into being. / Psychology / M.A. (Clinical Psychology)
128. A self-verifying theorem prover / Davis, Jared Curran / 24 August 2010
Programs have precise semantics, so we can use mathematical proof to establish their properties. These proofs are often too large to validate with the usual "social process" of mathematics, so instead we create and check them with theorem-proving software. This software must be advanced enough to make the proof process tractable, but this very sophistication casts doubt upon the whole enterprise: who verifies the verifier?
We begin with a simple proof checker, Level 1, that accepts only proofs composed of the most primitive steps, like Instantiation and Cut. This program is so straightforward that the ordinary, social process can establish its soundness and the consistency of the logical theory it implements (so we know its theorems are "always true").
Next, we develop a series of increasingly capable proof checkers, Level 2, Level 3, etc. Each new proof checker accepts new kinds of proof steps which were not accepted in the previous levels. By taking advantage of these new proof steps, higher-level proofs can be written more concisely than lower-level proofs, and can take less time to construct and check. Our highest-level proof checker, Level 11, can be thought of as a simplified version of the ACL2 or NQTHM theorem provers. One contribution of this work is to show how such systems can be verified.
To establish that the Level 11 proof checker can be trusted, we first use it, without trusting it, to prove the fidelity of every Level n to Level 1: whenever Level n accepts a proof of some phi, there exists a Level 1 proof of phi. We then mechanically translate the Level 11 proof for each Level n into a Level n - 1 proof; that is, we create a Level 1 proof of Level 2's fidelity, a Level 2 proof of Level 3's fidelity, and so on. This layering shows that each level can be trusted, and allows us to manage the sizes of these proofs.
In this way, our system proves its own fidelity, and trusting Level 11 only requires us to trust Level 1.
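To give a feel for what a "Level 1"-style checker looks like, here is a minimal sketch in Python. The two primitive rules, the tuple encoding of formulas, and all names are our illustrative assumptions; they simplify considerably the primitive steps of Davis's actual system:

```python
# A formula is a string (atom) or a nested tuple; implication is encoded as a tag.
Implies = lambda a, b: ("implies", a, b)

def check_proof(steps, axioms):
    """Accept a proof only if every step is a primitive rule application."""
    proved = set(axioms)
    for rule, formula, premises in steps:
        if rule == "axiom":                    # cite an axiom verbatim
            ok = formula in axioms
        elif rule == "modus_ponens":           # from A and A -> B, accept B
            a, imp = premises
            ok = a in proved and imp in proved and imp == Implies(a, formula)
        else:
            ok = False                         # unknown rules are rejected outright
        if not ok:
            return False
        proved.add(formula)
    return True

# A two-step proof of q from the axioms p and p -> q.
axioms = {"p", Implies("p", "q")}
proof = [("axiom", "p", ()),
         ("modus_ponens", "q", ("p", Implies("p", "q")))]
print(check_proof(proof, axioms))  # True
```

The point of keeping the rule set this small is exactly the one made above: a checker of this size can be audited by the ordinary social process, and everything richer is then proved faithful to it rather than trusted outright.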
129. Application of local semantic analysis in fault prediction and detection / Shao, Danhua / 06 October 2010
To improve the quality of software systems, change-based fault prediction and scope-bounded checking have been used to predict or detect faults during software development. In fault prediction, changes to program source code, such as added or deleted lines, are used to predict potential faults. In fault detection, scope-bounded checking of programs is an effective technique for finding subtle faults. The central idea is to check all program executions up to a given bound. The technique takes two basic forms: scope-bounded static checking, where all bounded executions of a program are transformed into a formula whose solutions represent violations of a correctness property, i.e., counterexamples; and scope-bounded testing, where a program is tested against all (small) inputs up to a given bound on the input size.
Although the accuracy of change-based fault prediction and scope-bounded checking has been evaluated experimentally, both have effectiveness and efficiency limitations. Previous change-based fault prediction considers only the code modified by a change while ignoring the code impacted by the change. Scope-bounded testing considers only the correctness specification and ignores the internal structure of the program. Scope-bounded static checking does consider the internal structure of programs, but formulae translated from structurally complex programs might choke the backend analyzer and fail to give a result within a reasonable time.
To improve the effectiveness and efficiency of these approaches, we introduce local semantic analysis into change-based fault prediction and scope-bounded checking. We use data-flow analysis to disclose internal dependencies within a program. Based on these dependencies, we identify code segments impacted by a change and apply fault prediction metrics to the impacted code. Empirical studies with real data showed that semantic analysis is effective and efficient in predicting faults in large changes and in short-interval changes. When generating inputs for scope-bounded testing, we use control-flow analysis to guide test generation so that code coverage can be achieved with minimal tests. To increase the scalability of scope-bounded checking, we split a bounded program into smaller sub-programs according to data-flow and control-flow analysis. The problem of scope-bounded checking for the given program thus reduces to several sub-problems, each of which requires the constraint solver to check a less complex formula, thereby likely reducing the solver's overall workload. Experimental results show that our approach provides significant speed-ups over the traditional approach.
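As a concrete illustration of the scope-bounded testing idea described above (the function under test, the bound, and all names are our illustrative assumptions, not the dissertation's subject programs), the sketch below exhaustively checks a small function on every input list up to a given size and element magnitude:

```python
from itertools import product

# Hypothetical function under test, seeded with a classic fault: using 0 as the
# identity element makes the result wrong for all-negative lists.
def buggy_max(xs):
    m = 0
    for x in xs:
        if x > m:
            m = x
    return m

BOUND = 3                                  # bound on list length and |element|
values = range(-BOUND, BOUND + 1)
for n in range(1, BOUND + 1):
    for xs in product(values, repeat=n):   # every list of length n within bounds
        if buggy_max(list(xs)) != max(xs):
            print("counterexample:", xs)   # first hit: (-3,)
            raise SystemExit
print("no counterexample within bound")
```

Checking every input up to the bound is what makes the technique effective at exposing subtle faults, and it is also what motivates the splitting strategy above: the number of bounded executions grows quickly with the bound.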
130. Analiza osobina dinamičkih postuslova u Horovim tripletima (Analysis of the characteristics of dynamic postconditions in Hoare triples) / Kupusinac, Aleksandar / 01 January 2010
This doctoral dissertation presents a new and more general method for analyzing the semantics of structured and object-oriented programs, based entirely on first-order predicate logic. It considers the following topics:
1.) the S-program calculus;
2.) the definition and characteristics of dynamic postconditions in the S-calculus;
3.) conceptual definitions of object, class, and invariant;
4.) the analysis of invariants in a class (SP-analysis and DP-analysis).
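As background for the title's terminology (this example is standard textbook material, not the S-calculus of the dissertation): a Hoare triple \(\{P\}\,S\,\{Q\}\) asserts that if precondition P holds before statement S executes, then postcondition Q holds afterwards, as in

\[
\{\,x\ge 0\,\}\;\; y := x+1 \;\;\{\,y>0\,\},
\qquad\text{valid because}\qquad
wp(y:=x+1,\;y>0) \;=\; (x+1>0)\ \text{is implied by}\ x\ge 0.
\]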