151 |
Hardware Trojan Detection in Sequential Logic Designs / Dharmadhikari, Pranav Hemant, January 2018 (has links)
No description available.
|
152 |
Bounds for the Maximum-Time Stochastic Shortest Path Problem / Kozhokanova, Anara Bolotbekovna, 13 December 2014 (has links)
A stochastic shortest path problem is an undiscounted infinite-horizon Markov decision process with an absorbing and cost-free target state, where the objective is to reach the target state while optimizing total expected cost. In almost all cases, the objective in solving a stochastic shortest path problem is to minimize the total expected cost to reach the target state. But in probabilistic model checking, it is also useful to solve a problem where the objective is to maximize the expected cost to reach the target state. This thesis considers the maximum-time stochastic shortest path problem, which is a special case of the maximum-cost stochastic shortest path problem where actions have unit cost. The contribution is an efficient approach to computing high-quality bounds on the optimal solution for this problem. The bounds are useful in themselves, but can also be used by other algorithms to accelerate the search for an optimal solution.
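The optimal maximum expected time satisfies a Bellman fixed-point equation, V(s) = 1 + max_a Σ_{s'} P(s'|s,a) V(s'), with V = 0 at the target, and iterating the backup from V = 0 produces monotonically improving lower bounds whenever the optimal value is finite. A minimal sketch on a toy problem (the states, actions, and probabilities below are invented for illustration; they are not the thesis's benchmarks or its bound-computation method):

```python
# Value iteration for the maximum-time criterion on a toy MDP.
# Each backup from V = 0 yields a valid lower bound on the optimal
# expected time to reach the absorbing, cost-free target state.

# transitions[s][a] is a list of (next_state, probability) pairs.
transitions = {
    0: {"a": [(0, 0.5), (1, 0.5)], "b": [(2, 1.0)]},
    1: {"a": [(0, 0.2), (2, 0.8)]},
}
TARGET = 2  # absorbing target: contributes no further time

def backup(V):
    """One Bellman backup under unit action costs, maximizing."""
    return {
        s: 1.0 + max(
            sum(p * (0.0 if t == TARGET else V[t]) for t, p in outs)
            for outs in actions.values()
        )
        for s, actions in transitions.items()
    }

V = {s: 0.0 for s in transitions}  # trivial lower bound
for _ in range(200):               # stopping early still gives a bound
    V = backup(V)
print(V)  # converges to {0: 3.75, 1: 1.75} for this toy MDP
```

Stopping the loop at any iteration leaves a valid lower bound; the thesis's contribution lies in computing high-quality bounds efficiently, which this naive iteration does not attempt.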
|
153 |
Community Drug Checking and Substance Use Stigma: An Analysis of Stigma-Related Barriers and Potential Responses / Davis, Samantha, 12 September 2022 (has links)
The illicit drug overdose crisis is an ongoing epidemic that continues to take lives at unprecedented rates, and British Columbia has been identified as the epicenter in Canada, with approximately five deaths per day linked to unregulated substances, most often involving fentanyl (Service, 2022). In Victoria, British Columbia, community drug checking sites have been implemented as a public health response to the ongoing overdose crisis and the unregulated illicit drug market through a community-based research project, the Vancouver Island Drug Checking Project. In addition to providing anonymous, confidential, and non-judgmental drug checking services with rapid results, the project has conducted qualitative research aimed at better understanding drug checking as a potential harm reduction response to the crisis and the unregulated market (Wallace et al., 2021; Wallace et al., 2020).
An analytical framework was used to understand the impact of substance use stigma on those accessing drug checking services, as well as on those who avoid these services as a direct result of stigma. This study found that the risk of criminalization and the anticipation of being poorly treated, rather than actually experienced stigma, appear to be the most significant stigma-related barriers. Further, the implementation of community drug checking appears to create tensions that sites and services must navigate: a hierarchy of substances and stigma; differing definitions of peers; public yet private locations; and normalization within criminalization. The findings suggest that the solution to substance use stigma in drug checking will not come from continuing as we are, but from making changes at all levels (individual, interpersonal, and structural), and thus for all people who access community drug checking. / Graduate
|
154 |
Improving Error Discovery Using Guided Model Checking / Rungta, Neha Shyam, 12 September 2006 (has links) (PDF)
State exploration in directed software model checking is guided by a heuristic function that moves states near errors to the front of the search queue. Distance heuristic functions rank states by the number of transitions needed to move the current program state into an error location. A lack of calling-context information causes the heuristic function to underestimate the true distance to the error; however, inlining functions at call sites in the control flow graph to capture calling context leads to exponential growth in the computation. This paper presents a new algorithm that implicitly inlines functions at call sites, computing distance data with unbounded calling context in time polynomial in the number of nodes in the control flow graph. The new algorithm propagates distance data through call sites during a depth-first traversal of the program. We show on a series of benchmark examples that the heuristic function with unbounded distance data is more efficient than the same heuristic function with functions inlined only up to a fixed depth.
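The summarization idea, charging a call site the callee's minimum entry-to-exit transition count instead of inlining the callee, can be shown with a small two-pass sketch (the graph is a hypothetical example, and the actual algorithm propagates distance data during a depth-first traversal rather than running two shortest-path searches):

```python
import heapq

def dijkstra(edges, source):
    """Shortest-path distances over a weighted digraph."""
    dist, heap = {source: 0}, [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Pass 1: summarize the callee as its min entry-to-exit distance.
callee = {"f_entry": [("f_mid", 1)], "f_mid": [("f_exit", 1)]}
summary = dijkstra(callee, "f_entry")["f_exit"]  # = 2 transitions

# Pass 2: in the caller, the call site becomes a single edge weighted
# by the summary, so calling context is captured without inlining.
caller = {
    "m0": [("call_f", 1)],
    "call_f": [("ret_f", summary)],
    "ret_f": [("error", 1)],
}
rev = {}  # reverse the graph to get distance-to-error per node
for u, succs in caller.items():
    for v, w in succs:
        rev.setdefault(v, []).append((u, w))
print(dijkstra(rev, "error"))  # heuristic value for each node
```

Because each function is summarized once and reused at every call site, the total work stays polynomial in the number of control-flow-graph nodes.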
|
155 |
Machine Code Verification Using The Bogor Framework / Edelman, Joseph R., 22 May 2008 (has links) (PDF)
Verification and validation of embedded systems software is tedious and time consuming. Software model checking automates this process with a tool-based approach. To model software more accurately, it is necessary to provide hardware support that enables the software to execute as it would on native hardware. Such hardware support often requires creating model checking tools specific to the instruction set architecture, and creating a software model checking tool is non-trivial. We present a strategy for using an "off-the-shelf" model checking tool, Bogor, to support multiple instruction set architectures. Our strategy supports key hardware features such as instruction execution, exceptional control flow, and interrupt servicing as extensions to Bogor. These extensions work within the tool framework using existing interfaces and require significantly less code than creating an entire model checking tool.
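Although Bogor's extension interfaces are not reproduced here, the core idea of treating machine-code execution as a state-transition system to be explored can be sketched generically (the three-instruction ISA and the program below are invented for illustration):

```python
from collections import deque

# Explicit-state search over (pc, registers) configurations of a
# toy ISA, checking an assertion along every reachable path.
PROGRAM = [
    ("inc", "r0"),           # r0 := (r0 + 1) mod WORD
    ("jnz", "r0", 0),        # if r0 != 0 goto instruction 0
    ("assert_eq", "r0", 0),  # property checked at this pc
]
WORD = 4  # tiny register width keeps the state space small

def step(pc, regs):
    """Execute one instruction; return the successor states."""
    op, regs = PROGRAM[pc], dict(regs)
    if op[0] == "inc":
        regs[op[1]] = (regs[op[1]] + 1) % WORD
        return [(pc + 1, regs)]
    if op[0] == "jnz":
        return [(op[2] if regs[op[1]] != 0 else pc + 1, regs)]
    if op[0] == "assert_eq":
        assert regs[op[1]] == op[2], f"violation at pc={pc}: {regs}"
        return [(pc + 1, regs)]

seen, frontier = set(), deque([(0, {"r0": 0})])
while frontier:
    pc, regs = frontier.popleft()
    key = (pc, tuple(sorted(regs.items())))
    if key in seen or pc >= len(PROGRAM):
        continue
    seen.add(key)
    frontier.extend(step(pc, regs))
print(f"explored {len(seen)} states, no assertion violations")
```

Exceptional control flow and interrupt servicing fit the same picture as additional successor states computed in step.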
|
156 |
Verifying Abstract Components Within Concrete Software Environments / Bao, Tonglaga, 26 March 2009 (links) (PDF)
In order to model check a software component that is not a standalone program, we need a model of the surrounding software that completes the program. This problem is important for software engineers who need to deploy an existing component into a new environment. The model is typically generated by abstracting the surrounding software environment in which the component will be executed. However, abstracting the surrounding software is a difficult and error-prone task, particularly when it is a complex software artifact that cannot be easily abstracted. In this dissertation, we present a new approach: we abstract the software component under test and leave the surrounding software concrete. We derive this mixed abstract-concrete model automatically for both sequential and concurrent C programs and verify it using the SPIN model checker. We give verification results for several components under test embedded in complex software environments to demonstrate the strengths and weaknesses of our approach, and we are able to find errors in components that were too complex for analysis by existing model checking techniques. We prove that the mixed abstract-concrete model can be made bisimilar to the original complete software system using an abstraction refinement scheme, and we show how to generate test cases for the component under test using this abstraction refinement process.
|
157 |
Kernel P systems: from modelling to verification and testing / Gheorghe, Marian; Ceterchi, R.; Ipate, F.; Konur, Savas; Lefticaru, Raluca, 13 December 2017 (has links)
Yes / A kernel P system integrates, in a coherent and elegant manner, some of the most successfully used features of the P systems employed in modelling various applications. It also provides a theoretical framework for analysing these applications and a software environment for simulating and verifying them. In this paper, we illustrate the modelling capabilities of kernel P systems by showing how other classes of P systems can be represented in this formalism and by providing kernel P system models for a sorting algorithm and a broadcasting problem. We also show how formal verification can be used to validate that the given models work as desired. Finally, a test generation method based on automata is extended to non-deterministic kernel P systems. / The work of MG, FI and RL was supported by a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI (project number: PN-II-ID-PCE-2011-3-0688); RCUK
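As a rough illustration of the execution model on which P system variants build, the sketch below applies multiset rewriting rules in a maximally parallel, nondeterministic way within a single compartment (the rules and objects are invented, and kernel P systems add guards, membrane structure, and richer execution strategies not modeled here):

```python
import random
from collections import Counter

# One evolution step of a single-compartment P system: applicable
# rules fire repeatedly, consuming objects, until none can fire;
# products only become available in the next step.
RULES = [
    (Counter({"a": 1, "b": 1}), Counter({"c": 1})),  # a b -> c
    (Counter({"a": 2}), Counter({"d": 1})),          # a a -> d
]

def step(contents):
    contents, produced = Counter(contents), Counter()
    def applicable(lhs):
        return all(contents[o] >= n for o, n in lhs.items())
    while any(applicable(lhs) for lhs, _ in RULES):
        lhs, rhs = random.choice([r for r in RULES if applicable(r[0])])
        contents -= lhs    # consume the left-hand side
        produced += rhs    # defer products to the end of the step
    return contents + produced

print(step(Counter({"a": 3, "b": 1})))  # Counter({'c': 1, 'd': 1})
```

Verification then amounts to checking properties of the configurations reachable under all nondeterministic rule choices.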
|
158 |
Criticism and robustification of latent Gaussian models / Cabral, Rafael, 28 May 2023 (has links)
Latent Gaussian models (LGMs) are perhaps the most commonly used class of statistical models with broad applications in various fields, including biostatistics, econometrics, and spatial modeling. LGMs assume that a set of unobserved or latent variables follow a Gaussian distribution, commonly used to model spatial and temporal dependence in the data. The availability of computational tools, such as R-INLA, that permit fast and accurate estimation of LGMs has made their use widespread. Nevertheless, it is easy to find datasets that contain inherently non-Gaussian features, such as sudden jumps or spikes, that adversely affect the inferences and predictions made from an LGM. These datasets require more general latent non-Gaussian models (LnGMs) that can automatically handle these non-Gaussian features by assuming more flexible and robust non-Gaussian distributions on the latent variables. However, fast implementation and easy-to-use software are lacking, which prevents LnGMs from becoming widely applicable.
This dissertation aims to tackle these challenges and provide ready-to-use implementations for the R-INLA package. We view scientific learning as an iterative process involving model criticism followed by model improvement and robustification. Thus, the first step is to provide a framework that allows researchers to criticize and check the adequacy of an LGM without fitting the more expensive LnGM. We employ concepts from Bayesian sensitivity analysis to check the influence of the latent Gaussian assumption on the statistical answers and Bayesian predictive checking to check if the fitted LGM can predict important features in the data. In many applications, this procedure will suffice to justify using an LGM. For cases where this check fails, we provide fast and scalable implementations of LnGMs based on variational Bayes and Laplace approximations. The approximation leads to an LGM that downweights extreme events in the latent variables, reducing their impact and leading to more robust inferences. Each step, the first of LGM criticism and the second of LGM robustification, can be executed in R-INLA, requiring only the addition of a few lines of code. This results in a robust workflow that applied researchers can readily use.
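For reference, the three-level hierarchy that defines an LGM can be written as follows (standard notation, not quoted from the dissertation); an LnGM relaxes the Gaussian assumption in the middle layer to a heavier-tailed distribution so that sudden jumps or spikes in the latent field are not over-penalized:

```latex
% Standard LGM hierarchy: observations y, latent field x with
% precision matrix Q(theta), and hyperparameters theta.
\begin{align*}
  y_i \mid \mathbf{x}, \boldsymbol{\theta}
      &\sim \pi(y_i \mid x_i, \boldsymbol{\theta})
      && \text{(likelihood)} \\
  \mathbf{x} \mid \boldsymbol{\theta}
      &\sim \mathcal{N}\!\left(\mathbf{0},\, \mathbf{Q}(\boldsymbol{\theta})^{-1}\right)
      && \text{(latent Gaussian prior)} \\
  \boldsymbol{\theta}
      &\sim \pi(\boldsymbol{\theta})
      && \text{(hyperprior)}
\end{align*}
```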
|
159 |
Comparing to Perceived Perfection: An Examination of Two Potential Moderators of the Relationship between Naturally Occurring Social Comparisons to Peers and Media Images and Body Dissatisfaction / Ridolfi, Danielle R., 07 October 2009 (has links)
No description available.
|
160 |
Verification of Genetic Fuzzy Systems / Arnett, Timothy J., 06 June 2016 (has links)
No description available.
|