541 |
Can commercial satellite data aid in the detection of covert nuclear weapons programs? Lance, Jay Logan January 1993 (has links)
This research was conducted to determine the effectiveness of using commercial satellite data to detect covert nuclear weapons programs. Seven-band Landsat Thematic Mapper data covering Pahute Mesa (an area within the United States Nevada Nuclear Testing Site), acquired on October 16, 1985, were analyzed to determine whether underground nuclear test sites were spectrally distinguishable from the surrounding area. The analysis consisted of four steps: (1) analyzing the raw data, (2) manipulating the raw data through contrast stretching, filter application, matrix algebra, and principal components analyses, (3) identifying parameters that affect the classification of underground nuclear tests, and (4) selectively limiting those parameters. The results of limiting parameters showed that a supervised classification, using a signature created from a five-pixel seed of one representative known test site, accurately classified most known test sites. To further eliminate erroneous classification of roads and other areas of similar reflectance, these areas were seeded to create a second signature with different spectral responses, which was then used in a simultaneous classification. This classification further eliminated erroneous classification of non-test-site areas, demonstrating that commercial satellite digital data can aid in the detection of covert nuclear weapons programs, in this case underground nuclear testing. An application of the classification scheme is proposed for a scenario in which a country seeks additional verification of another party's suspected violation of test ban treaties. / Department of Physics and Astronomy
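As a rough illustration of the classification scheme described above (not the software or parameter settings used in the study; the band values, seed locations and distance threshold below are invented), a minimal minimum-distance classifier could seed a signature from a handful of known test-site pixels, add a second signature for roads, and assign each pixel to the nearest signature:

```python
import numpy as np

def seed_signature(image, seed_rows, seed_cols):
    """Mean spectral signature of a handful of seed pixels.
    image: (bands, rows, cols) array of digital numbers."""
    return image[:, seed_rows, seed_cols].mean(axis=1)

def classify(image, signatures, threshold):
    """Assign each pixel to the nearest signature (Euclidean spectral
    distance); pixels far from every signature stay unclassified (0)."""
    bands, rows, cols = image.shape
    pixels = image.reshape(bands, -1).T                      # (rows*cols, bands)
    dists = np.stack([np.linalg.norm(pixels - s, axis=1) for s in signatures])
    labels = dists.argmin(axis=0) + 1                        # classes 1..k
    labels[dists.min(axis=0) > threshold] = 0
    return labels.reshape(rows, cols)

# Toy 7-band scene with made-up digital numbers.
rng = np.random.default_rng(0)
scene = rng.integers(20, 120, size=(7, 50, 50)).astype(float)

sig_site = seed_signature(scene, [10, 10, 11, 11, 12], [10, 11, 10, 11, 12])
sig_road = seed_signature(scene, [40, 40, 41, 41, 42], [5, 6, 5, 6, 7])

labels = classify(scene, [sig_site, sig_road], threshold=60.0)
print(np.bincount(labels.ravel()))   # counts of unclassified, test-site-like, road-like pixels
```

Adding the second (road) signature and a distance threshold mirrors the abstract's strategy of suppressing erroneous classifications of areas with similar reflectance.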
|
542 |
Verification of knowledge shared across design and manufacture using a foundation ontology Anjum, Najam A. January 2011 (has links)
Seamless computer-based knowledge sharing between the departments of a manufacturing enterprise is useful in preventing unnecessary design revisions. A lack of interoperability between independently developed knowledge bases, however, is a major impediment to building a seamless knowledge sharing system. Interoperability, being the ability to overcome semantic and syntactic differences during computer-based knowledge sharing, can be enhanced through the use of ontologies. In computer science terms, ontologies are hierarchical structures of knowledge stored in a computer-based knowledge base. They are widely accepted as an interoperable medium that provides a non-subjective way of storing and sharing knowledge across diverse domains. Some semantic and syntactic differences, however, still crop up when these ontological knowledge bases are developed independently. A case study in an aerospace components manufacturing company suggests that the shape features of a component are perceived differently by the design and manufacturing departments. These differences cause further misunderstanding and misinterpretation when computer-based knowledge sharing systems are used across the two domains. Foundation or core ontologies can be used to overcome these differences and to ensure seamless sharing of knowledge, because they provide a common grounding for the domain ontologies used by individual domains or departments. This common grounding can be exploited by mediation and knowledge verification systems to authenticate the meaning of knowledge understood across different domains. For this reason, this research proposes a knowledge verification framework for developing a system capable of verifying knowledge between domain ontologies that are developed from a common core or foundation ontology. The framework uses ontology logic to standardize the way concepts from a foundation and core-concepts ontology are used in domain ontologies; the same principles are then used to verify the knowledge being shared. The Knowledge Frame Language, which is based on Common Logic, is used to formalize example ontologies, and the Integrated Ontology Development Environment (IODE) by Highfleet Inc. is used to browse and query them. An ontological product modelling technique is also developed to test the proposed framework in the scenario of manufacturability analysis. The framework is then validated through a Java API developed specifically for this purpose, using real industrial examples drawn from the case study.
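A toy sketch of the underlying idea (hypothetical concept names, plain Python rather than the Knowledge Frame Language or IODE used in the thesis): terms from two independently developed domain ontologies can be checked for shared meaning by tracing them back to concepts of a common foundation ontology:

```python
# Toy foundation ontology: concept -> parent (None marks the root).
foundation = {
    "Entity": None,
    "Feature": "Entity",
    "Hole": "Feature",
    "Slot": "Feature",
}

# Each domain ontology maps its own terms onto foundation concepts.
design_ontology = {"CounterboreHole": "Hole", "Pocket": "Slot"}
manufacturing_ontology = {"DrilledHole": "Hole", "MilledSlot": "Slot"}

def ancestors(concept, taxonomy):
    """All foundation concepts reachable by following parent links."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = taxonomy[concept]
    return chain

def verify_shared_meaning(term_a, onto_a, term_b, onto_b):
    """Two domain terms are taken to share meaning if they ground in
    foundation concepts with a common ancestor below the root."""
    anc_a = ancestors(onto_a[term_a], foundation)
    anc_b = ancestors(onto_b[term_b], foundation)
    return [c for c in anc_a if c in anc_b and foundation[c] is not None]

print(verify_shared_meaning("CounterboreHole", design_ontology,
                            "DrilledHole", manufacturing_ontology))
# -> ['Hole', 'Feature']: both terms ground in the same 'Hole' concept.
```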
|
543 |
Tractable query answering for description logics via query rewriting Perez-Urbina, Hector M. January 2010 (has links)
We consider the problem of answering conjunctive queries over description logic knowledge bases via query rewriting. Given a conjunctive query Q and a TBox T, we compute a new query Q′ that incorporates the semantic consequences of T such that, for any ABox A, evaluating Q over T and A can be done by evaluating the new query Q′ over A alone. We present RQR, a novel resolution-based rewriting algorithm for the description logic ELHIO¬ that generalizes and extends existing approaches. RQR not only handles a spectrum of logics ranging from DL-Lite_core up to ELHIO¬, but is also worst-case optimal with respect to data complexity for all of these logics; moreover, given the form of the rewritten queries, their evaluation can be delegated to off-the-shelf (deductive) database systems. We use RQR to derive the novel complexity results that conjunctive query answering for ELHIO¬ and DL-Lite+ is, respectively, PTime-complete and NLogSpace-complete with respect to data complexity. In order to show the practicality of our approach, we present the results of an empirical evaluation. Our evaluation suggests that RQR, enhanced with various straightforward optimizations, can be successfully used in conjunction with a (deductive) database system to answer queries over knowledge bases in practice. Moreover, despite being a more general procedure, RQR often produces significantly smaller rewritings than the standard query rewriting algorithm for the DL-Lite family of logics.
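A minimal sketch of what a rewriting into a union of conjunctive queries looks like (this is not the resolution-based RQR algorithm; the TBox axioms and predicate names are invented, and only simple concept inclusions are handled):

```python
from itertools import product

# Toy TBox: each pair says the left concept is subsumed by the right one
# (Professor ⊑ Teacher, AssistantProfessor ⊑ Professor).
tbox = [("Professor", "Teacher"), ("AssistantProfessor", "Professor")]

def rewritings(atom_predicates, tbox):
    """All rewritings of a conjunctive query: each atom's predicate may be
    replaced by any predicate that implies it under the TBox."""
    def expansions(pred):
        found = {pred}
        changed = True
        while changed:
            changed = False
            for sub, sup in tbox:
                if sup in found and sub not in found:
                    found.add(sub)
                    changed = True
        return sorted(found)
    return list(product(*[expansions(p) for p in atom_predicates]))

# Q(x) :- Teacher(x), worksAt(x, y)
print(rewritings(["Teacher", "worksAt"], tbox))
# Each tuple is one member of the union of conjunctive queries, which can
# then be evaluated directly over the ABox / database without the TBox.
```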
|
544 |
Static analyses over weak memory Nimal, Vincent P. J. January 2014 (has links)
Writing concurrent programs with shared memory is often not trivial. Correctly synchronising the threads and handling the non-determinism of executions require a good understanding of the interleaving semantics. Yet interleavings are not sufficient to correctly model the executions of modern multicore processors. These executions follow rules that are weaker than those of interleavings, often leading to reorderings of the updates and reads from memory; the executions are subject to a weaker memory consistency. Reorderings can produce executions that would not be observable with interleavings, and the possible executions also depend on the architecture that the processors implement. It is therefore necessary to locate and understand these reorderings in the context of a running program, or to prevent them in an automated way. In this dissertation, we aim to automate the reasoning behind weak memory consistency and to perform transformations over the code so that developers need not consider all the specifics of the processors when writing concurrent programs. We claim that we can do automatic static analysis for axiomatically defined weak memory models. The method we designed also allows the reuse of automated verification tools, such as model checkers or abstract interpreters, that were not designed for weak memory consistency, by modifying the input programs. We define in detail an abstraction that allows us to reason statically about weak memory models over programs. We locate the parts of the code where the semantics could be affected by the weak memory consistency, and then provide a method to explicitly reveal the resulting reorderings so that usual verification techniques can handle the program semantics under a weaker memory consistency. We finally provide a technique that synthesises synchronisations so that the program behaves as if only interleavings were allowed. We test these approaches on artificial and real software, and justify our choice of an axiomatic model by the scalability of the approach and the runtime performance of the programs modified by our method.
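A small sketch of why interleavings alone are not enough (Python used purely to enumerate schedules; this is the standard store-buffering litmus test, not code from the dissertation): under interleaving semantics the outcome r0 = r1 = 0 never occurs, yet x86-TSO and weaker models allow it because stores can be buffered past later loads:

```python
from itertools import permutations

# Thread 0 runs "x = 1; r0 = y"; thread 1 runs "y = 1; r1 = x".
THREAD0 = [("store", "x"), ("load", "y", "r0")]
THREAD1 = [("store", "y"), ("load", "x", "r1")]

def run(schedule):
    """Execute one interleaved schedule against a single shared memory."""
    mem = {"x": 0, "y": 0}
    regs = {}
    for op in schedule:
        if op[0] == "store":
            mem[op[1]] = 1
        else:
            regs[op[2]] = mem[op[1]]
    return regs["r0"], regs["r1"]

def interleavings(a, b):
    """All interleavings of two op sequences preserving each thread's program order."""
    tags = ["a"] * len(a) + ["b"] * len(b)
    for perm in set(permutations(tags)):
        ia, ib = iter(a), iter(b)
        yield [next(ia) if t == "a" else next(ib) for t in perm]

outcomes = {run(s) for s in interleavings(THREAD0, THREAD1)}
print(sorted(outcomes))        # (0, 0) is absent under interleaving semantics
print((0, 0) in outcomes)      # False, yet weakly consistent hardware allows it
```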
|
545 |
Evaluation of COAMPS performance forecasting along coast wind events during a frontal passage / Evaluation of COAMPS forecasting performance of along coast wind events during frontal passages James, Carl S. 03 1900 (has links)
Approved for public release; distribution is unlimited / The performance of high-resolution mesoscale models has been in a continuous state of refinement since their inception. Mesoscale models have become quite skillful at forecasting synoptic-scale events such as mid-latitude cyclones. However, atmospheric forcing becomes a much more complicated process when forecasting near topography along the coastline. Phenomena such as gap flows, blocked-flow winds, and low-level stratification become important to predictability at these scales. The problem is further complicated by the dynamics of a frontal passage event, and the skill of mesoscale models in predicting these winds is not as well developed. This study examines several forecasts by the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) during frontal passage events for the winter of 2003-2004. An attempt is made to characterize the predictability of the wind speed and direction both before and after frontal passage along the California coast. Synoptic forcing during this time is strong due to the effects of mid-latitude cyclones propagating across the Pacific. The study's results indicate that the wind field predictability is subject to several consistent errors associated with the passage of fronts over topography. These errors arise from difficulty in the model's capturing of weak thermal advection events and topographic wind funneling, and deficiencies in the model's representation of topography contribute to them. / Lieutenant, United States Navy
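A sketch of the kind of point verification such a study relies on (illustrative only; the statistics below are generic, and the forecast/observation values are invented rather than taken from COAMPS or the study):

```python
import numpy as np

def wind_errors(fcst_speed, obs_speed, fcst_dir, obs_dir):
    """Speed bias, speed RMSE and mean absolute direction error
    (directions in degrees, error wrapped into [0, 180])."""
    speed_bias = np.mean(fcst_speed - obs_speed)
    speed_rmse = np.sqrt(np.mean((fcst_speed - obs_speed) ** 2))
    d = np.abs(fcst_dir - obs_dir) % 360.0
    dir_mae = np.mean(np.minimum(d, 360.0 - d))
    return speed_bias, speed_rmse, dir_mae

# Invented forecast/observation pairs (m/s and degrees) at one coastal station.
fcst_speed = np.array([8.0, 10.5, 12.0, 6.5])
obs_speed = np.array([7.2, 11.0, 14.5, 5.0])
fcst_dir = np.array([310.0, 320.0, 350.0, 10.0])
obs_dir = np.array([300.0, 330.0, 355.0, 350.0])

print(wind_errors(fcst_speed, obs_speed, fcst_dir, obs_dir))
```

Splitting such statistics into pre- and post-frontal samples is what allows the consistent error signatures described above to be isolated.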
|
546 |
Spécification formelle de systèmes temps réel répartis par une approche flots de données à contraintes temporelles / Formal specification of distributed real time systems using an approach based on temporally constrained data flows Le Berre, Tanguy 23 March 2010 (has links)
Real time systems are usually defined as systems where the total correctness of an operation depends not only on its logical correctness, but also on its execution time. Under this definition, time constraints are defined according to system operations. Another definition of real time systems is centered on data, where the correctness of a system depends on the timed correctness of its data and of the data flows across the system: we expect the values taken by the variables to be regularly renewed and to be consistent with the environment and the other variables. I propose a modeling framework based on this latter definition. This approach allows users to focus on specifying time constraints attached to data and to postpone task and communication scheduling matters. The timed requirements are not expressed as constraints on the implementation mechanism, but on the relations binding the system's variables. These relations between data are expressed in terms of a so-called observation relation, which abstracts the relation between the values taken by some variables, the set of sources and the image. This relation abstracts communication as well as computational operations, and a set of observation relations models the system architecture and the data flows by defining the paths along which the values of sources are propagated to build the values of an image. The real time properties are expressed as constraints on the propagation paths and state the temporal validity of the values. This temporal validity is defined by the time shift between the source and the image, and specifies the propagation of timely sound values along the path to build temporally correct values of the system outputs. At this level of abstraction, the designer gives a specification of the system based on timed properties about the timeline of data, such as their freshness, stability and latency. In order to prove the feasibility of an observation-based model, a finite state transition system bisimilar with the specification is built. The existence of a finite bisimilar system is deduced from the bounded time shift between the variables. The existence of an infinite execution in this system proves the feasibility of the specification.
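A toy sketch of checking temporal validity along a propagation path (plain Python with invented delay bounds; the thesis defines this formally through observation relations and logical or event-based time shifts):

```python
# A propagation path: each hop (an observation relation) adds a bounded
# delay between the value it reads and the value it produces.
path_delays = [(0, 2), (1, 3), (0, 1)]   # (min, max) delay per hop, in ticks

def end_to_end_shift(delays):
    """Accumulated (min, max) time shift between a source variable and the
    image built from it along the whole propagation path."""
    lo = sum(d[0] for d in delays)
    hi = sum(d[1] for d in delays)
    return lo, hi

def satisfies_freshness(delays, bound):
    """The output value is temporally valid if even the worst-case
    accumulated shift stays within the required freshness bound."""
    return end_to_end_shift(delays)[1] <= bound

print(end_to_end_shift(path_delays))        # (1, 6)
print(satisfies_freshness(path_delays, 8))  # True
print(satisfies_freshness(path_delays, 4))  # False
```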
|
547 |
Méthodes et outils ensemblistes pour le pré-dimensionnement de systèmes mécatroniques / Set-membership methods and tools for the pre-design of mechatronic systems Raka, Sid-Ahmed 21 June 2011 (has links)
Pre-sizing takes place upstream of the process of designing a system: starting from a set of requirements, it consists in determining a set of possible technical solutions in an often very large search space, which is structured by the uncertain and partial knowledge available about the future system and its environment. Long before making the final choice and the final sizing of the system components, the designers have to use specifications, models of the main phenomena and experience feedback to formalize constraints, make simplifying assumptions, consider various architectures and make choices based on imprecise data (i.e. intervals, finite sets of possible values, etc.). Since the choices made during pre-sizing often involve strong commitments for the further development, it is very important to detect potential inconsistencies early and to verify the satisfaction of the requirements in an uncertain context. In this work, a methodology for the verification of requirements, based on the exchange of set-membership models between principals and suppliers, is proposed. This methodology is fully consistent with a design paradigm based on the reduction of uncertainties. After work dedicated to the modeling of mechatronic systems, special attention is paid to dealing with deterministic uncertainties affecting continuous values: techniques based on interval analysis, such as constraint satisfaction (interval CSP), reachability computations for dynamic knowledge models and the identification of set-membership behavioral models, are used and developed, providing a set of tools to implement the proposed methodology and to contribute to the goal of verification with full and guaranteed coverage.
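A minimal sketch of the interval constraint propagation that interval CSP solvers perform (a single forward-backward contraction for the constraint z = x * y, with invented pre-sizing data; real tools iterate many such contractors to a fixed point):

```python
# Minimal interval arithmetic and one contraction step for z = x * y
# (e.g. torque = force * lever arm during pre-sizing).
def mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def div(a, b):
    # Assumes 0 is not inside b.
    q = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(q), max(q))

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("empty intersection: the requirement is inconsistent")
    return (lo, hi)

def contract(x, y, z):
    """Forward-backward contraction for z = x * y: each interval is
    narrowed using the other two, without losing any feasible value."""
    z = intersect(z, mul(x, y))
    x = intersect(x, div(z, y))
    y = intersect(y, div(z, x))
    return x, y, z

# Uncertain design data: force in [10, 40] N, lever arm in [0.5, 2.0] m,
# required torque in [30, 50] N.m.
x, y, z = contract((10.0, 40.0), (0.5, 2.0), (30.0, 50.0))
print(x, y, z)   # -> (15.0, 40.0) (0.75, 2.0) (30.0, 50.0)
```

A contraction to an empty interval would reveal, early in the design, that the requirements and the candidate component ranges are inconsistent.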
|
548 |
Rigorous Simulation: Its Theory and Applications Duracz, Adam January 2016 (has links)
Designing cyber-physical systems is hard. Physical testing can be slow, expensive and dangerous. Furthermore, computational components make testing all possible behaviors infeasible. Model-based design mitigates these issues by making it possible to iterate over a design much faster. Traditional simulation tools can produce useful results, but those results are approximations that make it impossible to distinguish a useful simulation from one dominated by numerical error. Verification tools require skills in formal specification and an a priori understanding of the particular dynamical system being studied. This thesis presents rigorous simulation, an approach that uses validated numerics to produce results that quantify and bound all approximation errors accumulated during simulation. This makes it possible for the user to objectively and reliably distinguish accurate simulations from ones that do not provide enough information to be useful. Explicitly quantifying the error in the output has the side effect of yielding a tool for dealing with inputs that come with quantified uncertainty. We formalize the approach as an operational semantics for a core subset of the domain-specific language Acumen, and extend it to a larger subset through a translation. Preliminary results toward proving the soundness of the operational semantics with respect to a denotational semantics are presented. A modeling environment with a rigorous simulator based on the operational semantics is described. The implementation is portable, and its source code is freely available. The accuracy of the simulator on different kinds of systems is explored through a set of benchmark models that exercise different aspects of a rigorous simulator. A case study from the automotive domain is used to evaluate the applicability of the simulator and its modeling language; in the case study, the simulator is used to compute rigorous bounds on the output of a model.
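A minimal sketch of the enclosure idea behind validated numerics (not Acumen or its semantics; the model x' = -x, the step size and the widening constants are invented, and outward rounding is omitted, so the bounds are mathematically but not floating-point rigorous):

```python
def f_interval(x):
    """Interval extension of the right-hand side f(x) = -x."""
    return (-x[1], -x[0])

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul_0h(h, a):
    """Interval product [0, h] * a, for h >= 0."""
    return (min(0.0, h * a[0]), max(0.0, h * a[1]))

def subset(a, b):
    return b[0] <= a[0] and a[1] <= b[1]

def validated_step(x0, h):
    """One enclosure step: widen x0 until the Picard-style test
    'x0 + [0,h]*F(A) is contained in A' holds, so the true solution stays
    in A over the step; then x0 + h*F(A) encloses the state at t + h."""
    a = x0
    while True:
        a = (a[0] - 0.1 - 0.1 * abs(a[0]), a[1] + 0.1 + 0.1 * abs(a[1]))
        if subset(add(x0, mul_0h(h, f_interval(a))), a):
            break
    fa = f_interval(a)
    return (x0[0] + h * fa[0], x0[1] + h * fa[1])

x = (0.9, 1.1)                       # uncertain initial condition for x' = -x
for _ in range(10):
    x = validated_step(x, 0.05)
print(x)                             # enclosure of x(0.5), containing all true trajectories
```

The printed interval contains every solution starting in [0.9, 1.1]; production tools obtain much tighter enclosures with higher-order Taylor models and directed rounding.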
|
549 |
The Frontiers of Technology in Warhead Verification Toivanen, Henrietta N 01 January 2017
How might new technical verification capabilities enhance the prospects of success in future nuclear arms control negotiations? Both theory and evidence suggest that verification technologies can influence the dynamics of arms control negotiations by shaping and constraining the arguments and strategies that are available to the involved stakeholders. In the future, new technologies may help transcend the specific verification challenge of high-security warhead authentication, which is a verification capability needed in future disarmament scenarios that address fewer warheads, limit new categories of warheads, and involve nuclear weapons states other than the United States and Russia. Under these circumstances, the core challenge is maintaining the confidentiality of the classified information related to the warheads under inspection, while providing transparency in the verification process. This analysis focuses on a set of emerging warhead authentication approaches that rely on the cryptographic concept of zero-knowledge proofs and intend to solve the paradox between secrecy and transparency, making deeper reductions in warhead arsenals possible and thus facilitating future nuclear arms control negotiations.
|
550 |
Mathematical Methods for Enhanced Information Security in Treaty Verification MacGahan, Christopher January 2016 (has links)
Mathematical methods have been developed to perform arms-control-treaty verification tasks for enhanced information security. The purpose of these methods is to verify and classify inspected items while shielding the monitoring party from confidential aspects of the objects that the host country does not wish to reveal. Advanced medical-imaging methods used for detection and classification tasks have been adapted for list-mode processing, useful for discriminating projection data without aggregating sensitive information. These models make decisions based on varying amounts of stored information, and their task performance scales with that information. Development has focused on the Bayesian ideal observer, which assumes complete probabilistic knowledge of the detector data, and the Hotelling observer, which assumes a multivariate Gaussian distribution on the detector data. The models can effectively discriminate sources in the presence of nuisance parameters. The channelized Hotelling observer has proven particularly useful in that high task performance can be achieved while reducing the size of the projection data set. The inclusion of additional penalty terms in the channelizing-matrix optimization offers a great benefit for treaty-verification tasks: penalty terms can be used to generate non-sensitive channels or to penalize the model's ability to discriminate objects based on confidential information. The end result is a mathematical model that could be shared openly with the monitor. Similarly, observers based on the likelihood probabilities have been developed to perform null-hypothesis tasks. To test these models, neutron and gamma-ray data were simulated with the GEANT4 toolkit, and tasks were performed on various uranium and plutonium inspection objects. A fast-neutron coded-aperture detector was simulated to image the particles.
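A small sketch of the (unchannelized) Hotelling observer on synthetic data (illustrative only; the detector data here are random numbers rather than GEANT4 simulations, and a channelized variant would first project the data onto a small set of channels):

```python
import numpy as np

def hotelling_template(g0, g1):
    """Hotelling observer template from training data: rows of g0/g1 are
    detector-data vectors under the two hypotheses (e.g. declared item
    vs. anything else). Template w = S^{-1} (m1 - m0)."""
    mean0, mean1 = g0.mean(axis=0), g1.mean(axis=0)
    cov = 0.5 * (np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False))
    return np.linalg.solve(cov, mean1 - mean0)

def test_statistic(w, g):
    """Scalar decision variable t = w^T g, compared against a threshold."""
    return g @ w

# Toy training data: 200 vectors per hypothesis, 16 detector channels.
rng = np.random.default_rng(1)
g0 = rng.normal(0.0, 1.0, size=(200, 16))
g1 = rng.normal(0.3, 1.0, size=(200, 16))

w = hotelling_template(g0, g1)
t0 = test_statistic(w, rng.normal(0.0, 1.0, size=16))
t1 = test_statistic(w, rng.normal(0.3, 1.0, size=16))
print(t0, t1)   # t1 tends to exceed t0; thresholding t makes the classification
```

Because only the template w (and a threshold) is needed at decision time, a suitably constrained version of it is the kind of model that could be shared openly with the monitor.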
|