1 |
Secure and Trusted Verification / Cai, Yixian, 06 1900 (has links)
In our setting, verification is the process of checking whether a device's program (implementation) has been produced according to its requirements specification. Ideally, a client writes the requirements specification of a program and asks a developer to produce the actual program according to that specification. After the program is built, a verifier is asked to verify it. However, current verification methods usually require detailed knowledge of the program under verification, so sensitive information about the program can easily leak during the process.
In this thesis, we introduce the notion of secure and trusted verification, which allows the developer to hide non-disclosed information about the program from the verifier during the verification process, and allows a third party to check the correctness of the verification result. Moreover, we formally study the mutual trust between the verifier and the developer and define the notion in the context of both an honest and a malicious developer.
In addition, we implement the notion in both the honest-developer and the malicious-developer settings using cryptographic primitives and tabular expressions. Our construction allows the developer to hide the modules of a program while still letting the verifier perform a somewhat white-box verification. We also formally discuss the security properties of the implementation and prove that strong security results are achieved. / Thesis / Master of Science (MSc)
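The thesis's actual constructions are not reproduced in the abstract. As a rough, hypothetical illustration of the kind of cryptographic primitive such a scheme can build on, a hash-based commitment lets a developer bind itself to a verification result without revealing it, so a third party can later check that the opened result matches what was committed:

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Commit to a message; returns (commitment, opening nonce)."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + message).digest()
    return digest, nonce

def verify(commitment: bytes, message: bytes, nonce: bytes) -> bool:
    """Check that (message, nonce) opens the commitment."""
    return hashlib.sha256(nonce + message).digest() == commitment

# Hypothetical flow: the developer commits to a module's verification
# result without revealing the module; a third party checks the opening.
result = b"module_A: verified against REQ-7"
c, n = commit(result)
assert verify(c, result, n)
assert not verify(c, b"module_A: verification failed", n)
```

This is only the commitment building block; the thesis's constructions combine such primitives with tabular expressions in ways the abstract does not detail.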
|
2 |
An aid to convert spreadsheets to higher quality presentations / Olajide, Wasiu Olaniyi, 29 August 2005 (has links)
A table is often the preferred medium for presenting quantitative information. In some cases quantitative information can instead be presented as text or graphics, at a loss of precision and clarity. The subject of this thesis is to aid the extraction and production of quality tables from a common means of preparing data in tabular form: the spreadsheet. Spreadsheet processors are in common use, and tables are prepared by a range of users, from naïve users to experts in the graphic arts. Spreadsheet data is also produced automatically by applications. We review the specification of tabular data, presentation formats, and the systems and their associated formats for storing and interchanging data. The goal of this research is the specification and development of a system to convert common spreadsheet data to a markup language that allows the data to be presented at a higher level of typographic excellence. The desired characteristics of this system include:
1. Robust importing of data from an array of commercial and open spreadsheet processors
2. Formatting decisions of the output specified by the user rather than taken from the spreadsheet
3. Development or identification of a canonical form that is robust, does not lose data, and allows for repeated automatic application of styles
4. Development of a program to convert this canonical form into a markup system.
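The pipeline described above, a canonical intermediate form feeding a markup generator, might be sketched as follows. The dictionary shape of the canonical form and the LaTeX target are illustrative assumptions, not the system's actual design:

```python
import csv
import io

def to_canonical(csv_text: str) -> dict:
    """Hypothetical canonical form: header names plus rows of cell strings,
    independent of any source spreadsheet's styling."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    return {"header": rows[0], "rows": rows[1:]}

def to_markup(table: dict) -> str:
    """Emit a LaTeX tabular; formatting is decided here, not taken from
    the spreadsheet, matching desired characteristic 2 above."""
    cols = "l" * len(table["header"])
    lines = [f"\\begin{{tabular}}{{{cols}}}"]
    lines.append(" & ".join(table["header"]) + r" \\ \hline")
    for row in table["rows"]:
        lines.append(" & ".join(row) + r" \\")
    lines.append(r"\end{tabular}")
    return "\n".join(lines)

src = "Item,Qty\nWidget,4\nGadget,7\n"
print(to_markup(to_canonical(src)))
```

Because the canonical form carries no presentation decisions, different style functions can be applied to it repeatedly (characteristic 3) without losing data.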
|
3 |
Tabulation, grouping and separation techniques in the presentation of printed accounts and financial statements during the time of the letterpress / Inkson, Pamela, January 2001 (has links)
No description available.
|
4 |
A Methodology for the Simplification of Tabular Designs in Model-Based Development / Bialy, Monika, 06 1900 (has links)
Model-based development (MBD) is an increasingly used approach for the development of embedded control software, with Matlab Simulink/Stateflow as the widely accepted language. The adoption of this development paradigm is prevalent in many safety-critical domains, including the automotive industry. With an increasing reliance on software for controlling vehicle functionality and the yearly advent of new vehicle features, automotive models have been growing in size and complexity, causing them to become increasingly difficult to maintain, refactor, and test. Given the centrality of models in MBD, it is a requisite that they be maintained under well-defined and principled software development processes that use precise notation to document system requirements and behavioural design description.
Tabular methods have long been used for defining decision-making logic in software, due to their concise and precise manner of communicating complex behaviour, so it is not surprising that they are finding increased use in automotive software models. Thus their presence in Simulink models is increasingly prominent in the implementation of complex behaviour in production code. As a result of the safety-critical nature of the automotive industry, as well as the increasing size and complexity of its models, reliable refactoring and simplification techniques for tabular expressions are becoming an important need for automotive companies. To address this need, this thesis presents a methodology for refactoring complex tabular designs to improve requirements traceability with a focus on Matlab Simulink/Stateflow and the MBD approach.
A case study of industrial examples from an automotive partner is used to motivate the work and demonstrate the proposed methodology's effectiveness in reducing design size and complexity, while also increasing testability and requirements traceability. / Thesis / Master of Applied Science (MASc)
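The methodology itself targets Simulink/Stateflow designs, but the flavour of one kind of simplification, collapsing rows of a decision table that agree on the output and differ in a single input into a don't-care row, can be sketched generically. The table and merge rule below are hypothetical, not taken from the thesis:

```python
# Hypothetical flattened decision table: each row maps a tuple of input
# conditions to an output action; "-" marks a don't-care input.
table = [
    (("hot",  "dry"), "irrigate"),
    (("hot",  "wet"), "idle"),
    (("cold", "dry"), "idle"),
    (("cold", "wet"), "idle"),
]

def merge_once(rows):
    """Merge one pair of rows that share an output and differ in exactly
    one input position, replacing that position with a don't-care."""
    for i, (ci, oi) in enumerate(rows):
        for j, (cj, oj) in enumerate(rows):
            if i < j and oi == oj:
                diff = [k for k in range(len(ci)) if ci[k] != cj[k]]
                if len(diff) == 1:
                    merged = tuple("-" if k == diff[0] else ci[k]
                                   for k in range(len(ci)))
                    rest = [r for m, r in enumerate(rows) if m not in (i, j)]
                    return rest + [(merged, oi)]
    return rows

def simplify(rows):
    """Apply merges until a fixed point is reached."""
    prev = None
    while rows != prev:
        prev, rows = rows, merge_once(rows)
    return rows

print(simplify(table))  # the two "wet" rows collapse into (("-", "wet"), "idle")
```

Such a merge preserves the table's input-output behaviour while shrinking it, which is the sense in which simplification can improve testability and traceability.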
|
5 |
Implementation of Tabular Verification and Refinement / Zhou, Ning, 02 1900 (has links)
It has been argued for some time that tabular representations of formal specifications help in writing, understanding, and checking them. More recently, it has been suggested that tabular representations also help in breaking down large verification and refinement conditions into a number of smaller ones.
The article [32] developed the theory but did not provide the real proof in terms of an implementation. This project is about formalizing tables in a theorem prover, Simplify; defining the theorems of [32] in terms of functions written in the OCaml programming language; and conducting case studies in verifying and refining realistic problems.
A parser is designed to ease the job of inputting expressions. Pretty-printing is also provided: all predicates and tables in the examples of this thesis are automatically generated.
Our first example is a control system, a luxury sedan car seat; it gives an overall impression of how to prove correctness from a tabular specification. The second example specifies a visitor information system; its design involves modelling properties of, and operations on, sets, relations and functions by building self-defined axioms. The third example illustrates another control system, an elevator; theorems of algorithmic refinement, stepwise data refinement, and the combination of algorithmic and data abstraction are applied to the different operations. / Thesis / Master of Science (MSc)
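The decomposition idea can be illustrated without a theorem prover: for a one-dimensional tabular expression, the single large verification condition splits into a disjointness-and-exhaustiveness check on the guards plus one small condition per cell. A brute-force check over a toy domain (standing in here for the Simplify prover used in the project) might look like this:

```python
# A one-dimensional tabular expression for |x|: guards partition the
# input domain and each cell gives the result when its guard holds.
guards = [lambda x: x < 0, lambda x: x == 0, lambda x: x > 0]
cells  = [lambda x: -x,    lambda x: 0,      lambda x: x]
post   = lambda x, y: y == abs(x)   # specification: the result is |x|

domain = range(-5, 6)               # toy domain; a prover covers all ints

# Condition 1: the guards are disjoint and exhaustive,
# i.e. exactly one guard holds for each input.
for x in domain:
    assert sum(g(x) for g in guards) == 1

# Condition 2: one small obligation per cell, in place of one large
# verification condition for the whole table: each guard implies the
# postcondition of its cell's value.
for g, f in zip(guards, cells):
    for x in domain:
        assert (not g(x)) or post(x, f(x))
```

Each per-cell obligation mentions only one guard and one cell, which is what makes the smaller conditions easier to discharge than the monolithic one.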
|
6 |
A study of the pages of the Samurai X series: the system of actions through the visual syntax of manga / Silva, André Luiz Souza da, 12 September 2013 (has links)
The purpose of this work is to understand how the graphic elements of comics help compose manga. The idea is to analyse how the spatial construction of the pages of this model of comics works through its most salient aspects, such as the overlapping and size of images, the format and varying dimensions of the panels, and the format, positioning and size of the balloons, onomatopoeia and captions. As a proving ground we use the manga series Samurai X, at the moments when the action becomes most evident through confrontation between the characters, from everyday actions to physical combat between protagonists and antagonists. To understand how visual syntax works in the Samurai X series, we take as a starting point the premises of Pierre Fresnault-Deruelle (1976) on the composition of images within a regime the author calls Tabular, that is, how the arrangement of graphic elements in a limited space such as a page provides a guide for a multilinear reading. We also draw on the concepts of Thierry Groensteen (1999), who addresses the articulation of images and spaces on a comics page, which the author calls arthrology: the system of interconnected visual codes on a page, within a dimension of the spatio-topical system, aimed at producing narrative meaning in comics. / Faculdade de Comunicação - UFBA
|
7 |
CSVValidation: a tool for validating CSV files based on metadata / OLIVEIRA, Hugo Santos, 14 August 2015 (has links)
Tabular data models have been widely used for publishing data on the Web because of their simplicity of representation and ease of manipulation. However, data are not always laid out appropriately in tabular files, which can cause problems when the data are processed. The W3C has therefore been working on a standard specification for representing data in tabular formats. In this context, the main objective of this work is to propose a solution to the problem of validating tabular data files. These files are represented in CSV and described by metadata, which are represented in JSON and defined according to the specification proposed by the W3C. The main contribution of this work is the definition of a validation process for tabular data files and of the algorithms needed to execute that process, together with the implementation of a prototype that validates tabular data as specified by the W3C. Another important contribution is a set of experiments with data sources available on the Web, carried out to evaluate the approach proposed in this work.
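A minimal sketch of the kind of validation described, assuming a simplified JSON metadata fragment loosely in the spirit of the W3C tabular-metadata vocabulary (the actual specification, and the thesis's prototype, are considerably richer):

```python
import csv
import io
import json

# Hypothetical metadata fragment: per-column datatype and required flag.
metadata = json.loads("""{
  "tableSchema": { "columns": [
    {"name": "city",       "datatype": "string",  "required": true},
    {"name": "population", "datatype": "integer", "required": true}
  ]}
}""")

def validate(csv_text: str, meta: dict) -> list:
    """Check each CSV row against the column descriptions in the metadata."""
    errors = []
    cols = meta["tableSchema"]["columns"]
    reader = csv.DictReader(io.StringIO(csv_text))
    for lineno, row in enumerate(reader, start=2):  # header is line 1
        for col in cols:
            value = (row.get(col["name"]) or "").strip()
            if col.get("required") and not value:
                errors.append(f"line {lineno}: {col['name']} is required")
            elif col["datatype"] == "integer" and value and not value.lstrip("-").isdigit():
                errors.append(f"line {lineno}: {col['name']} must be an integer")
    return errors

data = "city,population\nRecife,1653461\nOlinda,n/a\n"
print(validate(data, metadata))  # one datatype error, on line 3
```

A real validator following the W3C model also handles defaults, null sequences, foreign keys and many more datatypes; the sketch only shows the row-by-row checking pattern.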
|
8 |
An Ontology for Neglected Tropical Diseases - NTDO / Silva, Filipe Santana da, 05 March 2012 (has links)
Many applications are unable to handle the ambiguity present in data and information sources. This issue has gained greater visibility with the development of Semantic Web technologies, especially ontologies. The study of models with a certain degree of representational complexity related to infectious diseases, specifically Neglected Tropical Diseases (NTDs), has gradually been gaining interest among researchers. This study aims to represent a body of complex knowledge about the transmission of Neglected Tropical Diseases, and the processes that may follow from their development, such as the death of individuals, in an ontology: the NTDO (Neglected Tropical Disease Ontology). Starting from the basic model of disease transmission, including arthropod vectors, and from tabular content representing vectors, pathogens, hosts, places of occurrence and the diseases caused, it was possible to describe an Ontology Design Pattern (ODP) for representing such processes, refined and tested through Description Logic queries. Further results were obtained from the representation of complex processes related to the death of individuals from specific causes. In this study, knowledge about NTDs was described from legacy information stored in tables and expressed in a formal ontology. NTDO captures complex events with temporal markers and sequences of processes, from the transmission of a pathogen to the death of an individual from a disease. NTDO can thus support intelligent queries over morbidity and mortality databases.
It may also enable innovation in the surveillance of disease cases, especially of neglected diseases, by allowing the study of a broad set of variables inherent to morbidity and mortality records, and the consequent construction of new knowledge from health data.
|
9 |
A Framework for Automated Discovery and Analysis of Suspicious Trade Records / Datta, Debanjan, 27 May 2022 (has links)
Illegal logging and the timber trade present a persistent threat to global biodiversity and national security due to their ties with illicit financial flows, and they cause revenue loss. The scale of global commerce in timber and associated products, combined with the complexity and geographical spread of the supply-chain entities, presents a non-trivial challenge in detecting such transactions. International shipment records, specifically those containing bills of lading, are a key source of data that can be used to detect, investigate and act upon such transactions. The comprehensive problem can be described as building a framework that can perform automated discovery of, and facilitate actionability on, detected transactions. A data-driven, machine-learning-based approach is necessitated by the volume, velocity and complexity of international shipping data. Such an automated framework can immensely benefit our targeted end-users, specifically the enforcement agencies.
This overall problem comprises multiple connected sub-problems with associated research questions. We incorporate crucial domain knowledge, in terms of data as well as modeling, by drawing on the expertise of collaborating domain specialists from ecological conservationist agencies. The collaborators provide formal and informal inputs spanning the stages from requirement specification to design. Following the paradigm of similar problems such as fraud detection explored in prior literature, we formulate the core problem of discovering suspicious transactions as an anomaly detection task. The first sub-problem is to build a system that can be used to find suspicious transactions in shipment data pertaining to imports and exports of multiple countries with different country-specific schemas. We present a novel anomaly detection approach for multivariate categorical data that respects the constraints of the data's characteristics, combined with a data pipeline that incorporates domain knowledge. The focus of the second sub-problem is U.S.-specific imports, where the data characteristics differ from the prior sub-problem in that heterogeneous attributes are present. This problem is important since the U.S. is a top consumer and there is scope for actionable enforcement. For this we present a contrastive-learning-based anomaly detection model for heterogeneous tabular data, with performance and scalability characteristics applicable to real-world trade data. While the first two sub-problems address the task of detecting suspicious trades through anomaly detection, a practical challenge with anomaly-detection-based systems is that of relevancy, or scenario-specific precision. The third sub-problem addresses this through a human-in-the-loop approach augmented by visual analytics, to re-rank anomalies in terms of relevance, providing explanations for the causes of anomalies and soliciting feedback.
The last sub-problem pertains to explainability and actionability towards suspicious records, through algorithmic recourse. Algorithmic recourse aims to provide meaningful alternatives to flagged anomalous records, such that those counterfactual examples are not judged anomalous by the underlying anomaly detection system. This can help enforcement agencies advise verified trading entities on modifying their trading patterns to avoid false detection, thus streamlining the process. We present a novel formulation and metrics for this unexplored problem of algorithmic recourse in anomaly detection, and a deep-learning-based approach towards explaining anomalies and generating counterfactuals.
Thus the overall research contributions presented in this dissertation address the requirements of the framework and have general applicability in similar scenarios beyond its scope. / Doctor of Philosophy / Illegal timber trade presents multiple global challenges to ecological biodiversity, vulnerable ecosystems, national security and revenue collection. Enforcement agencies, the target end-users of this framework, face a myriad of challenges in discovering and acting upon shipments of illegal timber that violate national and transnational laws, due to the volume and complexity of shipment data coupled with logistical hurdles. This necessitates an automated framework built upon shipment data that can address this task by solving the problems of discovery, analysis and actionability.
The overall problem is decomposed into self-contained sub-problems that address specific research questions: anomaly detection in multiple types of high-dimensional tabular data, improving the precision of anomaly detection through expert feedback, and algorithmic recourse for anomaly detection. We present data mining and machine learning solutions to each of the sub-problems that overcome the limitations and inapplicability of prior approaches. Further, we address two broader research questions. The first is the incorporation of domain knowledge into the framework, which we accomplish through collaboration with domain experts from environmental conservation organizations. The second is explainability in anomaly detection for tabular data in multiple contexts. Such real-world data presents challenges of complexity and scalability, especially given its tabular format, which brings its own set of challenges for machine learning. The solutions presented for the machine learning problems associated with each component of the framework provide an end-to-end solution to its requirements. More importantly, the models and approaches presented in this dissertation are applicable beyond this scenario, to applications with similar data and similar application-specific challenges.
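The dissertation's models are not reproduced here. As a toy stand-in for anomaly detection over multivariate categorical trade records, a frequency-based scorer flags records whose attribute values are rare in the training data; the shipment attributes below are invented for illustration:

```python
from collections import Counter
import math

def fit(records):
    """Count attribute-value frequencies per column of the categorical data."""
    counts = [Counter(col) for col in zip(*records)]
    return counts, len(records)

def score(record, counts, n):
    """Anomaly score: sum of negative log relative frequencies
    (Laplace-smoothed), so rarer values contribute more."""
    return sum(-math.log((c[v] + 1) / (n + len(c)))
               for v, c in zip(record, counts))

# Hypothetical shipment records: (commodity, origin country, mode).
shipments = [
    ("teak", "VN", "sea"), ("teak", "VN", "sea"),
    ("pine", "CA", "sea"), ("pine", "CA", "rail"),
    ("teak", "VN", "sea"),
]
counts, n = fit(shipments)
common = score(("teak", "VN", "sea"), counts, n)
rare   = score(("rosewood", "BR", "air"), counts, n)
assert rare > common  # the unseen combination scores as more anomalous
```

Real trade data demands far more: heterogeneous attributes, interactions between columns, scalability, and the relevance re-ranking and recourse mechanisms the dissertation develops on top of the detector.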
|
10 |
Comparison of Various Display Representation Formats for Older Adults Using Inlab and Remote Usability Testing / Narayan, Sajitha, 19 July 2005 (has links)
The population of seniors is growing and will continue to increase in the next decade. Computer technology holds the promise of enhancing the quality of life and independence of older people, as it may increase their ability to perform a variety of tasks. By the year 2030, people aged 65 or older will comprise 22% of the population of the United States. As the population shifts so that a greater percentage are middle-aged and older adults, and as dependence on computer technology increases, it becomes more crucial to understand how to design computer displays for these older age groups.
This research compared various display representation formats to determine the best way to present information to seniors in any form of display, and the reasons for their preferences. The formats compared include high- and low-density screens for abstract icon, concrete icon, tabular and graphical representations. This research also studied the effectiveness of remote usability testing as compared to in-lab testing for seniors.
Results indicated that screen density is a very important factor affecting the performance of older adults: post-hoc analysis showed a statistically significant density effect, F(1,112) = 8.934, p < .05. Although significant results were not obtained for the display representation formats, they may still be an area worth pursuing. It was also noted that remote usability testing is not as effective as in-lab testing for seniors, in terms of the time taken to conduct the study and the number of user comments collected. Implications, recommendations and conclusions of the study are presented. / Master of Science
|