201 |
A combinatorial study of soundness and normalization in N-Graphs. ANDRADE, Laís Sousa de, 29 July 2015.
N-Graphs is a multiple-conclusion natural deduction system in which proofs are directed graphs, motivated by the idea of proofs as geometric objects and aimed at the study of the geometry of Natural Deduction systems. Following that line of research, this work revisits the system from a purely combinatorial perspective, determining geometric conditions on proof graphs that explain its soundness criterion and the growth of proofs during normalization. Applying recent developments in the fields of proof graphs, proof-nets and N-Graphs itself, we propose a linear-time algorithm for proof verification of the full system, a result that can be related to the proof-net solutions of Murawski (2000) and Guerrini (2011), and a normalization procedure based on the notion of sub-N-Graphs, introduced by Carvalho in 2014. We first present a new soundness criterion for meta-edges, along with the extension of Carvalho's sequentization proof to the full system. For this criterion we define a proof-verification algorithm that uses a DFS-like search to find invalid cycles in a proof graph. Since the soundness criterion for proof graphs is analogous to the proof-net procedure, the algorithm can also be extended to check proofs in multiplicative linear logic without units (MLL−) in linear time. The new normalization proposed here combines a modified version of Alves' (2009) original beta and permutative reductions with an adaptation of Carbone's duplication operation on sub-N-Graphs. The procedure is simpler than the original one and works as an extension of both the normalization defined by Prawitz and the combinatorial study developed by Carbone, i.e. normal proofs enjoy the separation and subformula properties and have a structure that can represent how patterns lying in normal proofs can be recovered from the graph of the original proof with cuts. /
N-Graphs is a multiple-conclusion natural deduction system in which proofs are represented as directed graphs, motivated by the idea of proofs as geometric objects and with the goal of studying the geometry of Natural Deduction systems. Following this line of research, this work revisits the system from a purely combinatorial perspective, determining geometric conditions on proof graphs to explain its soundness criterion and proof growth during normalization. Applying recent developments in the fields of proof graphs, proof-nets and N-Graphs themselves, we propose a linear-time algorithm for proof verification of the full system, a result that can be compared with the proof-net solutions developed by Murawski (2000) and Guerrini (2011), and a normalization procedure based on the notion of sub-N-Graphs, introduced by Carvalho in 2014. We first present a new soundness criterion for meta-edges, together with the extension of Carvalho's sequentization proof to the whole system. For this criterion we define a proof-verification algorithm that uses a DFS-like (depth-first) search to find invalid cycles in a proof graph. Since the soundness criterion for proof graphs is analogous to the proof-net procedure, the algorithm can also be extended to validate proofs in multiplicative linear logic without units (MLL−) with linear time complexity.
The new normalization proposed here combines a modified version of Alves' original beta and permutative reductions with an adaptation of Carbone's duplication operation applied to sub-N-Graphs. The procedure is simpler than the original one and works as an extension of both the normalization defined by Prawitz and the combinatorial study developed by Carbone, i.e. proofs in normal form enjoy the separation and subformula properties and have a structure that can represent how patterns present in normal-form proofs can be recovered from the graph of the original proof with cuts.
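The abstract above describes a proof-verification algorithm built on a DFS-like search for invalid cycles. As a rough illustration of that style of check, here is a minimal sketch of a linear-time, depth-first cycle search on a directed graph; the graph encoding, the node names and the idea that any directed cycle is reported are assumptions made only for this example, not the thesis's actual soundness criterion.

```python
# Iterative DFS that reports a directed cycle, visiting each edge once.
def find_cycle(graph):
    """graph: dict mapping each node to a list of successor nodes;
    every node is assumed to appear as a key."""
    WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / on current path / done
    color = {v: WHITE for v in graph}
    for start in graph:
        if color[start] != WHITE:
            continue
        stack = [(start, iter(graph[start]))]
        color[start] = GRAY
        path = [start]
        while stack:
            node, successors = stack[-1]
            advanced = False
            for nxt in successors:
                if color[nxt] == GRAY:       # back edge: a cycle on the current path
                    return path[path.index(nxt):] + [nxt]
                if color[nxt] == WHITE:
                    color[nxt] = GRAY
                    path.append(nxt)
                    stack.append((nxt, iter(graph[nxt])))
                    advanced = True
                    break
            if not advanced:                 # all successors explored
                color[node] = BLACK
                path.pop()
                stack.pop()
    return None                              # acyclic: nothing to reject

print(find_cycle({'a': ['b'], 'b': ['c'], 'c': ['a'], 'd': []}))  # ['a', 'b', 'c', 'a']
```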
|
202 |
Colour proof quality verification. Sundell, Johanna, January 2004.
BACKGROUND: When a customer delivers a colour proof to a printer, they expect the final print to look similar to that proof. Today it is impossible to verify whether a match between proof and print is technically possible to reach at all. This is mainly because no information about the production circumstances of the proof is provided; for instance, the printer does not know which proofer, RIP or ICC profile was used. Situations where similarity between proof and print cannot be reached and the press has to be stopped are both costly and time consuming and should therefore be avoided.
PURPOSE: The purpose of this thesis was to investigate the possibility of forming a method able to check whether a proof is of such good quality that it is likely to produce a print similar to it.
METHOD: The basic assumption was that the quality of a proof can be judged by spectrally measuring known colour patches and comparing those values to reference values representing the same patches printed under optimal press conditions. To decide which and how many patches are required, literature and reports were studied, and then a test printing and a comparison between proofing systems were performed. To analyse the measurement data effectively, a tool that analyses the difference between reference and measurement data was developed in MATLAB.
RESULT: The result is a suggested colour proof quality verification method consisting of two parts that are intended to complement each other. The first, called Colour proofing system evaluation, is meant to evaluate entire proofing systems. It consists of a test page containing colour patches, grey balance fields, gradations and photographs. The second part is called Colour proof control and consists of a smaller set of colour patches intended to be attached to each proof.
CONCLUSIONS: The method is not complete, since more research is needed on the difference between measurement results and visual impression. To obtain realistic tolerance levels for differences between measurement and reference data, the method must be tested in everyday production. If this is done, the method should provide a good way of controlling the quality of colour proofs.
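The Colour proof control part of the proposed method compares spectrally measured patches against reference values. The following sketch shows one plausible form such a comparison could take, assuming CIELAB values, the CIE76 colour-difference formula and an arbitrary tolerance of 4; the patch values and the tolerance are invented for illustration, and the thesis notes that realistic tolerances still have to be established in production.

```python
import math

def delta_e76(lab1, lab2):
    # CIE76 colour difference: Euclidean distance in CIELAB space.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical reference values (optimal press conditions) and measured proof values.
reference = {"cyan": (55.0, -37.0, -50.0), "paper": (95.0, 0.0, -2.0)}
measured  = {"cyan": (53.5, -35.8, -48.9), "paper": (93.2, 0.4, -3.1)}

TOLERANCE = 4.0                      # assumed acceptance threshold, not from the thesis
for patch, ref_lab in reference.items():
    de = delta_e76(ref_lab, measured[patch])
    status = "ok" if de <= TOLERANCE else "out of tolerance"
    print(f"{patch}: dE = {de:.2f} ({status})")
```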
|
203 |
L'expertise dans les procédures contentieuses interétatiques / The use of experts in interstate litigation. Tribolo, Julie, 28 March 2017.
Dealing with scientific questions is today a major issue for the actors of inter-state litigation: beyond its cost, it often proves decisive when states have to defend their interests and when the international judge seeks to promote the legitimacy and durability of his institution. Science is indeed regarded as a guarantee of objectivity, a tool capable of stating "the truth", which is all the more precious to the actors of inter-state litigation because the international legal order is decentralized. Yet one cannot fail to notice the deep disenchantment that science has produced over recent decades: besides often unduly lengthening proceedings, the multiplication of expert battles in the courtroom has given rise to a growing feeling of mistrust towards scientific evidence, a doubt about the share of intrinsic truth it is supposed to be able to claim. However destabilizing this observation may be with regard to the particular stakes of maintaining peace, it must not lead to denying science its place and relevance in the settlement of inter-state disputes. This study will thus show that, stripped of any particular claim to truth, expert evidence can prove (and sometimes already proves) to be a useful and productive tool in the judicial settlement of disputes between states; beyond that, it will also be shown that, in certain circumstances, the expert even has the power to calm and bring the parties closer together, thus favouring the emergence of an amicable settlement between them. / Dealing with scientific issues is nowadays a major concern in inter-state disputes: beyond the question of costs, these issues are often decisive for states in successfully defending their case, and they are no less critical for international tribunals when it comes to promoting their legitimacy and, ultimately, their survival. Science is indeed perceived as possessing extraordinary qualities for the pursuit of truth (in the widest sense of the word), and for this reason it has traditionally been considered a very powerful and precious instrument, especially in the international legal order, which is naturally decentralized. Nevertheless, one cannot but notice the disenchantment science has given rise to over the last decades: the multiplication of expert battles in court has frequently caused an undesirable loss of time as well as a growing feeling of mistrust, with both the litigants and the international judiciary ultimately doubting the power of science to ascertain the real truth. However destabilizing this reality may be in the particular context of the maintenance of peace, this study is dedicated to showing that science is definitely relevant to the settling of inter-state disputes. Stripped of their alleged (and misleading) capacity for perfect objectivity and for the discovery of truth, experts are shown to be paradoxically useful and productive actors in international judicial settlement; moreover, they are shown to possess, under certain circumstances, a real power for relieving pressure and promoting appeasement between the litigants, thus making it easier for them to reach an amicable settlement of their conflict.
|
204 |
Nondeterminism and Language Design in Deep Inference. Kahramanogullari, Ozan, 21 December 2006.
This thesis studies the design of deep-inference deductive systems. In systems with deep inference, in contrast to traditional proof-theoretic systems, inference rules can be applied at any depth inside logical expressions. Deep applicability of inference rules provides a rich combinatorial analysis of proofs. Deep inference also makes it possible to design deductive systems that are tailored for computer science applications and are otherwise provably not expressible. By applying the inference rules deeply, logical expressions can be manipulated starting from their sub-expressions. In this way, we can simulate analytic proofs in traditional deductive formalisms. Furthermore, we can also construct much shorter analytic proofs than in these other formalisms. However, deep applicability of inference rules causes much greater nondeterminism in proof construction. This thesis attacks the problem of dealing with nondeterminism in proof search while preserving the shorter proofs that are available thanks to deep inference. By redesigning the deep-inference deductive systems, some redundant applications of the inference rules are prevented. By introducing a new technique that reduces nondeterminism, it becomes possible to obtain more immediate access to shorter proofs without breaking proof-theoretical properties such as cut elimination. Different implementations presented in this thesis allow us to perform experiments on the techniques that we developed and to observe the performance improvements. Within a computation-as-proof-search perspective, we use deep-inference deductive systems to develop a common proof-theoretic language for the two fields of planning and concurrency.
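To make the notion of deep applicability concrete, the sketch below applies a single rewrite rule at every subterm of a formula rather than only at the root, and collects all results. The tuple encoding of formulas and the sample contraction rule are assumptions chosen only to illustrate how deep application multiplies the choices a proof-search procedure faces; it is not the calculus studied in the thesis.

```python
# Formulas are nested tuples such as ('and', A, B); a rule is a generator that
# yields the results of applying it at the root of a term (possibly none).

def rewrite_everywhere(term, rule):
    """Return every term obtainable by applying `rule` at exactly one subterm."""
    results = list(rule(term))                       # apply at the root
    if isinstance(term, tuple):
        op, *args = term
        for i, arg in enumerate(args):               # or descend into one argument
            for new_arg in rewrite_everywhere(arg, rule):
                results.append((op, *args[:i], new_arg, *args[i + 1:]))
    return results

def contract_or(term):
    # Hypothetical rule: rewrite ('or', A, A) into A.
    if isinstance(term, tuple) and term[0] == 'or' and term[1] == term[2]:
        yield term[1]

formula = ('and', ('or', 'p', 'p'), ('or', ('or', 'q', 'q'), ('or', 'q', 'q')))
for successor in rewrite_everywhere(formula, contract_or):
    print(successor)   # each line is one successor reached by a single deep application
```

Even this tiny formula has several distinct successors under one rule, which is the kind of branching the thesis's techniques aim to tame.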
|
205 |
Проблематика алгоритмизации мышления в свете концепции Дж. Хокинса (master's thesis) / The Problem of Algorithmization of Thinking in the Light of the Concept of J. Hawkins. Красов, И. И. (Krasov, I. I.), January 2018.
The problem of the algorithmization of thinking and research into building artificial intelligence systems are united by the question "Can a machine think?" Although these two fields answer the question of whether a machine can think in different ways, results achieved in one field can affect the other.
The object of the study is the problem of the algorithmization of thinking and intelligence in J. Hawkins' concept. The subject of the study is the limitations on algorithmization in connection with the memory-prediction model.
The aim of the study is to examine the problem of the algorithmization of thinking in connection with J. Hawkins' concept.
Methods used in the study: conceptual and logical analysis.
The novelty of this thesis lies in comparing the problem of the algorithmization of thinking with a current line of research in the field of AI, the concept of J. Hawkins.
The study establishes that intelligence rests on the memory-prediction model. Using this model, it becomes possible to resolve almost all of the problems associated with limitations on the algorithmization of thinking. It is also found that the concept of the surveyability of proof can be applied to optimize the operation of intelligent systems. / The problem of algorithmizing thinking and research in the field of creating artificial intelligence systems are united by the question "Can a machine think?" Although these two areas of knowledge respond differently to the question of a machine's capacity to think, the results achieved in one area can affect the other.
The object of this research is the problem of the algorithmization of thinking and intelligence in the theory of J. Hawkins. The subject of the research is the constraints on algorithmization in connection with the memory-prediction model.
The purpose of the research is to consider the problem of algorithmizing thinking in connection with the theory of J. Hawkins.
Methods used in the research: conceptual and logical analysis.
The novelty of this research lies in comparing the problem of algorithmizing thinking with modern research in the field of creating AI, namely the concept of J. Hawkins.
As a result of the research, it was established that intelligence is based on the memory-prediction model. Using this model, it becomes possible to solve almost all the problems associated with limitations on the algorithmization of thinking. It is also clarified that the concept of surveyability of proof can be applied to optimize the operation of intelligent systems.
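As a loose illustration of the memory-prediction idea the thesis builds on, the sketch below predicts the next element of a sequence purely by recalling stored memory rather than by running a task-specific algorithm; the toy corpus and the fixed context length are assumptions for illustration, and Hawkins' hierarchical model is far richer than this lookup table.

```python
from collections import Counter, defaultdict

memory = defaultdict(Counter)      # context -> counts of what followed it

def observe(sequence, context_len=2):
    # Store every observed (context, next element) pair.
    for i in range(len(sequence) - context_len):
        context = tuple(sequence[i:i + context_len])
        memory[context][sequence[i + context_len]] += 1

def predict(context):
    # Prediction is recall: return the most frequently remembered continuation.
    seen = memory.get(tuple(context))
    return seen.most_common(1)[0][0] if seen else None

observe("the cat sat on the mat ".split() * 3)
print(predict(["on", "the"]))      # -> 'mat', recalled from memory, not derived by rule
```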
|
206 |
Energy Consumption and Security in Blockchain. Borzi, Eleonora and Salim, Djiar, January 2020.
Blockchain is a distributed ledger technology that was popularized after the release of Bitcoin in 2009, the first widely used blockchain application. It is a technology for maintaining a digital and public ledger that is decentralized, which means that no single authority controls or owns the public ledger. The ledger is formed by a chain of data structures, called blocks, that contain information. This ledger is shared publicly in a computer network in which each node is called a peer. The problem that arises is how to make sure that every peer has the same ledger. This is solved with consensus mechanisms, which are sets of rules that every peer must follow. Consensus mechanisms secure the ledger by ensuring that the majority of peers can reach agreement on the same ledger and that a malicious minority of peers cannot influence the majority agreement. There are many different consensus mechanisms. A problem with consensus mechanisms is that they have to make a trade-off between low energy consumption and high security. The purpose of this report is to explore and investigate the relationship between energy consumption and security in consensus mechanisms. The goal is to perform a comparative study of consensus mechanisms from an energy consumption and security perspective. The consensus mechanisms that are compared are Proof of Work, Proof of Stake and Delegated Proof of Stake. The methodology used is a literature study and a comparative study using existing work and data from applications based on those consensus mechanisms. The results conclude that Proof of Work balances the trade-off with high energy consumption and high security, while Proof of Stake and Delegated Proof of Stake balance it with low energy consumption but a lower level of security. In the analysis a new factor arose: decentralization. The new insight into consensus mechanisms is that decentralization and security are threatened by an inevitable centralization in which the ledger is controlled by a few peers. / Blockchain is a so-called distributed ledger technology that had its big breakthrough with the popular blockchain application Bitcoin in 2009. The technology makes it possible to maintain a digital and public ledger that is decentralized, meaning that no single person or organization owns and controls the public ledger. The ledger in a blockchain is built as a chain of blocks; these blocks are data structures that contain information. The ledger is distributed in a network of computers called nodes, each of which is owned by one or more persons. The problem is that all nodes in the network must hold an identical ledger. This problem is solved with a set of rules that the nodes must follow, called a consensus mechanism. Consensus mechanisms secure the ledger by enabling agreement among the majority of nodes on the ledger's contents and by ensuring that dishonest nodes cannot influence the majority agreement. There are several different consensus mechanisms. One problem with consensus mechanisms is that they are forced to trade off low energy use against high security. The purpose of this report is to examine and investigate the relationship between energy use and security in consensus mechanisms. The goal is to carry out a comparative analysis of consensus mechanisms in terms of energy use and security. The consensus mechanisms compared are Proof of Work, Proof of Stake and Delegated Proof of Stake.
The methodology used is a literature study and a comparative analysis using existing methods and data from applications that use these consensus mechanisms. The results show that Proof of Work chooses high security at the cost of high energy use, while Proof of Stake and Delegated Proof of Stake choose low energy use at the cost of lower security. The analysis gives a new insight: centralization is an unavoidable factor that threatens security.
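As an illustration of the energy-for-security trade-off in Proof of Work, the sketch below mines a block by guessing nonces until the hash falls below a difficulty target, so every failed guess represents energy spent and an attacker would have to redo the same work to rewrite the ledger. The block contents and the 18-bit difficulty are assumptions for illustration, not the parameters of any real blockchain.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 18):
    # A nonce is valid when SHA-256(block_data || nonce) is below the target.
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1          # every failed guess is computation (and electricity) spent

nonce, block_hash = mine(b"prev_hash|transactions|timestamp")
print(nonce, block_hash)
```

Raising `difficulty_bits` roughly doubles the expected work per extra bit, which is how the security level and the energy consumption rise together.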
|
207 |
Degree Sequences, Forcibly Chordal Graphs, and Combinatorial Proof Systems. Altomare, Christian J., January 2009.
No description available.
|
208 |
Proof-theoretical observations of BI and BBI base-logic interactions, and development of phased sequence calculus to define logic combinations. Arisaka, Ryuta, January 2013.
I study sequent calculi of combined logics in this thesis. Two specific logics are examined: Logic BI, which combines intuitionistic logic and multiplicative intuitionistic linear logic, and Logic BBI, which combines classical logic and multiplicative linear logic. A proof-theoretical study of logical combinations themselves then follows. To consolidate intuition about what this thesis is about, suppose that we know two different logics: Logic A, developed for reasoning about Purpose A, and Logic B, developed for reasoning about Purpose B. Logic A serves Purpose A very well, but not Purpose B; Logic B serves Purpose B very well, but not Purpose A. We wish to fulfil both Purpose A and Purpose B, but presently we can only afford to let one logic guide our reasoning. What shall we do? One option is to be content with Logic A, with which we handle Purpose A efficiently and Purpose B rather inefficiently. Another option is to choose Logic B instead. But there is yet another option: we combine Logic A and Logic B to derive a new logic, Logic C, which is still one logic but which serves both Purpose A and Purpose B efficiently. The combined logic is a synthesis of the strengths of the more basic logics (Logic A and Logic B). As it nicely takes care of our requirements, it may be the best choice among all those considered so far. Yet this is not the end of the story. Depending on the manner in which Logic A and Logic B combine, Logic C may have extensions serving more purposes than just Purpose A and Purpose B. The following problem then ensues: we know about Logic A and Logic B, but we may not know about the combined logics of the base logics. To understand the combined logics, we need to understand the extensions in which the base logics interact with each other. Analysis of the interesting parts tends to be non-trivial, however. The two combined logics mentioned above, BI and BBI, are no exception, and proof-theoretical development for them has been particularly slow. It has remained obscure how to properly handle the base-logic interactions of the combined logics as they appear syntactically. As one objective of this thesis, I provide an analysis of the syntactic phenomena of the BI and BBI base-logic interactions within sequent calculus, to augment this knowledge. For BI, I deliver, through appropriate methodologies for reasoning about the syntactic phenomena of the base-logic interactions, the first BI sequent calculus free of any structural rules. Given its positive consequences for efficient proof search, this is a significant step towards further maturity of BI proof theory. Based on the calculus, I prove decidability of a fragment of BI purely syntactically. For BBI, which is closely connected to applications via separation logic, I develop adequate sequent calculus conventions and consider the implications of the underlying semantics for the syntax. The result is sound BBI sequent calculi with a closer syntax-semantics correspondence than previously envisaged. From them, an adaptation to separation logic is also considered. To promote the knowledge of combined logics in general within computer science, it is also important that we be able to study logical combinations themselves. Towards this direction of generalisation, I present the concept of phased sequent calculus: a sequent calculus which physically separates the base logics and in which a specific manner of logical combination between them can actually be developed and analysed.
For a demonstration, the decidable BI fragment mentioned above is formulated in phased sequent calculus, and the sense of logical combination in effect is analysed. A decision procedure is presented for the fragment.
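For readers unfamiliar with sequent calculus, the sketch below runs a backward proof search on a deliberately tiny classical fragment (atoms and conjunction only). The encoding of sequents as pairs of sets and the fragment itself are assumptions for illustration; BI, BBI and the phased calculus studied in the thesis have far subtler structural behaviour.

```python
# Sequents Gamma |- Delta are pairs of frozensets; atoms are strings,
# conjunctions are tuples ('and', A, B).

def provable(left, right):
    if left & right:                                   # axiom: A |- A
        return True
    for f in left:                                     # left-and (invertible)
        if isinstance(f, tuple) and f[0] == 'and':
            return provable((left - {f}) | {f[1], f[2]}, right)
    for f in right:                                    # right-and: two premisses
        if isinstance(f, tuple) and f[0] == 'and':
            rest = right - {f}
            return provable(left, rest | {f[1]}) and provable(left, rest | {f[2]})
    return False                                       # atomic sequent, no axiom

print(provable(frozenset({('and', 'p', 'q')}), frozenset({'p'})))   # True: p ∧ q |- p
print(provable(frozenset({'p'}), frozenset({('and', 'p', 'q')})))   # False: p |/- p ∧ q
```

Because both rules in this fragment are invertible, the order in which they are applied does not matter; a large part of the thesis is precisely about what happens when that convenient property is lost.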
|
209 |
Accounting for proof test data in Reliability Based Design Optimization. Ndashimye, Maurice, March 2015.
Thesis (MSc)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: Recent studies have shown that considering proof test data in a Reliability Based Design Optimization (RBDO) environment can result in design improvement. Proof testing involves the physical testing of each and every component before it enters service. Considering the proof test data as part of the RBDO process allows for improvement of the original design, such as weight savings, while preserving high reliability levels.
Composite Over-Wrapped Pressure Vessels (COPVs) are used as an example application for achieving weight savings while maintaining high reliability levels. COPVs are light structures used to store pressurized fluids in space shuttles, the International Space Station and other applications where they are maintained at high pressure for extended periods of time. Given that every COPV used in spacecraft is proof tested before entering service, and that any weight saving on a spacecraft results in significant cost savings, this thesis puts forward an application of RBDO that accounts for proof test data in the design of a COPV.
The method developed in this thesis shows that, while maintaining high levels of reliability, significant weight savings can be achieved by including proof test data in the design process. The method also gives the designer control over the magnitude of the proof test, making it possible to design the proof test itself according to the desired level of reliability for passing it.
The implementation of the method is discussed in detail. The evaluation of reliability is based on the First Order Reliability Method (FORM) supported by Monte Carlo simulation. The method is implemented in a versatile way that allows the use of analytical as well as numerical (finite element) models. Results show that additional weight savings can be achieved by the inclusion of proof test data in the design process. / AFRIKAANSE OPSOMMING: Recent studies have shown that using design-specific proof test data in reliability-based design optimization (RBDO) can lead to an improved design. RBDO involves many aspects of the design domain. Adding proof test data to design optimization brings with it the testing of a design and its components before use, an adapted and improved design, and weight savings while high reliability levels are maintained.
A practical application of the RBDO technique involves the design of composite over-wrapped pressure vessels. Such a pressure vessel is a light structure used to store high-pressure fluids in, for example, spacecraft, the International Space Station and other applications where high pressure is required over a period of time. Every composite over-wrapped pressure vessel used in spacecraft systems is proof tested before use. In space system design, mass savings translate into increased payload.
This thesis describes an optimization method developed on the basis of an RBDO technique. The method is applied to the design of composite over-wrapped pressure vessels. The results show that using proof test data makes mass-saving optimization possible while high reliability levels are maintained. Furthermore, the method also allows designers to adjust the proof test level so as to test at other reliability levels.
The thesis sets out the development and use of the optimization method. The evaluation of reliability levels is based on a first-order reliability technique verified against numerous Monte Carlo simulation results. The method is also constructed so that both analytical and finite element models can be used. Finally, an application is shown in which the results demonstrate that using the optimization method with proof test data included can indeed yield mass savings.
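A simple way to see the effect the thesis exploits is a Monte Carlo sketch in which proof testing screens out the weak tail of the strength distribution, so the surviving population is more reliable than the as-built one. The distributions, proof load and sample size below are invented for illustration; the thesis itself evaluates reliability with FORM supported by Monte Carlo simulation on COPV models.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 1_000_000

strength = rng.normal(100.0, 10.0, n)         # hypothetical burst strength (MPa)
load = rng.normal(60.0, 8.0, n)               # hypothetical service load (MPa)
pf_asbuilt = np.mean(strength < load)         # failure probability with no proof test

proof_load = 85.0                             # hypothetical proof test level
survivors = strength[strength >= proof_load]  # units that pass the proof test
pf_proven = np.mean(survivors < rng.normal(60.0, 8.0, survivors.size))

print(f"P(fail), no proof test:        {pf_asbuilt:.2e}")
print(f"P(fail), proof-tested units:   {pf_proven:.2e}")
print(f"fraction rejected by the test: {1 - survivors.size / n:.2%}")
```

Raising the proof load lowers the in-service failure probability of the surviving units but rejects more hardware, which is the trade-off a designer can tune when the proof test itself becomes a design variable.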
|
210 |
A Discrete Approach to the Poincaré-Miranda Theorem. Ahlbach, Connor Thomas, 12 May 2013.
The Poincaré-Miranda Theorem is a topological result about the existence of a zero of a function under particular boundary conditions. In this thesis, we explore proofs of the Poincaré-Miranda Theorem that are discrete in nature; that is, they prove a continuous result using an intermediate lemma about discrete objects. We explain a proof by Tkacz and Turzanski that establishes the Poincaré-Miranda Theorem via the Steinhaus Chessboard Theorem, which involves colorings of partitions of n-dimensional cubes. We then develop a new proof of the Poincaré-Miranda Theorem that relies on a polytopal generalization of Sperner's Lemma due to De Loera, Peterson and Su. Finally, we extend these discrete ideas in an attempt to prove the existence of a zero under the boundary condition of Morales.
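In the discrete spirit of the abstract above, the sketch below scans a grid over the square for a small cell whose corners witness a sign change in every component of a map F = (f1, f2), which localizes a candidate zero. The example map, the grid size and the heuristic scan itself are assumptions for illustration; they are not the combinatorial lemmas or the proofs developed in the thesis.

```python
import numpy as np

def F(x, y):
    # Hypothetical map on [-1, 1]^2 with a zero at the origin.
    return x + 0.3 * y, y - 0.2 * x

n = 64
xs = np.linspace(-1.0, 1.0, n + 1)
ys = np.linspace(-1.0, 1.0, n + 1)
f1 = np.array([[F(x, y)[0] for y in ys] for x in xs])
f2 = np.array([[F(x, y)[1] for y in ys] for x in xs])

def cell_with_sign_changes():
    # Return the first grid cell whose four corners carry both signs of f1 and of f2.
    for i in range(n):
        for j in range(n):
            c1 = f1[i:i + 2, j:j + 2]
            c2 = f2[i:i + 2, j:j + 2]
            if c1.min() <= 0 <= c1.max() and c2.min() <= 0 <= c2.max():
                return (xs[i], xs[i + 1]), (ys[j], ys[j + 1])
    return None

print(cell_with_sign_changes())   # a tiny box touching the origin
```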
|