  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
401

Proof theory and algorithms for answer set programming

Gebser, Martin. January 2011.
Answer Set Programming (ASP) is an emerging paradigm for declarative programming, in which a computational problem is specified by a logic program such that particular models, called answer sets, correspond to solutions. ASP faces a growing range of applications, demanding high-performance tools able to solve complex problems. ASP integrates ideas from a variety of neighboring fields. In particular, automated techniques to search for answer sets are inspired by Boolean Satisfiability (SAT) solving approaches. While the latter have firm proof-theoretic foundations, ASP lacks formal frameworks for characterizing and comparing solving methods. Furthermore, the sophisticated search patterns of modern SAT solvers, successfully applied in areas such as model checking and verification, are not yet established in ASP solving. We address these deficiencies by, for one, providing proof-theoretic frameworks that allow for characterizing, comparing, and analyzing approaches to answer set computation. For another, we devise modern ASP solving algorithms that integrate and extend state-of-the-art techniques for Boolean constraint solving. We thus contribute to the understanding of existing ASP solving approaches and their interconnections, as well as to their enhancement through the incorporation of sophisticated search patterns. The central idea of our approach is to identify both atomic and composite constituents of a propositional logic program with Boolean variables. This enables us to describe fundamental inference steps and to selectively combine them in proof-theoretic characterizations of various ASP solving methods. In particular, we show that the different concepts of case analysis applied by existing ASP solvers imply mutual exponential separations regarding their best-case complexities. We also develop a generic proof-theoretic framework amenable to language extensions, and we show that exponential separations can likewise result from case analyses on such extensions.
We further exploit fundamental inference steps to derive Boolean constraints characterizing answer sets. These enable the conception of ASP solving algorithms that include the search patterns of modern SAT solvers, while also allowing for direct technology transfer between the areas of ASP and SAT solving. Beyond the search for one answer set of a logic program, we address the enumeration of answer sets and of their projections to a subvocabulary. The algorithms we develop enable repetition-free enumeration in polynomial space without being intrusive, i.e., they do not necessitate any modifications of computations before an answer set is found. Our approach to ASP solving is implemented in clasp, a state-of-the-art Boolean constraint solver that has successfully participated in recent solver competitions. Although we do not address here the implementation techniques of clasp or all of its features, we present the principles behind its success in the context of ASP solving.
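As an illustration of the answer-set semantics underlying these methods, here is a minimal brute-force checker based on the Gelfond-Lifschitz reduct. This is a sketch for tiny propositional programs only; real solvers such as clasp replace the candidate enumeration below with conflict-driven Boolean constraint search.

```python
from itertools import combinations

def minimal_model(definite_rules):
    # Least model of a negation-free program via naive fixpoint iteration.
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def answer_sets(program, atoms):
    # program: list of (head, positive_body, negative_body) triples.
    found = []
    atoms = sorted(atoms)
    for r in range(len(atoms) + 1):
        for cand in combinations(atoms, r):
            s = set(cand)
            # Gelfond-Lifschitz reduct: drop rules whose negative body
            # meets S, then delete the remaining negative literals.
            reduct = [(h, set(p)) for h, p, n in program if not (set(n) & s)]
            if minimal_model(reduct) == s:  # S is stable iff it reproduces itself
                found.append(s)
    return found

prog = [
    ("a", [], ["b"]),  # a :- not b.
    ("b", [], ["a"]),  # b :- not a.
]
print(answer_sets(prog, {"a", "b"}))  # [{'a'}, {'b'}]
```

The two-rule program above has exactly the two answer sets {a} and {b}, the standard example of the case analysis that solvers resolve by search.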
402

Difficulties of secondary three students in writing geometric proofs

Fok, Sui-sum, Selina (霍遂心). January 2001.
Published or final version / Education / Master of Education
403

Automatic CAPTCHA recognition using ΨΕΕ (digital image processing) techniques

Γκολώνη, Σταυρούλα. 06 March 2015.
It is a fact that in the modern world the Internet offers global communication while creating a global economy. The provision of free services by several websites has led to their systematic abuse solely for the purpose of making a profit. To stem the tide of this malicious new source of income, CAPTCHAs are employed. Their goal is to determine whether a request to a service is made by a human user or by an automated program. Every website that enables users to create their own content or use its services must now deploy CAPTCHAs.
This work studies and interprets the different kinds of CAPTCHAs that have been designed to resist malicious solving attempts, and examines whether such resistance is achievable. We first clarify exactly what CAPTCHAs are and why their use is necessary. We then explore, through specific publications, which principles should govern the design of a CAPTCHA. To this end we review different approaches and, after presenting them, critically analyze the methods of each research group.
404

Playing and solving the game of Hex

Henderson, Philip Unknown Date
No description available.
405

Game tree search algorithms for the game of cops and robber

Moldenhauer, Carsten Unknown Date
No description available.
406

Certified Compilation and Worst-Case Execution Time Estimation

Oliveira Maroneze, André. 17 June 2014.
Safety-critical systems - such as electronic flight control systems and nuclear reactor controls - must satisfy strict safety requirements. We are interested here in the application of formal methods - built upon solid mathematical bases - to verify the behavior of safety-critical systems. More specifically, we formally specify our algorithms and then prove them correct using the Coq proof assistant - a program capable of mechanically checking the correctness of our proofs, providing a very high degree of confidence. In this thesis, we apply formal methods to obtain safe Worst-Case Execution Time (WCET) estimations for C programs. The WCET is an important property related to the safety of critical systems, but its estimation requires sophisticated techniques. To guarantee the absence of errors during WCET estimation, we have formally verified a WCET estimation technique based on the combination of two main methods: a loop bound estimation and WCET estimation via the Implicit Path Enumeration Technique (IPET). The loop bound estimation is itself decomposed into three steps: program slicing, a value analysis based on abstract interpretation, and a loop bound calculation stage. Each stage has a chapter dedicated to its formal verification. The entire development has been integrated into the formally verified C compiler CompCert. We prove that the final estimation is correct and we evaluate its performance on a set of reference benchmarks. The contributions of this thesis include (a) the formalization of the techniques used to estimate the WCET, (b) the estimation tool itself (obtained from the formalization), and (c) the experimental evaluation. We conclude that our formally verified development obtains interesting results in terms of precision, but it requires special precautions to ensure the proof effort remains manageable. The parallel development of specifications and proofs is essential to this end.
Future work includes the formalization of hardware cost models, as well as the development of more sophisticated analyses to improve the precision of the estimated WCET.
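The IPET step described above can be pictured with a toy example: the WCET is the maximum of total cost over feasible basic-block execution counts, subject to flow and loop-bound constraints. Real tools encode this as an integer linear program; the sketch below (block names, costs, and the loop bound of 10 are invented for illustration) simply brute-forces the tiny search space.

```python
# Toy IPET: maximize sum(cost_b * count_b) subject to branch-flow and
# loop-bound constraints, by enumeration (real tools use an ILP solver).
costs = {"entry": 5, "test": 2, "then": 8, "else": 3, "exit": 4}
LOOP_BOUND = 10  # supplied by the (verified) loop-bound analysis stage

best = 0
# Constraints: x_then + x_else == x_test (branch flow), x_test <= LOOP_BOUND,
# and entry/exit each execute exactly once.
for x_test in range(LOOP_BOUND + 1):
    for x_then in range(x_test + 1):
        x_else = x_test - x_then
        total = (costs["entry"] + costs["exit"]
                 + x_test * costs["test"]
                 + x_then * costs["then"]
                 + x_else * costs["else"])
        best = max(best, total)
print(best)  # 109: all 10 iterations take the expensive 'then' branch
```

The maximum is attained when every iteration follows the costliest path, which is exactly the structural worst case IPET is designed to expose.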
407

Proof system for logic of correlated knowledge

Giedra, Haroldas. 30 December 2014.
An automated proof system for the logic of correlated knowledge is presented in this dissertation. The system consists of the sequent calculus GS-LCK and the proof search procedure GS-LCK-PROC. The sequent calculus is sound and complete, and satisfies the properties of invertibility of rules and admissibility of weakening, contraction, and cut. The procedure GS-LCK-PROC terminates and checks whether a given sequent is provable. Decidability of the logic of correlated knowledge has also been proved: using the terminating procedure GS-LCK-PROC, the validity of any formula of the logic can be checked.
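To illustrate what terminating backward proof search in a sequent calculus looks like, here is a minimal prover for classical propositional sequents. This is not GS-LCK (which additionally handles the epistemic machinery of the logic of correlated knowledge); it is a generic sketch showing the invertible-rule decomposition such procedures share: every rule shrinks the sequent, so the search terminates.

```python
# Backward proof search for propositional sequents Gamma => Delta.
# Formulas: atoms are strings; compounds are tuples
# ("and", A, B), ("or", A, B), ("imp", A, B), ("not", A).
# All rules below are invertible, so eagerly decomposing the first
# compound formula found is complete for classical propositional logic.

def provable(gamma, delta):
    gamma, delta = list(gamma), list(delta)
    # Axiom: some atom occurs on both sides.
    if {f for f in gamma if isinstance(f, str)} & \
       {f for f in delta if isinstance(f, str)}:
        return True
    for i, f in enumerate(gamma):            # left rules
        if isinstance(f, tuple):
            rest = gamma[:i] + gamma[i+1:]
            if f[0] == "and":
                return provable(rest + [f[1], f[2]], delta)
            if f[0] == "or":
                return (provable(rest + [f[1]], delta)
                        and provable(rest + [f[2]], delta))
            if f[0] == "imp":
                return (provable(rest, delta + [f[1]])
                        and provable(rest + [f[2]], delta))
            if f[0] == "not":
                return provable(rest, delta + [f[1]])
    for i, f in enumerate(delta):            # right rules
        if isinstance(f, tuple):
            rest = delta[:i] + delta[i+1:]
            if f[0] == "and":
                return (provable(gamma, rest + [f[1]])
                        and provable(gamma, rest + [f[2]]))
            if f[0] == "or":
                return provable(gamma, rest + [f[1], f[2]])
            if f[0] == "imp":
                return provable(gamma + [f[1]], rest + [f[2]])
            if f[0] == "not":
                return provable(gamma + [f[1]], rest)
    return False                             # atomic sequent, no axiom

# Peirce's law ((p -> q) -> p) -> p is classically valid:
peirce = ("imp", ("imp", ("imp", "p", "q"), "p"), "p")
print(provable([], [peirce]))  # True
```

Because each recursive call strictly reduces the total formula size, the procedure always terminates, the same property that makes GS-LCK-PROC a decision procedure for its logic.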
408

Prediction of secondary structures for large RNA molecules

Mathuriya, Amrita. 12 January 2009.
The prediction of correct secondary structures of large RNAs is one of the unsolved challenges of computational molecular biology. Among the major obstacles is the fact that accurate calculations scale as O(n⁴), so the computational requirements become prohibitive as sequence length increases. We present GTfold, a new scalable, parallel multicore program that is one to two orders of magnitude faster than the de facto standard programs mfold and RNAfold for folding large viral RNA sequences, while achieving comparable prediction accuracy. We analyze the algorithm's concurrency and describe its parallelization for a shared-memory environment such as a symmetric multiprocessor or multicore chip. With the ongoing paradigm shift to multicore chips, parallelism must be addressed explicitly to continue gaining performance with each new generation of systems. We provide a rigorous proof of correctness of an optimized algorithm for internal loop calculations, the internal loop speedup algorithm (ILSA), which reduces the time complexity of internal loop computations from O(n⁴) to O(n³), and we show that exact algorithms such as ILSA run in an affordable amount of time with our method. The proof gives insight into solving these kinds of combinatorial problems. We document detailed pseudocode of the algorithm for predicting minimum free energy secondary structures, which provides a base for implementing future algorithmic improvements and an improved thermodynamic model in GTfold. GTfold is written in C/C++ and is freely available as open source from our website.
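As a flavor of the dynamic programming involved, here is the classical Nussinov-style recurrence that maximizes the number of nested base pairs. This is a deliberately simplified stand-in for the minimum-free-energy thermodynamic model that mfold, RNAfold and GTfold actually implement, but it exhibits the same O(n³) triple-loop structure that ILSA achieves for internal loop computations.

```python
def nussinov_pairs(seq, min_loop=3):
    # Maximum number of nested Watson-Crick/wobble base pairs, with a
    # minimum hairpin loop of `min_loop` unpaired bases. O(n^3) time.
    pair = {("A", "U"), ("U", "A"), ("G", "C"),
            ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]              # case 1: j unpaired
            for k in range(i, j - min_loop):  # case 2: j pairs with k
                if (seq[k], seq[j]) in pair:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_pairs("GGGAAAUCC"))  # 3: a stem of three pairs over an AAA loop
```

Energy-based folding replaces the "+1 per pair" score with stacking, hairpin and internal-loop free energies, which is where the O(n⁴) internal-loop term, and ILSA's reduction of it, comes in.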
409

An investigation of the Australian layered elastic tool for flexible aircraft pavement thickness design

White, Gregory William. January 2007.
APSDS is a layered elastic tool for aircraft pavement thickness determination, developed and distributed by Mincad Systems and based on the sister software Circly. As aircraft pavement thickness determination remains an empirical science, mechanistic-empirical design tools such as APSDS require calibration to full-scale pavement performance via the S77-1 curve. APSDS provides a unique advantage over other tools in that it models all the aircraft in all their wandering positions, negating the need for designers to use pass-to-cover ratios and acknowledging that different aircraft have their wheels located at different distances from the aircraft centerline. APSDS requires a range of input parameters, including subgrade modulus, aircraft types, masses and passes, and a pavement structure. It then returns a pavement thickness with 50% design reliability; greater levels of reliability are obtained by conservative selection of input values. Whilst most input parameters have a linear influence on pavement thickness, changes in subgrade modulus have a greater influence at lower values and less at higher values. When selecting input values, designers should concentrate their efforts on subgrade modulus and aircraft mass, as these have the greatest influence on the required pavement thickness. Presumptive or standard values are generally acceptable for the less influential parameters. S77-1 pavement thicknesses are of a standard composition, with only the subbase thickness varying. Non-standard pavement structures are determined using the principle of material equivalence, and the FAA provides a range of material equivalence factors, of which the mid-range values are most commonly used. APSDS allows direct modelling of non-standard pavement structures. By comparing different APSDS pavements of equal structural capacity, implied material equivalences can be calculated.
These APSDS implied material equivalences lie at the lower end of the ranges published by the FAA. To obtain consistency between APSDS and the FAA guidance, the following material equivalence values are recommended:
* Asphalt for crushed rock: 1.3
* Crushed rock for uncrushed gravel: 1.2
* Asphalt for uncrushed gravel: 1.6
Proof rolling regimes remain an important part of the design and construction of flexible aircraft pavements. Historically, designers relied on Boussinesq's equation and the assumption of point loads on semi-infinite homogeneous materials to determine proof rolling regimes, using stress as the indicator of damage. The ability of APSDS to generate stress, strain and deflection at any depth and any location across the pavement allows these historical assumptions to be tested. As the design of a proof rolling regime is a matter of comparing damage indicators modelled under aircraft loads to those under heavy roller loads, the historical simplifications are generally valid for practical design scenarios. Where project-specific data is required, APSDS can readily calculate stresses induced by proof rollers and aircraft at any location and depth for comparison. APSDS is a leading tool for flexible aircraft pavement thickness determination due to its flexibility, transparency and freedom from bias. However, the following areas are considered worthy of future research and development:
* Improvements to the user interface
* Ability to model aircraft masses as frequency distributions
* Ability to copy stress-with-depth data to Excel(tm) spreadsheets
* Ability to perform parametric runs
* Inclusion of a reliability-based design module
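The material-equivalence arithmetic behind these recommendations can be sketched as follows. This is illustrative arithmetic only, using the factors recommended above with an invented example thickness; actual designs should use APSDS or the FAA procedures directly.

```python
# Material equivalence: 1 unit of the stronger material replaces
# `factor` units of the weaker one (factors as recommended above).
EQUIVALENCE = {
    ("asphalt", "crushed_rock"): 1.3,
    ("crushed_rock", "uncrushed_gravel"): 1.2,
    ("asphalt", "uncrushed_gravel"): 1.6,
}

def equivalent_thickness(thickness_mm, strong, weak):
    """Thickness of `strong` material replacing `thickness_mm` of `weak`."""
    return thickness_mm / EQUIVALENCE[(strong, weak)]

# Example: replacing 390 mm of crushed rock subbase with asphalt:
print(round(equivalent_thickness(390, "asphalt", "crushed_rock")))  # 300
```

Note the factors chain consistently: 1.3 x 1.2 = 1.56, close to the 1.6 recommended for asphalt over uncrushed gravel, which is one reason the lower-end values sit well together.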
410

Secure communications for critical infrastructure control systems

Dawson, Robert Edward. January 2008.
In March 2000, one million litres of raw sewage were released into the water system of Maroochy Shire on Queensland's Sunshine Coast. This environmental disaster was caused by a disgruntled ex-contractor using a radio transmitter to illicitly access the electronically controlled pumps in the control system. In 2007, CNN screened video footage of an experimental attack against an electrical generator; the attack caused the generator to shake and smoke, visually demonstrating the damage a cyber attack can cause. These incidents highlight the importance of securing the control systems on which our critical infrastructures depend. This thesis addresses securing control systems, focusing on securing the communications of supervisory control and data acquisition (SCADA) systems. We review the architectures of SCADA systems and produce a list of the system constraints that relate to securing them. With these constraints in mind, we survey existing work in both information security and SCADA security, observing the need to investigate further the problem of secure communications for SCADA systems. We then present risk modelling techniques and model the risk in a simple SCADA system using the ISM, a software tool for modelling information security risk. In modelling the risk, we verify the hypothesis that securing the communications channel is an essential part of an effective security strategy for SCADA systems. Having established the value of securing communications, we move on to key management for SCADA systems. Appropriate key management techniques are a crucial part of secure communications and form an important part of the contributions made in this work. We present a key management protocol designed to run under the constraints specific to SCADA systems. A reductionist security proof is developed for a simplified version of the protocol, showing it is secure in the Bellare-Rogaway model.
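As a flavor of shared-key entity authentication under such constraints, here is a generic HMAC-based challenge-response sketch using only Python's standard library. This is not the key management protocol developed in the thesis; the roles, labels and message format are invented for illustration of the general pattern: a remote terminal unit (RTU) proves knowledge of a pre-shared key without ever transmitting it.

```python
import hashlib
import hmac
import secrets

# Long-term key, pre-shared out of band (e.g. at device commissioning).
KEY = secrets.token_bytes(32)

def master_challenge():
    # Fresh random nonce prevents replay of old responses.
    return secrets.token_bytes(16)

def rtu_response(key, challenge):
    # RTU authenticates by MACing the challenge under the shared key.
    return hmac.new(key, b"rtu-auth" + challenge, hashlib.sha256).digest()

def master_verify(key, challenge, response):
    expected = hmac.new(key, b"rtu-auth" + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

chal = master_challenge()
resp = rtu_response(KEY, chal)
print(master_verify(KEY, chal, resp))  # True
```

HMAC-SHA256 and a 16-byte nonce keep the exchange cheap enough for constrained field devices, which is the kind of trade-off SCADA-specific key management must make; a full protocol would additionally derive per-session keys and handle key rollover.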