531

GUESS/1: a general purpose expert systems shell

Lee, Newton Saiyuen January 1985 (has links)
Expert systems are among the most useful and fruitful products of applied artificial intelligence. They are, however, very expensive to develop: powerful construction tools are indispensable for building, modifying, and maintaining a practical expert system. GUESS/1 is a domain-independent expert systems shell that captures and enhances the strengths of its predecessors while overcoming their limitations. GUESS/1 places strong emphasis on human engineering, language generality, diversity of data representation and control structures, the programming and run-time environment, database construction facilities and security, and many other aspects related to the ease of developing and maintaining expert systems. / Master of Science
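The inference core of a shell like the one described can be sketched as a forward-chaining rule engine. The rule and fact names below are invented for illustration; the abstract does not describe GUESS/1's actual representation or control structures.

```python
# Minimal forward-chaining rule engine in the spirit of a domain-independent
# expert systems shell. Rules are (premises, conclusion) pairs; the engine
# fires every applicable rule until no new facts can be derived.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known, until a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical example rule base (not from the thesis):
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]
print(forward_chain({"has_fever", "has_rash"}, rules))
```

A shell such as this one would layer explanation facilities, knowledge-base editing, and security on top of the basic inference loop.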
532

Semantic analysis for system level design automation

Greenwood, Rob 06 October 2009 (has links)
This thesis describes the design and implementation of a system to extract meaning from natural language specifications of digital systems. This research is part of the ASPIN project, which has the long-term goal of providing an automated system for digital system synthesis from informal specifications. This work makes several contributions, one being the application of artificial intelligence techniques to specification writing. Also, the work deals with the subset of the English language used to describe digital systems, and the concepts within this domain have been classified into a type hierarchy. Finally, a set of relations has been defined to represent the interrelationships between the concepts of the sublanguage. This work centers on the modeling of information found in natural language specifications of digital systems. The target knowledge representation for the work is the conceptual graph, developed by John Sowa. Conceptual graphs provide a sound theoretical base as well as enough versatility to model the information found in digital system specifications. The transformation from natural language to conceptual graphs is done in two stages. In the first stage, a previously developed context-free English language parser is used to create trees of sentence structure. In the second stage, the trees are processed by a semantic analyzer, which uses a conceptual type hierarchy and a database of rules to extract the meaning from the English sentence and create the conceptual graph. The main work of this thesis centers on the semantic analyzer, which is written in Quintus Prolog. The semantic analyzer currently contains approximately 380 canonical conceptual graphs that cover usage of over 680 words, consisting of over 240 nouns and over 460 verbs. / Master of Science
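The conceptual type hierarchy at the heart of such a semantic analyzer can be sketched as a subtype lookup. The hierarchy and concept names below are invented examples; the thesis's actual catalogue of canonical graphs is written in Quintus Prolog and is not reproduced here.

```python
# Sketch of a conceptual type hierarchy as used during semantic analysis:
# each concept has at most one supertype, and type checking walks the chain.

SUPERTYPE = {                        # child -> parent (illustrative only)
    "register": "storage_element",
    "storage_element": "digital_component",
    "digital_component": "entity",
}

def is_subtype(concept, ancestor):
    """True if `concept` is `ancestor` or lies below it in the hierarchy."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUPERTYPE.get(concept)
    return False
```

During analysis, a rule matching a canonical graph would use such a check to decide whether a word's concept type fits a graph's type constraint.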
533

Selection of programming languages for structural engineering

Huxford, David C., Jr. 14 November 2012 (has links)
This thesis presents the concepts of structured programming and illustrates how they can be used to develop efficient, reliable programs and to aid in language selection. Topics are presented and used to compare several languages with each other rather than with some abstract ideal. Structured design is a set of concepts that aids the decomposition of a problem, using basic block structures, into manageable subproblems. Decomposition is a process whereby a large problem is broken into components that can be easily understood; the process continues until the smallest component can be represented by a unit of code performing a single action. By means of the four basic building blocks (atom, concatenation, selection, and repetition), one can produce a correct, well-structured program. In addition, the top-down and/or bottom-up approach can assist in producing a structured program that is easy to design, code, debug, modify, and maintain. These approaches minimize the number of bugs and the time spent in the debugging process. Various testing techniques supporting the structured programming process are presented to aid in determining a program's correctness. The languages compared must support structured programming. Microsoft FORTRAN, Microsoft QuickBASIC, Turbo Pascal, and Microsoft C are analyzed and compared on the basis of syntactic style, semantic structure, data types and manipulation, application facilities, and application requirements. Example programs are presented to reinforce these concepts. Frame programs are developed in these languages and used to assist in the language evaluation. / Master of Science
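The four building blocks named in the abstract can all appear in a few lines of code. The example is shown in Python for brevity; the thesis itself compares FORTRAN, QuickBASIC, Pascal, and C.

```python
# All four structured-programming building blocks in one small function.

def sum_positive(values):
    total = 0                  # atom: a single indivisible action
    for v in values:           # repetition: the loop block
        if v > 0:              # selection: the conditional block
            total += v         # concatenation: actions executed in sequence
    return total
```

Because the function is built only from these blocks, it has a single entry and a single exit, which is what makes such programs easy to test, debug, and maintain.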
534

Representation and simulation of a high level language using VHDL

Edwards, Carleen Marie 24 November 2009 (has links)
This paper presents an approach for representing and simulating High Level Languages (HLLs) using VHDL behavioral models. A graphical representation, the Data Flow Graph (DFG), is used as the base for the VHDL representation and simulation of a High Level Language (C). A package of behavioral models for the functional units of the High Level Language, as well as individual entities, has been developed using VHDL. An algorithm, Graph2VHDL, accepts a Data Flow Graph representation of a High Level Language and constructs a VHDL model for that graph. The representation of a High Level Language in VHDL frees users of custom computing platforms from the tedious job of developing a hardware model for a desired application. The algorithm also constructs a test file that is used with a pre-existing program, Test Bench Generation (TBG), to create a test bench for the VHDL model of a Data Flow Graph. The generated test bench is used to simulate the representation of the High Level Language in the Data Flow Graph format. Experimental results verify that the representations of the High Level Language in the Data Flow Graph format and in VHDL are correct. / Master of Science
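The intermediate form that an algorithm like Graph2VHDL consumes can be sketched as a list of data-flow nodes evaluated in topological order. The node names and operators below are illustrative; the actual C-to-DFG front end and VHDL emission are not described in the abstract.

```python
# A minimal data flow graph (DFG), evaluated in topological order. Each node
# names its operation and the nodes (or inputs) it reads from, mirroring the
# functional-unit structure a behavioral VHDL model would instantiate.
import operator

def eval_dfg(nodes, inputs):
    """nodes: ordered list of (name, op, operand_names); op is a callable."""
    values = dict(inputs)
    for name, op, args in nodes:
        values[name] = op(*(values[a] for a in args))
    return values

# Hypothetical DFG for y = (a + b) * c:
nodes = [
    ("t1", operator.add, ("a", "b")),
    ("y",  operator.mul, ("t1", "c")),
]
print(eval_dfg(nodes, {"a": 2, "b": 3, "c": 4})["y"])
```

A generator in the spirit of Graph2VHDL would walk the same node list and emit one VHDL entity instantiation per node, wiring signals along the graph edges.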
535

Static Performance Guarantees Through Storage Modes and Multi-Stage Programming

Anxhelo Xhebraj (18423639) 23 April 2024 (has links)
<p dir="ltr">This thesis explores two approaches to bridge the gap between expressiveness and performance in programming languages. It presents two complementary directions that reconcile performance and expressiveness: retrofitting static performance guarantees into an expressive general-purpose language like Scala, and transparently extending a restricted language like Datalog with features like polymorphism and user-defined functions while generating specialized and efficient code.</p><p dir="ltr">We first introduce storage modes, lightweight type qualifiers that enable statically tracking the lifetime of values in Scala programs. This enables stack-allocating them, reducing memory pressure on the garbage collector. A key novelty is delaying the popping of stack frames when returning dynamically-sized stack-allocated values, overcoming limitations of previous approaches.</p><p dir="ltr">Second, we present Flan, a Datalog compiler and embedding in Scala implemented as a multi-stage interpreter. Through staging, Flan can generate highly specialized code for different execution strategies and indexing structures, achieving performance on par with state-of-the-art domain-specific compilers like Soufflé. Moreover, by leveraging the host language, Flan enables seamless extension of Datalog with powerful features like user-defined functions, lattices and polymorphism, retaining the expressiveness of languages like Flix.</p><p dir="ltr">Overall, this work advances the design space for expressive and efficient programming languages, through lightweight annotations in an expressive host language, and transparent compiler specialization of an embedded domain-specific language. The techniques introduced bridge the gap between low-level performance and high-level programming abstractions.</p>
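The kind of Datalog program a staged compiler such as Flan specializes can be illustrated with naive bottom-up evaluation of transitive closure. This is a plain interpreter sketch in Python, not Flan's generated Scala code or its semi-naive strategy.

```python
# Naive bottom-up evaluation of the classic Datalog program:
#   path(x, y) :- edge(x, y).
#   path(x, z) :- path(x, y), edge(y, z).
# Facts are iterated to a fixpoint; a staging compiler would specialize this
# loop for a fixed rule set and index structure.

def transitive_closure(edges):
    path = set(edges)
    while True:
        new = {(x, z) for (x, y) in path for (y2, z) in edges if y == y2}
        if new <= path:
            return path
        path |= new
```

An embedding like Flan exposes the same relational program inside the host language, so user-defined functions and polymorphic relations come from Scala itself rather than from a bespoke Datalog extension.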
536

Constructs for the development of a computer simulation language for bulk material transportation systems

Watford, Bevlee A. January 1983 (has links)
The overall objective of this research is the development of a set of guidelines, or constructs, to assist in the formulation of a simulation language for bulk material transportation systems. Formulating these guidelines necessitated a thorough analysis of two areas: the simulation analysis procedure and bulk material transportation systems. For comprehension of the nature of simulation languages, Part II presents each step of the analysis procedure as examined from the language's perspective. Supporting this analysis, Part III presents a detailed review of selected simulation languages currently available to the systems analyst. Bulk material transportation systems are presented in Part IV; these systems are discussed in detail from the viewpoint of the mode of transportation, the transportation medium, and the types of bulk materials transported. The functional specifications, or constructs, for a bulk material transportation simulation language are presented in Part V. These specifications are categorized according to three areas: the system being described, the language form, and computer considerations. Utilizing these constructs, a simulation language may be developed for subsequent use by bulk material transportation systems analysts that will be a more appropriate choice for simulating their systems than any language currently available to them. / M.S.
537

Wasm-PBChunk: Incrementally Developing A Racket-To-Wasm Compiler Using Partial Bytecode Compilation

Perlin, Adam C 01 June 2023 (has links) (PDF)
Racket is a modern, general-purpose programming language with a language-oriented focus. To date, Racket has found notable uses in research and education, among other applications. To expand the reach of the language, there has been a desire to develop an efficient platform for running Racket in a web-based environment. WebAssembly (Wasm) is a binary executable format for a stack-based virtual machine, designed to provide a fast, efficient, and secure execution environment for code on the web. Wasm is primarily intended to be a compiler target for higher-level languages. Providing Wasm support for the Racket project may be a promising way to bring Racket to the browser. To this end, we present an incremental approach to the development of a Racket-to-Wasm compiler. We make use of an existing backend for Racket that targets a portable bytecode known as PB, along with the associated PB interpreter. We perform an ahead-of-time static translation of sections of PB into native Wasm, linking the chunks back to the interpreter before execution. By replacing portions of PB with native Wasm, we can eliminate some portion of interpretation overhead and move closer to native Wasm support for Chez Scheme (Racket's backend). Due to the use of an existing backend and interpreter, our approach already supports nearly all features of the Racket language, including delimited continuations, tail-calling behavior, and garbage collection, while excluding threading and FFI support for the time being. We perform benchmarks against a baseline to validate our approach, with promising results.
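The partial-compilation scheme can be sketched as an interpreter dispatch loop in which some instruction spans have been replaced ahead of time by precompiled chunks. The toy instruction set and chunk interface below are invented; PB's real instructions and the Wasm linking mechanism are not reproduced here.

```python
# Sketch of partial bytecode compilation: the interpreter falls back to
# instruction-by-instruction dispatch, but jumps into a "native" chunk
# (a plain Python callable here, standing in for compiled Wasm) whenever
# one has been linked in at the current program counter.

def run(program, chunks, env):
    """program: list of (op, arg); chunks: maps a pc to a precompiled chunk
    that executes several instructions at once and returns the next pc."""
    pc = 0
    while pc < len(program):
        if pc in chunks:                 # enter a precompiled chunk
            pc = chunks[pc](env)
            continue
        op, arg = program[pc]            # fall back to interpretation
        if op == "add":
            env["acc"] += arg
        elif op == "mul":
            env["acc"] *= arg
        pc += 1
    return env["acc"]

# Hypothetical program: acc = ((0 + 2) * 3) + 1
program = [("add", 2), ("mul", 3), ("add", 1)]

def chunk0(env):
    """Precompiled version of instructions 0-1; resumes interpretation at 2."""
    env["acc"] = (env["acc"] + 2) * 3
    return 2
```

The key correctness requirement, as in the thesis's approach, is that running with or without the chunks linked in produces identical results.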
538

Automatic Reasoning Techniques for Non-Serializable Data-Intensive Applications

Gowtham Kaki (7022108) 14 August 2019 (has links)
<div> <div> <div> <p>The performance bottlenecks in modern data-intensive applications have induced database implementors to forsake high-level abstractions and trade off simplicity and ease of reasoning for performance. Among the first casualties of this trade-off are the well-known ACID guarantees, which simplify reasoning about concurrent database transactions. ACID semantics have become increasingly obsolete in practice because serializable isolation, an integral aspect of ACID, is exorbitantly expensive. Databases, including the popular commercial offerings, default to weaker levels of isolation, under which the effects of concurrent transactions are visible to each other. Such weak isolation guarantees, however, are extremely hard to reason about and have led to serious safety violations in real applications. The problem is further complicated in a distributed setting with asynchronous state replication, where high availability and low latency requirements compel large-scale web applications to embrace weaker forms of consistency (e.g., eventual consistency) in addition to weak isolation. Given the serious practical implications of safety violations in data-intensive applications, there is a pressing need to extend the state of the art in program verification to reach non-serializable data-intensive applications operating in a weakly consistent distributed setting. </p> <p>This thesis sets out to do just that. It introduces new language abstractions, program logics, reasoning methods, and automated verification and synthesis techniques that collectively allow programmers to reason about non-serializable data-intensive applications in the same way as their serializable counterparts. The contributions made are broadly threefold. Firstly, the thesis introduces a uniform formal model to reason about weakly isolated (non-serializable) transactions on a sequentially consistent (SC) relational database machine. A reasoning method that relates the semantics of weak isolation to the semantics of the database program is presented, and an automation technique, implemented in a tool called ACIDifier, is also described. The second contribution of this thesis is a relaxation of the machine model from sequential consistency to a specifiable level of weak consistency, and a generalization of the data model from relational to schema-less, or key-value. A specification language to express weak consistency semantics at the machine level is described, and a bounded verification technique, implemented in a tool called Q9, is presented that bridges the gap between consistency specifications and program semantics, thus allowing high-level safety properties to be verified under arbitrary consistency levels. The final contribution of the thesis is a programming model, inspired by version control systems, that guarantees correct-by-construction <i>replicated data types</i> (RDTs) for building complex distributed applications with arbitrarily structured replicated state. A technique based on decomposing inductively defined data types into <i>characteristic relations</i> is presented, which is used to reason about the semantics of a data type under state replication and eventually derive its correct-by-construction replicated variant automatically. An implementation of the programming model, called Quark, on top of a content-addressable store is described, and the practicality of the programming model is demonstrated with the help of various case studies. </p> </div> </div> </div>
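The version-control-inspired programming model can be illustrated with the simplest possible replicated data type: a set merged three ways against a common ancestor. This is only a hand-written instance of the idea; Quark's mechanized derivation via characteristic relations is far more general.

```python
# Three-way merge for a replicated set, in the spirit of a version-control
# inspired RDT: elements that survive in both branches are kept, and each
# branch's additions relative to the common ancestor are preserved.

def merge(ancestor, left, right):
    """Merge two divergent replicas of a set given their common ancestor."""
    kept = left & right                  # survived deletion on both branches
    return kept | (left - ancestor) | (right - ancestor)
```

For example, if one replica adds 3 while another removes 1 and adds 4, the merge keeps both additions and honors the removal, which is exactly the convergence behavior a correct-by-construction RDT must guarantee.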
539

A tangible programming environment model informed by principles of perception and meaning

Smith, Andrew Cyrus 09 1900 (has links)
Designing a tangible programming environment that can be used by multiple persons and also individualised is a fundamental Human-Computer Interaction problem. The problem has its origin in the phenomenon that the meaning an object holds can vary across individuals; the Semiotics research domain studies the meaning objects hold. This research investigated a solution in which the user designs aspects of the environment after it has been made operational, when the development team is no longer available to implement the user's design requirements. Also considered is how objects can be positioned so that the collection of objects is interpreted as a program. I therefore explored how principles of the relative positioning of objects, as researched in the domains of Psychology and Art, could be applied to tangible programming environments. The study applied the Gestalt principle of perceptual grouping by proximity to the design of tangible programming environments to determine whether a tangible programming environment is possible in which the relative positions of personally meaningful objects define the program. I did this by applying the Design Science Research methodology, with five iterations and evaluations involving children. The outcome is a model of a tangible programming environment that includes Gestalt principles and Semiotic theory: Semiotic theory explains that the user can choose a physical representation of a program element that carries personal meaning, whereas the Gestalt principle of grouping by proximity predicts that objects can be arranged to appear as if linked to each other. / School of Computing / Ph. D. (Computer Science)
540

Preserving the separation of concerns while composing aspects with reflective AOP

Marot, Antoine 07 October 2011 (has links)
Aspect-oriented programming (AOP) is a programming paradigm to localize and modularize the concerns that tend to be tangled and scattered across traditional programming modules, like functions or classes. Such concerns are known as crosscutting concerns, and aspect-oriented languages propose to encapsulate them in modules called aspects. Because each crosscutting concern implemented in an aspect is separated from the other concerns, AOP improves the reusability, readability, and maintainability of code.

While it improves separation of concerns, AOP suffers from well-known composition issues. Aspects developed in isolation may interact with each other in ways that were not expected by the programmers and therefore lead to a program that does not meet its requirements. Without appropriate tools, undesired aspect interactions must be identified by reading code in order to gain global knowledge of the program and understand where and how aspects interact. Then, if the aspect language does not offer the needed support, these interactions must be resolved by invasively changing the code of the conflicting aspects to make them work together. Neither of these solutions is acceptable, since global knowledge as well as invasive, composition-specific modifications are exactly what separation of concerns seeks to avoid.

In this dissertation we show that the existing approaches to composing aspects are not entirely satisfying either with respect to separation of concerns. These approaches either rely on global knowledge and invasive modifications, which is problematic, or lack genericity and/or expressivity, which means that code reading and code modification may still be required for the aspect interactions they cannot handle.

To properly detect and resolve aspect interactions we propose a novel approach that is based on AOP itself. Since aspect composition is a concern that, by definition, crosscuts the aspects, it makes sense to expect that a technique for improving the separation of crosscutting concerns, such as AOP, is well suited to the task. The resulting mechanism is based on reflection principles and is called reflective AOP.

The main difference between "regular" AOP and reflective AOP lies in the parts of the system they address. While traditional AOP aims at modularizing the concerns that crosscut the base system, reflective AOP offers the possibility to handle the concerns that crosscut the aspects themselves. This is achieved by incorporating new kinds of joinpoints, pointcuts, and advice into the aspect language. These new elements, which form what we call a meta joinpoint model, are dedicated to the aspect level and enable programmers to reason about and act upon the semantics of aspects at runtime. As validated on numerous examples of aspect composition, having a well-designed and principled meta joinpoint model makes it possible to deal with both the detection and the resolution of composition issues in a way that preserves the separation of concerns principle. These examples are illustrated using Phase, our prototype reflective AOP language. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
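The idea of advising the aspect level itself can be loosely sketched with aspects as function wrappers and the weaver exposed as an advisable step. This Python analogy only illustrates the principle; Phase's actual meta joinpoint model operates on joinpoints, pointcuts, and advice at runtime and is much richer.

```python
# Aspects as higher-order wrappers, plus a "meta" hook on the weaver: a
# composition concern can observe and reorder the aspects applied to a
# function without editing any aspect's code.

applied_log = []

def logging_aspect(fn):
    def wrapper(*args):
        applied_log.append("log")        # advice: record every call
        return fn(*args)
    return wrapper

def caching_aspect(fn):
    cache = {}
    def wrapper(*args):
        if args not in cache:            # advice: memoize results
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

def weave(fn, aspects, meta_advice=None):
    """meta_advice is advice at the aspect level: it may reorder the aspects."""
    if meta_advice:
        aspects = meta_advice(aspects)
    for aspect in aspects:
        fn = aspect(fn)
    return fn

# Composition concern, expressed without touching either aspect: apply
# caching innermost so that even cached calls are still logged.
reorder = lambda aspects: sorted(aspects, key=lambda a: a is not caching_aspect)
f = weave(lambda x: x * x, [logging_aspect, caching_aspect], reorder)
```

The point of the analogy is that the ordering conflict is resolved in one dedicated place, rather than by invasively modifying the logging or caching aspect.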
