111
A Language-Based Approach to Robust Context-Aware Software / 堅牢な文脈認識ソフトウェア開発のためのプログラミング言語の研究 [A Study of Programming Languages for Robust Context-Aware Software Development]. Inoue, Hiroaki, 26 March 2018.
Degree program noted: Collaborative Graduate Program in Design / Kyoto University / 0048 / New system, doctoral course / Doctor of Informatics / Kou No. 21217 / Joho-haku No. 670 / 新制||情||115 (University Library) / Kyoto University Graduate School of Informatics, Department of Communications and Computer Engineering / (Chief examiner) Professor Atsushi Igarashi, Professor Toru Ishida, Professor Akihiro Yamamoto / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
112
Information Extraction of Technical Details From Scholarly Articles. Kaushal, Kulendra Kumar, 16 June 2021.
Researchers have made significant progress in information extraction from short documents in the last few years, including social media interactions, news articles, and email excerpts. This research aims to extract technical entities like hardware resources, computing platforms, compute time, programming languages, and libraries from scholarly research articles. Research articles are generally long documents containing both salient and non-salient entities. Analyzing cross-sectional relations, filtering the relevant information, measuring the saliency of mentioned entities, and extracting novel entities are some of the technical challenges involved in this research. This work presents a detailed study of the performance, effectiveness, and scalability of rule-based weakly supervised algorithms. We also develop an automated end-to-end Research Entity and Relationship Extractor (E2R Extractor). Additionally, we perform a comprehensive study of the effectiveness of existing deep learning-based information extraction tools like DyGIE, DyGIE++, and SciREX. The research also contributes a dataset containing novel entities annotated in BILUO format and presents baseline results using the E2R Extractor on the proposed dataset. The results indicate that the E2R Extractor successfully extracts salient entities from research articles. / Master of Science / Information extraction is the process of automatically extracting meaningful information from unstructured text, such as articles and news feeds, and presenting it in a structured format.
Researchers have made significant progress in this domain over the past few years.
However, their work primarily focuses on short documents such as social media interactions, news articles, and email excerpts, rather than on long documents such as scholarly articles and research papers. Long documents contain a lot of redundant data, so filtering them and extracting meaningful information is quite challenging. This work focuses on extracting entities such as hardware resources, compute platforms, and programming languages used in scholarly articles.
We present a deep learning-based model to extract such entities from research articles and research papers. We evaluate the performance of our deep learning model against simple rule-based algorithms and other state-of-the-art models for extracting the desired entities.
Our work also contributes a labeled dataset containing the entities mentioned above and results obtained on this dataset using our deep learning model.
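For context, BILUO tagging marks each token as the Beginning, Inside, or Last token of a multi-token entity, as a single-token Unit, or as Outside any entity. A minimal sketch of producing such tags, assuming whitespace tokenization and an invented sentence with invented entity spans (the thesis's actual dataset and tooling are not reproduced here):

```python
def biluo_tags(text, entities):
    """Map character-offset entity spans to per-token BILUO tags.

    entities: list of (start, end, label) character spans, assumed
    non-overlapping and aligned to token boundaries. Tokens are taken
    to be whitespace-separated; a real pipeline would use a tokenizer.
    """
    tokens, pos = [], 0
    for tok in text.split():
        start = text.index(tok, pos)
        tokens.append((tok, start, start + len(tok)))
        pos = start + len(tok)

    tags = []
    for tok, t_start, t_end in tokens:
        tag = "O"
        for e_start, e_end, label in entities:
            if e_start <= t_start and t_end <= e_end:
                first, last = t_start == e_start, t_end == e_end
                if first and last:
                    tag = "U-" + label   # Unit: single-token entity
                elif first:
                    tag = "B-" + label   # Begin
                elif last:
                    tag = "L-" + label   # Last
                else:
                    tag = "I-" + label   # Inside
        tags.append(tag)
    return tags

text = "We trained the model on a single NVIDIA V100 GPU using Python"
ents = [(33, 48, "HARDWARE"), (55, 61, "LANGUAGE")]  # invented spans
print(list(zip(text.split(), biluo_tags(text, ents))))
# ... ('NVIDIA', 'B-HARDWARE'), ('V100', 'I-HARDWARE'),
#     ('GPU', 'L-HARDWARE'), ('using', 'O'), ('Python', 'U-LANGUAGE')]
```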
113
Source Code Readability: A study on type-declaration and programming knowledge. Lennartsson, Caesar, January 2022.
The readability of source code is essential for software maintenance. Since maintenance is an ongoing process, estimated at 70 percent of the software development life cycle's total cost, it cannot be deprioritized. The readability of source code is likely to affect program comprehension, which may help or hinder the maintenance of the software. How different code features and functions affect the readability of source code has previously been investigated, and readability metrics have been developed. The project was initiated because of the lack of research on how programming knowledge, and static as opposed to dynamic typing, affect the readability of source code. A survey was conducted with 21 computer science students of varying programming knowledge, each rating eight code snippets, for a total of 168 ratings. The results showed that the type of programming language could improve the readability of source code. The results also showed that programming knowledge does not correlate with the ability to read source code.
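For illustration, a hypothetical pair of snippets of the kind such a survey can contrast: the same function with and without explicit type declarations. The study's actual snippets and languages are not reproduced here; Python's optional annotations merely stand in for the declared/undeclared distinction.

```python
# Without explicit type declarations (dynamically typed style):
def total_price(items, tax_rate):
    return sum(price for _, price in items) * (1 + tax_rate)

# With explicit type declarations (statically typed style, Python 3.9+):
def total_price_typed(items: list[tuple[str, float]], tax_rate: float) -> float:
    return sum(price for _, price in items) * (1 + tax_rate)

print(total_price([("book", 12.0), ("pen", 3.0)], 0.25))        # 18.75
print(total_price_typed([("book", 12.0), ("pen", 3.0)], 0.25))  # 18.75
```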
114
Making scope explorable in Software Development Environments to reduce defects and support program understanding. von Oldenburg, Tim, January 2014.
Programming language tools help software developers to understand a program and to recognize possible pitfalls. Used with the right knowledge, they can help achieve better software quality. However, creating language tools that integrate well into the development environment and workflow is challenging. This thesis uses a user-centered design process to identify the needs of professional developers through in-depth interviews, address those needs through a concept, and finally implement and evaluate that concept. Taking 'scope' as an exemplary source of misconceptions in programming, a "Scope Inspector" plug-in for the Atom IDE, targeting experienced JavaScript developers in the open source community, is implemented.
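As a concrete instance of the kind of scope misconception such a tool could make explorable (an illustration of ours, not an example from the thesis, whose plug-in targets JavaScript), consider late-binding closures:

```python
# Each lambda captures the variable i itself, not its value at creation
# time; by the time the callbacks run, the loop is done and i == 2.
callbacks = [lambda: i for i in range(3)]
print([f() for f in callbacks])    # [2, 2, 2], often a surprise

# Binding the current value explicitly via a default argument fixes it:
callbacks = [lambda i=i: i for i in range(3)]
print([f() for f in callbacks])    # [0, 1, 2]
```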
115
Program analysis for quantitative-reachability properties. Liu, Jiawen, 06 September 2024.
Program analysis studies the execution behaviors of computer programs, including their safety behavior, privacy behavior, resource usage, etc. Analyzing the safety behavior of a program involves determining whether a particular line of code leaks a secret and how much secret is leaked by that line. When studying the resource usage of a program, certain program analyses focus on whether a piece of code consumes a certain resource and how much of that resource it uses. Yet another kind of program analysis studies a program's privacy behavior by analyzing whether specific private data depends on other data and how many times they are dependent across multiple executions. We notice that when studying the aforementioned behaviors, there are two dominant program properties being analyzed – "How Much" and "Whether" – namely quantitative properties and reachability properties. In other words, we are analyzing a kind of program property that contains two sub-properties: quantitative and reachability. A property is a hyperproperty if it has two or more sub-properties. For the class of properties that have quantitative and reachability sub-properties, I refer to them as quantitative-reachability hyperproperties. Most existing program analysis methods can analyze only one sub-property of a program's quantitative-reachability hyperproperty. For example, reachability analysis methods only tell us whether some code pieces are executed, whether confidential data is leaked, whether certain data relies on other data, etc., which are only the reachability sub-properties. These methods do not address how many times or how long these properties hold with respect to particular code or data. Quantitative analysis methods, such as program complexity analysis, resource cost analysis, and execution time estimation, only tell us an upper bound on the overall quantity, i.e., the quantitative sub-property. However, these quantities are not associated with a specific piece of code, program location, or private data, which are related to the reachability sub-properties. This thesis presents a new program analysis methodology for analyzing two representative quantitative-reachability properties. The new methodology mitigates the limitations of both reachability analysis methods and quantitative analysis methods and helps to control a program's execution behavior at a finer granularity. The effectiveness of the new analysis method is validated through prototype implementations and experimental evaluations.
The first noteworthy quantitative-reachability property I look into is the adaptivity of programs that implement certain adaptive data analyses. Data analyses are usually designed to identify some properties of the population from which the data are drawn, generalizing beyond the specific data sample. For this reason, data analyses are often designed in a way that guarantees that they produce a low generalization error. An adaptive data analysis can be seen as a process composed of multiple queries interrogating some data, where the choice of which query to run next may rely on the results of previous queries. The generalization error of each individual query/analysis can be controlled by using an array of well-established statistical techniques. However, when queries are arbitrarily composed, the different errors can propagate through the chain of queries and result in a high generalization error. To address this issue, data analysts have designed several techniques that not only guarantee bounds on the generalization errors of single queries, but also guarantee bounds on the generalization error of the composed analyses. The choice of which of these techniques to use often depends on the chain of queries that an adaptive data analysis can generate – intuitively, the adaptivity level of the analysis. To help analysts identify which technique to use to control their generalization error, we consider adaptive data analyses implemented as while-like programs, and we design a program analysis framework. In this framework, we first formalize the intuitive notion of adaptivity as a quantitative-reachability property, a key measure for choosing the appropriate technique. Then we design a program analysis algorithm that estimates a sound upper bound on the adaptivity of the program implementing an adaptive data analysis. We also implement this program analysis and show that it can help analyze the adaptivity of several concrete data analyses with different adaptivity structures.
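A toy sketch of what makes an analysis adaptive, with invented queries and data: each round's query depends on the previous round's answer, so the queries form a dependency chain whose length is, intuitively, the adaptivity that the static analysis described above bounds.

```python
import random

def adaptive_analysis(data, rounds):
    """Run a chain of dependent queries: round k's threshold is chosen
    from round k-1's answer, so no query can be issued in advance.
    The length of this chain is the adaptivity of the analysis."""
    threshold, answers = 0.5, []
    for _ in range(rounds):
        # Query: what fraction of the sample lies above the threshold?
        answer = sum(1 for x in data if x > threshold) / len(data)
        answers.append(answer)
        threshold = answer   # adaptivity: next query depends on this result
    return answers

sample = [random.random() for _ in range(1000)]
print(adaptive_analysis(sample, rounds=5))
```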
As a continuation of the previous work, to get a more precise bound on a program's adaptivity level, I look at another quantitative-reachability hyperproperty – the number of times a given location inside a procedure is visited during program execution. The upper bound on this hyperproperty is referred to as the reachability-bound. It can help improve program analysis results when studying other program features. For example, the reachability-bound on each program location can be used by resource cost analysis techniques to compute a precise bound on a program's worst-case resource consumption. When analyzing the adaptivity of an adaptive data analysis program as discussed above, the accuracy of the analysis can also be improved through a tight reachability-bound on every program location. Some existing program complexity analysis methods can be repurposed to estimate the reachability-bound. However, these methods focus only on the overall quantity and ignore path sensitivity in the program. For this reason, the reachability-bounds of locations in different sub-procedures are usually over-approximated. As far as we know, there is no general analysis algorithm that computes the reachability-bound for every program location directly and path-sensitively. To this end, I present a path-sensitive reachability-bound algorithm, which exploits path sensitivity to compute a precise reachability-bound for every program location. We implement this algorithm in a prototype and report on an experimental comparison with state-of-the-art tools over four different sets of benchmarks.
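To make the notion concrete, a hypothetical instrumented program whose per-location visit counts are recorded at run time; a reachability-bound analysis derives such bounds statically, and path sensitivity is what lets it conclude that the two branch locations split the loop's n iterations between them rather than each independently reaching n.

```python
from collections import Counter

visits = Counter()

def program(n):
    i = 0
    while i < n:
        visits["L1"] += 1        # L1: reachability-bound n
        if i % 2 == 0:
            visits["L2"] += 1    # L2: reachability-bound ceil(n/2)
        else:
            visits["L3"] += 1    # L3: reachability-bound floor(n/2)
        i += 1

program(10)
print(visits)   # Counter({'L1': 10, 'L2': 5, 'L3': 5})
```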
116
LF: a language for reliable embedded systems. Van Riet, F. A., 2001.
Thesis (MSc)--University of Stellenbosch, 2001. / ENGLISH ABSTRACT: Computer-aided verification techniques, such as model checking, are often considered essential to produce highly reliable software systems. Modern model checkers generally require models to be written in CSP-like notations. Unfortunately, such systems are usually implemented using conventional imperative programming languages. Translating the one paradigm into the other is a difficult and error-prone process.

If one were to program in a process-oriented language from the outset, the chasm between implementation and model could be bridged more readily. This would lead to more accurate models and ultimately more reliable software.

This thesis covers the definition of a process-oriented language targeted specifically towards embedded systems and the implementation of a suitable compiler and run-time system. The language, LF, is for the most part an extension of the language Joyce, which was defined by Brinch Hansen. Both LF and Joyce have features which I believe make them easier to use than other CSP-based languages such as occam. An example of this is a selective communication primitive which allows for both input and output guards, which is not supported in occam.

The efficiency of the implementation is important. The language was therefore designed to be expressive, but constructs which are expensive to implement were avoided. Security, however, was the overriding consideration in the design of the language and runtime system.

The compiler produces native code. Most other CSP-derived languages are either interpreted or execute as tasks on a host operating system, arguably because most implementations of CSP and derivations thereof are for academic purposes only. LF is intended to be an implementation language.

The performance of the implementation is evaluated in terms of practical metrics such as the time needed to complete communication operations and the average time needed to service an interrupt.
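A rough sketch of the selective-communication primitive described in the abstract, with input and output guards participating in one alternative. This busy-polling toy over bounded queues only illustrates the semantics; LF itself compiles such constructs to native code with proper synchronization.

```python
import queue
import time

def select_alt(input_channels, output_offers, poll=0.001):
    """Commit to exactly one ready alternative: receive from an input
    channel that has data, or send a pending value on an output channel
    with spare capacity. Both guard kinds participate, which occam's
    ALT (input guards only) does not allow."""
    while True:
        for ch in input_channels:
            try:
                return ("recv", ch.get_nowait())
            except queue.Empty:
                pass
        for ch, value in output_offers:
            try:
                ch.put_nowait(value)
                return ("sent", value)
            except queue.Full:
                pass
        time.sleep(poll)   # nothing ready; back off briefly and re-poll

a = queue.Queue(maxsize=1)
b = queue.Queue(maxsize=1)
b.put("hello")                            # make b's input guard ready
print(select_alt([b], [(a, "world")]))    # ('recv', 'hello')
```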
117
A semantics for aspects by compositional translation. Sanjabi, Sam Bakhtiar, January 2008.
We analyse the semantics of aspect-oriented extensions to functional languages by presenting compositional translations of these primitives into languages with traditional notions of state and control. As a first step, we examine an existing semantic description of aspects which allows the labelling of program points. We show that a restriction of these semantics to aspects which do not preempt the execution of code can be fully abstractly translated into a functional calculus with higher order references, but that removing this restriction requires a notion of exception handling to be added to the target language in order to yield a sound semantics. Next, we proceed to show that abandoning the labelling technique, and consequently relaxing the so-called "obliviousness" property of aspectual languages, allows preemptive aspects to be included in the general references model without the need for exceptions. This means that the game model of general references is inherited by the aspect calculus. The net result is a clean semantic description of aspect-orientation, which mirrors recently published techniques for their implementation, and thereby provides theoretical justification for these systems. The practical validity of our semantics is demonstrated by implementing extensions to the basic calculus in Standard ML, and showing how a number of useful aspect-oriented features can be expressed using general references alone. Our theoretical methodology closely follows the proof structure that often appears in the game semantics literature, and therefore provides an operational perspective on notions such as "bad variables" and factorisation theorems.
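The flavour of that encoding can be sketched by routing every call through a mutable reference cell that holds the function's current meaning, so that installing advice is just an assignment to the cell. This is a loose Python analogue of expressing aspects with general references; the thesis's own demonstration uses Standard ML and a precise game-semantic model.

```python
# A one-element list acts as a reference cell holding the current
# meaning of fib; recursive calls also go through the cell, so advice
# intercepts them too.
def base_fib(n):
    return n if n < 2 else fib_ref[0](n - 1) + fib_ref[0](n - 2)

fib_ref = [base_fib]

def fib(n):
    return fib_ref[0](n)

# Installing "around" advice: swap in a wrapper that observes the call
# and delegates to the old meaning.
def add_tracing():
    old = fib_ref[0]
    def traced(n):
        result = old(n)
        print(f"fib({n}) = {result}")
        return result
    fib_ref[0] = traced

add_tracing()
fib(4)   # prints a trace line for every call, including recursive ones
```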
118
The Development and Validation of a Computer-Aided Instructional Program in Mathematics for Business and Economics Majors. McCool, Kenneth Bland, 1942-.
The problem with which this study is concerned is that of comparing the results of teaching community college students enrolled in a transferable mathematics sequence for business and economics majors by a computer-aided instructional program and by the traditional lecture method. In order to resolve this problem, an A Programming Language System/360 (APL/360)-aided instructional program was developed and an experimental study was conducted. The APL/360-aided instructional program consisted of three sets of materials: a manuscript on APL/360, a list of APL programs defining operators relevant to a computer-aided study of calculus, and a collection of problems based on these programs and calculus concepts. The subjects for the experiment were forty-four students enrolled in three sections of Mathematics 112 at Mountain View College of the Dallas County Community College District. The control group, taught by the traditional lecture method, consisted of twenty-one students. The experimental group, taught by the APL/360-aided instructional program, consisted of twenty-three students. The same instructor taught all students. The essential difference between the two teaching methods was the use of the computer as a teaching-learning aid in the computer-aided instructional program. The computer was a supplement to classroom instruction and aided students in gaining insight into the nature of mathematical concepts, as well as serving as a computational aid.
119
Desenvolvimento de hardware e software para viabilizar a operação de um microdensitômetro / Development of hardware and software to operate a microdensitometer. Marques, Márcio Alexandre, 22 September 1992.
The present work was developed to enable the operation of the Optronics P-1000 microdensitometer using an IBM-PC compatible microcomputer. A hardware interface was developed, as well as all the software needed to operate the equipment and perform data acquisition. This software also provides interactive visualization of the images, used to define regions of interest on the film.
120
EXTRACT: Extensible Transformation and Compiler Technology. Calnan, Paul W., III, 29 April 2003.
Code transformation is widely used in programming. Most developers are familiar with using a preprocessor to perform syntactic transformations (symbol substitution and macro expansion). However, it is often necessary to perform more complex transformations using semantic information contained in the source code. In this thesis, we developed EXTRACT, a general-purpose code transformation language. Using EXTRACT, it is possible to specify, in a modular and extensible manner, a variety of transformations on Java code such as insertion, removal, and restructuring. In support of this, we also developed JPath, a path language for identifying portions of Java source code. Combined, these two technologies make it possible to identify source code that is to be transformed and then specify how that code is to be transformed. We evaluate our technology using three case studies: a type name qualifier, which transforms Java class names into fully-qualified class names; a contract checker, which enforces pre- and post-conditions across behavioral subtypes; and a code obfuscator, which mangles the names of a class's methods and fields such that they cannot be understood by a human, without breaking the semantic content of the class.
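An analogous transformation written against Python's standard ast module gives the flavour of the approach: locate nodes matching a pattern, then rewrite them. EXTRACT and JPath themselves target Java with their own specification language, so this sketch is an analogy rather than the thesis's system.

```python
import ast

class NameQualifier(ast.NodeTransformer):
    """Rewrite bare names into qualified ones, loosely analogous to
    EXTRACT's type name qualifier for Java class names."""
    def __init__(self, qualifications):
        self.qualifications = qualifications   # e.g. {"sqrt": "math.sqrt"}

    def visit_Name(self, node):
        # Only rewrite reads of the name, never assignment targets.
        if isinstance(node.ctx, ast.Load) and node.id in self.qualifications:
            new = ast.parse(self.qualifications[node.id], mode="eval").body
            return ast.copy_location(new, node)
        return node

source = "y = sqrt(x) + sqrt(z)"
tree = NameQualifier({"sqrt": "math.sqrt"}).visit(ast.parse(source))
print(ast.unparse(ast.fix_missing_locations(tree)))
# y = math.sqrt(x) + math.sqrt(z)
```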