  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Comparing XML Documents as Reference-aware Labeled Ordered Trees

Mikhaiel, Rimon A. E. Unknown Date
No description available.
2

Development of a Track Editing System for Use with Maps on Smartphones

Kostov, Viktor, Slyusar, Andriy January 2012
No description available.
3

Approaching intonational distance and change

Sullivan, Jennifer Niamh January 2011
The main aim of this thesis is to begin to extend phonetic distance measurements to the domain of intonation. Existing studies of segmental phonetic distance have strong associations with historical linguistic questions. I begin with this context and demonstrate problems with the use of feature systems in these segmental measures. I then attempt to draw together strands from the disparate fields of quantitative historical linguistics and intonation. The intonation of Belfast and Glasgow English provides a central case study: previous work suggests that both varieties display nuclear rises on statements, yet they have never been formally compared. This thesis presents two main hypotheses on the source of these statement rises: the Alignment hypothesis and the Transfer hypothesis. The Alignment hypothesis posits that statement rises were originally more typical statement falls that have changed into rises over time through gradual phonetic change in the location of the pitch peak. The Transfer hypothesis holds that statement rises have come about through pragmatic transfer of rises onto a statement context, either from question rises or from continuation rises. I evaluate these hypotheses using the primary parameters of alignment and scaling as phonetic distance measurements. The main data set consists of recordings from three Belfast English and three Glasgow English speakers in a sentence-reading task and a map task. The results crucially indicate that the origins of the statement rises in Belfast and Glasgow English may differ. The Glasgow statement nuclear tones support the Alignment hypothesis, while the Belfast nuclear tones fit best with the Transfer hypothesis. The fundamental differences between Glasgow and Belfast are the earlier alignment of the peak (H) in Glasgow and the presence of a final low (L) tonal target in Glasgow against a final high (H) target in Belfast. The scaling of the final H in Belfast statements suggests that the transfer may be from continuation rises rather than from question rises. I then propose an overall measure of intonational distance, highlighting problems with parameter weighting, with comparing like with like, and with distinguishing chance resemblance from genuine historical connection. The thesis concludes with an assessment of the benefits that intonational analysis could bring to improving segmental phonetic distance measures.
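The two distance parameters the thesis works with, peak alignment and scaling, invite a simple computational reading. Purely as an illustration (the class, field names, weights, and numbers below are hypothetical, not the thesis's actual measure), a two-parameter intonational distance might be sketched like this:

```python
# Illustrative sketch only: a toy intonational distance over the two
# parameters the thesis uses (peak alignment and scaling). The weights
# and field names are hypothetical, not the measure proposed in the thesis.
from dataclasses import dataclass

@dataclass
class NuclearTone:
    peak_alignment: float  # peak (H) position, as a proportion of the accented syllable
    peak_scaling: float    # peak height in semitones relative to a reference level

def intonational_distance(a: NuclearTone, b: NuclearTone,
                          w_align: float = 0.5, w_scale: float = 0.5) -> float:
    """Weighted combination of alignment and scaling differences.

    Parameter weighting is exactly the open problem the thesis notes:
    nothing in the data dictates how w_align and w_scale should trade off.
    """
    return (w_align * abs(a.peak_alignment - b.peak_alignment)
            + w_scale * abs(a.peak_scaling - b.peak_scaling))

# Example: a Glasgow-style earlier peak vs. a Belfast-style later, higher peak.
glasgow = NuclearTone(peak_alignment=0.3, peak_scaling=2.0)
belfast = NuclearTone(peak_alignment=0.7, peak_scaling=4.5)
print(intonational_distance(glasgow, belfast))
```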
4

Automating program transformations based on examples of systematic edits

Meng, Na 16 January 2015
Programmers make systematic edits: similar, but not identical, changes to multiple places during software development and maintenance in order to add features and fix bugs. Finding all the correct locations and making the edits correctly is a tedious and error-prone process. Existing tools for automating systematic edits are limited because they do not create general-purpose edit scripts or suggest edit locations, except for specialized or trivial edits. Since many similar changes occur in similar contexts (in code with similar surrounding dependence relations and syntactic structures), there is an opportunity to automate program transformations based on examples of systematic edits. By inferring systematic edits and relevant context from one or more exemplar changes, automated approaches can (1) apply similar changes to other locations, (2) locate code that requires similar changes, and (3) refactor code which undergoes systematic edits. This thesis seeks to improve programmer productivity and software correctness by automating parts of systematic editing and refactoring. Applying similar, but not identical, code changes to multiple locations with similar contexts requires (1) understanding and relating common program context (a program's syntactic structure, control, and data flow) relevant to the edits in order to propagate code changes from one location to others, and (2) recognizing differences between locations in order to customize code changes for each location. Prior approaches for propagating nontrivial, general-purpose code changes from one location to another either do not observe the program context when placing edits, or do not handle the differences between locations when customizing edits, producing syntactically invalid or incorrectly modified programs. We design a novel technique and implement it in a tool called Sydit. Our approach first creates an abstract, context-aware edit script which contains a syntax subtree enclosing the exemplar edit, with all concrete identifiers abstracted, and a sequence of edit operations. It then applies the edit script to user-selected locations by establishing both context matching and identifier matching to correctly place and customize the edit. Although Sydit is effective in helping developers correctly apply edits to multiple locations, programmers are still on their own to identify all the appropriate locations. When developers omit some of the locations, the edit script inferred from a single code location is not always well suited to help them find the rest. One approach is to infer the edit script with its concrete context, but the resulting script is too specific to the source location and can only identify locations containing syntax trees identical to it (false negatives). Another approach is to encode the context with all identifiers abstracted, but the resulting edit script may match too many locations (false positives). To suggest edit locations, we instead use multiple examples to create a partially abstract, context-aware edit script, and use this script both to find edit locations and to transform the code. Our experiments show that edit scripts from multiple examples have high precision and recall in finding edit locations and high accuracy when applying systematic edits, because the common context extracted together with the common concrete identifiers identified across examples improves the location search without sacrificing edit-application accuracy.
For systematic edits which insert or update duplicated code, our systematic editing approaches may encourage developers in the bad practice of creating or evolving duplicated code. We therefore investigate and evaluate an approach that automatically refactors cloned code based on the extent of systematic edits, by factoring out common code and parameterizing the differences. Our investigation finds that refactoring systematically edited code is not always feasible or desirable. When refactoring is desirable, systematic edits offer a better way to scope the refactoring than whole-method refactoring. Automatic clone-removal refactoring cannot obviate the need for systematic editing: developers need tool support for both automatic refactoring and systematic editing. Based on the systematic changes already made by developers for a subset of change locations, our automated approaches facilitate propagating general-purpose systematic changes across large programs, identifying locations requiring systematic changes that developers missed, and refactoring code undergoing systematic edits to reduce code duplication and future repetitive code changes. The combination of these techniques opens a new way of helping developers automate tedious and error-prone tasks when they add features, fix bugs, and maintain software. These techniques also have the potential to guide automated software development and maintenance activities based on existing code changes mined from version histories for bug fixes, feature additions, refactoring, and software migration.
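As a rough illustration of what an abstract, context-aware edit script involves (this is not Sydit's implementation, which works on syntax trees; every name below is invented for the sketch), the core idea of abstracting identifiers from an exemplar edit and re-binding them at each target location can be sketched over plain strings:

```python
# Illustrative sketch, not Sydit's implementation: an "abstract edit script"
# as a pattern with abstracted identifiers ($V0, $V1, ...) plus a textual
# replacement, applied by binding placeholders to each location's names.
import re

class AbstractEditScript:
    def __init__(self, context_pattern: str, replacement: str):
        # $-placeholders stand for concrete identifiers abstracted away.
        self.context_pattern = context_pattern
        self.replacement = replacement

    def apply(self, code: str) -> str | None:
        # Turn the abstract pattern into a regex; each placeholder becomes
        # a capture group, so identifier matching customizes the edit.
        placeholders = re.findall(r"\$\w+", self.context_pattern)
        regex = re.escape(self.context_pattern)
        for i, ph in enumerate(placeholders):
            regex = regex.replace(re.escape(ph), rf"(?P<g{i}>\w+)", 1)
        m = re.search(regex, code)
        if m is None:
            return None  # context does not match: do not place the edit
        out = self.replacement
        for i, ph in enumerate(placeholders):
            out = out.replace(ph, m.group(f"g{i}"))
        return code[:m.start()] + out + code[m.end():]

# Exemplar edit: wrap a close() call in a null check, whatever the receiver.
script = AbstractEditScript("$V0.close();", "if ($V0 != null) $V0.close();")
print(script.apply("stream.close();"))  # -> if (stream != null) stream.close();
print(script.apply("sock.close();"))    # -> if (sock != null) sock.close();
```

The sketch shows the two matching steps the abstract names: context matching (the pattern must be found at the target location) and identifier matching (placeholders are bound to the target's own names so the edit is customized per location).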
5

PiaNote: A Sight-Reading Program That Algorithmically Generates Music Based on Human Performance

Schulz, Drew 01 June 2016
Sight-reading is the act of performing a piece of music at first sight. It is a difficult skill to master because it requires extensive knowledge of music theory, practice, quick thinking, and, most importantly, a wide variety of musical material: a musician can only truly sight-read a piece once, so effective practice demands not only many resources but also pieces that are challenging yet within the player's abilities. This thesis presents PiaNote, a sight-reading web application for pianists that algorithmically generates music based on human performance. PiaNote's goal is to alleviate some of the hassles pianists face when sight-reading. PiaNote presents musicians with algorithmically generated pieces, ensuring that a musician never sees the same piece of music twice. PiaNote also monitors player performances in order to intelligently present music that is challenging but within the player's abilities. As a result, PiaNote offers a sight-reading experience that is tailored to the player. On a broader level, this thesis explores different methods for effectively creating a sight-reading application. We evaluate PiaNote with a user study involving novice piano players, who actively practice with PiaNote over three fifteen-minute sessions. At the end of the study, users are asked whether PiaNote is an effective practice tool that improves both their confidence in sight-reading and their sight-reading abilities. Results suggest that PiaNote does improve users' sight-reading confidence and abilities, but further research must be conducted to validate PiaNote's effectiveness clearly. We conclude that PiaNote has the potential to become an effective sight-reading application with slight improvements and further research.
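The abstract describes two cooperating pieces: a generator that never repeats material and a monitor that keeps difficulty inside the player's reach. As a hedged sketch only (PiaNote's actual algorithm is not described in the abstract; all names and thresholds below are invented), an adaptive loop of that shape could look like:

```python
# Illustrative sketch only, not PiaNote's algorithm: adapt the difficulty
# of generated exercises to measured performance, so material stays
# challenging but within the player's abilities.
import random

def next_difficulty(current: float, accuracy: float,
                    target: float = 0.8, step: float = 0.1) -> float:
    """Nudge difficulty up when the player beats the target accuracy,
    down when they fall short; clamp to [0, 1]."""
    if accuracy > target:
        current += step
    elif accuracy < target:
        current -= step
    return max(0.0, min(1.0, current))

def generate_exercise(difficulty: float, length: int = 8) -> list[int]:
    """Generate a fresh melody as MIDI pitches; higher difficulty widens
    the allowed interval between consecutive notes."""
    max_interval = 1 + int(difficulty * 11)  # stepwise motion up to an octave
    pitches = [60]  # start at middle C
    for _ in range(length - 1):
        pitches.append(pitches[-1] + random.randint(-max_interval, max_interval))
    return pitches

difficulty = 0.5
for session_accuracy in (0.9, 0.95, 0.6):  # simulated session results
    difficulty = next_difficulty(difficulty, session_accuracy)
    print(difficulty, generate_exercise(difficulty))
```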
6

Investigating and Recommending Co-Changed Entities for JavaScript Programs

Jiang, Zijian January 2020
JavaScript (JS) is one of the most popular programming languages due to its flexibility and versatility, but debugging JS code is tedious and error-prone. In our research, we conducted an empirical study to characterize the relationship between co-changed software entities (e.g., functions and variables), and built a machine learning (ML)-based approach to recommend additional entities to edit given developers' code changes. Specifically, we first crawled 14,747 commits in 10 open-source projects; for each commit, we created one or more change dependency graphs (CDGs) to model the referencer-referencee relationship between co-changed entities. Next, we extracted the common subgraphs between CDGs to locate recurring co-change patterns between entities. Finally, based on those patterns, we extracted code features from co-changed entities and trained an ML model that recommends entities to change given a program commit. According to our empirical investigation, (1) 50% of the crawled commits involve multi-entity edits (i.e., edits that touch multiple entities simultaneously); (2) three recurring patterns commonly exist in all projects; (3) 80–90% of co-changed function pairs either invoke the same function(s), access the same variable(s), or contain similar statement(s); and (4) our ML-based approach, CoRec, recommended entity changes with high accuracy. This research will improve programmer productivity and software quality. / M.S. / This thesis introduces a tool, CoRec, which can provide co-change suggestions when JavaScript programmers fix a bug. A comprehensive empirical study was carried out on 14,747 multi-entity bug fixes in ten open-source JavaScript programs. We characterized the relationship between co-changed entities (e.g., functions and variables) and extracted the most popular change patterns, based on which we built a machine learning (ML)-based approach to recommend additional entities to edit given developers' code changes. Our empirical study shows that: (1) 50% of the crawled commits involve multi-entity edits (i.e., edits that touch multiple entities simultaneously); (2) three change patterns commonly exist in all ten projects; (3) 80–90% of co-changed function pairs in the three patterns either invoke the same function(s), access the same variable(s), or contain similar statement(s); and (4) our ML-based approach CoRec recommended entity changes with high accuracy. Our research will improve programmer productivity and software quality.
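As an illustrative sketch of the change-dependency-graph idea (not CoRec's implementation; the data shapes and names below are invented), one commit's CDG can be modeled as the set of referencer-to-referencee edges restricted to the entities the commit touched:

```python
# Illustrative sketch, not CoRec itself: model one commit's co-changed
# entities as a change dependency graph (CDG) whose edges record
# referencer -> referencee relations, then tally recurring edge kinds.
from collections import Counter

def build_cdg(changed_entities: set[str],
              references: list[tuple[str, str]]) -> set[tuple[str, str]]:
    """Keep only reference edges whose endpoints were both changed."""
    return {(src, dst) for src, dst in references
            if src in changed_entities and dst in changed_entities}

# Hypothetical commit: two functions and a variable edited together.
commit_changes = {"render", "formatDate", "DATE_FMT"}
all_references = [("render", "formatDate"),    # render() calls formatDate()
                  ("formatDate", "DATE_FMT"),  # formatDate() reads DATE_FMT
                  ("render", "log")]           # log was not edited: excluded

cdg = build_cdg(commit_changes, all_references)
print(cdg)  # {('render', 'formatDate'), ('formatDate', 'DATE_FMT')} (order varies)

# Across many commits, recurring subgraphs of such CDGs become the change
# patterns used to train the recommender; a trivial stand-in is to count
# how often each kind of edge recurs.
kind = {"render": "function", "formatDate": "function", "DATE_FMT": "variable"}
pattern_counts = Counter(f"{kind[s]}->{kind[d]}" for s, d in cdg)
print(pattern_counts)  # Counter({'function->function': 1, 'function->variable': 1})
```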
7

Suivre Dieu, servir le roi : la noblesse protestante bas-normande, de 1520 au lendemain de la Révocation de l'édit de Nantes / Worship God and serve the King : The Norman protestant Nobility, from 1520 to the abolition of the Edict of Nantes in 1685

Le Touzé, Isabelle 15 September 2012
At the three moments constituted first by the decade of the 1550s, the high point of noble conversion to Protestantism, then by the religious conflicts of 1560 to 1598, and finally by the founding of absolutism, to what loyalty could a Reformed gentleman consent in uncertain times when religious unity no longer existed? Did he obey a confessional loyalty dictated by his conscience, or a political and relational obligation binding him naturally both to his lord and to his king? In the sixteenth century the noble of Reformed faith felt no contradiction between the two spheres, the political and the religious: in fighting in the ranks of the Protestant army he did not feel he was cutting himself off from his king, quite the contrary. Gradually, however, a distance opened up between these lords and the party leadership, revealing the extreme looseness of the political ties binding them to the party chief. But the claim to an irrepressible freedom was made possible only by a strategy these Protestant nobles had long put in place. It rested first and foremost on a solid and unfailing base, a nebula of friends and relatives. Proximity to England and to Elizabeth's court also facilitated this distanced stance. Finally, in the seventeenth century, it relied on the genuine shield that the Edict of Nantes represented for the nobility: the Edict permitted the establishment of worship on noble fiefs, and Protestant nobles sought to exploit every legal resource of the text to preserve their faith intact. Matrimonial alliances and the actions of women, daughters and wives alike, served to consolidate the Reformed faith. While repression, targeted at a few individuals, did not spare the second order, some nobles sought to resolve the dilemma of submission through conversion or disobedience through exile by dissociating the two services, refusing to choose between God and the king.
8

La vanité et la rhétorique de la prédication au XVIIᵉ siècle / Vanity and the Rhetoric of Preaching in the Seventeenth Century

Thouin-Dieuaide, Christabelle 21 January 2019
This research is set within the framework of seventeenth-century preaching during the Edict of Nantes period (1598-1685). It examines the expression of vanity in oratorical works (Catholic and Protestant sermons) as well as pictorial works (Vanitas still lifes and altar paintings). In recent years, the study of rhetoric has opened new paths that are worth exploring. The question at the core of this study is how the concept of vanity led to a renewal of the rhetoric of preaching in that period; in other words, I show that for preachers the concept of vanity is both a theological and a literary concern. My approach is thus to study the characteristics of a form of discourse which, while heir to ancient conceptions, was remodeled to adapt to new circumstances that demanded reflection on the nature and power of the word as expressed in sermons and in Vanitas paintings. The concept of vanity is not only evidence of a painful anthropological assessment, but is also used in these discourses as a moral and religious argument, while paradoxically being a source of aesthetic fascination. I therefore reconsider more particularly the preachers' favored themes (death, contempt for the world, penitence) and the discursive strategies deployed by Protestant and Catholic preachers, in order to study the paradoxes of the discourse on vanity.
9

Combinatorial aspects of genome rearrangements and haplotype networks / Aspects combinatoires des réarrangements génomiques et des réseaux d'haplotypes

Labarre, Anthony 12 September 2008
The dissertation covers two problems motivated by computational biology: genome rearrangements and haplotype networks. Genome rearrangement problems are a particular case of edit distance problems, where one seeks to transform two given objects into one another using as few operations as possible, with the additional constraint that the set of allowed operations is fixed beforehand; we are also interested in computing the corresponding distances between those objects, i.e. merely computing the minimum number of operations rather than an optimal sequence. Genome rearrangement problems can often be formulated as sorting problems on permutations (viewed as linear orderings of {1,2,...,n}) using as few allowed operations as possible. In this thesis, we focus among other operations on "transpositions", which displace intervals of a permutation. Many questions related to sorting by transpositions remain open, in particular its computational complexity. We use the disjoint cycle decomposition of permutations, rather than the standard tools used in genome rearrangements, to prove new upper bounds on the transposition distance, as well as formulae for computing the exact distance in polynomial time in many cases. This decomposition also allows us to solve a counting problem related to the "cycle graph" of Bafna and Pevzner, and to construct a general framework for obtaining lower bounds on any edit distance between permutations by recasting their computation as factorisation problems on related even permutations. Haplotype networks are graphs in which a subset of vertices is labelled, used in comparative genomics as an alternative to trees. We formalise a new method due to Cassens, Mardulyn and Milinkovitch, which consists in building a graph containing a given set of partially labelled trees and with as few edges as possible. We give exact algorithms for solving the problem on two graphs, with an exponential running time in the general case but a polynomial running time when at least one of the graphs belongs to a particular class.
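For readers unfamiliar with the two central notions, here is a short sketch (illustrative only; the thesis's actual algorithms and bounds are far more involved) of a block transposition on a permutation and of its disjoint cycle decomposition:

```python
# Illustrative sketch of the two central notions: a transposition that
# exchanges two adjacent blocks of a permutation, and the disjoint cycle
# decomposition of a permutation of {1, ..., n}.
def transposition(p: list[int], i: int, j: int, k: int) -> list[int]:
    """Exchange the adjacent blocks p[i:j] and p[j:k] (0-based, half-open),
    i.e. displace the interval p[i:j] so it starts where p[j:k] ended."""
    return p[:i] + p[j:k] + p[i:j] + p[k:]

def disjoint_cycles(p: list[int]) -> list[list[int]]:
    """Cycles of the permutation mapping position x (1-based) to p[x-1]."""
    seen, cycles = set(), []
    for start in range(1, len(p) + 1):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = p[x - 1]
        cycles.append(cycle)
    return cycles

p = [3, 1, 2, 5, 4]
print(disjoint_cycles(p))         # [[1, 3, 2], [4, 5]]
print(transposition(p, 0, 2, 4))  # [2, 5, 3, 1, 4]
```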
10

Improving memorability in fisheye views

Skopik, Amy Caroline 01 September 2004
Interactive fisheye views use distortion to show both local detail and global context in the same display space. Although fisheyes allow the presentation and inspection of large data sets, the distortion effects can cause problems for users. One such problem is lack of memorability: the ability to find and go back to objects and features in the data. This thesis examines the possibility of improving the memorability of fisheye views by adding historical information to the visualization. The historical information is added visually through visit wear, an extension of the concepts of edit wear and read wear. Visit wear answers the question "Where have I been?" through visual rather than cognitive processing, by overlaying new visual information on the data to indicate a user's recent interaction history. This thesis describes general principles of visibility in a space that is distorted by a fisheye lens and defines some parameters of the design space of visit wear. Finally, a test system that applied the principles was evaluated, showing that adding visit wear to a fisheye system improved the memorability of the information space.
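For context, fisheye views are typically built from a magnification function that expands space near the focus and compresses it toward the periphery. The sketch below uses the Sarkar-Brown transformation, a standard choice in the fisheye literature; the thesis system's exact lens may differ:

```python
# Sketch of a 1-D fisheye transformation (the Sarkar-Brown function, a
# standard choice; the thesis system's exact lens may differ). x is a
# normalized distance from the focus in [-1, 1]; d >= 0 is the distortion.
def fisheye(x: float, d: float) -> float:
    """Magnify near the focus, compress toward the edges; d = 0 is identity."""
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    return sign * ((d + 1) * x / (d * x + 1))

# Points near the focus (0) spread out; points near the edge crowd together.
for x in (0.1, 0.5, 0.9):
    print(x, round(fisheye(x, d=3), 3))
# 0.1 -> 0.308, 0.5 -> 0.8, 0.9 -> 0.973
```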
