221

Les modifications vocales de Kendrick Lamar dans l'album To Pimp a Butterfly et leur apport au texte

Fortier, Jacqueline 12 December 2019 (has links)
Rap has grown in popularity over the past several years, not only in terms of musical distribution but also as a subject of research. Musicological interest in rap branches into several areas; however, there is little literature on the narrative aspects of rap music, particularly the vocal techniques specific to rap (flow). This research aims to analyze the relationship between narrative and voice in rap music, taking as its example the album To Pimp a Butterfly (2015) by Kendrick Lamar. Because of its recent release and its concise narrative, this album strikes us as a good case study, notably for its richness on both the narrative and vocal levels. Using methods borrowed from narratology as well as tools of musical analysis, we examine more closely how a rap artist can use his voice to bring to life a story that is often drawn from real life.
222

Analyzing and Reducing Compilation Times for C++ Programs

Mivelli, Dennis January 2022 (has links)
Software companies often choose to develop in C++ because of the high performance that the language offers. Runtime performance, enabled by static compilation and powerful optimization options, is paid for with compilation time. Although the trade-off is inevitable to some extent, building very large C++ programs from scratch can take up to several hours if extra care is not taken during development. This thesis analyzes compilation times for C++ programs and shows how they can be reduced with the help of design patterns, implementation hiding, and framework-related fixes. The results presented show that compilation times can be decreased significantly with no drawbacks to the maintainability of a program. An in-depth analysis of compilation times and dependencies has been conducted for two large software modules from a representative company; each module takes over an hour of CPU time to compile. The time consumption of different compiler activities, such as parsing, preprocessing, and runtime optimization tasks, has been measured for the modules. The compilation times for unit tests and mocks that use the GoogleTest framework have been analyzed, and a simple method that may reduce compilation times by up to 50% for programs that use GoogleTest is presented. A dependency metric has been created, based on the number of include statements found recursively throughout a program; this metric was found to be connected to compilation time for the two analyzed modules. Other factors that can influence compilation times are also shown, such as runtime optimization options and the use of templates. Experiments show how a typical usage of templates can drastically increase compilation times, and a solution that allows templates to be used while avoiding code bloat across translation units is reviewed; this solution effectively rivals non-template code in terms of compilation time. The Pointer to Implementation (PImpl) and Dependency Injection design patterns have been used to refactor a small program; both patterns performed well, reducing the total compilation time and total compiler memory usage by 70%. A program that detects dependency cycles has been created, but no cycles were found in either of the two modules from the representative company.
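The implementation hiding mentioned in the abstract is commonly realized with the PImpl idiom. As a minimal sketch of the mechanism (the file and class names here are invented for the illustration, not taken from the analyzed modules), clients include only a small header while the heavy dependencies stay in a single translation unit:

```cpp
// widget.h -- the public header clients include: only a pointer to an
// incomplete type is exposed, so the implementation's headers never
// leak into client translation units.
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                       // defined in widget.cpp, where Impl is complete
    void process();
private:
    struct Impl;                     // forward declaration only
    std::unique_ptr<Impl> pimpl;
};

// widget.cpp -- the single translation unit that pays for the heavy includes.
#include <string>
#include <vector>

struct Widget::Impl {
    std::vector<std::string> data;   // heavy dependencies stay out of widget.h
};

Widget::Widget() : pimpl(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::process() { pimpl->data.emplace_back("processed"); }

// main.cpp -- a client: it recompiles only when widget.h itself changes.
int main() {
    Widget w;
    w.process();
    return 0;
}
```

Because widget.h no longer pulls in the implementation's headers, editing the implementation triggers a rebuild of widget.cpp rather than of every client, which is the kind of dependency reduction the include-based metric above is meant to capture.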
223

The weak link in the language teaching system and what to do about it

Moore, Eric January 1900 (has links)
Master of Arts / Department of Modern Languages / Douglas K. Benson / This thesis answers the questions: How should the terms interaction, individualization, and personalization be applied to Computer Aided Language Learning (CALL) software? What progress has been made in their implementation? How can CALL software developers better incorporate them in the future? For each of the three terms, I explain how it is applicable to the CALL software environment by defining it, describing the pedagogical research supporting it, and then giving general guidelines for incorporating it into a CALL software program. I measure the progress of the implementation of the three terms in CALL software by compiling and analyzing data from reviews of 44 software titles, published between 1981 and 2008. I propose, through description and a proof-of-concept software program, ways to improve the incorporation of the terms in question into CALL software. As a result of answering the three questions, this thesis shows that the currently accepted definitions and ways of implementing interaction, individualization, and personalization need to be improved in order to comply with pedagogical research and make full use of current technology. The general guidelines given in the explanation of each term relative to CALL, together with the attributes under each term in the analysis of the compilation data, provide examples of areas on which to focus development. Additionally, I specifically comment on pedagogically supported attributes within each term that have weak representation in the software compilation and therefore need more development. This thesis is also accompanied by “Mis vacaciones”, a proof-of-concept software program, which demonstrates ways to improve the incorporation of interaction, individualization, and personalization into CALL software. In “Mis vacaciones”, the learner takes a virtual trip to Nuevo Leon, Nicaragua. The multimedia sent to the learner by a previous traveler shows Nicaraguan city people and the La Gigantona festival. After visiting, the learner is asked to describe the Nicaraguans they saw. If the learner needs help, Structured Input activities lead the learner to develop the third person singular imperfect form. Buttons in the software environment provide access to internet sources. The learner is able to draw and take photos to create a visual prop to aid in the description task.
224

Runtime specialization for heterogeneous CPU-GPU platforms

Farooqui, Naila 27 May 2016 (has links)
Heterogeneous parallel architectures such as those comprising CPUs and GPUs are a tantalizing compute fabric for performance-hungry developers. While these platforms enable order-of-magnitude performance increases for many data-parallel application domains, there remain several open challenges: (i) the distinct execution models inherent in the heterogeneous devices present on such platforms drive the need to dynamically match workload characteristics to the underlying resources, (ii) the complex architecture and programming models of such systems require substantial application knowledge and effort-intensive program tuning to achieve high performance, and (iii) as such platforms become prevalent, there is a need to extend their utility from running known regular data-parallel applications to the broader set of input-dependent, irregular applications common in enterprise settings. The key contribution of our research is to enable runtime specialization on such hybrid CPU-GPU platforms by matching application characteristics to the underlying heterogeneous resources for both regular and irregular workloads. Our approach enables profile-driven resource management and optimizations for such platforms, providing high application performance and system throughput. Towards this end, this research: (a) enables dynamic instrumentation for GPU-based parallel architectures, specifically targeting the complex Single-Instruction Multiple-Data (SIMD) execution model, to gain real-time introspection into application behavior; (b) leverages such dynamic performance data to support novel online resource management methods that improve application performance and system throughput, particularly for irregular, input-dependent applications; (c) automates some of the programmer effort required to exercise specialized architectural features of such platforms via instrumentation-driven dynamic code optimizations; and (d) proposes a specialized, affinity-aware work-stealing scheduling runtime for integrated CPU-GPU processors that efficiently distributes work across all CPU and GPU cores for improved load balance, taking into account both application characteristics and architectural differences of the underlying devices.
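To picture what affinity-aware work stealing means in this setting, here is a deliberately simplified sketch, not the thesis's runtime: two workers stand in for a CPU core and a GPU device, each with its own queue, and an idle worker steals a task from the other only when the task's profiled affinity allows it to run there. All names (Task, Worker, Affinity) are invented for the illustration, and the "GPU" worker simply runs on the host.

```cpp
#include <cstdio>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

enum class Affinity { CPU, GPU, Either };

struct Task {
    Affinity affinity;              // preferred device, e.g. derived from profiling
    std::function<void()> work;
};

struct Worker {
    std::deque<Task> queue;
    std::mutex m;
};

// Pop from the back of our own queue; otherwise steal from the front of the
// victim's queue, but only if the task is willing to run on our device.
bool get_task(Worker& self, Worker& victim, Affinity mine, Task& out) {
    {
        std::lock_guard<std::mutex> lk(self.m);
        if (!self.queue.empty()) {
            out = self.queue.back();
            self.queue.pop_back();
            return true;
        }
    }
    std::lock_guard<std::mutex> lk(victim.m);
    if (!victim.queue.empty()) {
        const Task& candidate = victim.queue.front();
        if (candidate.affinity == Affinity::Either || candidate.affinity == mine) {
            out = candidate;
            victim.queue.pop_front();
            return true;
        }
    }
    return false;
}

int main() {
    Worker cpu, gpu;
    // Seed the queues: GPU-friendly tasks start on the GPU worker, the rest on
    // the CPU worker; stealing later rebalances the Affinity::Either tasks.
    for (int i = 0; i < 8; ++i) {
        Affinity a = (i % 2 == 0) ? Affinity::GPU : Affinity::Either;
        Task t{a, [i] { std::printf("task %d done\n", i); }};
        Worker& w = (a == Affinity::GPU) ? gpu : cpu;
        std::lock_guard<std::mutex> lk(w.m);
        w.queue.push_back(t);
    }
    auto run = [](Worker& self, Worker& victim, Affinity mine) {
        Task t;
        while (get_task(self, victim, mine, t)) t.work();
    };
    std::thread tc(run, std::ref(cpu), std::ref(gpu), Affinity::CPU);
    std::thread tg(run, std::ref(gpu), std::ref(cpu), Affinity::GPU);
    tc.join();
    tg.join();
    return 0;
}
```

In a real integrated CPU-GPU runtime, the affinity hints would come from the instrumentation-derived performance data described above, and the GPU worker would launch device kernels rather than ordinary host functions.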
225

Waveform Description Language (WDL) for Software Radios

Prill, Robert, Comba, Andrew 10 1900 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / Waveform Description Language (WDL) was invented to ease the process of porting legacy and/or new radio waveforms to programmable/software radios. WDL has two primary requirements: first, to provide a rigorous, executable behavioural description of waveform signal structures that is unambiguous yet independent of any particular end-item software radio architecture; second, that the behavioural specification provide a path to automatic code generation for GPs, DSPs, and FPGAs, and that the generated code be tested against the behavioural model.
226

Méthodes Statiques et Dynamiques de Compilation Polyédrique pour l'Exécution en Environnement Multi-Cœurs

Pradelle, Benoit 20 December 2011 (has links) (PDF)
For several years now, the number of compute cores in processors has kept increasing with each new generation. Multi-core processors are now commonplace, yet developing sequential software remains very common practice. Automatic parallelization tools have been proposed to address this problem, but they are not yet ready for large-scale use. We propose to extend existing tools in three directions. First, the source code of some programs is not available; we therefore propose a static binary-code parallelization system that can parallelize an already-compiled sequential application. Second, a program's performance depends on the execution context in which it runs; we therefore present a system that selects one version of a program among several in order to best exploit the characteristics of the current execution context. Finally, since some programs are difficult to analyze statically, we propose a speculative parallelization system that dynamically applies complex code transformations to them. These three systems use the polyhedral model as a toolbox for analyzing, transforming, and parallelizing programs. By operating at different phases of a program's life, they form a comprehensive approach that extends existing parallelization techniques.
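To make the polyhedral setting concrete, here is a generic before/after example (not code from the thesis) of the kind of loop nest such tools target and the parallelized form they can emit; the affine loop bounds and array subscripts are what let the analysis prove the outer loop carries no dependence:

```cpp
// A generic illustration of the kind of transformation a polyhedral
// parallelizer can derive automatically; this is not code from the thesis.
#include <cstdio>
#include <vector>

int main() {
    const int N = 512;
    std::vector<std::vector<double>> A(N, std::vector<double>(N, 1.0));
    std::vector<std::vector<double>> B(N, std::vector<double>(N, 2.0));
    std::vector<std::vector<double>> C(N, std::vector<double>(N, 0.0));

    // Affine loop bounds and affine array subscripts place this nest in the
    // polyhedral model; dependence analysis shows the i-loop carries no
    // dependence, so a parallelizer can emit the annotation below.
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            C[i][j] = A[i][j] + B[j][i];

    std::printf("C[0][0] = %.1f\n", C[0][0]);
    return 0;
}
```

The binary-level and speculative systems described above aim to obtain this sort of transformation even when the source code is unavailable or the memory accesses only behave affinely at run time.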
227

Le Projet M.A.C.S.I.-P : Méthode d'Aide à la Conception des Systèmes d'Informations orientée vers la fabrication de Prototypes d'applications informatiques de gestion

Giraudin, Jean-Pierre 01 July 1977 (has links) (PDF)
.
228

Un système informatique au service de la communication

Colonna, Jean-François 24 November 1976 (has links) (PDF)
.
229

SSH : un outil et des techniques simples à implémenter pour construire et simuler des modèles hiérarchisés de systèmes

Rohmer, Jean 24 June 1976 (has links) (PDF)
Starting from the concepts of hierarchical systems theory, we define a language and its interpreter for formally describing and experimenting with hierarchically structured systems. We introduce advanced mechanisms for simulating time and show that the notion of hierarchy brings a new dimension to simulation languages and techniques.
230

Le Projet MACSI - 1

De Chelminski, Alfred 30 June 1975 (has links) (PDF)
.
