161

An Agent-Based Architecture for DBMS Global Self-Tuning

ANOLAN YAMILE MILANES BARRIENTOS 13 October 2004 (has links)
The increasing complexity of commercial DBMSs and the workloads they manage, together with the fact that many users lack deep knowledge of database administration, strongly suggest the introduction of techniques that automate the database tuning process. Self-tuning is a feature that makes systems adaptable so that they maintain good overall performance while minimizing the interaction between the administrator and the system. This work proposes an approach for the automatic tuning of DBMS parameters using an architecture based on software agents. Tuning is treated here as a global problem, since changes to a single parameter can affect others. The details of the architecture, its implementation, and a practical evaluation are also discussed in this dissertation.
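
As a rough illustration of the kind of global coordination the abstract describes, the sketch below (in Python, with entirely hypothetical agent, metric, and parameter names; the dissertation's actual architecture is not reproduced) shows per-parameter agents proposing changes and a coordinator arbitrating them under a shared resource budget:

```python
# Minimal sketch of an agent-based global tuning loop (hypothetical names;
# the dissertation's actual agents and DBMS parameters may differ).
from dataclasses import dataclass

@dataclass
class TuningAgent:
    """Watches one performance metric and suggests a change to one DBMS parameter."""
    parameter: str
    metric: str
    threshold: float
    step: int

    def propose(self, metrics: dict) -> dict:
        # Suggest increasing the parameter when its metric degrades past the threshold.
        if metrics.get(self.metric, 0.0) > self.threshold:
            return {self.parameter: self.step}
        return {}

def coordinate(proposals: list[dict], budget: int) -> dict:
    """Global coordinator: merges per-agent proposals under a shared resource budget,
    since changing one parameter (e.g. buffer size) constrains the others."""
    plan, used = {}, 0
    for proposal in proposals:
        for param, delta in proposal.items():
            if used + delta <= budget:
                plan[param] = plan.get(param, 0) + delta
                used += delta
    return plan

# Usage: two agents competing for the same memory budget.
agents = [
    TuningAgent("buffer_pool_mb", "cache_miss_ratio", threshold=0.2, step=256),
    TuningAgent("sort_area_mb", "sorts_spilled_to_disk", threshold=10, step=128),
]
metrics = {"cache_miss_ratio": 0.35, "sorts_spilled_to_disk": 4}
print(coordinate([a.propose(metrics) for a in agents], budget=512))
```

The point of the sketch is that the coordinator, not any individual agent, decides what is actually applied, which is what makes the tuning global rather than per-parameter.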
162

On-line Controller Tuning By Matlab Using Real System Responses

Pektas, Seda 01 December 2004 (has links) (PDF)
This thesis attempts to tune any controller without knowledge of the mathematical model of the system it is controlling. For that purpose, the optimization algorithm of the MATLAB® 6.5 Nonlinear Control Design Blockset (NCD) is adapted for real-time execution and combined with a hardware-in-the-loop simulation provided by the MATLAB® 6.5 Real-Time Windows Target (RTWT). A noise-included model of a DC motor position control system is first obtained in MATLAB®/SIMULINK and simulated to test the modified algorithm. The presented methodology is then verified on the physical plant (a DC motor position control system), where the tuning algorithm is driven mainly by real system data, and the required performance parameters, specified by a user-defined constraint window, are successfully satisfied. The resulting improvements in the step-response behaviour of the DC motor position control system are shown for two case studies.
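
The optimization-driven tuning idea can be sketched outside MATLAB as well. The following Python analogue (not the thesis's NCD/RTWT setup; the toy plant, initial gains, and constraint limits are illustrative) tunes PID gains by minimizing a penalty for violating a step-response constraint window:

```python
# Conceptual analogue of constraint-window controller tuning: a toy second-order
# plant stands in for the measured response of the real DC motor.
import numpy as np
from scipy.optimize import minimize

DT, N = 0.01, 500  # 5-second simulated step response

def step_response(kp, ki, kd):
    """Closed-loop unit-step response of a toy, lightly damped second-order plant."""
    y, yd, integ, prev_e = 0.0, 0.0, 0.0, 1.0
    out = []
    for _ in range(N):
        e = 1.0 - y
        integ += e * DT
        u = kp * e + ki * integ + kd * (e - prev_e) / DT
        prev_e = e
        ydd = u - 0.2 * yd - y          # y'' + 0.2 y' + y = u
        yd += ydd * DT
        y += yd * DT
        out.append(y)
    return np.array(out)

def cost(gains):
    """Penalize violations of a constraint window: <10% overshoot, settle near 1.0."""
    y = step_response(*gains)
    overshoot = max(y.max() - 1.10, 0.0)
    settle_err = np.abs(y[-100:] - 1.0).max()
    return 100.0 * overshoot + settle_err

res = minimize(cost, x0=[2.0, 1.0, 0.5], method="Nelder-Mead")
print("tuned PID gains:", res.x)
```

In the thesis the response inside `step_response` would come from the physical plant through the real-time hardware-in-the-loop interface rather than from a simulated model.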
163

Numerical methods for the accelerated resolution of large-scale linear systems on massively parallel hybrid architectures

Cheik Ahamed, Abal-Kassim 07 July 2015 (has links)
Advances in computational power have led to many developments in science and its applications. Solving linear systems occurs frequently in scientific computing, for example in the finite element discretization of partial differential equations, and the overall solution time follows directly from the performance of the algebraic operations involved. This dissertation develops parallel algorithms for solving large, sparse linear systems. We study how to compute linear algebra operations efficiently in a heterogeneous multi-core/GPU environment in order to make solvers such as iterative methods more robust and to reduce their computing time. We propose new acceleration techniques based on auto-tuning the distribution of threads over the GPU grid according to the characteristics of the problem, the capabilities of the graphics card, and the available resources. Numerical experiments performed on a broad set of large sparse matrices arising from diverse engineering and scientific problems clearly show the benefit of GPU technology for solving large sparse linear systems, and its robustness and accuracy compared to existing libraries such as Cusp. The main goal of using a GPU is to speed up the solution of a problem in a parallel multi-core environment, that is, to answer "How much time is needed to solve the problem?". In this thesis we also address a second question, concerning energy: "How much energy is consumed by the application?". To answer it, an experimental protocol is established to measure the energy consumption of a GPU accurately for fundamental linear algebra operations. This methodology fosters a "new vision of high-performance computing" and answers some of the questions raised in green computing when GPUs are used. The remainder of the thesis is devoted to synchronous and asynchronous iterative algorithms for solving these problems on a heterogeneous multi-core/GPU system. We implement and analyze these algorithms using iterative methods based on sub-structuring techniques. Mathematical models and convergence results for the synchronous and asynchronous algorithms are presented, including a proof of convergence for the asynchronous sub-structuring methods. We then analyze these methods in a hybrid multi-core/GPU context, which should pave the way toward exascale hybrid methods. Finally, we modify the non-overlapping Schwarz method to accelerate it on GPUs; the implementation relies on GPU acceleration of the local solution of the linear sub-systems associated with each sub-domain. To improve the performance of the Schwarz method, we use optimized interface conditions obtained by a stochastic technique based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Numerical results demonstrate the good performance, robustness, and accuracy of the synchronous and asynchronous algorithms for solving large sparse linear systems in a heterogeneous multi-core/GPU environment.
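
The auto-tuning of the thread distribution described above is, at its core, an empirical search over launch configurations. A minimal sketch of that search loop follows (in Python, with a CPU-side CSR product standing in for the CUDA SpMV kernel so the example runs without a GPU; the candidate block sizes are illustrative):

```python
# Minimal sketch of empirical auto-tuning: benchmark a kernel over candidate
# thread-block sizes and keep the fastest configuration.
import time
import numpy as np
import scipy.sparse as sp

def spmv(A, x, block_size):
    """Stand-in kernel: block_size only mimics a GPU launch parameter here."""
    return A @ x

def autotune(kernel, A, x, candidates=(32, 64, 128, 256, 512), repeats=20):
    best, best_time = None, float("inf")
    for block in candidates:
        start = time.perf_counter()
        for _ in range(repeats):
            kernel(A, x, block)
        elapsed = (time.perf_counter() - start) / repeats
        if elapsed < best_time:
            best, best_time = block, elapsed
    return best, best_time

A = sp.random(10_000, 10_000, density=1e-3, format="csr")
x = np.ones(A.shape[1])
block, t = autotune(spmv, A, x)
print(f"selected block size: {block} ({t * 1e3:.3f} ms per SpMV)")
```

On a real GPU the timing loop would launch the actual kernel with each candidate grid/block configuration, which is what makes the selection sensitive to the matrix structure and the hardware.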
164

Content Adaption and Design In Mobile Learning of Wind Instruments

Priyadarshani, Neha January 2021 (has links)
No description available.
165

Development of a chatbot for the second technical support line : master's thesis

Гулич, А. С., Gulich, A. S. January 2024 (has links)
The aim of this master's thesis is the development and software implementation of a chatbot for the second line of technical support at Nordic IT, based on the GPT-3.5-turbo language model. The work gives a detailed overview of modern chatbot development tools and reviews the design, classification, advantages and disadvantages, and practical applications of chatbots. The development technologies applied are analyzed, and the choice of optimal methods for training language models is justified. A detailed description is given of the structure of the program modules, the training methods, the evaluation of answer quality, and the integration of the bot with Microsoft Teams, and the economic efficiency of the project is calculated. The practical significance of this work lies in the possibility and expediency of using the developed chatbot to improve productivity and quality of service at Nordic IT.
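
A minimal sketch of the core chat-completion call such a bot relies on is shown below (it assumes the `openai` Python package with the v1 client; the thesis's actual prompts, training data, and Microsoft Teams integration are not reproduced, and the system prompt is illustrative):

```python
# Minimal second-line support assistant built on the gpt-3.5-turbo chat API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a second-line technical support assistant. "
    "Answer concisely and escalate to a human engineer when unsure."
)

def answer_ticket(ticket_text: str) -> str:
    """Send one support ticket to the model and return the drafted reply."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.2,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": ticket_text},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(answer_ticket("Users report that Outlook cannot connect after the VPN update."))
```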
166

Aligning language models to code : exploring efficient, temporal, and preference alignment for code generation

Weyssow, Martin 09 1900 (has links)
Pre-trained and large language models (PLMs, LLMs) have had a transformative impact on the artificial intelligence (AI) for software engineering (SE) research field. Through large-scale pre-training on terabytes of natural and programming language data, these models excel in generative coding tasks such as program repair and code generation. Existing approaches to align the model's behaviour with specific tasks propose using parameter-free methods like prompting or fine-tuning to improve their effectiveness. Nevertheless, it remains unclear how to align code PLMs and LLMs to more complex scenarios that extend beyond task effectiveness. We focus on model alignment in three overlooked scenarios for code generation, each addressing a specific objective: optimizing fine-tuning costs, aligning models with new data while retaining previous knowledge, and aligning with user coding preferences or non-functional requirements. We explore these scenarios in three articles, which constitute the main contributions of this thesis. In the first article, we conduct an empirical study on parameter-efficient fine-tuning techniques (PEFTs) for code LLMs in resource-constrained settings. Our study reveals the superiority of PEFTs over few-shot learning, showing that PEFTs like LoRA and QLoRA allow fine-tuning LLMs with up to 33 billion parameters on a single 24GB GPU without compromising task effectiveness. In the second article, we examine the behaviour of code PLMs in a continual fine-tuning setting, where the model acquires new knowledge from sequential domain-specific datasets. Each dataset introduces new data about third-party libraries not seen during pre-training or previous fine-tuning. We demonstrate that sequential fine-tuning leads to catastrophic forgetting and implement replay- and regularization-based continual learning approaches, showcasing their superiority in balancing task effectiveness and knowledge retention. In our third article, we introduce CodeUltraFeedback and CODAL-Bench, a novel dataset and benchmark for aligning code LLMs to user coding preferences or non-functional requirements. Our experiments reveal that tuning LLMs with reinforcement learning techniques like direct preference optimization (DPO) using CodeUltraFeedback results in better-aligned LLMs to coding preferences and substantial improvement in the functional correctness of LLM-generated code.
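
For the first article's setting, a parameter-efficient fine-tuning setup can be sketched with the Hugging Face `peft` library as follows (the model name, target modules, and hyperparameters are illustrative assumptions, not the article's exact configuration, and QLoRA quantization is omitted):

```python
# Minimal LoRA sketch with Hugging Face `transformers` + `peft`.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = "bigcode/starcoderbase-1b"  # illustrative small code LLM
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projections; module names vary per model
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```

The adapter-only parameter count printed at the end is what makes fine-tuning feasible on a single consumer-grade GPU in resource-constrained settings.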
167

Probing and modeling of optical resonances in rolled-up structures

Li, Shilong 30 January 2015 (has links) (PDF)
Optical microcavities (OMs) are receiving increasing attention owing to their potential applications, ranging from cavity quantum electrodynamics and optical detection to photonic devices. Recently, rolled-up structures have been demonstrated as OMs and have gained considerable attention owing to their excellent customizability. To fully exploit this customizability, asymmetric and topological rolled-up OMs are proposed and investigated in this thesis in addition to conventional rolled-up OMs, and novel phenomena and applications are demonstrated. The fabrication of conventional rolled-up OMs is presented in detail. Then, dynamic mode tuning by a near-field probe is performed on a conventional rolled-up OM. Next, mode splitting in rolled-up OMs is investigated, including the effect of single nanoparticles. Because a single nanoparticle at different positions induces a non-synchronized oscillating shift of the different azimuthal split modes, the position of the nanoparticle on the rolled-up OM can be determined. Moreover, asymmetric rolled-up OMs are fabricated in order to introduce coupling between the spin and orbital angular momenta (SOC) of light into OMs. Elliptically polarized modes are observed due to the SOC of light; such modes can also be modeled as coupling between the linearly polarized TE and TM modes in asymmetric rolled-up OMs. Furthermore, by adding a helical geometry to rolled-up structures, the Berry phase of light is introduced into OMs: a -π Berry phase is generated, so that modes in topological rolled-up OMs have a half-integer number of wavelengths. To obtain a deeper understanding of existing rolled-up OMs and to develop the new type of rolled-up OMs, complete theoretical models are also presented in this thesis.
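
The half-integer-wavelength statement can be written as a modified resonance condition. Assuming the usual round-trip phase-matching picture for a ring-like cavity of optical path length n_eff·L (a schematic form only; the thesis's full models also treat polarization coupling and the rolled-up geometry):

```latex
% Round-trip phase matching with an extra geometric (Berry) phase \phi_B:
\frac{2\pi n_{\mathrm{eff}} L}{\lambda} + \phi_B = 2\pi m
\quad\Longrightarrow\quad
n_{\mathrm{eff}} L = \left(m - \frac{\phi_B}{2\pi}\right)\lambda
\;\xrightarrow{\;\phi_B = -\pi\;}\;
n_{\mathrm{eff}} L = \left(m + \tfrac{1}{2}\right)\lambda .
```

Setting the Berry phase to -π thus shifts the allowed modes from an integer to a half-integer number of wavelengths around the cavity.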
169

Novel Closed-Loop Matching Network Topology for Reconfigurable Antenna Applications

Smith, Nathanael J. 21 May 2014 (has links)
No description available.
170

Harry Partch: And on the Seventh Day Petals Fell on Petaluma

Nicholl, Matthew James 08 1900 (has links)
Harry Partch's tuning system is an important contribution to tuning theory, and his music is original and significant. Part One of this study presents a brief biography of Partch, a discussion of his musical aesthetics (Monophony and Corporeality), and a technical summary of his tuning system. These elements are placed in historical perspective. Part Two presents a comprehensive analysis of "And on the Seventh Day Petals Fell on Petaluma," discussing the organization of formal, textural, rhythmic, linear, and tonal elements in the thirty-four "verses" of the work. Part Two concludes by showing how large-scale structure in the work is achieved through an overlay process.
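
The arithmetic behind a just-intonation system like Partch's can be illustrated with a short sketch that converts frequency ratios to cents (the ratios listed are standard 11-limit intervals used only for illustration, not a reproduction of Partch's complete 43-tone scale):

```python
# Convert just-intonation frequency ratios to cents: 1200 * log2(ratio).
from fractions import Fraction
from math import log2

def cents(ratio: Fraction) -> float:
    """Size of a frequency ratio in cents."""
    return 1200 * log2(ratio)

for r in [Fraction(9, 8), Fraction(5, 4), Fraction(4, 3), Fraction(11, 8), Fraction(3, 2)]:
    print(f"{r}: {cents(r):7.2f} cents")
```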
