381

Large language models and various programming languages : A comparative study on bug detection and correction

Gustafsson, Elias, Flystam, Iris January 2024 (has links)
This bachelor’s thesis investigates the efficacy of cutting-edge Large Language Models (LLMs) — GPT-4, Code Llama Instruct (7B parameters), and Gemini 1.0 — in detecting and correcting bugs in Java and Python code. Through a controlled experiment using standardized prompts and the QuixBugs dataset, each model's performance was analyzed and compared. The study highlights significant differences in the ability of these LLMs to correctly identify and fix programming bugs, showcasing a comparative advantage in handling Python over Java. Results suggest that while all these models are capable of identifying bugs, their effectiveness varies significantly between models. The insights gained from this research aim to aid software developers and AI researchers in selecting appropriate LLMs for integration into development workflows, enhancing the efficiency of bug management processes.
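The controlled experiment described above can be sketched as a small evaluation harness: one standardized prompt per model, with each returned fix validated against QuixBugs-style test cases. The names (`build_prompt`, `passes_tests`) and the prompt wording are illustrative assumptions, not the thesis's code, and the LLM call itself is replaced by a hard-coded candidate fix:

```python
# Sketch of the evaluation loop: a standardized prompt would be sent to each
# model, and the returned fix is validated against reference test cases.
# The actual LLM API call is stubbed out here; FIXED stands in for a model's
# answer and BUGGY for the original QuixBugs-style defect.

PROMPT_TEMPLATE = (
    "The following {language} function contains a single bug.\n"
    "Identify the bug and return a corrected version.\n\n{code}"
)

def build_prompt(language: str, code: str) -> str:
    """Standardized prompt, identical for every model under test."""
    return PROMPT_TEMPLATE.format(language=language, code=code)

def passes_tests(source: str, func_name: str, cases) -> bool:
    """Execute a candidate fix and check it against reference test cases."""
    namespace: dict = {}
    try:
        exec(source, namespace)
        return all(namespace[func_name](*args) == expected
                   for args, expected in cases)
    except Exception:  # crashes and infinite recursion count as failures
        return False

# QuixBugs-style defect: operands swapped in the recursive call.
BUGGY = "def gcd(a, b):\n    return a if b == 0 else gcd(a % b, b)\n"
FIXED = "def gcd(a, b):\n    return a if b == 0 else gcd(b, a % b)\n"
CASES = [((12, 8), 4), ((35, 21), 7), ((17, 5), 1)]
```

Running every model's answer through the same `passes_tests` gate is what makes per-model and per-language comparisons meaningful.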
382

"Few and Far Between": Digital Musical Instrument Design based on Machine Vision and Neural Deep Learning Algorithms

Yaşarlar, Okan 05 1900 (has links)
Few and Far Between is a music composition for violoncello and live electronics. The project includes software that uses video data from a webcam to control interactive audio in real time, manipulating audio processing and multichannel diffusion.
383

Novel Materials Simulation Techniques to Investigate Earth’s Interior

Zhuang, Jingyi January 2025 (has links)
Understanding the dynamics and structure of Earth's interior requires investigating the thermodynamic and elastic properties of minerals under extreme pressure and temperature. Experimental methods face significant challenges in reproducing these conditions, requiring alternative methods to study deep Earth materials. Computational simulations, particularly ab initio methods, offer a powerful way to explore mineral behavior under such conditions but often face limitations due to complex physical effects. To address these challenges, we develop advanced methodologies and novel simulation techniques to study minerals in the lower mantle and Earth's deepest regions. We use ab initio computations for ε-Fe under exoplanetary conditions, accounting for electronic thermal excitation effects within the phonon gas model framework. We extend this framework into an open-source Python code for calculating free energy in complicated systems. We investigate the effect of a pressure-induced change in the electronic structure of iron in ferropericlase, a spin-state change, on the seismological properties of the lower mantle. We further explore ferrous iron partitioning, combined with spin crossover and non-ideal solid solution models, to understand the major mineral phases of the lower mantle and their impacts on lower-mantle velocities and temperature-induced heterogeneities. We also analyze the impact of Fe and Al alloying on the bridgmanite-post-perovskite phase boundary. Together, these investigations enhance our understanding of the thermal and chemical structure of Earth's deep interior.
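For context, the free energy evaluated in the quasiharmonic phonon-gas picture referred to above takes the standard textbook form, with an electronic term accounting for thermal excitation of electrons (this is the generic expression, not a formula quoted from the thesis):

```latex
F(V,T) \;=\; E_0(V)
  \;+\; \sum_{\mathbf{q},m}\left[\frac{\hbar\,\omega_{\mathbf{q}m}(V)}{2}
  \;+\; k_B T \,\ln\!\left(1 - e^{-\hbar\,\omega_{\mathbf{q}m}(V)/k_B T}\right)\right]
  \;+\; F_{\mathrm{el}}(V,T)
```

Here $E_0$ is the static lattice energy, the sum runs over phonon modes $(\mathbf{q},m)$, and $F_{\mathrm{el}}$ collects the electronic thermal-excitation contribution.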
384

Realization of LSTM Based Cognitive Radio Network

Valluru, Aravind-Deshikh 08 1900 (has links)
This thesis presents the realization of an intelligent cognitive radio network that uses a long short-term memory (LSTM) neural network for sensing and predicting spectrum activity at each instant of time. The simulation is done using Python and GNU Radio; the implementation uses GNU Radio and Universal Software Radio Peripherals (USRPs). Simulation results show that the confidence factor of opportunistic users not causing interference to licensed users of the spectrum is 98.75%. The implementation results demonstrate the high reliability of the LSTM-based cognitive radio network.
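Spectrum prediction with an LSTM boils down to the standard gated cell update; a pure-Python sketch of one scalar cell step over a channel-occupancy history follows. The weights here are arbitrary placeholders, not values trained in the thesis:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One forward step of a scalar LSTM cell: gates decide what to keep
    from the past channel state and what to add from the new observation."""
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    c = f * c_prev + i * g          # updated cell state
    h = o * math.tanh(c)            # updated hidden state
    return h, c

# Placeholder weights -- a trained model would learn these from sensed data.
W = {gate: (0.5, 0.5, 0.0) for gate in "ifog"}

# Binary channel-occupancy history (1 = licensed user active).
history = [1, 1, 0, 0, 1, 0, 0, 0]
h = c = 0.0
for x in history:
    h, c = lstm_step(x, h, c, W)

# With trained weights, h would be thresholded into a busy/idle prediction.
prediction = "busy" if h > 0.5 else "idle"
```

In the actual system the cell is vectorized, trained on sensed spectrum data, and its output thresholded to decide when an opportunistic user may transmit.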
385

SCOOP : cadriciel de calcul distribué générique / SCOOP: a generic distributed computing framework

Hold-Geoffroy, Yannick 23 April 2018 (has links)
Ce document présente SCOOP, un nouveau cadriciel Python pour la distribution automatique de hiérarchies de tâches dynamiques axé sur la simplicité. Une hiérarchie de tâches réfère à des tâches qui peuvent récursivement générer un nombre arbitraire de sous-tâches. L’infrastructure de calcul sous-jacente consiste en une simple liste de ressources de calcul. Le cas d’utilisation typique est l’exécution d’un programme principal sous la tutelle du module SCOOP, qui devient alors la tâche racine pouvant générer des sous-tâches au travers de l’interface standard des « futures » de Python. Ces sous-tâches peuvent elles-mêmes générer d’autres sous-sous-tâches, etc. La hiérarchie de tâches complète est dynamique dans le sens où elle n’est potentiellement pas entièrement connue jusqu’à la fin de l’exécution de la dernière tâche. SCOOP distribue automatiquement les tâches parmi les ressources de calcul disponibles en utilisant un algorithme de répartition de charge dynamique. Une tâche n’est rien de plus qu’un objet Python pouvant être appelé en conjonction avec ses arguments. L’utilisateur n’a pas à s’inquiéter de l’implantation du passage de message ; toutes les communications sont implicites. / This paper presents SCOOP, a new Python framework for automatically distributing dynamic task hierarchies focused on simplicity. A task hierarchy refers to tasks that can recursively spawn an arbitrary number of subtasks. The underlying computing infrastructure consists of a simple list of resources. The typical use case is to run the user’s main program under the umbrella of the SCOOP module, where it becomes a root task that can spawn any number of subtasks through the standard “futures” API of Python, and where these subtasks may themselves spawn other subsubtasks, etc. The full task hierarchy is dynamic in the sense that it is unknown until the end of the last running task. SCOOP automatically distributes tasks amongst available resources using dynamic load balancing. 
A task is nothing more than a Python callable object in conjunction with its arguments. The user need not worry about message passing implementation details; all communications are implicit.
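The root-task pattern described above maps directly onto Python's standard futures interface. A minimal single-machine sketch using the stdlib `concurrent.futures` in place of SCOOP's distributed executor (SCOOP additionally lets each subtask spawn further subtasks across machines):

```python
from concurrent.futures import ThreadPoolExecutor

def subtask(n: int) -> int:
    # A task is nothing more than a callable plus its arguments.
    return n * n

def root_task(values) -> list:
    # The root task spawns subtasks through the standard futures API;
    # message passing between tasks stays implicit.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(subtask, values))

print(root_task(range(5)))  # [0, 1, 4, 9, 16]
```

Under SCOOP, the same program would typically be launched under the SCOOP module (e.g. `python -m scoop`), which swaps in an executor that distributes tasks across the listed computing resources with dynamic load balancing.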
386

Identifiera löv i skogar – Att lära en dator känna igen löv med ImageAI / Identifying leaves in forests – Teaching a computer to recognize leaves with ImageAI

Nordqvist, My January 2019 (has links)
Machine learning is an active field of research today because it can simplify everyday life for human beings. A functioning system that has learned specific tasks can save companies both cost and time. One company that wants to use machine learning is SCA, which owns and manages forests to produce products and needs to automate forest classification. In order to evaluate forests, and to plan forestry measures, the proportion of deciduous trees that are not used in production must be determined. Today, this requires manual work by people who examine aerial photos to classify the tree types. This study investigates whether it is possible, through machine learning, to teach a computer to determine whether leaves are present in orthophotos. A program is constructed with the library ImageAI, which provides methods for training on and predicting information in images. The study examines how the choice of neural network and the number of images affect the accuracy of the models and how reliable the models can be. Training time and hardware are also two factors that are investigated. The results show that the neural network ResNet delivers the most accurate results, and the more images the computer trains on, the more accurate the result. The final model is a ResNet model trained on 20,000 images, with 79.0 percent accuracy. Based on 50 samples, the mean confidence is 90.5 percent and the median is 99.6 percent. / Maskininlärning är idag ett aktuellt forskningsområde som kan förenkla vardagen för oss människor. Ett fungerande system som har lärt sig specifika uppgifter kan underlätta för företag i både kostnad och tid. Ett företag som vill använda maskininlärning är SCA, som äger och förvaltar skog för att producera produkter. De har behov av att automatisera klassificering av skog. För att värdera skogar, samt planera skogsåtgärder, måste andelen lövträd som inte används i produktionen bestämmas. 
Idag krävs det manuellt arbete av personer som måste undersöka flygfoton för att klassificera trädtyperna. Denna studie undersöker om det är möjligt, via maskininlärning, att lära en dator avgöra om det är löv eller inte i ortofoton. Ett program konstrueras med biblioteket ImageAI som erhåller metoder för att träna och förutsäga information i bilder. Det undersöks hur valet av neuralt nätverk och antalet bilder påverkar säkerheten för modellerna samt hur tillförlitlig modellerna kan bli. Träningstid och hårdvara är också två faktorer som studeras. Resultatet visar att neurala nätverket ResNet levererar säkrast resultat och desto fler bilder datorn tränar på, desto säkrare blir resultatet. Den slutgiltiga modellen är en ResNet-modell som tränat på 20 000 bilder och har 79,0 procents säkerhet. Utifrån 50 stickprov är medelvärdet för säkerheten 90,5 procent och medianen 99,6 procent.
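The gap between the reported mean (90.5%) and median (99.6%) confidence is an ordinary skew effect: a few low-confidence samples pull the mean down while the median stays high. A small sketch with made-up confidence values (not the thesis's measurements) illustrates this:

```python
from statistics import mean, median

# Hypothetical per-sample prediction confidences -- illustrative only.
confidences = [0.999, 0.996, 0.97, 0.93, 0.65, 0.42]

print(mean(confidences))    # pulled down by the two low outliers
print(median(confidences))  # stays high
```

The same pattern explains why reporting both statistics, as the thesis does, gives a fuller picture of model reliability than either alone.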
387

On the Generalized Finite Element Method in nonlinear solid mechanics analyses / Sobre o método dos Elementos Finitos Generalizados em análises da mecânica dos sólidos não-linear

Piedade Neto, Dorival 29 November 2013 (has links)
The Generalized Finite Element Method (GFEM) is a numerical method based on the Partition of Unity (PU) concept and inspired by both the Partition of Unity Method (PUM) and the hp-Cloud method. In the GFEM, the PU is provided by first-degree Lagrangian interpolation functions, defined over a mesh of elements similar to Finite Element Method (FEM) meshes. In fact, the GFEM can be considered an extension of the FEM in which enrichment functions can be applied in specific regions of the problem domain to improve the solution. This technique has been successfully employed to solve problems presenting discontinuities and singularities, like those that arise in Fracture Mechanics. However, most publications on the method are related to linear analyses. The present thesis is a contribution to the small body of studies on nonlinear Solid Mechanics analyses by means of the GFEM. One of its main topics is the derivation of a segment-to-segment generalized contact element based on the mortar method. Material and kinematic nonlinear phenomena are also considered in the numerical models. An object-oriented design was developed for the implementation of a GFEM nonlinear analysis framework written in the Python programming language. The results validate the formulation and demonstrate the gains and possible drawbacks observed for the GFEM nonlinear approach. / O Método dos Elementos Finitos Generalizados (MEFG) é um método numérico baseado no conceito de partição da unidade (PU) e inspirado no Método da Partição da Unidade (MPU) e no método das Nuvens-hp. De acordo com o MEFG, a PU é obtida por meio de funções de interpolação Lagrangianas de primeiro grau, definidas sobre uma rede de elementos similar àquela do Método dos Elementos Finitos (MEF). De fato, o MEFG pode ser considerado uma extensão do MEF para a qual se pode aplicar enriquecimentos em regiões específicas do domínio, buscando melhorias na solução. 
Esta técnica já foi aplicada com sucesso em problemas com descontinuidades e singularidades, como os originários da Mecânica da Fratura. Apesar disso, a maioria das publicações sobre o método está relacionada a análises lineares. A presente tese é uma contribuição aos poucos estudos relacionados a análises não-lineares de Mecânica dos Sólidos por meio do MEFG. Um de seus principais tópicos é o desenvolvimento de um elemento de contato generalizado do tipo segmento a segmento baseado no método mortar. Fenômenos não lineares devidos ao material e à cinemática também são considerados nos modelos numéricos. Um projeto de orientação a objetos para a implementação de uma plataforma de análises não-lineares foi desenvolvido, escrito em linguagem de programação Python. Os resultados validam a formulação e demonstram os ganhos e possíveis desvantagens da abordagem a problemas não lineares por meio do MEFG.
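The enrichment idea described above has a compact standard form: the PU functions $\varphi_i$ (first-degree Lagrangian) multiply both the nodal parameters and local enrichment functions $\psi_{ij}$ applied in selected regions (generic GFEM expression, stated here for context rather than quoted from the thesis):

```latex
u^h(\mathbf{x}) \;=\; \sum_{i} \varphi_i(\mathbf{x})
  \left( u_i \;+\; \sum_{j} \psi_{ij}(\mathbf{x})\, a_{ij} \right)
```

Because the $\varphi_i$ sum to one, the enrichments $\psi_{ij}$ can reproduce discontinuous or singular behavior locally while the global approximation remains conforming.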
388

Performance Analysis of Distributed Spatial Interpolation for Air Quality Data

Asratyan, Albert January 2021 (has links)
Deteriorating air quality is a growing concern that has been linked to many health-related issues. Monitoring it is a good first step toward understanding the problem. However, it is not always possible to collect air quality data from every location. Various data interpolation techniques are used to help populate sparse maps with more context, but many of these algorithms are computationally expensive. This work presents a three-step chain mail algorithm that uses kriging (without any modifications to the kriging algorithm itself) and achieves up to 100× execution-time improvement with minimal accuracy loss (relative RMSE of 3%) by parallelizing the load for the locally tested data sets. This approach can be described as a multiple-step parallel interpolation algorithm that includes specific regional border data manipulation for achieving greater accuracy. It does so by interpolating geographically defined data chunks in parallel and sharing the results with their neighboring nodes to provide context and compensate for lack of knowledge of the surrounding areas. Combined with the cloud serverless function architecture, this approach opens the door to interpolating very large data sets in a matter of minutes while remaining cost-efficient. The effectiveness of the three-step chain mail approach depends on equal point distribution among the regions and on the resolution of the parallel configuration, but in general it offers a good balance between execution speed and accuracy. / Försämrad luftkvalitet är en växande oro som har kopplats till många hälsorelaterade frågor. Övervakningen är ett bra första steg för att förstå problemet. Det är dock inte alltid möjligt att samla in luftkvalitetsdata från alla platser. Olika interpolationsmetoder används för att hjälpa till att fylla i glesa kartor med mer sammanhang, men många av dessa algoritmer är beräkningsdyra. 
Detta arbete presenterar en trestegs ‘kedjepostalgoritm’ som använder kriging (utan några modifieringar av själva krigingsalgoritmen) och uppnår upp till ×100 förbättring av exekveringstiden med minimal noggrannhetsförlust (relativ RMSE på 3%) genom att parallellisera exekveringen för de lokalt testade datamängderna. Detta tillvägagångssätt kan beskrivas som en flerstegs parallell interpoleringsalgoritm som inkluderar regionspecifik gränsdatamanipulation för att uppnå större noggrannhet. Det görs genom att interpolera geografiskt definierade databitar parallellt och dela resultaten med sina angränsande noder för att ge sammanhang och kompensera för bristande kunskap om de omgivande områdena. I kombination med den molnserverfria funktionsarkitekturen öppnar detta tillvägagångssätt dörrar till interpolering av datamängder av stora storlekar på några minuter samtidigt som det förblir kostnadseffektivt. Effektiviteten i kedjepostalgoritmen i tre steg beror på lika punktfördelning mellan alla regioner och upplösningen av den parallella konfigurationen, men i allmänhet erbjuder den en bra balans mellan exekveringshastighet och noggrannhet.
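The chunk-and-share-borders idea above can be sketched in a few lines. For brevity this sketch substitutes inverse-distance weighting for kriging and chunks along one axis only; the function names and the `border` parameter are illustrative assumptions, not the thesis's code:

```python
from math import hypot

def idw(points, x, y, power=2.0):
    """Inverse-distance weighting -- standing in for kriging in this sketch."""
    num = den = 0.0
    for px, py, value in points:
        d = hypot(x - px, y - py)
        if d == 0.0:
            return value  # query coincides with a sample point
        w = d ** -power
        num += w * value
        den += w
    return num / den

def interpolate_chunks(points, chunk_edges, queries, border=1.0):
    """Interpolate each x-range chunk independently (dispatched in parallel,
    e.g. to serverless functions, in the real system); samples within
    `border` of a chunk edge are shared with the neighbouring chunk to give
    it cross-boundary context."""
    results = {}
    for x0, x1 in chunk_edges:
        # Local samples plus the shared border strip from the neighbours.
        local = [p for p in points if x0 - border <= p[0] < x1 + border]
        for qx, qy in queries:
            if x0 <= qx < x1:  # each chunk answers only its own queries
                results[(qx, qy)] = idw(local, qx, qy)
    return results

sensors = [(0.0, 0.0, 1.0), (2.0, 0.0, 3.0), (4.0, 0.0, 5.0)]
grid = interpolate_chunks(sensors, [(0.0, 2.0), (2.0, 5.0)],
                          [(1.0, 0.0), (3.0, 0.0)])
print(grid)
```

Since chunks never communicate except through the shared border samples, each call to the inner interpolation is independent and trivially parallelizable, which is what makes the serverless deployment described above possible.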
389

Kundtjänster för mobilapplikationer : Utveckling av rapportgenerator, symbolgenerator, Revit Architecture-add-in och metadatahantering / Customer Services for Mobile Applications

Bernau, Maja, Olsson, Tobias January 2014 (has links)
The goal of this project was to streamline and automate a business system. This was achieved through the implementation of four subtasks. This report describes which parts of the system needed to be updated and why. It also describes how the development was carried out and what results the project ultimately led to. The project's tasks were to: create a report generator designed to generate Excel documents; develop a symbol generator where a user, through a web-based interface, could generate symbols for use in the company's mobile application; create an interface for a web service; and develop an add-in for the modeling software Revit Architecture 2014. / Målet med detta projekt var att effektivisera och automatisera ett företagssystem. Detta skulle uppnås genom implementationen av fyra deluppgifter. Denna rapport beskriver vilka delar av systemet som behövde utvecklas och varför. Den beskriver även hur utvecklingen genomfördes samt vilka resultat projektet slutligen ledde till. Projektets deluppgifter var att: Skapa en rapportgenerator för generering av Excel-dokument. Utveckla en symbolgenerator där man genom ett web-baserat gränssnitt kunde generera symboler avsedda att användas i företagets mobila applikation. Skapa ett gränssnitt för en web-tjänst samt utveckla ett add-in till modelleringsprogrammet Revit Architecture 2014.
390

Robotic Process Automation : Analys och implementation

Englevid, Jonas January 2018 (has links)
Employees today have necessary daily tasks that do not require human involvement, and the idea is to free up these tasks. The objective is to investigate whether two processes are suitable for automation, and to create and evaluate a prototype. The goals are to analyze the process, examine appropriate tools for automation, compare the tools, create and evaluate the prototype, and perform an acceptance test. Robotic Process Automation is about automating tasks that humans otherwise have to do. Good candidates for automation are time-consuming, repetitive, rule-based tasks that are prone to human error and have clear goals and expectations. The preliminary study was conducted as a literature study of web-based sources, and the analysis was done by breaking the process down into its parts. The comparison was carried out by investigating the features of the tools. The prototype was created on Windows with the UiPath tool; the robot works in Internet Explorer and against Excel, which has a macro written in Visual Basic for Applications. The client looked at the given criteria as well as the prototype's output and provided a subjective response. Test programs were created in UiPath, Workfusion, and Selenium. The prototype automatically logs in to Visma PX by entering a username and password. It then navigates the tool, searches for an assignment, and retrieves the available data. The input data is filtered and written into Excel for each activity and employee. Finally, a macro creates graphs. The timing tests show that UiPath is significantly more optimized and faster at completing the test programs, and the comparison shows that UiPath has strong advantages with its tooling. / Anställda idag har nödvändiga vardagsuppgifter som inte kräver mänsklig inverkan och tanken är att frigöra dessa uppgifter. Projektets övergripande syfte är att undersöka två processer om de är lämpliga för automation samt att skapa och utvärdera en prototyp. 
Målen är att analysera processen, undersöka lämpliga verktyg för automatisering, jämföra verktygen, skapa en prototyp, utvärdera prototypen och utföra ett acceptanstest. Robotic Process Automation handlar om att automatisera uppgifter som människor gör. Bra kandidater för automatisering är tidskrävande, repetitiva, regelbaserade uppgifter, benägna till mänskliga fel med klara mål och förväntningar. Förstudien genomfördes i form av en litteraturstudie av webbaserade källor och analysen gjordes genom att bryta ner processen i olika delar. Jämförelsen genomfördes genom att undersöka de funktioner som verktygen har. Prototypen skapas på Windows i verktygen UiPath och roboten kommer att arbeta på Internet Explorer och mot Excel som kommer ha ett makro skrivet i Visual Basic for Applications. Beställaren kommer att titta på de kriterier som gavs och även på prototypens utdata och ge en subjektiv respons. Testprogrammen i UiPath, Workfusion och Selenium skapades med sina respektive funktioner. Prototypen loggar automatiskt in på Visma PX genom att skriva in användarnamn och lösenord. Sedan navigerar den i verktyget, söker på ett uppdrag och hämtar den data som finns. Indata filtreras och skrivs in i Excel för varje aktivitet och anställd. Slutligen körs ett makro som skapar grafer. Tidstesterna visar att UiPath är betydligt mer optimerad och snabbare på att slutföra testprogrammen. Jämförelserna visar att UiPath har starka fördelar med sitt verktyg.
