371

Measuring What Matters : Software Engineering and its Role in Scientific Software Success

Hansson, Tobias, Thand, Samuel January 2024 (has links)
Scientific software is vital for research across domains and can be reused when it is open-source. To promote this reuse, it is generally beneficial to adopt software engineering best practices to improve accessibility and popularity. However, given the distinct properties of scientific software, there is no consensus on how these practices are being or should be used in scientific software. Since previous evidence on this topic is primarily anecdotal or qualitative, this study used repository mining to quantitatively examine best practices and their relationship with popularity in 90 software engineering artifacts, which are examples of scientific software. The data varied significantly but showed that the studied artifacts generally did not prioritize software engineering best practices, and no significant relationships were found between these aspects and popularity. The results may suggest that scientific software developers prioritize scientific quality over software quality and that traditional software quality measures may not be suitable quality benchmarks in scientific software. However, accessibility issues were identified, highlighting potential societal concerns. Based on these findings, we offer practical advice for quality improvements from a software engineering perspective. Further research is needed to obtain more conclusive and general results.
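Repository mining of the kind described above can be sketched as follows. This is a minimal illustration, not the study's actual instrument: the indicator list (README, license, tests, CI configuration) and the scoring are assumptions.

```python
from pathlib import Path

# Hypothetical best-practice indicators; the study's real measurement
# instrument is not specified in the abstract.
INDICATORS = {
    "readme": ["README.md", "README.rst", "README"],
    "license": ["LICENSE", "LICENSE.md", "COPYING"],
    "tests": ["tests", "test"],
    "ci": [".github/workflows", ".gitlab-ci.yml", ".travis.yml"],
}

def mine_repository(repo_path: str) -> dict:
    """Check a local repository clone for best-practice indicators."""
    root = Path(repo_path)
    return {
        indicator: any((root / c).exists() for c in candidates)
        for indicator, candidates in INDICATORS.items()
    }

def practice_score(results: dict) -> float:
    """Fraction of indicators satisfied -- a crude quality proxy."""
    return sum(results.values()) / len(results)
```

Scores like this could then be correlated against a popularity proxy such as star counts.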
372

Developing an Image Analysis Pipeline for Insights into Symbiodiniaceae Growth and Morphology

Kinsella, Michael January 2024 (has links)
Symbiodiniaceae is a family of dinoflagellates that often live in a symbiotic relationship with cnidarian hosts such as corals. Symbiodiniaceae are vital for host survival, providing energy from photosynthesis and in return gaining protection from environmental stress as well as nutrients. However, when these symbiont cells are exposed to environmental stress such as elevated temperatures, they can be expelled from their host, leading to coral bleaching, a global issue. Coral reefs are vital for marine biodiversity and hold large economic importance due to fishing and tourism. This thesis aims to develop a computational pipeline to study the growth, shape, and size of Symbiodiniaceae cells, which takes microscopy images acquired using a mother machine microfluidics device and segments the Symbiodiniaceae cells. This enables extraction of cellular features such as area, circularity, and cell count to study the morphology and growth of Symbiodiniaceae based on segmentation labels. To achieve this, pretrained segmentation models from the Cellpose algorithm were evaluated to decide which could extract features most accurately. The results showed that the pretrained ‘cyto3’ model with default parameters performed best based on the Dice score. The feature extraction showed indications of division events in Symbiodiniaceae linked to light and dark cycles, suggesting synchronicity among cells. However, the segmentation needs further investigation to accurately capture cells and add statistical significance to the feature extraction.
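Given segmentation labels such as those Cellpose produces (a 2D integer array, 0 for background), the feature extraction step can be sketched as below. The boundary-pixel perimeter estimate is a simplification of what scikit-image's `regionprops` would provide, and the feature set is only a subset of the one described.

```python
import numpy as np

def extract_features(labels: np.ndarray) -> list[dict]:
    """Compute per-cell area and circularity from a 2D label image.

    Circularity is 4*pi*A / P^2 (1.0 for a perfect circle); the
    perimeter is approximated by counting boundary pixels, which is
    cruder than chain-code perimeter estimators.
    """
    features = []
    for cell_id in np.unique(labels):
        if cell_id == 0:          # 0 is background
            continue
        mask = labels == cell_id
        area = int(mask.sum())
        # Boundary pixels: in the mask but with at least one
        # 4-neighbour outside it.
        padded = np.pad(mask, 1)
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                    & padded[1:-1, :-2] & padded[1:-1, 2:])
        perimeter = int((mask & ~interior).sum())
        circularity = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0
        features.append({"id": int(cell_id), "area": area,
                         "circularity": circularity})
    return features
```

Cell count is then simply `len(extract_features(labels))`, and tracking these values over frames would expose the division events mentioned above.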
373

Investigating Social Presence Dynamics in Online Education

Sun, Weimei 12 1900 (has links)
This research study delves into the multifaceted realm of social presence in online education, encompassing the existence and manifestation of social presence indicators in students' engagement with discussion boards within asynchronous online courses. Social presence manifests when individuals perceive themselves to be simultaneously present with others through a communication medium, thereby cultivating a shared sense of togetherness. The dissertation addresses the impact of course discipline, exploring the significant influence of both STEM and non-STEM courses on the manifestation of social presence indicators. Furthermore, the study examines the influence of course duration on the level of social presence, unveiling critical insights into the challenges posed by prolonged courses for sustaining student engagement and interaction. The study, which randomly selected its sample from Coursera, employed a mixed-methods approach encompassing both quantitative and qualitative analysis to assess social presence within online courses. The approach consisted of five key stages: Python-based web crawling, manual keyword identification, data processing, statistical analysis using R, and qualitative exploration. The insights obtained offer valuable suggestions for enhancing social presence in future online educational settings. While acknowledging certain limitations regarding sample size and keyword identification, the study provides valuable contributions to the evolving landscape of online education research, offering practical implications for course design and facilitation in promoting an enriched and engaging online learning environment. / Communication Sciences
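The keyword-identification stage can be sketched as follows. The indicator categories and keywords below are hypothetical placeholders; the abstract does not list the study's actual coding scheme.

```python
import re
from collections import Counter

# Hypothetical indicator keywords grouped into categories commonly
# used for social presence coding; the study's real scheme is assumed,
# not reproduced, here.
INDICATOR_KEYWORDS = {
    "affective": ["feel", "love", "haha", "lol"],
    "interactive": ["agree", "reply", "thanks", "you"],
    "cohesive": ["we", "our", "everyone", "us"],
}

def count_indicators(posts: list[str]) -> Counter:
    """Tally social presence indicator hits across discussion posts."""
    counts = Counter()
    for post in posts:
        tokens = re.findall(r"[a-z']+", post.lower())
        for category, keywords in INDICATOR_KEYWORDS.items():
            counts[category] += sum(tokens.count(k) for k in keywords)
    return counts
```

Per-course tallies like these could then be exported for the statistical analysis stage in R.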
374

Measurement and Development for Automated Secure Coding Solutions

Frantz, Miles Eugene 09 September 2024 (has links)
With the rise of development efforts, there has also been a rise in source code vulnerabilities. Advanced security tools have been created to identify these vulnerabilities throughout the lifetime of the developer's ecosystem and afterward, before the vulnerabilities are exposed. One such popular method is Static Code Analysis (SCA), which scans developers' source code to identify potential vulnerabilities. My Ph.D. work aims to help reduce exposed vulnerabilities by YIELDing, ENHANCing, and EVALUATing (EYE) SCA tools that identify vulnerabilities while the developer writes the code. We first look into evaluating tools that support developers with their source code by determining how accurately they identify vulnerability information. Large Language Models (LLMs) have been on the rise recently with the introduction of Chat Generative Pre-trained Transformer (ChatGPT) 3.5, ChatGPT 4.1, Google Gemini, and many more. Using a common framework, we created a zero-shot prompt instructing the LLM to identify whether there is a vulnerability in the provided source code and which Common Weakness Enumeration (CWE) value represents it. With our Python cryptographic benchmark PyCryptoBench, we sent vulnerable samples to four different LLMs and two different versions of the ChatGPT Application Program Interface (API). The samples allow us to measure how reliable each LLM is at identifying and defining vulnerabilities. The ChatGPT APIs include multiple reproducibility fields that allowed us to measure how reproducible the responses are. Next, we yield a new SCA tool to address a current gap in increasingly complex source code. Python source code has ever-increasing complexity and, compared to Java, a lack of SCA tools. Cryptolation, our state-of-the-art (SOA) Python SCA tool, uses constant-propagation-supported variable inference to obtain insight into the data-flow state throughout the program's execution.
We compare Cryptolation with the other SOA SCA tools Bandit, Semgrep, and Dlint. To verify the Precision of our tool, we created the benchmark PyCryptoBench, which contains 1,836 test cases and encompasses five different language features. Next, we crawled over 1,000 cryptographic-related Python projects on GitHub and scanned each with each tool. Finally, we reviewed all PyCryptoBench results and sampled over 10,000 results from the cryptographic-related Python project scans. The results reveal that Cryptolation has 100% Precision on the benchmark and the second-highest Precision on cryptographic-related projects. Finally, we look at enhancing SCA tools. The SOA tools already compete to have the highest Precision, Recall, and Accuracy. However, we examine several developer surveys to determine developers' reasons for not adopting such tools: generally, better aesthetics, usability, customization, and a low effort cost for consistent use. To achieve this, we enhance the SOA Java SCA tool CryptoGuard with the following: integrated build tools, modern terminal Command Line Interface (CLI) usage, customizable and vendor-specific output formats, and no-install demos. / Doctor of Philosophy / With the rise of more development efforts and source code, there has also been a rise in source code vulnerabilities. To match this, more advanced security tools have been created to identify these vulnerabilities before they are exposed. SCA is a popular method for identifying vulnerable source code since it does not execute any code and can scan the code while the developer is writing it. Despite its popularity, there is still much room for improvement. My Ph.D. work aims to help reduce exposed vulnerabilities by YIELDing, ENHANCing, and EVALUATing (EYE) SCA tools that identify vulnerabilities while the developer writes the code. First, we look into evaluating tools that support and refine SCA by examining the Accuracy and security of generative LLMs.
LLMs have been on the rise recently with the introduction of ChatGPT 3.5 and, more recently, ChatGPT 4.1. ChatGPT is a conversation-based program in which you ask a question and it answers. It can explain small source code snippets to developers, provide source code examples, or even fix source code. While the developers of these LLMs have restricted certain aspects of the models, one of their main selling points is source code assistance. With over 1,000 zero-shot prompts, we measure how accurately and reliably LLMs identify the existence and details of vulnerabilities within source code. Next, we yield a new SCA tool to address a current gap in increasingly complex source code. This tool is Cryptolation, a Python SCA tool that uses variable inference to try to determine variable values without execution. Python source code has ever-increasing complexity and, compared to Java, a lack of tools. We compare Cryptolation with four other SOA tools. To verify the Precision of our tool, we create the benchmark PyCryptoBench, with over 1,000 test cases encompassing five different language features. Next, we crawled over 1,000 cryptographic-related Python projects on GitHub and scanned each with each tool. Finally, we reviewed all PyCryptoBench results and samples of the results from the cryptographic-related Python project scans. The results reveal that Cryptolation has 100% Precision on the benchmark and the second-highest Precision on cryptographic-related projects. Next, we look at enhancing SCA tools. The SOA tools already compete to have the highest Precision, Recall, and Accuracy. However, we investigated current developer surveys to see what reasons developers identified for not adopting such tools: generally, better aesthetics, usability, customization, and a low effort cost for consistent use. To achieve this, we enhance the SOA Java SCA tool CryptoGuard to address these needs.
375

The genetic basis of phenotypic differentiation in Python regius and Gasterosteus aculeatus

Garcia-Elfring, Alan January 2023 (has links)
No description available.
376

Implementation of a Model-View-Controller Pattern for the Development of a Graphical Interface for Remote Control of a Function Generator Using the Python Development Platform

Kramer, Fabian 10 December 2024 (has links)
In this thesis I developed software with which a real function generator can be remotely controlled. The Python programming language and the model-view-controller (MVC) pattern served as the foundation. The goal was to create a graphical user interface resembling the device as closely as possible and to implement a control mechanism for command transmission, in order to support digital teaching. Contents (translated): Preface; 1 Introduction; 2 Theoretical foundations (remote control of laboratory instruments; the function generator; programming with Python; fundamentals of the MVC pattern: the individual components and their interactions); 3 Description of the object of investigation (analysis of the current state: laboratory practicals, technical data of the function generator, state of digitalization; target state; analysis of the function generator: construction and functional analysis); 4 Computational implementation (preparation of the software development: selection of programming tools, selection of a development environment, establishment of programming premises; software implementation of the MVC model: view as graphical user interface, model as data model, controller as control logic, implementation of backend functions); 5 Prototype commissioning; 6 Summary; 7 Outlook
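The MVC division described can be sketched, stripped of any GUI toolkit, as follows. The SCPI-style command strings and the transport are assumptions; the thesis's actual device interface is not reproduced here.

```python
class FunctionGeneratorModel:
    """Holds the device state. In the real system this layer would
    also forward commands to the instrument over a serial or VISA
    connection (an assumption -- the transport is not described above)."""
    def __init__(self):
        self.waveform = "sine"
        self.frequency_hz = 1000.0

class ConsoleView:
    """Stand-in for the graphical front panel: just records output."""
    def __init__(self):
        self.messages = []
    def show(self, text: str):
        self.messages.append(text)

class Controller:
    """Mediates between view events and model updates."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def set_frequency(self, hz: float):
        self.model.frequency_hz = hz
        self.view.show(f"FREQ {hz:.1f} Hz")
    def set_waveform(self, shape: str):
        self.model.waveform = shape
        self.view.show(f"FUNC {shape.upper()}")
```

The separation keeps the GUI swappable: a Tkinter or Qt front panel would replace `ConsoleView` without touching the model or controller.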
377

Improving the Accessibility of Smartwatches as Research Tools by Developing a Software Library

Wanjara, Dhwan Devendra 13 June 2022 (has links)
Over the past 10 years, smartwatches have become increasingly popular for commercial use. The ever-increasing capabilities, accuracy, and sophistication of smartwatches are making them increasingly appealing to physical activity researchers as a valuable research tool. The non-invasive nature, prevalence, and versatility of smartwatches are being utilized to track heart rate, blood-oxygen levels, activity and movement, and sleep. However, the current state of the art lacks a uniform method to extract, organize, and analyze data collected from these devices. The objective of this research was to develop a Python software library that is widely available, highly capable, and easy to use with the data collected by the Apple Watch. The library was designed to offer data science, visualization, and mining features that help physical activity researchers find and communicate patterns in Apple Health data. The custom-built caching system of the library provides near-instant runtime to parse and analyze large files without trading off on memory usage. The Wanjara Smartwatch Library has significantly better performance, proven reliability and robustness, and improved usability than the alternatives discovered in the review of the literature. / Master of Science
378

Fuzzing tool for industrial communication

Köhler Djurberg, Markus, Heen, Isak January 2024 (has links)
Unit testing is a fundamental practice in software development, and the goal is to create a test suite that tests the robustness of the software. It is challenging to create a test suite that covers every possible input to a system, which can lead to security flaws going undetected. Fuzz testing is a technique that creates randomly generated, or fuzzy, input with the goal of uncovering those areas of the input space potentially missed by the unit test suite. EtherNet/IP is an industrial communications protocol built on top of the TCP/IP suite. HMS Anybus develops hardware for use in secure networks in industrial settings utilizing the EtherNet/IP protocol. This report outlines the development of a Scapy-based fuzz testing tool capable of testing the implementation of the protocol on HMS devices. Additionally, we propose a strategy for how the tool can be deployed in future testing. The resulting fuzz testing tool is capable of creating packets containing selected commands' encapsulation headers and layering them with command-specific data fields. These packets can be filled with static or fuzzy input depending on user configuration. The tool is implemented with the intention of providing HMS the capability to conduct fuzz testing. The report discusses multiple improvements that can be made using AI-assisted generation of test cases and how the tool can be scaled in the future. This thesis project is a proof of concept that a fuzz testing tool tailored to the EtherNet/IP protocol can be built using Scapy.
379

RedditXtract : An IT Forensic Tool Developed for Extracting Reddit's Application Data

Andersson, Gustav, Salomonsson, Julia January 2024 (has links)
In the digital era, where every interaction leaves a trace and every click reveals a story, Reddit is more than just a forum: it is a gold mine of user data. By digging deep into this digital labyrinth, detailed traces of scrolling and clicking have been discovered. This study has developed a tool, RedditXtract, which compiles the most central user data and reveals the heart of a user's digital profile. From a forensic perspective, the tool aims to enable more effective use of Reddit's user data in criminal investigations. The methodology comprises experiments and an interview. The experiments include a forensic analysis of Reddit's application data on an Android phone and the creation of a Python script. An interview with an experienced prosecutor provides insights into the potential evidentiary value of Reddit data in criminal cases. The results of the forensic analysis establish that a considerable amount of user data can be extracted from Reddit. Combined with the results of the interview, the conclusion is that virtually all of this data can be of interest in a criminal investigation, depending on the case and its purpose. Examination of deleted and modified data shows that it is limited: edited and some deleted data could be located in one of the examined databases, but this data is only retained for a limited time. The development of the Python script shows that knowledge of the databases' structure and the use of relevant Python libraries are required to write a script that extracts Reddit's application data from an image of an Android phone.
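The extraction step can be sketched as follows. The table and column names are hypothetical placeholders: as the study notes, the real schema must first be established by inspecting the databases recovered from the device image.

```python
import sqlite3

def extract_user_activity(db_path: str) -> list[tuple]:
    """Pull timestamped user activity rows from a Reddit app database.

    'user_activity' and its columns are hypothetical; inspect the
    actual SQLite files from the Android image to find the real
    tables before adapting this query.
    """
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT timestamp, action, subreddit FROM user_activity "
            "ORDER BY timestamp"
        ).fetchall()
    finally:
        con.close()
```

For evidential integrity, a forensic workflow would run such queries against a read-only copy of the extracted database, never the original.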
380

Comparative Analysis of Machine Learning Libraries : Performance and Availability in Python, Scala, and Lua

Eriksson, Johanna, Jakobsson, Emil January 2024 (has links)
This report presents a comparative analysis of Python, Scala, and Lua for machine learning tasks, focusing on performance and library availability. The study addresses the gap in understanding how these less popular programming languages perform compared to Python, which is widely used in the machine learning community. Logistic regression and neural networks were investigated using datasets of varying sizes and complexities. The method involved implementing these algorithms in each language and measuring runtime and accuracy. The results showed that Python consistently achieved shorter runtimes and required fewer lines of code, largely due to its optimized Scikit-learn library. Python performed best overall, with Scala close behind; Scala's gap may partly stem from Python's default settings being used as the parameter settings throughout the experiment. Lua lagged significantly in performance and accuracy, hindered by its limited and outdated library support. The findings suggest that while Python remains the best choice for most machine learning tasks, Scala is a strong contender for large-scale data processing. Lua, however, does not seem to be a good choice for machine learning due to its current limitations, though it may be suitable for other areas such as scripting.
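The runtime-and-accuracy measurement can be sketched as below. This plain NumPy gradient-descent logistic regression stands in for the library implementations (e.g. Scikit-learn's) actually benchmarked; the hyperparameters are arbitrary defaults.

```python
import time
import numpy as np

def train_logistic(X: np.ndarray, y: np.ndarray,
                   lr: float = 0.1, epochs: int = 200) -> np.ndarray:
    """Plain gradient-descent logistic regression (no bias term)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)     # average-gradient step
    return w

def benchmark(X: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    """Return (runtime in seconds, training accuracy) -- the two
    metrics the comparison above rests on."""
    start = time.perf_counter()
    w = train_logistic(X, y)
    runtime = time.perf_counter() - start
    accuracy = float(np.mean((X @ w > 0) == (y == 1)))
    return runtime, accuracy
```

Running an equivalent loop in each language against the same datasets, with identical parameter settings, is what makes the cross-language runtimes comparable.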
