31

Mobile application for showing that behind the blocks within block programming there is code

Emanuelsson, Daniel, Rimhagen, Elsa January 2022 (has links)
Scratch is a block programming language that introduces beginners to programming. Instead of code, the user has access to a set of blocks with text and icons explaining how each block will affect the program being written. The connection between a block and its corresponding code can be hard for a beginner to grasp. The goal of this project is therefore to develop a user-friendly, flashcard-based mobile application that shows the target group of 8- to 16-year-olds that behind every block there is code. The application is developed in TypeScript, using React Native as the framework and the developer tool Expo for setting up and publishing the application. The final application consists of six screens: a starting screen, an information screen, a menu, a submenu, an "under development" screen, and a flashcard view. The user can navigate between the screens, and by choosing a specific block the flashcard view displays a flashcard with the block and its corresponding translation in Python. The goal of the project is fulfilled, and testing with a user group also confirmed that the application is user-friendly. Even so, it can be concluded that the step from block programming to syntax is hard, as difficulties in translating the blocks appeared along the way.
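The core of such a flashcard view is a lookup from a block's label to its code translation. The sketch below illustrates the idea; the block labels and Python translations are illustrative assumptions, not taken from the thesis, and the real app renders React Native components rather than plain strings.

```python
# Illustrative sketch: a lookup from Scratch-style block labels to Python
# snippets, the kind of mapping a flashcard view could display.
# Block names and translations below are assumptions, not from the thesis.
BLOCK_TO_PYTHON = {
    "move (10) steps": "x += 10",
    "repeat (10)": "for _ in range(10):",
    "if <condition> then": "if condition:",
    "say [Hello!]": 'print("Hello!")',
}

def flashcard(block_label: str) -> str:
    """Return the Python translation shown on the back of the flashcard."""
    return BLOCK_TO_PYTHON.get(block_label, "translation not yet available")
```

The fallback string mirrors the app's "under development" screen: blocks without a finished translation still get a well-defined card.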
32

Supplementing Dependabot’s vulnerability scanning : A Custom Pipeline for Tracing Dependency Usage in JavaScript Projects

Karlsson, Isak, Ljungberg, David January 2024 (has links)
Software systems are becoming increasingly complex, with developers frequently utilizing numerous dependencies. In this landscape, accurate tracking and understanding of dependencies within JavaScript and TypeScript codebases are vital for maintaining software security and quality. However, there exists a gap in how existing vulnerability scanning tools, such as Dependabot, convey information about the usage of these dependencies. This study addresses the problem of providing a more comprehensive dependency usage overview, a topic critical to aiding developers in securing their software systems. To bridge this gap, a custom pipeline was implemented to supplement Dependabot, extracting the dependencies identified as vulnerable and providing specific information about their usage within a repository. The results highlight the pros and cons of this approach, showing an improvement in the understanding of dependency usage. The effort opens a pathway towards more secure software systems.
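The pipeline's key step is turning a flagged package name into concrete usage sites. A minimal sketch of that idea, under the assumption that regex matching of import statements is acceptable (a real pipeline would parse the AST, and the function name here is hypothetical):

```python
# Hypothetical sketch: given a package flagged as vulnerable (e.g. by
# Dependabot), find the JavaScript/TypeScript files that actually import it.
# Regex matching is a simplification of proper AST-based analysis.
import re

def find_usages(source_files: dict[str, str], package: str) -> list[str]:
    """Return names of files that import the given package via
    `import ... from 'pkg'` or `require('pkg')`."""
    pattern = re.compile(
        r"""(?:import\s.*?from\s+|require\()\s*['"]"""
        + re.escape(package)
        + r"""['"]"""
    )
    return [name for name, text in source_files.items() if pattern.search(text)]
```

Files that declare the dependency but never import it would fall out of this list, which is exactly the usage signal the study argues Dependabot alone does not convey.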
33

Towards automated learning from software development issues : Analyzing open source project repositories using natural language processing and machine learning techniques

Salov, Aleksandar January 2017 (has links)
This thesis presents an in-depth investigation on the subject of how natural language processing and machine learning techniques can be utilized in order to perform a comprehensive analysis of programming issues found in different open source project repositories hosted on GitHub. The research is focused on examining issues gathered from a number of JavaScript repositories based on their user generated textual description. The primary goal of the study is to explore how natural language processing and machine learning methods can facilitate the process of identifying and categorizing distinct issue types. Furthermore, the research goes one step further and investigates how these same techniques can support users in searching for potential solutions to these issues. For this purpose, an initial proof-of-concept implementation is developed, which collects over 30 000 JavaScript issues from over 100 GitHub repositories. Then, the system extracts the titles of the issues, cleans and processes the data, before supplying it to an unsupervised clustering model which tries to uncover any discernible similarities and patterns within the examined dataset. What is more, the main system is supplemented by a dedicated web application prototype, which enables users to utilize the underlying machine learning model in order to find solutions to their programming related issues. Furthermore, the developed implementation is meticulously evaluated through a number of measures. First of all, the trained clustering model is assessed by two independent groups of external reviewers - one group of fellow researchers and another group of practitioners in the software industry, so as to determine whether the resulting categories contain distinct types of issues. 
Moreover, in order to find out if the system can facilitate the search for issue solutions, the web application prototype is tested in a series of user sessions with participants who are not only representative of the main target group which can benefit most from such a system, but who also have a mixture of both practical and theoretical backgrounds. The results of this research demonstrate that the proposed solution can effectively categorize issues according to their type, solely based on the user generated free-text title. This provides strong evidence that natural language processing and machine learning techniques can be utilized for analyzing issues and automating the overall learning process. However, the study was unable to conclusively determine whether these same methods can aid the search for issue solutions. Nevertheless, the thesis provides a detailed account of how this problem was addressed and can therefore serve as the basis for future research.
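The unsupervised step described above can be caricatured in a few lines: represent each issue title as a bag of words and group titles whose cosine similarity clears a threshold. The thesis applies a full clustering model to 30 000+ issues; this greedy grouping is only an illustration of the principle, and the threshold value is an arbitrary assumption.

```python
# Toy sketch of title clustering: bag-of-words vectors plus a greedy
# cosine-similarity grouping. Stands in for the thesis's real clustering model.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def group_titles(titles: list[str], threshold: float = 0.5) -> list[list[str]]:
    clusters: list[tuple[Counter, list[str]]] = []
    for title in titles:
        vec = Counter(title.lower().split())
        for centroid, members in clusters:
            if cosine(vec, centroid) >= threshold:
                members.append(title)
                centroid.update(vec)  # drift the centroid toward the new member
                break
        else:  # no cluster close enough: start a new one
            clusters.append((vec, [title]))
    return [members for _, members in clusters]
```

Even this crude scheme separates "error report" titles from feature requests, which hints at why free-text titles alone carried enough signal for the study's categorization.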
34

Systém pro automatické filtrování testů / System for Automatic Filtering of Tests

Lysoněk, Milan January 2020 (has links)
The goal of this thesis is to create a system capable of automatically determining the set of tests that should be run when a change occurs in the ComplianceAsCode project. The proposed method selects the set of tests based on static analysis of the changed source files, taking the internal structure of ComplianceAsCode into account. The system is divided into four parts: obtaining changes using the version control system, static analysis of various file types, determining which files are affected by those changes, and computing the set of tests that must be run for a given change. We implemented analysis of several different file types, and our system is designed to be easily extensible with analyses of further file types. The implementation is deployed on a server, where it automatically analyzes new contributions to the ComplianceAsCode project. The automatic runs inform contributors and developers about the detected changes and recommend which tests should be run for a given change, saving the time spent checking the correctness of contributions and the time spent running tests.
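The selection step this system performs can be sketched as graph reachability: given a map from each file to the files that depend on it, follow the edges from the changed files and collect every test reached. The file names and the `tests/` prefix below are illustrative assumptions; the real system derives the graph from static analysis of ComplianceAsCode sources.

```python
# Hedged sketch of test selection as reachability over a dependency graph.
from collections import deque

def affected_tests(changed: set[str], dependents: dict[str, set[str]]) -> set[str]:
    """Return all test files transitively affected by the changed files.

    `dependents[f]` lists the files that depend on file `f`."""
    seen, queue = set(changed), deque(changed)
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, ()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    # Keep only reachable files that are tests (illustrative naming convention).
    return {f for f in seen if f.startswith("tests/")}
```

Extending the system to a new file type then amounts to teaching the analyzer how to emit edges for that type; the traversal itself stays unchanged.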
35

Prieskum a taxonómia sieťových forenzných nástrojov / Network Forensics Tools Survey and Taxonomy

Zembjaková, Martina January 2021 (has links)
This master's thesis deals with a survey and taxonomy of network forensics tools. It describes basic information about network forensic analysis, including the process models, techniques, and data sources used in forensic analysis. The thesis then surveys existing taxonomies of network forensics tools, including a comparison of them, followed by a survey of the tools themselves. Besides the tools mentioned in the taxonomy survey, some additional network tools are discussed. Next, the datasets that serve as input for analysis by the individual tools are described in detail and compared. Based on the information gathered in these surveys, common use cases are proposed, and the tools are demonstrated as part of the description of each use case. In addition to publicly available datasets, newly created datasets, described in detail in a dedicated chapter, are used to demonstrate the tools. Based on the information obtained, a new taxonomy is proposed; it is based on tool use cases, in contrast to other taxonomies based on NFAT and NSM tools, user interface, data capture, analysis, or the type of forensic analysis.
36

Разработка web-приложения АРМ «Технический отчет доменного цеха» : магистерская диссертация / Development of a web-application automated workspace "Blast-furnace production technical report"

Перетыкина, К. Р., Peretykina, K. R. January 2021 (has links)
Магистерская диссертация посвящена разработке программного обеспечения автоматизированного рабочего места (АРМа) технолога доменного цеха, которое позволяет сформировать технический отчет о работе доменного цеха за отчетный период (месяц/год) с использованием web-приложения. В ходе работы рассмотрены основные этапы реализации программного модуля: анализ предметной области, проектирование и программная реализация web-приложения. В ходе разработки программного обеспечения АРМа спроектированы и реализованы серверная часть системы и web-приложение на платформе ASP.NET Core. Серверная часть включает базу данных, которая является не только местом хранения данных, но и частично реализует функции бизнес логики. Приложение позволяет технологу доменного цеха с помощью пользовательских форм сопровождать базу данных отчетных показателей работы доменного цеха и формировать технический отчет за определенный месяц и сохранять в различных форматах. Научная новизна полученных в работе результатов заключается в разработке методов эффективной организации, ведения процесса разработки и сопровождения специализированного информационного, алгоритмического и программного обеспечения АИС АППС ДЦ, включая базу данных доменного цеха и средства создания технического отчета доменного цеха: использование гибкой методологии разработки (Agile, SCRUM) и таск-трекера Atlassian JIRA для ведения проекта, взаимодействия с заказчиком во время разработки, отслеживания ошибок, визуального отображения задач и мониторинга процесса их выполнения; функциональное моделирование процессов и подсистем для реализации web-приложения подготовки технического отчета доменного цеха на основе методологии IDEF0 и средства реализации Ramus Educational; использование методики коллективного владения программным кодом на основе сервиса (удаленного репозитория) GitHub. 
Практическая значимость результатов заключается в том, что разработанное программное обеспечение позволит: производить автоматизированный сбор и подготовку необходимых отчетных данных о работе доменного цеха за нормативный период (месяц); специалистам инженерно-технологической группы доменного цеха сократить время на формирование отчетных документов, сократить время поиска необходимой фактической отчетной информации за счет реализации эргономичного web-интерфейса; специалистам отдела сопровождения информационных систем снизить трудозатраты на сопровождение, совершенствование и развитие системы с учетом пожеланий пользователей. Результаты работы могут быть использованы также в учебном процессе для обучения бакалавров и магистрантов по направлению «Информационные системы и технологии». Результаты работы представлены и обсуждены на международных и всероссийских конференциях: VII, VIII и IX Всероссийской научно-практической конференции студентов, аспирантов и молодых учёных (Екатеринбург, УрФУ, 2018, 2019, 2021); XII Всероссийской научно-практической конференции (Новокузнецк, СибГИУ, 2019); 77-й международной научно-технической конференции «Актуальные проблемы современной науки, техники и образования» (Магнитогорск, МГТУ, 2019). / The master's thesis is devoted to the development of software for an automated workstation (AWP) of a blast furnace shop technologist, which allows you to generate a technical report on the operation of a blast furnace shop for a reporting period (month / year) using a web application. In the course of the work, the main stages of the implementation of the software module were considered: analysis of the subject area, design and software implementation of a web application. During the development of the AWP software, the server part of the system and the web application on the ASP.NET Core platform were designed and implemented. 
The server part includes a database, which is not only a place for storing data, but also partially implements the functions of business logic. The application allows the technologist of the blast furnace shop, using user-defined forms, to accompany the database of reporting indicators of the blast furnace shop operation and generate a technical report for a specific month and save it in various formats. The scientific novelty of the results obtained in the work lies in the development of methods for effective organization, maintenance of the development process and maintenance of specialized information, algorithmic and software AIS APPS DC, including the blast furnace shop database and tools for creating a technical report of the blast furnace shop: - use of flexible development methodology (Agile, SCRUM) and the Atlassian JIRA task tracker for project management, interaction with the customer during development, tracking errors, visual display of tasks and monitoring the process of their implementation; - functional modeling of processes and subsystems for the implementation of a web application for preparing a technical report for a blast furnace shop based on the IDEF0 methodology and Ramus Educational implementation tools; - using the method of collective ownership of the program code based on the service (remote repository) GitHub. 
The practical significance of the results lies in the fact that the developed software will allow: - to carry out automated collection and preparation of the necessary reporting data on the operation of the blast furnace shop for the regulatory period (month); - for specialists of the blast-furnace shop engineering and technological group to reduce the time for the formation of reporting documents, to reduce the search time for the necessary actual reporting information due to the implementation of an ergonomic web interface; - specialists of the information systems support department to reduce labor costs for maintenance, improvement and development of the system, taking into account the wishes of users. The results of the work can also be used in the educational process for training bachelors and undergraduates in the direction of "Information systems and technologies". The results of the work are presented and discussed at international and all-Russian conferences: VII, VIII and IX All-Russian scientific-practical conference of students, graduate students and young scientists (Yekaterinburg, UrFU, 2018, 2019, 2021); XII All-Russian Scientific and Practical Conference (Novokuznetsk, SibGIU, 2019); 77th international scientific and technical conference "Actual problems of modern science, technology and education" (Magnitogorsk, MSTU, 2019).
37

Open Legacies : Exploring Thanatosensitivity in the Context of Creators’ Digital Commons Contributions

Pyttel, Miriam January 2022 (has links)
Technology has become closely interwoven with our lives, positioning us as authors of large and diverse databases. These extensive collections of digital assets will be left behind as digital legacies after users eventually die. Addressing the inevitability of death in digital systems, including considerations for pre-configuring, or accessing these digital legacies, calls for thanatosensitivity in design. As a relatively new field, thanatosensitive HCI research on digital legacy has primarily focused on data storage and security as well as social networking systems. However, people might create online content that can be of relevance postmortem beyond the next of kin and private network, such as contributions to digital commons communities. In my research, I explore challenges and opportunities for thanatosensitive design in the context of digital commons communities by examining two design cases as samples of that area: GitHub and the Free Music Archive. Through a process inspired by programmatic design research, I followed a mixed method approach including literature reviews, interviews, workshop sessions, and iterative design synthesis. The outcome is a guidebook consisting of annotated portfolios with design exemplars for each design case, accessible to different stakeholders for further collaboration. Drawing on the annotations and intersections between both cases, I frame the knowledge contributions of this study as insights from the design process, aiming to provide directions for future research on thanatosensitivity in systems for digital commons contributions.
38

Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA / with Applications for QuantNet 2.0 and GitHub

Borke, Lukas 08 September 2017 (has links)
Mit der wachsenden Popularität von GitHub, dem größten Online-Anbieter von Programm-Quellcode und der größten Kollaborationsplattform der Welt, hat es sich zu einer Big-Data-Ressource entfaltet, die eine Vielfalt von Open-Source-Repositorien (OSR) anbietet. Gegenwärtig gibt es auf GitHub mehr als eine Million Organisationen, darunter solche wie Google, Facebook, Twitter, Yahoo, CRAN, RStudio, D3, Plotly und viele mehr. GitHub verfügt über eine umfassende REST API, die es Forschern ermöglicht, wertvolle Informationen über die Entwicklungszyklen von Software und Forschung abzurufen. Unsere Arbeit verfolgt zwei Hauptziele: (I) ein automatisches OSR-Kategorisierungssystem für Data Science Teams und Softwareentwickler zu ermöglichen, das Entdeckbarkeit, Technologietransfer und Koexistenz fördert. (II) Visuelle Daten-Exploration und thematisch strukturierte Navigation innerhalb von GitHub-Organisationen für reproduzierbare Kooperationsforschung und Web-Applikationen zu etablieren. Um Mehrwert aus Big Data zu generieren, ist die Speicherung und Verarbeitung der Datensemantik und Metadaten essenziell. Ferner ist die Wahl eines geeigneten Text Mining (TM) Modells von Bedeutung. Die dynamische Kalibrierung der Metadaten-Konfigurationen, TM Modelle (VSM, GVSM, LSA), Clustering-Methoden und Clustering-Qualitätsindizes wird als "Smart Clusterization" abgekürzt. Data-Driven Documents (D3) und Three.js (3D) sind JavaScript-Bibliotheken, um dynamische, interaktive Datenvisualisierung zu erzeugen. Beide Techniken erlauben Visuelles Data Mining (VDM) in Webbrowsern, und werden als D3-3D abgekürzt. Latent Semantic Analysis (LSA) misst semantische Information durch Kontingenzanalyse des Textkorpus. Ihre Eigenschaften und Anwendbarkeit für Big-Data-Analytik werden demonstriert. "Smart clusterization", kombiniert mit den dynamischen VDM-Möglichkeiten von D3-3D, wird unter dem Begriff "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA" zusammengefasst. 
/ With the growing popularity of GitHub, the largest host of source code and collaboration platform in the world, it has evolved to a Big Data resource offering a variety of Open Source repositories (OSR). At present, there are more than one million organizations on GitHub, among them Google, Facebook, Twitter, Yahoo, CRAN, RStudio, D3, Plotly and many more. GitHub provides an extensive REST API, which enables scientists to retrieve valuable information about the software and research development life cycles. Our research pursues two main objectives: (I) provide an automatic OSR categorization system for data science teams and software developers promoting discoverability, technology transfer and coexistence; (II) establish visual data exploration and topic driven navigation of GitHub organizations for collaborative reproducible research and web deployment. To transform Big Data into value, in other words into Smart Data, storing and processing of the data semantics and metadata is essential. Further, the choice of an adequate text mining (TM) model is important. The dynamic calibration of metadata configurations, TM models (VSM, GVSM, LSA), clustering methods and clustering quality indices will be shortened as "smart clusterization". Data-Driven Documents (D3) and Three.js (3D) are JavaScript libraries for producing dynamic, interactive data visualizations, featuring hardware acceleration for rendering complex 2D or 3D computer animations of large data sets. Both techniques enable visual data mining (VDM) in web browsers, and will be abbreviated as D3-3D. Latent Semantic Analysis (LSA) measures semantic information through co-occurrence analysis in the text corpus. Its properties and applicability for Big Data analytics will be demonstrated. "Smart clusterization" combined with the dynamic VDM capabilities of D3-3D will be summarized under the term "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA".
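The LSA step named in the title can be stated compactly. In the standard formulation (which the abstract's description of co-occurrence analysis matches), the term-document matrix is approximated by a truncated SVD and documents are compared in the reduced latent space:

```latex
% Rank-k truncated SVD of the m x n term-document matrix X:
X \approx X_k = U_k \, \Sigma_k \, V_k^{\top}
% Documents are folded into the latent space and compared by cosine:
\hat{d}_i = \Sigma_k^{-1} U_k^{\top} x_i, \qquad
\operatorname{sim}(d_i, d_j)
  = \frac{\hat{d}_i^{\top} \hat{d}_j}
         {\lVert \hat{d}_i \rVert \, \lVert \hat{d}_j \rVert}
```

Choosing $k$ is part of what the thesis calls "smart clusterization": the truncation rank is one of the calibrated configuration parameters, alongside the TM model and the clustering quality index.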
39

Introducing Generative Artificial Intelligence in Tech Organizations : Developing and Evaluating a Proof of Concept for Data Management powered by a Retrieval Augmented Generation Model in a Large Language Model for Small and Medium-sized Enterprises in Tech / Introducering av Generativ Artificiell Intelligens i Tech Organisationer : Utveckling och utvärdering av ett Proof of Concept för datahantering förstärkt av en Retrieval Augmented Generation Model tillsammans med en Large Language Model för små och medelstora företag inom Tech

Lithman, Harald, Nilsson, Anders January 2024 (has links)
In recent years, generative AI has made significant strides, likely leaving an irreversible mark on contemporary society. The launch of OpenAI's ChatGPT 3.5 in 2022 demonstrated the performance and accessibility of this innovative technology, creating demand for implementation solutions across industries and companies eager to leverage the new opportunities generative AI brings. This thesis explores the common operational challenges faced by a small-scale Tech Enterprise and, with these challenges identified, examines the opportunities that contemporary generative AI solutions may offer. Furthermore, the thesis investigates what type of generative technology is suitable for adoption and how it can be implemented responsibly and sustainably. The authors approach this topic through 14 interviews involving several AI researchers and the employees and executives of a small-scale Tech Enterprise, which served as a case company, combined with a literature review. The information was processed using multiple inductive thematic analyses to establish a solid foundation for the investigation, which led to the development of a Proof of Concept. The findings and conclusions of the authors emphasize the high relevance of having a clear purpose for the implementation of generative technology. Moreover, the authors predict that a sustainable and responsible implementation can create the conditions necessary for the specified small-scale company to grow. When the authors investigated potential operational challenges at the case company, it became clear that the most significant issue arose from unstructured and partially absent documentation.
The conclusion reached by the authors is that a data management system powered by a retrieval model in an LLM presents a potential path forward for significant value creation, as this solution enables data retrieval from unstructured project data and also mitigates a major inherent issue with the technology, namely hallucinations. Furthermore, in terms of implementation circumstances, both empirical and theoretical findings suggest that responsible use of generative technology requires training; hence, the authors have developed an educational framework named "KLART". Moving forward, the authors describe that sustainable implementation necessitates transparent systems, as this increases understanding, which in turn affects trust and secure use. The findings also indicate that sustainability is strongly linked to the user-friendliness of the AI service, leading the authors to emphasize the importance of human-centered design (HCD) while developing and maintaining AI services. Finally, the authors argue for the value of automation, as it allows for continuous data and system updates that can potentially reduce maintenance. In summary, this thesis aims to contribute to an understanding of how small-scale Tech Enterprises can implement generative AI technology sustainably to enhance their competitive edge through innovation and data-driven decision-making.
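The retrieval-augmented pattern the thesis argues for can be sketched in miniature: select the project-document chunk that best matches a question, then ground the model's prompt in it so answers stay tied to the retrieved text. Word-overlap scoring here stands in for the embedding search a production RAG system would use, and all function names and prompt wording are illustrative assumptions.

```python
# Minimal RAG-style sketch: keyword-overlap retrieval plus a grounded prompt.
def retrieve(question: str, chunks: list[str]) -> str:
    """Return the chunk sharing the most words with the question."""
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

def build_prompt(question: str, chunks: list[str]) -> str:
    """Compose an LLM prompt grounded in the best-matching chunk."""
    context = retrieve(question, chunks)
    return (
        "Answer using ONLY the context below; say 'unknown' otherwise.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )
```

Constraining the model to the retrieved context is what mitigates hallucination: when the documentation does not contain an answer, the prompt instructs the model to say so rather than invent one.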
