About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
671

Can Developer Data Predict Vulnerabilities? : Examining Developer and Vulnerability Correlation in the Kibana Project

Lövgren, Johan January 2023
Open-source software is often chosen with the expectation of increased security [1]. The transparency and peer review process of open development offer advantages in terms of more secure code. However, developing secure code remains a challenging task that requires more than just expertise. Even with adequate knowledge, human errors can occur, leading to mistakes and overlooked issues that may result in exploitable vulnerabilities. It is reasonable to assume that not all developers introduce bugs or vulnerabilities randomly, since each developer brings unique experience and knowledge to the development process. The objective of this thesis is to investigate a method for identifying high-risk developers who are more likely to introduce vulnerabilities or bugs, which can be used to predict potential locations of bugs or vulnerabilities in the source code based on the developer who wrote the code. Metrics related to developers’ code churn, code complexity, bug association, and experience were collected during a case study of the open-source project Kibana. The findings provide empirical evidence suggesting that developers who write more complex code and have greater project activity pose a higher risk of introducing vulnerabilities and bugs. Developers who have introduced vulnerabilities also tend to exhibit higher code churn, code complexity, and bug association compared to those who have not. However, the metrics employed in this study were not sufficiently discriminative to identify developers with a higher per-commit risk of introducing vulnerabilities or bugs. Nevertheless, the results of this study serve as a foundation for further research in this area.
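The metric-aggregation approach the abstract describes — collecting churn, complexity, and bug-association figures per developer and comparing them — can be sketched as follows. The commit records, metric names, and the complexity-based ranking heuristic are illustrative assumptions, not the thesis's actual data or model:

```python
from statistics import mean

# Hypothetical commit records: (developer, lines churned, cyclomatic
# complexity, linked bug reports). The study mined such data from the
# Kibana git history.
commits = [
    ("alice", 420, 14, 3),
    ("alice", 310, 11, 2),
    ("bob",    35,  3, 0),
    ("bob",    60,  4, 1),
    ("carol", 510, 18, 4),
]

def developer_profile(commits):
    """Aggregate per-developer churn, complexity, and bug association."""
    acc = {}
    for dev, churn, cc, bugs in commits:
        p = acc.setdefault(dev, {"churn": [], "cc": [], "bugs": 0, "n": 0})
        p["churn"].append(churn)
        p["cc"].append(cc)
        p["bugs"] += bugs
        p["n"] += 1
    return {
        dev: {
            "mean_churn": mean(p["churn"]),
            "mean_complexity": mean(p["cc"]),
            "bugs_per_commit": p["bugs"] / p["n"],
        }
        for dev, p in acc.items()
    }

profiles = developer_profile(commits)
# Rank developers by mean complexity as a simple risk signal (illustrative).
ranked = sorted(profiles, key=lambda d: profiles[d]["mean_complexity"],
                reverse=True)
print(ranked[0])  # prints "carol", the highest-complexity developer here
```

A real analysis would compare the metric distributions of vulnerability-introducing and non-introducing developers rather than rank on a single metric.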
672

Leveraging Street View and Remote Sensing Imagery to Enhance Air Quality Modeling through Computer Vision and Machine Learning

Qi, Meng 14 February 2024
Air pollution is associated with various adverse health impacts and is identified as one of the leading risk factors for global disease burden. Further, air pollution is one of the pathways through which climate change could negatively impact health. Field studies have shown that air pollution has high spatiotemporal variability and that pollutant concentrations vary substantially within neighborhoods. Characterizing air pollution at a fine-grained level is essential for accurately estimating human exposure, assessing its impact on human health, and informing localized air pollution policy. Air quality models are developed to estimate air pollution at locations and time periods without monitors, and these estimates are commonly used for exposure and health effects studies. Traditional land use regression [LUR] models are among the most cost-effective empirical air quality models. LUR typically relies on fixed-site measurements and GIS-derived variables with limited spatial resolution, and captures only linear relationships. In recent years, innovative open-source imagery datasets and their associated features (e.g., street view imagery, remote sensing imagery) have emerged and show potential to augment or replace traditional LUR predictors. Such imagery data sources embody abundant information about natural and built environment features, and advanced computer vision techniques enable feature extraction and quantification from these extensive imagery datasets. The overarching objective of this dissertation is to investigate the feasibility of leveraging open-source imagery datasets (i.e., Google Street View [GSV] imagery, Landsat imagery, etc.) and advanced machine learning algorithms to develop image-based empirical air quality models at both local and national scale. The first study of this work established a pipeline of feature extraction through street view imagery semantic segmentation.
The resulting street view features were used to predict street-level particulate air pollution for a single city. The results showed that using GSV-derived features alone can achieve model fits comparable to those obtained with traditional GIS-derived variables. Feature engineering improved model stability and interpretability by reducing spurious variables caused by potential misclassifications from the computer vision algorithms. The second study further developed GSV-based models at national scale across multiple years. Random forest models were developed to capture the nonlinear relationship between air pollution and its impacting factors. The results showed that, with sufficient street view images, GSV imagery alone may explain the variation of long-term national NO2 concentrations. Adding satellite-derived aerosol estimates (i.e., OMI column density) can significantly boost model performance when GSV images are insufficient, but the gain narrows as more GSV images become available. Our systematic assessment of the impact of image availability on model performance suggested that a parsimonious image sampling strategy (i.e., one GSV image per 100 m grid) may be sufficient and most cost-effective for model development and application. Our third study explored the feasibility of combining street view and remote sensing derived features for national NO2 and PM2.5 modeling and projection at high spatial resolution. We found that GSV-based models captured both the highest and lowest pollutant concentrations, while remote sensing features tended to smooth the air pollution variations. The results suggested that GSV features may better capture fine-scale air pollution variability. The resulting air pollution prediction product may serve a variety of applications, including providing new insights for environmental justice and epidemiological studies due to its high spatial resolution (i.e., street level).
Collectively, the results of this dissertation suggest that GSV imagery, processed with computer vision techniques, is a promising data source for developing empirical air quality models with high spatial resolution and a consistent predictor-variable processing protocol. Image-based features combined with advanced ML approaches have the potential to greatly improve air quality modeling estimates, showing model performance comparable and in some cases superior to other modeling studies. Moreover, the ever-growing public imagery data sources are particularly promising for remote or less developed areas where traditional curated geodatabases are sparse or nonexistent. / Doctor of Philosophy / Air pollution is detrimental to human health and well-being. Further, air pollutant concentrations can change rapidly over short distances and time frames, so monitoring air pollution with high spatiotemporal resolution is important. Traditional air quality monitoring networks are expensive and sparsely distributed, leading to gaps in capturing air pollution at small spatial scales. Air quality models are developed to estimate air pollution at locations and time periods without monitors. Empirical air quality models often use air measurements from stationary sites and GIS-derived features (e.g., traffic, population density, land use types, etc.) to develop regression models, and use the resulting regression formula to estimate air pollutant concentrations in unmonitored areas. However, GIS-derived features typically come from curated GIS databases, which often have coarse resolution when available across large geographies. Street view imagery and remote sensing imagery contain rich information about natural and built environments, and computer vision techniques can be applied to extract that information to replace or augment traditional GIS-derived features.
Combined with advanced machine learning algorithms, features derived from open-access images are a promising basis for air quality models with a consistent image collection and processing protocol. This dissertation examines the feasibility of using street view imagery (i.e., Google Street View [GSV] imagery) and remote sensing imagery to develop air quality models at both local and national scales. Our results show that models built solely on GSV features, at both local and national scale, achieve good performance, consistent with or better than models using traditional GIS-derived variables. For areas without sufficient GSV images, adding satellite observations of air pollution can significantly enhance model performance. Remote sensing features tend to smooth air pollution variation, while GSV features better capture fine-scale intra-urban variation. In conclusion, leveraging open-source imagery datasets with advanced machine learning methods is promising for estimating air pollution at high spatial resolution with good model fits.
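The feature-extraction step of the first study — converting segmentation masks into per-class pixel fractions used as model predictors — can be sketched like this. The masks and NO2 values are made up, and an ordinary least-squares fit stands in for the dissertation's random forest models to keep the sketch dependency-light:

```python
import numpy as np

def class_fractions(mask, n_classes=3):
    """Fraction of pixels per semantic class -- the GSV-derived features."""
    counts = np.bincount(mask.ravel(), minlength=n_classes)
    return counts / mask.size

# Hypothetical 4x4 segmentation masks (class ids: 0=road, 1=tree, 2=building),
# standing in for the output of a segmentation model applied to GSV images.
masks = [
    np.array([[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 1, 1], [2, 2, 1, 1]]),
    np.array([[0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 2, 2], [1, 1, 2, 2]]),
]
no2 = np.array([18.0, 27.0])  # hypothetical NO2 measurements at the two sites

# Stack per-site feature vectors and fit a linear model with an intercept.
X = np.stack([class_fractions(m) for m in masks])
design = np.c_[np.ones(len(X)), X]
coef, *_ = np.linalg.lstsq(design, no2, rcond=None)
pred = design @ coef
print(np.round(pred, 1))  # recovers the two training observations
```

With real data there would be thousands of sites, many more classes, and a nonlinear learner; the point is only the mask-to-feature-vector pipeline.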
673

An Investigation of Routine Repetitiveness in Open-Source Projects

Arafat, Mohd 13 August 2018
No description available.
674

The Origin, Evolution, and Variation of Routine Structures in Open Source Software Development: Three Mixed Computational-Qualitative Studies

Lindberg, Aron 03 September 2015
No description available.
675

The Design and Implementation of a Web-based GIS for Political Redistricting

Chen, Wei 29 September 2009
No description available.
676

Small Businesses and the Benefits of Developing Open Source: A Case Study / Småföretag och nyttan av att utveckla open source - en fallstudie

Zalas, Pierre January 2017
Open source, or open source code, means that anyone can view, modify, and share the code. Companies, organizations, and individuals invest both time and resources in developing open source projects, even though the majority of projects do not generate any form of financial gain. Pocketsize is a Malmö-based web agency that has developed a front-end framework called Bolts as well as an automation tool called Toolbelt, both intended to be released as open source. This study aims to contribute knowledge about what drives small businesses, non-profit organizations, or individuals to invest time in developing open source projects. To that end, a case study was conducted at the web agency Pocketsize in Malmö, including an analysis of the Bolts framework and the Toolbelt tool developed by Pocketsize, in order to understand why Pocketsize chooses to release them as open source. Theory and previous research present motivational factors and driving forces behind the development of open source projects. The study adopted a qualitative approach in which observations and semi-structured interviews were conducted to map patterns of action and to capture how individuals think and reason about the subject. The results show that developers often create a product based on their own needs, and that a sense of altruism and the willingness to “give back” to the open source community are among the major motivating factors and driving forces behind contributing to open source projects.
677

Exploring the Dynamics of Software Bill of Materials (SBOMs) and Security Integration in Open Source Projects

Ambala, Anvesh January 2024
Background. The rapid expansion of open-source software has introduced significant security challenges, particularly concerning supply chain attacks. Software supply chain attacks, such as the NotPetya attack, have underscored the critical need for robust security measures. Managing dependencies and protecting against such attacks have become essential, leading to the emergence of Software Bill of Materials (SBOMs) as a crucial tool. SBOMs offer a comprehensive inventory of software components, aiding in identifying vulnerabilities and ensuring software integrity. Objectives. Investigate the information contained within SBOMs in Python and Go repositories on GitHub. Analyze the evolution of SBOM fields over time to understand how software dependencies change. Examine the impact of the US Executive Order of May 2021 on the quality of SBOMs across software projects. Conduct dynamic vulnerability scans in repositories with SBOMs, focusing on identifying types and trends of vulnerabilities. Methods. The study employs archival research and quasi-experimentation, leveraging data from GitHub repositories. This approach facilitates a comprehensive analysis of SBOM contents, their evolution, and the impact of policy changes and security measures on software vulnerability trends. Results. The study reveals that SBOMs become more complex as projects grow, with Python projects generally having more components than Go projects. Both ecosystems saw reductions in vulnerabilities in later versions. The US Executive Order of 2021 positively impacted SBOM quality, with measures such as structural elements and NTIA guidelines showing significant improvements post-intervention. Integrating security scans with SBOMs helped identify a wide range of vulnerabilities. Projects varied in critical vulnerabilities, highlighting the need for tailored security strategies. CVSS scores and CWE IDs provided insights into vulnerability severity and types. Conclusions.
The thesis highlights the crucial role of SBOMs in improving software security practices in open-source projects. It shows that policy interventions like the US Executive Order and security scans can significantly enhance SBOM quality, leading to better vulnerability management and detection strategies. The findings contribute to the development of robust dependency management and vulnerability detection methodologies in open-source software projects.
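A minimal sketch of the kind of SBOM cross-referencing the thesis describes: parsing a CycloneDX-style SBOM and matching its components against an advisory feed. The component names, versions, advisory entries, and the `audit_sbom` helper are hypothetical; a real scan would query a vulnerability database by package URL (purl):

```python
import json

# Minimal CycloneDX-style SBOM fragment (hypothetical components).
sbom_json = """{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"name": "requests", "version": "2.19.0", "purl": "pkg:pypi/requests@2.19.0"},
    {"name": "flask",    "version": "2.3.2",  "purl": "pkg:pypi/flask@2.3.2"}
  ]
}"""

# Hypothetical advisory feed: purl -> (CVSS score, CWE id), standing in
# for the output of a real vulnerability scanner.
advisories = {"pkg:pypi/requests@2.19.0": (9.8, "CWE-255")}

def audit_sbom(raw, advisories):
    """Count SBOM components and flag those with known advisories."""
    bom = json.loads(raw)
    components = bom.get("components", [])
    findings = [
        (comp["name"], *advisories[comp["purl"]])
        for comp in components
        if comp.get("purl") in advisories
    ]
    return len(components), findings

n_components, findings = audit_sbom(sbom_json, advisories)
print(n_components, findings)  # 2 [('requests', 9.8, 'CWE-255')]
```

Tracking `n_components` across SBOM versions is one way to observe the growing complexity the Results section reports.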
678

Collaborative learning in Open Source Software (OSS) communities: The dynamics and challenges in networked learning environments

Mitra, Raktim 22 August 2011
The proliferation of web-based technologies has resulted in new forms of communities and organizations, with enormous implications for the design of learning and education. This thesis explores learning occurring within open source software (OSS) communities. OSS communities are a dominant form of organizing in software development, with implications not only for innovative product development but also for the training of a large number of software developers. The central catalyst of learning within these communities is expert-novice interaction. These interactions between experts and novices, or newcomers, are critical for the growth and sustenance of a community, and it is therefore imperative that experts are able to provide newcomers with the requisite advice and support as they traverse the community and develop software. Although prior literature has demonstrated the significance of expert-novice interactions, two central issues have not been examined. First, the role of external events on community interaction, particularly as it relates to experts and novices, has not been examined. Second, the exact nature of expert help, particularly the quantity of help and whether it helps or hinders newcomer participation, has not been studied. This thesis studies these two aspects of expert-novice interaction within OSS communities. The data for this study come from two OSS communities. The Java newcomer forum was studied as it provided a useful setting for examining external events, given the recent changes in Java's ownership. Furthermore, the forum has a rating system which classifies newcomers and experienced members, allowing the analysis of expert-novice interactions. The second set of data comes from the MySQL newcomer forum, which has also undergone organizational changes and allows for comparison with data from the Java forum. Data were collected by parsing information from the HTML pages and stored in a relational database.
To analyze the effect of external events, a natural experiment method was used whereby participation levels were studied around significant events that affected the community. To better understand the changes contextually, an extensive study of major news outlets was also undertaken. Findings from the external event study show significant changes in participation patterns, especially among newcomers, in response to key external events. The study also revealed that the changes in newcomer participation were observed even though other internal characteristics (help giving, expert participation) did not change, indicating that external events have a strong bearing on community participation. The effect of expert advice was studied using a logistic regression model to determine how specific participation patterns in discussion threads led to the final response to newcomers. This was supported by social network analysis to visually interpret the participation patterns of experienced members in two different scenarios: one in which the question was answered and one in which it was not. Findings show that a higher number of responses from experienced members did not correlate with the question being answered. Therefore, although expert help is essential, non-moderated or unguided help can lead to conflict among experts and inefficient feedback to newcomers. / Master of Science
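The logistic-regression step the abstract describes — relating thread participation patterns to whether a newcomer's question was answered — can be sketched as follows. The thread records, feature choice, and plain gradient-descent fit are illustrative assumptions, not the thesis's actual data or estimation procedure:

```python
import math

# Hypothetical thread records: (expert replies, distinct experts) -> was the
# newcomer's question answered? The real study parsed such data from forum HTML.
threads = [
    ((1, 1), 1), ((2, 1), 1), ((2, 2), 1),
    ((5, 4), 0), ((6, 5), 0), ((4, 4), 0),
]

def sigmoid(z):
    if z < -60:  # guard against overflow in exp() for extreme inputs
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression fit by batch gradient descent.
w = [0.0, 0.0, 0.0]  # bias, expert replies, distinct experts
for _ in range(5000):
    grad = [0.0, 0.0, 0.0]
    for (x1, x2), y in threads:
        err = sigmoid(w[0] + w[1] * x1 + w[2] * x2) - y
        for j, xj in enumerate((1, x1, x2)):
            grad[j] += err * xj
    w = [wj - 0.1 * g / len(threads) for wj, g in zip(w, grad)]

# Predicted probability of the question being answered.
p_few = sigmoid(w[0] + w[1] * 2 + w[2] * 1)   # few expert replies
p_many = sigmoid(w[0] + w[1] * 6 + w[2] * 5)  # many expert replies
print(p_few > 0.5, p_many < 0.5)  # both True once the fit has converged
```

On this toy data the model reproduces the abstract's finding: more expert responses in a thread do not imply the question gets answered.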
679

Integration of Open-Source Networks

Cooper, Thomas A. 10 May 2012
Global System for Mobile Communications (GSM) networks are receiving increasing attention in the open-source community. Open-source software allows for deployment of a mobile cellular network with lower costs, more customization, and scalable control. Two popular projects have emerged that offer varying network architectures and allow users to implement a GSM network in different capacities depending on individual needs. Osmocom provides more network control and scalability but requires commercial Base Transceiver Station (BTS) hardware with limited availability and closed source code. OpenBTS provides minimal GSM network functionality with more easily available and open-source hardware; however, it does not allow multi-cellular network configuration. This thesis offers a significant contribution towards a fully open-source GSM network by integrating the two major open-source communities, Osmocom and OpenBTS. Specifically, the Osmo-USRP program provides an inter-layer interface between the different network architectures of two GSM base station projects. Inter-layer primitive messages are processed in a thread multiplexer that manages logical channels across the interface. Downstream flow control is implemented in order to receive data frames on time for transmitting at the appropriate GSM frame number (FN). Uplink measurements, which are necessary for decision making in the Base Station Controller (BSC), are also gathered in the physical layer of Osmo-USRP and reported to Osmocom. Osmo-USRP operation is tested using a Universal Software Radio Peripheral (USRP), a relatively inexpensive and accessible Software-Defined Radio (SDR). Standard GSM events are investigated for single cell and multi-cellular network configurations. 
These tests include subscriber authentication and encryption, location updating, International Mobile Subscriber Identity (IMSI) attach and detach, Short Message Service (SMS) storage and delivery, voice calls with the full-rate audio codec, and uplink and downlink measurement reporting. While most functionality is successfully tested, inter-cell handover is not currently implemented. Proposed approaches to the program's remaining limitations, especially inter-cell handover, are also discussed. / Master of Science
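The downstream flow control described above — holding data until the radio clock reaches each burst's target frame number (FN) — can be illustrated with a toy model. This is a Python sketch of the idea only (the class and method names are assumptions); the actual Osmo-USRP implementation must also handle clock advances, timing tolerances, and threading:

```python
from collections import deque

GSM_HYPERFRAME = 2715648  # GSM frame numbers wrap modulo this value

class DownlinkBuffer:
    """Toy downstream flow-control buffer: bursts queued for a target FN
    are released only when the radio clock reaches that FN."""

    def __init__(self):
        self.pending = deque()  # (fn, burst) pairs in arrival order

    def submit(self, fn, burst):
        # Bursts may arrive early; normalize FN into the hyperframe range.
        self.pending.append((fn % GSM_HYPERFRAME, burst))

    def poll(self, current_fn):
        """Return the bursts scheduled for the current FN, keeping the rest."""
        due, keep = [], []
        for fn, burst in self.pending:
            (due if fn == current_fn else keep).append((fn, burst))
        self.pending = deque(keep)
        return [burst for _, burst in due]

buf = DownlinkBuffer()
buf.submit(100, "burst-A")
buf.submit(101, "burst-B")
print(buf.poll(100))  # ['burst-A']; burst-B stays queued for FN 101
```

The real interface additionally reports uplink measurements back to the BSC alongside this downlink scheduling.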
680

Fake News Detection : Using a Large Language Model for Accessible Solutions

Jurgell, Fredrik, Borgman, Theodor January 2024
In an attempt to create a fake news detection tool using a large language model (LLM), the emphasis is on validating the effectiveness of this approach and then making the tooling readily available. The tool uses the gpt-4-turbo-preview model and its assistant capabilities, combined with simple prompts tailored to different objectives. While tools that detect fake news and simplify the process are not new, insight into how they work and why is not commonly available, most likely due to the monetization of current services. This work addresses that gap by building an open-source platform that others can expand upon, giving insight into the prompts used and providing a baseline for experimentation and further development. Articles that are not willfully written as fake but merely omit key data are, unsurprisingly, very hard to detect. However, common tabloid-style news, which is often shared to create an emotional response, shows more promising detection results.
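The prompt-driven pipeline the abstract describes can be sketched as follows. The prompt wording, the verdict format, and the `classify`/`stub_llm` names are assumptions rather than the thesis's actual prompts; the `complete` callable would wrap a gpt-4-turbo-preview chat-completion call in the real tool, while an offline stub stands in here:

```python
# Hypothetical objective-tailored prompt; the thesis uses its own prompts.
PROMPT = (
    "You are a fact-checking assistant. Assess whether the article below "
    "shows signs of fabricated or misleading reporting. Answer with one "
    "word, CREDIBLE or SUSPECT, followed by a short justification.\n\n"
    "Article:\n{article}"
)

def classify(article, complete):
    """Send the article to an LLM and parse the one-word verdict."""
    reply = complete(PROMPT.format(article=article))
    verdict = reply.split()[0].strip(".,").upper()
    return verdict if verdict in {"CREDIBLE", "SUSPECT"} else "UNCLEAR"

def stub_llm(prompt):
    """Offline stand-in for the chat-completion API call."""
    return "SUSPECT. The article appeals to emotion and cites no sources."

print(classify("Shocking! Scientists HATE this one weird trick...", stub_llm))
# prints "SUSPECT"
```

Injecting the model call as a function keeps the prompt logic testable without network access, which also makes it easy for others to swap in a different LLM when extending the open-source platform.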
