About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
661

Oceanographic Instrument Simulator

Chen, Amy 01 March 2016 (has links) (PDF)
The Monterey Bay Aquarium Research Institute (MBARI) established the Free Ocean Carbon Enrichment (FOCE) experiment to study the long-term effects of decreased ocean pH by developing in-situ platforms [1]. Deep FOCE (dpFOCE), the first platform, was deployed in 950 meters of water in Monterey Bay. After the conclusion of dpFOCE, MBARI developed an open-source shallow-water FOCE (swFOCE) platform, located in around 250 meters of water, to facilitate shallow-water FOCE experiments worldwide [1][2]. A shallow-water platform can be deployed more widely than a deep-water one: shallow-water instruments are less expensive, since they need not be designed to withstand deep-ocean pressure, and are more easily deployed right along the coast. Because swFOCE is an open-source platform, MBARI has made the plans available online to anyone interested in studying shallow-water carbon enrichment. Within swFOCE, a gateway node is connected to four sensor nodes. To allow each sensor node to be tested individually, this thesis proposes an Oceanographic Instrument Simulator (OIS). The OIS described in this paper provides the means for MBARI engineers to test the swFOCE platform without attaching the numerous, expensive oceanographic instruments; it simulates the various scientific instruments that could be deployed in an actual experiment. The OIS system includes a custom-designed circuit board, an Arduino Due, and an SD card shield. The circuit board connects to a computer through a USB cable and to MBARI's swFOCE sensor node through a serial connection. When the sensor node issues a query, the Arduino Due parses the data from the sensor node, searches the pre-installed data on the SD card, and returns the appropriate data to the sensor node. A user can also manually set the input current through a computer terminal window to control the simulated signals from the PCB.
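The query-response behavior described above is simple enough to sketch. The following minimal host-side sketch, written in Python with pyserial rather than on the Arduino itself, shows the pattern: read a query over the serial link, look up pre-recorded readings, and write back a response. The query format, the CSV layout, and the port name are illustrative assumptions, not MBARI's actual swFOCE protocol.

```python
# Minimal sketch of the OIS query-response loop described above.
# The query format ("<instrument>?"), the CSV layout of the pre-recorded
# data, and the port name are illustrative assumptions, not MBARI's
# actual swFOCE protocol.
import csv
import serial  # pyserial

def load_readings(path):
    """Map each instrument name to a list of pre-recorded readings."""
    readings = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):          # e.g. "ph_sensor,7.82"
            readings.setdefault(row[0], []).append(row[1])
    return readings

def serve(port="/dev/ttyUSB0", data_file="readings.csv"):
    readings = load_readings(data_file)
    counters = {name: 0 for name in readings}
    with serial.Serial(port, 9600, timeout=1) as link:
        while True:
            query = link.readline().decode().strip()   # blocks up to 1 s
            if not query.endswith("?"):
                continue                               # ignore malformed queries
            name = query[:-1]
            if name in readings:
                i = counters[name] % len(readings[name])  # cycle through data
                counters[name] += 1
                link.write((readings[name][i] + "\r\n").encode())
            else:
                link.write(b"ERR unknown instrument\r\n")
```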
662

Programming Language Fragmentation and Developer Productivity: An Empirical Study

Krein, Jonathan L. 10 February 2011 (has links) (PDF)
In an effort to increase both the quality of software applications and the efficiency with which applications can be written, developers often incorporate multiple programming languages into software projects. Although language specialization arguably introduces benefits, the total impact of the resulting language fragmentation (working concurrently in multiple programming languages) on developer performance is unclear. For instance, developers may solve problems more efficiently when they have multiple language paradigms at their disposal. However, the overhead of maintaining efficiency in more than one language may outweigh those benefits. This thesis represents a first step toward understanding the relationship between language fragmentation and programmer productivity. We address that relationship within two different contexts: 1) the individual developer, and 2) the overall project. Using a data-centered approach, we 1) develop metrics for measuring productivity and language fragmentation, 2) select data suitable for calculating the needed metrics, 3) develop and validate statistical models that isolate the correlation between language fragmentation and individual programmer productivity, 4) develop additional methods to mitigate threats to validity within the developer context, and 5) explore limitations that need to be addressed in future work for effective analysis of language fragmentation within the project context using the SourceForge data set. Finally, we demonstrate that within the open source software development community, SourceForge, language fragmentation is negatively correlated with individual programmer productivity.
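The abstract does not define its metrics, but the kind of analysis it describes can be sketched. The toy Python sketch below pairs a naive fragmentation measure (distinct languages a developer committed to in a month) with a naive productivity measure (commits that month) and checks their correlation; the thesis's actual metrics and statistical models are more involved than this.

```python
# Hypothetical sketch of the kind of analysis the thesis describes: pair
# a simple per-developer fragmentation measure with a simple productivity
# measure, then check the correlation. Not the thesis's actual metrics.
from statistics import correlation  # Python 3.10+

# (developer, month, language, commits) — toy stand-in for repository data
log = [
    ("ann", "2010-01", "C", 30), ("ann", "2010-02", "C", 28),
    ("ann", "2010-02", "PHP", 5), ("bob", "2010-01", "Java", 22),
    ("bob", "2010-01", "Perl", 9), ("bob", "2010-02", "Java", 25),
]

frag, prod = [], []
for dev, month in {(d, m) for d, m, _, _ in log}:
    rows = [r for r in log if r[0] == dev and r[1] == month]
    frag.append(len({r[2] for r in rows}))      # languages touched
    prod.append(sum(r[3] for r in rows))        # commits made
print(correlation(frag, prod))                  # naive, confounded estimate
```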
663

Using Hard Macros to Accelerate FPGA Compilation for Xilinx FPGAs

Lavin, Christopher Michael 22 January 2012 (has links) (PDF)
Field programmable gate arrays (FPGAs) offer an attractive compute platform because of their highly parallel and customizable nature, in addition to their potential to be reconfigured into almost any desired circuit. However, compilation time (the time it takes to convert user design input into a functional implementation on the FPGA) has been a growing problem and is stifling designer productivity. This dissertation presents a new approach to FPGA compilation that follows the software compilation model more closely than that of the application-specific integrated circuit (ASIC). Instead of re-compiling every module in the design for each invocation of the compilation flow, pre-compiled modules are "linked" in the final stage of compilation. These pre-compiled modules are called hard macros and contain the physical information necessary to implement a module or building block of a design. By assembling hard macros together, a complete and fully functional implementation can be created within seconds. This dissertation describes the process of creating a rapid compilation flow based on hard macros for Xilinx FPGAs. First, RapidSmith, an open source framework that enabled the creation of custom CAD tools for this work, is presented. Second, HMFlow, the hard macro-based rapid compilation flow, is described and presented as tuned to compile Xilinx FPGA designs as fast as possible. Finally, several modifications to HMFlow are made such that it produces circuits with clock rates above 75% of Xilinx-produced implementations while compiling more than 30X faster than the Xilinx tools.
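A conceptual sketch of why hard macro "linking" is fast: each macro carries its own pre-computed relative placement, so assembly reduces to translating anchor coordinates rather than re-running placement and routing. The Python below illustrates the idea only; it is not the RapidSmith or HMFlow API.

```python
# Conceptual sketch of hard macro linking: a macro's cells keep placements
# relative to an anchor, so instantiation is a coordinate translation.
# This mirrors the idea in HMFlow but is not its actual implementation.
from dataclasses import dataclass

@dataclass
class HardMacro:
    name: str
    cells: dict  # cell name -> (dx, dy) placement relative to the macro anchor

def place(macro, instance, anchor):
    """Absolute placements for one instance: offset the pre-compiled ones."""
    ax, ay = anchor
    return {f"{instance}/{cell}": (ax + dx, ay + dy)
            for cell, (dx, dy) in macro.cells.items()}

# One pre-compiled adder macro, stamped down twice at different anchors;
# no placement or routing is re-run for either instance.
adder = HardMacro("add32", {"lut0": (0, 0), "lut1": (0, 1), "carry": (1, 0)})
design = {}
design.update(place(adder, "adder_a", (10, 20)))
design.update(place(adder, "adder_b", (10, 40)))
print(design["adder_b/carry"])  # (11, 40)
```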
664

Analysis and Characterization of Author Contribution Patterns in Open Source Software Development

Taylor, Quinn Carlson 02 March 2012 (has links) (PDF)
Software development is a process fraught with unpredictability, in part because software is created by people. Human interactions add complexity to development processes, and collaborative development can become a liability if not properly understood and managed. Recent years have seen an increase in the use of data mining techniques on publicly-available repository data with the goal of improving software development processes, and by extension, software quality. In this thesis, we introduce the concept of author entropy as a metric for quantifying interaction and collaboration (both within individual files and across projects), present results from two empirical observational studies of open-source projects, identify and analyze authorship and collaboration patterns within source code, demonstrate techniques for visualizing authorship patterns, and propose avenues for further research.
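Author entropy lends itself to a worked example. The sketch below computes Shannon entropy over each author's share of a file's lines, which is the natural reading of the metric as the abstract describes it: a file written by one author scores 0, and an even split scores highest.

```python
# A minimal sketch of author entropy: Shannon entropy over each author's
# share of a file's lines. Sole authorship scores 0; an even split among
# authors scores highest.
from math import log2

def author_entropy(lines_by_author):
    """Shannon entropy of authors' shares of a file's lines."""
    total = sum(lines_by_author.values())
    shares = [n / total for n in lines_by_author.values() if n]
    return -sum(p * log2(p) for p in shares) or 0.0  # `or` avoids -0.0

print(author_entropy({"alice": 200}))               # 0.0  (sole author)
print(author_entropy({"alice": 100, "bob": 100}))   # 1.0  (even two-way split)
```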
665

Open Innovation Strategy: Open platform-based digital mapping; as tools for value creation and value capture : Case study of OpenStreetMap and Google Maps

William, Jeffry Leonardo, Wijaya, Mochamad Rifky January 2017 (has links)
Open innovation has been rising in popularity as an alternative to the traditional model by which organizations enhance innovation in their products or services. In the past, the innovation process was time-consuming and costly; it has now become significantly more efficient and effective, supported by advances in today's IT such as the Internet, cloud computing, and big data. Open innovation has changed the source of innovation, from closed internal R&D to full use of collaboration with consumers. The decision to shift towards an open innovation strategy rests on several factors, including motivation, financial direction, and the preference for innovation strategies and business models that fit the organization's core strategy. This research studied the relationships among these factors and their effects, and examined how an IT organization creates and captures value by opening its product platform. The thesis analyzes the open innovation approach in open digital navigation platforms, featuring two platforms as case studies: Google Maps and OpenStreetMap. The investigation emphasized the use of open innovation strategy in building each platform, with crowdsourcing and open source software as the objects highlighted in the research. The data was collected from secondary sources. The findings suggest that crowdsourcing and open source software are the main open innovation strategies implemented in IT digital mapping platforms to create and capture value. While both platforms practice these strategies, circumstances surrounding the internal aspects of each organization (motivation, financial direction, and business strategy) affect how the strategies are applied, and the results of implementation differ according to the preferred business model. The results of this research suggest that a non-profit organization tends to use open innovation to improve the value of its product through consumer collaboration, while a for-profit organization adopts open innovation to generate an additional pool of revenue through customers' feedback and input data. Open innovation leads to the creation of a new business model as the foundation of innovation.
666

SOCIAL MEDIA INTELLIGENCE (SOCMINT) INVESTIGATIVE FRAMEWORK AS A HUMAN TRAFFICKING DETERRENT TOOL

Ana P Slater (17363026) 09 November 2023 (has links)
Open-source intelligence is utilized to identify individuals and compare changes in social media profiles and content. The proliferation of social media platforms and apps has facilitated the creation, distribution, and consumption of material related to human trafficking. Social media and internet service providers are not obligated to monitor users for trafficking-related activities or content.

However, an increase in minors joining social media leads to a rise in predatory activity. With the escalation of predatory behavior, research can focus on communication patterns, grooming, and victim profiles targeted by criminals. Technology has been developed to identify biometric points, aiding the identification of victims and criminals. Open-source intelligence is just one step toward gathering information about victims and criminals. It can be utilized throughout the investigative process to prevent human trafficking and related crimes.

This research employs open-source intelligence to provide investigators, law enforcement, and government agencies with preventative solutions for this global issue. The study focuses on extracting, collecting, and analyzing social media and OSINT, specifically social media intelligence (SOCMINT). Classification patterns were identified, and suspicious behavior indicative of human trafficking was detected using the JAPAN principle approach, reducing information overload.

Additionally, the research introduced a standardized investigation framework based on gathered data. This framework demonstrated the effectiveness of selected SOCMINT tools in enhancing human trafficking investigations. The study emphasizes the need for adaptive tools in SOCMINT, complemented by innovative approaches, to strengthen law enforcement efforts in deterring human trafficking.
667

Towards an understanding of OSS ecosystem health : Health characteristics and the benefits and barriers of their digital evaluation tools / Mot en förståelse av OSS ekosystemhälsa : Hälsoegenskaper och fördelarna och hindren med deras digitala utvärderingsverktyg

Ozaeta-Arce, Alexander January 2020 (has links)
For collaborations between organisations and open source software (OSS) ecosystems to be fruitful and sustainable, maintainers need to understand whether, and how, OSS ecosystem health can be evaluated effectively. Understanding how OSS maintainers characterise ecosystem health, and how they evaluate these health characteristics using digital evaluation tools, is worth analysing because it can give insight into how ecosystem health is evaluated in practice, which health aspects can be evaluated with the help of digital tools, and what barriers exist in the evaluation process. This qualitative study is based on semi-structured interviews and was conducted to answer two research questions on this topic. The interview answers were transcribed, coded, and then analysed so that conclusions could be drawn. The research attempts to broaden the academic perspective on how ecosystem maintainers view health, how digital health-evaluation tools can help maintainers understand the state of their ecosystem's health, and what barriers exist. It became clear during the research that characterising ecosystem health is incredibly difficult, since the answer may differ in many ways depending on the nature of the project, where the project is in its life cycle, and who is asking the questions. Two views of the definition of ecosystem health are presented, one revolving around longevity and the other around an ecosystem life-cycle perspective. Furthermore, diversity, governance, activity, and licensing seem to be the health characteristics maintainers find most important for ecosystem health evaluation. Of these, tools such as the ones offered by CHAOSS seem somewhat geared towards assessing activity, licensing, and diversity. Saving time and finding trends when evaluating health are examples of how tools help maintainers; however, barriers exist for maintainers of smaller or younger projects who have not practiced health evaluation for very long. Finally, another barrier is the amount of additional context and human judgment needed when using tools for health evaluation.
668

The State of Software Diversity in the Software Supply Chain of Ethereum Clients

Jönsson, Noak January 2022 (has links)
The software supply chain constitutes all the resources needed to produce a software product. A large part of this is the use of open-source software packages. Although the use of open-source software makes it easier for vast numbers of developers to create new products, they all become susceptible to the same bugs or malicious code introduced in components outside of their control. Ethereum is a vast open-source blockchain network that aims to replace several functionalities provided by centralized institutions. Several software clients are independently developed in different programming languages to maintain the stability and security of this decentralized model. In this report, the software supply chains of the most popular Ethereum clients are cataloged and analyzed. The dependency graphs of Ethereum clients developed in Go, Rust, and Java are studied; these clients are Geth, Prysm, OpenEthereum, Lighthouse, Besu, and Teku. To do so, their dependency graphs are transformed into a unified format, and quantitative metrics are used to depict the software supply chain of the blockchain. The results show a clear difference in the size of the software supply chain required for the execution layer and the consensus layer of Ethereum, and varying degrees of software diversity are present in the studied ecosystem. For the Go clients, 97% of Geth's dependencies are also in the supply chain of Prysm. The Java clients Besu and Teku share 69% and 60% of their dependencies, respectively. The Rust clients show much more diversity, with only 43% and 35% of OpenEthereum's and Lighthouse's respective dependencies being shared. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
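The overlap figures reported above (e.g., 97% of Geth's dependencies also appearing in Prysm's supply chain) amount to a directed set overlap, which is easy to sketch once the dependency graphs are in a unified format. The package names below are illustrative placeholders, not the clients' actual dependency lists.

```python
# A sketch of the directed overlap measure behind figures like "97% of
# Geth's dependencies are also in Prysm's supply chain". Package names
# are illustrative placeholders.
def overlap(a, b):
    """Fraction of a's dependencies that also appear in b's."""
    return len(a & b) / len(a)

geth = {"golang.org/x/crypto", "github.com/golang/snappy", "golang.org/x/sys"}
prysm = {"golang.org/x/crypto", "github.com/golang/snappy",
         "golang.org/x/sys", "google.golang.org/grpc"}
print(f"{overlap(geth, prysm):.0%} of Geth's dependencies are in Prysm's")
```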
669

Upgrading a recording solution for streamed video : Improvement and evaluation of video quality for remote inspection / Uppgradering av inspelningslösning för strömmad video : Förbättring och utvärdering av videokvalitet för distansinspektion

Manneby, Olof January 2023 (has links)
The project described in this report is the modification of the video conferencing software Jitsi Meet, whose open-source code allows for customization for the purpose of video recording. The received video stream that constitutes the conversation between two participants is saved to disk, to then allow examination and analysis of the session's video quality. To meet the needs of this quality control, a recording of the received video with clarity and composition comparable to the original was required. Various approaches to extracting the video stream were evaluated. Of the three approaches that presented themselves, the development of a recording function on the client side was chosen: a solution that uses the user's web browser to perform the task. Jitsi Meet is built on standardized technologies for web-based media handling, which enabled the use of adjacent methods and tools in the development of the recording solution. The resulting recording solution forms one part of a two-sided system, where the modified web client acts as a receiver for a particular mobile application's video stream. Individual frames from the devices' video recordings were compared via their structural similarity and noise analysis, where differences in video quality before and after streaming revealed themselves through measured values. Study of the resulting graphs indicates that events such as changing bit rates, missing frames, and other deviations can be identified with the help of this measurement setup. The project has thus achieved its goal of producing a recording solution for streamed video that can be used in video quality control. This provides good conditions for continued work on evaluating the quality of video calls.
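The per-frame comparison described above can be sketched with standard image metrics. The Python below scores a received frame against the original using SSIM (structural similarity) and PSNR (a noise measure) from scikit-image; the file names are placeholders, and the thesis's exact measurement setup is not specified in the abstract.

```python
# A minimal sketch of the per-frame comparison: score a received frame
# against the original with SSIM and PSNR. File names are placeholders.
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

sent = rgb2gray(imread("frame_sent_0420.png"))      # frame before streaming
received = rgb2gray(imread("frame_recv_0420.png"))  # same frame after streaming

# rgb2gray yields floats in [0, 1], hence data_range=1.0
ssim = structural_similarity(sent, received, data_range=1.0)
psnr = peak_signal_noise_ratio(sent, received, data_range=1.0)
print(f"SSIM {ssim:.3f}  PSNR {psnr:.1f} dB")  # dips flag dropped frames, bitrate shifts
```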
670

Can Developer Data Predict Vulnerabilities? : Examining Developer and Vulnerability Correlation in the Kibana Project / Kan Utvecklardata Förutse Sårbarheter? : Studie om Korrelation Mellan Utvecklare och Sårbarheter i Kibanas Källkod

Lövgren, Johan January 2023 (has links)
Open-source software is often chosen with the expectation of increased security [1]. The transparency and peer review of open development offer advantages in the form of more secure code. However, developing secure code remains a challenging task that requires more than just expertise. Even with adequate knowledge, human errors can occur, leading to mistakes and overlooked issues that may result in exploitable vulnerabilities. It is reasonable to assume that not all developers introduce bugs or vulnerabilities at random, since each developer brings unique experience and knowledge to the development process. The objective of this thesis is to investigate a method for identifying high-risk developers who are more likely to introduce vulnerabilities or bugs, which can be used to predict potential locations of bugs or vulnerabilities in the source code based on who wrote the code. Metrics related to developers' code churn, code complexity, bug association, and experience were collected in a case study of the open-source project Kibana. The findings provide empirical evidence that developers who write more complex code and have greater project activity pose a higher risk of introducing vulnerabilities and bugs. Developers who have introduced vulnerabilities also tend to exhibit higher code churn, code complexity, and bug association than those who have not. However, the metrics employed in this study were not sufficiently discriminative to identify developers with a higher risk of introducing vulnerabilities or bugs per commit. Nevertheless, the results of this study serve as a foundation for further research in this area.
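One of the metric families named above, code churn, can be mined directly from version history. The sketch below tallies per-author churn (lines added plus deleted) from git; attributing bugs and vulnerabilities to commits, which the study also requires, is not shown here.

```python
# A sketch of one metric family from the study: per-author code churn
# mined from git history via `git log --numstat`.
import subprocess
from collections import Counter

def churn_by_author(repo):
    out = subprocess.run(
        ["git", "-C", repo, "log", "--numstat", "--pretty=format:@%ae"],
        capture_output=True, text=True, check=True).stdout
    churn, author = Counter(), None
    for line in out.splitlines():
        if line.startswith("@"):
            author = line[1:]                   # commit header: author email
        elif line:
            added, deleted, _ = line.split("\t", 2)
            if added != "-":                    # "-" marks binary files
                churn[author] += int(added) + int(deleted)
    return churn

print(churn_by_author(".").most_common(5))      # top five authors by churn
```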
