231

Towards Guidelines for Conducting Software Process Simulation in Industry

bin Ali, Nauman January 2013
Background: Since the 1950s, explicit software process models have been used for planning, executing and controlling software development activities. To overcome the limitations of static models in capturing the inherent dynamism of software development, Software Process Simulation Modelling (SPSM) was introduced in the late 1970s. SPSM has been used to address various challenges, e.g. estimation, planning and process assessment. The simulation models developed over the years have varied in their scope, purpose, approach and application domain. However, there is a need to aggregate the evidence regarding the usefulness of SPSM for achieving its intended purposes.
Objective: This thesis aims to facilitate the adoption of SPSM in industrial practice by exploring two directions. Firstly, it aims to establish the usefulness of SPSM for its intended purposes, e.g. for planning, for training and as an alternative way to study real-world software (industrial and open source) development. Secondly, it aims to define and evaluate a process for conducting SPSM studies in industry.
Method: Two systematic literature reviews (SLRs), a literature review, a case study and an action research study were conducted. A literature review of existing SLRs was done to identify strategies for selecting studies. The resulting study selection process was used in an SLR to capture and aggregate evidence regarding the usefulness of SPSM. Another SLR was used to identify existing process descriptions of how to conduct an SPSM study. The consolidated process and associated guidelines identified in this review were used in an action research study to develop a simulation model of the testing process at a large telecommunication vendor. The action research was preceded by a case study to understand the testing process at the company.
Results: A study selection process based on the strategies identified in the literature was proposed. It was found to systematize selection and to support inclusiveness with reasonable additional effort in an SLR of the SPSM literature. The SPSM studies identified in the literature scored poorly on rigor and relevance criteria and lacked evaluation of SPSM for its intended purposes. Lastly, based on the literature, a six-step process for conducting an SPSM study was used to develop a System Dynamics model of the testing process for training purposes at the company.
Conclusion: The findings identify two potential directions for facilitating SPSM adoption. First, learning from other disciplines that have been doing simulation for a longer time: it was evident how similar the consolidated process for conducting an SPSM study is to the process used in simulation in general. Second, the existing work on SPSM can at best be classified as a strong "proof of concept" that SPSM can be useful in real-world software development. Thus, there is a need to evaluate and report the usefulness of SPSM for its intended purposes with scientific rigor.
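To give a concrete flavour of what a small System Dynamics model of a testing process can look like, here is a minimal sketch in Python using simple Euler integration. The stocks, flows and rate constants are invented for illustration; this is not the model developed in the thesis.

```python
# Toy System Dynamics model of a testing process (Euler integration).
# Stocks: features awaiting test, open defects; flows: test rate, defect inflow, fix rate.
# All names and rate constants below are illustrative assumptions, not the thesis model.

def simulate(weeks=20, dt=0.25):
    untested = 100.0      # stock: feature units waiting to be tested
    open_defects = 0.0    # stock: defects found but not yet fixed
    test_capacity = 8.0   # feature units tested per week
    defect_yield = 0.3    # defects found per tested feature unit
    fix_capacity = 3.0    # defects fixed per week

    t, history = 0.0, []
    while t < weeks:
        test_rate = min(test_capacity, untested / dt)   # cannot test more than remains
        defect_inflow = defect_yield * test_rate
        fix_rate = min(fix_capacity, open_defects / dt)

        untested += -test_rate * dt
        open_defects += (defect_inflow - fix_rate) * dt
        t += dt
        history.append((t, untested, open_defects))
    return history

if __name__ == "__main__":
    for week, untested, defects in simulate():
        if week == int(week):                            # print once per simulated week
            print(f"week {int(week):2d}: untested={untested:6.1f}  open defects={defects:5.1f}")
```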
232

Experimental Evaluation of Tools for Mining Test Execution Logs

Parmeza, Edvin January 2021
Data and software analysis tools are widely used in software industry environments. They are powerful tools that help generate testing, web browsing and mail server statistics in different formats. These statistics are also known as logs or log files, and they can be presented in different formats, textually or visually, depending on the tool that processes them. Although these tools have been used in the software industry for many years, software developers and testers still lack a full understanding of them. The literature study shows that related work on test execution log analysis is rather limited. Studies evaluating a broader set of features related to test execution logs are missing from the existing literature; those that exist usually focus on a single feature (e.g., fault localization algorithms). One reason for this might be a lack of experience or training. Some practitioners are also not deeply involved with the testing tools their companies use, so lack of time and involvement might be another reason why there are only a few experts in this field who understand these tools well and can locate an error quickly. This makes the need for more research on this topic even more important.
In this thesis report, we present a case study focused on the evaluation of tools used for analyzing test execution logs. Our work relies on three different studies:
- Literature study
- Experimental study
- Expert-based survey study
To get familiar with the topic, we started with the literature study. It helped us investigate the current tools and approaches that exist in the software industry. It was an important but also difficult step, since it was hard to find research papers relevant to our work: our topic was very specific, while many papers perform only a general investigation of different tools. That is why, to find relevant papers, our literature search used specific digital libraries, terms and keywords, and criteria for literature selection. In the next step, we experimented with two specific tools in order to investigate their capabilities and the features they provide for analyzing execution logs. The tools we worked with are Splunk and Loggly; they were the only tools available to us that conformed to the demands of the thesis and contained the features we needed to make our work more complete. The last part of the study was a survey sent to different experts. A total of twenty-six practitioners responded, and their answers gave us a lot of useful information to enrich our work.
The contributions of this thesis are:
1. The analysis of the findings and results derived from the three studies, in order to characterize the performance of the tools, the fault localization techniques they use, and the test failures that occur during test runs, and to conclude which tool is better in these respects.
2. Proposals on how to further improve work on log analysis tools; we explain what is additionally needed to better understand these tools and to obtain correct results during testing.
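As a rough illustration of what mining a test execution log involves, the sketch below parses a hypothetical log format, aggregates per-test failure counts and durations, and ranks the flakiest tests. The line layout, field names and file name are assumptions made for the example; Splunk and Loggly expose comparable aggregations through their own query languages rather than hand-written parsers.

```python
import re
from collections import Counter

# Assumed line format (illustrative only):
#   2021-03-01T10:15:02 TEST=LoginTest STATUS=FAIL DURATION=1.73s
LINE = re.compile(r"TEST=(?P<name>\S+)\s+STATUS=(?P<status>PASS|FAIL)\s+DURATION=(?P<dur>[\d.]+)s")

def summarize(log_path="test_execution.log", top=10):
    failures, runs, total_time = Counter(), Counter(), Counter()
    with open(log_path) as fh:
        for line in fh:
            m = LINE.search(line)
            if not m:
                continue                      # skip lines that are not test results
            name = m.group("name")
            runs[name] += 1
            total_time[name] += float(m.group("dur"))
            if m.group("status") == "FAIL":
                failures[name] += 1
    # Rank tests by failure ratio -- a crude stand-in for the statistics the tools report.
    ranked = sorted(runs, key=lambda n: failures[n] / runs[n], reverse=True)
    for name in ranked[:top]:
        print(f"{name:30s} runs={runs[name]:4d} "
              f"fail%={100 * failures[name] / runs[name]:5.1f} "
              f"avg_dur={total_time[name] / runs[name]:.2f}s")

if __name__ == "__main__":
    summarize()
```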
233

Augmenting Code Review with developers’ Affect analysis : Systematic Mapping Study and Survey

Mada, Vaishnavi January 2021
No description available.
234

Databasmodeller och tidsseriedata : En jämförelse av svarstider / Database models and time series data : A comparison of response times

Landin, Sandra January 2020
The goal of this project has been to study how different database models perform when fetching time series data. In the study, a time series database and a database of another type are compared. The study was conducted according to the Design Science Research methodology. This method is well suited for system development and is based on creating an artifact, a prototype. Using this artifact, experiments can be performed that simulate real events. This study simulates values that correspond to temperatures at specific times, registered by a sensor and sent to a database. For this purpose, a small single-board computer of the type Raspberry Pi has been used, as it is common in Internet of Things applications. Two database models have been set up as artifacts: TinyDB and InfluxDB. TinyDB is of the NoSQL type and InfluxDB is a time series database. The databases have been filled with data using a program that generates a random value and a timestamp. The experiments that compare the two models measure how long the response times are for different data retrievals, while the computer's processor activity is observed. Both the implementation of the artifacts and the experiments were carried out using programs written in Python. The results of all experiments show the advantage of the time series database: it is faster and it also burdens the CPU less for the retrievals made in this study. Future work may involve testing more database models, larger amounts of data, and other hardware and software.
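A condensed sketch of the kind of experiment described follows: random temperature readings with timestamps are written to TinyDB and to InfluxDB, and a range query against each is timed. It assumes the tinydb and influxdb Python packages and a local InfluxDB 1.x server; the measurement, field and file names are illustrative, not taken from the thesis.

```python
import random
import time
from datetime import datetime, timedelta, timezone

from tinydb import TinyDB, Query          # pip install tinydb
from influxdb import InfluxDBClient       # pip install influxdb  (InfluxDB 1.x client)

def iso(ts):
    return ts.strftime("%Y-%m-%dT%H:%M:%SZ")

N = 1000
start = datetime(2020, 1, 1, tzinfo=timezone.utc)
samples = [(start + timedelta(seconds=i), round(random.uniform(15.0, 30.0), 2)) for i in range(N)]

# --- TinyDB: document (NoSQL) store backed by a JSON file ---
tiny = TinyDB("sensor.json")
tiny.insert_multiple({"ts": iso(ts), "temp": temp} for ts, temp in samples)

q = Query()
t0 = time.perf_counter()
rows = tiny.search(q.ts >= iso(samples[100][0]))      # ISO strings sort chronologically
t_tiny = time.perf_counter() - t0

# --- InfluxDB: time series database (assumes a local 1.x server on port 8086) ---
influx = InfluxDBClient(host="localhost", port=8086, database="sensors")
influx.create_database("sensors")
influx.write_points([
    {"measurement": "temperature", "time": iso(ts), "fields": {"value": temp}}
    for ts, temp in samples
])

t0 = time.perf_counter()
result = influx.query(f"SELECT value FROM temperature WHERE time >= '{iso(samples[100][0])}'")
t_influx = time.perf_counter() - t0

print(f"TinyDB:   {len(rows)} rows in {t_tiny:.4f} s")
print(f"InfluxDB: {len(list(result.get_points()))} rows in {t_influx:.4f} s")
```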
235

Which Abilities and Attitudes Matter Most? : Understanding and Investigating Capabilities in Industrial Agile Contexts

Vishnubhotla, Sai Datta January 2019
Background: Over the past decades, advancements in the software industry and the prevalence of Agile Software Development (ASD) practices have increased the prominence of individual and interpersonal skills. The human-centric nature of ASD practices makes it imperative to identify and assign capable professionals to a team. While the capabilities of professionals influence team performance and pave the way to a project's success, the area of capability measurement in ASD remains largely unexplored.
Objectives: This thesis aims to aggregate evidence from both the state of the art and the state of practice to understand capability measurement in ASD. Further, to support research and practice on composing agile teams, this thesis also investigates the effects of capability measures on team-level aspects (team performance and team climate) within industrial contexts.
Method: A mixed-methods approach was employed to address the thesis' objectives. A Systematic Literature Review (SLR) and an industrial survey were conducted to identify and gather evidence on individual and team capability measures pertinent to the ASD context. A case study and another industrial survey were carried out to provide insights and extend support towards agile team composition.
Results: Our SLR results showed that a major portion of earlier studies discussed capability measures related to affective, communication, interpersonal and personal aspects. Results from our survey aligned with these findings: measures associated with the aforementioned aspects were widely known to practitioners and were also perceived by them as highly relevant in ASD contexts. Our case study, conducted at a small organization, revealed multiple professional capability measures that affect team performance, whereas our survey, conducted at a large organization, identified an individual's ability to easily get along with other team members (the agreeableness personality trait) as having a significant positive influence on that person's perceived level of team climate.
Conclusion: In this thesis, the empirical evidence gathered by employing mixed methods and examining diverse organizational contexts contributed towards a better understanding of capability measurement in ASD. To extend support towards team composition in ASD, this thesis presents two approaches. The first is based on developing an agile support tool that coordinates capability assessments and team composition. The second is based on establishing team climate forecasting models that can provide insights into how the perceived level of climate within a team would vary based on its members' personalities. However, in order to improve both approaches, it is certainly necessary to examine the effects of diverse capability measures.
236

Webbläsartillägg för trovärdighetsbedömningar av källor : baserat på bibliometrisk data tillhandahållet av Wikidata / Browser extension for credibility assessment of sources : based on bibliometric data provided by Wikidata

Henriksson, Andreas January 2020
Over the past few years, a variety of tools that assess the credibility of links have been attempted. These tools usually share a common problem in their assessment: the human. Having human opinion as part of the credibility score makes the assessment easy to manipulate and therefore not credible. In this study, a new tool is developed in which the credibility assessment itself uses bibliometric data from the Danish Bibliometric Research Indicator level available on Wikidata. This solution therefore excludes easily manipulated inputs such as individuals' opinions. The problem studied is the link between a URL and a credibility rating, which is approached in three different ways. These approaches are compared in terms of speed and benefit: speed was measured as the time it took to retrieve all ratings for a page, and benefit was measured as the number of retrieved ratings per page. Linking domain names to Wikipedia pages was found to generate the greatest number of ratings, while linking domain names directly to Wikidata had a slight advantage in time. The third method, which used SPARQL, was expected in advance to be the best but turned out to have practical limitations that made it unusable. Furthermore, it was confirmed that the link between URL and credibility rating on Wikidata could be established, and a tool was therefore developed that excludes easily manipulated methods from the credibility assessment.
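The study compared three linking methods; the sketch below illustrates only a rough take on the SPARQL variant against the public Wikidata query endpoint. Matching a domain via the official-website property (P856) is an assumption about how the domain-to-item linking could be done, and the rating property is a placeholder: the actual Wikidata property for the Danish BFI level would have to be looked up.

```python
import requests

WDQS = "https://query.wikidata.org/sparql"
P_OFFICIAL_WEBSITE = "P856"   # Wikidata property: official website (assumed linking key)
P_BFI_LEVEL = "P0000"         # placeholder -- substitute the real BFI-level property ID

def ratings_for_domain(domain: str):
    """Fetch candidate credibility ratings for a domain via the Wikidata SPARQL endpoint."""
    query = f"""
    SELECT ?item ?rating WHERE {{
      ?item wdt:{P_OFFICIAL_WEBSITE} ?site .
      FILTER(CONTAINS(STR(?site), "{domain}"))
      ?item wdt:{P_BFI_LEVEL} ?rating .
    }} LIMIT 10
    """
    resp = requests.get(
        WDQS,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "credibility-extension-demo/0.1"},
    )
    resp.raise_for_status()
    return [b["rating"]["value"] for b in resp.json()["results"]["bindings"]]

if __name__ == "__main__":
    print(ratings_for_domain("example.org"))
```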
237

Nodgrafer i gruvsystem / Nodegraphs for mine tunnel complexes

Dahlén, Christina, Olsson, Alex, Lindgren, Jens, Lindström, Ebba, Häggström, Isak, Hellqvist, Viktor, Jonsson, Jonathan, Lessongkram, Techit January 2020
This bachelor's thesis report describes a project carried out by eight students in the Master of Science programmes in Software Engineering and Computer Engineering at Linköping University, as part of the course Bachelor Project in Software Development (TDDD96). The report summarizes the work the students carried out during the spring term of 2020, while the course was running. Over the course of the project, the students worked on delivering a final product that generates a node graph describing the mine shafts in LKAB's mine tunnel system for the customer Combitech. The conclusions explain how the project group managed to create value for the customer by producing a general tool that can be used with varied input data, how it identified process-related, teamwork-related and technical lessons learned, and that the group found no support in continuously revising a system anatomy. The report also includes individual studies written by the group members, each analysing a topic of their choice related to the project.
238

Vision Control – Visualiserar data som aldrig förr / Vision Control – Visualizes data like never before

Ekroth, Robin, Frost, Agnes, Granström, Isak, Hägglund, Martin, Kamsvåg, Ivar, Larsson-Kapp, Ernst, Lundin, Hugo, Sundberg, Adam January 2022
No description available.
239

E-frikort : Modernisering av e-tjänst inom hälso- och sjukvården / E-frikort : Modernization of an e-service in healthcare

Kadamani, Ibrahim January 2022
The Swedish digitization journey means that much of the work previously done manually can now be carried out via digital services. This saves resources for the Swedish state and makes it easier for the population to manage and administer their everyday lives. An example of this is e-frikort, which automatically handles outpatient care visits and ensures that everyone receives their exemption card under the high-cost protection scheme. Prior to the digitization of e-frikort, the high-cost protection was handled manually: the patient had to keep receipts and then register them with the caregiver. The company CGI, which developed e-frikort, has decided to update the service and change the programming language. CGI wants to keep the familiar interface that users are used to, but also update it to a more modern look. The older version of e-frikort is analyzed and translated into the new programming language. A new statistics page is implemented to display statistics on e-frikort within different regions. Since the service is developed for healthcare staff, user tests were performed with that target group, against both the old and the new interface. The results of the user tests show that the new service still resembles the old one. Users feel familiar with the interface but express dissatisfaction that it lacks graphical elements. The user tests also show that several of the functions the service offers are in many cases not used. As the service is still under development and not yet at a stage where a usability evaluation can be conducted, this is seen as one of the future goals on the way to meeting the WCAG requirements. Furthermore, it can be discussed whether the service could be simplified for specific areas of use, depending on the purpose of the care staff, and whether the service might also be offered to the public.
240

Workflow Management Service based on an Event-driven Computational Cloud

Chen, Ziwei January 2013
The event-driven computing paradigm, also known as trigger computing, is widely used in computer technology. Computer systems such as database systems introduce trigger mechanisms to reduce repetitive human intervention. With the growing complexity of industrial use case requirements, independent and isolated triggers can no longer fulfil the demands. Fortunately, an independent trigger can be triggered by the result produced by other triggered actions, which enables the modelling of complex use cases; the chains or graphs consisting of such triggers are called workflows. Therefore, workflow construction and manipulation become a must for implementing complex use cases. As the development platform of this study, VISION Cloud is a computational storage system that executes small programs called storlets as independent computation units in the storage. Similar to the trigger mechanism in database systems, storlets are triggered by specific events and then execute computations. As a result, one storlet can also be triggered by the result produced by other storlets; these dependencies are called connections between storlets. Due to the growing complexity of use case requirements, an urgent demand is to have storlet workflow management supported in the VISION system. Furthermore, because of the connections between storlets in VISION, problems such as non-termination triggering and unexpected overwriting appear as side effects. This study develops a management service that consists of an analysis engine and a multi-level visualization interface. The analysis engine checks the connections between storlets by utilizing automatic theorem proving and deterministic finite automata. The involved storlets and their connections are displayed as graphs via the multi-level visualization interface. Furthermore, the aforementioned connection problems are detected with graph theory algorithms. Finally, experimental results with practical use case examples demonstrate the correctness and comprehensiveness of the service. Algorithm performance and possible optimizations are also discussed; they lead the way for future work to create a portable framework for event-driven workflow management services.
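To illustrate one of the graph-theoretic checks described, the sketch below models storlet connections as a directed graph and uses depth-first search to flag trigger cycles, i.e. chains of storlets that could keep re-triggering each other (the non-termination problem). The graph and storlet names are invented for the example; the actual analysis engine also relies on automatic theorem proving and deterministic finite automata, which are not shown here.

```python
from typing import Dict, List

def find_trigger_cycles(connections: Dict[str, List[str]]) -> List[List[str]]:
    """Detect trigger cycles (DFS back edges) in a storlet connection graph.
    An edge A -> B means 'the output of storlet A can trigger storlet B'."""
    cycles, stack, on_stack, visited = [], [], set(), set()

    def dfs(node: str) -> None:
        visited.add(node)
        stack.append(node)
        on_stack.add(node)
        for nxt in connections.get(node, []):
            if nxt not in visited:
                dfs(nxt)
            elif nxt in on_stack:                      # back edge -> cycle found
                cycles.append(stack[stack.index(nxt):] + [nxt])
        stack.pop()
        on_stack.remove(node)

    for storlet in connections:
        if storlet not in visited:
            dfs(storlet)
    return cycles

if __name__ == "__main__":
    # Hypothetical workflow: a thumbnailer whose output re-triggers the virus scanner.
    graph = {"virus_scan": ["thumbnail", "index"], "thumbnail": ["virus_scan"], "index": []}
    for cycle in find_trigger_cycles(graph):
        print("potential non-termination:", " -> ".join(cycle))
```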
