
Using Explicit State Space Enumeration For Specification Based Regression Testing

Chakrabarti, Sujit Kumar 01 1900 (has links)
Regression testing of an evolving software system can pose significant challenges: the probability of discovering whether the latest changes have broken an existing feature must be maximised, and the testing must be done as economically as possible. API libraries are a particularly important class of software systems; such libraries typically constitute a crucial component of many larger systems. High quality requirements make it imperative to continually optimise the internal implementation of such libraries without affecting the external interface, so it is preferable to guide the regression testing by some kind of formal specification of the library. The testing problem comprises three parts: computation of test data, execution of tests, and analysis of test results. Current research mostly focuses on the first part. The objective of test data computation is to maximise the probability of uncovering bugs with as few test cases as possible. For regression testing, the test data computation problem is to select a subset of the original test suite whose execution suffices to detect bugs that may have been introduced by modifications made since the last round of testing. A variant of this problem is the regression testing of API libraries. An API is usually regression tested by making function calls in such a way that the resulting sequence of calls satisfies a test specification, which in turn embodies some notion of completeness. In this thesis, we focus on the problem of test sequence computation for the regression testing of API libraries. At the heart of our method lies the creation of a state space model of the API library, obtained by reverse engineering it through execution of the system, guided by a formal API specification. Once the state space graph is obtained, it is used to compute test sequences satisfying a given test specification.
We analyse the theoretical complexity of the test sequence computation problem and provide several heuristic algorithms for it. State space explosion is a classical problem encountered whenever one attempts to create a finite state model of a program, and our method faces this limitation too. We explore a simple and intuitive way of ameliorating the problem: reducing the size of the state vector. We develop theoretical insights into this method and present experimental results indicating its practical effectiveness. Finally, we bring all of this together in the design and implementation of a tool called Modest.
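As a rough illustration of the final step, computing test sequences from a recovered state-space graph can be sketched in Python. The toy stack API, the graph encoding, and the all-transitions coverage criterion below are illustrative assumptions, not the thesis's actual algorithms or test specifications:

```python
from collections import deque

def sequences_covering_transitions(graph, start):
    """Compute call sequences (paths from the start state) such that every
    transition in the state-space graph is exercised at least once.
    graph maps state -> list of (api_call, next_state); all states are
    assumed reachable from start."""
    # Shortest call sequence from start to every state, via BFS.
    paths = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for call, nxt in graph.get(state, []):
            if nxt not in paths:
                paths[nxt] = paths[state] + [call]
                queue.append(nxt)
    # One test sequence per transition: reach its source state, then fire it.
    sequences = []
    for state, edges in graph.items():
        for call, _ in edges:
            sequences.append(paths[state] + [call])
    return sequences

# Toy state-space model of a bounded stack API; states are stack depths 0..2.
graph = {
    0: [("push", 1)],
    1: [("push", 2), ("pop", 0)],
    2: [("pop", 1)],
}
print(sequences_covering_transitions(graph, 0))
# [['push'], ['push', 'push'], ['push', 'pop'], ['push', 'push', 'pop']]
```

Each returned sequence is a list of API calls to replay against the library; richer test specifications would replace the simple all-transitions criterion used here.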

Détection des utilisations à risque d’API : approche basée sur le système immunitaire / Detecting Risky API Usage: An Immune System Based Approach

Gallais-Jimenez, Maxime 06 1900 (has links)
No description available.

Crowd cookbooks: usando conhecimento de multidão a partir de sítios de perguntas e respostas para documentação de APIs / Crowd Cookbooks: Using Crowd Knowledge from Question-and-Answer Sites for API Documentation

Souza, Lucas Batista Leite de 23 July 2014 (has links)
Developers of reusable software elements, such as libraries, usually have the responsibility to provide comprehensive, high quality documentation to enable effective reuse of those elements. The effective reuse of libraries depends upon the quality of the API (Application Programming Interface) documentation. Well established libraries typically have comprehensive API documentation, for example in Javadocs. However, such documentation typically lacks examples and explanations, which may hinder the effective reuse of the library. StackOverflow.com (SO) is a question and answer service devoted to issues related to software development. In SO, a developer can post questions related to a programming topic, and other members of the site can provide answers to help solve the problem at hand. Despite the increasing use of SO by the software development community, the information related to a particular library is spread across the website; SO still lacks an organization of its crowd knowledge. In this dissertation, we present a semi-automatic approach that organizes the information available on SO in order to build a kind of documentation for APIs called cookbooks (recipe-oriented books). The cookbooks generated by the approach are called crowd cookbooks. To evaluate the proposed approach, cookbooks were generated for three APIs widely used by the software development community: SWT, LINQ and Qt. Desirable features that cookbooks should meet were identified, and a study with human subjects was conducted to assess to what extent the generated cookbooks meet those features. The study also made it possible to identify the usefulness perceived by the subjects of using cookbooks for learning APIs. The results showed that the cookbooks built using the proposed strategy generally meet the identified features. However, most human subjects considered that cookbooks do not have a format appropriate for learning APIs.
/ Mestre em Ciência da Computação
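The grouping step of such an approach can be sketched as follows. The post schema (`topic`, `title`, `score`), the score threshold, and the ranking rule are illustrative assumptions, not the dissertation's actual procedure:

```python
def build_cookbook(posts, min_score=1, top_n=2):
    """Group Q&A posts into cookbook 'chapters' by topic and keep the
    best-scored posts in each chapter as its 'recipes'."""
    chapters = {}
    for post in posts:
        if post["score"] >= min_score:          # drop low-quality posts
            chapters.setdefault(post["topic"], []).append(post)
    # Within each chapter, rank recipes by community score, best first.
    return {
        topic: [p["title"] for p in sorted(entries, key=lambda p: -p["score"])[:top_n]]
        for topic, entries in chapters.items()
    }

# Hypothetical posts about the SWT API, as they might be mined from SO.
posts = [
    {"topic": "SWT tables", "title": "How to sort a TableViewer?", "score": 12},
    {"topic": "SWT tables", "title": "Add checkboxes to a Table", "score": 7},
    {"topic": "SWT tables", "title": "Table flickers on resize", "score": 0},
    {"topic": "SWT layout", "title": "Center a widget with GridLayout", "score": 5},
]
print(build_cookbook(posts))
```

The real approach additionally mines topics from tags and text rather than taking them as given, and is semi-automatic, with a human curating the result.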

Explorativ studie av faktorer som påverkar framgångsrik utveckling och användning av Internet of Things-enheter : En kvalitativ intervjustudie fokuserad på informationssäkerhet och personlig integritet / Exploratory Study on Factors that Affect the Successful Deployment and Use of Internet-of-Things Devices : A Qualitative Interview Study Focused on Information Security and Personal Integrity

Engberg, Patricia January 2017 (has links)
The year is 2017, and the use of Internet-connected devices has exploded uncontrollably. Devices are being modified at a rapid pace so that they can be interconnected, with the aim of a more efficient everyday life, colloquially "cooler" gadgets, and above all increased sales of these products. As communication tools are linked together, large amounts of data accumulate on individual devices, which become vulnerable to surveillance, intrusion and takeover for purposes of which the owner may be entirely unaware. If an individual's device takes part in shutting down a server that keeps a public service running during a serious physical attack on Sweden, who is to blame? This is called a denial-of-service attack, and it is one of many potential vulnerabilities in today's society.   The Internet of Things is a new and relatively unexplored phenomenon with many open and unanswered questions; research undeniably lags a step behind. The conclusion the research articles share is that further research is needed in this area. This is alarming, since the devices are already present in everyday life. Information security and personal privacy are what is at stake, and the question is what individuals are willing to sacrifice in order to own the latest products.   The method of this bachelor's thesis is exploratory: a literature review of research articles and personal interviews conducted with relevant individuals in the field.   This thesis will not provide a definitive answer as to what should be done next. Its goal is to shed light on problem areas concerning the Internet of Things phenomenon, focusing on a description of how information security and personal privacy can affect the development and use of devices connected to the Internet of Things.
The purpose of this exploratory study is to identify and describe factors that contribute to the successful development and use of Internet of Things devices, with a focus on information security and personal privacy. The conclusions are that the influencing factors are identity verification, standards, restricted access, and user control.

Cooperative Execution of OpenCL Programs on Multiple Heterogeneous Devices

Pandit, Prasanna Vasant January 2013 (has links) (PDF)
Computing systems have become heterogeneous with the increasing prevalence of multi-core CPUs, Graphics Processing Units (GPU) and other accelerators in them. OpenCL has emerged as an attractive programming framework for heterogeneous systems. However, utilizing multiple devices in OpenCL is a challenge, as it requires the programmer to explicitly map data and computation to each device. Utilizing multiple devices simultaneously to speed up execution of a kernel is even more complex, as the relative execution time of the kernel on different devices can vary significantly. Also, after each kernel execution, a coherent version of the data needs to be established. This means that, in order to utilize all devices effectively, the programmer has to spend considerable time and effort to distribute work across all devices, keep track of modified data in these devices and correctly perform a merging step to put the data together. Further, the relative performance of a program may vary across different inputs, which means a statically determined work distribution may not work well. In this work, we present FluidiCL, an OpenCL runtime that takes a program written for a single device and uses multiple heterogeneous devices to execute each kernel. The runtime performs dynamic work distribution and cooperatively executes each kernel on all available devices. Since we consider a setup with devices having discrete address spaces, our solution ensures that execution of OpenCL work-groups on devices is adjusted by taking into account the overheads for data management. The data transfers and data merging needed to ensure coherence are handled transparently without requiring any effort from the programmer. FluidiCL also does not require prior training or profiling and is completely portable across different machines. Because it is dynamic, the runtime is able to adapt to system load. We have developed several optimizations for improving the performance of FluidiCL.
We evaluate the runtime across different sets of devices. On a machine with an Intel quad-core processor and an NVIDIA Fermi GPU, FluidiCL shows a geomean speedup of nearly 64% over the GPU, 88% over the CPU and 14% over the best of the two devices in each benchmark. In all benchmarks, the performance of our runtime comes to within 13% of the best of the two devices. FluidiCL shows similar results on a machine with a quad-core CPU and an NVIDIA Kepler GPU, with up to 26% speedup over the best of the two. We also present results considering an Intel Xeon Phi accelerator and a CPU, and find that FluidiCL performs up to 45% faster than the best of the two devices. We extend FluidiCL from a CPU–GPU scenario to a three-device setup having a quad-core CPU, an NVIDIA Kepler GPU and an Intel Xeon Phi accelerator, and find that FluidiCL obtains a geomean improvement of 6% in kernel execution time over the best of the three devices considered in each case.
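The core idea of dynamic work distribution can be sketched abstractly: each device grabs the next fixed-size chunk of work-groups as soon as it becomes free, so faster devices naturally end up executing more of the kernel. The chunk size, device speeds, and greedy policy below are illustrative assumptions, not FluidiCL's actual scheduler, which also accounts for data-transfer and merging overheads:

```python
def distribute_chunks(total_groups, chunk, device_speeds):
    """Greedy dynamic distribution: whichever 'device' frees up earliest
    takes the next chunk of work-groups. Speeds are in work-groups per
    time unit, so faster devices are assigned more work."""
    next_group = 0
    clock = {name: 0.0 for name in device_speeds}     # time each device frees up
    assigned = {name: 0 for name in device_speeds}    # work-groups per device
    while next_group < total_groups:
        dev = min(clock, key=clock.get)               # earliest-free device
        take = min(chunk, total_groups - next_group)
        clock[dev] += take / device_speeds[dev]       # time to run this chunk
        assigned[dev] += take
        next_group += take
    return assigned

# A GPU three times faster than the CPU should receive roughly 3x the work.
print(distribute_chunks(total_groups=1200, chunk=100,
                        device_speeds={"cpu": 1.0, "gpu": 3.0}))
```

Because the split emerges from observed (here, simulated) execution times rather than a static partition, the same mechanism adapts to different inputs and to system load, which is the property the abstract emphasises.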

Hardware/Software Co-Verification Using the SystemVerilog DPI

Freitas, Arthur 08 June 2007 (has links)
During the design and verification of the Hyperstone S5 flash memory controller, we developed a highly effective way to use the SystemVerilog direct programming interface (DPI) to integrate an instruction set simulator (ISS) and a software debugger in logic simulation. The processor simulation was performed by the ISS, while all other hardware components were simulated in the logic simulator. The ISS integration allowed us to filter many of the bus accesses out of the logic simulation, accelerating runtime drastically. The software debugger integration freed both hardware and software engineers to work in their chosen development environments. Other benefits of this approach include testing and integrating code earlier in the design cycle and more easily reproducing, in simulation, problems found in FPGA prototypes.

Comparative study of open source and dot NET environments for ontology development.

Mahoro, Leki Jovial 05 1900 (has links)
M. Tech. (Department of Information & Communication Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology. / Many studies have evaluated and compared the existing open-source Semantic Web platforms for ontology development. However, none of these studies has included the dot NET-based Semantic Web platforms in its empirical investigations. This study conducted a comparative analysis of open-source and dot NET-based Semantic Web platforms for ontology development. Two popular dot NET-based platforms, namely SemWeb.NET and dotNetRDF, were analyzed and compared against open-source environments including the Jena Application Programming Interface (API), Protégé, and RDF4J, also known as the Sesame Software Development Kit (SDK). Metrics such as storage mode, query support, consistency checking, and interoperability with other tools were used to compare the two categories of platforms, and five ontologies of different sizes were used in the experiments. The experimental results showed that the open-source platforms provide more facilities for creating, storing and processing ontologies than the dot NET-based tools. Furthermore, the experiments revealed that the open-source platforms Protégé and RDF4J and the dotNetRDF platform provide both a graphical user interface (GUI) and a command line interface for processing ontologies, whereas the open-source Jena and SemWeb.NET are command line platforms. Moreover, the results showed that the open-source platforms are capable of processing multiple ontology file formats, including the Resource Description Framework (RDF) and Web Ontology Language (OWL) formats, whereas the dot NET-based tools only process RDF ontologies. Finally, the experimental results indicate that the dot NET-based platforms are limited by memory, as they failed to load and query large ontologies that the open-source environments handled.
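To make concrete what all of the compared platforms have in common, a toy triple store with pattern-matching queries can be sketched in pure Python. The minimal N-Triples-like subset and the `query` helper are illustrative assumptions standing in for far richer functionality in Jena, RDF4J, Protégé or dotNetRDF:

```python
import re

# One triple per line: subject predicate object, terminated by a dot.
TRIPLE = re.compile(r'(\S+)\s+(\S+)\s+(.+?)\s*\.\s*$')

def load_ntriples(text):
    """Parse a tiny subset of N-Triples into (subject, predicate, object)
    tuples -- a toy stand-in for loading an RDF ontology."""
    triples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        m = TRIPLE.match(line)
        if m:
            triples.append(m.groups())
    return triples

def query(triples, s=None, p=None, o=None):
    """Basic pattern matching over the triple store (None is a wildcard),
    the primitive underlying SPARQL-style queries."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

data = """
<#Dog> <rdfs:subClassOf> <#Animal> .
<#Cat> <rdfs:subClassOf> <#Animal> .
<#Rex> <rdf:type> <#Dog> .
"""
triples = load_ntriples(data)
print(query(triples, p="<rdfs:subClassOf>"))
```

The platforms in the study differ precisely in how far beyond this primitive they go: persistent storage modes, full SPARQL support, OWL parsing, and consistency checking.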

Improving supply chain visibility within logistics by implementing a Digital Twin : A case study at Scania Logistics / Att förbättra synlighet inom logistikkedjor genom att implementera en Digital Tvilling : En fallstudie på Scania Logistics

BLOMKVIST, YLVA, ULLEMAR LOENBOM, LEO January 2020 (has links)
As organisations adapt to the rigorous demands set by global markets, the supply chains that constitute their logistics networks become increasingly complex. This often has a detrimental effect on supply chain visibility within the organisation, which may in turn have a negative impact on the core business of the organisation. This paper aims to determine how organisations can benefit, in terms of improving their logistical supply chain visibility, by implementing a Digital Twin: an all-encompassing virtual representation of the physical assets that constitute the logistics system. Furthermore, challenges related to implementation and the necessary steps to overcome these challenges were examined.  The results of the study are that Digital Twins may prove beneficial to organisations in terms of improving analytics, diagnostics, predictions and descriptions of physical assets. However, these benefits come with notable challenges: managing implementation and maintenance costs, ensuring proper information modelling, adopting new technology, and leading the organisation through the changes that an implementation would entail.  In conclusion, a Digital Twin is a powerful tool suitable for organisations where the benefits outweigh the challenges of the initial implementation. Careful consideration must therefore be taken to ensure that the investment is worthwhile. Further research is required to determine the most efficient way of introducing a Digital Twin to a logistical supply chain.

Development of a pipeline to allow continuous development of software onto hardware : Implementation on a Raspberry Pi to simulate a physical pedal using the Hardware In the Loop method / Utveckling av en pipeline för att ge upphov till kontinuerligt utvecklande av mjukvara på hårdvara : Implementation på en Raspberry Pi för att simulera en fysisk pedal genom användandet av Hardware In the Loop-metoden

Ryd, Jonatan, Persson, Jeffrey January 2021 (has links)
Saab wants to examine the Hardware In the Loop method as a concept, and what an infrastructure for Hardware In the Loop would look like. Hardware In the Loop is based on continuously testing hardware, which here is simulated. The software Saab wants to use for the Hardware In the Loop method is Jenkins, a Continuous Integration and Continuous Delivery tool. To simulate the hardware, they want to examine the use of an Application Programming Interface between a Raspberry Pi and the test automation framework Robot Framework. Saab wants this examined because they believe that this method can improve the rate of testing and the quality of the tests, and thereby the quality of their products.
The theory behind Hardware In the Loop, Continuous Integration, and Continuous Delivery is explained in this thesis. The Hardware In the Loop method was implemented on top of Jenkins. An Application Programming Interface between the General Purpose Input/Output pins of a Raspberry Pi and Robot Framework was developed. With these implementations in place, the Hardware In the Loop method was successfully integrated, with a Raspberry Pi used to simulate the hardware.
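The pedal-simulation layer of such a setup can be sketched with an in-memory GPIO mock, so the logic can run in a CI pipeline without physical hardware. The class names, method shapes, and pin numbering below are illustrative assumptions, not the thesis's actual implementation (which drives real Raspberry Pi pins):

```python
class SimulatedGPIO:
    """In-memory stand-in for a Raspberry Pi GPIO bank: pins hold a
    high/low level that can be written and read back."""
    HIGH, LOW = 1, 0

    def __init__(self):
        self.pins = {}

    def output(self, pin, level):
        self.pins[pin] = level

    def input(self, pin):
        return self.pins.get(pin, self.LOW)   # unwritten pins read low


class PedalSimulator:
    """Exposes press/release as plain methods -- the shape of keywords a
    Robot Framework test library could call through such an API."""
    def __init__(self, gpio, pin):
        self.gpio, self.pin = gpio, pin

    def press(self):
        self.gpio.output(self.pin, SimulatedGPIO.HIGH)

    def release(self):
        self.gpio.output(self.pin, SimulatedGPIO.LOW)

    def is_pressed(self):
        return self.gpio.input(self.pin) == SimulatedGPIO.HIGH


gpio = SimulatedGPIO()
pedal = PedalSimulator(gpio, pin=17)
pedal.press()
print(pedal.is_pressed())    # True
pedal.release()
print(pedal.is_pressed())    # False
```

In a pipeline, Jenkins would run Robot Framework test cases that call such keywords and assert on the simulated pin states after each continuous integration build.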
