161

Simuleringsmiljö för mobila nätverk / Simulation environment for mobile networks

Branting, Jonatan, Byhlin, Martin, Erhard Olsson, Niklas, Fundin Bengtsson, August, Johns, Rasmus, Lundberg, Martin, Müller, Hanna, Nyberg, Adam, Pettersson, Niklas January 2016 (has links)
This report covers the work carried out during the spring term of 2016 by nine students in the course TDDD96: Bachelor's Project in Software Engineering at Linköping University. The goal of the project was to develop a graphical user interface that visualises simulations of telecom data traffic in 2G environments. The work has been a learning process encompassing group dynamics, new technical challenges, and an exploration of how development in a project moves from an initial order to a desirable delivery. The group worked in short iterations with frequent meetings and prioritised activity lists. The result so far is a prototype with interactive simulation, connected to the customer's servers, that bodes well for continued development.
162

Utveckling av en applikation för framtagning av hjärnresponser vid funktionell magnetresonanstomografi / Developing an application for extracting brain responses in functional magnetic resonance imaging

Arvidsson, Carl, Bergström, David, Eilert, Pernilla, Gudmundsson, Håkan, Henriksson, Christoffer, Magnusson, Filip, Nåbo, Henning, Petersén, Elin January 2016 (has links)
The report covers the development of the software JABE, intended for use in research on brain responses. The purpose of the report is to examine how such a system can be designed so that it creates value for the customer while also facilitating further development. The report also addresses how a user interface can be adapted to a user's level of expertise and what experiences can be documented from the project in general. The problem is tackled with close customer contact, several surveys, agile working methods, and thorough documentation. The software JABE was commissioned by CMIV, the Center for Medical Image Science and Visualization at Linköping University, and is the only one of its kind. The result is, besides a piece of software, a thorough account of experiences, a description of the system, and an evaluation of the SEMAT Kernel ALPHA. The bachelor's report also contains eight individual contributions that explore topics related to the project in depth.
163

Order Engine : Performance comparison between the paradigms MTEDA and COOA

Estlind, Björn January 2016 (has links)
This study has involved developing a system, "Order Engine", whose task is to asynchronously consume queues of orders (tasks) on behalf of both internal and external units. The system was developed according to the MTEDA paradigm and its performance was compared with an existing OE system developed according to the COOA paradigm, in order to determine which of the two paradigms is preferable when developing an OE system. The MTEDA system uses a structure in which a master process delegates work to slave processes following an event-driven architecture, meaning that processes are created and terminated by the master process. The COOA system instead uses a thread pool, where threads are assigned work when they are idle. It turned out that the MTEDA system executed the orders faster than the COOA system during the actual execution of the orders, although the characteristics of the orders can affect the execution speed of the MTEDA system. The MTEDA system is a more costly solution because of the less efficient way in which work is distributed: creating and terminating numerous processes proved more expensive than managing a thread pool. Either paradigm can be advantageous when developing an OE system. The results of this study indicate that the choice of paradigm for an OE system should be made with regard to the available resources and the general characteristics of the incoming orders. / The purpose of this project has been to compare the performance differences between the paradigms MTEDA and COOA when applied to an "Order Engine" system, a system that receives orders (tasks) from internal and external units and executes them asynchronously. The company Dewire, which developed the COOA system, supervised the author in finding out whether an OE system can benefit from using the MTEDA paradigm. The MTEDA system uses a structure where a master process delegates incoming orders to slave processes, which then execute the orders in an event-driven manner. The COOA system, on the other hand, uses a thread pool, where orders are assigned to threads in the pool when they are available. It turned out that the MTEDA system executed the orders faster than the COOA system, although the MTEDA system can gain or lose execution speed depending on the characteristics of the orders. The way MTEDA delegates the orders, that is, by spawning and destroying processes, turned out to be more expensive for the CPU, whereas assigning orders to threads that are already active proved an efficient use of resources. Either paradigm could be the better choice when developing an OE system; the choice should therefore be determined by the resources available and the general characteristics of the incoming orders.
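The difference between the two dispatch strategies can be sketched in a few lines of Python (an illustration under stated assumptions, not code from the thesis; the order-handling function, pool sizes, and workload are hypothetical): a thread pool hands orders to already-running threads, as in the COOA system, while a process pool roughly approximates the MTEDA master delegating work to separate worker processes.

```python
# Illustrative sketch only (not the thesis code); the order-handling function
# and parameters are hypothetical stand-ins for the queued orders described above.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def handle_order(order_id: int) -> str:
    """Stand-in for executing a single queued order (task)."""
    time.sleep(0.01)  # simulate the work an order requires
    return f"order {order_id} done"

if __name__ == "__main__":
    orders = range(100)

    # COOA-style: a fixed thread pool where idle threads are handed new orders,
    # avoiding any per-order start-up cost.
    with ThreadPoolExecutor(max_workers=8) as pool:
        thread_results = list(pool.map(handle_order, orders))

    # MTEDA-style, roughly approximated: a master hands orders to separate worker
    # processes; creating and tearing down processes is the overhead the study notes.
    with ProcessPoolExecutor(max_workers=8) as workers:
        process_results = list(workers.map(handle_order, orders))
```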
164

Evaluation of Model Based Testing and Conformiq Qtronic

Khan, Muhammad Bilal Ahmad, Shang, Song January 2009 (has links)
Model-based testing is a modern automated testing methodology used to generate test suites automatically from abstract behavioural or environmental models of the System Under Test (SUT). The generated test cases are abstract, like the models, but they can be transformed into test scripts or language-specific test cases for execution. Model-based testing can be applied in different ways, and several dimensions of its implementation can vary with the nature of the SUT. Because model-based testing works directly with models, it can be applied at early stages of development, which helps validate both models and requirements and can save test-development time at later stages. With automatic generation of test cases, requirement changes are easy to handle, since they require fewer changes to the models and reduce rework. It is also easy to generate a large number of test cases with full coverage criteria, something that is hard to achieve with traditional testing methodologies. Testing non-functional requirements is one area where model-based testing is lacking; quality-related aspects of the SUT are difficult to test with it. The effectiveness and performance of model-based testing depend directly on the efficiency of the CASE tool implementing it. A variety of model-based CASE tools are currently in use in different industries; the Qtronic tool is one that generates test cases automatically from an abstract model of the SUT. In this master thesis, the Qtronic test case generation technique, generation time, coverage criteria, and quality of test cases are evaluated in detail by modelling the Session Initiation Protocol (SIP) and the File Transfer Protocol (FTP), and by generating test cases from the models both manually and with the Qtronic tool. To evaluate Qtronic, detailed experiments comparing manually generated test cases with those generated by Qtronic are conducted. The results of the case studies show the efficiency of Qtronic over traditional manual test case generation in many aspects. We also show that model-based testing is not effective for every system under test; for some simple systems, manual test case generation may be a better choice.
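As a small illustration of the general idea behind model-based test generation (a hedged sketch only; it is not how Conformiq Qtronic works internally, and the toy model is invented), the SUT can be described as a state machine and abstract test sequences derived so that every transition is covered:

```python
# Minimal sketch of deriving abstract test cases from a behavioural model.
# The state machine below is a hypothetical toy model, not SIP or FTP.
from collections import deque

# (state, action) -> next state
transitions = {
    ("idle", "invite"): "calling",
    ("calling", "ack"): "in_call",
    ("calling", "cancel"): "idle",
    ("in_call", "bye"): "idle",
}

def generate_tests(start: str = "idle"):
    """Breadth-first search over the model; each path that exercises a
    not-yet-covered transition becomes an abstract test case (action sequence)."""
    tests, covered = [], set()
    queue = deque([(start, [])])
    while queue and len(covered) < len(transitions):
        state, path = queue.popleft()
        for (src, action), dst in transitions.items():
            if src == state:
                new_path = path + [action]
                if (src, action) not in covered:
                    covered.add((src, action))
                    tests.append(new_path)
                if len(new_path) <= len(transitions):  # bound the path length
                    queue.append((dst, new_path))
    return tests

if __name__ == "__main__":
    for i, test in enumerate(generate_tests(), 1):
        print(f"test {i}: {' -> '.join(test)}")
```

The resulting action sequences are abstract test cases; in a tool-supported workflow they would then be mapped onto concrete test scripts for the SUT.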
165

The Progress run-time architecture

Pop, Tomas January 2009 (has links)
This thesis is part of a larger research vision called Progress, which aims at providing component-based techniques for the development of real-time embedded systems. It initiates research into the run-time structures of the Progress component model. The thesis aims to identify the key questions about the internal structure of virtual nodes and about the supporting mechanisms needed to run virtual nodes on the target hardware. Part of this thesis is also a sample implementation of the virtual-node run-time environment covering local and Ethernet communication, event-driven and timer-driven tasks, and multiple computational nodes. / Progress
166

Automated end-to-end user testing on single page web applications

Palmér, Tobias, Waltré, Markus January 2015 (has links)
Competencer wants to be sure that users experience its web product as it was designed. With the help of tools for end-to-end user testing, interactions based on what a user would do are simulated to test potential situations. This thesis work targets four areas of end-to-end user testing, with a major focus on making it automatic. A study of test case methods is conducted to gain an understanding of how to approach writing tests. A coverage tool is researched and built to measure how much of the product is being tested. To ease use for developers, a solution for continuous integration is investigated. To make tests more automatic, a way to build mocks automatically is implemented. These areas, combined with the background of Competencer's application architecture, create a foundation for replacing manual testing sessions with automatic ones.
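A minimal sketch of what such an automated end-to-end user test can look like for a single-page application is shown below, using Selenium WebDriver in Python; it is not Competencer's test suite, and the URL, element ids, and expected text are placeholder assumptions.

```python
# Hypothetical end-to-end test sketch: simulate a user logging in and verify
# the resulting view. All selectors and the URL are made-up placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test_login_flow():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/app")  # placeholder URL
        driver.find_element(By.ID, "email").send_keys("user@example.test")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        # Single-page apps update the DOM asynchronously, so wait for the
        # result instead of assuming an immediate page load.
        greeting = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.CSS_SELECTOR, ".dashboard-greeting"))
        )
        assert "Welcome" in greeting.text
    finally:
        driver.quit()

if __name__ == "__main__":
    test_login_flow()
```

The explicit wait is the key difference from testing classic page-reload applications: in a single-page application the assertion must wait for the asynchronous DOM update triggered by the simulated user action.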
167

Comparison of Web Development Technologies

Ramesh Nagilla, Ramesh January 2012 (has links)
Web applications play an important role in many business activities in the modern world and have become a platform for companies to meet their business needs. In this situation, the Web technologies used to develop such applications become an important consideration. Many Web technologies, such as Hypertext Preprocessor (PHP), Active Server Pages (ASP.NET), ColdFusion Markup Language (CFML), Java, Python, and Ruby on Rails, are available on the market, and all of them can be used to achieve the goals of clients and business organisations. ASP.NET and PHP are the two that compete most directly, and many companies and developers consider one better than the other. This observation motivates the main aim of the thesis: a photo gallery application is developed in both ASP.NET and PHP in order to compare the two Web development technologies. Several criteria are considered in order to differentiate them and conclude which of the two is the better choice.
168

Simplan v3.0

Larsson, Jesper, Teintang, André January 2013 (has links)
During the spring of 2012 we developed an IT solution for the simulator centre at Saab Aeronautics in Tannefors, Linköping. Saab has a long history of developing and manufacturing aircraft and their supporting systems, including the JAS Gripen. The simulators there are used partly for developing new software for the aircraft and partly for pilot training and exercises. The project was carried out as part of our degree project in the Innovative Programming programme at Linköping University. Four people took part, and the work was divided across two separate theses to separate the work and responsibilities as much as possible. Our group built the underlying system with the logic, data storage, and all functionality, while the parallel group built a graphical interface, in the form of a website, that uses those functions. This report describes the work done by Jesper Larsson and André Teintang; it may therefore also be of interest to read the other group's report, authored by Gustav Hjortsparre and Han Lin Yap. The goal of our work was to build a set of tools for handling bookings and the management of the simulators used at Saab. This includes interfaces for users to book simulators, extra equipment, and external pilots, and to manage existing bookings. At the same time, the administrators and operators who work with the simulators need tools for follow-ups, maintenance planning, installing extra equipment, and scheduling staff. In addition, the tools must generate invoices to charge for the booked equipment, send reminder e-mails, and more. This report covers the information one needs to know, and the things to keep in mind, when designing a back-end system that others will later build an application on top of, while also enabling other developers to extend the system in the future. We also explain the design considerations we faced, the decisions we made, and their outcomes.
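A booking backend of this kind needs, at a minimum, a conflict check when new simulator bookings are made; the sketch below is a hypothetical illustration in Python (not the Simplan code), with made-up simulator names and times.

```python
# Hypothetical sketch of a booking-conflict check for a simulator booking backend.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Booking:
    simulator: str
    start: datetime
    end: datetime

def conflicts(new: Booking, existing: list[Booking]) -> bool:
    """Two bookings of the same simulator conflict if their time spans overlap."""
    return any(
        b.simulator == new.simulator and new.start < b.end and b.start < new.end
        for b in existing
    )

if __name__ == "__main__":
    current = [Booking("gripen-sim-1", datetime(2012, 5, 2, 9), datetime(2012, 5, 2, 12))]
    wanted = Booking("gripen-sim-1", datetime(2012, 5, 2, 11), datetime(2012, 5, 2, 13))
    print("conflict:", conflicts(wanted, current))  # -> True
```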
169

Towards a guideline for refactoring of embedded systems

Dersten, Sara January 2012 (has links)
The electronics in automotive systems offer great possibilities. They have contributed to environmental improvements through reduced emissions and fuel consumption, to safety and driver assistance, and to quality through better diagnostic capabilities. Automotive systems are today distributed embedded systems consisting of several nodes that communicate with each other. The increasing possibilities have led to a situation where functions that used to be stand-alone now depend on several interconnected systems that all contribute to the desired functionality. This has increased the cost and complexity of dealing with the systems. The automotive industry is adopting a new open software architecture, called AUTOSAR, intended to reduce this complexity. AUTOSAR also offers possibilities for coping with large product ranges and for component sharing. The introduction of AUTOSAR is an example of an architecture change that does not modify the external functionality; we have chosen to call such changes system refactoring. However, if the introduction of AUTOSAR is not performed successfully, there is a risk of delayed development projects, which are costly for the automotive companies. Unfortunately, existing engineering standards and literature focus mostly on new product development and less on system refactoring, and this gap needs to be filled. The goal of this research is to provide refactoring guidelines that support system architects throughout the complete process of refactoring a system. This thesis identifies the characteristics of refactoring processes through empirical studies of the drivers behind refactoring, the effects that can be expected from it, and the process activities and their characteristics. The result can be used to create guidelines that improve the work of refactoring.
170

Managing applications and data in distributed computing infrastructures

Toor, Salman January 2010 (has links)
Over the last few decades, the computational power and data storage needed by collaborative, distributed scientific communities have increased very rapidly. Distributed computing infrastructures, such as computing and storage grids, provide means to connect geographically distributed resources and help address the needs of these communities. Much progress has been made in developing and operating grids, but several issues still need further attention. This thesis discusses three different aspects of managing large-scale scientific applications in grids:
• Using large-scale scientific applications is often in itself a complex task, and setting them up and running experiments in a distributed environment adds another level of complexity. It is important to design general-purpose and application-specific frameworks that enhance the overall productivity of the scientists. The thesis presents further development of a general-purpose framework in which existing portal technology is combined with tools for robust and middleware-independent job management. A pilot implementation of a domain-specific problem-solving environment based on a grid-enabled R solution is also presented.
• Many current and future applications will need large-scale storage systems. Centralized systems are eventually not scalable enough to handle huge data volumes and can also have additional problems with security and availability. An alternative is a reliable and efficient distributed storage system. The thesis describes the architecture of Chelonia, a self-healing, grid-aware distributed storage cloud, and presents performance results for a pilot implementation.
• In a distributed computing infrastructure it is very important to manage and utilize the available resources efficiently. The thesis presents a review of different resource-brokering techniques and how they are implemented in different production-level middlewares. A modified resource-allocation model for the Advanced Resource Connector (ARC) middleware is also described, together with performance experiments. / eSSENCE
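The resource-brokering aspect mentioned in the last point can be illustrated with a small hypothetical sketch (not the ARC implementation; all names and numbers are invented): rank candidate clusters by an estimated wait time and prefer free capacity.

```python
# Simplified, hypothetical resource-brokering policy for a grid job submission.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    free_slots: int
    queued_jobs: int
    avg_job_minutes: float

    def estimated_wait(self) -> float:
        """Crude wait estimate: zero if slots are free, otherwise queued work."""
        return 0.0 if self.free_slots > 0 else self.queued_jobs * self.avg_job_minutes

def broker(resources: list[Resource]) -> Resource:
    """Select the resource with the shortest estimated wait, preferring the one
    with the most free slots on ties."""
    return min(resources, key=lambda r: (r.estimated_wait(), -r.free_slots))

if __name__ == "__main__":
    clusters = [
        Resource("cluster-a", free_slots=0, queued_jobs=40, avg_job_minutes=30),
        Resource("cluster-b", free_slots=16, queued_jobs=0, avg_job_minutes=30),
        Resource("cluster-c", free_slots=2, queued_jobs=5, avg_job_minutes=30),
    ]
    print("submit to:", broker(clusters).name)  # -> cluster-b
```

Production brokers weigh many more factors (data locality, authorisation, benchmarks, policies), but the core idea of ranking candidate resources against the job's requirements is the same.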
