71

Handling of curvilinear coordinates in a PDE solver framework

Ljungberg, Malin January 2003 (has links)
By the use of object-oriented analysis and design combined with variability modeling, a highly flexible software model for the metrics handling functionality of a PDE solver framework was obtained. This new model was evaluated in terms of usability, particularly with respect to efficiency and flexibility. The efficiency of a pilot implementation is similar to, or even higher than, that of a pre-existing application-specific reference code. With regard to flexibility, it is shown that the new software model performs well for a set of four change scenarios selected by an expert user group.
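The abstract gives no code, but the design it describes follows the familiar pattern of hiding metric computations behind an abstract interface so the solver core stays coordinate-agnostic. A minimal C++ sketch of that idea follows; every class and member name is invented for illustration and none comes from the thesis.

```cpp
#include <utility>
#include <vector>

// Illustrative sketch only: an abstract interface that hides how grid metric
// coefficients are obtained, so the solver core does not need to know whether
// the coordinates are Cartesian or curvilinear.
class MetricHandler {
public:
    virtual ~MetricHandler() = default;
    // Jacobian of the coordinate mapping at grid point (i, j).
    virtual double jacobian(int i, int j) const = 0;
};

// Cartesian grids have a constant, trivial metric.
class CartesianMetrics : public MetricHandler {
public:
    double jacobian(int, int) const override { return 1.0; }
};

// Curvilinear grids look up precomputed per-point metric data.
class CurvilinearMetrics : public MetricHandler {
public:
    CurvilinearMetrics(std::vector<double> jac, int nx)
        : jac_(std::move(jac)), nx_(nx) {}
    double jacobian(int i, int j) const override { return jac_[j * nx_ + i]; }
private:
    std::vector<double> jac_;
    int nx_;
};
```

Variant behavior then enters through which concrete handler is instantiated, which is how such a design can absorb change scenarios without touching the solver core.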
72

On using mobile agents for load balancing in high performance computing

Munasinghe, Kalyani January 2002 (has links)
One recent advance in software technology is the development of software agents that can adapt to changes in their environment and can cooperate and coordinate their activities to complete a given task. Such agents can be distributed over a network. Advances in hardware technology have meant that clusters of workstations can be used to create parallel virtual machines that bring the power of parallel computing to a much wider research and development community. Many software packages are now being developed to utilise such cluster environments. In a cluster, each processor will be multitasking and running other jobs simultaneously with a distributed application that uses a message passing environment such as MPI. A typical application might be a large scale mesh-based computation, such as a finite element code, in which load balancing is equivalent to mesh partitioning. When the load is varying between processors within the cluster, distributing the computation in equal amounts may not deliver the optimum performance. Some machines may be very heavily loaded by other users while other processors may have no such additional load. It may be beneficial to measure current system information and use this information when balancing the load within a single distributed application program. This thesis presents one approach to distributing workload more efficiently in a multi-user distributed environment by using mobile agents to collect system information which is then transmitted to all the MPI tasks. The thesis contains a review of software agents and mesh partitioning together with some numerical experiments and a paper.
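As a rough illustration of the load-aware distribution described above, the sketch below lets each MPI task obtain a load estimate for its host and has all tasks agree on work shares inversely proportional to that load. In the thesis the measurement is collected by mobile agents; here the BSD/glibc getloadavg() call stands in for it, and the function name and weighting scheme are assumptions for illustration.

```cpp
#include <mpi.h>
#include <cstdlib>   // getloadavg (BSD/glibc)
#include <vector>

// Gather a per-host load estimate from every task and turn it into
// normalized work shares: lightly loaded machines get more of the mesh.
std::vector<double> compute_work_shares(MPI_Comm comm) {
    int nranks;
    MPI_Comm_size(comm, &nranks);

    double load = 0.0;
    getloadavg(&load, 1);  // 1-minute load average; stand-in for the
                           // agent-supplied system information

    std::vector<double> loads(nranks);
    MPI_Allgather(&load, 1, MPI_DOUBLE, loads.data(), 1, MPI_DOUBLE, comm);

    double total = 0.0;
    for (double l : loads) total += 1.0 / (1.0 + l);

    std::vector<double> shares(nranks);
    for (int r = 0; r < nranks; ++r)
        shares[r] = (1.0 / (1.0 + loads[r])) / total;
    return shares;
}
```

The shares could then drive a weighted mesh partitioner, so that a processor carrying heavy external load receives a correspondingly smaller piece of the computation.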
73

Parallel PDE Solvers on cc-NUMA Systems

Nordén, Markus January 2004 (has links)
The current trend in parallel computers is that systems with a large shared memory are becoming more and more popular. A shared memory system can be either a uniform memory architecture (UMA) or a cache coherent non-uniform memory architecture (cc-NUMA). In the present thesis, the performance of parallel PDE solvers on cc-NUMA computers is studied. In particular, we consider the shared namespace programming model, represented by OpenMP. Since the main memory is physically, or geographically, distributed over several multi-processor nodes, the latency for local memory accesses is smaller than for remote accesses. Therefore, the geographical locality of the data becomes important. The questions posed in this thesis are: (1) How large is the influence on performance of the non-uniformity of the memory system? (2) How should a program be written in order to reduce this influence? (3) Is it possible to introduce optimizations in the computer system for this purpose? Most of the application codes studied solve the Euler equations, using a finite difference method and a finite volume method respectively, and are parallelized with OpenMP. Comparisons are made with an alternative implementation using MPI and with PDE solvers implemented with OpenMP that solve other equations using different numerical methods. The main conclusion is that geographical locality is important for performance on cc-NUMA systems. This can be achieved through self-optimization provided in the system or through migrate-on-next-touch directives that could be inserted automatically by the compiler. We also conclude that OpenMP is competitive with MPI on cc-NUMA systems if care is taken to get a favourable data distribution.
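A standard way to obtain the geographical locality discussed above is the "first touch" idiom, sketched below under the assumption of a first-touch page placement policy (common on cc-NUMA systems): initialize data in parallel with the same loop schedule as the compute loop, so each thread's pages land on its own node. This is a generic OpenMP idiom, not code from the thesis.

```cpp
#include <omp.h>
#include <cstddef>
#include <memory>

int main() {
    const std::size_t n = std::size_t(1) << 24;
    // Raw arrays: elements stay uninitialized, so no page is touched
    // before the parallel initialization below. (A value-initializing
    // container would let the master thread touch every page first.)
    std::unique_ptr<double[]> u(new double[n]);
    std::unique_ptr<double[]> unew(new double[n]);

    #pragma omp parallel for schedule(static)
    for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)n; ++i) {
        u[i] = 0.0;    // first touch: the page is placed on this thread's node
        unew[i] = 0.0;
    }

    // Compute loop with the same static schedule: accesses are mostly
    // node-local, apart from the points shared at chunk boundaries.
    #pragma omp parallel for schedule(static)
    for (std::ptrdiff_t i = 1; i < (std::ptrdiff_t)n - 1; ++i)
        unew[i] = 0.5 * (u[i - 1] + u[i + 1]);

    return 0;
}
```

When the access pattern changes between phases, first-touch placement is no longer enough, which is where the migrate-on-next-touch directives mentioned in the abstract come in.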
74

Performance characterization and evaluation of parallel PDE solvers

Johansson, Henrik January 2006 (has links)
Computer simulations that solve partial differential equations (PDEs) are common in many fields of science and engineering. To decrease the execution time of the simulations, the PDEs can be solved on parallel computers. For efficient parallel implementations, the characteristics of both the hardware and the PDE solver must be taken into account. In this thesis, we explore two ways to increase the efficiency of parallel PDE solvers. First, we use full-system simulation of a parallel computer to get detailed knowledge about cache memory usage for three parallel PDE solvers. The results reveal cases of bad cache memory locality. This insight can be used to improve the performance of the PDE solvers. Second, we study the adaptive mesh refinement (AMR) partitioning problem. Using AMR, computational resources are dynamically concentrated on areas in need of high accuracy. Because of the dynamic resource allocation, the workload must repeatedly be partitioned and distributed over the processors. We perform two comprehensive characterizations of partitioning algorithms for AMR on structured grids. For an efficient parallel AMR implementation, the partitioning algorithm must be dynamically selected at run-time with regard to both the application and computer state. We prove the viability of dynamic algorithm selection and present performance data that show the benefits of using a large number of complementing partitioning algorithms. Finally, we discuss how our characterizations can be used in an algorithm selection framework.
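To make the idea of run-time algorithm selection concrete, the sketch below picks between two partitioners based on measured state. The types, thresholds, and the two-way choice are invented for illustration; the thesis argues for a much larger pool of complementing algorithms.

```cpp
#include <functional>
#include <vector>

// Illustrative types: a structured-grid patch and a partitioner mapping
// patches to processor ranks.
struct Box { int lo[2], hi[2]; };
using Partitioner =
    std::function<std::vector<int>(const std::vector<Box>&, int nprocs)>;

// Runtime measurements that could drive the selection.
struct RuntimeState {
    double imbalance;      // measured load imbalance of the last step
    double comm_fraction;  // fraction of step time spent communicating
};

const Partitioner& select_partitioner(const RuntimeState& s,
                                      const Partitioner& cheap,
                                      const Partitioner& thorough) {
    // If imbalance dominates, pay for a higher-quality decomposition;
    // if communication dominates, prefer the cheap, locality-preserving cut.
    return (s.imbalance > 1.15 && s.comm_fraction < 0.5) ? thorough : cheap;
}
```

A full selection framework would replace the hard-coded rule with a characterization-driven lookup, re-evaluated each time the grid hierarchy changes.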
75

An approach to software product line use case modeling

Eriksson, Magnus January 2006 (has links)
Organizations developing software intensive defense systems are today faced with a number of challenges related to characteristics of both the market place and the system domain: 1. Systems grow ever more complex, consisting of tightly integrated mechanical, electrical/electronic and software components. 2. Systems are often developed in short series, ranging from only a few to a few hundred units. 3. Systems have very long life spans, typically 30 years or longer. 4. Systems are developed with high commonality between different customers; however, systems are always customized for specific needs. The goal of the research presented in this thesis is to investigate methods and tools to enable efficient development and maintenance of systems in such a context. The strategy adopted in this work is to utilize the fourth system characteristic, high commonality, to achieve this. One approach to software reuse, which could be a potential solution as it enables reuse of common parts while still allowing for variations, is known as software product line development. The basic idea of this approach is to use domain knowledge to identify common parts within a family of related products and to separate them from the differences between the products. The commonalities are then used to create a product platform that can be used as a common baseline for all products within such a product family. The main contribution of this licentiate thesis is a product line use case modeling approach tailored towards organizations developing software intensive defense systems. We describe how a common and complete use case model can be developed and maintained for a whole family of products, and how the variations within such a family are modeled using a feature model. Concrete use case models, for particular products within a family, can then be generated by selecting features from the feature model. We furthermore describe extensions to the commercial requirements management tool Telelogic DOORS and the UML modeling tool IBM-Rational Rose to support the proposed approach. The approach was applied and evaluated in an industrial case study in the target domain. Based on the collected case study data, we draw the conclusion that the approach performs better than modeling according to the styles and guidelines specified by the IBM-Rational Unified Process (RUP) in the current industrial context. The results, however, also indicate that for the approach to be successfully applied, stronger configuration management and product planning functions than traditionally found in RUP projects are needed.
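The core mechanism, generating a product's use case model from a feature selection, can be pictured with a small sketch. The data layout and names below are assumptions for illustration; the thesis realizes this inside Telelogic DOORS and IBM-Rational Rose, not in standalone code.

```cpp
#include <set>
#include <string>
#include <vector>

// Each use case in the family model is tagged with the features that
// require it; an empty tag set marks a use case common to all products.
struct UseCase {
    std::string name;
    std::set<std::string> required_features;
};

// Generate the concrete use case model for one product from its
// feature selection: keep a use case only if every feature it
// requires was selected.
std::vector<std::string> generate_product_model(
    const std::vector<UseCase>& family_model,
    const std::set<std::string>& selected) {
    std::vector<std::string> product;
    for (const UseCase& uc : family_model) {
        bool include = true;
        for (const std::string& f : uc.required_features)
            if (!selected.count(f)) { include = false; break; }
        if (include) product.push_back(uc.name);
    }
    return product;
}
```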
76

Value Based Requirements Engineering : State-of-art and Survey

Mudduluru, Pavan January 2016 (has links)
No description available.
77

Tidsloggning via NFC

Hedlund, Robin, Johansson, Peter January 2016 (has links)
This report describes the development of a time reporting system based on Near Field Communication (NFC) technology. The system is primarily intended for a company; it is built with modern techniques and was developed with expansion and further development in mind. A web application has been developed that consists of a server, a database, and a web page. The server receives requests to manage the information stored in the database, and the web page provides a user interface, accessible from a browser, for managing and reviewing time logs. The system includes a station that uses an NFC reader to read information from external NFC devices. The information read is forwarded over Wi-Fi to the server, either to register a new station or to create a time log. A mobile application has also been developed for phones that run the Android operating system and have built-in NFC support. Such a phone can be swiped over a station to perform a time logging. The mobile application can also create, modify, delete, and view time logs on its own. GPS is integrated for navigation and to connect a position with a time log.
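The report names no concrete wire format, so purely as an illustration of the station-to-server flow, a time log record forwarded over Wi-Fi might look like the sketch below; every field name and the JSON layout are assumptions.

```cpp
#include <sstream>
#include <string>

// Assumed record contents for one time log event.
struct TimeLogEvent {
    std::string station_id;  // which station was swiped
    std::string device_id;   // identity read over NFC from the phone
    long long   timestamp;   // seconds since the epoch
    double      lat, lon;    // GPS position attached by the mobile app
};

// Serialize the event for an HTTP request to the server.
std::string to_json(const TimeLogEvent& e) {
    std::ostringstream os;
    os << "{\"station\":\"" << e.station_id << "\","
       << "\"device\":\"" << e.device_id << "\","
       << "\"time\":" << e.timestamp << ","
       << "\"lat\":" << e.lat << ",\"lon\":" << e.lon << "}";
    return os.str();
}
```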
78

Implications of Client Involvement in Student Projects : Comparative Study between Project without a Real Client and Project with a Real Client

Marriska, Aftri January 2015 (has links)
No description available.
79

Unit Test of Capsules using Google Test Framework

Ström, Joakim, Sjölund, Jakob January 2016 (has links)
Software testing is an important part of modern system development. It is a collection of methods used to detect and correct bugs and faults found in software code. Unit testing is a widely used technique in software testing where individual units of source code, often individual classes and functions, are isolated and tested separately. When developing in a modeling environment, the system components and their respective behavior are expressed by models written in the Unified Modeling Language (UML). These model descriptions are then used to automatically generate programming code for compilation into real-time systems. The generated code can in turn be subjected to unit testing in order to aid in the verification of the system's behavior and functionality. The modeling tool Rational Software Architect RealTime Edition (RSARTE), developed by IBM, is one example of such an environment. The code generated from the UML models in RSARTE is designed to execute in a real-time C++ runtime environment. An essential building block for real-time functionality is the Capsule model. A capsule is an element with an internal state machine and ports that define its behavior and its communication with other capsules. This construction is of great help when programming concurrent real-time applications. Due to the complexity introduced by the real-time runtime environment, it is difficult to isolate and unit test the behavior of designed capsules. In this thesis we show that a capsule in this environment can be isolated and then subjected to unit testing with the help of an integrated third-party unit test framework. Before integrating a suitable framework, we select one by reviewing and comparing a number of mature, available unit test frameworks for the C++ language.
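The shape of such a test can be sketched with Google Test, one of the mature C++ frameworks of the kind the thesis compares. The plain class below stands in for RSARTE-generated capsule code; the thesis's actual isolation mechanism is more involved, and the class and message names here are invented.

```cpp
#include <gtest/gtest.h>

// Stand-in for a generated capsule: an internal state machine driven by
// messages arriving on a port. Link against gtest_main to run the test.
class BlinkerCapsule {
public:
    enum class State { Off, On };
    State state() const { return state_; }
    // Simulates delivery of a "toggle" message on the capsule's port.
    void onToggle() { state_ = (state_ == State::Off) ? State::On : State::Off; }
private:
    State state_ = State::Off;
};

TEST(BlinkerCapsuleTest, TogglesBetweenStates) {
    BlinkerCapsule capsule;
    EXPECT_EQ(BlinkerCapsule::State::Off, capsule.state());
    capsule.onToggle();
    EXPECT_EQ(BlinkerCapsule::State::On, capsule.state());
    capsule.onToggle();
    EXPECT_EQ(BlinkerCapsule::State::Off, capsule.state());
}
```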
80

Outcomes of applying lightweight code review in terms of error detection and perceived value and learning

Tholin, Emil January 2015 (has links)
The problems a new start-up company faces are numerous, ranging from restricted resources and a very high pace of development to differences in the employees' backgrounds and levels of expertise and experience. Procedures have to be put in place in order to give everyone involved the same vision of the product, and to get development up to speed as fast as possible. This case study implements a lightweight code review protocol that is adopted by the programmers of the company, primarily to mitigate the problem of varying expertise. During the course of the study, measurements of errors detected and of perceived value and learning were made. Finally, extrapolations of the data were made in order to see what could be generalised from this very specific case study to a broader context.
