181

Moving Towards Component Based Software Engineering in Train Control Applications

Riaz, Sajid January 2012
The software industry faces a vital challenge caused by rapidly growing demand for speedy and cost-effective development of large and complex software systems. To overcome this challenge, the software community is moving towards component based software engineering (CBSE). The major incentive for the software industry to adopt CBSE as its development paradigm is to rapidly build and deploy complex, trustworthy software systems with substantial savings in engineering effort, cost, and time. CBSE provides the technical facilities that enable easy assembly and upgrading of software systems out of independently developed pieces of software. As the demand for new software increases, software reuse has become attractive to many organizations, because in a competitive environment every organization wants to increase its productivity and reduce development cost and time to market. Organizations also want to achieve systematic software reuse in order to ensure higher reliability, better maintainability, and better quality. CBSE is a systematic approach to achieving such reuse. The aim of this thesis is to present a precise study of the advantages of CBSE, the CBSE lifecycle models available in the literature, component models, CBSE cost-benefit analysis (CBA), and a comparison of CBSE economics with another reuse strategy, the copy-paste strategy, in the railway industry. The thesis also defines a method to identify reusable software components in existing systems. A case study was performed at a train control management system (TCMS) supplier to define a suitable CBSE lifecycle and a component model for TCMS, and to apply the defined method for identifying reusable software in an existing real-time system. A detailed cost-benefit analysis was performed on real data to justify the upfront cost of CBSE.
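The economics the abstract refers to can be illustrated with a simple break-even calculation. This is a minimal sketch under assumed numbers; the cost model and all figures below are illustrative, not data from the thesis:

```python
# Hypothetical break-even analysis: component reuse vs. copy-paste.
# All numbers are illustrative assumptions, not data from the thesis.

def break_even_reuses(upfront_cost, cost_per_copy_paste, cost_per_reuse):
    """Number of reuses at which the upfront CBSE investment pays off."""
    saving_per_reuse = cost_per_copy_paste - cost_per_reuse
    if saving_per_reuse <= 0:
        raise ValueError("Reuse must be cheaper per use than copy-paste")
    return upfront_cost / saving_per_reuse

# Example: making a component reusable costs 50 extra person-days,
# adapting a copy-pasted variant costs 10 days, reusing the component 2 days.
print(break_even_reuses(50, 10, 2))  # -> 6.25, i.e. pays off from the 7th reuse
```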
182

The Python Neural Simulation Technology Graphical User Interface

Haglund, Nicklas January 2009
This report describes the thesis project PyNestGUI, whose goal was to build a graphical interface to the neuron simulator NEST. The first part of the report covers how NEST works and which graphical toolkit was selected. The report then gives a brief overview of what a neuron is and how it works. The final section covers how PyNestGUI is built and how the program works. The problem the program solves is that it builds a model in NEST from user settings and connects neurons in a way similar to how a minicolumn is interconnected. The purpose of the program was to help the user change variables in an easy manner and produce results that can be visualized and saved for later analysis. The results the program can plot from a simulation come from voltmeters and a spike detector connected to all neurons. The program can also display an animation of the simulation, so that the user can see when and which neurons spike and which neurons they are connected to.
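A minimal sketch of the kind of NEST model the program builds, using the PyNEST API. The neuron model, connection rule, and weight are illustrative choices, not the program's actual settings, and the device names follow a recent NEST release (the NEST 2.x versions current when the thesis was written name some devices differently, e.g. spike_detector rather than spike_recorder):

```python
import nest  # PyNEST, the Python interface to the NEST simulator

nest.ResetKernel()

# A small population standing in for one minicolumn (model choice is illustrative).
neurons = nest.Create("iaf_psc_alpha", 10)

# Recurrent connections within the population, as within a minicolumn.
nest.Connect(neurons, neurons, {"rule": "all_to_all"},
             {"synapse_model": "static_synapse", "weight": 20.0})

# Recording devices: a voltmeter and a spike recorder on all neurons.
voltmeter = nest.Create("voltmeter")
spikes = nest.Create("spike_recorder")  # "spike_detector" in NEST 2.x
nest.Connect(voltmeter, neurons)
nest.Connect(neurons, spikes)

nest.Simulate(100.0)  # simulate 100 ms; the recorded data can then be plotted
```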
183

Analysis and Monitoring of Team Collaboration in Emergency Response Training supported by a Web Based Information Management System

Ali, Asif, Ramzan, Faheem January 2009
Our objective in this thesis work is to analyze and manage the log files that are generated after a series of experiments with different groups using the C3Fire simulation environment. The work includes extracting information from the log files, storing it in a database, and presenting it through a web interface built with the ICEfaces Ajax framework for Java. All sequences and information related to the tasks performed by a team are organized in session log files. The work is divided into steps: the first step is to analyze and extract data from the log files and arrange it in several tables in a database; a MySQL database is used to store the information. The web interface of the log file management system is implemented with the ICEfaces Ajax framework and is based on the statistics of the log files generated from the C3Fire environment. Users can add and remove log files, and can view or edit the details of each session log file in the database through the web interface. Different events can be generated and logged for the session information. C3Fire is an environment that supports training and research in team collaboration. It is mainly used in command, control, and communication research, and in training of team decision making. Many humanitarian relief operations are carried out without prior practice, so when a disaster occurs the teams cannot perform their jobs effectively. Effective and efficient relief operations are a humanitarian need, and it is not enough to move teams to the disaster site at the right time: communication and coordination among team members are major factors in effective and well-organized work. C3Fire is a simulation system that trains team members to handle such disaster events and makes their work more proficient through effective coordination.
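A minimal sketch of the first step, parsing session log records into a database. The semicolon-separated log format and the column names are assumptions for illustration; the thesis used MySQL, whereas sqlite3 is used here to keep the sketch self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the real system would connect to MySQL
conn.execute("""CREATE TABLE events
                (session TEXT, time TEXT, actor TEXT, action TEXT)""")

def load_session_log(session_id, lines):
    """Parse 'time;actor;action' records into the events table.
    The semicolon-separated layout is a hypothetical stand-in for
    the actual C3Fire session log format."""
    for line in lines:
        time, actor, action = line.strip().split(";")
        conn.execute("INSERT INTO events VALUES (?, ?, ?, ?)",
                     (session_id, time, actor, action))

load_session_log("S1", ["00:01;unit3;move", "00:05;unit3;extinguish"])
print(conn.execute("SELECT COUNT(*) FROM events").fetchone())  # -> (2,)
```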
184

Training Communication and Self Organization in a Team Training Environment

Nazir, Qamar, Shahzad, Khurram January 2009
C3Fire is a micro-world simulation system used to improve team management skills in a fully controlled environment. It can be used in research where the researcher selects some characteristics of the real world and creates a well-controlled simulation. Training is used to develop skills for tackling emergency situations. The purpose of our thesis is to develop and test communication and self-organization configurations in a team training environment, since success in dealing with emergency management situations depends largely on these training factors. In the first step we studied different theories and research relevant to communication and self-organization. In the second step we studied the structure of C3Fire and developed different configurations based on communication and self-organization. In the third step we tested these training sessions with real-world participants. Finally, we analyzed the behavior of the participants while playing the game.
185

Handling of curvilinear coordinates in a PDE solver framework

Ljungberg, Malin January 2003
By the use of object-oriented analysis and design combined with variability modeling, a highly flexible software model for the metrics handling functionality of a PDE solver framework was obtained. This new model was evaluated in terms of usability, particularly with respect to efficiency and flexibility. The efficiency of a pilot implementation is similar to, or even higher than, that of a pre-existing application-specific reference code. With regard to flexibility, it is shown that the new software model performs well for a set of four change scenarios selected by an expert user group.
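One way to picture the kind of design the abstract describes is a common metrics interface with interchangeable coordinate-system implementations. This is a hypothetical sketch; the thesis's actual class design and names are not reproduced here:

```python
from abc import ABC, abstractmethod
import math

class Metric(ABC):
    """Common interface: the PDE solver asks for metric coefficients
    without knowing which coordinate system is behind them."""
    @abstractmethod
    def scale_factors(self, point):
        ...

class CartesianMetric(Metric):
    def scale_factors(self, point):
        return (1.0, 1.0)  # trivial metric, no coordinate stretching

class PolarMetric(Metric):
    def scale_factors(self, point):
        r, _theta = point
        return (1.0, r)    # h_r = 1, h_theta = r for polar coordinates

def grid_spacing(metric, point, dr, dtheta):
    """Physical grid spacing from computational spacing via the metric."""
    h1, h2 = metric.scale_factors(point)
    return (h1 * dr, h2 * dtheta)

print(grid_spacing(PolarMetric(), (2.0, math.pi / 4), 0.1, 0.05))  # -> (0.1, 0.1)
```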
186

On using mobile agents for load balancing in high performance computing

Munasinghe, Kalyani January 2002
One recent advance in software technology is the development of software agents that can adapt to changes in their environment and can cooperate and coordinate their activities to complete a given task. Such agents can be distributed over a network. Advances in hardware technology have meant that clusters of workstations can be used to create parallel virtual machines that bring the power of parallel computing to a much wider research and development community. Many software packages are now being developed to utilise such cluster environments. In a cluster, each processor will be multitasking and running other jobs simultaneously with a distributed application that uses a message passing environment such as MPI. A typical application might be a large scale mesh-based computation, such as a finite element code, in which load balancing is equivalent to mesh partitioning. When the load is varying between processors within the cluster, distributing the computation in equal amounts may not deliver the optimum performance. Some machines may be very heavily loaded by other users while other processors may have no such additional load. It may be beneficial to measure current system information and use this information when balancing the load within a single distributed application program. This thesis presents one approach to distributing workload more efficiently in a multi-user distributed environment by using mobile agents to collect system information which is then transmitted to all the MPI tasks. The thesis contains a review of software agents and mesh partitioning together with some numerical experiments and a paper.
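A minimal sketch of the measure-and-share idea using mpi4py, as a stand-in for the agent-based collection the thesis describes (in the thesis, mobile agents gather the system information; here each task measures its own host, and the weighting rule is an illustrative assumption):

```python
import os
from mpi4py import MPI  # message passing, as in the thesis's MPI tasks

comm = MPI.COMM_WORLD

# Each task measures its host's current load (Unix-only call); in the
# thesis this information is collected by mobile agents instead.
local_load = os.getloadavg()[0]

# Share the measurements so every task sees the whole cluster's state.
loads = comm.allgather(local_load)

# Give lightly loaded hosts a larger share of the mesh (illustrative rule).
capacity = [1.0 / (1.0 + l) for l in loads]
total = sum(capacity)
shares = [c / total for c in capacity]

if comm.Get_rank() == 0:
    print("work share per task:", shares)
```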
187

Parallel PDE Solvers on cc-NUMA Systems

Nordén, Markus January 2004
The current trend in parallel computers is that systems with a large shared memory are becoming more and more popular. A shared memory system can be either a uniform memory architecture (UMA) or a cache coherent non-uniform memory architecture (cc-NUMA). In the present thesis, the performance of parallel PDE solvers on cc-NUMA computers is studied. In particular, we consider the shared namespace programming model, represented by OpenMP. Since the main memory is physically, or geographically, distributed over several multi-processor nodes, the latency for local memory accesses is smaller than for remote accesses. Therefore, the geographical locality of the data becomes important. The questions posed in this thesis are: (1) How large is the influence on performance of the non-uniformity of the memory system? (2) How should a program be written in order to reduce this influence? (3) Is it possible to introduce optimizations in the computer system for this purpose? Most of the application codes studied address the Euler equations using a finite difference method and a finite volume method, respectively, and are parallelized with OpenMP. Comparisons are made with an alternative implementation using MPI and with PDE solvers implemented with OpenMP that solve other equations using different numerical methods. The main conclusion is that geographical locality is important for performance on cc-NUMA systems. This can be achieved through self-optimization provided in the system or through migrate-on-next-touch directives that could be inserted automatically by the compiler. We also conclude that OpenMP is competitive with MPI on cc-NUMA systems if care is taken to get a favourable data distribution.
188

Performance characterization and evaluation of parallel PDE solvers

Johansson, Henrik January 2006
Computer simulations that solve partial differential equations (PDEs) are common in many fields of science and engineering. To decrease the execution time of the simulations, the PDEs can be solved on parallel computers. For efficient parallel implementations, the characteristics of both the hardware and the PDE solver must be taken into account. In this thesis, we explore two ways to increase the efficiency of parallel PDE solvers. First, we use full-system simulation of a parallel computer to get detailed knowledge about the cache memory usage of three parallel PDE solvers. The results reveal cases of bad cache memory locality; this insight can be used to improve the performance of the PDE solvers. Second, we study the adaptive mesh refinement (AMR) partitioning problem. Using AMR, computational resources are dynamically concentrated on areas in need of high accuracy. Because of the dynamic resource allocation, the workload must repeatedly be partitioned and distributed over the processors. We perform two comprehensive characterizations of partitioning algorithms for AMR on structured grids. For an efficient parallel AMR implementation, the partitioning algorithm must be dynamically selected at run-time with regard to both the application and computer state. We prove the viability of dynamic algorithm selection and present performance data that show the benefits of using a large number of complementary partitioning algorithms. Finally, we discuss how our characterizations can be used in an algorithm selection framework.
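The run-time selection the abstract argues for can be sketched as a simple dispatcher over complementary partitioners. The two partitioners and the selection rule below are illustrative stand-ins; the thesis's actual algorithms and criteria are far more detailed:

```python
# Hypothetical run-time partitioner selection for AMR on structured grids.

def space_filling_curve_partition(patches, nprocs):
    """Fast, moderate quality: round-robin along a patch ordering."""
    return {i: patches[i::nprocs] for i in range(nprocs)}

def graph_partition(patches, nprocs):
    """Slower, higher quality: here just greedy bin-packing by workload."""
    bins = {i: [] for i in range(nprocs)}
    loads = {i: 0 for i in range(nprocs)}
    for p in sorted(patches, key=lambda p: -p["work"]):
        i = min(loads, key=loads.get)   # least loaded processor so far
        bins[i].append(p)
        loads[i] += p["work"]
    return bins

def select_partitioner(n_patches, time_budget_ms):
    # Illustrative rule: fall back to the cheap algorithm when the grid
    # hierarchy is large or the repartitioning budget is tight.
    if n_patches > 10_000 or time_budget_ms < 5:
        return space_filling_curve_partition
    return graph_partition

patches = [{"id": k, "work": (k % 7) + 1} for k in range(20)]
partition = select_partitioner(len(patches), time_budget_ms=50)(patches, 4)
print({rank: [p["id"] for p in ps] for rank, ps in partition.items()})
```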
189

An approach to software product line use case modeling

Eriksson, Magnus January 2006
Organizations developing software-intensive defense systems are today faced with a number of challenges related to characteristics of both the market place and the system domain: 1. Systems grow ever more complex, consisting of tightly integrated mechanical, electrical/electronic and software components. 2. Systems are often developed in short series, ranging from only a few to a few hundred units. 3. Systems have very long life spans, typically 30 years or longer. 4. Systems are developed with high commonality between different customers; however, systems are always customized for specific needs. The goal of the research presented in this thesis is to investigate methods and tools that enable efficient development and maintenance of systems in such a context. The strategy adopted in this work is to utilize the fourth system characteristic, high commonality, to achieve this. One approach to software reuse, which is a potential solution because it enables reuse of common parts while still allowing for variations, is known as software product line development. The basic idea of this approach is to use domain knowledge to identify common parts within a family of related products and to separate them from the differences between the products. The commonalities are then used to create a product platform that serves as a common baseline for all products within such a product family. The main contribution of this licentiate thesis is a product line use case modeling approach tailored to organizations developing software-intensive defense systems. We describe how a common and complete use case model can be developed and maintained for a whole family of products, and how the variations within such a family are modeled using a feature model. Concrete use case models for particular products within a family can then be generated by selecting features from the feature model, as sketched below. We furthermore describe extensions to the commercial requirements management tool Telelogic DOORS and the UML modeling tool IBM-Rational Rose to support the proposed approach. The approach was applied and evaluated in an industrial case study in the target domain. Based on the collected case study data, we draw the conclusion that the approach performs better than modeling according to the styles and guidelines specified by the IBM-Rational Unified Process (RUP) in the current industrial context. The results, however, also indicate that for the approach to be successfully applied, stronger configuration management and product planning functions than those traditionally found in RUP projects are needed.
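The generation step, deriving a product-specific use case model from a feature selection, can be sketched as follows. The feature and use case names are hypothetical, and the thesis realizes this inside DOORS and Rose rather than in code:

```python
# Hypothetical feature-to-use-case mapping for a product line.
# Each use case (or variant) is guarded by the features it requires.
use_case_model = {
    "Engage target":        {"requires": set()},            # common to all products
    "Engage via datalink":  {"requires": {"datalink"}},
    "Export encrypted log": {"requires": {"logging", "crypto"}},
}

def generate_product_model(selected_features):
    """Keep only use cases whose required features are all selected."""
    return [name for name, uc in use_case_model.items()
            if uc["requires"] <= selected_features]

# Two products from the same family, differing only in selected features.
print(generate_product_model({"datalink"}))           # -> ['Engage target', 'Engage via datalink']
print(generate_product_model({"logging", "crypto"}))  # -> ['Engage target', 'Export encrypted log']
```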
190

Value Based Requirements Engineering : State-of-art and Survey

Mudduluru, Pavan January 2016
No description available.
