291

A graphical diagram editor plugin for Eclipse

Ulvestad, Kay Are January 2005
This document serves the dual purpose of being the first version of the documentation for the Eclipse UIML DiaMODL and UIML Diagram plugins, as well as being the written part of my master's thesis. The documentation part is the main body of the document. It covers the background of the project and describes each of the plugins in detail, in terms of structure and behaviour. In addition, the documentation features a relatively detailed practical guide on how the UIML Diagram plugin can be extended with other language-specific plugins. As this is part of my master's thesis, there is also a short summary of my personal experiences working on the project.
292

Code Coverage and Software Reliability

Fugelseth, Lars, Lereng, Stian Frydenlund January 2005
With ever-growing competition among software vendors in supplying customers with tailored, high-quality systems, an emphasis is put on creating products that are well-tested and reliable. During the last decade and a half numerous articles have been published that deal with code coverage and its effect, whether existent or not, on reliability. The last few years have also witnessed an increasing number of software tools for automating the data collection and presentation of code coverage information for applications being tested. In this report we aim to present available and frequently used measures of code coverage, the practical applications and typical misconceptions of code coverage, and its role in software development today. Then we take a look at the notion of reliability in computer systems and at which elements constitute a software reliability model. With the basics of code coverage and reliability estimation in place, we try to assess the status of the relationship between code coverage and reliability, highlight the arguments for and against its existence and briefly survey a few proposed models for connecting code coverage to reliability. Finally, we examine an open-source tool for automated code coverage analysis, focusing on its implementation of the coverage measures it supports, before assessing the feasibility of integrating a proposed approach for reliability estimation into this software utility.
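
As a rough illustration of the kind of coverage measure surveyed above, the following Python sketch computes statement coverage from a set of executed lines and feeds it into a deliberately simple defect-exposure estimate. The function names and the toy detection model are our own assumptions for illustration, not a model proposed in the report.

    # Hypothetical sketch: relating statement coverage to a crude reliability
    # indicator. The linear-exposure assumption is illustrative only.

    def statement_coverage(executable_lines, executed_lines):
        """Fraction of executable statements exercised by the test suite."""
        if not executable_lines:
            return 0.0
        return len(executable_lines & executed_lines) / len(executable_lines)

    def estimated_undetected_fraction(coverage, detect_prob=0.8):
        """Toy model: a defect in covered code is found with probability
        detect_prob; defects in uncovered code are never found by the suite."""
        return 1.0 - coverage * detect_prob

    if __name__ == "__main__":
        executable = set(range(1, 101))   # 100 executable statements
        executed = set(range(1, 73))      # the tests touched 72 of them
        cov = statement_coverage(executable, executed)
        print("statement coverage: {:.0%}".format(cov))
        print("estimated fraction of defects still undetected: {:.0%}".format(
            estimated_undetected_fraction(cov)))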
293

A software tool for risk-based testing

Jørgensen, Lars Kristoffer Ulstein January 2005
There are several approaches to risk-based testing. They have in common that risk is the focus when the tester chooses what to test. In this thesis we combine some of these approaches and present our method for risk-based testing. The method analyses the risk for each part of the system and uses a hazard analysis to indicate what can go wrong. The test efficiency and the risk determine each test's priority. We have shown how a software tool can support our method and have implemented a proof of concept. The implementation is presented and has been tried out by an experienced tester.
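
A minimal sketch of the prioritisation idea, assuming that risk is the product of probability and consequence from the hazard analysis and that priority scales this risk by the test efficiency. The data structures, formula and numbers below are invented for illustration and are not the thesis' proof-of-concept tool.

    # Hypothetical sketch of risk-based test prioritisation.
    from dataclasses import dataclass

    @dataclass
    class Hazard:
        part: str            # part of the system the hazard belongs to
        probability: float   # likelihood of the failure (0..1)
        consequence: float   # cost/severity if it happens

    @dataclass
    class Test:
        name: str
        hazard: Hazard
        efficiency: float    # how well the test exposes the hazard (0..1)

        @property
        def priority(self) -> float:
            # risk = probability * consequence; priority scales risk by efficiency
            return self.hazard.probability * self.hazard.consequence * self.efficiency

    tests = [
        Test("login-overload", Hazard("auth", 0.3, 9.0), 0.7),
        Test("report-layout", Hazard("ui", 0.6, 2.0), 0.9),
        Test("payment-timeout", Hazard("billing", 0.2, 10.0), 0.8),
    ]
    for t in sorted(tests, key=lambda t: t.priority, reverse=True):
        print("{:16s} priority={:.2f}".format(t.name, t.priority))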
294

On the feasibility of automatic segmentation with active contour models in image databases for shape extraction

Smestad, Ole Marius January 2005
In this thesis the image segmentation system EDISON was tested against an automatic version of snake, an active contour model algorithm. The algorithms were tested against each other to see whether an automatic snake algorithm could be feasible for use in an image database for shape extraction. The conducted tests showed that EDISON yielded the best results, and that snake needs further work before it can be considered.
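
For readers unfamiliar with snakes, the sketch below shows one greedy iteration of a textbook active contour: each contour point moves to the neighbouring pixel that minimises a continuity term plus an attraction term toward strong values in the edge map. It is a generic illustration with assumed weights, not the automatic variant evaluated in the thesis or the EDISON system.

    # Minimal greedy-snake sketch (generic textbook active contour).
    import numpy as np

    def greedy_snake_step(points, edge_map, alpha=1.0, beta=1.0, window=1):
        """Move each contour point to the neighbouring pixel that minimises
        a continuity + edge-attraction energy."""
        new_points = points.copy()
        diffs = np.diff(points, axis=0, append=points[:1])
        mean_dist = np.mean(np.linalg.norm(diffs, axis=1))
        for i, (y, x) in enumerate(points):
            best, best_e = (y, x), np.inf
            prev = new_points[i - 1]
            for dy in range(-window, window + 1):
                for dx in range(-window, window + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < edge_map.shape[0] and 0 <= nx < edge_map.shape[1]):
                        continue
                    cont = (np.hypot(ny - prev[0], nx - prev[1]) - mean_dist) ** 2
                    e = alpha * cont - beta * edge_map[ny, nx]   # strong edge values lower the energy
                    if e < best_e:
                        best_e, best = e, (ny, nx)
            new_points[i] = best
        return new_points

    # toy usage: a circular contour iterated over a synthetic map whose bright
    # square acts as the attractor
    edges = np.zeros((60, 60))
    edges[20:40, 20:40] = 1.0
    theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)
    pts = np.stack([30 + 20 * np.sin(theta), 30 + 20 * np.cos(theta)], axis=1).astype(int)
    for _ in range(30):
        pts = greedy_snake_step(pts, edges)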
295

Preserving privacy in UbiCollab: Extending privacy support in a ubiquitous collaborative environment

Braathen, Anders Magnus, Rasmussen, Hans Steien January 2005
UbiCollab is a platform that supports the development of cooperative applications for collaboration in a ubiquitous environment. The platform enables entities of different types and technologies to work together and share a common set of resources. In a collaborative setting, trust is crucial for creating bonds between the different participants and the system. People using these kinds of systems need to feel secure and trust the system enough to give personal information away and feel that they can control the use of this gathered information. By personal information we mean name, title, email etc., but also location or type of task the user is performing within the system. This thesis explores multiple identities in ubiquitous collaboration, as a mechanism for improving the privacy of UbiCollab. The thesis also explores the building and displaying of a reputation from past collaborative experiences in connection with the different identities. To realize these mechanisms the system allows anonymous access to services by communicating through a privacy proxy. UbiCollab uses a privacy policy description engine that enables negotiation on how private data is gathered and used by the system. The different identities will be supplied with a set of preferences that describes what actions the system is allowed to perform on their personal data. This provides a way to give the user control over the gathering and sharing of personal information. The policy description is based on an adaptation of the P3P standard, designed to suit policy descriptions in a service-based architecture. Privacy extensions to the existing or new services will be easily performed by adding a reference to where the policies can be found. As a counterpart to the P3P policies, the P3P Preference Exchange Language (APPEL) has been incorporated into the platform to allow the users a way to post their privacy preferences. The adapted API has been redefined to better suit the development of UbiCollab applications. The resulting prototype demonstrates the use of these privacy mechanisms and their value to the UbiCollab platform.
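
The policy negotiation described above can be illustrated with a small sketch that matches a simplified P3P-like service policy against simplified APPEL-like user preferences. The data categories, rule format and matching logic below are invented for illustration and do not reflect the actual UbiCollab API or the real P3P/APPEL syntax.

    # Hypothetical illustration of policy/preference matching in the spirit of
    # P3P/APPEL, not the actual UbiCollab API.

    # What the service's (P3P-like) policy declares it will collect and why.
    service_policy = {
        "location": {"purpose": "collaboration", "retention": "session"},
        "email":    {"purpose": "contact",       "retention": "indefinite"},
    }

    # The user's (APPEL-like) preferences for one identity: allowed purposes and
    # the longest retention accepted for each data category.
    user_preferences = {
        "location": {"purposes": {"collaboration"}, "max_retention": "session"},
        "email":    {"purposes": {"contact"},       "max_retention": "session"},
    }

    RETENTION_ORDER = ["none", "session", "indefinite"]

    def refused_categories(policy, prefs):
        """Return the data categories the user would refuse under these preferences."""
        refused = []
        for category, terms in policy.items():
            pref = prefs.get(category)
            if pref is None:
                refused.append(category)       # no preference given: deny by default
                continue
            purpose_ok = terms["purpose"] in pref["purposes"]
            retention_ok = (RETENTION_ORDER.index(terms["retention"])
                            <= RETENTION_ORDER.index(pref["max_retention"]))
            if not (purpose_ok and retention_ok):
                refused.append(category)
        return refused

    print(refused_categories(service_policy, user_preferences))   # ['email'] - retention too long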
296

Dynamic indexes vs. static hierarchies for substring search

Grimsmo, Nils January 2005
This report explores the problem of substring search in a dynamic document set. The operations supported are document inclusion, document removal and queries. This is a well-explored field for word indexes, but not for substring indexes. The contribution of this report is the exploration of a multi-document dynamic suffix tree (MDST), which is compared with a hierarchy of static indexes based on suffix arrays. Only memory-resident data structures are explored. The concept of a "generalised suffix tree", indexing a static set of strings, is used in bioinformatics. The implemented data structure adds online document inclusion, update and removal, in time linear in the size of the single document. Various models for the hierarchy of static indexes are explored, some of which give faster updates and some faster search. For the static suffix arrays, the BPR [SS05] construction algorithm is used, which is the fastest known. This algorithm is about 3-4 times faster than the implemented suffix tree construction. Two tricks for speeding up search and hit reporting in the suffix array are also explored: using a start index for the binary search, and a direct map from global addresses to document IDs and local addresses. The tests show that the MDST is much faster than the hierarchic indexes when the index freshness requirement is absolute and the documents are small. The tree uses about three times as much memory as the suffix arrays. When there is a large number of hits, the suffix arrays are slightly faster at reporting hits, as they have better memory locality. If enough primary memory is available, the MDST seems to be the best choice in general.
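
As an illustration of the static side of the comparison, the sketch below builds a generalised suffix array over several documents by plain sorting (not the BPR algorithm), binary-searches it for a pattern, and maps global suffix positions back to document IDs and local offsets (here with a binary search over document start offsets rather than the direct map mentioned above). Everything in it is a toy reconstruction, not the thesis code.

    # Toy substring search over several documents with a suffix array.
    import bisect

    def build_index(docs):
        """Concatenate the documents with a separator and build a suffix array."""
        text, starts = "", []
        for d in docs:
            starts.append(len(text))
            text += d + "\x00"            # separator assumed not to occur in the data
        suffix_array = sorted(range(len(text)), key=lambda i: text[i:])   # naive build
        return text, suffix_array, starts

    def search(text, suffix_array, starts, pattern):
        """Report (document id, local offset) for every occurrence of pattern."""
        # toy variant: materialise the comparison keys; a real index searches in place
        keys = [text[i:i + len(pattern)] for i in suffix_array]
        lo = bisect.bisect_left(keys, pattern)
        hi = bisect.bisect_right(keys, pattern)
        hits = []
        for i in suffix_array[lo:hi]:
            doc = bisect.bisect_right(starts, i) - 1    # global offset -> document id
            hits.append((doc, i - starts[doc]))
        return sorted(hits)

    docs = ["suffix trees and suffix arrays", "dynamic document sets"]
    text, sa, starts = build_index(docs)
    print(search(text, sa, starts, "suffix"))           # [(0, 0), (0, 17)]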
297

Measuring on Large-Scale Read-Intensive Web sites

Ruud, Jørgen, Tveiten, Olav Gisle January 2005
We have in this thesis continued the work started in our project, i.e. exploring the practical and economic feasibility of assessing the scalability of a read-intensive, large-scale Internet site. To do this we have installed the main components of a news site using open source software. This scalability exploration has been driven by the scaling scenario of increased article size. We have managed to assess the scalability of our system in a good way, but it has been more time-consuming and knowledge-demanding than expected. This means that the feasibility of such a study is lower than we expected, but if the experiences and the method of this thesis are applied, such a study should be more feasible. We have assessed the scalability of a general web architecture, which means that our approach can be applied to all read-intensive web sites and not just the one looked at in our earlier project [prosjekt]. This general focus is one of the strengths of this thesis. One of the objectives of the thesis was to create a resource function workbench (RFW), a framework that aids in the measurement and data interpretation. We feel that the RFW is one of the most important outcomes of this thesis, because it should be easy to reuse, thus saving time for future projects and making such a study more feasible. One of the most important findings is that the impact of increased article size on the throughput is bigger than expected. A small increase in article size, especially image size, leads to a clear decrease in throughput, and this reduction is larger for small image sizes than for large ones. This has wide implications for news sites, as many of them expect to increase the article size and still use the same system. Another major finding is that it is hard to predict the effects a scale-up of one or more components (a non-uniform scaling) will have on the throughput, because the throughput has different levels of dependency on the components at different image and text sizes. As we have seen, the effect of the scale-up on the throughput varied between the different image sizes (an increase in throughput by a factor of 4.5 at image size 100 KB, but only by a factor of 3.2 at image size 300 KB). In our case we have performed a non-uniform scaling, where we have increased the CPU capacity by a factor of 2.4 and the disk capacity by a factor of 1.1. For some image and text sizes the overall throughput was increased by a factor of 10, but for others there was almost no improvement. The implication this has for web sites is that it is hard for them to predict how system alterations will affect the overall throughput, since the effect depends on the current image and article sizes. It was an open question whether or not a dynamic model of the system could be constructed and solved. We have managed to construct the dynamic model, but the predictions it makes are a bit crude. However, we feel that creating a dynamic model has been very useful, and we believe it can make valuable predictions if the accuracy of the parameters is improved. This should be feasible, as our measurements should be easy to recreate. This thesis has been very demanding, because scalability requires a wide field of knowledge (statistics, hardware, software, programming, measurements etc.). This has made the work very instructive, as we have gained knowledge in so many different aspects of computer science. Ideally, the thesis should have had a larger time span, as there are many time-consuming phases it would have been interesting to spend more time on. As a consequence of this short time span, there is some further work that can be conducted in order to gain further valuable knowledge.
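
The finding that a non-uniform scale-up helps at some article sizes and hardly at all at others can be illustrated with a very simple bottleneck model from operational analysis, where the maximum throughput is bounded by the inverse of the largest per-request service demand. The sketch below is our own illustration with invented numbers, not the dynamic model or the measurements from the thesis.

    # Illustrative bottleneck model, showing why a non-uniform scale-up
    # (CPU x2.4, disk x1.1) helps at some article sizes and barely at others.

    def max_throughput(demands):
        """Upper bound on requests/second: 1 / largest per-request service demand."""
        return 1.0 / max(demands.values())

    # invented per-request service demands (seconds) at two article/image sizes
    small_articles = {"cpu": 0.020, "disk": 0.008, "network": 0.005}
    large_articles = {"cpu": 0.015, "disk": 0.030, "network": 0.012}

    def scale(demands, cpu=2.4, disk=1.1):
        return {"cpu": demands["cpu"] / cpu,
                "disk": demands["disk"] / disk,
                "network": demands["network"]}

    for name, d in [("small", small_articles), ("large", large_articles)]:
        before, after = max_throughput(d), max_throughput(scale(d))
        print("{} articles: {:.0f} -> {:.0f} req/s (x{:.1f})".format(
            name, before, after, after / before))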
298

Integrity checking of operating systems with respect to kernel level malware

Melcher, Tobias January 2005
Kernel-mode rootkits represent a considerable threat to any computer system, as they provide an intruder with the ability to hide the presence of his malicious activity. These rootkits make changes to the operating system's kernel, thereby providing particularly stealthy hiding techniques. This thesis addresses the problem of collecting reliable information from a system compromised by kernel-mode rootkits. It looks at the possibility of using virtualization as a means to facilitate kernel-mode rootkit detection through integrity checking. It describes several areas within the Linux kernel that are commonly subverted by kernel-mode rootkits. Further, it introduces the reader to the concept of virtualization, before the kernel-mode rootkit threat is addressed through a description of the rootkits' hiding methodologies. Some of the existing methods for malware detection are also described and analysed. A number of general requirements that need to be satisfied by a general model enabling kernel-mode rootkit detection are identified. A model addressing these requirements is suggested, and a framework implementing the model is set up.
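
The core of integrity checking can be sketched in a few lines: take snapshots of kernel regions that should never change, such as the system call table, hash them, and compare the hashes with a baseline recorded while the system was known to be clean. The Python fragment below illustrates only this comparison step under those assumptions; how the snapshots are acquired (ideally from outside the guest, via the virtual machine monitor) is exactly the hard part the thesis addresses and is not shown here.

    # Sketch of the integrity-checking comparison step only.
    import hashlib

    def fingerprint(region_bytes):
        return hashlib.sha256(region_bytes).hexdigest()

    def check(snapshots, baseline):
        """Names of regions whose current hash differs from the trusted baseline."""
        return [name for name, data in snapshots.items()
                if fingerprint(data) != baseline.get(name)]

    # baseline recorded while the system was known to be clean
    clean_syscall_table = bytes(range(64))
    baseline = {"syscall_table": fingerprint(clean_syscall_table)}

    # later snapshot: one entry has been redirected (simulating a hooked system call)
    hooked = bytearray(clean_syscall_table)
    hooked[8] = 0xFF
    print(check({"syscall_table": bytes(hooked)}, baseline))   # ['syscall_table']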
299

A study of practices for domain and architecture knowledge management in a Scandinavian software company

Person, Anders January 2005
Knowledge management has become increasingly popular in the software industry. Knowledge is one of a software company's main assets, and large amounts of resources are spent managing and re-using this knowledge. Management of architectural knowledge is also important, especially when dealing with software development, because a team with good architectural understanding will have a good chance at efficiently creating re-usable assets. In this thesis I describe how a Scandinavian software company deals with knowledge management. I have also analyzed the management of architectural knowledge. These subjects have been viewed from both the managers' and the employees' points of view, and I have compared the intentions of the managers with how the employees actually perform. The research question, "How is domain- and architecture-knowledge managed in a Scandinavian software company?", is answered by describing and analyzing data gathered through interviews in such a company. The thesis is concluded by summaries of the discussion and the analysis that has been done. My findings in the researched areas suggest that knowledge management practices are important but often underestimated. The company in which I have conducted my research does have a QA team and a re-use culture; this culture is described, but the thesis also points out areas in which the company can improve. The case study is based upon qualitative analysis of the results from eight interviews conducted among managers and developers in the company. In the thesis I discuss the findings and report upon issues such as company culture, routines and goals in the areas of knowledge management. My findings have been generalized, and hopefully other companies can make use of them to improve their own knowledge management processes and goals.
300

Virtual control rooms assisted by 3D sound

Sjøvoll, Håvard January 2005
A large amount of complex and urgent information needs timely attention in an operational environment. This requires specialized systems. These systems should provide immediate access to accurate and pertinent information when troubleshooting or controlling abnormal situations. This study is a collaboration between NTNU and Statoil Research Center. It aims at designing and developing a prototype to improve the operator's awareness of alarms by means of a multi-modal virtual environment. This will be achieved by creating an extension to the virtual model SnøhvitSIM, using a spatial auditory display in addition to visual elements. The auditory display will provide (1) spatial information about new alarms and (2) information about the overall state of the facility. The system also offers (3) beacons to aid navigation within the virtual environment. To reach these goals, a comprehensive literature study was carried out, investigating similar concepts and various techniques for developing such systems. The development was prioritized in the following order, according to the main objectives: (1) design, (2) functionality and (3) architecture. Within the design process, the main focus has been on the spatial auditory display. The development of the prototype proved successful. The feedback on the prototype reflects its value as a showcase for future development, containing new and potentially very effective solutions for tomorrow's alarm management systems.
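
As a rough illustration of the simplest building block of such a spatial auditory display, the sketch below computes constant-power stereo panning gains and a distance roll-off for an alarm source at a given azimuth relative to the operator. This is a generic technique with invented parameters, not the SnøhvitSIM extension described in the thesis.

    # Generic spatial-audio building block: constant-power stereo panning plus
    # simple distance attenuation for a point source.
    import math

    def pan_gains(azimuth, distance, ref_distance=1.0):
        """Return (left_gain, right_gain); azimuth in radians, 0 = straight ahead,
        positive = to the operator's right."""
        # map azimuth in [-pi/2, pi/2] onto a pan angle in [0, pi/2]
        pan = (max(-math.pi / 2, min(math.pi / 2, azimuth)) + math.pi / 2) / 2
        left, right = math.cos(pan), math.sin(pan)
        attenuation = ref_distance / max(distance, ref_distance)   # simple 1/r roll-off
        return left * attenuation, right * attenuation

    # an alarm 30 degrees to the operator's right, 4 metres away
    print(pan_gains(math.radians(30), 4.0))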
