561. Segmentation of Kidneys from MR-Images. Ree, Eirik, January 2005.
A method for semi-automatic segmentation of kidneys from 2D and 3D MR images has been developed. The algorithm combines a watershed segmentation with a model-based segmentation: to address the problem that active contours require a very good initialisation, the result of the watershed segmentation is used to create the initial contours. The outcome is a flexible algorithm that gives good results and can easily be applied to other segmentation tasks as well.
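As an illustration of the two-stage idea described in this abstract, the sketch below first runs a marker-based watershed and then feeds the resulting region boundary to an active contour. It is only a minimal sketch assuming scikit-image and a 2D grayscale slice; the seed points and parameter values are placeholders, not the settings used in the thesis.

```python
# Illustrative sketch only: a watershed result used to initialise an active contour.
# Assumes scikit-image and a 2D grayscale MR slice in `image`.
import numpy as np
from skimage import filters, measure, segmentation

def segment_kidney(image, seed_inside, seed_outside):
    """Rough two-stage segmentation; seeds are (row, col) index tuples."""
    # Stage 1: marker-based watershed on the gradient image.
    gradient = filters.sobel(image)
    markers = np.zeros(image.shape, dtype=np.int32)
    markers[seed_inside] = 1      # user-supplied seed inside the kidney
    markers[seed_outside] = 2     # seed in the background
    labels = segmentation.watershed(gradient, markers)

    # Stage 2: the boundary of the watershed region becomes the initial contour.
    contours = measure.find_contours((labels == 1).astype(float), 0.5)
    init = max(contours, key=len)            # largest boundary as the initial snake
    snake = segmentation.active_contour(
        filters.gaussian(image, sigma=2),     # smoothed image for the snake
        init, alpha=0.015, beta=10.0, gamma=0.001)
    return labels == 1, snake
```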
562. NanoRisc. Rand, Peder, January 2005.
This report gives a short introduction to the Norwegian wireless electronics company Chipcon AS, and goes on to account for the state of the art of small IP processor cores. It then describes the NanoRisc, a powerful processor developed in this project to replace hardware logic modules in future Chipcon designs. The architecture and a VHDL implementation of the NanoRisc are described and discussed, as well as an assembler and an instruction set simulator developed for the NanoRisc. The results of this development work are promising: synthesis shows that the NanoRisc is capable of powerful 16-bit data moving and processing at 50 MHz in a 0.18 µm process while requiring less than 4500 gates. The report concludes that the NanoRisc, unlike the existing IP cores studied, satisfies the requirements for hardware logic replacement in Chipcon transceivers.
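The NanoRisc instruction set is not reproduced in this abstract, so the sketch below only illustrates what an instruction set simulator of this kind does in general: a fetch-decode-execute loop over a small 16-bit ISA. The registers, opcodes and encodings here are invented for illustration and are not the NanoRisc's.

```python
# Hypothetical 16-bit ISA simulator sketch; opcodes and encodings are invented,
# not the NanoRisc instruction set.
def simulate(program, max_steps=1000):
    regs = [0] * 8          # eight 16-bit general-purpose registers
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            break
        word = program[pc]
        op = (word >> 12) & 0xF          # top 4 bits: opcode
        rd = (word >> 8) & 0x7           # destination register
        rs = (word >> 4) & 0x7           # source register
        imm = word & 0xFF                # 8-bit immediate
        if op == 0x0:                    # LDI rd, imm
            regs[rd] = imm
        elif op == 0x1:                  # ADD rd, rs
            regs[rd] = (regs[rd] + regs[rs]) & 0xFFFF
        elif op == 0x2:                  # MOV rd, rs
            regs[rd] = regs[rs]
        elif op == 0xF:                  # HALT
            break
        pc += 1
    return regs

# Example program: r0 = 5; r1 = 7; r0 = r0 + r1  ->  r0 == 12
print(simulate([0x0005, 0x0107, 0x1010, 0xF000]))
```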
563. Empirical study of software evolution. Hagli, Andreas Tørå, January 2005.
Software development is changing rapidly and software systems are increasing in size and expected lifetime. To cope with this, several new languages and development processes have emerged, along with a stronger focus on design, software architecture and development that anticipates evolution and future changes in requirements. There is a clear need for improvement: research shows that the portion of development cost spent on maintenance is increasing and can be as high as 50%. We also see many software systems grow into uncontrollable complexity, where large parts of the system cannot be touched because of the risk of unforeseeable consequences. A clearer understanding of software evolution is therefore needed in order to prevent decay of a system's structure. This thesis approaches the field of software evolution through an empirical study of the open source project Portage from the Gentoo Linux project. Data is gathered, ratified and analysed to study the evolutionary trends of the system. The findings are seen in the context of Lehman's laws on the inevitability of growth and increasing complexity throughout the lifetime of software systems. A set of research questions and hypotheses is formulated and tested. Experience from using open source software for data mining is also presented.
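The kind of trend analysis behind Lehman's growth law can be sketched as below. The release history here is hypothetical (invented module counts, not data mined from Portage in the thesis); it only shows how growth between releases is computed once such data has been gathered.

```python
# Illustrative sketch of a continuing-growth check in the spirit of Lehman's laws.
# The release data below is hypothetical; the thesis mined the Portage repository.
releases = [
    ("2003-12", 5200),   # (release, number of modules/packages)
    ("2004-06", 6100),
    ("2004-12", 7050),
    ("2005-06", 7900),
]

def growth_rates(history):
    """Relative growth between consecutive releases."""
    rates = []
    for (_, prev), (name, curr) in zip(history, history[1:]):
        rates.append((name, (curr - prev) / prev))
    return rates

for release, rate in growth_rates(releases):
    print(f"{release}: {rate:+.1%} growth in module count")
```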
564. A graphical diagram editor plugin for Eclipse. Ulvestad, Kay Are, January 2005.
This document serves the dual purpose of being the first version of the documentation for the Eclipse UIML DiaMODL and UIML Diagram plugins, as well as being the written part of my master's thesis. The documentation part is the main body of the document. It covers the background of the project and describes each of the plugins in detail, in terms of structure and behaviour. In addition, the documentation features a relatively detailed practical guide on how the UIML Diagram plugin can be extended with other language-specific plugins. Since this is part of my master's thesis, there is also a short summary of my personal experiences from working on the project.
565. Code Coverage and Software Reliability. Fugelseth, Lars; Lereng, Stian Frydenlund, January 2005.
With ever-growing competition among software vendors in supplying customers with tailored, high-quality systems, emphasis is put on creating products that are well tested and reliable. During the last decade and a half, numerous articles have been published that deal with code coverage and its effect, whether existent or not, on reliability. The last few years have also seen an increasing number of software tools for automating the collection and presentation of code coverage information for applications under test. In this report we aim to present available and frequently used measures of code coverage, its practical applications, typical misconceptions about it, and its role in software development today. We then look at the notion of reliability in computer systems and at which elements constitute a software reliability model. With the basics of code coverage and reliability estimation in place, we try to assess the status of the relationship between code coverage and reliability, highlight the arguments for and against its existence, and briefly survey a few proposed models for connecting code coverage to reliability. Finally, we examine an open-source tool for automated code coverage analysis, focusing on its implementation of the coverage measures it supports, before assessing the feasibility of integrating a proposed approach for reliability estimation into this software utility.
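Two of the most commonly used coverage measures mentioned here, statement and branch coverage, can be computed as in the sketch below. The execution counts are hypothetical and the sketch is not tied to the specific open-source tool examined in the report.

```python
# Minimal sketch of two common coverage measures (statement and branch coverage),
# computed from hypothetical execution counts.
def statement_coverage(exec_counts):
    """Fraction of statements executed at least once."""
    executed = sum(1 for count in exec_counts.values() if count > 0)
    return executed / len(exec_counts)

def branch_coverage(branch_outcomes):
    """Fraction of branch outcomes (true/false arms) taken at least once."""
    taken = sum(1 for outcome_taken in branch_outcomes.values() if outcome_taken)
    return taken / len(branch_outcomes)

# Hypothetical data: line -> times executed, (branch, outcome) -> taken?
lines = {10: 3, 11: 3, 12: 0, 14: 1}
branches = {("if@11", True): True, ("if@11", False): False}
print(f"statement coverage: {statement_coverage(lines):.0%}")   # 75%
print(f"branch coverage:    {branch_coverage(branches):.0%}")   # 50%
```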
566. A software tool for risk-based testing. Jørgensen, Lars Kristoffer Ulstein, January 2005.
There are several approaches to risk-based testing. What they have in common is that risk is the focus when the tester chooses what to test. In this thesis we combine some of these approaches and present our own method for risk-based testing. The method analyses the risk for each part of the system and uses a hazard analysis to indicate what can go wrong. The test efficiency and the risk together determine each test's priority. We have shown how a software tool can support our method and have implemented a proof of concept. The implementation is presented and has been tried out by an experienced tester.
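The abstract does not give the exact prioritisation formula, so the sketch below assumes one plausible scheme: priority as the product of risk exposure (probability times consequence, from the hazard analysis) and test efficiency. The test cases and values are invented for illustration.

```python
# Sketch of one plausible risk-based prioritisation scheme; the thesis's exact
# formula is not given in the abstract, so this assumes
# priority = probability * consequence * test efficiency.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    probability: float   # likelihood of the hazard occurring (0..1)
    consequence: float   # impact if it occurs, e.g. 1 (low) .. 10 (critical)
    efficiency: float    # how well the test exposes the hazard (0..1)

    @property
    def priority(self) -> float:
        return self.probability * self.consequence * self.efficiency

tests = [
    TestCase("payment rollback", probability=0.3, consequence=9, efficiency=0.8),
    TestCase("login throttling", probability=0.5, consequence=4, efficiency=0.9),
    TestCase("report layout",    probability=0.5, consequence=2, efficiency=0.7),
]
for t in sorted(tests, key=lambda t: t.priority, reverse=True):
    print(f"{t.name:18s} priority {t.priority:.2f}")
```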
567. On the feasibility of automatic segmentation with active contour models in image databases for shape extraction. Smestad, Ole Marius, January 2005.
In this thesis the image segmentation system EDISON was tested against an automatic version of the snake algorithm, which is based on active contour models. The algorithms were tested against each other to see whether an automatic snake algorithm could be feasible for shape extraction in an image database. The tests showed that EDISON yielded the best results, and that the snake algorithm needs further work before it can be considered.
568. Preserving privacy in UbiCollab: Extending privacy support in a ubiquitous collaborative environment. Braathen, Anders Magnus; Rasmussen, Hans Steien, January 2005.
UbiCollab is a platform that supports the development of cooperative applications for collaboration in a ubiquitous environment. The platform enables entities of different types and technologies to work together and share a common set of resources. In a collaborative setting, trust is crucial for creating bonds between the different participants and the system. People using these kinds of systems need to feel secure and trust the system enough to give personal information away, and they need to feel that they can control the use of this gathered information. By personal information we mean name, title, email and so on, but also the location or the type of task the user is performing within the system. This thesis explores multiple identities in ubiquitous collaboration as a mechanism for improving the privacy of UbiCollab. The thesis also explores building and displaying a reputation from past collaborative experiences in connection with the different identities. To realize these mechanisms the system allows anonymous access to services by communicating through a privacy proxy. UbiCollab uses a privacy policy description engine that enables negotiation on how private data is gathered and used by the system. The different identities are supplied with a set of preferences that describes what actions the system is allowed to perform on their personal data. This gives the user control over the gathering and sharing of personal information. The policy description is based on an adaptation of the P3P standard, designed to suit policy descriptions in a service-based architecture. Privacy extensions to existing or new services are easily made by adding a reference to where the policies can be found. As a counterpart to the P3P policies, the P3P Preference Exchange Language (APPEL) has been incorporated into the platform to give users a way to state their privacy preferences. The adapted API has been redefined to better suit the development of UbiCollab applications. The resulting prototype demonstrates the use of these privacy mechanisms and their value to the UbiCollab platform.
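The core of policy negotiation of this kind is checking a service's declared data handling against a user's stated preferences. The sketch below is a much-simplified dictionary model of that check; real P3P policies and APPEL rulesets are XML documents with richer semantics, and the data items, purposes and retention levels here are invented.

```python
# Much-simplified sketch of matching a service's data-handling policy against a
# user's preferences, in the spirit of P3P/APPEL; not the UbiCollab API.
service_policy = {
    "location": {"purpose": "session-awareness", "retention": "session"},
    "email":    {"purpose": "contact",           "retention": "indefinite"},
}

user_preferences = {
    "location": {"allow_purposes": {"session-awareness"}, "max_retention": "session"},
    "email":    {"allow_purposes": {"contact"},           "max_retention": "session"},
}

RETENTION_ORDER = ["none", "session", "limited", "indefinite"]

def evaluate(policy, prefs):
    """Return the data items whose declared handling violates the preferences."""
    violations = []
    for item, handling in policy.items():
        pref = prefs.get(item)
        if pref is None:
            violations.append((item, "no preference stated; blocked by default"))
            continue
        if handling["purpose"] not in pref["allow_purposes"]:
            violations.append((item, f"purpose '{handling['purpose']}' not allowed"))
        if RETENTION_ORDER.index(handling["retention"]) > RETENTION_ORDER.index(pref["max_retention"]):
            violations.append((item, f"retention '{handling['retention']}' too long"))
    return violations

print(evaluate(service_policy, user_preferences))
# -> [('email', "retention 'indefinite' too long")]
```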
569. Dynamic indexes vs. static hierarchies for substring search. Grimsmo, Nils, January 2005.
This report explores the problem of substring search in a dynamic document set. The operations supported are document inclusion, document removal and queries. This is a well-explored field for word indexes, but not for substring indexes. The contribution of this report is the exploration of a multi-document dynamic suffix tree (MDST), which is compared with a hierarchy of static indexes based on suffix arrays. Only memory-resident data structures are explored. The concept of a "generalised suffix tree", indexing a static set of strings, is used in bioinformatics. The implemented data structure adds online document inclusion, update and removal, linear in the size of the single document. Various models for the hierarchy of static indexes are explored, some of which give faster updates and some faster search. For the static suffix arrays, the BPR construction algorithm [SS05] is used, which is the fastest known. This algorithm is about 3-4 times faster than the implemented suffix tree construction. Two tricks for speeding up search and hit reporting in the suffix array are also explored: using a start index for the binary search, and a direct map from global addresses to document IDs and local addresses. The tests show that the MDST is much faster than the hierarchic indexes when the index freshness requirement is absolute and the documents are small. The tree uses about three times as much memory as the suffix arrays. When there is a large number of hits, the suffix arrays are slightly faster at reporting hits, as they have better memory locality. Given enough primary memory, the MDST seems to be the best choice in general.
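As a baseline illustration of the suffix array side of this comparison, the sketch below builds a suffix array with a naive sort (not the BPR algorithm used in the report) and answers a substring query with binary search over the sorted suffixes.

```python
# Minimal sketch of substring search over a suffix array. Construction here is the
# naive sort-of-all-suffixes approach, not BPR. Requires Python 3.10+ for bisect's
# key parameter.
import bisect

def build_suffix_array(text):
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, pattern):
    """All start positions of `pattern`, via binary search over the suffix array."""
    m = len(pattern)
    lo = bisect.bisect_left(sa, pattern, key=lambda i: text[i:i + m])
    hi = bisect.bisect_right(sa, pattern, key=lambda i: text[i:i + m])
    return sorted(sa[lo:hi])

text = "abracadabra"
sa = build_suffix_array(text)
print(find_occurrences(text, sa, "abra"))   # [0, 7]
```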
570. Measuring on Large-Scale Read-Intensive Web sites. Ruud, Jørgen; Tveiten, Olav Gisle, January 2005.
We have in this thesis continued the work started in our project, i.e. to explore the practical and economic feasibility of assessing the scalability of a read-intensive, large-scale Internet site. To do this we have installed the main components of a news site using open source software. The scalability exploration has been driven by a scaling scenario of increased article size. We have managed to assess the scalability of our system in a good way, but it has been more time consuming and knowledge demanding than expected. This means that the feasibility of such a study is lower than we expected, but if the experiences and the method of this thesis are applied, such a study should be more feasible. We have assessed the scalability of a general web architecture, which means that our approach can be applied to all read-intensive web sites and not just the one examined in our project report [prosjekt]. This general focus is one of the strengths of this thesis. One of the objectives of the thesis was to build a resource function workbench (RFW), a framework that aids in the measurement and data interpretation. We feel that the RFW is one of the most important outcomes of this thesis, because it should be easy to reuse, thus saving time for future projects and making such a study more feasible. One of the most important findings is that the impact of increased article size on throughput is bigger than expected. A small increase in article size, especially image size, leads to a clear decrease in throughput, and this reduction is larger for small image sizes than for large ones. This has wide implications for news sites, as many of them expect to increase the article size and still use the same system. Another major finding is that it is hard to predict the effect that a scale-up of one or more components (a non-uniform scaling) will have on throughput, because the throughput depends on the components to different degrees at different image and text sizes. In our case we performed a non-uniform scaling where we increased the CPU capacity by a factor of 2.4 and the disk capacity by a factor of 1.1. The effect of the scale-up on throughput varied between image sizes (an increase in throughput by a factor of 4.5 at an image size of 100 KB, but only by a factor of 3.2 at 300 KB); at some image and text sizes the overall throughput increased by a factor of 10, while at others there was almost no improvement. The implication for web sites is that it is hard to predict how system alterations will affect the overall throughput, as this depends on the current image and article sizes. It was an open question whether or not a dynamic model of the system could be constructed and solved. We have managed to construct the dynamic model, but the predictions it makes are somewhat crude. However, we feel that creating a dynamic model has been very useful, and we believe it can make valuable predictions if the accuracy of the parameters is improved; this should be feasible, as our measurements should be easy to recreate. This thesis has been very demanding, because scalability requires a wide field of knowledge (statistics, hardware, software, programming, measurements etc.). This has made the work very instructive, as we have gained knowledge in many different aspects of computer science. Ideally, the thesis should have had a larger time span, as there are many time-consuming phases on which it would have been interesting to spend more time. As a consequence of this short time span, there is further work that could be conducted in order to gain further valuable knowledge.
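The scaling factors quoted above are ratios of measured throughput before and after the scale-up. The abstract does not give the raw throughput figures, so the request rates in the sketch below are invented purely to show how such factors are derived; only the resulting factors (4.5 at 100 KB, 3.2 at 300 KB) come from the abstract.

```python
# Illustrative arithmetic only: raw throughput values are made up to reproduce the
# reported scaling factors of 4.5 (100 KB images) and 3.2 (300 KB images).
baseline = {100: 40.0, 300: 25.0}     # image size (KB) -> requests/second before scale-up
scaled   = {100: 180.0, 300: 80.0}    # requests/second after CPU x2.4, disk x1.1

for size_kb in sorted(baseline):
    factor = scaled[size_kb] / baseline[size_kb]
    print(f"{size_kb} KB images: throughput scaled by a factor of {factor:.1f}")
# 100 KB images: throughput scaled by a factor of 4.5
# 300 KB images: throughput scaled by a factor of 3.2
```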