DISCOlab: a toolkit for development of shared display systems in UbiCollab. Heitmann, Carsten Andreas. January 2005.
Shared displays are important tools for promoting collaboration. Ubiquitous computing presents new requirements for the design of shared display systems, and contextualisation of information at shared displays is becoming more important. Because shared displays play central roles in collaboration, the ability to rapidly create shared display systems is valuable, but low-level implementation issues common to such systems can be an obstacle to this. A toolkit is therefore needed to provide basic shared display functionality to developers. This master's thesis presents a toolkit for creating shared display applications on UbiCollab, a platform supporting collaborative work in ubiquitous environments. The work shows the development of the toolkit and how it can be used to create a shared display system. The toolkit takes advantage of the opportunities the UbiCollab platform provides for contextualisation of information.
Empirical Testing of a Clustering Algorithm for Large UML Class Diagrams: an Experiment. Haugen, Vidar. January 2005.
One important part of developing information systems is to gain as much insight as possible into the problem, and into possible solutions, at an early phase. To gain this insight, the actors involved need good and understandable models. One popular modeling approach is UML class diagrams. A problem with UML class diagrams is that they tend to become very large when used to model large-scale commercial applications. In the absence of suitable mechanisms for complexity management, such models tend to be represented as single, interconnected diagrams, which are difficult for stakeholders to understand and maintain. Algorithms have been developed for filtering large ER diagrams, and the aim of this project has been to test whether one of these algorithms can be used for filtering UML class diagrams as well. This paper describes a laboratory experiment comparing the efficiency of two representation methods for documentation and maintenance of large data models: the ordinary UML class diagram and the Leveled Data Model. The methods are compared using a range of performance-based and perception-based variables. The results show that the Leveled Data Model is not suited for modeling large generalization hierarchies; for other kinds of relations, the efficiency of the two modeling methods is the same. The participants preferred the ordinary UML diagram for solving the experimental tasks.
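As a toy illustration of the general idea behind such leveling algorithms (not the specific algorithm evaluated in this experiment), the sketch below splits a flat diagram into a top level of highly connected "major" classes, attaching each remaining class to the cluster of an adjacent major class. The degree-based selection rule and the example classes are assumptions made for illustration only.

```python
# A toy sketch, assuming a degree-based notion of "major" classes; the
# actual leveling algorithm tested in the experiment is more refined.
from collections import defaultdict

def level_diagram(edges: list[tuple[str, str]], n_major: int = 2):
    """Split a flat class diagram into a top level of major classes
    plus one cluster of minor classes attached to each major class."""
    degree = defaultdict(int)
    neighbours = defaultdict(set)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
        neighbours[a].add(b)
        neighbours[b].add(a)
    major = set(sorted(degree, key=degree.get, reverse=True)[:n_major])
    clusters = {m: {m} for m in major}
    for cls in degree:
        if cls in major:
            continue
        # attach each minor class to an adjacent major class if one exists
        anchors = neighbours[cls] & major
        home = next(iter(anchors)) if anchors else max(major, key=degree.get)
        clusters[home].add(cls)
    return clusters

edges = [("Order", "Customer"), ("Order", "OrderLine"), ("OrderLine", "Product"),
         ("Customer", "Address"), ("Order", "Invoice")]
print(level_diagram(edges))
```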
Building a Replicated Data Store using Berkeley DB and the Chord DHT. Waagan, Kristian. January 2005.
Peer-to-peer technology is gaining ground in many different application areas. This report describes the task of building a simple distributed and replicated database system on top of the distributed hash table Chord and the database application Berkeley DB Java Edition (JE). The prototype was implemented to support a limited subset of the commands available in JE. The main challenges of realizing the prototype were: (1) integrating the application-level communication with the Chord-level communication; (2) designing and implementing a set of data maintenance protocols required to handle node joins and failures; (3) running tests to verify correct operation; and (4) quantifying basic performance metrics for our local-area test setup. The performance of the prototype is acceptable, considering that network access is taking place and that the prototype has not been optimized. Challenges and features remain: (a) although Chord handles churn reasonably well, the application layer in the current implementation does not; (b) operations that need to access all records in a specific database are not supported; and (c) an effective finger table is required in large networks. The current approach seems well suited for relatively stable networks, but the need to relocate and otherwise maintain data requires more complex protocols in a system with high churn.
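A minimal sketch of the Chord-style key placement such a system builds on follows: nodes and record keys are hashed onto one circular identifier space, and each key is stored at its successor node. The ring size, node names and example key are illustrative assumptions; the prototype's actual JE/Chord integration code is not reproduced here.

```python
# A minimal sketch (not the thesis code) of Chord-style key placement:
# a key is stored on its successor, the first node clockwise from the key.
import hashlib

RING_BITS = 16  # toy identifier space; Chord typically uses 160-bit SHA-1 ids

def chord_id(name: str) -> int:
    """Hash a node address or record key onto the identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** RING_BITS)

def successor(key_id: int, node_ids: list[int]) -> int:
    """Return the id of the node responsible for key_id."""
    ring = sorted(node_ids)
    for nid in ring:
        if nid >= key_id:
            return nid
    return ring[0]  # wrap around the ring

# Example: place a database record on a small ring of four nodes.
nodes = [chord_id(f"node{i}.local") for i in range(4)]
key = chord_id("customer/42")
print(f"key {key} is stored on node {successor(key, nodes)}")
```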
Knowledge acquisition and modelling for knowledge-intensive CBR. Bakke, Elise. January 2005.
This thesis contains a study of state-of-the-art knowledge acquisition and modelling principles and methods for modelling general domain knowledge. This includes Newell's knowledge level, knowledge level modelling, Components of Expertise, CommonKADS and the Protégé meta-tool. The thesis also includes a short introduction to the knowledge-intensive case-based reasoning system TrollCreek. Based on this background, different possible solutions were analysed and compared. Then, after justifying the choices made, a knowledge acquisition method for TrollCreek was created. The method was illustrated through an example, evaluated and discussed.
Adaptive Mobile Work Processes. Hauso, Frode; Røed, Øivind. January 2005.
To our knowledge, no existing systems efficiently support adaptive work processes in a mobile environment. Such systems would increase efficiency and safety in environments where work is inherently mobile and ad hoc and requires input from a set of heterogeneous sources. Traditional work support systems are normally not capable of dynamic change, and plans must be made before work is started. This conflicts with most work processes, which are dynamic and whose plans cannot be completely pre-defined. Current workflow systems are for the most part not capable of handling dynamic change in the workflow process execution; those that are tend to be geared towards long-term adaptability of the workflow process rather than in-situ planning of activities. In this report we provide an overview of current research related to adaptive workflow, activity theory, situated actions, and context-awareness. We then further explore the concepts of adaptive workflow and context-awareness and how they can be implemented in a prototype workflow enactment service, and from this exploration we elicit a set of requirements for such a system. We also provide a possible scenario for usage of adaptive context-aware workflow technology. From these requirements we have created an overall architecture that supports adaptive context-aware mobile work; our focus within this architecture is on context-aware adaptive workflow systems. We finally present the design and implementation of a prototype application supporting context-aware adaptive mobile work processes. This prototype, named PocketFlow, is implemented in Embedded Visual C++ for Microsoft Pocket PC 2003 Second Edition.
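As a minimal illustration of what in-situ adaptability asks of a workflow enactment service, the sketch below models a running process whose pending activities can be changed while earlier activities have already completed. The names and structure are illustrative assumptions and do not reproduce PocketFlow's design.

```python
# A minimal sketch of a runtime-adaptable work process; PocketFlow's
# actual architecture is not reproduced here.
from dataclasses import dataclass, field

@dataclass
class WorkProcess:
    done: list[str] = field(default_factory=list)
    pending: list[str] = field(default_factory=list)

    def complete_next(self) -> str:
        """Enact the next planned activity."""
        activity = self.pending.pop(0)
        self.done.append(activity)
        return activity

    def adapt(self, insert: str, before: str) -> None:
        """In-situ change: plan a new activity before a pending one."""
        self.pending.insert(self.pending.index(before), insert)

process = WorkProcess(pending=["inspect valve", "file report"])
process.complete_next()                                  # "inspect valve" done
process.adapt("photograph leak", before="file report")   # context demands it
print(process.pending)                # ['photograph leak', 'file report']
```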
MPEG-2 video-decoding for OpenRISC. Haugseggen, Morten. January 2005.
This thesis examines three different ways of extending an open hardware processor. One of the methods was used for an actual implementation, while the other two were used for simulation. The hardware acceleration was used to run MPEG-2 decoding on the processor more efficiently. The simulation results were then used to examine the clock frequency, area and power consumption of the new SoC/processor.
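One kernel in which MPEG-2 decoding spends much of its time, and hence a natural target for this kind of hardware acceleration, is the 8x8 inverse DCT used to reconstruct pixel blocks. Below is a naive reference sketch for illustration; it is not the OpenRISC implementation.

```python
# A reference sketch of the 8x8 inverse DCT in MPEG-2 block decoding;
# a hardware-accelerated decoder replaces exactly this kind of loop.
import math

def idct_8x8(block):
    """Naive 2-D inverse DCT of an 8x8 coefficient block (list of lists)."""
    def c(u):  # DCT normalisation factor
        return 1 / math.sqrt(2) if u == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for x in range(8):
        for y in range(8):
            s = 0.0
            for u in range(8):
                for v in range(8):
                    s += (c(u) * c(v) * block[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[x][y] = s / 4
    return out

# A block with only the DC coefficient set decodes to a flat 8x8 patch.
dc_only = [[0.0] * 8 for _ in range(8)]
dc_only[0][0] = 64.0
print(idct_8x8(dc_only)[0][0])  # every sample equals 8.0
```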
Using Commodity Graphics Hardware for Medical Image Segmentation. Botnen, Martin; Ueland, Harald. January 2005.
Modern graphics processing units (GPUs) have evolved into high-performance processors with fully programmable vertex and fragment stages. As their functionality and performance keep increasing, their computational power appeals to more and more programmers. This has led to extensive use of the GPU as a computational resource in general-purpose computing, not just within entertainment applications and computer games. Medical image segmentation involves large volume data sets. It is a time-consuming task, but it is important in the process of detecting and identifying special structures and objects. In this thesis we investigate the possibility of using commodity graphics hardware for medical image segmentation. Using a high-level shading language, and utilizing state-of-the-art technology like the framebuffer object (FBO) extension and a modern programmable GPU, we perform seeded region growing (SRG) on medical volume data. We also implement two pre-processing filters on the GPU, a median filter and a nonlinear anisotropic diffusion filter, along with a volume visualizer that renders volume data. In our work, we managed to port the seeded region growing algorithm from the CPU programming model onto the GPU programming model. The GPU implementation was successful, but we did not get the desired reduction in time consumption: in comparison with an equivalent CPU implementation, we found that the GPU version is outperformed. This is most likely due to the overhead associated with the setup of shaders and render targets (FBOs) while running the SRG, since the algorithm itself has low computational cost. If a more complex and sophisticated method is implemented on the GPU, its computational capacity and parallelism may be better utilized, making a speed-up over a CPU implementation more likely; our work involving a 3D nonlinear anisotropic diffusion filter strongly suggests this.
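The core of seeded region growing is small, which is consistent with the low computational cost noted above. A minimal CPU sketch follows, assuming a 2-D slice and a simple intensity-difference homogeneity criterion; the thesis's GPU version instead structures this growth as iterative shader passes.

```python
# A minimal CPU sketch of seeded region growing on a 2-D slice.
from collections import deque

def seeded_region_growing(image, seed, threshold):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    stays within `threshold` of the seed intensity."""
    h, w = len(image), len(image[0])
    seed_value = image[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_value) <= threshold):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

slice_ = [[10, 11, 50, 52],
          [ 9, 12, 51, 53],
          [10, 10, 49, 50]]
print(sorted(seeded_region_growing(slice_, seed=(0, 0), threshold=5)))
```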
Fast Tree Rendering On The GPU. Kjær, Andreas Solem. January 2005.
Over the last few years, computer graphics hardware has evolved extremely fast, from supporting only a few fixed graphical algorithms to executing dynamic programs supplied by a developer. Only a few years back, all graphics programs were written in assembly language, a nonintuitive low-level programming language. Today such programs can be written in high-level source code that reads almost like written English, making it easier to develop more advanced effects and geometric shapes on the graphics card. This project presents a new way to utilize today's programmable graphics cards to generate and render trees for real-time applications. The emphasis is on generating and rendering the geometry on the graphics hardware, speeding up the calculation of naturally complex shapes in order to offload the system's central processing unit.
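For contrast with the GPU approach, the sketch below generates branching tree geometry the conventional way, recursively on the CPU; moving this per-branch computation onto graphics hardware is the kind of offloading the project targets. The branching angles and scale factors are illustrative choices, not values from the thesis.

```python
# A CPU sketch of recursive branch generation; the per-branch work here
# is what a GPU-based generator would offload from the CPU.
import math

def grow(x, y, angle, length, depth, segments):
    """Append line segments for a branch and its two child branches."""
    if depth == 0 or length < 0.01:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    grow(x2, y2, angle + 0.5, length * 0.7, depth - 1, segments)
    grow(x2, y2, angle - 0.5, length * 0.7, depth - 1, segments)

segments = []
grow(0.0, 0.0, math.pi / 2, 1.0, depth=8, segments=segments)
print(f"{len(segments)} branch segments generated")  # 2^8 - 1 = 255
```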
Segmentation of Neuro Tumours: from MR and Ultrasound Images. Gjedrem, Stian Dalene; Navestad, Gunn Marie. January 2005.
We have implemented and tested methods for segmenting brain tumours from magnetic resonance (MR) and ultrasound data. Our work in this thesis mainly focuses on active contours, both parametric (snakes) and geometric (level sets). Active contours have the advantage over simpler segmentation methods that they take both high- and low-level information into consideration: the result they produce depends on shape as well as on intensity information from the input image. Our work is based on the results of an earlier depth study that investigated different segmentation methods. We have implemented and tested one simplified gradient vector flow snake model and four level set approaches: fast marching level set, geodesic level set, Canny edge level set, and Laplacian level set. The methods are evaluated on the precision of the region boundary, sensitivity to noise, the effort needed to adjust parameters, and the time to perform the segmentation. We have also compared the results with those of a region growing method. We achieved promising results for active contour segmentation methods compared with other, simpler segmentation methods. The simplified snake model has given promising results, but requires more testing. Furthermore, tests with four variants of the level set method have given good results in most cases with MR data and in some cases with ultrasound data.
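Of the level set variants listed, fast marching is the simplest to sketch: arrival times propagate outward from a seed in order of increasing time, using a priority queue, and thresholding the times yields the segmented region. The sketch below is a simplified, Dijkstra-like version on a toy 2-D speed image, not the thesis implementation; a real fast marching solver uses an upwind quadratic update instead.

```python
# A simplified fast marching sketch: arrival times T spread from the seed
# in order of increasing T, with speed F taken from the image. This toy
# version uses the simpler Dijkstra-like update T_neighbour = T + 1/F.
import heapq, math

def fast_marching(speed, seed):
    h, w = len(speed), len(speed[0])
    times = [[math.inf] * w for _ in range(h)]
    times[seed[0]][seed[1]] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > times[r][c]:
            continue  # stale heap entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w:
                t_new = t + 1.0 / speed[nr][nc]
                if t_new < times[nr][nc]:
                    times[nr][nc] = t_new
                    heapq.heappush(heap, (t_new, (nr, nc)))
    return times  # threshold these times to obtain the segmented region

speed = [[1.0, 1.0, 0.1],   # low speed marks a strong edge: the front stalls
         [1.0, 1.0, 0.1],
         [1.0, 1.0, 0.1]]
print(fast_marching(speed, seed=(1, 0)))
```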
Volume-to-volume registration. Harg, Erik. January 2005.
This thesis considers the implementation of automated volume-to-volume registration applications for three separate registration steps desired in enhancing neurosurgical navigation. Prototype implementations for MRI-to-MRI, MRI-to-US and US-to-US registration have been made using registration methods available in the Insight Toolkit, with variants of the Mutual Information similarity metric. The results obtained indicate that automatic volume-to-volume registration using Normalized Mutual Information should be feasible, with sufficient accuracy, for the neuronavigational applications considered here.
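As a minimal illustration of the similarity metric such prototypes optimise, the sketch below computes Normalized Mutual Information between two volumes from their joint intensity histogram (NumPy, toy data); the Insight Toolkit's actual metric classes are not reproduced here.

```python
# A minimal sketch of the Normalized Mutual Information similarity metric,
# computed from a joint intensity histogram; a registration framework
# optimises transform parameters to maximise a metric like this.
import numpy as np

def normalized_mutual_information(fixed, moving, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B); higher when the volumes are aligned."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_fixed = p_joint.sum(axis=1)
    p_moving = p_joint.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(p_fixed) + entropy(p_moving)) / entropy(p_joint)

rng = np.random.default_rng(0)
volume = rng.normal(size=(16, 16, 16))
print(normalized_mutual_information(volume, volume))  # perfectly aligned, NMI = 2
print(normalized_mutual_information(volume, rng.permutation(volume.ravel())))
```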