371. Conservation of attribute data during geometric upgrading of 3D models. Moe, Jon, January 2005.
Existing models of Norwegian offshore platforms are generally incomplete and lack the accuracy needed to eliminate clashes between existing parts of the platform and new systems before construction. Laser scanning is increasingly used to produce models with the necessary accuracy and thus resolve clashes prior to construction. However, these scan models only show the surface of the plant and lack the non-visible attributes, or intelligent information, contained in the existing models of the platform. In this thesis I present how the intelligent information from an existing as-built model can be assigned to a scan model, so that the scan model can replace the old and inaccurate as-built model used today when designing new systems.
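The abstract does not spell out how attributes are carried over, so the following is only a hypothetical sketch of the general idea, assuming scanned objects can be matched to as-built objects by nearest centroid and then inherit their attributes; every type and name in it is invented for the example.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical illustration only (not the method used in the thesis):
 *  carry attribute data from as-built objects over to scanned objects
 *  by matching each scanned object to the nearest as-built centroid. */
public class AttributeTransfer {

    record Point(double x, double y, double z) {
        double distanceTo(Point other) {
            double dx = x - other.x, dy = y - other.y, dz = z - other.z;
            return Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
    }

    record AsBuiltObject(String tag, Point centroid, Map<String, String> attributes) { }

    record ScanObject(String id, Point centroid) { }

    /** Each scanned object inherits the attributes of the closest as-built object. */
    public static Map<String, Map<String, String>> transfer(
            List<ScanObject> scanObjects, List<AsBuiltObject> asBuiltObjects) {
        Map<String, Map<String, String>> assigned = new HashMap<>();
        for (ScanObject scan : scanObjects) {
            AsBuiltObject nearest = null;
            double best = Double.MAX_VALUE;
            for (AsBuiltObject candidate : asBuiltObjects) {
                double d = scan.centroid().distanceTo(candidate.centroid());
                if (d < best) {
                    best = d;
                    nearest = candidate;
                }
            }
            if (nearest != null) {
                assigned.put(scan.id(), nearest.attributes());
            }
        }
        return assigned;
    }
}
```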
372. Java Virtual Machine - Memory-Constrained Copying: Part 1: Main report. Amundsen, Kai Kristian, January 2005.
Atmel is developing a new microcontroller that is capable of running Java programs through an implementation of the Java Virtual Machine. Compared to industry-standard PCs, the microcontroller has limited processing power and main memory. When running interactive programs on this microcontroller it is important that the program interruption time is kept to a minimum. In a Java Virtual Machine the garbage collector is responsible for reclaiming unused main memory and making it available to the Java program again. This process creates a program interruption during which the Java program is halted while the garbage collector works. At the start of the project the Atmel Virtual Machine was using a mark-sweep garbage collector. This collector could produce a program interruption of more than one second and was not suitable for interactive programs. The Memory-Constrained Copying algorithm is a new garbage collection algorithm that is incremental and therefore collects only a small amount of main memory at a time, in contrast to the implemented mark-sweep collector. A theoretical comparison of the mark-sweep algorithm and the Memory-Constrained Copying algorithm was performed. It showed that the mark-sweep algorithm would have a much longer program interruption than the Memory-Constrained Copying algorithm, while the two algorithms should in theory produce equal throughput. The penalty for the short program interruption time of the Memory-Constrained Copying algorithm is its high algorithmic complexity. After a few modifications to the Virtual Machine, the Memory-Constrained Copying algorithm was implemented and tested functionally. To measure the program interruption and throughput of the garbage collection algorithms, a set of benchmarks was chosen; the EDN Embedded Microprocessor Benchmark Consortium Java benchmark suite was selected as the most accurate benchmark available. The practical comparison of the two garbage collection algorithms confirmed the theoretical one: the mark-sweep algorithm produced a worst-case interruption of 3 seconds, while the Memory-Constrained Copying algorithm's maximum program interruption was 44 milliseconds. The benchmarking results confirm the results that the inventors of the Memory-Constrained Copying algorithm achieved in their tests, which were performed not on a microcontroller but on a standard desktop computer. The implementation has also confirmed that it is possible to implement the Memory-Constrained Copying algorithm on a microcontroller. During the implementation of the Memory-Constrained Copying algorithm a hardware bug was found in the microcontroller; it was identified and reported so the hardware could be modified.
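As a rough illustration of why the two collectors interrupt the program so differently, the following schematic Java sketch contrasts a stop-the-world collection with an incremental one that bounds the work done per invocation. The interface, class names and work budget are invented here and do not describe the Atmel VM's internals.

```java
/** Schematic sketch only: the shapes of the pauses matter, not the mechanics. */
interface Collector {
    void collect();  // called when the VM needs memory back
}

/** Stop-the-world style: one call traces and sweeps the entire heap,
 *  so the pause grows with heap size and live-object count. */
class MarkSweepCollector implements Collector {
    public void collect() {
        markAllReachableObjects();  // trace every live object from the roots
        sweepWholeHeap();           // reclaim everything left unmarked
    }
    private void markAllReachableObjects() { /* full heap trace */ }
    private void sweepWholeHeap() { /* full heap sweep */ }
}

/** Incremental style: each call performs at most a fixed amount of copying
 *  or scanning work, then returns control to the Java program, so any single
 *  interruption stays short and collection completes over many calls. */
class IncrementalCopyingCollector implements Collector {
    private static final int WORK_BUDGET = 64;  // bounds one interruption

    public void collect() {
        int work = 0;
        while (work < WORK_BUDGET && hasPendingWork()) {
            copyOneObjectOrScanOneSlot();  // one small, bounded step
            work++;
        }
    }
    private boolean hasPendingWork() { return false; }
    private void copyOneObjectOrScanOneSlot() { }
}
```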
373. Fighting Botnets in an Internet Service Provider Environment. Knutsen, Morten, January 2005.
Botnets are compromised hosts under a common command-and-control infrastructure. These nets have become very popular because of their potential for various kinds of malicious activity: they are frequently used for distributed denial-of-service attacks, spamming, spreading malware and invading privacy. Manually uncovering and responding to such hosts is difficult and costly. In this thesis a technique for uncovering and reporting botnet activity in an internet service provider environment is presented and tested. Using a list of known botnet controllers, an ISP can proactively warn customers of likely compromised hosts while at the same time mitigating future ill effects by severing communications between the compromised host and the controller. A prototype system is developed that routes traffic destined for controllers to a sinkhole host, then analyses and drops the traffic. After using the system in a live environment at the Norwegian research and education network, the technique has proven feasible, and it is used in an incident-response test case, warning two large customers of likely compromised hosts. However, there are challenges in tracking down and following up such hosts, especially "roaming" hosts such as laptops. The scope of the problem is found to be serious, with the expected number of new hosts found being about 75 per day. Considering that the list used represents only part of the controllers actually active on the internet, the need for automated incident response seems clear.
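The core of the approach can be pictured as a simple membership test against the controller list, applied to traffic that has already been routed to the sinkhole. The sketch below is only an illustration of that idea; the class, method names and addresses are made up and do not come from the prototype.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Minimal sketch: flag flows whose destination matches a list of known
 *  botnet controller addresses, as a sinkhole analyser might before
 *  dropping the traffic and reporting the source host. */
public class SinkholeFilter {
    private final Set<String> controllerAddresses;

    public SinkholeFilter(List<String> knownControllers) {
        this.controllerAddresses = new HashSet<>(knownControllers);
    }

    /** A customer host talking to a listed controller is likely compromised. */
    public boolean isBotnetTraffic(String destinationIp) {
        return controllerAddresses.contains(destinationIp);
    }

    public static void main(String[] args) {
        SinkholeFilter filter =
                new SinkholeFilter(List.of("192.0.2.10", "198.51.100.7"));
        String customerHost = "10.0.0.42";
        String destination = "192.0.2.10";  // traffic arriving at the sinkhole
        if (filter.isBotnetTraffic(destination)) {
            System.out.println("Warn customer: " + customerHost
                    + " is likely compromised; dropping traffic to " + destination);
        }
    }
}
```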
374. DISCOlab: a toolkit for development of shared display systems in UbiCollab. Heitmann, Carsten Andreas, January 2005.
Shared displays are important tools for promoting collaboration, and ubiquitous computing presents new requirements for the design of shared display systems; in particular, contextualisation of information at shared displays is becoming more important. The ability to rapidly create shared display systems is motivated by the central role shared displays play in collaboration, but low-level implementation issues common to shared display systems can be an obstacle. A toolkit for creating such systems is therefore needed to provide basic shared display functionality to developers. This master thesis presents a toolkit for creating shared display applications on UbiCollab, a platform supporting collaborative work in ubiquitous environments. The work shows the development of the toolkit and how it can be used to create a shared display system. The toolkit takes advantage of the opportunities the UbiCollab platform provides for contextualisation of information.
375. Empirical Testing of a Clustering Algorithm for Large UML Class Diagrams: an Experiment. Haugen, Vidar, January 2005.
One important part of developing information systems is to gain as much insight as possible about the problem, and possible solutions, in an early phase. To gain this insight, the actors involved need good and understandable models. One popular modeling approach is UML class diagrams. A problem with UML class diagrams is that they tend to get very large when used to model large-scale commercial applications, and in the absence of suitable mechanisms for complexity management such models tend to be represented as single, interconnected diagrams. Diagrams of this kind are difficult for stakeholders to understand and maintain. Algorithms have been developed for filtering large ER diagrams, and the aim of this project has been to test whether one of these algorithms can be used for filtering UML class diagrams as well. This paper describes a laboratory experiment that compares the efficiency of two representation methods for documentation and maintenance of large data models: the ordinary UML class diagram and the Leveled Data Model. The methods are compared using a range of performance-based and perception-based variables. The results show that the Leveled Data Model is not suited for modeling large generalization hierarchies; for other kinds of relations, the efficiency of the two modeling methods is the same. The participants preferred to use the ordinary UML diagram to solve the experimental tasks.
376. Building a Replicated Data Store using Berkeley DB and the Chord DHT. Waagan, Kristian, January 2005.
Peer-to-peer technology is gaining ground in many different application areas. This report describes the task of building a simple distributed and replicated database system on top of the distributed hash table Chord and the database application Berkeley DB Java Edition (JE). The prototype was implemented to support a limited subset of the commands available in JE. The main challenges of realizing the prototype were: (1) integrating the application-level communication with the Chord-level communication; (2) designing and implementing the set of data maintenance protocols required to handle node joins and failures; (3) running tests to verify correct operation; and (4) quantifying basic performance metrics for our local area test setup. The performance of the prototype is acceptable, taking into consideration that network access is taking place and that the prototype has not been optimized. Several challenges and features remain: (a) although Chord handles churn reasonably well, the application layer in the current implementation does not; (b) operations that need to access all records in a specific database are not supported; and (c) an effective finger table is required in large networks. The current approach seems well suited for relatively stable networks, but the need to relocate and otherwise maintain data requires more complex protocols in a system with high churn.
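As background for how such a store places data, the sketch below illustrates Chord-style key placement with successor-list replication: a key is hashed onto the ring and stored on its successor node plus the nodes following it. The class, node identifiers and replication factor are invented for the example and do not reflect the prototype's actual protocols or its use of the JE API.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/** Sketch of Chord-style key placement with successor-list replication. */
public class ChordPlacement {
    private final TreeMap<BigInteger, String> ring = new TreeMap<>();

    public void addNode(String nodeId) {
        ring.put(sha1(nodeId), nodeId);  // node position = hash of its id
    }

    /** The key's successor on the ring and the following nodes hold the record. */
    public List<String> replicasFor(String key, int replicationFactor) {
        List<String> replicas = new ArrayList<>();
        if (ring.isEmpty()) {
            return replicas;
        }
        int copies = Math.min(replicationFactor, ring.size());
        Map.Entry<BigInteger, String> successor = ring.ceilingEntry(sha1(key));
        BigInteger start = (successor != null) ? successor.getKey() : ring.firstKey();
        Iterator<BigInteger> it = ring.tailMap(start, true).keySet().iterator();
        while (replicas.size() < copies) {
            if (!it.hasNext()) {
                it = ring.keySet().iterator();  // wrap around the ring
            }
            replicas.add(ring.get(it.next()));
        }
        return replicas;
    }

    private static BigInteger sha1(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, digest);  // 160-bit ring position
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Under a mapping like this, a node join or failure changes the replica set only for keys near the affected ring position, which is the kind of relocation the report's data maintenance protocols have to handle.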
377. Knowledge acquisition and modelling for knowledge-intensive CBR. Bakke, Elise, January 2005.
This thesis contains a study of state-of-the-art knowledge acquisition and modelling principles and methods for modelling general domain knowledge. This includes Newell's knowledge level, knowledge-level modelling, Components of Expertise, CommonKADS and the Protégé meta-tool. The thesis also includes a short introduction to the knowledge-intensive case-based reasoning system TrollCreek. Based on this background knowledge, different possible solutions were analysed and compared. Then, after justifying the choices made, a knowledge acquisition method for TrollCreek was created. The method was illustrated through an example, evaluated and discussed.
378. Adaptive Mobile Work Processes. Hauso, Frode; Røed, Øivind, January 2005.
Systems that efficiently support adaptive work processes in a mobile environment do not, to our knowledge, exist. Such systems would increase efficiency and safety in environments where work is inherently mobile and ad hoc and requires input from a set of heterogeneous sources. Traditional work support systems are normally not capable of dynamic change, and plans must be made before work is started. This conflicts with most work processes, which are dynamic and whose plans cannot be completely pre-defined. Current workflow systems are for the most part not capable of handling dynamic change in the workflow process execution; those that do exist are geared more towards long-term adaptability of the workflow process than towards in-situ planning of activities. In this report we provide an overview of current research related to adaptive workflow, activity theory, situated actions and context-awareness. We then further explore the concepts of adaptive workflow and context-awareness and how they can be implemented in a prototype workflow enactment service, and from this exploration we elicit a set of requirements for such a system. We also provide a possible scenario for the use of adaptive context-aware workflow technology. From these requirements we have created an overall architecture that supports adaptive context-aware mobile work; our focus within this architecture is on context-aware adaptive workflow systems. We finally present the design and implementation of a prototype application supporting context-aware adaptive mobile work processes. This prototype is named PocketFlow and is implemented in Embedded Visual C++ for Microsoft Pocket PC 2003 Second Edition.
379. MPEG-2 video-decoding for OpenRISC. Haugseggen, Morten, January 2005.
This thesis has examined three different ways of extending an open-hardware processor. One of the methods was used for an actual implementation, while the other two were used for simulation. The hardware acceleration was used to run MPEG-2 decoding on the processor more efficiently. The results from the simulations were then used to examine the clock frequency, area and power consumption of the new SoC/processor.
380. Using Commodity Graphics Hardware for Medical Image Segmentation. Botnen, Martin; Ueland, Harald, January 2005.
Modern graphics processing units (GPUs) have evolved into high-performance processors with fully programmable vertex and fragment stages. As their functionality and performance keep increasing, more programmers are drawn to their computational power, which has led to extensive use of the GPU as a computational resource in general-purpose computing, not just within entertainment applications and computer games. Medical image segmentation involves large volume data sets; it is a time-consuming task, but important for detecting and identifying particular structures and objects. In this thesis we investigate the possibility of using commodity graphics hardware for medical image segmentation. Using a high-level shading language, and utilizing state-of-the-art technology such as the framebuffer object (FBO) extension and a modern programmable GPU, we perform seeded region growing (SRG) on medical volume data. We also implement two pre-processing filters on the GPU, a median filter and a nonlinear anisotropic diffusion filter, along with a volume visualizer that renders volume data. In our work we ported the seeded region growing algorithm from the CPU programming model onto the GPU programming model. The GPU implementation was successful, but we did not get the desired reduction in running time: compared with an equivalent CPU implementation, the GPU version is outperformed. This is most likely due to the overhead associated with the setup of shaders and render targets (FBOs) while running the SRG. The algorithm has low computational cost, and if a more complex and sophisticated method were implemented on the GPU, the computational capacity and parallelism of the GPU would be better utilized; a speed-up over a CPU implementation would then be more likely. Our work on a 3D nonlinear anisotropic diffusion filter strongly suggests this.
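To make the ported algorithm concrete, the following is a plain CPU sketch of seeded region growing on a 2D slice, using a fixed intensity tolerance around the seed value as the homogeneity test. It only illustrates the general algorithm; the thesis operates on 3D volumes in fragment shaders, and its exact growing criterion is not given in the abstract.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** CPU sketch of seeded region growing on a 2D slice. */
public class SeededRegionGrowing {

    public static boolean[][] grow(float[][] image, int seedX, int seedY, float tolerance) {
        int width = image.length;
        int height = image[0].length;
        boolean[][] inRegion = new boolean[width][height];
        float seedValue = image[seedX][seedY];

        Deque<int[]> frontier = new ArrayDeque<>();
        frontier.push(new int[] { seedX, seedY });
        inRegion[seedX][seedY] = true;

        int[][] neighbours = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        while (!frontier.isEmpty()) {
            int[] p = frontier.pop();
            for (int[] n : neighbours) {
                int x = p[0] + n[0];
                int y = p[1] + n[1];
                if (x < 0 || y < 0 || x >= width || y >= height || inRegion[x][y]) {
                    continue;
                }
                // Accept the pixel if its intensity is close enough to the seed's.
                if (Math.abs(image[x][y] - seedValue) <= tolerance) {
                    inRegion[x][y] = true;
                    frontier.push(new int[] { x, y });
                }
            }
        }
        return inRegion;
    }
}
```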