121

A Just-In-Time compilation framework targeting resource-constrained devices

Hansen, Kent January 2005
A framework for JIT compilation that specifically caters to the resource constraints of new generations of small devices is presented. Preliminary results obtained from a prototype implementation show an average speedup of 5.5 over a conventional interpreter-based Java implementation, with only a 15% increase in the static memory footprint and an adjustable, highly predictable dynamic footprint.
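The abstract does not describe the framework's internals; purely as a generic illustration of the interpreter-versus-JIT trade-off it reports on, the following minimal sketch shows counter-triggered compilation, where methods are interpreted until they become hot. All names and the threshold are assumptions, not the thesis' design.

import java.util.function.Function;

// Hypothetical counter-triggered JIT dispatch; all names and the threshold are assumptions.
final class TinyMethod {
    final String name;
    final Function<int[], Integer> interpreted;   // stand-in for interpreting the bytecode
    Function<int[], Integer> compiled;            // stand-in for generated native code
    int calls;

    TinyMethod(String name, Function<int[], Integer> interpreted) {
        this.name = name;
        this.interpreted = interpreted;
    }
}

final class TinyVm {
    private static final int HOT_THRESHOLD = 100; // assumed hotness threshold

    int invoke(TinyMethod m, int[] args) {
        if (m.compiled != null) return m.compiled.apply(args);   // fast path: "native" code
        if (++m.calls >= HOT_THRESHOLD) {
            // "Compilation" here simply reuses the interpreted body; a real JIT emits machine code.
            m.compiled = m.interpreted;
            return m.compiled.apply(args);
        }
        return m.interpreted.apply(args);                         // cold path: interpret
    }

    public static void main(String[] args) {
        TinyVm vm = new TinyVm();
        TinyMethod sum = new TinyMethod("sum", a -> a[0] + a[1]);
        for (int i = 0; i < 200; i++) vm.invoke(sum, new int[]{i, i});
        System.out.println(sum.name + " was flagged hot after " + HOT_THRESHOLD + " calls");
    }
}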
122

Conservation of attribute data during geometric upgrading of 3D models

Moe, Jon January 2005
Existing models of Norwegian offshore platforms are generally incomplete and lack the accuracy needed to eliminate clashes between the existing parts of the platform and new systems before construction. Laser scanning is increasingly used to make models with the necessary accuracy and thus resolve clashes prior to construction. However, these models only show the surface of the plant and do not carry the non-visible attributes or “intelligent information” contained in the existing models of the platform. In this thesis I present how the intelligent information from an existing as-built model can be assigned to a scan model, which can then replace the old and inaccurate as-built model used today during the design of new systems.
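The abstract does not spell out the matching method; one plausible illustration of the idea, offered purely under assumptions, is a nearest-neighbour transfer that copies attribute tags from the closest as-built object onto each scan object.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical nearest-neighbour attribute transfer; the thesis' actual matching may differ.
final class AttributeTransfer {
    record Point(double x, double y, double z) {
        double dist(Point o) {
            double dx = x - o.x, dy = y - o.y, dz = z - o.z;
            return Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
    }
    record ModelObject(String id, Point centroid, Map<String, String> attributes) {}
    record ScanObject(String id, Point centroid) {}

    // Copy attributes from the closest as-built object onto each scan object.
    static Map<String, Map<String, String>> transfer(List<ModelObject> asBuilt, List<ScanObject> scan) {
        Map<String, Map<String, String>> result = new HashMap<>();
        for (ScanObject s : scan) {
            ModelObject best = null;
            for (ModelObject m : asBuilt) {
                if (best == null || m.centroid().dist(s.centroid()) < best.centroid().dist(s.centroid())) {
                    best = m;
                }
            }
            if (best != null) result.put(s.id(), best.attributes());
        }
        return result;
    }
}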
123

Java Virtual Machine - Memory-Constrained Copying : Part 1: Main report

Amundsen, Kai Kristian January 2005
Atmel is developing a new microcontroller that is capable of running Java programs through an implementation of the Java Virtual Machine. Compared to industry-standard PCs, the microcontroller has limited processing power and main memory. When running interactive programs on this microcontroller it is important that the program interruption time is kept to a minimum. In a Java Virtual Machine the garbage collector is responsible for reclaiming unused main memory and making it available to the Java program again. This process creates a program interruption during which the Java program is halted while the garbage collector works. At the start of the project the Atmel Virtual Machine was using a mark-sweep garbage collector. This garbage collector could produce a program interruption greater than one second and was not suitable for interactive programs. The Memory-Constrained Copying algorithm is a new garbage collection algorithm that is incremental and therefore collects only a little main memory at a time, in contrast to the implemented mark-sweep garbage collector. A theoretical comparison of the mark-sweep algorithm and the Memory-Constrained Copying algorithm was performed. This comparison showed that the mark-sweep algorithm would have a much longer program interruption than the Memory-Constrained Copying algorithm. The two algorithms should in theory also produce equal throughput. The penalty for the short program interruption time of the Memory-Constrained Copying algorithm is its high algorithmic complexity. After a few modifications to the Virtual Machine, the Memory-Constrained Copying algorithm was implemented and tested functionally. To test the program interruption and throughput of the garbage collection algorithms, a set of benchmarks was chosen. The EDN Embedded Microprocessor Benchmark Consortium Java benchmark suite was selected as the most accurate benchmark available. The practical comparison of the two garbage collection algorithms showed that the theoretical comparison was correct. The mark-sweep algorithm produced in the worst case an interruption of 3 seconds, while the Memory-Constrained Copying algorithm's maximum program interruption was 44 milliseconds. The results of the benchmarking confirm the results that the inventors of the Memory-Constrained Copying algorithm achieved in their test. Their test was not performed on a microcontroller, but on a standard desktop computer. This implementation has also confirmed that it is possible to implement the Memory-Constrained Copying algorithm on a microcontroller. During the implementation of the Memory-Constrained Copying algorithm a hardware bug was found in the microcontroller. This bug was identified and reported so the hardware could be modified.
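The Memory-Constrained Copying algorithm itself is not reproduced here; the following toy sketch only illustrates the general idea behind incremental collection that the abstract contrasts with stop-the-world mark-sweep: each collector invocation does a bounded amount of work before handing control back to the program. Class names and the work budget are assumptions.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Illustrative incremental marking with a work budget; not the Memory-Constrained Copying algorithm.
final class IncrementalMarker {
    static final class Obj {
        final Set<Obj> refs = new HashSet<>();
        boolean marked;
    }

    private final Deque<Obj> grey = new ArrayDeque<>();   // objects seen but not yet scanned

    void addRoot(Obj root) {
        if (!root.marked) { root.marked = true; grey.push(root); }
    }

    // Scan at most 'budget' objects, then return to the running program; repeat until done.
    boolean markStep(int budget) {
        int work = 0;
        while (!grey.isEmpty() && work < budget) {
            Obj o = grey.pop();
            for (Obj child : o.refs) {
                if (!child.marked) { child.marked = true; grey.push(child); }
            }
            work++;
        }
        return grey.isEmpty();   // true once marking is complete
    }
}

Bounding each step is what keeps the worst-case program interruption in the millisecond range, at the cost of extra bookkeeping between steps.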
124

Fighting Botnets in an Internet Service Provider Environment

Knutsen, Morten January 2005
Botnets are compromised hosts under a common command and control infrastructure. These nets have become very popular because of their potential for various malicious activities. They are frequently used for distributed denial-of-service attacks, spamming, spreading malware and privacy invasion. Manually uncovering and responding to such hosts is difficult and costly. In this thesis a technique for uncovering and reporting botnet activity in an internet service provider environment is presented and tested. Using a list of known botnet controllers, an ISP can proactively warn customers of likely compromised hosts while at the same time mitigating future ill effects by severing communications between the compromised host and the controller. A prototype system is developed to route traffic destined for controllers to a sinkhole host, then analyse and drop the traffic. After using the system in a live environment at the Norwegian research and education network, the technique has proven feasible, and it is used in an incident response test case, warning two large customers of likely compromised hosts. However, there are challenges in tracking down and following up such hosts, especially "roaming" hosts such as laptops. The scope of the problem is found to be serious, with about 75 new hosts expected per day. Considering that the list used represents only part of the controllers actually active on the internet, the need for an automated incident response seems clear.
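A minimal sketch of the core check such a sinkhole performs, assuming a plain in-memory list of controller addresses; the actual prototype's routing and analysis pipeline is not reproduced here, and all names are illustrative.

import java.util.Set;

// Hypothetical sinkhole filter: flag and drop traffic destined for known botnet controllers.
final class SinkholeFilter {
    private final Set<String> controllerIps;

    SinkholeFilter(Set<String> controllerIps) {
        this.controllerIps = controllerIps;
    }

    // Returns true if the packet should be dropped (and the source reported as likely compromised).
    boolean inspect(String srcIp, String dstIp) {
        if (controllerIps.contains(dstIp)) {
            System.out.println("Likely compromised host " + srcIp + " contacted controller " + dstIp);
            return true;   // drop instead of forwarding
        }
        return false;      // normal traffic, forward as usual
    }

    public static void main(String[] args) {
        SinkholeFilter filter = new SinkholeFilter(Set.of("192.0.2.10", "198.51.100.7"));
        System.out.println(filter.inspect("10.0.0.5", "192.0.2.10"));  // true: report and drop
        System.out.println(filter.inspect("10.0.0.5", "203.0.113.1")); // false: forward
    }
}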
125

DISCOlab: a toolkit for development of shared display systems in UbiCollab

Heitmann, Carsten Andreas January 2005
Shared displays are important tools for promoting collaboration. Ubiquitous computing presents new requirements for the design of shared display systems. Contextualisation of information at shared displays is becoming more important. The ability to rapidly create shared display systems is motivated by the fact that shared displays play central roles in collaboration. Low-level implementation issues common to shared display systems can be an obstacle to this. A toolkit for creating such systems is therefore needed to provide basic shared display functionality to developers. This master's thesis presents a toolkit for creating shared display applications on UbiCollab, a platform supporting collaborative work in ubiquitous environments. The work shows the development of the toolkit and how the toolkit can be used to create a shared display system. The toolkit takes advantage of the opportunities the UbiCollab platform provides for contextualisation of information.
126

Empirical Testing of a Clustering Algorithm for Large UML Class Diagrams; an Experiment

Haugen, Vidar January 2005
One important part of developing information systems is to gain as much insight as possible about the problem, and possible solutions, in an early phase. To get this insight, the actors involved need good and understandable models. One popular modelling approach is UML class diagrams. A problem with UML class diagrams is that they tend to get very large when used to model large-scale commercial applications. In the absence of suitable mechanisms for complexity management, such models tend to be represented as single, interconnected diagrams. Diagrams of this kind are difficult for stakeholders to understand and maintain. Algorithms have been developed for filtering large ER diagrams, and the aim of this project has been to test whether one of these algorithms can be used for filtering UML class diagrams as well. This paper describes a laboratory experiment which compares the efficiency of two different representation methods for documentation and maintenance of large data models. The representation methods compared are the ordinary UML class diagram and the Leveled Data Model. The methods are compared using a range of performance-based and perception-based variables. The results show that the Leveled Data Model is not suited for modelling large generalization hierarchies. For other kinds of relations, the efficiency of the two modelling methods is the same. The participants preferred to use the ordinary UML diagram to solve the experimental tasks.
127

Building a Replicated Data Store using Berkeley DB and the Chord DHT

Waagan, Kristian January 2005
Peer-to-peer technology is gaining ground in many different application areas. This report describes the task of building a simple distributed and replicated database system on top of the distributed hash table Chord and the database application Berkeley DB Java Edition (JE). The prototype was implemented to support a limited subset of the commands available in JE. The main challenges of realizing the prototype were: (1) integrating the application-level communication with the Chord-level communication, (2) designing and implementing a set of data maintenance protocols required to handle node joins and failures, (3) running tests to verify correct operation, and (4) quantifying basic performance metrics for our local area test setup. The performance of the prototype is acceptable, taking into consideration that network access takes place and that the prototype has not been optimized. There remain challenges and features to support: (a) although Chord handles churn reasonably well, the application layer in the current implementation does not, (b) operations that need to access all records in a specific database are not supported, and (c) an effective finger table is required in large networks. The current approach seems well suited for relatively stable networks, but the need to relocate and otherwise maintain data requires more complex protocols in a system with high churn.
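A simplified illustration of how a key can be mapped onto a Chord-style identifier ring and replicated on successor nodes; the hash function, identifier space and replication factor below are assumptions, not the prototype's actual protocols.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Illustrative consistent-hashing lookup with successor replication; not the prototype's actual code.
final class TinyRing {
    private static final int REPLICAS = 2;                            // assumed replication factor
    private final SortedMap<Integer, String> ring = new TreeMap<>();  // ring position -> node id

    void addNode(String nodeId) {
        ring.put(hash(nodeId), nodeId);
    }

    // The primary owner of a key is the first node at or after the key's ring position;
    // replicas go to the following distinct successors.
    List<String> nodesFor(String key) {
        List<String> owners = new ArrayList<>();
        int wanted = Math.min(REPLICAS + 1, ring.size());
        Iterator<Integer> it = ring.tailMap(hash(key)).keySet().iterator();
        while (owners.size() < wanted) {
            if (!it.hasNext()) it = ring.keySet().iterator();         // wrap around the ring
            String node = ring.get(it.next());
            if (!owners.contains(node)) owners.add(node);
        }
        return owners;
    }

    private static int hash(String s) {
        return (s.hashCode() & 0x7fffffff) % 1024;                    // toy identifier space
    }

    public static void main(String[] args) {
        TinyRing ring = new TinyRing();
        ring.addNode("nodeA"); ring.addNode("nodeB"); ring.addNode("nodeC");
        System.out.println("key 'user:42' stored on " + ring.nodesFor("user:42"));
    }
}

Node joins and failures then reduce to moving the records whose ring positions change ownership, which is where the maintenance protocols mentioned in the abstract come in.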
128

Knowledge acquisition and modelling for knowledge-intensive CBR

Bakke, Elise January 2005
This thesis contains a study of state-of-the-art knowledge acquisition and modelling principles and methods for modelling general domain knowledge. This includes Newell's knowledge level, knowledge level modelling, Components of Expertise, CommonKADS and the Protégé meta tool. The thesis also includes a short introduction to the knowledge-intensive case-based reasoning system TrollCreek. Based on this background knowledge, different possible solutions were analysed and compared. Then, after justifying the choices made, a knowledge acquisition method for TrollCreek was created. The method was illustrated through an example, evaluated and discussed.
129

Adaptive Mobile Work Processes

Hauso, Frode, Røed, Øivind January 2005
Systems that efficiently provide support for adaptive work processes in a mobile environment do not, to our knowledge, exist. Such systems would increase efficiency and safety in environments where work is inherently mobile, ad hoc, and requires input from a set of heterogeneous sources. Traditional work support systems are normally not capable of dynamic change, and plans must be made before work is started. This conflicts with most work processes, which are dynamic and whose plans cannot be completely pre-defined. Current workflow systems are for the most part not capable of handling dynamic change in workflow process execution. Those that do exist are geared more towards long-term adaptability of the workflow process than towards in-situ planning of activities. In this report we provide an overview of current research related to adaptive workflow, activity theory, situated actions, and context-awareness. Then, we further explore the concepts of adaptive workflow and context-awareness and how they can be implemented in a prototype workflow enactment service. A set of requirements for such a system is elicited from this exploration. We also provide a possible scenario for usage of adaptive context-aware workflow technology. From these requirements we have created an overall architecture that supports adaptive context-aware mobile work. Our focus within this architecture is on context-aware adaptive workflow systems. We finally present the design and implementation of a prototype application supporting context-aware adaptive mobile work processes. This prototype has been named PocketFlow and is implemented in Embedded Visual C++ for Microsoft Pocket PC 2003 Second Edition.
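PocketFlow itself is written in Embedded Visual C++; the following Java sketch only illustrates, under assumed names and events, the idea of an enactment loop that re-plans when a context change invalidates the next activity.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Illustrative enactment loop that re-plans on context change; names and events are assumptions.
final class TinyEnactor {
    record Activity(String name, String requiredContext) {}

    private final Deque<Activity> plan = new ArrayDeque<>();
    private String currentContext;

    TinyEnactor(List<Activity> initialPlan, String initialContext) {
        plan.addAll(initialPlan);
        currentContext = initialContext;
    }

    void onContextEvent(String newContext) {
        currentContext = newContext;                      // e.g. a location or device change
    }

    void step() {
        Activity next = plan.peek();
        if (next == null) return;
        if (!next.requiredContext().equals(currentContext)) {
            // Context no longer matches: defer the activity instead of failing the process.
            plan.addLast(plan.poll());
            System.out.println("Re-planned: deferred " + next.name());
        } else {
            plan.poll();
            System.out.println("Executed " + next.name() + " in context " + currentContext);
        }
    }

    public static void main(String[] args) {
        TinyEnactor e = new TinyEnactor(List.of(
                new Activity("inspect valve", "on-site"),
                new Activity("file report", "office")), "office");
        e.step();                    // "inspect valve" deferred: context does not match
        e.step();                    // "file report" executed in the office context
        e.onContextEvent("on-site"); // worker moves to the platform
        e.step();                    // "inspect valve" now executed
    }
}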
130

MPEG-2 video-decoding for OpenRISC

Haugseggen, Morten January 2005
The thesis examines three different ways of extending an open-hardware processor. One of the methods was used for an actual implementation, while the other two were used for simulation. The hardware acceleration was used to run MPEG-2 decoding on the processor in a more efficient way. The simulation results were then used to evaluate the clock frequency, area and power consumption of the new SoC/processor.
