261

Interactive removal of outlier points in latent variable models using virtual reality

Aurstad, Tore January 2005 (has links)
This report investigates methods from computer graphics and virtual reality that can be applied to build a system for analysing the changes that occur when outlier points are removed from principal component analysis plots. The main results show that animation gives a better understanding of the movement of individual points in the plots before and after removal.
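As an illustration of the kind of before/after analysis the report describes, here is a minimal sketch in Python, assuming NumPy and scikit-learn: PCA scores are computed with and without a suspected outlier, and intermediate frames are interpolated so the movement of each remaining point can be animated. The function names and the linear interpolation are illustrative assumptions, not the thesis' implementation (note that PCA components can flip sign between fits, which a real animation would have to correct for).

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_scores(X):
    """Fit a two-component PCA and return the score-plot coordinates."""
    return PCA(n_components=2).fit_transform(X)

def interpolated_frames(before, after, steps=30):
    """Linearly interpolate each point between its pre- and post-removal
    position, yielding one coordinate array per animation frame."""
    for t in np.linspace(0.0, 1.0, steps):
        yield (1 - t) * before + t * after

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[0] += 10.0                            # plant an artificial outlier

keep = np.arange(len(X)) != 0           # drop point 0 (the outlier)
before = pca_scores(X)[keep]            # scores while the outlier is present
after = pca_scores(X[keep])             # scores after refitting without it
frames = list(interpolated_frames(before, after))
```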
262

Reasoning with sequences of events in knowledge-intensive CBR

Brede, Tore January 2005 (has links)
This thesis presents a framework for representing and reasoning with temporal data in the CBR system TrollCreek. Such functionality will improve TrollCreek's performance on prediction problems, that is: being able to predict what will happen in a new problem situation based on comparisons with stored problem situations. The representation is based on adding one or more timelines to a case. The reasoning mechanism abstracts these timelines into a single timeline, and then compares this timeline with other abstracted timelines in the case base. In this thesis we have implemented a simple non-knowledge-intensive method for this abstraction task, and we have used sequence-comparison methods to compare the timelines. The thesis also contains an example of the framework in use. The example is of a proof-of-concept type and involves an imaginary domain.
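A minimal sketch of the two steps the abstract names, in plain Python: several (timestamp, event) timelines are merged into one chronologically ordered sequence, and two abstracted timelines are compared with a standard sequence-comparison measure. Here `difflib.SequenceMatcher` stands in for the sequence-comparison methods; the TrollCreek representation itself is not shown and all names are illustrative.

```python
from difflib import SequenceMatcher

def abstract_timelines(timelines):
    """Merge several (timestamp, event) timelines into one chronologically
    ordered event sequence -- a simple, non-knowledge-intensive abstraction."""
    merged = sorted((t, e) for timeline in timelines for t, e in timeline)
    return [event for _, event in merged]

def timeline_similarity(a, b):
    """Score two abstracted timelines with a standard sequence comparison."""
    return SequenceMatcher(None, a, b).ratio()

case_a = [[(1, "alarm"), (5, "shutdown")], [(3, "pressure-drop")]]
case_b = [[(2, "alarm"), (4, "pressure-drop"), (9, "restart")]]
print(timeline_similarity(abstract_timelines(case_a),
                          abstract_timelines(case_b)))  # ~0.67: two of three events align
```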
263

An implementation of support for multiple run-time architectures in a packaging system perspective

Heen, Tollef Fog January 2005 (has links)
Multiarch is a mechanism that allows packages for multiple architectures to be installed at the same time, on the same machine, in the same operating system. This paper shows a sample implementation of one way to do this on a UNIX-like system using the dpkg package manager. It shows the changes needed to both packages and the package manager itself.
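A hedged sketch of the central idea, in Python: libraries are installed under architecture-qualified directories so that builds of the same package for different architectures no longer collide on file paths. The triplet directory names follow Debian's later multiarch convention and are assumptions here, not taken from the thesis.

```python
# Illustrative only: rewrite a library path into an architecture-qualified
# location so that, e.g., i386 and amd64 builds of the same shared library
# can be installed side by side.
TRIPLETS = {
    "amd64": "x86_64-linux-gnu",  # assumed mapping, following Debian's
    "i386": "i386-linux-gnu",     # later multiarch convention
}

def multiarch_path(path, arch):
    """Rewrite /usr/lib/libfoo.so into an arch-qualified install path."""
    prefix = "/usr/lib/"
    if not path.startswith(prefix):
        return path                       # leave non-library paths alone
    return prefix + TRIPLETS[arch] + "/" + path[len(prefix):]

assert multiarch_path("/usr/lib/libssl.so.3", "amd64") == \
       "/usr/lib/x86_64-linux-gnu/libssl.so.3"
```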
264

A Just-In-Time compilation framework targeting resource-constrained devices

Hansen, Kent January 2005 (has links)
A framework for JIT compilation that specifically caters to the resource constraints of new generations of small devices is presented. Preliminary results obtained from a prototype implementation show an average speedup of 5.5 over a conventional interpreter-based Java implementation, with only a 15% increase in the static memory footprint and an adjustable, highly predictable dynamic footprint.
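The abstract does not detail the framework's design, but a common way to balance speedup against footprint in mixed-mode execution is to interpret a method until a hotness counter crosses a threshold and only then JIT-compile it. The Python sketch below illustrates that principle under assumed names (`ToyVM`, `HOT_THRESHOLD`); it is not the thesis' mechanism.

```python
HOT_THRESHOLD = 100  # assumed hotness threshold, purely illustrative

class ToyVM:
    """Stand-ins for the interpreter and the JIT compiler."""
    def interpret(self, bytecode):
        return sum(bytecode)              # pretend-interpretation
    def compile(self, bytecode):
        total = sum(bytecode)             # "compile" to a closure
        return lambda: total

class Method:
    def __init__(self, bytecode):
        self.bytecode = bytecode
        self.calls = 0
        self.native = None                # set once JIT-compiled

    def invoke(self, vm):
        self.calls += 1
        if self.native is None and self.calls >= HOT_THRESHOLD:
            self.native = vm.compile(self.bytecode)   # one-time cost
        if self.native is not None:
            return self.native()            # fast path: compiled code
        return vm.interpret(self.bytecode)  # slow path: interpreter

vm, m = ToyVM(), Method([1, 2, 3])
for _ in range(150):
    m.invoke(vm)                          # switches to native at call 100
```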
265

Conservation of attribute data during geometric upgrading of 3D models

Moe, Jon January 2005 (has links)
Existing models of Norwegian offshore platforms are generally incomplete and lack the accuracy needed to eliminate clashes between the existing parts on the platform and new systems before construction. Laser scanning is today used to a growing extent to make models with the necessary accuracy and thus resolve clashes prior to construction. However, these models only show the surface of the plant and do not carry the non-visible attributes or "intelligent information" contained in the existing models of the platform. In this thesis I present how the intelligent information from an existing as-built model can be assigned to a scan model, which can then replace the old and inaccurate as-built model used today during the design of new systems.
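One plausible way to realize the attribute transfer described here is geometric matching: each scanned element is paired with the nearest element of the as-built model, and that element's attribute data is copied over when the two lie within tolerance. The following Python sketch assumes per-object centroids and a distance threshold; both are illustrative simplifications, not the thesis' method.

```python
import math

def nearest(point, candidates):
    """Return the as-built object whose centroid is closest to `point`."""
    return min(candidates, key=lambda c: math.dist(point, c["centroid"]))

def transfer_attributes(scan_objects, asbuilt_objects, max_dist=0.5):
    """Copy attribute data ("intelligent information") from the nearest
    as-built object onto each scan object, within a distance tolerance."""
    for obj in scan_objects:
        match = nearest(obj["centroid"], asbuilt_objects)
        if math.dist(obj["centroid"], match["centroid"]) <= max_dist:
            obj["attributes"] = dict(match["attributes"])
    return scan_objects

asbuilt = [{"centroid": (0.0, 0.0, 2.1),
            "attributes": {"tag": "P-101", "service": "crude oil"}}]
scan = [{"centroid": (0.1, 0.0, 2.0), "attributes": {}}]
transfer_attributes(scan, asbuilt)       # scan[0] now carries tag P-101
```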
266

Java Virtual Machine - Memory-Constrained Copying: Part 1: Main report

Amundsen, Kai Kristian January 2005 (has links)
Atmel is inventing a new microcontroller that is capable of running Java programs through an implementation of the Java Virtual Machine. Compared to industry-standard PCs, the microcontroller has limited processing power and main memory. When running interactive programs on this microcontroller it is important that the program interruption time is kept to a minimum. In a Java Virtual Machine the garbage collector is responsible for reclaiming unused main memory and making it available to the Java program again. This process creates a program interruption where the Java program is halted while the garbage collector is working. At the project start the Atmel Virtual Machine was using the mark-sweep garbage collector. This garbage collector could produce a program interruption greater than one second and was not suitable for interactive programs.

The Memory-Constrained Copying algorithm is a new garbage collection algorithm that is incremental and therefore only collects a small amount of main memory at a time, in contrast to the implemented mark-sweep garbage collector. A theoretical comparison of the mark-sweep algorithm and the Memory-Constrained Copying algorithm was performed. This comparison showed that the mark-sweep algorithm would have a much longer program interruption than the Memory-Constrained Copying algorithm. The two algorithms should in theory also produce equal throughput. The penalty for the short program interruption time in the Memory-Constrained Copying algorithm is its high algorithmic complexity.

After a few modifications to the Virtual Machine, the Memory-Constrained Copying algorithm was implemented and tested functionally. To test the program interruption and throughput of the garbage collection algorithms a set of benchmarks was chosen. The EDN Embedded Microprocessor Benchmark Consortium Java benchmark suite was selected as the most accurate benchmark available. The practical comparison of the two garbage collection algorithms showed that the theoretical comparison was correct. The mark-sweep algorithm produced in the worst case an interruption of 3 seconds, while the Memory-Constrained Copying algorithm's maximum program interruption was 44 milliseconds. The results of the benchmarking confirm the results that the inventors of the Memory-Constrained Copying algorithm achieved in their test, which was performed not on a microcontroller but on a standard desktop computer. This implementation has also confirmed that it is possible to implement the Memory-Constrained Copying algorithm on a microcontroller. During the implementation of the Memory-Constrained Copying algorithm a hardware bug was found in the microcontroller; it was identified and reported so the hardware could be modified.
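To make the pause-time difference concrete: an incremental collector bounds the amount of tracing work done before control returns to the program, whereas mark-sweep traces the whole heap in one stop-the-world pass. The Python sketch below shows only this work-bounding principle under an assumed object model; it is not the Memory-Constrained Copying algorithm itself.

```python
class Obj:
    def __init__(self, *references):
        self.references = list(references)
        self.marked = False

def gc_step(gray_stack, budget=64):
    """Trace at most `budget` objects, then return to the program.
    Returns True once the whole reachable heap has been traced."""
    done = 0
    while gray_stack and done < budget:
        obj = gray_stack.pop()
        for child in obj.references:
            if not child.marked:
                child.marked = True
                gray_stack.append(child)
        done += 1
    return not gray_stack

leaf = Obj(); root = Obj(leaf); root.marked = True
gray = [root]
while not gc_step(gray, budget=1):       # several short pauses...
    pass                                 # ...the program would run between them
```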
267

Fighting Botnets in an Internet Service Provider Environment

Knutsen, Morten January 2005 (has links)
Botnets are compromised hosts under a common command-and-control infrastructure. These nets have become very popular because of their potential for various malicious activity. They are frequently used for distributed denial-of-service attacks, spamming, spreading malware and privacy invasion. Manually uncovering and responding to such hosts is difficult and costly. In this thesis a technique for uncovering and reporting botnet activity in an internet service provider environment is presented and tested. Using a list of known botnet controllers, an ISP can proactively warn customers of likely compromised hosts while at the same time mitigating future ill effects by severing communications between the compromised host and the controller. A prototype system is developed to route traffic destined for controllers to a sinkhole host, then analyse and drop the traffic. After using the system in a live environment at the Norwegian research and education network, the technique has proven to be feasible, and it is used in an incident-response test case, warning two large customers of likely compromised hosts. However, there are challenges in tracking down and following up such hosts, especially "roaming" hosts such as laptops. The scope of the problem is found to be serious, with the expected number of new hosts found to be about 75 per day. Considering that the list used represents only part of the actual controllers active on the internet, the need for automated incident response seems clear.
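A minimal sketch of the sinkhole's analysis step, assuming flows are available as (source, destination) address pairs: traffic routed to the sinkhole is matched against the known-controller list, and one likely-compromised host is reported per internal source. The addresses (documentation IP ranges) and the data layout are illustrative, not taken from the prototype.

```python
KNOWN_CONTROLLERS = {"198.51.100.7", "203.0.113.42"}  # example addresses

def analyse_flows(flows):
    """flows: iterable of (src_ip, dst_ip) pairs seen at the sinkhole.
    Returns the set of likely compromised internal hosts to warn about."""
    compromised = set()
    for src, dst in flows:
        if dst in KNOWN_CONTROLLERS:      # traffic toward a known controller
            compromised.add(src)
    return compromised

print(analyse_flows([("10.0.0.5", "198.51.100.7"),
                     ("10.0.0.5", "203.0.113.42"),
                     ("10.0.1.9", "192.0.2.1")]))   # -> {'10.0.0.5'}
```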
268

DISCOlab: a toolkit for development of shared display systems in UbiCollab

Heitmann, Carsten Andreas January 2005 (has links)
Shared displays are important tools for promoting collaboration. Ubiquitous computing presents new requirements for the design of shared display systems, and contextualisation of information at shared displays is becoming more important. The ability to rapidly create shared display systems is motivated by the fact that shared displays play central roles in collaboration; low-level implementation issues common to shared display systems can be an obstacle to this. A toolkit for creating such systems is therefore needed to provide basic shared display functionality to developers. This master thesis presents a toolkit for creating shared display applications on UbiCollab, a platform supporting collaborative work in ubiquitous environments. The work shows the development of the toolkit and how it can be used to create a shared display system. The toolkit takes advantage of the opportunities the UbiCollab platform provides for contextualisation of information.
269

Empirical Testing of a Clustering Algorithm for Large UML Class Diagrams: An Experiment

Haugen, Vidar January 2005 (has links)
One important part of developing information systems is to gain as much insight as possible about the problem, and possible solutions, in an early phase. To gain this insight, the actors involved need good and understandable models. One popular modeling approach is UML class diagrams. A problem with UML class diagrams is that they tend to get very large when used to model large-scale commercial applications. In the absence of suitable mechanisms for complexity management, such models tend to be represented as single, interconnected diagrams. Diagrams of this kind are difficult for stakeholders to understand and maintain. Algorithms have been developed for filtering large ER diagrams, and the aim of this project has been to test whether one of these algorithms can be used for filtering UML class diagrams as well. This paper describes a laboratory experiment which compares the efficiency of two representation methods for documentation and maintenance of large data models: the ordinary UML class diagram and the Leveled Data Model. The methods are compared using a range of performance-based and perception-based variables. The results show that the Leveled Data Model is not suited for modeling large generalization hierarchies. For other kinds of relations, the efficiency of the two modeling methods is the same. The participants preferred to use the ordinary UML diagram to solve the experimental tasks.
270

Building a Replicated Data Store using Berkeley DB and the Chord DHT

Waagan, Kristian January 2005 (has links)
Peer-to-peer technology is gaining ground in many different application areas. This report describes the task of building a simple distributed and replicated database system on top of the distributed hash table Chord and the database application Berkeley DB Java Edition (JE). The prototype was implemented to support a limited subset of the commands available in JE. The main challenges of realizing the prototype were: (1) integrating the application-level communication with the Chord-level communication; (2) designing and implementing a set of data maintenance protocols required to handle node joins and failures; (3) running tests to verify correct operation; and (4) quantifying basic performance metrics for our local area test setup. The performance of the prototype is acceptable, considering that network access is taking place and that the prototype has not been optimized. There remain challenges and features to support: (a) although Chord handles churn reasonably well, the application layer does not in the current implementation; (b) operations that need to access all records in a specific database are not supported; (c) an effective finger table is required in large networks. The current approach seems well suited for relatively stable networks, but the need to relocate and otherwise maintain data requires more complex protocols in a system with high churn.
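A brief sketch of Chord-style data placement with replication, assuming consistent hashing onto a small identifier ring: a record is stored at the successor of its key's identifier and replicated to the next nodes clockwise, which is what the maintenance protocols must preserve across joins and failures. The hash, ring size and replica count are illustrative assumptions, not the prototype's parameters.

```python
import hashlib
from bisect import bisect_left

M = 2 ** 16  # small identifier ring, for illustration only

def chord_id(value):
    """Hash a node address or record key onto the identifier ring."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % M

def replica_nodes(key, nodes, replicas=3):
    """The successor of the key's identifier stores the record; the next
    `replicas - 1` nodes clockwise hold its replicas."""
    ring = sorted(nodes, key=chord_id)
    ids = [chord_id(n) for n in ring]
    start = bisect_left(ids, chord_id(key)) % len(ring)
    return [ring[(start + i) % len(ring)]
            for i in range(min(replicas, len(ring)))]

nodes = ["node-a:4000", "node-b:4000", "node-c:4000", "node-d:4000"]
print(replica_nodes("customer/42", nodes))  # the three nodes holding the record
```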
