551. Movement detection in shifting light environments
Nygård, Kristin, January 2005
The task of this assignment is to develop an algorithm that can detect movement regardless of the illumination conditions. Handling changes in illumination is an important part of building a stable surveillance system. The problem is approached here by constructing a model of the scene consisting of an expectation value and a standard deviation for each pixel. For every frame tested for movement, a ratio $p$ is calculated from the observed pixel value $x$, the expectation value $\mu$, and the standard deviation $\sigma$. Three different methods were developed that use these $p$ values to detect movement. The method that turned out to work best under all conditions compares the $p$ value of each pixel with the $p$ values of its neighbours. This solution is based on the observation that the relation between the greyscale values of pixels in a small area does not change under illumination shifts. The system was tested both indoors and outdoors. It handles moving shadows and large changes in illumination without triggering too many false alarms, while still detecting movement under different illumination environments. When tested on a uniform scene it detected 87.7% of the movement presented to the system. The hardest movement to detect was dark objects on a dark background: the system struggles when the greyscale value of a moving object becomes too similar to that of the scene. In addition, if the scene contains both areas with monotonous texture and areas with complex texture, the monotonous areas tend to become less sensitive. A proposed solution is to split the region of interest into several smaller areas so that each area is equally sensitive to movement.
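A minimal sketch of the per-pixel model and the neighbour comparison described above, assuming greyscale frames as NumPy arrays; the update rate, the 4-neighbourhood, and the threshold are illustrative choices, not values taken from the thesis.

```python
import numpy as np

def update_model(mean, std, frame, alpha=0.05):
    """Running estimate of the per-pixel expectation value and standard deviation."""
    mean = (1 - alpha) * mean + alpha * frame
    var = (1 - alpha) * std ** 2 + alpha * (frame - mean) ** 2
    return mean, np.sqrt(var)

def detect_movement(frame, mean, std, eps=1e-6, diff_thresh=1.0):
    """Flag pixels whose ratio p deviates from the p of their neighbours.

    p relates the observed pixel value x to the expectation value mu and the
    standard deviation sigma; a global illumination change shifts p for a whole
    neighbourhood, so only local differences in p indicate movement.
    """
    p = (frame - mean) / (std + eps)
    padded = np.pad(p, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return np.abs(p - neighbours) > diff_thresh
```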

552. Autonomous Remote Controlled Helicopter
Bru, Leif Hamang, January 2005
Unmanned Aerial Vehicles (UAVs) have a tremendous appeal. One can imagine a large number of applications such as search-and-rescue, traffic monitoring, aerial mapping, etc. Helicopters are particularly attractive due to their Vertical Take Off and Landing (VTOL) capabilities. Research on UAVs has developed rapidly in recent years and offers a great number of challenges. This thesis is the result of a project which is part of the Autonomous Remote Controlled Helicopter (ARCH) project at the Department of Computer and Information Science, Norwegian University of Science and Technology. The ARCH project has already attracted public interest, having been featured on a television program (Schrödingers katt, NRK, September 2004). The objective of this thesis is threefold. First, to create and describe a remote control system for operating the UAV in semi-autonomous mode, which also enables the UAV to autonomously follow objects (pursuit mode). Second, to create and describe a virtual cockpit to be used with the remote control system. Finally, to create and describe an image stabilization system that stabilizes the visual information sent from the UAV to the ground and to the virtual cockpit. These three components have been combined and integrated into the client prototype called ARCH Groundstation. Together, they provide a platform for an operator to control the ARCH UAV in semi-autonomous mode.

553. Surface-based Markerless Patient Registration
Augdal, Sigmund, January 2005
When performing image-guided surgery, it is important to find a proper alignment between the coordinate system of the images and that of the tracking system that tracks the positions of the surgeon's tools. This report explores surface-based methods for finding such an alignment, using either an optical shape measurement device or surfaces gathered by passing the tracked tool along the surface of the patient. Accuracy and usability factors are explored and compared to existing methods based on finding corresponding points.
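The core of such a surface-based alignment can be sketched as an iterative closest point (ICP) fit between points traced with the tracked tool and the image-derived surface. The sketch below only illustrates that general idea under my own assumptions (a rigid Kabsch fit and SciPy's KD-tree for nearest neighbours); it is not the method implemented in the report.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(tool_points, image_surface, iterations=30):
    """Align (N, 3) tool samples to an (M, 3) image-derived surface point set."""
    tree = cKDTree(image_surface)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        moved = tool_points @ R.T + t
        _, idx = tree.query(moved)              # closest surface point per sample
        dR, dt = rigid_fit(moved, image_surface[idx])
        R, t = dR @ R, dR @ t + dt              # compose the incremental update
    return R, t
```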

554. Using genetic algorithms for improving segmentation
Flaten, Erlend, January 2005
Segmentation is one of the core fields in image processing, and the first difficult step in processing and understanding images. There are dozens of different segmentation algorithms, but each of them has some kind of “Achilles’ heel”, or may be limited to one or a few domains. This paper presents a possible way to avoid the weaknesses of any single segmentation algorithm by making it possible to use several algorithms. The algorithms can be used separately or in sequences. Finally, an application is developed to give some insight into this new way of segmenting.
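One way to search for good sequences of algorithms is with a genetic algorithm over candidate pipelines. The sketch below illustrates that idea only; the operator pool and the fitness function are hypothetical placeholders, not the algorithms or the application from the paper.

```python
import random

OPERATORS = ["median", "otsu", "watershed", "open", "close"]  # hypothetical pool

def random_pipeline(length=4):
    return [random.choice(OPERATORS) for _ in range(length)]

def evolve(fitness, population=40, generations=50, mutation_rate=0.2):
    """Evolve sequences of segmentation steps; fitness scores a pipeline,
    e.g. against manually segmented reference images."""
    pop = [random_pipeline() for _ in range(population)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: population // 2]          # truncation selection
        children = []
        while len(children) < population - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:
                child[random.randrange(len(child))] = random.choice(OPERATORS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```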

555. Implementation of Floating-point Coprocessor
Skogstrøm, Kristian, January 2005
This thesis presents the architecture and implementation of a high-performance floating-point coprocessor for Atmel's new microcontroller. The coprocessor architecture is based on a fused multiply-add pipeline developed in the specialization project, TDT4720. This pipeline has been optimized significantly and extended to support negation of all operands and single-precision input and output. New hardware has been designed for the decode/fetch unit, the register file, the compare/convert pipeline and the approximation tables. Division and square root are performed in software using Newton-Raphson iteration. The Verilog RTL implementation has been synthesized at 167 MHz using a 0.18 µm standard cell library. The total area of the final implementation is 107 225 gates. The coprocessor has also been synthesized together with the CPU. Test programs have been run to verify that the coprocessor works correctly. A complete verification of the floating-point coprocessor has, however, not been performed due to time constraints.
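As a rough illustration of the Newton-Raphson approach mentioned above: division is reduced to refining a table-lookup estimate of the reciprocal with a few multiply-add steps. The sketch below is only a software model; the seed construction stands in for the coprocessor's approximation tables, which are not described in the abstract.

```python
def newton_reciprocal(d, seed, iterations=3):
    """Refine an initial estimate seed ~ 1/d.

    Each step x <- x * (2 - d * x) roughly doubles the number of correct bits,
    so a small lookup-table seed plus a few fused multiply-adds is enough.
    Square root can be handled analogously via a 1/sqrt(d) recurrence.
    """
    x = seed
    for _ in range(iterations):
        x = x * (2.0 - d * x)
    return x

def newton_divide(n, d):
    # Crude stand-in for a hardware approximation table:
    # invert d rounded to one significant digit.
    seed = 1.0 / float(f"{d:.0e}")
    return n * newton_reciprocal(d, seed)
```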

556. Augmented Reality for MR-guided surgery
Karlsen, Jørn Skaarud, January 2005
Intra-operative Magnetic Resonance Imaging is a new modality for image-guided therapy, and Augmented Reality (AR) is an important emerging technology in this field. AR enables the development of tools which can be applied both pre-operatively and intra-operatively, helping users to see into the body, through organs, and to visualize the parts relevant to a specific procedure. The work presented in this paper aims at solving several problems on the way to an Augmented Reality system for real-life surgery in an MR environment. Correctly registering 3D imagery with the real world is the central problem of both Augmented Reality and this thesis, and emphasis is put on the static registration problem. Subproblems include: calibrating a video-see-through Head Mounted Display (HMD) entirely in Augmented Reality, registering a virtual object on a patient by placing a set of points on both the virtual object and the patient, and calculating the transformation needed for two overlapping tracking systems to deliver tracking signals in the same coordinate system. Additionally, problems and solutions related to the visualization of volume data and internal organs are presented: specifically, how to view virtual organs as if they were residing inside the body of a patient through a cut, though no surgical opening of the body has been performed, and how to visualize and manipulate a volume transfer function in a real-time Augmented Reality setting.
Implementations use the Studierstube and OpenTracker software frameworks for visualization and for abstraction of tracking devices, respectively. OpenCV, a computer vision library, is used for image processing and calibration, together with Reg Willson's implementation of Tsai's calibration method. The Augmented Reality based calibration uses two different calibration methods, referred to in the literature as Zhang and Tsai camera calibration, for the intrinsic and extrinsic camera parameters, respectively. Registration of virtual and real objects, and of overlapping tracking systems, is performed using a simplified version of the Iterative Closest Point (ICP) procedure, solving what is commonly referred to as the absolute orientation problem. The virtual-cut implementation works by projecting a rendered texture of a virtual organ onto a mesh representation of a cut which is placed on the patient in Augmented Reality. The volume transfer functions are implemented as Catmull-Rom curves with control points that are movable in Augmented Reality. Histograms visualize the transfer functions as well as the distribution of volume intensities.
Results show that the Augmented Reality based camera calibration procedure suffers from inaccuracies in the sampling of points for extrinsic calibration, due to the dynamics present when wearing an HMD and holding a tracked pen. This type of calibration should instead be done by sampling statically and averaging over several samples to reduce noise. The virtual-real registration and the alignment of overlapping tracking systems are also sensitive to sampling, and care has to be taken to do this accurately. The virtual-cut technique has been shown to increase the feeling of a virtual object residing within the body of a patient, and the volume transfer function became easier to use after implementing the histogram visualization, reducing the time needed to set up a transfer function.
There are many issues which need to be solved in order to set up a useful medical Augmented Reality implementation. This thesis attempts to illustrate some of these problems, and introduces solutions to a few. Further developments are needed in order to bring the results from this paper into a clinical setting, but the possibilities are many if such an integration is achieved.
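The transfer-function part lends itself to a small sketch. Assuming scalar opacity control points and the standard uniform Catmull-Rom basis (the actual Studierstube-based implementation is not shown in the abstract), sampling such a curve into a lookup table could look like this:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one uniform Catmull-Rom segment between p1 and p2, t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1) +
                  (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

def sample_transfer_function(control_points, samples_per_segment=32):
    """Turn movable opacity control points into a dense lookup table."""
    # Duplicate the end points so the curve passes through the first and last control point.
    pts = np.concatenate(([control_points[0]], control_points, [control_points[-1]]))
    table = []
    for i in range(1, len(pts) - 2):                       # one segment per interior pair
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            table.append(catmull_rom(pts[i - 1], pts[i], pts[i + 1], pts[i + 2], t))
    return np.clip(np.array(table), 0.0, 1.0)
```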

557. Computer game based learning - SimComp
Friis, Nicolai, January 2005
This report is the result of a computer architecture simulation game development project. The goals of the project were to develop conceptual ideas for a game that could be used in teaching computer architecture at a university level, and to develop a prototype of the game. The game should be based on simulation and the BSPlab simulator. Two types of simulation games were identified: observer and participant. The observer type puts the player outside the simulation, while the participant type puts the player inside the simulation. The observer type was selected as best suited for a game about computer architecture and simulation. Three conceptual ideas for observer simulation games were developed: Computer Tycoon, which puts the player in charge of a company; Computer Manager, which puts the player in the role of manager of a computer team; and Computer Builder, which lets the player construct a computer city. The Computer Manager idea was developed further. The player is put in the role of the manager of a computer team. The team competes in a league against other teams, playing a series of matches against each other. A ranking system shows how well the teams have done, and at the end of the series a winner is declared, similar to a football league. A simple prototype of the Computer Manager idea was designed and implemented in Java for use in evaluating the idea.

558. Effective Quantification of the Paper Surface 3D Structure
Fidjestøl, Svein, January 2005
This thesis covers image processing in relation to the segmentation and analysis of pores in the three-dimensional surface structure of paper. The successful analysis of pores serves a greater goal of relating such an analysis to the perceived quality of the surface of a paper sample. The first part of the thesis gives an introduction to the context of image processing in paper research. It also provides an overview of ImageJ, the framework used for image-processing plugin development, together with the current status of ImageJ plugins for surface characterization. The second part of the thesis gives an overview of an envisioned future paper quality assessment system. The quality assessment system consists of six phases, three of which are treated in this thesis: the Image Processing phase, the Modeling phase, and the Measurement phase. The Image Processing phase is further divided into three subphases: the Error Correction subphase, the Pore Extraction subphase, and the Segmentation subphase. Alongside the description of each phase, techniques relevant to that phase are presented. The third part of the thesis covers the development of new plugins for surface characterization within the ImageJ framework. Examples are given and evaluated to show the usage and results of each plugin, and each plugin is related to a specific part of the quality assessment system. A tutorial covering the use of several plugins in sequence is also presented. The parts of the system receiving the most attention in relation to plugin development are segmentation and modeling.
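To make the pore-extraction and measurement idea concrete, here is a toy sketch that assumes the surface is available as a height map; the depth threshold and pixel area are placeholders, and the thesis's actual plugins are written for ImageJ (Java), so this Python fragment is only an illustration.

```python
import numpy as np
from scipy import ndimage

def segment_pores(height_map, depth_threshold):
    """Mark regions lying more than depth_threshold below the mean surface level."""
    pores = height_map < (height_map.mean() - depth_threshold)
    labels, count = ndimage.label(pores)          # connected pore regions
    return labels, count

def pore_areas(labels, count, pixel_area=1.0):
    """Area of each pore region, a simple input to later quality measures."""
    counts = ndimage.sum(np.ones_like(labels), labels, index=range(1, count + 1))
    return np.asarray(counts) * pixel_area
```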

559. Location-aware service for the UbiCollab platform
Jensen, Børge Setså, January 2005
Location-aware services have become more important during the last decade due to the increasing mobility and connectivity of users and resources. Location-awareness is an important aspect of making an application context-aware, and when supporting collaboration in a ubiquitous computing environment, taking advantage of location information is an important feature. UbiCollab is a platform that supports collaboration in such an environment. This thesis presents an extension of the UbiCollab platform to make it location-aware, and shows how a location service can be developed to handle storing and querying of location information.
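A minimal sketch of what a store-and-query location service can look like; the class and method names here are my own assumptions for illustration, not the interface of the actual UbiCollab extension.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional
import time

@dataclass
class LocationFix:
    entity_id: str
    place: str                                   # e.g. a room or zone identifier
    timestamp: float = field(default_factory=time.time)

class LocationService:
    """Minimal store-and-query interface for location information."""

    def __init__(self) -> None:
        self._history: Dict[str, List[LocationFix]] = {}

    def report(self, fix: LocationFix) -> None:
        self._history.setdefault(fix.entity_id, []).append(fix)

    def current_location(self, entity_id: str) -> Optional[LocationFix]:
        fixes = self._history.get(entity_id)
        return fixes[-1] if fixes else None

    def who_is_in(self, place: str) -> List[str]:
        return [eid for eid, fixes in self._history.items()
                if fixes and fixes[-1].place == place]
```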

560. Reconstruction of hepatic vessels from CT scans
Eidheim, Ole Christian, January 2005
Deriving liver vessel structure from CT scans manually is time-consuming and error-prone. An automatic procedure that could help the radiologist in her analysis is therefore needed. We present two algorithms to preprocess and segment the hepatic vessels. The first algorithm processes each CT slice individually, while the second applies processing to the whole CT scan at once. Matched filtering and anisotropic diffusion are used to emphasise the blood vessels, and entropy-based thresholding and segmentation by local mean and variance are used to coarsely position the vessels. Node positions and sizes are derived from the skeleton and the distance transform of the segmentation results, respectively. From the skeleton and node data, interconnections are added, forming a vessel graph. Finally, a search is executed to find the most likely vessel graph based on anatomical knowledge. Results have been inspected visually by medical staff and are promising with respect to future clinical use.
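As one concrete piece of the pipeline above, a maximum-entropy (Kapur-style) threshold can provide the coarse vessel/background split; the sketch below is a generic illustration of that technique, not the thesis's implementation, and the bin count is an arbitrary choice. A mask would then be obtained as, for example, `ct_slice > max_entropy_threshold(ct_slice)`.

```python
import numpy as np

def max_entropy_threshold(image, bins=256):
    """Threshold that maximizes the summed entropy of the two intensity classes."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()
    cum = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = cum[t], 1.0 - cum[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[: t + 1] / w0, p[t + 1 :] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))   # background entropy
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))   # vessel entropy
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return edges[best_t + 1]                            # threshold in intensity units
```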
