281

Using genetic algorithms for improving segmentation.

Flaten, Erlend January 2005 (has links)
Segmentation is one of the core fields in image processing, and the first difficult step in processing and understanding images. There are dozens of different segmentation algorithms, but each has some kind of "Achilles' heel" or may be limited to one or a few domains. This paper presents a possible way to avoid the problems of any single segmentation algorithm by making it possible to use several algorithms, separately or in sequences. Finally, an application is developed to give some insight into this new way of segmenting.
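A minimal sketch of the evolutionary loop such an approach implies, assuming a genetic algorithm that evolves sequences of named segmentation steps. The operation names and the fitness hook below are illustrative, not the thesis's actual algorithm set:

```python
import random

# Hypothetical pool of segmentation steps; the thesis's actual algorithms
# are not listed in the abstract, so these names are illustrative only.
OPERATIONS = ["threshold", "region_grow", "watershed", "median_filter", "morph_open"]

def evolve_pipeline(fitness, pop_size=20, seq_len=3, generations=50,
                    mutation_rate=0.1):
    """Evolve a sequence of segmentation steps.

    `fitness` is a caller-supplied function mapping a sequence of
    operation names to a score (e.g. Dice overlap with a reference mask).
    """
    population = [[random.choice(OPERATIONS) for _ in range(seq_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, seq_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:     # point mutation
                child[random.randrange(seq_len)] = random.choice(OPERATIONS)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Toy usage: a placeholder fitness preferring morphological clean-up steps.
best = evolve_pipeline(lambda seq: sum(1 for op in seq if op == "morph_open"))
print(best)
```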
282

Implementation of Floating-point Coprocessor

Skogstrøm, Kristian January 2005 (has links)
This thesis presents the architecture and implementation of a high-performance floating-point coprocessor for Atmel's new microcontroller. The coprocessor architecture is based on a fused multiply-add pipeline developed in the specialization project, TDT4720. This pipeline has been optimized significantly and extended to support negation of all operands and single-precision input and output. New hardware has been designed for the decode/fetch unit, the register file, the compare/convert pipeline and the approximation tables. Division and square root are performed in software using Newton-Raphson iteration. The Verilog RTL implementation has been synthesized at 167 MHz using a 0.18 µm standard cell library. The total area of the final implementation is 107 225 gates. The coprocessor has also been synthesized with the CPU. Test programs have been run to verify that the coprocessor works correctly. A complete verification of the floating-point coprocessor, however, has not been performed due to time constraints.
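As a worked illustration of the software division and square root, the following sketch shows Newton-Raphson refinement of a reciprocal seed, the scheme that maps naturally onto a fused multiply-add pipeline with table-based initial approximations. The seeds and iteration counts here are illustrative; the coprocessor's actual table contents are not given in the abstract:

```python
def nr_divide(a, d, x0, iterations=3):
    """Compute a/d via Newton-Raphson refinement of the reciprocal of d.

    x0 is a coarse seed for 1/d (on the coprocessor this would come from
    the approximation tables mentioned above). Each iteration
    x = x*(2 - d*x) roughly doubles the number of correct bits and maps
    onto fused multiply-add operations.
    """
    x = x0
    for _ in range(iterations):
        e = 2.0 - d * x     # one fused multiply-add
        x = x * e           # refine the reciprocal estimate
    return a * x

def nr_sqrt(d, x0, iterations=3):
    """Square root via the reciprocal square root: x ~ 1/sqrt(d),
    refined with x = x*(3 - d*x*x)/2, then sqrt(d) = d*x."""
    x = x0
    for _ in range(iterations):
        x = x * (3.0 - d * x * x) * 0.5
    return d * x

# Toy usage with crude seeds; real seeds come from a lookup table.
print(nr_divide(1.0, 3.0, x0=0.3))   # ~0.3333333333
print(nr_sqrt(2.0, x0=0.7))          # ~1.4142135624
```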
283

Augmented Reality for MR-guided surgery

Karlsen, Jørn Skaarud January 2005 (has links)
Intra-operative Magnetic Resonance Imaging is a new modality for image-guided therapy, and Augmented Reality (AR) is an important emerging technology in this field. AR enables tools that can be applied both pre-operatively and intra-operatively, helping users see into the body, through organs, and visualize the parts relevant to a specific procedure. The work presented in this paper aims at solving several problems on the way to an Augmented Reality system for real-life surgery in an MR environment. Specifically, correctly registering 3D imagery with the real world is the major problem of both Augmented Reality and this thesis, and emphasis is put on the static registration problem. Its subproblems include: calibrating a video see-through Head Mounted Display (HMD) entirely in Augmented Reality, registering a virtual object on a patient by placing a set of corresponding points on both, and calculating the transformation needed for two overlapping tracking systems to deliver tracking signals in the same coordinate system. Additionally, problems and solutions related to the visualization of volume data and internal organs are presented: specifically, how to view virtual organs as if they were residing inside the body of a patient through a cut, though no surgical opening of the body has been performed, and how to visualize and manipulate a volume transfer function in a real-time Augmented Reality setting.
The implementations use the Studierstube and OpenTracker software frameworks for visualization and for abstraction of tracking devices, respectively. OpenCV, a computer vision library, is used for image processing and calibration, together with Reg Willson's implementation of Tsai's calibration method. The Augmented Reality based calibration uses two different methods, referred to in the literature as Zhang and Tsai camera calibration, for the intrinsic and extrinsic camera parameters respectively. Registration of virtual to real objects and of overlapping tracking systems is performed using a simplified version of the Iterative Closest Point (ICP) procedure, solving what is commonly referred to as the absolute orientation problem. The virtual-cut implementation works by projecting a rendered texture of a virtual organ onto a mesh representation of a cut placed on the patient in Augmented Reality. The volume transfer functions are implemented as Catmull-Rom curves with control points that are movable in Augmented Reality; histograms represent both the transfer functions and the distribution of volume intensities.
Results show that the Augmented Reality based camera calibration procedure suffers from inaccuracies in the sampling of points for extrinsic calibration, due to the motion inherent in wearing an HMD and holding a tracked pen; such calibration should instead sample statically and average over several samples to reduce noise. The virtual-to-real registration and the alignment of overlapping tracking systems are also sensitive to sampling, and care has to be taken to sample accurately. The virtual-cut technique has been shown to increase the impression of a virtual object residing within the body of a patient, and the volume transfer function became easier to use after the histogram visualization was implemented, reducing the time needed to set up a transfer function.
There are many issues to solve before a useful medical Augmented Reality system can be set up. This thesis illustrates some of these problems and introduces solutions to a few. Further development is needed to bring the results of this paper into a clinical setting, but the possibilities are many if such an integration is achieved.
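A common closed-form solver for the absolute orientation problem mentioned above is the SVD-based (Kabsch/Horn-style) method sketched below. The abstract does not name the thesis's exact solver, so treat this as an assumption rather than the thesis's method:

```python
import numpy as np

def absolute_orientation(src, dst):
    """Find rotation R and translation t such that R @ src + t ~ dst.

    src, dst: (3, N) arrays of corresponding points, e.g. landmarks
    sampled on the virtual object and on the patient, or in the two
    tracking systems' coordinate frames.
    """
    src_c = src - src.mean(axis=1, keepdims=True)   # centre both clouds
    dst_c = dst - dst.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd(src_c @ dst_c.T)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                               # guard against reflection
    t = dst.mean(axis=1) - R @ src.mean(axis=1)
    return R, t

# Toy usage: recover a known rigid transform from exact samples.
rng = np.random.default_rng(0)
pts = rng.normal(size=(3, 10))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
R, t = absolute_orientation(pts, R_true @ pts + np.array([[1], [2], [3]]))
print(np.allclose(R, R_true))  # True
```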
284

Computer game based learning - SimComp

Friis, Nicolai January 2005 (has links)
This report is the result of a project to develop a computer architecture simulation game. The goals of the project were to develop conceptual ideas for a game that could be used in teaching computer architecture at university level, and to develop a prototype of the game. The game should be based on simulation and the BSPlab simulator. Two types of simulation games were identified: observer and participant. The observer type puts the player outside the simulation; the participant type puts the player inside it. The observer type was selected as best suited for a game about computer architecture and simulation. Three conceptual ideas for observer simulation games were developed: Computer Tycoon, which puts the player in charge of a company; Computer Manager, which puts the player in the role of manager of a computer team; and Computer Builder, which lets the player construct a computer city. The Computer Manager idea was developed further. The player is put in the role of the manager of a computer team. The team competes in a league against other teams, playing a series of matches against each other. A ranking system shows how well the teams have done, and at the end of the series a winner is declared, much like a football league. A simple prototype of the Computer Manager idea was designed and implemented in Java for use in evaluating the idea.
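A hedged sketch of the league mechanic described above: a round-robin schedule with football-style points. The match model here is a random placeholder; in the prototype, matches would be driven by BSPlab simulations of the teams' computer designs:

```python
import itertools
import random

# Hypothetical stand-in for a BSPlab-driven match: the outcome is random,
# weighted by a single performance number per team.
def play_match(perf_a, perf_b):
    return random.random() * perf_a - random.random() * perf_b

def run_league(teams):
    """Round-robin league: every team meets every other team once.
    Win = 3 points, draw = 1, as in a football league."""
    points = {name: 0 for name in teams}
    for a, b in itertools.combinations(teams, 2):
        margin = play_match(teams[a], teams[b])
        if abs(margin) < 0.05:
            points[a] += 1
            points[b] += 1
        elif margin > 0:
            points[a] += 3
        else:
            points[b] += 3
    return sorted(points.items(), key=lambda kv: kv[1], reverse=True)

table = run_league({"Alpha": 1.0, "Beta": 1.2, "Gamma": 0.9})
for rank, (team, pts) in enumerate(table, start=1):
    print(rank, team, pts)
```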
285

Effective Quantification of the Paper Surface 3D Structure

Fidjestøl, Svein January 2005 (has links)
This thesis covers image processing in relation to the segmentation and analysis of pores in the three-dimensional surface structure of paper. The successful analysis of pores serves the greater goal of relating such analysis to the perceived quality of the surface of a paper sample. The first part of the thesis introduces the context of image processing in paper research. It also gives an overview of ImageJ, the framework used for image-processing plugin development, together with the current status of ImageJ plugins for surface characterization. The second part describes an envisioned future paper quality assessment system. The system consists of six phases, three of which are treated in this thesis: the Image Processing phase, the Modeling phase, and the Measurement phase. The Image Processing phase is further divided into three subphases: Error Correction, Pore Extraction, and Segmentation. Techniques relevant to each phase are presented alongside its description. The third part covers the development of new plugins for surface characterization within the ImageJ framework. Examples show and evaluate the usage and results of each plugin, each plugin is related to a specific part of the quality assessment system, and a tutorial covers the use of several plugins in sequence. The parts of the system receiving the most attention in plugin development are segmentation and modeling.
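To make the Pore Extraction and Segmentation subphases concrete, here is a small sketch in Python. The thesis's own plugins are implemented in ImageJ/Java, and the window size and depth threshold below are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def extract_pores(height_map, depth_threshold):
    """Segment pores from a paper-surface height map.

    A pore is taken to be a connected region lying more than
    depth_threshold below the local mean surface level.
    """
    local_mean = ndimage.uniform_filter(height_map, size=15)  # illustrative window
    pore_mask = height_map < (local_mean - depth_threshold)
    labels, n_pores = ndimage.label(pore_mask)                # connected components
    idx = range(1, n_pores + 1)
    # Per-pore measurements: area in pixels and maximum depth below the mean.
    areas = np.asarray(ndimage.sum_labels(pore_mask, labels, index=idx))
    depths = -np.asarray(ndimage.minimum(height_map - local_mean, labels, index=idx))
    return labels, areas, depths

# Toy usage on a synthetic surface with one rectangular dent.
surface = np.zeros((64, 64))
surface[20:24, 30:35] = -3.0
labels, areas, depths = extract_pores(surface, depth_threshold=1.0)
print(len(areas), areas, depths)    # one pore, 20 px, depth ~2.7
```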
286

Location-aware service for the UbiCollab platform

Jensen, Børge Setså January 2005 (has links)
Location-aware services have become more important during the last decade due to the increasing mobility and connectivity of users and resources. Location awareness is an important aspect of making an application context-aware, and when supporting collaboration in a ubiquitous computing environment, taking advantage of location information is an important feature. The UbiCollab platform supports collaboration in such an environment. This thesis presents an extension of the UbiCollab platform to make it location-aware, showing how a location service can be developed to handle storing and querying of location information.
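The abstract does not specify the service's interface, so the following is a hypothetical minimal sketch of a store-and-query location service of the kind described, not the platform's actual API:

```python
import time
from collections import defaultdict

class LocationService:
    """Minimal sketch of a store-and-query location service."""

    def __init__(self):
        self._positions = {}                 # entity id -> (place, timestamp)
        self._occupants = defaultdict(set)   # place -> entity ids

    def update(self, entity, place):
        """Store an entity's new location, dropping the old one."""
        old = self._positions.get(entity)
        if old:
            self._occupants[old[0]].discard(entity)
        self._positions[entity] = (place, time.time())
        self._occupants[place].add(entity)

    def where_is(self, entity):
        """Query the last known (place, timestamp) of an entity."""
        return self._positions.get(entity)

    def who_is_at(self, place):
        """Query which entities are currently registered at a place."""
        return set(self._occupants[place])

svc = LocationService()
svc.update("alice", "meeting-room-2")
svc.update("bob", "meeting-room-2")
print(svc.who_is_at("meeting-room-2"))   # {'alice', 'bob'}
```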
287

Reconstruction of hepatic vessels from CT scans

Eidheim, Ole Christian January 2005 (has links)
Deriving liver vessel structure from CT scans manually is time-consuming and error-prone, so an automatic procedure that can help the radiologist in her analysis is needed. We present two algorithms to preprocess and segment the hepatic vessels: the first processes each CT slice individually, while the second processes the whole CT scan at once. Matched filtering and anisotropic diffusion are used to emphasise the blood vessels, and entropy-based thresholding and segmentation by local mean and variance are used to coarsely locate them. Node positions and sizes are derived from the skeleton and from the distance transform of the segmentation results, respectively. From the skeleton and node data, interconnections are added to form a vessel graph. Finally, a search finds the most likely vessel graph based on anatomical knowledge. Results have been inspected visually by medical staff and are promising with respect to future clinical use.
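A small sketch of the node-extraction step described above, in 2D for brevity: skeleton pixels give candidate node positions, and the Euclidean distance transform at a skeleton pixel approximates the local vessel radius. The thesis works on full 3D CT volumes; this 2D version is only illustrative:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def vessel_nodes(vessel_mask):
    """Derive candidate graph nodes from a binary vessel segmentation.

    Returns (row, col, radius) triples: positions from the skeleton,
    sizes from the distance transform, as in the pipeline above.
    """
    skeleton = skeletonize(vessel_mask)
    radius = ndimage.distance_transform_edt(vessel_mask)
    ys, xs = np.nonzero(skeleton)
    return [(y, x, radius[y, x]) for y, x in zip(ys, xs)]

# Toy usage: a horizontal "vessel" five pixels thick.
mask = np.zeros((20, 40), dtype=bool)
mask[8:13, 5:35] = True
nodes = vessel_nodes(mask)
print(len(nodes), nodes[0])   # skeleton pixels with radius ~2-3
```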
288

Segmentation of Kidneys from MR-Images

Ree, Eirik January 2005 (has links)
A method has been developed for semi-automatic segmentation of kidneys from 2D and 3D MR images. The algorithm combines watershed segmentation with model-based segmentation: because active contours require a very good initialisation, the result of the watershed segmentation is used to create the initial contours. The result is a flexible algorithm that gives good results and can easily be applied to other segmentation tasks as well.
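A minimal 2D sketch of the watershed-then-active-contour idea, using scikit-image and assuming two user-placed seed points for the semi-automatic step. The parameter values are illustrative, not the thesis's:

```python
import numpy as np
from skimage import measure, segmentation, filters

def segment_kidney(image, inner_marker, outer_marker):
    """Watershed followed by an active contour.

    inner_marker/outer_marker are (row, col) seeds placed inside and
    outside the kidney. The watershed result supplies the initial
    contour that active contours otherwise struggle to obtain.
    """
    gradient = filters.sobel(image)                 # edges guide the watershed
    markers = np.zeros(image.shape, dtype=int)
    markers[inner_marker] = 1
    markers[outer_marker] = 2
    labels = segmentation.watershed(gradient, markers)
    mask = labels == 1
    # The longest boundary of the watershed region becomes the initial snake.
    init = max(measure.find_contours(mask.astype(float), 0.5), key=len)
    snake = segmentation.active_contour(filters.gaussian(image, 3),
                                        init, alpha=0.015, beta=10)
    return mask, snake

# Toy usage: a bright disc on a dark background.
yy, xx = np.mgrid[0:100, 0:100]
img = ((yy - 50) ** 2 + (xx - 50) ** 2 < 400).astype(float)
mask, snake = segment_kidney(img, inner_marker=(50, 50), outer_marker=(5, 5))
print(mask.sum(), snake.shape)
```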
289

NanoRisc

Rand, Peder January 2005 (has links)
This report gives a short introduction to the Norwegian wireless electronics company Chipcon AS, and goes on to review the state of the art of small IP processor cores. It then describes the NanoRisc, a powerful processor developed in this project to replace hardware logic modules in future Chipcon designs. The architecture and a VHDL implementation of the NanoRisc are described and discussed, as are an assembler and an instruction set simulator developed for it. The results of this development work are promising: synthesis shows that the NanoRisc is capable of powerful 16-bit data moving and processing at 50 MHz in a 0.18 µm process while requiring less than 4500 gates. The report concludes that the NanoRisc, unlike the existing IP cores studied, satisfies the requirements for hardware logic replacement in Chipcon transceivers.
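A toy instruction-set-simulator skeleton in the spirit of the one developed alongside the NanoRisc. The three-instruction ISA below is purely illustrative, as the abstract does not describe the real instruction set:

```python
def run(program, steps=100):
    """Interpret a list of instruction tuples on a toy 16-bit machine."""
    regs = [0] * 16          # 16 general-purpose 16-bit registers (assumed)
    pc = 0
    for _ in range(steps):
        if pc >= len(program):
            break
        op, *args = program[pc]
        if op == "li":       # load immediate: li rd, imm
            rd, imm = args
            regs[rd] = imm & 0xFFFF
        elif op == "add":    # add rd, ra, rb with 16-bit wraparound
            rd, ra, rb = args
            regs[rd] = (regs[ra] + regs[rb]) & 0xFFFF
        elif op == "bnez":   # branch to target index if register is non-zero
            ra, target = args
            if regs[ra] != 0:
                pc = target
                continue
        pc += 1
    return regs

# 3 + 4 = 7
print(run([("li", 1, 3), ("li", 2, 4), ("add", 0, 1, 2)])[0])
```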
290

Empirical study of software evolution

Hagli, Andreas Tørå January 2005 (has links)
Software development is rapidly changing, and software systems are increasing in size and expected lifetime. To cope with this, several new languages and development processes have emerged, as has a stronger focus on design, software architecture, and development with consideration for evolution and future changes in requirements. There is a clear need for improvements: research shows that the portion of development cost used for maintenance is increasing and can be as high as 50%. We also see many software systems grow into uncontrollable complexity, where large parts of the system cannot be touched because of the risk of unforeseeable consequences. A clearer understanding of the evolution of software is therefore needed to prevent decay of a system's structure. This thesis approaches the field of software evolution through an empirical study of the open source project Portage from the Gentoo Linux project. Data is gathered, ratified, and analysed to study the evolutionary trends of the system. The findings are seen in the context of Lehman's laws on the inevitability of growth and increase of complexity through the lifetime of software systems. A set of research questions and hypotheses is formulated and tested. Experience from using open source software for data mining is also presented.
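One analysis step such a study implies, sketched with invented numbers (the thesis mined its real figures from Portage's history): fit a growth trend over releases and check it against Lehman's continuing-growth law:

```python
def growth_trend(sizes):
    """Least-squares slope of system size vs. release index."""
    n = len(sizes)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(sizes) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, sizes))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Illustrative (release, package-count) series, NOT the thesis's data.
packages_per_release = [4200, 4650, 5100, 5480, 6010, 6540]
slope = growth_trend(packages_per_release)
print(f"average growth: {slope:.0f} packages per release")  # positive => growth
```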
