21

Decreasing Response Time of Failing Automated Tests by Applying Test Case Prioritization

Dalatun, Sveinung, Remøy, Simon Inge, Seth, Thor Kristian Ravnanger, Voldsund, Øyvind January 2011
Running automated tests can be a time-consuming task, especially when doing regression testing. If the test cases are executed in an arbitrary order, there is a good chance that many of the defects are not detected until the end of the test run. If the failing tests were run first, the developer could get back to coding or correcting mistakes almost immediately. To achieve this, we designed and analyzed a set of test case prioritization techniques. The prioritization techniques were compared in an experiment and evaluated against two existing techniques for prioritizing test cases. Our implementation of the prioritization techniques resulted in a tool called Pritest, built according to good design principles for performance, adaptability and maintainability. This tool was compared to an existing similar tool through a discussion. The problem we address is made relevant by the increased popularity of agile software methods, where rapid regression testing is of high importance. The experiment indicates that some prioritization techniques perform better than others, and that, in the context of our experiment, techniques based on code analysis are outperformed by techniques analyzing code changes.
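
A minimal sketch of the change-based prioritization idea: run first the tests whose covered code overlaps the latest changes. The test names and coverage map below are invented for illustration; this is not the Pritest implementation itself.

```python
def prioritize(tests, coverage, changed_files):
    """Order tests so that those touching changed code run first.

    tests         -- list of test names
    coverage      -- dict mapping test name -> set of source files it exercises
    changed_files -- set of files modified since the last run
    """
    def score(test):
        return len(coverage.get(test, set()) & changed_files)
    # Highest overlap with the change set first; ties keep original order.
    return sorted(tests, key=score, reverse=True)

if __name__ == "__main__":
    coverage = {
        "test_parser": {"parser.py", "lexer.py"},
        "test_render": {"render.py"},
        "test_config": {"config.py", "parser.py"},
    }
    changed = {"parser.py"}
    print(prioritize(list(coverage), coverage, changed))
    # -> ['test_parser', 'test_config', 'test_render']
```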
22

Dancing Robots

Tidemann, Axel January 2006
This Master's thesis implements a multiple paired models architecture that is used to control a simulated robot. The architecture consists of several modules, each holding a paired forward/inverse model. The inverse model takes as input the current and desired state of the system, and outputs motor commands that will achieve the desired state. The forward model takes as input the current state and the motor commands acting on the environment, and outputs the predicted next state. The models are paired because the output of the inverse model is fed into the forward model. A weighting mechanism based on how well the forward model predicts determines how much a module will influence the total motor control. The architecture is a slight tweak of the HAMMER and MOSAIC architectures of Demiris and Wolpert, respectively. The robot is to imitate dance moves that it sees. Three experiments are done; in the first two the robot imitates another robot, whereas in the third experiment the robot imitates a movement pattern gathered from human data. The pattern was obtained using a Pro Reflex tracking system. After training the multiple paired models architecture, the performance and self-organization of the different modules are analyzed. Shortcomings of the architecture are pointed out along with directions for future work. The main result of this thesis is that the architecture does not self-organize as intended; instead it finds its own way to separate the input space into different modules. This is most likely due to a problem with the learning of the responsibility predictor of the modules. This problem must be solved for the architecture to work as designed, and is a good starting point for future work.
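
A toy sketch of the responsibility-weighting idea, in the spirit of MOSAIC: modules whose forward models predict well receive more motor control. The Gaussian likelihood, the sigma value and the two-module example are illustrative assumptions, not the thesis code.

```python
import numpy as np

def responsibilities(pred_errors, sigma=1.0):
    """Soft weights: modules with low forward-model prediction error win."""
    likelihood = np.exp(-np.asarray(pred_errors) ** 2 / (2 * sigma ** 2))
    return likelihood / likelihood.sum()

def blended_command(inverse_outputs, pred_errors):
    """Weighted sum of each inverse model's proposed motor command."""
    w = responsibilities(pred_errors)
    return (w[:, None] * np.asarray(inverse_outputs)).sum(axis=0)

# Two modules propose motor commands; module 0 predicted the last state better.
commands = [np.array([0.9, 0.1]), np.array([0.2, 0.8])]
errors = [0.1, 0.7]
print(blended_command(commands, errors))
```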
23

GeneTUC: Event extraction from TQL logic

Søvik, Harald January 2006
As Natural Language Processing systems converge on a high percentage of successfully deep-parsed text, parse success alone is an incomplete measure of the "intelligence" exhibited by the system. Because systems apply different grammars, dictionaries and programming languages, the internal representation of parsed text is often different from system to system, making it difficult to compare performance and exchange useful data such as tagged corpora or semantic interpretations. This report describes how semantically annotated corpora can be used to measure the quality of Natural Language Processing systems. A selected corpus produced by the GENIA project was used as the "golden standard" (event-annotated abstracts from MEDLINE). This corpus was sparse (19 abstracts), so manual methods were employed to produce a mapping from the native GeneTUC knowledge format (TQL). This mapping was used to produce an evaluation of events in GeneTUC. We were able to attain a recall of 67% and an average precision of 33% on the training data. These results suggest that the mapping is inadequate. On test data, the recall was 28% and the average precision 21%. Because events are a new "feature" in NLP applications, there are no large corpora that can be used for automated rule learning. The conclusion is that there exists at least a partial mapping from TQL to GENIA events, and that larger corpora and AI methods should be applied to refine the mapping rules. In addition, we discovered that this mapping can be of use for the extraction of protein-protein interactions.
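
The event evaluation reduces to set overlap between extracted and gold-standard annotations. A minimal sketch follows; the event tuples are invented for illustration.

```python
def precision_recall(extracted, gold):
    """Score extracted events against a gold-standard annotation set."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)                      # true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

gold = {("binds", "IL-2", "IL-2R"), ("activates", "NF-kB", "gene_x")}
extracted = {("binds", "IL-2", "IL-2R"), ("inhibits", "p53", "mdm2")}
p, r = precision_recall(extracted, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.50
```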
24

Automatic diagnosis of ultrasound images using standard view planes of fetal anatomy

Ødegård, Jan, Østen, Anders January 2006
The use of ultrasound has revolutionised the area of clinical fetal examinations. The possibility of detecting congenital abnormalities at an early stage of the pregnancy is highly important to maximise the chances of correcting the defect before it becomes life-threatening. The problems with the routine procedure are its complexity and the fact that it requires a lot of knowledge about fetal anatomy. Because of the lack of training among midwives, especially in less developed countries, the results of the examinations are often limited. In addition, the quality of the ultrasound equipment is often restricted. These limitations imply the need for a standardised examination procedure to decrease the amount of time required, as well as an automatic method for proposing the diagnosis of the fetus. This thesis proposes a solution for automatically making a diagnosis based on the contents of extracted ultrasound images. Based on the concept of standard view planes, a list of predefined images is obtained of the fetus during the routine ultrasonography. These images contain the most important organs to examine, and the most common congenital abnormalities are therefore detectable in this set. In order to analyse the images, medical domain knowledge must be obtained and stored to enable reasoning about the findings in the ultrasound images. The findings are extracted through segmentation, and each object is given a unique description. An organ database is developed to store descriptions of existing organs so that the extracted objects can be recognised. Once the organs have been identified, a CBR system is applied to analyse the total contents of one standard view plane. The CBR system uses domain knowledge from the medical domain as well as previously solved problems to identify possible abnormalities in the case describing the standard view plane. When a solution is obtained, it is stored for later retrieval. This increases the reliability of future examinations, because the knowledge base is constantly expanding. The use of standard view planes ensures an effective procedure, and the amount of training needed to learn the procedure is minimised due to the automatic extraction and analysis of the contents of the standard view plane. The midwife only has to learn which standard view planes to obtain, not how to analyse their contents.
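
A minimal case-based-reasoning retrieval sketch: find the stored view-plane case most similar to a new observation. The attribute scheme, cases and diagnoses are hypothetical, not the thesis system.

```python
def similarity(case_a, case_b):
    """Fraction of shared attribute values over the union of attributes."""
    keys = set(case_a) | set(case_b)
    matches = sum(1 for k in keys if case_a.get(k) == case_b.get(k))
    return matches / len(keys)

def retrieve(case_base, new_features):
    """Return the most similar previously solved case."""
    return max(case_base, key=lambda c: similarity(c["features"], new_features))

case_base = [
    {"features": {"heart_chambers": 4, "septum": "intact"}, "diagnosis": "normal"},
    {"features": {"heart_chambers": 4, "septum": "defect"}, "diagnosis": "VSD"},
]
new = {"heart_chambers": 4, "septum": "defect"}
print(retrieve(case_base, new)["diagnosis"])  # -> VSD
```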
25

A Shared Memory Structure for Cooperative Problem Solving

Røssland, Kari January 2006
The contribution of this thesis is a framework architecture for cooperative distributed problem solving in multiagent systems using a shared memory structure. Our shared memory structure, the TEAM SPACE, coordinates the problem solving process, which is based on a plan in the form of a hierarchy of decomposed tasks.
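
A rough sketch of such a shared structure over a task hierarchy, in the spirit of a blackboard: agents claim open leaf tasks and mark them done. The API and task names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subtasks: list = field(default_factory=list)
    done: bool = False

class SharedSpace:
    """Agents claim leaf tasks from a shared plan hierarchy."""
    def __init__(self, root):
        self.root = root

    def next_open_task(self, task=None):
        task = task or self.root
        if task.done:
            return None
        for sub in task.subtasks:          # depth-first: first open leaf wins
            found = self.next_open_task(sub)
            if found:
                return found
        return task if not task.subtasks else None

    def complete(self, task):
        task.done = True

plan = Task("build", [Task("design"), Task("implement")])
space = SharedSpace(plan)
t = space.next_open_task()
print(t.name)                              # -> design
space.complete(t)
print(space.next_open_task().name)         # -> implement
```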
26

Cloth Modelling on the GPU

Dencker, Kjartan January 2006
This project explores the possibility of using general purpose programming on the GPU to simulate cloth in 3D. The goal is to implement a faster version of the method given in "Large Steps in Cloth Simulation" by Baraff and Witkin (implicit Euler).
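
For intuition, here is one backward (implicit) Euler step for a single damped spring, the basic building block behind Baraff and Witkin's approach; the full method instead solves a large sparse linear system over all cloth particles each frame. Constants are illustrative.

```python
def implicit_euler_step(x, v, h, k=100.0, c=0.5, m=1.0):
    """Solve v1 = v + h*(-k*x1 - c*v1)/m with x1 = x + h*v1 for (x1, v1).

    Substituting x1 gives v1*(1 + h*c/m + h*h*k/m) = v - h*k*x/m.
    """
    denom = 1.0 + h * c / m + h * h * k / m
    v1 = (v - h * k * x / m) / denom
    x1 = x + h * v1
    return x1, v1

x, v = 1.0, 0.0
for _ in range(5):
    x, v = implicit_euler_step(x, v, h=0.1)  # stable even for large steps
    print(f"x={x:+.4f} v={v:+.4f}")
```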
27

Implementation and evaluation of Norwegian Analyzer for use with DotLucene

Olsen, Bjørn Harald January 2006
This work has focused on improving the retrieval performance of search in Norwegian document collections. The initiator of the thesis, InfoFinder Norge, desired a Norwegian analyzer for DotLucene. The standard analyzer used before did not support stopword elimination and stemming for the Norwegian language. The Norwegian analyzer and the standard analyzer were used in turn on the same document collection before indexing and querying, and the respective results were compared to measure efficiency improvements. An evaluation method based on Term Relevance Sets was investigated and used on DotLucene with each of the two analyzers. The Term Relevance Sets methodology was also compared with common measurements for relevance judging, and found useful for the evaluation of IR systems. The evaluation results for the Norwegian analyzer and the standard analyzer gave clear indications that stopword elimination and stemming improve retrieval efficiency for Norwegian documents. Term Relevance Set-based evaluation was found reliable by comparing the results with precision measurements. Precision increased by 16% with the Norwegian analyzer compared to the standard analyzer, which has no content preprocessing support for Norwegian. Term Relevance Set evaluation with 10 on-topic and 10 off-topic terms gave an increase in tScore of 44%. The results show that counting term occurrences in the content of retrieved documents can be used to gain confidence that documents are either relevant or not relevant.
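
A sketch of the analysis pipeline described (tokenization, stopword elimination, stemming), using NLTK's Norwegian Snowball stemmer as a stand-in. The thesis targeted DotLucene on .NET, so this Python version and the tiny stopword sample are only illustrative.

```python
from nltk.stem.snowball import SnowballStemmer

STOPWORDS = {"og", "i", "det", "som", "en", "til", "er"}  # tiny sample list
stemmer = SnowballStemmer("norwegian")

def analyze(text):
    """Lowercase, drop stopwords, stem: the terms that would be indexed."""
    tokens = text.lower().split()
    return [stemmer.stem(t) for t in tokens if t not in STOPWORDS]

print(analyze("Hunden og kattene løper i hagen"))
```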
28

Physically Based Simulation and Visualization of Fire in Real-Time using the GPU

Rødal, Knut Erik Samuel, Storli, Geir January 2006
Fire is a powerful natural effect which can greatly enhance the immersion of virtual environments and games. In this thesis we describe the theory and GPU implementation of a physically based approach for simulating and visualizing 3D fire in real-time. Previous approaches generally lack either visual quality, turbulence and flickering, or flexibility and extensibility. We attempt to address all these issues by using an underlying fluid simulation, modeling the mass and heat transfer aspects of the physics of fire, in combination with an explicit combustion process. The fluid simulation is used to control the behavior of a velocity field governing the motion of fuel gas, hot exhaust gas, and temperature fields, while the combustion process models the conversion of fuel gas to exhaust gas when the temperature is above the ignition temperature of the fuel gas. The velocity field is affected by, among other things, vorticity confinement, causing a more turbulent and flickering fire, and a buoyancy force modeling upward motion. We perform the fire simulation both in 3D and in a set of 2D slices using volumetric extrusion to define an implicit 3D domain. In order to achieve satisfying visual quality, we visualize the fire using a particle system of textured particles guided by the results of the fire simulation. The particle colors are based on black-body radiation from the hot exhaust gas, and the particles move according to the velocity field from the fluid simulation. A similar particle system is used to visualize the cooled exhaust gas, or smoke. As an alternative to particle systems we have also implemented a volume rendering approach for visualizing fire, but it falls short in both performance and visual quality. Finally, we model dynamic illumination, approximating the illumination from the fire on the surrounding scene by a set of point lights whose intensities are computed in a similar fashion to the fire particle colors. The point lights are either positioned statically near the center of the fire, or set to follow the velocity field just like the particles of the fire and smoke particle systems. Both the simulation and visualization of fire are implemented completely on the GPU, ensuring high frame rates without sacrificing visual quality. We have achieved a flickering and turbulent fire which compares favorably to previous approaches and works well in virtual environments, especially due to the dynamic illumination. The fire visualization also has realistic colors and intensity, and thus captures important elements of real fire. Our underlying physically based simulation enables us to efficiently simulate a variety of different kinds of small-scale fires by altering a set of simulation parameters. One of our main contributions is implementing the explicit combustion process with fluid simulation on the GPU, as well as using it in combination with vorticity confinement and volumetric extrusion. Our contributions also include the dynamic illumination already mentioned, simulation domain advection, a novel method for modeling the behavior of fire as it is moved, and the use of time-dependent noise curves to model dynamic wind affecting the fire.
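
The explicit combustion step can be sketched on a grid as follows: where temperature exceeds ignition, fuel gas converts to exhaust gas and releases heat. The constants and field shapes are invented for the sketch; the thesis runs this on the GPU alongside the fluid solver.

```python
import numpy as np

def combustion_step(fuel, exhaust, temp, ignition=1.0, rate=0.5, heat=2.0, dt=0.1):
    burning = temp > ignition                  # cells hot enough to ignite
    burned = np.where(burning, fuel * rate * dt, 0.0)
    fuel -= burned                             # fuel gas is consumed...
    exhaust += burned                          # ...and becomes hot exhaust gas
    temp += heat * burned                      # combustion releases heat
    return fuel, exhaust, temp

fuel = np.ones((4, 4))
exhaust = np.zeros((4, 4))
temp = np.full((4, 4), 0.8)
temp[1:3, 1:3] = 1.5                           # a hot region ignites
fuel, exhaust, temp = combustion_step(fuel, exhaust, temp)
print(exhaust)
```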
29

Parallel Methods for Real-Time Visualization of Snow

Saltvik, Ingar January 2006
Computer generated imagery is becoming more and more popular in areas such as computer games, the movie industry and simulation. A familiar scene in the winter months for most of us in the Nordic countries is snow. This thesis discusses some of the complex numerical algorithms behind snow simulations. Previous methods for snow simulation have either covered only a very limited aspect of snow, or have been unsuitable for real-time performance. In this thesis, some of these methods are combined into a model for real-time snow simulation that handles snowflake motion through the air, wind simulation, and accumulation of snow on objects and the ground. With the goal of achieving real-time performance at more than 25 frames per second, some new parallel methods for the snow model are introduced. Focus is placed on efficient parallelization on new SMP and multi-core computer systems. The algorithms are first parallelized in a pure data-parallel manner by dividing the data structures among threads. This scheme is then improved by overlapping the inherently sequential algorithms with computations for the following frame, to eliminate processor idle time. A speedup of 1.9 on modern dual-CPU workstations is achieved, while displaying a visually satisfying result in real-time. By utilizing Hyper-Threading enabled dual-CPU systems, the speedup is further improved to 2.0.
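
The overlap scheme can be sketched with a worker computing the next frame's particle update while the current frame's sequential part runs on the main thread. The work functions below are placeholders standing in for the snow update and render steps.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def update_particles(frame):      # per-frame work that can run in a worker
    time.sleep(0.02)
    return f"state{frame}"

def render(state):                # inherently sequential part of each frame
    time.sleep(0.02)

with ThreadPoolExecutor(max_workers=1) as pool:
    pending = pool.submit(update_particles, 0)
    for frame in range(1, 5):
        state = pending.result()                        # this frame's update
        pending = pool.submit(update_particles, frame)  # overlap next frame
        render(state)                 # render while the worker computes ahead
```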
30

Neighborhood Mining in Biological Networks

Stenersen, Kristoffer, Sundsdal, Sverre January 2006
Biologists are constantly looking for new knowledge about biological properties and processes. Bio-molecular interaction networks model dependencies among proteins and the processes they participate in. By studying patterns of interaction in these networks, it may be possible to discover implicit information embedded in the network topology. In this thesis we improve existing methods and develop new ones for investigating similarities between proteins and for discovering protein interaction sub-patterns. Cytoscape (Shannon et al., 2003) is a tool for visualization and analysis of interaction networks used by biologists. We have developed an extension to Cytoscape that lets biologists perform the following tasks:

- Compare proteins based on neighborhood information
- Find interaction sub-patterns in an interaction network
- Discover sub-patterns in one or several networks

Our main contributions are improvements to the graph mining algorithms gSpan by Yan and Han (2002) and Apriori by Inokuchi et al. (2003), whose original task was the discovery of frequent sub-patterns in a very large set of networks. We have enabled mining of a single network and allowed less exact matches. The graph mining algorithms run on labeled graphs, and we have used various clustering techniques to produce the labels. The clustering is done through similarity measures between proteins, which we have based on Gene Ontology annotations and experimental data obtained from a ChIP-chip experiment. Our plug-in may easily be extended with other clustering techniques or similarity measures. We verify the results of our implementations and test them for speed. We find that of the two mining algorithms, gSpan shows the most promise for mining biological graphs.
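
One plausible neighborhood-based similarity is the Jaccard overlap of two proteins' interaction partners, sketched below. The toy network is made up, and this particular measure is an assumption; the actual plug-in combines neighborhood information with GO annotations and ChIP-chip data.

```python
def neighbors(edges, node):
    """Interaction partners of a protein in an undirected edge list."""
    return {b for a, b in edges if a == node} | {a for a, b in edges if b == node}

def jaccard(edges, u, v):
    """Overlap of neighborhoods: |N(u) & N(v)| / |N(u) | N(v)|."""
    nu, nv = neighbors(edges, u), neighbors(edges, v)
    return len(nu & nv) / len(nu | nv) if nu | nv else 0.0

edges = [("p1", "p2"), ("p1", "p3"), ("p4", "p2"), ("p4", "p3"), ("p4", "p5")]
print(jaccard(edges, "p1", "p4"))  # share p2 and p3 -> 2/3
```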
