91. Compression in XML search engines (Natvig, Ola, January 2010)
The structure of XML documents can be used by search engines to answer structured queries or to provide better relevancy. Several index structures exist for search in XML data. This study focuses on inverted lists with dictionary-coded path types and Dewey-coded path instances. The Dewey-coded path index is large, but can be compressed. This study examines query processing with indexes encoded using the well-known integer coding methods VByte and PFor(delta), as well as methods tailored for the Dewey index. Intersection queries and structural queries are evaluated. In addition to standard document-level skipping, skip operations for path types are implemented and evaluated. Four extensions over plain PFor methods are proposed and tested: path type sorting sorts Dewey codes on their path types and stores all Dewey codes from one path type together; column-wise Dewey storage stores the Dewey codes in columns instead of rows; prefix coding, a well-known method, is adapted to the column-wise Dewey storage; and the dynamic column-wise method chooses between row-wise and column-wise storage based on the compressed data. Experiments are performed on an XML collection based on Wikipedia. Queries are generated from the TREC 06 efficiency task query trace. Several different types of structural queries have been executed. Experiments show that the column-wise methods perform well on both intersection and structural queries. The dynamic column-wise scheme is in most cases the best, and creates the smallest index. Special-purpose skipping for path types makes some queries extremely fast and can be implemented with only a limited storage footprint. The performance of in-memory search with multi-threaded query execution is limited by memory bandwidth.
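As a point of reference for the baseline coders mentioned above, the following is a minimal sketch of variable-byte (VByte) encoding and decoding of a posting-list gap sequence. The function names and the example gaps are illustrative; this is not the thesis code.

```python
def vbyte_encode(numbers):
    """Encode non-negative integers as variable-length bytes.
    Seven payload bits per byte; the high bit marks the last byte of a number."""
    out = bytearray()
    for n in numbers:
        while n >= 128:
            out.append(n & 0x7F)       # lower 7 bits, more bytes follow
            n >>= 7
        out.append(n | 0x80)           # final byte: set the stop bit
    return bytes(out)

def vbyte_decode(data):
    """Decode a VByte stream back into the original integers."""
    numbers, n, shift = [], 0, 0
    for b in data:
        if b & 0x80:                   # stop bit: this byte finishes a number
            numbers.append(n | ((b & 0x7F) << shift))
            n, shift = 0, 0
        else:
            n |= b << shift
            shift += 7
    return numbers

# Example: document-id gaps from a small posting list
gaps = [5, 1, 300, 2, 70000]
assert vbyte_decode(vbyte_encode(gaps)) == gaps
```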
92. Edge and line detection of complicated and blurred objects (Haugsdal, Kari, January 2010)
This report deals with edge and line detection in pictures of complicated and/or blurred objects. It explores the alternatives available in edge detection, edge linking, and object recognition. The chosen methods are Canny edge detection and local edge search processing combined with regional edge search processing in the form of polygon approximation.
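For orientation, a minimal sketch of this kind of pipeline using OpenCV is shown below: Canny edges followed by contour extraction and polygon approximation. The thresholds, the epsilon factor, and the input file name are placeholders, not values from the report, and the OpenCV 4 `findContours` signature is assumed.

```python
import cv2

# Canny edge detection followed by polygon approximation of the extracted contours.
# All numeric parameters and the file name are illustrative placeholders.
image = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(image, (5, 5), 1.4)            # suppress noise before edge detection
edges = cv2.Canny(blurred, 50, 150)                        # hysteresis thresholds are placeholders

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
polygons = []
for contour in contours:
    epsilon = 0.01 * cv2.arcLength(contour, True)          # tolerance relative to contour length
    polygons.append(cv2.approxPolyDP(contour, epsilon, True))
```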
93. Multi-touch Interaction with Gesture Recognition (Nygård, Espen Solberg, January 2010)
This master's thesis explores the world of multi-touch interaction with gesture recognition. The focus is on camera-based multi-touch techniques, as these add a new dimension to multi-touch with their ability to recognize objects. During the project, a multi-touch table based on the Diffused Surface Illumination technology has been built. In addition to building a table, a complete gesture recognition system has been implemented, and different gesture recognition algorithms have been successfully tested in a multi-touch environment. The goal of this table, and the accompanying gesture recognition system, is to create an open and affordable multi-touch solution, with the purpose of bringing multi-touch out to the masses. By doing this, more people will be able to enjoy the benefits of a more natural interaction with computers. In a larger perspective, multi-touch is just the beginning; by adding additional modalities to our applications, such as speech recognition and full-body tracking, a whole new level of computer interaction will be possible.
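As a toy illustration of what a gesture layer on top of tracked touch points computes, the sketch below derives pinch scale and rotation from two fingers across consecutive frames. It is a generic two-finger gesture calculation under our own naming, not the recognizer built in the thesis.

```python
import math

def two_finger_gesture(prev, curr):
    """Derive pinch scale and rotation from two tracked touch points.
    prev and curr are pairs of (x, y) tuples for the same two fingers
    in consecutive frames. A toy illustration, not the thesis recognizer."""
    (ax0, ay0), (bx0, by0) = prev
    (ax1, ay1), (bx1, by1) = curr
    d0 = math.hypot(bx0 - ax0, by0 - ay0)
    d1 = math.hypot(bx1 - ax1, by1 - ay1)
    scale = d1 / d0 if d0 else 1.0                      # >1 fingers spread, <1 pinch together
    angle0 = math.atan2(by0 - ay0, bx0 - ax0)
    angle1 = math.atan2(by1 - ay1, bx1 - ax1)
    rotation = math.degrees(angle1 - angle0)            # signed rotation between frames
    return scale, rotation

# Two fingers spreading apart and rotating slightly
print(two_finger_gesture([(0, 0), (100, 0)], [(0, 0), (120, 30)]))
```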
94. Tracking objects in 3D using Stereo Vision (Endresen, Kai Hugo Hustoft, January 2010)
This report describes a stereo vision system to be used on a mobile robot. The system is able to triangulate the positions of cylindrical and spherical objects in a 3D environment. Triangulation is done in real time by matching regions in two images and calculating the disparities between them.
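A minimal sketch of the standard rectified-stereo triangulation step is shown below: depth follows from Z = f*B/d, and X and Y from the pinhole model. The parameter names and example values are ours, not taken from the report.

```python
def triangulate(x_left, y_left, disparity, focal_px, baseline_m, cx, cy):
    """Recover a 3D point from a matched pixel and its disparity using the
    rectified-stereo relations Z = f*B/d, X = (x-cx)*Z/f, Y = (y-cy)*Z/f.
    Parameter names and values are illustrative, not from the report."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    z = focal_px * baseline_m / disparity
    x = (x_left - cx) * z / focal_px
    y = (y_left - cy) * z / focal_px
    return x, y, z

# Example: 700 px focal length, 12 cm baseline, image centre (320, 240)
print(triangulate(400, 260, disparity=35, focal_px=700.0, baseline_m=0.12, cx=320, cy=240))
```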
95. The Educational Game Editor: The Design of a Program for Making Educational Computer Games (Tollefsrud, John Ola, January 2006)
This report is about computer game-based learning, how to make a program for making educational games, the possibility of using a hypermedia structure for storing the data in an educational game, and different learning theories related to computer game-based learning. The first part covers the learning theories behaviourism, cognitivism, constructivism, socio-constructivism, and situated learning. The theories are related to learning games, and a classification of game-based learning is also given. Hypermedia is a smart and efficient way of organizing data, and is a relevant solution for use in education and games. The relationship between data, information, and wisdom is central, and the report describes how the hypermedia base is constructed and which information structures it uses. The advantages and limitations of using hypermedia in education are discussed, and examples of use, as in OPSYS and the Mobile instruction system, are given. There exist some computer games for use in higher education, and some of them are described. To make a good educational game, many requirements have to be fulfilled, covering both game design aspects and learning aspects. The main part of the report is about the Educational Game Editor. The idea is to design a program for making computer games for use in education. Before the design, the Software Requirements Specification is presented, containing functional and quality requirements, and scenarios to exemplify the requirements. The conceptual design of the program gives an overall description and describes the phases of creating a game and the elements the game consists of: file management, object management, Library, and Tools. The main architectural drivers are usability and availability: the program must be easy to use, be stable, and not crash. An example of making a simple game about the history of Trondheim explains how to use the program step by step, and gives users a guide for making their own games.
96. Framework Support for Web Application Security (Ødegård, Leif, January 2006)
There are several good reasons to use a framework when you are developing a new web application. We often hear that:

- frameworks use known patterns that result in an easily extendable architecture
- frameworks result in loose couplings between different modules in the application
- frameworks allow developers to concentrate on business logic instead of reinventing wheels that have already been reinvented several times
- frameworks are often thoroughly tested and contain fewer bugs than custom solutions

But security is rarely mentioned in this setting. Our main motivation in this thesis is therefore to discuss what three popular web application frameworks do to improve the overall security level. In this thesis we have chosen to study Spring, Struts, and JSF. We use them to develop small applications and test whether they are vulnerable to different types of attacks or not. We focus on attacks involving metacharacters, such as SQL injection and cross-site scripting, but also on security pitfalls connected to access control and error handling. We have found that all three frameworks implement some metacharacter handling. Since Spring tries to fill the role of a full-stack application framework, it provides some SQL metacharacter handling to avoid SQL injection, but we have identified some implementation weaknesses that may lead to vulnerabilities. Cross-site scripting problems are handled by HTML encoding in Spring, Struts, and JSF, as long as custom RenderKits are not introduced in JSF. When it comes to access control, the framework support is somewhat limited. They do support a role-based access control model, but this is not sufficient in applications where domain object access is connected to users rather than roles. To improve the access control in Struts applications, we provide an overall access control design that is based on aspect-oriented programming and integrates with standard Struts config files. Hopefully, this design is generic enough to suit several applications' needs, but also usable enough for developers that it results in more secure access control containing fewer bugs than custom solutions.
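The framework-specific APIs differ, so as a neutral illustration of the two countermeasures discussed above (metacharacter handling for SQL and HTML-encoding of output), here is a sketch using only the Python standard library rather than Spring, Struts, or JSF. The table, column names, and hostile inputs are made up for the example.

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, comment TEXT)")

user_name = "alice'; DROP TABLE users; --"      # hostile input with SQL metacharacters
user_comment = "<script>alert('xss')</script>"  # hostile input with HTML metacharacters

# Parameterised statement: the driver treats the values as data, never as SQL.
conn.execute("INSERT INTO users (name, comment) VALUES (?, ?)", (user_name, user_comment))

# HTML-encoding on output: metacharacters are rendered inert in the browser.
for name, comment in conn.execute("SELECT name, comment FROM users"):
    print(f"<li>{html.escape(name)}: {html.escape(comment)}</li>")
```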
97. Flexible Discovery of Modules with Distance Constraints (Lekang, Øystein, January 2006)
Many authors argue that finding single transcription factor binding sites is not enough to be able to make predictions about regulation in eukaryotic genes, as is the case with the simpler prokaryotes. With eukaryotes, combinations of transcription factors must be modelled as a composite motif or module, in some cases with a restriction on the distance between individual sites, or within the module. The goal of this work is to create a module discovery tool capable of using both deterministic patterns and position weight matrices as input, and able to impose restrictions on distance, and then to use the tool for module discovery and evaluate the results.
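To make the idea of a distance-constrained module concrete, the sketch below pairs a deterministic pattern with a position weight matrix hit and keeps only pairs whose gap falls in a given range. The pattern, matrix, threshold, and gap bounds are toy placeholders, not the tool described in the thesis.

```python
import re

def pwm_score(pwm, window):
    """Sum per-position weights of a position weight matrix over a sequence window."""
    return sum(pwm[i][base] for i, base in enumerate(window))

def pwm_hits(pwm, sequence, threshold):
    """Return start positions where the PWM scores at or above the threshold."""
    width = len(pwm)
    return [i for i in range(len(sequence) - width + 1)
            if pwm_score(pwm, sequence[i:i + width]) >= threshold]

def module_hits(pattern, pwm, sequence, threshold, min_gap, max_gap):
    """Find modules: a deterministic pattern followed by a PWM hit within a distance window."""
    pattern_sites = [m.start() for m in re.finditer(pattern, sequence)]
    matrix_sites = pwm_hits(pwm, sequence, threshold)
    return [(p, q) for p in pattern_sites for q in matrix_sites
            if min_gap <= q - (p + len(pattern)) <= max_gap]

# Toy 3-column matrix favouring "TAT", combined with the deterministic pattern "GC"
pwm = [{"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.9},
       {"A": 0.9, "C": 0.1, "G": 0.1, "T": 0.1},
       {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.9}]
print(module_hits("GC", pwm, "AAGCAATATGG", threshold=2.5, min_gap=1, max_gap=5))
```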
98. Analysis of fibre cross sections: Developing methods for image processing and visualisation utilising the GPU (Bergquist, Jørgen; Titlestad, Helge, January 2006)
Modern graphics processing units, GPUs, have evolved into high-performance processors with programmable vertex and pixel shaders. With these new abilities a new subfield of research, dubbed GPGPU for General-Purpose computing on the GPU, has emerged in areas such as oil exploration, processing of sound effects, neural networks, cryptography, and image processing. As the functionality and performance of GPUs keep increasing, more programmers are drawn to their computational power. To understand the performance of paper materials, a detailed characterisation of the fibre cross-sections is necessary. Using scanning electron microscopy, SEM, fibres embedded in epoxy are depicted. These images have to be analysed and quantified. In this master thesis we explore the possibility of taking advantage of the performance of today's generation of GPUs when analysing digital images of fibre cross-sections. We implemented common algorithms such as the median filter, the SUSAN smoothing filter, and various mathematical morphology operations using the high-level shader language OpenGL Shading Language, GLSL. When measured against equivalent image processing operations run on the CPU, we have found our GPU solution to perform about the same. The operations run much faster on the GPU, but due to the overhead of binding FBOs, initialising shader programs, and transferring data between the CPU and the GPU, the end result is about the same for the GPU and CPU implementations. We have deliberately worked with commodity hardware to see what one can gain by just replacing the graphics card in the engineer's PC. With newer hardware the results would tilt heavily towards the GPU implementations. We have concluded that making a paper fibre cross-section analysis program based on GPU image processing with commodity hardware is indeed feasible, and would give some benefits to user interactivity. But it is also harder to implement because the field is still young, with immature compilers and debugging tools and few solid libraries.
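For readers unfamiliar with the filters being ported, the sketch below is a CPU reference in NumPy of the per-pixel 3x3 median gather: for every pixel, collect the neighbourhood and keep the median. A GLSL fragment shader performs the same gather per fragment; this is only the CPU analogue, not the thesis shader code.

```python
import numpy as np

def median_filter_3x3(image):
    """CPU reference for a 3x3 median filter: gather each pixel's 3x3
    neighbourhood and keep the median value."""
    padded = np.pad(image, 1, mode="edge")
    height, width = image.shape
    # Stack the nine shifted copies of the image, one per neighbourhood offset.
    neighbours = np.stack([padded[dy:dy + height, dx:dx + width]
                           for dy in range(3) for dx in range(3)])
    return np.median(neighbours, axis=0).astype(image.dtype)

noisy = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
smoothed = median_filter_3x3(noisy)
```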
99. A Classifier for Microprocessor Processing Site Prediction in Human MicroRNAs (Helvik, Snorre Andreas, January 2006)
MicroRNAs are ~22 nt long non-coding RNA sequences that play a central role in gene regulation. As microRNAs are temporary and not necessarily expressed when RNA from tissue samples is sequenced, bioinformatics is an important part of microRNA discovery. Most computational microRNA discovery approaches are based on conservation between human and other species. Recent results, however, estimate that there exist around 350 microRNAs unique to human. There is therefore a need for methods that use characteristics of the primary microRNA transcript to predict microRNA candidates. The main problem with such methods is, however, that many of the characteristics of the primary microRNA transcript are correlated with the location where the Microprocessor complex cleaves the primary microRNA into the precursor, which is unknown until the candidate is experimentally verified. This work presents a method based on support vector machines (SVM) for Microprocessor processing site prediction in human microRNAs. The SVM correctly predicts the processing site for 43% of the known human microRNAs and performs well at distinguishing random hairpins from microRNAs. The processing site SVM is useful for microRNA discovery in two ways. First, the predicted processing sites can be used to build an SVM with more distinct features and thus increase the accuracy of the microRNA gene predictions. Second, it generates information that can be used to predict microRNA candidates directly, such as the score differences between the candidate's potential and predicted processing sites. Preliminary results show that an SVM that uses the predictions from the processing site SVM and is trained explicitly to separate microRNAs and random hairpins performs better than current prediction-based approaches. This illustrates the potential gain of using the processing site predictions in microRNA gene prediction.
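A bare-bones sketch of the discriminative setup described above, using scikit-learn, is given below. The feature vectors here are random placeholders standing in for per-candidate features around a potential processing site; the thesis defines its own feature set, training data, and evaluation protocol.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder data: 200 candidate sites, 24 hypothetical features each.
rng = np.random.default_rng(0)
features = rng.random((200, 24))
labels = rng.integers(0, 2, size=200)         # 1 = true processing site, 0 = decoy

# Train an RBF-kernel SVM on part of the data, hold the rest out for scoring.
classifier = SVC(kernel="rbf", C=1.0, probability=True)
classifier.fit(features[:150], labels[:150])

# Score a held-out candidate; the probability can be used to rank potential sites.
print(classifier.predict_proba(features[150:151]))
```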
100. Protein Remote Homology Detection using Motifs made with Genetic Programming (Håndstad, Tony, January 2006)
A central problem in computational biology is the classification of related proteins into functional and structural classes based on their amino acid sequences. Several methods exist to detect related sequences when the level of sequence similarity is high, but for very low levels of sequence similarity the problem remains an unsolved challenge. Most recent methods use a discriminative approach and train support vector machines to distinguish related sequences from unrelated sequences. One successful approach is to base a kernel function for a support vector machine on shared occurrences of discrete sequence motifs. Still, many protein sequences fail to be classified correctly for lack of a suitable set of motifs for these sequences. We introduce a motif kernel based on discrete sequence motifs where the motifs are synthesised using genetic programming. The motifs are evolved to discriminate between different families of evolutionary origin. The motif matches in the sequence data sets are then used to compute kernels for support vector machine classifiers that are trained to discriminate between related and unrelated sequences. When tested on two updated benchmarks, the method yields significantly better results compared to several other proven methods of remote homology detection. The superiority of the kernel is especially visible on the problem of classifying sequences to the correct fold. A rich set of motifs made specifically for each SCOP superfamily makes it possible to classify more sequences correctly than with previous motif-based methods.
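As a toy illustration of a motif-occurrence kernel of the kind described above, the sketch below maps each sequence to a binary vector of motif hits and takes dot products, so each kernel entry counts motifs shared by a pair of sequences. The motifs here are hand-written regular expressions standing in for GP-evolved motifs, and the sequences are made up.

```python
import re
import numpy as np

def motif_vector(sequence, motifs):
    """Map a protein sequence to a binary vector of motif occurrences."""
    return np.array([1.0 if re.search(m, sequence) else 0.0 for m in motifs])

def motif_kernel(sequences, motifs):
    """Kernel matrix whose entries count motifs shared by pairs of sequences."""
    vectors = np.array([motif_vector(s, motifs) for s in sequences])
    return vectors @ vectors.T

# Hand-written toy motifs standing in for GP-evolved ones.
motifs = [r"C..C", r"G.S", r"H..H", r"W"]
sequences = ["MKCAACGGSW", "MHTTHGKSLL", "MWWCQNCAAA"]
print(motif_kernel(sequences, motifs))
```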