31

A CBR/RL system for learning micromanagement in real-time strategy games

Gunnerud, Martin Johansen January 2009 (has links)
<p>The gameplay of real-time strategy games can be divided into macromanagement and micromanagement. Several researchers have studied automated learning for macromanagement, using a case-based reasoning/reinforcement learning architecture to defeat both static and dynamic opponents. Unlike the previous research, we present the Unit Priority Artificial Intelligence (UPAI). UPAI is a case-based reasoning/reinforcement learning system for learning the micromanagement task of prioritizing which enemy units to attack in different game situations, through unsupervised learning from experience. We discuss different case representations, as well as the exploration-vs-exploitation aspect of reinforcement learning in UPAI. Our research demonstrates that UPAI can learn to improve its micromanagement decisions, defeating both static and dynamic opponents in a micromanagement setting.</p>
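The abstract does not give UPAI's internals, but a case-based reasoning loop combined with a Q-learning update can be sketched as follows. The state representation, distance function, action names, and reward here are illustrative assumptions, not UPAI's actual design:

```python
def retrieve(case_base, state, dist):
    """Case retrieval: return the stored case closest to the observed game state."""
    return min(case_base, key=lambda c: dist(c["state"], state))

def q_update(case, action, reward, next_best_q, alpha=0.1, gamma=0.9):
    """Standard Q-learning update applied to the retrieved case's action values."""
    old = case["q"].get(action, 0.0)
    case["q"][action] = old + alpha * (reward + gamma * next_best_q - old)

# Toy case base: states are (own_units, enemy_units) tuples,
# actions are which enemy unit type to prioritise.
case_base = [{"state": (5, 3), "q": {"attack_ranged": 0.0}}]

case = retrieve(case_base, (5, 4),
                dist=lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1]))
q_update(case, "attack_ranged", reward=1.0, next_best_q=0.0)
```

After one rewarded episode the retrieved case's value for the chosen priority moves toward the reward, which is the "learning from experience" the abstract describes.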
32

Edge and line detection of complicated and blurred objects

Haugsdal, Kari January 2010 (has links)
<p>This report deals with edge and line detection in pictures with complicated and/or blurred objects. It explores the alternatives available in edge detection, edge linking and object recognition. The chosen methods are Canny edge detection and local edge search processing combined with regional edge search processing, in the form of polygon approximation.</p>
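Canny edge detection starts from exactly the kind of gradient estimate sketched below; this minimal pure-Python Sobel edge marker is an illustration of that first stage only (full Canny adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding):

```python
def sobel_edges(img, threshold=2.0):
    """Mark interior pixels whose Sobel gradient magnitude exceeds a threshold."""
    h, w = len(img), len(img[0])
    kx = [(-1, 0, 1), (-2, 0, 2), (-1, 0, 1)]   # horizontal-gradient kernel
    ky = [(-1, -2, -1), (0, 0, 0), (1, 2, 1)]   # vertical-gradient kernel
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# A vertical step edge: left half dark, right half bright.
img = [[0, 0, 1, 1]] * 4
edges = sobel_edges(img)
```

The detector fires on the interior pixels straddling the intensity step, which is the raw material that edge linking then assembles into lines.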
33

Multi-touch Interaction with Gesture Recognition

Nygård, Espen Solberg January 2010 (has links)
<p>This master's thesis explores the world of multi-touch interaction with gesture recognition. The focus is on camera-based multi-touch techniques, as these add a new dimension to multi-touch with their ability to recognize objects. During the project, a multi-touch table based on the Diffused Surface Illumination technology has been built. In addition to building a table, a complete gesture recognition system has been implemented, and different gesture recognition algorithms have been successfully tested in a multi-touch environment. The goal of this table, and the accompanying gesture recognition system, is to create an open and affordable multi-touch solution, with the purpose of bringing multi-touch out to the masses. By doing this, more people will be able to enjoy the benefits of a more natural interaction with computers. In a larger perspective, multi-touch is just the beginning: by adding additional modalities to our applications, such as speech recognition and full body tracking, a whole new level of computer interaction will be possible.</p>
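The abstract does not name the gesture recognition algorithms tested, but a common baseline for touch strokes is template matching on normalized point sequences. The sketch below assumes equal-length strokes and made-up gesture names; it is an illustration of the idea, not the thesis's implementation:

```python
def normalize(stroke):
    """Translate a stroke's centroid to the origin and scale it to unit size,
    so position and size do not affect the match."""
    cx = sum(x for x, y in stroke) / len(stroke)
    cy = sum(y for x, y in stroke) / len(stroke)
    pts = [(x - cx, y - cy) for x, y in stroke]
    scale = max(max(abs(x), abs(y)) for x, y in pts) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

def score(a, b):
    """Mean point-to-point distance between two equal-length strokes."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)

def classify(stroke, templates):
    """Return the template name with the smallest distance to the stroke."""
    s = normalize(stroke)
    return min(templates, key=lambda name: score(s, normalize(templates[name])))

templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "swipe_down":  [(0, 0), (0, 1), (0, 2), (0, 3)],
}
label = classify([(10, 5), (20, 5), (30, 6), (40, 5)], templates)
```

A slightly wobbly horizontal drag still matches the `swipe_right` template because normalization removes position and scale, leaving only shape.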
34

Tracking objects in 3D using Stereo Vision

Endresen, Kai Hugo Hustoft January 2010 (has links)
<p>This report describes a stereo vision system to be used on a mobile robot. The system is able to triangulate the positions of cylindrical and spherical objects in a 3D environment. Triangulation is done in real-time by matching regions in two images, and calculating the disparities between them.</p>
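For a rectified stereo pair, the triangulation the report describes reduces to the classic relation Z = f·B/d, where d is the disparity between matched image columns. A minimal sketch (the focal length, baseline and pixel coordinates are example values, not the robot's calibration):

```python
def triangulate(f_px, baseline_m, x_left, x_right, y):
    """Recover a 3D point from matched pixel coordinates in a rectified stereo pair.
    f_px: focal length in pixels; baseline_m: camera separation in metres."""
    d = x_left - x_right             # disparity in pixels
    if d <= 0:
        raise ValueError("matched point must have positive disparity")
    z = f_px * baseline_m / d        # depth from similar triangles
    x = x_left * z / f_px            # back-project to metric X, Y
    y_m = y * z / f_px
    return x, y_m, z

# Example: 700 px focal length, 12 cm baseline, 35 px disparity -> 2.4 m depth.
x, y, z = triangulate(700.0, 0.12, 100.0, 65.0, 50.0)
```

Note the inverse relation: nearby objects produce large disparities, so depth resolution is best close to the cameras and degrades with distance.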
35

Framework Support for Web Application Security

Ødegård, Leif January 2006 (has links)
<p>There are several good reasons to use a framework when developing a new web application. We often hear that frameworks use known patterns that result in an easily extendable architecture; that they result in loose couplings between different modules in the application; that they allow developers to concentrate on business logic instead of reinventing wheels that have already been reinvented several times; and that they are often thoroughly tested and contain fewer bugs than custom solutions. But security is rarely mentioned in this setting. Our main motivation in this thesis is therefore to discuss what three popular web application frameworks do to improve the overall security level. We have chosen to research Spring, Struts and JSF. We use them to develop small applications and test whether they are vulnerable to different types of attacks or not. We focus on attacks involving metacharacters, such as SQL injection and cross-site scripting, but also on security pitfalls connected to access control and error handling. We have found that all three frameworks implement some metacharacter handling. Since Spring tries to fill the role of a full-stack application framework, it provides some SQL metacharacter handling to avoid SQL injections, but we have identified some implementation weaknesses that may lead to vulnerabilities. Cross-site scripting problems are handled by HTML encoding in Spring, Struts, and JSF alike, as long as custom RenderKits are not introduced in JSF. When it comes to access control, the framework support is somewhat limited. They do support a role-based access control model, but this is not sufficient in applications where domain object access is connected to users rather than roles. To improve the access control in Struts applications, we provide an overall access control design that is based on aspect-oriented programming and integrates with standard Struts config files. Hopefully, this design is generic enough to suit several applications' needs, but also usable enough for developers that it results in more secure access control containing fewer bugs than custom solutions.</p>
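The SQL metacharacter handling discussed above ultimately comes down to keeping user input out of the query text. A minimal illustration with Python's stdlib sqlite3 (the frameworks studied are Java, so this is an analogy for the principle, not their API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Classic injection payload: closes the string literal and adds a tautology.
user_input = "' OR '1'='1"

# Naive string concatenation: the payload becomes part of the SQL itself,
# and the WHERE clause matches every row.
naive_rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()

# Parameterized query: the driver treats the input strictly as data,
# so no row matches the literal payload string.
safe_rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
```

The concatenated query leaks the secret while the parameterized one returns nothing, which is why frameworks that route all queries through bound parameters close this class of vulnerability by construction.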
36

Experiments with hedonism, anticipation and reason in synaptic learning of facial affects : A neural simulation study within Connectology

Knutsen, Håvard Tautra January 2007 (has links)
<p>Connectology consists of three basic principles, each with its own synaptic learning mechanism: Hedonism (the Skinner synapse), Anticipation (the Pavlov synapse) and Reason (the Hume synapse). This project studies the potentials and weaknesses of these mechanisms in visual facial affect recognition. By exploiting the principles of hedonism, a supervision mechanism was created with the purpose of guiding the Pavlov synapses' learning towards the goal of facial affect recognition. Experiments showed that the network performed very poorly and could not recognize facial affects. A deeper study of the supervision mechanism revealed a severe problem with its operation. An alternative supervision scheme was created, outside the principles of Connectology, to facilitate testing of the Pavlov synapses in a supervised setting. The Pavlov synapses performed very well: they correctly anticipated all affects, although one of the four affects could not be discriminated from the others. The problem with discriminating the fourth affect was not a problem with the Pavlov learning mechanism, but rather with the neuronal representation of the affects. Hume synapses were then introduced in the hidden cluster. This was done to facilitate the forming of neuronal concepts of the different facial affects in different areas of the cluster. These representations, if successfully formed, should allow the Pavlov synapses to both anticipate and discriminate between all facial affects. The forming of concepts did not happen, and thus the Hume synapse did not contribute to better results, but rather degraded them. The conclusion is that the Pavlov synapse lends itself well to learning by supervision; further work is needed to create a functioning supervision mechanism within the principles of Connectology, and the application of the Hume synapse also calls for further studies.</p>
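The abstract does not specify the Pavlov synapse's update rule. As a stand-in, anticipatory (classical-conditioning) learning of this kind is commonly modelled with a Rescorla–Wagner-style delta rule, sketched here; the rule, learning rate, and binary coding are assumptions for illustration, not Connectology's actual mechanism:

```python
def pavlov_update(w, cs_active, us_present, rate=0.2):
    """Delta-rule conditioning: when the conditioned stimulus (CS) is active,
    move the synaptic weight toward the observed outcome (US present or not)."""
    if cs_active:
        target = 1.0 if us_present else 0.0
        w += rate * (target - w)
    return w

w = 0.0
for _ in range(20):              # repeated pairing: CS reliably predicts US
    w = pavlov_update(w, cs_active=True, us_present=True)
```

After repeated pairings the weight converges toward 1, i.e. the synapse comes to anticipate the outcome from the stimulus alone, which is the behaviour the experiments test.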
37

Duplicate Detection with PMC -- A Parallel Approach to Pattern Matching

Leland, Robert January 2007 (has links)
<p>Fuzzy duplicate detection is an integral part of data cleansing. It consists of finding a set of duplicate records, correctly identifying the original or most representative record, and removing the rest. Internet usage, data availability and data collectability are increasing, so we get access to more and more data. Much of this data is collected from, and entered by, humans, which causes noise in the data from typing mistakes, spelling discrepancies, varying schemas, abbreviations, and more. Because of this, data cleansing and approximate duplicate detection are now more important than ever. In fuzzy matching, records are usually compared by measuring the edit distance between two records. This leads to problems with large data sets, where there are many record comparisons to be made, so previous solutions have found ways to cut down on the number of records to be compared. This is often done by creating a key on which records are then sorted, with the intention of placing similar records near each other. There are several downsides to this; for example, you need to sort and search through potentially large amounts of data several times to catch duplicate data accurately. This project differs in that it presents an approach which takes advantage of a multiple instruction stream, multiple data stream (MIMD) architecture called a Pattern Matching Chip (PMC), which allows large amounts of parallel character comparisons. This makes it possible to do fuzzy matching against the entire data set very quickly, removing the need for clustering and re-arranging of the data, which can often lead to omitted duplicates (false negatives). The main point of this paper is to test the viability of this approach for duplicate detection, examining the performance, potential and scalability of the approach.</p>
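The per-pair comparison that the PMC parallelizes is the edit distance mentioned above; the standard dynamic-programming Levenshtein computation, of which the chip effectively runs many instances at once, looks like this:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming, keeping one row at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution / match
        prev = cur
    return prev[-1]

# Two records that differ by a single-character typo.
d = edit_distance("Jon Smith", "John Smith")
```

Each pairwise comparison is O(|a|·|b|), which is exactly why comparing every record against every other is infeasible in software and why sorting-key heuristics (or hardware parallelism, as here) are needed.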
38

Learning robot soccer with UCT

Holen, Vidar, Marøy, Audun January 2008 (has links)
<p>Upper Confidence bounds applied to Trees, or UCT, has shown promise for reinforcement learning problems in different kinds of games, but most of the work has been on turn based games and single agent scenarios. In this project we test the feasibility of using UCT in an action-filled multi-agent environment, namely the RoboCup simulated soccer league. Through a series of experiments we test both low level and high level approaches. We were forced to conclude that low level approaches are infeasible, and that while high level learning is possible, cooperative multi-agent planning did not emerge.</p>
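At the heart of UCT is the UCB1 rule for choosing which child action to explore at each tree node, balancing exploitation (average reward) against exploration (visit counts). A minimal sketch; the soccer action names and statistics are made up for illustration:

```python
import math

def ucb1(total_reward, visits, parent_visits, c=1.41):
    """UCB1 score: exploitation term plus an exploration bonus that shrinks
    as an action is visited more often."""
    if visits == 0:
        return float("inf")          # always try unvisited actions first
    return (total_reward / visits
            + c * math.sqrt(math.log(parent_visits) / visits))

# Per-action (total_reward, visit_count) statistics at one tree node.
children = {"pass": (3.0, 10), "dribble": (1.0, 2), "shoot": (0.0, 0)}
parent_n = sum(n for _, n in children.values())
best = max(children, key=lambda a: ucb1(*children[a], parent_n))
```

The unvisited `shoot` action is selected first; once every action has statistics, the logarithmic bonus keeps occasionally revisiting apparently weak actions, which is what lets UCT converge in stochastic domains like simulated soccer.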
39

Intelligent agents in computer games

Løland, Karl Syvert January 2008 (has links)
<p>In this project we examine whether or not an intelligent agent can learn how to play a computer game using the same inputs and outputs as a human. An agent architecture is chosen, implemented, and tested on a standard first-person shooter game to see if it can learn how to play that game and find a goal in it. We conclude the report by discussing potential improvements to the current implementation.</p>
40

Early warnings of critical diagnoses

Alvestad, Stig January 2009 (has links)
<p>A disease which is left untreated for a longer period is more likely to cause negative consequences for the patient. Even though the general practitioner is able to discover the disease quickly in most cases, there are patients who should have been discovered earlier. Electronic patient records store time-stamped health information about patients, recorded by the health personnel treating the patient. This makes it possible to do a retrospective analysis in order to determine whether there was sufficient information to make the diagnosis earlier than the general practitioner actually did. Classification algorithms from the machine learning domain can utilise large collections of electronic patient records to build models which can predict whether a patient will get a disease or not. These models could be used to gain more knowledge about these diseases, and in a long-term perspective they could become a support for the general practitioner in daily practice. The purpose of this thesis is to design and implement a software system which can predict whether a patient will get a disease in the near future or not. The system should attempt to predict the disease before the general practitioner even suspects that the patient might have it. Further, the objective is to use this system to identify the warning signs which are used to make the predictions, and to analyse the usefulness of the predictions and the warning signs. The diseases asthma, type 2 diabetes and hypothyroidism have been selected as the test cases for our methodology. A set of suspicion-indicators which indicate that the general practitioner has suspected the disease is identified in an iterative process. These suspicion-indicators are subsequently used to limit the information available to the classification algorithms. This information is then used to build prediction models, using different classification algorithms.
The prediction models are evaluated in terms of various performance measures, and the models themselves are analysed manually. Experiments are conducted in order to find favourable parameter values for the information extraction process. Because there are relatively few patients who have the test-case diseases, the oversampling technique SMOTE is used to generate additional synthetic patients with those diseases. A set of suspicion-indicators has been identified in cooperation with domain experts. The availability of warning signs decreases as the information available to the classifier diminishes, while the performance of the classifiers is not affected to the same degree. Applying the SMOTE oversampling technique improves the results for the prediction models. There is not much difference between the performance of the various classification algorithms. The improved problem formulation results in models which are more valid than before. A number of events which are used to predict the test cases have been identified, but their real-world importance remains to be evaluated by domain experts. The performance of the prediction models can be misleading in terms of practical usefulness. SMOTE is a promising technique for generating additional data, but the evaluation techniques used here are not good enough to draw firm conclusions.</p>
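The SMOTE step used above generates synthetic minority-class patients by interpolating feature vectors between a real sample and one of its nearest minority-class neighbours. Its core operation is simply the following (neighbour selection via k-NN, and the thesis's actual features, are omitted):

```python
import random

def smote_sample(x, neighbor):
    """Create one synthetic minority example on the line segment between a
    real sample and a same-class neighbour, at a random interpolation point."""
    t = random.random()                          # same t for every feature
    return [xi + t * (ni - xi) for xi, ni in zip(x, neighbor)]

random.seed(0)
synthetic = smote_sample([1.0, 2.0], [3.0, 4.0])
```

Because every synthetic point lies between two genuine minority samples, SMOTE enlarges the minority class without simply duplicating records, which is why it tends to help classifiers more than plain oversampling.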
