51.
Tracking objects in 3D using Stereo Vision. Endresen, Kai Hugo Hustoft. January 2010.
This report describes a stereo vision system to be used on a mobile robot. The system is able to triangulate the positions of cylindrical and spherical objects in a 3D environment. Triangulation is done in real-time by matching regions in two images, and calculating the disparities between them.
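The triangulation step described above reduces, for a calibrated rectified stereo pair, to the standard relation Z = f·B/d. A minimal sketch, with hypothetical focal length and baseline values (the abstract does not give the camera parameters):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulate the depth (metres) of a matched region from its disparity.

    Z = f * B / d, where f is the focal length in pixels, B the baseline
    between the two cameras in metres, and d the disparity in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# Regions with larger disparity between the two images are closer to the robot.
near = depth_from_disparity(700.0, 0.12, 42.0)  # hypothetical calibration values
far = depth_from_disparity(700.0, 0.12, 7.0)
```

The same relation applies per matched region, which is what makes region matching plus disparity calculation sufficient for 3D positioning.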
52.
Framework Support for Web Application Security. Ødegård, Leif. January 2006.
There are several good reasons to use a framework when developing a new web application. We often hear that:
- frameworks use known patterns that result in an easily extendable architecture
- frameworks result in loose coupling between different modules in the application
- frameworks allow developers to concentrate on business logic instead of reinventing wheels that have already been reinvented several times
- frameworks are often thoroughly tested and contain fewer bugs than custom solutions

But security is rarely mentioned in this setting. Our main motivation in this thesis is therefore to discuss what three popular web application frameworks do to improve the overall security level. We have chosen to study Spring, Struts, and JSF. We use them to develop small applications and test whether they are vulnerable to different types of attacks. We focus on attacks involving metacharacters, such as SQL injection and cross-site scripting, but also on security pitfalls connected to access control and error handling. We have found that all three frameworks implement some metacharacter handling. Since Spring tries to fill the role of a full-stack application framework, it provides some SQL metacharacter handling to avoid SQL injection, but we have identified some implementation weaknesses that may lead to vulnerabilities. Cross-site scripting is handled by HTML encoding in Spring, Struts, and JSF alike, as long as custom RenderKits are not introduced in JSF. When it comes to access control, the framework support is somewhat limited. All three support a role-based access control model, but this is not sufficient in applications where domain object access is tied to individual users rather than roles. To improve access control in Struts applications, we provide an overall access control design that is based on aspect-oriented programming and integrates with standard Struts config files.
Hopefully, this design is generic enough to suit several applications' needs, yet usable enough that it results in more secure access control with fewer bugs than custom solutions.
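The SQL metacharacter handling discussed above is conventionally achieved with parameterized queries, which pass user input as data rather than splicing it into the SQL string. A minimal sketch using Python's sqlite3 module (table and values are hypothetical; the thesis itself works with Java frameworks):

```python
import sqlite3

# In-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "admin"))

# A classic injection payload. With a parameterized query the quote and the
# OR clause are treated as literal characters of the name, not as SQL,
# so the attack matches nothing.
hostile = "alice' OR '1'='1"
rows = conn.execute("SELECT role FROM users WHERE name = ?", (hostile,)).fetchall()

# The legitimate lookup still works as expected.
safe_rows = conn.execute("SELECT role FROM users WHERE name = ?", ("alice",)).fetchall()
```

Frameworks that build queries by string concatenation instead of placeholders are exactly where the implementation weaknesses mentioned above tend to appear.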
53.
Experiments with hedonism, anticipation and reason in synaptic learning of facial affects: A neural simulation study within Connectology. Knutsen, Håvard Tautra. January 2007.
Connectology consists of three basic principles, each with its own synaptic learning mechanism: hedonism (the Skinner synapse), anticipation (the Pavlov synapse) and reason (the Hume synapse). This project studies the potential and weaknesses of these mechanisms in visual facial affect recognition. By exploiting the principles of hedonism, a supervision mechanism was created to guide the Pavlov synapses' learning towards the goal of facial affect recognition. Experiments showed that the network performed very poorly and could not recognize facial affects. A deeper study of the supervision mechanism revealed a severe problem with its operation. An alternative supervision scheme was therefore created, outside the principles of Connectology, to facilitate testing of the Pavlov synapses in a supervised setting. The Pavlov synapses performed very well: they correctly anticipated all affects, although one of the four affects could not be discriminated from the others. This discrimination problem was not a weakness of the Pavlov learning mechanism, but of the neuronal representation of the affects. Hume synapses were then introduced in the hidden cluster to facilitate the forming of neuronal concepts of the different facial affects in different areas of the cluster. These representations, if successfully formed, should allow the Pavlov synapses to both anticipate and discriminate between all facial affects. The forming of concepts did not happen, and thus the Hume synapses did not contribute to better results, but rather degraded them. The conclusion is that the Pavlov synapse lends itself well to learning by supervision; further work is needed to create a functioning supervision mechanism within the principles of Connectology, and the application of the Hume synapse also calls for further study.
54.
Duplicate Detection with PMC: A Parallel Approach to Pattern Matching. Leland, Robert. January 2007.
Fuzzy duplicate detection is an integral part of data cleansing. It consists of finding a set of duplicate records, correctly identifying the original or most representative record, and removing the rest. Internet usage, data availability, and data collection are all increasing, so we get access to more and more data. Much of this data is collected from, and entered by, humans, which introduces noise from typing mistakes, spelling discrepancies, varying schemas, abbreviations, and more. Because of this, data cleansing and approximate duplicate detection are now more important than ever. In fuzzy matching, records are usually compared by measuring the edit distance between them. This becomes problematic for large data sets, where many record comparisons must be made, so previous solutions have found ways to cut down on the number of records to compare. This is often done by creating a key on which the records are sorted, with the intention of placing similar records near each other. There are several downsides to this; for example, you need to sort and search through potentially large amounts of data several times to catch duplicates accurately. This project differs in that it presents an approach which takes advantage of a multiple instruction stream, multiple data stream (MIMD) architecture called a Pattern Matching Chip (PMC), which allows large numbers of parallel character comparisons. This makes it possible to do fuzzy matching against the entire data set very quickly, removing the need for clustering and rearranging the data, which can often lead to omitted duplicates (false negatives). The main point of this paper is to test the viability of this approach for duplicate detection, examining its performance, potential, and scalability.
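The edit distance mentioned above is typically the Levenshtein distance: the minimum number of single-character insertions, deletions, and substitutions turning one record into the other. A standard dynamic-programming sketch (the thesis offloads these comparisons to the PMC hardware; this software version is only for illustration):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two record strings, O(len(a) * len(b))."""
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute ca -> cb
        prev = cur
    return prev[-1]
```

Records whose distance falls under a chosen threshold are then treated as fuzzy duplicates; the quadratic cost per pair is exactly why sorting keys, or hardware parallelism as in this project, are needed at scale.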
55.
Learning robot soccer with UCT. Holen, Vidar; Marøy, Audun. January 2008.
Upper Confidence bounds applied to Trees, or UCT, has shown promise for reinforcement learning problems in different kinds of games, but most of the work has been on turn-based games and single-agent scenarios. In this project we test the feasibility of using UCT in an action-filled multi-agent environment, namely the RoboCup simulated soccer league. Through a series of experiments we test both low-level and high-level approaches. We were forced to conclude that low-level approaches are infeasible, and that while high-level learning is possible, cooperative multi-agent planning did not emerge.
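At the heart of UCT is the UCB1 rule, which balances exploiting actions with high average reward against exploring rarely tried ones. A minimal sketch of the selection score (the exploration constant 1.4 ≈ √2 is a common default, not a value from the thesis):

```python
import math

def ucb1(total_visits: int, child_visits: int, child_value: float, c: float = 1.4) -> float:
    """UCB1 score UCT uses to pick which child action to simulate next.

    child_value is the cumulative reward of the child; total_visits is the
    visit count of the parent node.
    """
    if child_visits == 0:
        return float("inf")  # unvisited actions are always tried first
    exploit = child_value / child_visits                       # average reward
    explore = c * math.sqrt(math.log(total_visits) / child_visits)
    return exploit + explore

# Two actions with the same average reward (0.5): the less-visited one
# gets a larger exploration bonus and is selected.
a = ucb1(100, 10, 5.0)
b = ucb1(100, 50, 25.0)
```

In a real-time soccer setting this selection must run within one simulation step, which is part of what makes the low-level approaches above infeasible.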
56.
Intelligent agents in computer games. Løland, Karl Syvert. January 2008.
In this project we examine whether an intelligent agent can learn to play a computer game using the same inputs and outputs as a human. An agent architecture is chosen, implemented, and tested on a standard first-person shooter game to see if it can learn how to play the game and find a goal in it. We conclude the report by discussing potential improvements to the current implementation.
57.
Early warnings of critical diagnoses. Alvestad, Stig. January 2009.
A disease which is left untreated for a long period is more likely to have negative consequences for the patient. Even though the general practitioner is able to discover the disease quickly in most cases, there are patients who should have been discovered earlier. Electronic patient records store time-stamped health information about patients, recorded by the health personnel treating them. This makes it possible to do a retrospective analysis to determine whether there was sufficient information to make the diagnosis earlier than the general practitioner actually did. Classification algorithms from the machine learning domain can utilise large collections of electronic patient records to build models which predict whether a patient will get a disease or not. These models could be used to gain more knowledge about these diseases, and in a long-term perspective they could become a support for the general practitioner in daily practice. The purpose of this thesis is to design and implement a software system which can predict whether a patient will get a disease in the near future, attempting to do so before the general practitioner even suspects that the patient might have it. A further objective is to use this system to identify the warning signs on which the predictions are based, and to analyse the usefulness of the predictions and the warning signs. The diseases asthma, type 2 diabetes and hypothyroidism have been selected as test cases for our methodology. A set of suspicion indicators, which signal that the general practitioner has suspected the disease, is identified in an iterative process. These suspicion indicators are then used to limit the information available to the classification algorithms, and prediction models are built from this information using different classification algorithms.
The prediction models are evaluated in terms of various performance measures, and the models themselves are analysed manually. Experiments are conducted to find favourable parameter values for the information extraction process. Because relatively few patients have the test-case diseases, the oversampling technique SMOTE is used to generate additional synthetic patients with these diseases. A set of suspicion indicators has been identified in cooperation with domain experts. The availability of warning signs decreases as the information available to the classifier diminishes, while the performance of the classifiers is not affected to the same degree. Applying the SMOTE oversampling technique improves the results of the prediction models. There is not much difference between the performance of the various classification algorithms. The improved problem formulation results in models which are more valid than before. A number of events which are used to predict the test cases have been identified, but their real-world importance remains to be evaluated by domain experts. The performance of the prediction models can be misleading in terms of practical usefulness. SMOTE is a promising technique for generating additional data, but the evaluation techniques used here are not good enough to draw any conclusions.
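The SMOTE technique used above creates synthetic minority-class samples by interpolating between a real minority sample and one of its k nearest minority-class neighbours. A minimal sketch of that interpolation step (the feature vectors and k are hypothetical, and a real implementation would work per-feature on encoded patient records):

```python
import random

def smote_sample(minority, k=3, rng=None):
    """Generate one synthetic minority-class point, SMOTE-style.

    Pick a random minority sample x, find its k nearest minority
    neighbours, pick one of them, and return a point a random fraction
    of the way from x towards that neighbour.
    """
    rng = rng or random.Random(0)  # seeded here only to keep the sketch deterministic
    x = rng.choice(minority)
    neighbours = sorted((p for p in minority if p is not x),
                        key=lambda p: sum((a - b) ** 2 for a, b in zip(p, x)))[:k]
    n = rng.choice(neighbours)
    lam = rng.random()  # interpolation factor in [0, 1)
    return tuple(a + lam * (b - a) for a, b in zip(x, n))

# Hypothetical 2-feature encodings of the few patients who have the disease.
patients = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
synthetic = smote_sample(patients)
```

Because the synthetic point lies on a segment between two real minority samples, it stays inside the minority region rather than duplicating existing patients, which is what distinguishes SMOTE from plain oversampling.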
58.
Adaptive Robotics. Fjær, Dag Henrik; Massali, Kjeld Karim Berg. January 2009.
This report explores continuous-time recurrent neural networks (CTRNNs) and their utility in the field of adaptive robotics. The networks herein are evolved in a simulated environment and evaluated on a real robot. The evolved CTRNNs are presented with simple cognitive tasks and the results are analyzed in detail.
59.
Early Warnings of Corporate Bankruptcies Using Machine Learning Techniques. Gogstad, Jostein; Øysæd, Jostein. January 2009.
The tax history of a company is used to predict corporate bankruptcies using Bayesian inference. The model we developed combines Naive Bayesian classification with Gaussian Processes. Based on a sample of 1184 companies, we conclude that the Naive Bayes-Gaussian Process model successfully forecasts corporate bankruptcies with high accuracy. A comparison is performed with the current system in place at one of the largest banks in Norway. We present evidence that our classification model, based solely on tax data, is better than the model currently in place.
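The Naive Bayesian half of the model above classifies by combining a class prior with per-feature likelihoods under an independence assumption. A minimal Gaussian naive Bayes sketch (the one-feature tax figures and class labels are invented for illustration; the thesis' actual features and its Gaussian Process component are not reproduced here):

```python
import math

def fit(X, y):
    """Estimate per-class feature means, variances, and log-priors."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        vars_ = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                 for col, m in zip(zip(*rows), means)]
        model[c] = (means, vars_, math.log(len(rows) / len(y)))
    return model

def predict(model, x):
    """Pick the class maximising log prior + summed Gaussian log-likelihoods."""
    def score(c):
        means, vars_, log_prior = model[c]
        return log_prior + sum(-0.5 * math.log(2 * math.pi * s) - (v - m) ** 2 / (2 * s)
                               for v, m, s in zip(x, means, vars_))
    return max(model, key=score)

# Hypothetical single-feature tax histories for four companies.
X = [[1.0], [1.2], [5.0], [5.2]]
y = ["solvent", "solvent", "bankrupt", "bankrupt"]
model = fit(X, y)
```

The independence assumption makes the likelihood a simple sum of per-feature log terms, which is what keeps the model tractable on many tax-record features.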
60.
Structured data extraction: separating content from noise on news websites. Arizaleta, Mikel. January 2009.
In this thesis we have treated the problem of separating content from noise on news websites. We have approached this problem using TiMBL, a memory-based learning package. We have studied the relevance of similarity in the training data and the effect of data size on the performance of the extractions.
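Memory-based learning of the kind TiMBL implements stores all training instances and labels a query by a vote among its nearest stored examples. A minimal k-nearest-neighbour sketch in that spirit (the numeric block features such as text length and link density are hypothetical stand-ins for the thesis' actual feature encoding):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Label a query block by majority vote among its k nearest stored examples.

    train is a list of (feature_vector, label) pairs; distance is squared
    Euclidean over the feature vectors.
    """
    by_dist = sorted(train,
                     key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], query)))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Hypothetical page blocks encoded as (text length, link density).
blocks = [((200.0, 0.1), "content"), ((180.0, 0.05), "content"),
          ((10.0, 0.9), "noise"), ((15.0, 0.8), "noise")]
```

Because nothing is abstracted away at training time, performance depends directly on how similar the stored examples are to the queries, which is why the similarity and size of the training data studied above matter so much.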