61

GPU-Based Airway Tree Segmentation and Centerline Extraction

Smistad, Erik January 2012 (has links)
Lung cancer is one of the deadliest and most common types of cancer in Norway. Early and precise diagnosis is crucial for improving the survival rate. Diagnosis is often done by extracting a tissue sample in the lung through the mouth and throat. Navigating to the tissue is difficult because of the complexity of the airways inside the lung and the reduced visibility. Our goal is to make a program that can automatically extract a map of the airways directly from X-ray Computed Tomography (CT) images of the patient. This is a complex task and requires time-consuming processing.

In this thesis we explore different methods for extracting the airways from CT images. We also investigate parallel processing and the use of modern graphics processing units for speeding up the computations. We rate several methods in terms of reported performance and suitability for parallel processing. The best-rated method is implemented in a parallel framework called the Open Computing Language (OpenCL).

The results show that our implementation is able to extract large parts of the airway tree, but struggles with the smaller airways and with airways that deviate from a perfectly circular cross-section. Our implementation can process a full CT scan in less than a minute on a modern graphics processing unit. The implementation is very general and can extract other tubular structures as well. To show this, we also run it on a Magnetic Resonance Angiography dataset to find blood vessels in the brain, and achieve good results.

We see a lot of potential in this method for extracting tubular structures. The method struggles the most with noise and with tubes that deviate from a circular cross-sectional shape. We believe this can be improved by using another method than ridge traversal for the centerline extraction step: because ridge traversal is a local greedy algorithm, it often terminates prematurely due to noise and other image artifacts.
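The centerline extraction step the abstract critiques can be sketched as a greedy ridge traversal: starting from a seed voxel, repeatedly step to the neighbour with the strongest tube-likeness response, and stop when no neighbour exceeds a threshold. This is a simplified illustration, not the thesis's implementation; the `tubeness` volume and the threshold value are assumptions.

```python
import numpy as np

def ridge_traversal(tubeness, seed, threshold=0.1):
    """Greedy centerline tracing on a 3D tube-likeness volume.

    Follows the strongest-response unvisited neighbour from `seed` until
    every neighbour falls below `threshold` -- which is exactly why noise
    can terminate the traversal prematurely, as the abstract notes.
    """
    centerline = [seed]
    visited = {seed}
    current = seed
    while True:
        z, y, x = current
        best, best_val = None, threshold
        for dz in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    n = (z + dz, y + dy, x + dx)
                    if n == current or n in visited:
                        continue
                    if not all(0 <= c < s for c, s in zip(n, tubeness.shape)):
                        continue
                    if tubeness[n] > best_val:
                        best, best_val = n, tubeness[n]
        if best is None:
            return centerline  # greedy stop: no ridge response left to follow
        centerline.append(best)
        visited.add(best)
        current = best
```

Because the choice at each step is purely local, a single noisy voxel below the threshold is enough to end the trace, motivating the abstract's suggestion of replacing this step.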
62

Accessing Cultural Heritage Resources on a Mobile Augmented Reality Platform : A Study on Technology Acceptance

Haugstvedt, Anne-Cecilie January 2012 (has links)
This project follows the design science research methodology and uses an extended version of the technology acceptance model (TAM) to study the acceptance of a mobile augmented reality application with historical photographs and information. A prototype application was developed in accordance with general principles for usability design, and a street survey was conducted in which 42 participants got the opportunity to try the application before answering a questionnaire. A modified version of the same questionnaire was later used in a web survey with 200 participants who watched a short video demonstration before answering.

The results show that there is an interest in mobile augmented reality applications with historical pictures and information. Both perceived usefulness and perceived enjoyment have a direct impact on the intention to use this type of application. This finding suggests that institutions developing such applications can benefit from focusing on both the fun and the useful aspects of their applications.
63

Using open vs. proprietary standards when developing applications for mobile devices

Freberg, Jon January 2012 (has links)
This thesis discusses the possibilities and limitations of developing mobile applications natively vs. in HTML5. Research was carried out to understand how mobile devices can be utilized to their fullest when running applications, and a native iPad application was developed to help discover unforeseen challenges. It was built with a client-server architecture to reduce the work it would take to implement the application natively as a client for all the different mobile platforms.

The native approach combined with the client-server architecture is not cross-platform by nature, but the study shows that its benefits outweigh the limitations in many cases. It is also shown that development time and cost favour the HTML5 approach. HTML5 was concluded to be a solution that solves the cross-platform problem, but it lacks both performance and API access. However, the type of application being developed should be the deciding factor in which solution to choose.
64

Growing Cellular Structures with Substructures Guided by Genetic Algorithms : Using Visualization as Evaluation

Klakken, Trond January 2012 (has links)
A dream of evolvable structures that change to fit their environment could be a peek into the future. Cellular automata (CA), though a simple discrete model, have the ability to simulate biology through growth, reproduction and death. Together with genetic algorithms, they simulate biological systems that can be used to realize this dream.

In this thesis, a skyscraper is grown using multiple cellular automata. The skyscraper is grown in a CA simulator and visualizer made for this thesis. The result is a stable structure containing floors, walls, windows and ceilings with lights. Genetic algorithms have been used to grow electrical wiring from a power source in the basement up to power outlets on each floor, powering the lights. The dream is a house that covers all your needs.

This thesis is a proof of concept that it is possible to grow a stable skyscraper using a CA with multiple sub-CAs growing lights and electrical wiring inside. The project is in the area of unconventional computation, carried out at NTNU Trondheim.
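The growth mechanism described above can be illustrated with a single synchronous CA update step. The rule below (grow where exactly one von Neumann neighbour is already structure) is a toy assumption for illustration only; the thesis uses its own rule tables and multiple cooperating sub-CAs.

```python
def grow_step(grid):
    """One synchronous update of a toy growth CA.

    An empty cell (0) becomes structure (1) when exactly one of its four
    von Neumann neighbours is already structure -- a simple rule that
    grows branching arms outward from a seed cell.
    """
    h, w = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 0:
                live = sum(
                    grid[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w
                )
                if live == 1:
                    nxt[y][x] = 1
    return nxt
```

Starting from one seed cell, repeated application of `grow_step` grows a cross-shaped structure; a genetic algorithm, as in the thesis, would search over such rules to reward stability and desired substructures.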
65

Opponent Modeling and Strategic Reasoning in the Real-time Strategy Game Starcraft

Fjell, Magnus Sellereite, Møllersen, Stian Veum January 2012 (has links)
Since the release of BWAPI in 2009, StarCraft has taken the position as the leading platform for research on artificial intelligence in real-time strategy games. With competitions held annually at AIIDE and CIG, there is much prestige in having an agent compete and do well. This thesis presents a model for opponent modeling and strategic reasoning in StarCraft.

We present a method for constructing a model based on strategies, in the form of build orders, learned from expert demonstrations. This model is aimed at recognizing the strategy of the opponent and selecting a strategy capable of countering it. The method puts weight on the ordering and timing of buildings in order to do advanced recognition.
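Recognition over build orders with emphasis on ordering and timing, as described above, can be sketched as a distance between an observed sequence of (building, time) pairs and a library of known build orders. The penalty weights and the example strategies below are illustrative assumptions, not the thesis's actual metric.

```python
def build_order_distance(observed, template, time_weight=0.01):
    """Score how well an observed build matches a known build order.

    Both arguments are lists of (building, time) pairs. The score
    penalises out-of-order buildings and timing differences, reflecting
    an emphasis on ordering *and* timing of buildings.
    """
    cost = 0.0
    for i, (building, t) in enumerate(observed):
        # Find the same building in the template, preferring nearby positions.
        matches = [j for j, (b, _) in enumerate(template) if b == building]
        if not matches:
            cost += 1.0  # building does not occur in this strategy at all
            continue
        j = min(matches, key=lambda j: abs(j - i))
        cost += abs(j - i) * 0.5                       # ordering penalty
        cost += abs(template[j][1] - t) * time_weight  # timing penalty
    return cost

def recognise(observed, library):
    """Return the name of the closest known build order."""
    return min(library, key=lambda name: build_order_distance(observed, library[name]))
```

Once the opponent's strategy is recognised, a counter-strategy could be looked up from a table mapping each known build order to the build that beats it.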
66

Automatic Fish Classification : Using Image Processing and Case-Based Reasoning

Eliassen, Lars Moland January 2012 (has links)
Counting and classifying fish moving upstream in rivers to spawn is a useful way of monitoring the population of different species. Today there exist some commercial solutions, along with some research that addresses the area. Case-based reasoning is a process that can be used to solve new problems based on previous ones. This thesis studies the possibilities of combining image processing techniques and case-based reasoning to classify species of fish that are similar to each other in shape, size and color. Methods for image preprocessing are discussed and tested. Methods for feature extraction and a case-based reasoning prototype are proposed, implemented and tested, with promising results.
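The retrieve-and-reuse core of such a classifier can be sketched as nearest-case retrieval over extracted image features: each stored case pairs a feature vector with a known species, and the closest case's solution is reused as the classification. The feature values in the usage example are made up for illustration; the thesis derives its features from image processing.

```python
import math

def retrieve(case_base, query):
    """Nearest-case retrieval for classification.

    Each case is a (feature_vector, species) pair. The query's features
    are compared with Euclidean distance, and the species of the closest
    case is reused as the classification -- the retrieve/reuse steps of
    the CBR cycle in miniature.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(case_base, key=lambda case: dist(case[0], query))
    return best[1]
```

A full CBR prototype would add the revise and retain steps, growing the case base as corrected classifications accumulate.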
67

Controlling a Signal-regulated Pedestrian Crossing using Case-based Reasoning

Kheradmandi, Øyvind Shahin Berntsen, Strøm, Fredrick January 2012 (has links)
The traffic domain, and in particular the domain of traffic control, is highly complex and uncertain. A large network of roads, signal control systems, vehicles, pedestrians and other traffic units makes the domain intractable. Great amounts of data are available from different parts of the traffic system, so there is a need for a method that can take advantage of this data in a systematic manner.

In this thesis, we present a prototype Case-based Reasoning (CBR) system whose purpose is to control traffic at a signal-regulated pedestrian crossing. The system uses pedestrian and vehicle data to make decisions in real time. The system is created as an OSGi bundle and uses the CVIS (Cooperative Vehicle-Infrastructure System) framework to enable communication with other traffic systems and traffic units. myCBR is used as a framework to simplify the process of retrieving and reusing cases. Experts from the Norwegian Public Roads Administration were an important resource in defining the structure of the cases and in filling the case base with useful cases. Pedestrian data is obtained using a Kinect sensor, and the Intention-based Sliding Doors system created by Solem, a previous MSc student in our group, is integrated to interpret the intention of pedestrians at the crossing. Vehicle data is obtained using simulation software called SCANeR Studio.

The results of the project showed that the CBR system adapted to the current traffic situation, and that correct cases were retrieved. These tests were performed in a limited test environment; to evaluate the system properly, tests in a real environment are necessary.
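Retrieval in tools like myCBR is typically built on a weighted amalgamation of per-attribute local similarities, with the weights supplied by domain experts, much as the Roads Administration experts shaped the cases here. The attribute names, weights and falloff ranges below are illustrative assumptions, not the thesis's actual case model.

```python
def global_similarity(query, case, local_sims, weights):
    """Weighted-sum amalgamation of local similarities.

    `local_sims` maps each attribute to a function returning a similarity
    in [0, 1]; `weights` holds expert-assigned importances. This mirrors
    the kind of similarity model one configures in a tool like myCBR.
    """
    total = sum(weights.values())
    return sum(
        weights[attr] * local_sims[attr](query[attr], case[attr])
        for attr in weights
    ) / total

def numeric_sim(max_diff):
    """Local similarity for a numeric attribute: linear falloff to 0."""
    return lambda a, b: max(0.0, 1.0 - abs(a - b) / max_diff)
```

The case with the highest global similarity to the current sensor readings would then be retrieved, and its signal decision reused.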
68

Revolve Analyzer : Development of racing data analysis software

Møllersen, Lauritz, Stadheim, Per Øyvind January 2012 (has links)
Contains a prestudy of racing data analysis, a detailed architecture of the analysis software, and implementation details.
69

AppSensor : Attack-aware applications compared against a web application firewall and an intrusion detection system

Thomassen, Pål January 2012 (has links)
The thesis takes a look at the OWASP AppSensor project, which is built around the idea of detecting attacks inside the application itself. The thesis compares OWASP AppSensor against both a web application firewall and an intrusion detection system. The comparison is based on a short literature study and on an experiment. The experiment consisted of a set of attacks, based on the OWASP Top Ten list, executed against a simple bank web application. In the experiment, the intrusion detection system, the web application firewall and the AppSensor detection points inside the application were tested to see which attacks they were able to detect. The results were quite satisfying for both the web application firewall and AppSensor, meaning that they detected many attacks, but AppSensor's detection was slightly better.
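The idea of a detection point inside the application can be sketched as a small in-app check that counts suspicious events per user and raises a flag once a threshold is crossed. The pattern and threshold below are simplified assumptions; the real AppSensor project defines a whole catalogue of detection points and response actions.

```python
import re
from collections import defaultdict

# Naive SQL-injection indicator; real detection points are more precise.
SQLI_PATTERN = re.compile(r"('|--|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)

class DetectionPoint:
    """A minimal in-application detection point in the spirit of AppSensor.

    Unlike a perimeter firewall or IDS, this runs inside the application,
    so it sees inputs per authenticated user and can track repeat offenders.
    """
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.events = defaultdict(int)

    def check_input(self, user, value):
        """Record a suspicious event if `value` looks malicious; return
        True once this user's event count reaches the threshold."""
        if SQLI_PATTERN.search(value):
            self.events[user] += 1
        return self.events[user] >= self.threshold
```

An application would call `check_input` on untrusted fields and trigger a response (lockout, alert) when it returns True.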
70

Bruk av kunstig intelligens for å oppdage innbrudd i datasystemer / Using Artificial Intelligence methods for Intrusion Detection

Grodås, Ole Morten January 2012 (has links)
With its enormous growth, the Internet has developed into a lucrative domain for organized crime. As with other types of organized crime, most of the activity is motivated by financial gain. In addition to financially motivated threat actors, some are apparently driven by political motives, such as nation-state intelligence organizations and cyber-terrorists. An important part of computer crime is breaking into computer systems and securing future control over them. When cyber-criminals have managed to gain access to a system, they often install a hidden program to secure future access. This program is called a bot, and it recruits the compromised machine into a botnet. Several bots under common central administration are called a botnet.

This thesis describes the design of a botnet detector and reports the results from testing the detector on real data from an organization in Norway. The proposed system is designed around a classic misuse detection system. It takes as input network activity logs such as NetFlow, DNS logs and HTTP logs, and searches through them with a large signature set assembled from various signature sets that are freely shared on the Internet. The detector is based on four main components: 1) an algorithm for quantifying the risk represented by a signature, 2) an algorithm for whitelisting bad signatures that would create false positives, 3) a search engine for searching log files with a large signature set, and 4) an algorithm for identifying compromised computers by aggregating alarm data.

Taken as a whole, the components appear to give a significant improvement over intrusion detection based on plain signature search. One of the most important improvements is that the system makes it much easier to handle bad signatures that create many false positives; the improvement seems to be a combination of whitelisting some of the bad signatures and shifting the focus from working with alarms directly to working with aggregated per-client risks. The system is complementary and synergistic to some of the recently proposed systems in the research literature, such as Exposure (Bilge, Kirda, Kruegel, & Balduzzi, 2011) and Notos (Antonakakis, Perdisci, Dagon, Lee, & Feamster, 2010).
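The interplay of the four components can be sketched as a small aggregation pipeline: drop whitelisted signatures, weight each remaining alarm by its signature's risk score, and rank clients by total risk. The IPs, signature IDs and risk weights below are illustrative assumptions.

```python
from collections import defaultdict

def client_risks(alarms, signature_risk, whitelist):
    """Aggregate per-signature alarms into per-client risk scores.

    `alarms` is an iterable of (client_ip, signature_id) pairs,
    `signature_risk` maps each signature to a risk weight (component 1),
    and `whitelist` holds noisy signatures to ignore (component 2).
    Clients are ranked by total risk (component 4), shifting the analyst's
    focus from individual alarms to aggregated client risk.
    """
    risks = defaultdict(float)
    for client, sig in alarms:
        if sig in whitelist:
            continue  # whitelisted: known source of false positives
        risks[client] += signature_risk.get(sig, 0.0)
    return sorted(risks.items(), key=lambda kv: kv[1], reverse=True)
```

In the full system, a signature search engine (component 3) over NetFlow, DNS and HTTP logs would produce the alarm stream consumed here.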
