21

Vaizdo atpažinimas dirbtiniais neuroniniais tinklais / Image recognition with artificial neural networks

Tamošiūnas, Darius 24 July 2014 (has links)
The thesis describes a study in which a program was developed, using OpenCV and the ANN error back-propagation algorithm, that detects and attempts to classify faces. In the course of the work, the OpenCV function library was studied in depth; the theoretical material on ANNs was analyzed; software was developed that, using a webcam, detects and attempts to classify faces; an experimental study was carried out; the program's shortcomings were identified; and alternative approaches were suggested. The resulting software can be used for educational purposes.
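As a rough illustration of the detection step described in this abstract, the following is a minimal Python/OpenCV sketch of webcam face detection with a bundled Haar cascade. It is not the author's program (the thesis pairs OpenCV detection with an ANN back-propagation classifier), and the classifier stage is only indicated by a comment.

```python
# Hypothetical sketch: webcam face detection with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Each detected face region would be cropped, resized and passed to an
        # ANN classifier in the thesis's pipeline; here we only draw the box.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```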
22

Intravenous bag monitoring with Convolutional Neural Networks

Svensson, Göran, Westlund, Jonas January 2018 (has links)
Drip bags are used in hospital environments to administer drugs and nutrition to patients. Ensuring that they are used correctly and refilled in time is important for the safety of patients. This study examines the use of a Convolutional Neural Network (CNN) to monitor the fluid levels of drip bags via image recognition, to potentially form the base of an early-warning system and assist in making medical care more efficient. Videos of drip bags were recorded as they emptied their contents in a controlled environment and from different angles. A CNN was built to analyze the recorded data in order to predict a bag's fluid level with 5% interval precision from a given image. The results show that the CNN used performs poorly when monitoring fluid levels in drip bags.
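A minimal sketch of how such a fluid-level classifier could be framed in Keras follows: the 5% interval precision is modelled as 21 discrete level bins. The layer sizes, input shape, and variable names are assumptions, not the network used in the thesis.

```python
# Illustrative sketch only: a small CNN that classifies a drip-bag image into
# one of 21 fluid-level bins (0%, 5%, ..., 100%).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_BINS = 21  # 5% interval precision -> 0..100% in steps of 5

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_BINS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(frames, level_bins, epochs=..., validation_split=0.2)
```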
23

Improvement of Automated Guided Vehicle's image recognition : Object detection and identification

Xin, Zhu January 2017 (has links)
An Automated Guided Vehicle (AGV) is a kind of material-conveying equipment that has been widely used in modern manufacturing systems. [1] It carries goods around the workshop along designated paths, and the ability to localize itself and recognize its surroundings is its essential technology. AGV navigation has been developed from several technologies such as fuzzy theory, neural networks and other intelligent techniques. Among them, visual navigation is one of the newer approaches: its path layout is easy to maintain and it can identify a variety of road signs. Compared with traditional methods, this approach offers better flexibility and robustness, since it can recognize more than one path branch with high anti-jamming capability. Recognizing the environment from imagery can enhance the safety and dependability of an AGV, make it move intelligently, and open broader prospects for it. University West has a Patrolbot, an AGV robot with basic functions, and the task is to enhance its vision-analysis ability so that it becomes more practical and flexible. The project adds object detection, object recognition and object localization functions to the Patrolbot. This thesis project develops methods based on image recognition, deep learning, machine vision, Convolutional Neural Networks and related technologies. The Patrolbot serves as the platform for demonstrating the results, but the same kind of program can be used on other machines. The report describes methods of navigation, image segmentation and object recognition. After analyzing the different image-recognition methods, it becomes clear that neural networks have advantages for image recognition, since they reduce the number of parameters and shorten training and analysis time; Convolutional Neural Networks are therefore introduced in detail. The way image recognition is achieved with a Convolutional Neural Network is then presented, and, in order to recognize several objects at the same time, an image-segmentation step is presented as well. Finally, so that this image-recognition process can be used widely, the ability to apply transfer learning becomes important, and a transfer-learning method is therefore presented to meet customized requirements.
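As a sketch of the transfer-learning step mentioned above, the following Keras snippet reuses an ImageNet-pretrained backbone and trains only a new classification head for custom object classes. The choice of MobileNetV2, the input size, and the five-class head are illustrative assumptions; the thesis does not specify this exact configuration.

```python
# Hedged sketch of transfer learning: freeze a pre-trained CNN and retrain only
# a small head for the AGV's custom object classes.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pre-trained features, train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(5, activation="softmax"),  # assumed: 5 workshop object classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Because only the small head is trained, a usable recognizer can be obtained from relatively few labelled images of the new objects, which is what makes the approach attractive for customized requirements.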
24

Comparison of Different Techniques of Web GUI-based Testing with the Representative Tools Selenium and EyeSel

Jiang, Haozhen, Chen, Yi January 2017 (has links)
Context. Software testing is becoming more and more important in the software development life-cycle, especially for web testing. Selenium is one of the most widely used property-based Graphical User Interface (GUI) web-testing tools. Nevertheless, it has limitations: for instance, Selenium cannot test web components inside certain plugins or HTML5 video frames, yet it is important for testers to verify the functionality of plugins and videos on websites. Recently, image recognition-based GUI testing has been introduced, which locates and interacts with the components under test on a website by image recognition. Only a few papers compare property-based GUI web testing with image recognition-based GUI testing, so we formulated our research objectives around this gap. Objectives. We compare the two techniques using EyeSel, a tool representing image recognition-based GUI testing, and Selenium, a tool representing property-based GUI testing. We evaluate and compare the strengths and drawbacks of the two tools by writing specific JUnit test scripts, analyze the comparative results, and assess whether EyeSel can overcome some of the limitations associated with Selenium. From this we conclude the benefits and drawbacks of property-based and image recognition-based GUI web testing. Methods. We conduct an experiment in which test cases for website components are developed both in Selenium and in EyeSel. The experiment is conducted in an educational environment with 50 diverse websites as subjects. The test scripts are written in Java and run in Eclipse, and the experiment data is collected for comparing and analyzing the two tools. Results. We use both quantitative and qualitative analysis. First, the quantitative analysis evaluates the effectiveness and efficiency of the two GUI web-testing tools: effectiveness is measured by the number of components each tool can test, and efficiency by test-case development time and execution time. The results are: (1) EyeSel can test more components in web testing than Selenium; (2) testers need more time to develop test cases with Selenium than with EyeSel; (3) Selenium executes the test cases faster than EyeSel; (4) result (1) indicates that the effectiveness of EyeSel is better than Selenium's, while results (2) and (3) indicate that the efficiency of EyeSel is better than Selenium's. Second, the qualitative analysis evaluates four quality characteristics (learnability, robustness, portability, functionality) of the two tools. It shows that the portability and functionality of Selenium are better than EyeSel's, while the learnability of EyeSel is better than Selenium's, and both have good robustness in web testing. Conclusions. After analyzing the comparison between Selenium and EyeSel, we conclude that (1) image recognition-based GUI testing is more effective than property-based GUI web testing; (2) image recognition-based GUI testing is more efficient than property-based GUI web testing; (3) the portability and functionality of property-based GUI web testing are better than those of image recognition-based GUI testing; (4) the learnability of image recognition-based GUI testing is better than that of property-based GUI web testing; and (5) both are good at different aspects of robustness.
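To make the contrast concrete, here is a minimal property-based example using Selenium's Python bindings (the study itself wrote its scripts in Java with JUnit); the URL and element IDs are hypothetical. An image-recognition tool such as EyeSel would instead locate the same control by matching a reference screenshot, which is why it can also reach plugin or video content that exposes no DOM properties.

```python
# Hedged sketch of property-based GUI web testing with Selenium (Python bindings).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/video-page")  # hypothetical page under test

# Property-based location: works only if the element exists in the DOM.
play_button = driver.find_element(By.ID, "play-button")   # hypothetical ID
play_button.click()
assert "playing" in driver.find_element(By.ID, "status").text  # hypothetical check

driver.quit()
```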
25

The machine refinement of raw graphic data for translation into a low level data base for computer aided architectural design (CAAD)

Leifer, David Mark January 1984 (has links)
It is argued that a significant disincentive against the adoption of CAAD systems by small private architectural practices is the awkwardness of communicating with computers when compared with traditional drawing-board techniques. This consideration, although perhaps not the dominant one, may be mitigated by the development of systems in which the onus of communicating is placed on the machine, through the medium of an architect's sketch plan drawing. In reaching this conclusion, a design morphology is suggested in which the creative generation of building designs is set in the context of the development of a 'data-base' of information which completely and consistently describes the architect's hypothetical building solution. This thesis covers research carried out by the author between 1981 and 1984, describing the theory, development and application of algorithms to interpret architects' sketch plan drawings and hence permit the encoding of building geometries for CAAD application programs.
26

Identifying illicit graphic in the online community using the neural network framework

Vega Ezpeleta, Emilio January 2017 (has links)
In this paper, two convolutional neural networks are estimated to classify whether or not an image contains a swastika. The images are gathered from the gaming platform Steam and by scraping a web search engine. The architecture of the networks is kept moderate, and the difference between the models lies in the final layer: the first model uses an average-type operation, while the second uses a conventional fully-connected layer at the end. The results show that the performance of the two models is similar, with a test error in the 6-9% range.
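To show what an "average-type operation" versus a "conventional fully-connected layer" can look like in practice, here is an illustrative Keras sketch of the two heads on a shared convolutional trunk; the layer sizes and input shape are assumptions, not the architectures estimated in the paper.

```python
# Illustrative sketch: same convolutional trunk, two different final layers.
import tensorflow as tf
from tensorflow.keras import layers, models

def trunk():
    # Fresh copy of the shared convolutional layers for each model.
    return [
        layers.Input(shape=(96, 96, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
    ]

# Model 1: average-type final operation (global average pooling over class maps).
gap_model = models.Sequential(trunk() + [
    layers.Conv2D(2, 3, padding="same"),   # one feature map per class
    layers.GlobalAveragePooling2D(),
    layers.Activation("softmax"),
])

# Model 2: conventional fully-connected final layer.
fc_model = models.Sequential(trunk() + [
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),  # swastika / no swastika
])
```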
27

Inteligentní manipulace s laboratorními objekty pomocí robotu ABB YuMi / Intelligent manipulation of laboratory objects using the ABB YuMi robot

Nevřiva, Václav January 2021 (has links)
The aim of this master's thesis is to design a laboratory station and a control program for the collaborative robot IRB 14000 YuMi, which uses an integrated effector camera to identify laboratory objects and monitor the progress of the task. The introductory part briefly introduces collaborative robots and describes in more detail the IRB 14000, on which the task is implemented, and the RobotStudio development environment together with the Integrated Vision extension. The following chapters describe the laboratory task itself, its solution, and the testing of the designed program.
28

High Performance Static Random Access Memory Design for Emerging Applications

Chen, Xiaowei January 2018 (has links)
The memory wall is becoming an increasingly serious bottleneck for the processing speed of microprocessors: the mismatch between CPUs and memories has been growing for three decades. SRAM was introduced as the bridge between the main memory and the CPU. It is designed to sit on the same die as the CPU and stores the temporary data and instructions that are to be processed by the CPU, so the performance of SRAMs has a direct impact on the performance of CPUs. With the massive amounts of data to be processed nowadays, there is a great need for high-performance CPUs, and three-dimensional CPUs and CPUs designed specifically for machine learning are gaining popularity. The objective of this work is to design high-performance SRAM for these two emerging applications. Firstly, a novel delay cell based on a dummy TSV is proposed to replace traditional delay cells for better timing control. Secondly, a unique SRAM with a novel architecture is custom designed for a high-performance machine-learning processor. Post-layout simulation shows that the SRAM works well with the processing core, and its design is optimized to work well with machine-learning processors based on convolutional neural networks. A prototype of the SRAM has also been taped out to further verify the design.
29

Automatický odhad nadmořské výšky z obrazu / Altitude Estimation from an Image

Vašíček, Jan January 2015 (has links)
This thesis is concerned with automatic altitude estimation from a single landscape photograph. I solved this task using convolutional neural networks. No suitable training dataset with image-altitude information was available, so I had to create a new one. To estimate human performance on the altitude-estimation task, an experiment with 100 subjects was conducted to measure the accuracy of human estimates of camera altitude from an image. The measured average estimation error of the subjects was 879 m. An automatic system based on convolutional neural networks outperforms humans with an average elevation error of 712 m. The proposed system can be used in more complex scenarios such as visual camera geo-localization.
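One plausible way to frame such an altitude estimator is sketched below in Keras: a CNN regressor trained and evaluated with mean absolute error in metres. The architecture, input size, and the choice of plain regression (rather than, say, altitude bins) are assumptions, not the author's network.

```python
# Hedged sketch: CNN regression of altitude (in metres) from a landscape photo.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                      # single output: altitude in metres
])
model.compile(optimizer="adam", loss="mae", metrics=["mae"])
# model.fit(photos, altitudes_m, epochs=..., validation_split=0.1)
```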
30

VTQuestAR: An Augmented Reality Mobile Software Application for Virginia Tech Campus Visitors

Yao, Zhennan 07 January 2021 (has links)
The main campus of Virginia Polytechnic Institute and State University (Virginia Tech) has more than 120 buildings. Campus visitors face problems recognizing a building, finding a building, obtaining directions from one building to another, and getting information about a building. The exploratory development research described herein resulted in an iPhone / iPad software application (app) named VTQuestAR that assists campus visitors by using Augmented Reality (AR) technology. Machine Learning (ML) technology is used to recognize a sample of 31 campus buildings in real time. The VTQuestAR app enables the user to have a visual interactive experience with those 31 campus buildings by superimposing building information on top of the building picture shown through the camera. The app also enables the user to get directions from the current location or a building to another building, displayed on a 2D map as well as an AR map. The user can perform complex searches on 122 campus buildings by building name, description, abbreviation, category, address, and year built, and can take multimedia notes during a campus visit. Our exploratory development research illustrates the feasibility of using AR and ML to provide much more effective assistance to visitors of any organization. / Master of Science / The main campus of Virginia Polytechnic Institute and State University (Virginia Tech) has more than 120 buildings. Campus visitors face problems recognizing a building, finding a building, obtaining directions from one building to another, and getting information about a building. The exploratory development research described herein resulted in an iPhone / iPad software application named VTQuestAR that assists campus visitors by using Augmented Reality (AR) and Machine Learning (ML) technologies. Our research illustrates the feasibility of using AR and ML to provide much more effective assistance to visitors of any organization.
