  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Detekce a rozpoznání dopravního značení / Traffic signs detection and recognition

Dvořák, Michal January 2015 (has links)
The goal of this thesis is to apply computer vision methods to the detection and identification of traffic signs in an image. The final application analyzes the video feed from a camcorder mounted in a vehicle, with a focus on using computing resources efficiently so that signs can be identified in the video stream in real time.
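A common first stage of such a pipeline is colour thresholding for the red border of prohibition signs, followed by a bounding box around the candidate region. The sketch below illustrates that idea only; the function names, thresholds, and toy frame are illustrative assumptions, not the thesis's actual implementation.

```python
# Illustrative first stage of traffic sign detection: threshold "sign-red"
# pixels, then box the candidate region. Thresholds are assumed values.

def red_mask(image, r_min=150, rg_gap=60):
    """Return a binary mask of 'sign-red' pixels in an RGB image (nested lists)."""
    return [[1 if (r >= r_min and r - g >= rg_gap and r - b >= rg_gap) else 0
             for (r, g, b) in row]
            for row in image]

def bounding_box(mask):
    """Smallest (top, left, bottom, right) box covering all set pixels, or None."""
    coords = [(y, x) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    if not coords:
        return None
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    return (min(ys), min(xs), max(ys), max(xs))

# Tiny synthetic frame: a 2x2 red patch on a grey background.
grey, red = (90, 90, 90), (220, 30, 30)
frame = [[grey] * 5 for _ in range(5)]
for y in (1, 2):
    for x in (2, 3):
        frame[y][x] = red

box = bounding_box(red_mask(frame))
print(box)  # (1, 2, 2, 3)
```

In a real-time system this kind of cheap colour gating prunes the image before a more expensive classifier is run on each candidate box.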
32

Data-Adaptive Multivariate Density Estimation Using Regular Pavings, With Applications to Simulation-Intensive Inference

Harlow, Jennifer January 2013 (has links)
A regular paving (RP) is a finite succession of bisections that partitions a multidimensional box into sub-boxes using a binary tree-based data structure, with the restriction that an existing sub-box in the partition may only be bisected on its first widest side. Mapping a real value to each element of the partition gives a real-mapped regular paving (RMRP) that can be used to represent a piecewise-constant density estimate on a multidimensional domain. The RP structure allows real arithmetic to be extended to density estimates represented as RMRPs. Other operations, such as computing marginal and conditional functions, can also be carried out very efficiently by exploiting these arithmetical properties and the binary tree structure. The purpose of this thesis is to explore the potential for density estimation using RPs. The thesis is structured in three parts. The first part formalises the operational properties of RP-structured density estimates. The next part considers methods for creating a suitable RP partition for an RMRP-structured density estimate. The advantages and disadvantages of an existing Markov chain Monte Carlo algorithm are investigated, and the algorithm is extended with a semi-automatic method for heuristically diagnosing convergence of the chain. An alternative method is also proposed that uses an RMRP to approximate a kernel density estimate. RMRP density estimates are not differentiable and have slower convergence rates than good multivariate kernel density estimators; their advantages relate to their operational properties. The final part of this thesis describes a new approach to Bayesian inference for complex models with intractable likelihood functions that exploits these operational properties.
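The defining constraint above, that a sub-box may only be bisected on its first widest side, can be sketched as a small binary tree. The class and method names below are illustrative; the thesis's actual RP implementation differs in detail.

```python
# A minimal regular-paving sketch: each node's box is split at the midpoint
# of its first widest dimension, producing a binary tree of sub-boxes.

class RPNode:
    def __init__(self, box):
        self.box = box          # list of (low, high) intervals, one per dimension
        self.left = self.right = None

    def bisect(self):
        """Split this box at the midpoint of its first widest side."""
        widths = [hi - lo for lo, hi in self.box]
        d = widths.index(max(widths))          # index of the FIRST widest side
        lo, hi = self.box[d]
        mid = (lo + hi) / 2.0
        left_box = list(self.box);  left_box[d] = (lo, mid)
        right_box = list(self.box); right_box[d] = (mid, hi)
        self.left, self.right = RPNode(left_box), RPNode(right_box)
        return self.left, self.right

def leaves(node):
    """All leaf nodes of the paving, i.e. the current partition."""
    if node.left is None:
        return [node]
    return leaves(node.left) + leaves(node.right)

def volume(box):
    v = 1.0
    for lo, hi in box:
        v *= (hi - lo)
    return v

# Partition [0,4] x [0,2]: the first split must bisect dimension 0 (width 4).
root = RPNode([(0.0, 4.0), (0.0, 2.0)])
l, r = root.bisect()
l.bisect()   # left child is 2x2; ties go to the first widest side, dim 0
print([n.box for n in leaves(root)])
print(sum(volume(n.box) for n in leaves(root)))  # 8.0 (partition preserves volume)
```

Attaching a real value (e.g. a normalised count) to each leaf box would turn this partition into the piecewise-constant RMRP density estimate described above.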
33

Detekce pohybujících se objektů ve video sekvenci / Moving Objects Detection in Video Sequences

Němec, Jiří January 2012 (has links)
This thesis deals with methods for detecting people and tracking objects in video sequences. An application for detecting and tracking players in video recordings of sporting events, e.g. hockey or basketball matches, is proposed and implemented. The application combines histograms of oriented gradients (HOG) with classification based on SVM (Support Vector Machines) to detect players in each image, and uses a particle filter to track the detected players. The whole system was fully tested, and the results are presented in graphs and tables with accompanying descriptions.
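The particle-filter tracking step mentioned above can be sketched in one dimension: predict each particle forward, weight it by how well it explains the new detection, and resample. The motion model, noise values, and data below are illustrative assumptions, not the thesis's tracker.

```python
# Minimal 1-D particle filter sketch: predict / weight / resample.
import math
import random

def particle_filter_step(particles, measurement, motion_noise=0.5, meas_std=1.0):
    """One filter cycle: predict, weight by the measurement likelihood, resample."""
    # Predict: each particle drifts under assumed motion noise.
    predicted = [p + random.gauss(0.0, motion_noise) for p in particles]
    # Update: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * meas_std ** 2))
               for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(predicted, weights=weights, k=len(predicted))

random.seed(0)
particles = [random.uniform(0.0, 20.0) for _ in range(500)]
for z in [10.0, 10.5, 11.0, 11.5]:   # noisy detections of a moving player
    particles = particle_filter_step(particles, z)

estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # posterior mean, close to the last detection
```

In the full application the state would be a 2-D image position (or a bounding box) and the likelihood would come from the HOG/SVM detector response rather than a hand-set Gaussian.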
34

Hodnocení provozu malých fotovoltaických elektráren s připojením do sítě nn / Operational Evaluation of the Small Photovoltaic Power Plants Connected to the LV network

Černý, Jaroslav January 2010 (has links)
The objective of this thesis is to perform practical measurements on two small photovoltaic power plants, compare the results with a theoretical calculation, and evaluate how the two differ. The measured data are divided into three groups. The first group evaluates the overall electricity production in individual months and compares it with the theoretical prediction that feeds the return-on-investment calculation. The second group shows the evolution of electricity production over characteristic days: we can compare what an ideal day for production looks like, i.e. a sunny, clear-sky day, with the opposite case when it is cloudy and raining all day. Another extreme is the rapid rise or drop in output that can occur at a photovoltaic plant, and the periods of time when such situations can arise. The third group contains statistical data on electricity production, processed as histograms and production polygons that graphically illustrate the distribution of production across individual months. These data can support a decision about the form of sale, i.e. whether the so-called green bonus is economical for us or not.
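The third group of results, histograms of daily production, amounts to binning daily yields into classes and counting days per class. The sketch below shows that computation on made-up sample data; none of the figures are the thesis's measurements.

```python
# Illustrative production histogram: bin daily yields (kWh) into classes
# of fixed width and count the days in each class. Data are fictitious.

def production_histogram(daily_kwh, bin_width=2.0):
    """Count days falling into each yield class of `bin_width` kWh."""
    bins = {}
    for kwh in daily_kwh:
        lo = int(kwh // bin_width) * bin_width   # lower edge of the class
        bins[lo] = bins.get(lo, 0) + 1
    return dict(sorted(bins.items()))

# Twenty (fictitious) daily yields in kWh for a small rooftop plant.
may = [6.1, 7.8, 2.3, 9.5, 8.8, 0.4, 5.2, 7.1, 9.9, 8.2,
       3.6, 7.7, 8.9, 9.1, 1.2, 6.6, 8.4, 9.7, 7.3, 8.0]

hist = production_histogram(may)
print(hist)              # {0.0: 2, 2.0: 2, 4.0: 1, 6.0: 6, 8.0: 9}
total = sum(may)
print(round(total, 1))   # 135.8 -- the monthly total fed into the payback calculation
```

Plotting the class counts as connected points rather than bars gives the production polygon mentioned in the abstract.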
35

Vizuální detekce osob v komerčních aplikacích / Human detection in commercial applications

Černín, Jan January 2012 (has links)
The aim of this master's thesis is to derive and implement image processing methods for people detection and tracking in images and videos. The overall solution combines modern approaches and recently published methods. The proposed algorithm can build the trajectory of a person moving through indoor spaces, even under full or partial occlusion for a short period of time. The scene of interest is surveyed by a static camera with a direct view of the targets. The selected methods are implemented in the C# programming language on top of the OpenCV library, and a graphical user interface was created to present the algorithm's final output.
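A trajectory-building step of this kind typically associates each new detection with the nearest existing track endpoint and appends it, so a person's path accumulates frame by frame and survives a short occlusion. The sketch below (in Python rather than the thesis's C#) uses an assumed gating distance and data layout for illustration only.

```python
# Greedy nearest-neighbour association of detections to track endpoints.
# A track is a list of (x, y) points; `gate` limits how far a match may be.

def associate(tracks, detections, gate=50.0):
    """Attach each detection to the closest track within the gate, else start a new track."""
    for det in detections:
        best, best_d2 = None, gate * gate
        for track in tracks:
            x, y = track[-1]                      # current track endpoint
            d2 = (det[0] - x) ** 2 + (det[1] - y) ** 2
            if d2 < best_d2:
                best, best_d2 = track, d2
        if best is not None:
            best.append(det)       # extend the matched trajectory
        else:
            tracks.append([det])   # unmatched detection starts a new track
    return tracks

tracks = [[(100, 200)], [(400, 220)]]
frames = [[(104, 203), (396, 221)],   # both people move slightly
          [(109, 207)],               # second person occluded this frame
          [(113, 210), (391, 224)]]   # reappears near the old endpoint
for dets in frames:
    associate(tracks, dets)

print([len(t) for t in tracks])  # [4, 3] -- the occluded track resumes, no new track spawned
```

Because the occluded track keeps its last endpoint, the reappearing detection falls back inside its gate and the trajectory continues rather than fragmenting.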
36

Učení a detekce objektů různých tříd v obraze / Multi Object Class Learning and Detection in Image

Chrápek, David January 2012 (has links)
This paper focuses on learning and recognising objects in images and image streams, more specifically on learning and recognising humans or their parts when they are partially occluded, with possible use on robotic platforms. The approach is based on Histogram of Oriented Gradients (HOG) features, which cope well with the different poses a human can assume. The human is split into several parts, each detected individually, and a voting scheme then lets the detected parts vote for the final positions of the people found. A linear SVM is used to train the detector, and a Kalman filter stabilises the detector when processing an image stream.
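The stabilisation step can be sketched as a one-dimensional Kalman filter over a detection coordinate: predict, then blend the prediction with the noisy per-frame measurement via the Kalman gain. The constant-position model and noise variances below are illustrative assumptions, not the paper's settings.

```python
# Minimal 1-D Kalman filter for smoothing jittery per-frame detections.

def kalman_step(x, p, z, q=0.01, r=1.0):
    """One predict/update cycle for a constant-position model.

    x, p: state estimate and its variance; z: new measurement;
    q, r: assumed process and measurement noise variances.
    """
    # Predict: the state is assumed constant, so only uncertainty grows.
    p = p + q
    # Update: blend prediction and measurement by the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

x, p = 0.0, 100.0                        # vague initial guess
for z in [10.2, 9.8, 10.1, 9.9, 10.0]:   # jittery detections around 10
    x, p = kalman_step(x, p, z)

print(round(x, 1))  # 10.0 -- the smoothed position
```

In the full system the same update would run per tracked person on the voted 2-D position, with a motion model instead of the constant-position assumption.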
37

DESIGN AND DEVELOPMENT OF AN AUTONOMOUS SOCCER-PLAYING ROBOT

Olson, Steven A. R., Dawson, Chad S., Jacobson, Jared October 2002 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / This paper describes the construction of an autonomous soccer-playing robot as part of a senior design project at Brigham Young University. Each participating team designed and built a robot to compete in an annual tournament. To accomplish this, each team had access to images received from a camera placed above a soccer field. The creation of image processing and artificial intelligence software was required to allow the robot to compete against other robots in one-on-one matches. Each participating team was given resources to accomplish this project. This paper contains a summary of the experiences gained by team members and a description of the key components created for the robot, named Prometheus, to compete in and win the annual tournament.
38

Statistical Regular Pavings and their Applications

Teng, Gloria Ai Hui January 2013 (has links)
We propose using statistical regular pavings (SRPs) as an efficient and adaptive statistical data structure for processing massive, multi-dimensional data. A regular paving (RP) is an ordered binary tree that recursively bisects a box in $\mathbb{R}^{d}$ along its first widest side. An SRP extends an RP by allowing mutable caches of recursively computable statistics of the data. In this study we use SRPs for two major applications: estimating histogram densities and summarising large spatio-temporal datasets. The SRP histograms produced are $L_1$-consistent density estimators driven by a randomised priority queue that adaptively grows the SRP tree, formalised as a Markov chain over the space of SRPs. One way to select an estimate is to run a Markov chain over the space of SRP trees, also initialised by the randomised priority queue, in which the SRP tree shrinks or grows adaptively through pruning or splitting operations; the stationary distribution of this chain is the posterior distribution over the space of all possible histograms. We then take advantage of the recursive nature of SRPs to compute arithmetic averages efficiently, averaging the states sampled from the stationary distribution to obtain the posterior mean histogram estimate. We also show that SRPs can summarise large datasets by working with a dataset containing high-frequency aircraft position information. Recursively computable statistics can be stored for variable-sized regions of airspace, and the regions themselves can be created automatically to reflect the varying density of aircraft observations, dedicating more computational resources and providing more detailed information in areas with more air traffic.
In particular, SRPs are able to very quickly aggregate or separate data with different characteristics so that data describing individual aircraft or collected using different technologies (reflecting different levels of precision) can be stored separately and yet also very quickly combined using standard arithmetic operations.
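The priority-queue-driven refinement described above can be sketched as repeatedly bisecting whichever leaf currently holds the most points, so the partition refines where the data are dense. The function name, split budget, and toy data below are illustrative assumptions, not the thesis's SRP implementation.

```python
# Adaptive partitioning sketch: a max-priority queue (heapq with negated
# counts) always splits the fullest leaf on its first widest side.
import heapq

def srp_histogram(points, box, max_leaves=4):
    """Return (box, count) leaves of an adaptively refined partition of `box`."""
    leaves = [(-len(points), box, points)]     # max-heap via negated counts
    heapq.heapify(leaves)
    while len(leaves) < max_leaves:
        neg, bx, pts = heapq.heappop(leaves)   # fullest leaf
        widths = [hi - lo for lo, hi in bx]
        d = widths.index(max(widths))          # first widest side
        lo, hi = bx[d]
        mid = (lo + hi) / 2.0
        lbox = list(bx); lbox[d] = (lo, mid)
        rbox = list(bx); rbox[d] = (mid, hi)
        lpts = [p for p in pts if p[d] < mid]
        rpts = [p for p in pts if p[d] >= mid]
        heapq.heappush(leaves, (-len(lpts), lbox, lpts))
        heapq.heappush(leaves, (-len(rpts), rbox, rpts))
    return [(bx, -neg) for neg, bx, _ in leaves]

# Points clustered in the lower-left of the unit square.
pts = [(0.1, 0.1), (0.15, 0.2), (0.2, 0.1), (0.25, 0.15), (0.9, 0.9)]
for bx, count in sorted(srp_histogram(pts, [(0.0, 1.0), (0.0, 1.0)])):
    print(bx, count)   # small boxes where points cluster, large ones elsewhere
```

Normalising each leaf count by the total count and the leaf's volume would give the piecewise-constant histogram density; caching such counts per node is what makes an SRP "statistical".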
39

True random number generation using genetic algorithms on high performance architectures

MIJARES CHAN, JOSE JUAN 01 September 2016 (has links)
Many real-world applications use random numbers generated by pseudo-random number generators (PRNGs) and true random number generators (TRNGs). Unlike pseudo-random number generators, which rely on an input seed to generate random numbers, a TRNG relies on a non-deterministic source to generate aperiodic random numbers. In this research, we develop a novel and generic software-based TRNG using a random source extracted from today's compute architectures. We show that non-deterministic events such as race conditions between compute threads follow a near-Gamma distribution, independent of the architecture, multi-cores or co-processors. Our design improves the distribution towards a uniform distribution, ensuring the stationarity of the sequence of random variables. We address the statistical deficiencies of the random numbers with a post-processing stage based on a heuristic evolutionary algorithm. Our post-processing algorithm is composed of two phases: (i) histogram specification and (ii) stationarity enforcement. We propose two techniques for histogram equalization, Exact Histogram Equalization (EHE) and Adaptive EHE (AEHE), that map the distribution of the random numbers to a user-specified distribution. EHE is an offline algorithm with O(N log N) complexity; AEHE is an online algorithm that improves performance using a sliding window and achieves O(N). Both algorithms ensure a normalized entropy in (0.95, 1.0]. The stationarity enforcement phase uses genetic algorithms to mitigate the statistical deficiencies remaining after histogram equalization by permuting the random numbers until wide-sense stationarity is achieved. By measuring the standard deviation of the power spectral density, we ensure that the quality of the numbers generated by the genetic algorithms is within the level of error specified by the user. 
We develop two algorithms: a naive algorithm with an expected exponential complexity of E[O(e^N)], and an accelerated FFT-based algorithm with an expected quadratic complexity of E[O(N^2)]. The accelerated FFT-based algorithm exploits the parallelism found in genetic algorithms on a homogeneous multi-core cluster. We evaluate the effects of scalability and data size using the standardized battery of tests TestU01, finding the tuning parameters that ensure wide-sense stationarity over long runs. / October 2016
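The core idea of exact histogram specification can be sketched in a few lines: sort the N samples (the O(N log N) cost), then replace each sample with the target distribution's value at its rank, so the output histogram matches the target exactly. The function name, uniform target, and toy data below are illustrative assumptions, not the thesis's EHE algorithm.

```python
# Rank-based exact histogram specification: map any N samples onto an
# exactly uniform grid over [0, 1). Ties are broken by the stable sort.

def exact_histogram_specification(samples):
    """Map samples to an exactly uniform distribution on [0, 1) by rank."""
    n = len(samples)
    order = sorted(range(n), key=lambda i: samples[i])   # O(N log N) sort
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = rank / n          # rank-th quantile of Uniform[0, 1)
    return out

# A skewed input sequence (e.g. raw race-condition timings).
raw = [0.01, 0.02, 0.02, 0.05, 0.9, 0.91, 0.4, 0.11]
eq = exact_histogram_specification(raw)
print(sorted(eq))  # [0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875]
```

Because the mapping preserves the original ordering of the sequence, any serial correlation survives it; that is why the thesis needs the separate stationarity-enforcement phase to permute the output afterwards.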
40

Advanced Image Processing Using Histogram Equalization and Android Application Implementation

Gaddam, Purna Chandra Srinivas Kumar, Sunkara, Prathik January 2016 (has links)
Nowadays, the conditions under which an image is taken may lead to near-zero visibility for the human eye, usually because of a lack of clarity, much like the effects the earth's atmosphere has on images through haze, fog, and other daylight phenomena. Useful information captured under such conditions should therefore be enhanced so that objects and other details can be recognized. Many image processing algorithms have been implemented to deal with images degraded by low light or by haze affecting the imaging device; these algorithms also provide a degree of non-linear contrast enhancement. We took existing algorithms, SMQT (Successive Mean Quantization Transform), the V-transform, and histogram equalization, to improve the visual quality of digital pictures with wide-range scenes and irregular lighting conditions. These algorithms were applied in two different ways and tested on images suffering from low light and colour shifts, and they succeeded in producing enhanced images, improving colour and contrast and giving accurate results on low-light images. Histogram equalization is implemented by interpreting the image histogram as a probability density function: the cumulative distribution function is applied to the image to obtain accumulated histogram values, and the pixel values are then remapped according to their probability and spread over the histogram. From these algorithms we chose histogram equalization; MATLAB code was taken as a reference and adapted into an API (Application Program Interface) implementation in Java, and we confirmed that the application works correctly with a reduced execution time.
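The histogram equalization procedure described in the abstract, histogram as probability density, CDF as remapping function, can be sketched directly. The 3-bit toy image below is illustrative (the thesis works in MATLAB and Java).

```python
# Classic histogram equalization: build the histogram, form its CDF, and
# remap each pixel through the scaled CDF to spread values over the range.

def equalize(pixels, levels=8):
    """Histogram-equalize a flat list of integer pixel values in [0, levels)."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the pixel values.
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running / n)
    # Remap: each value moves to its CDF quantile, scaled to the full range.
    return [round(cdf[p] * (levels - 1)) for p in pixels]

dark = [0, 0, 1, 1, 1, 2, 2, 3]   # values crowded in the low range
print(equalize(dark))             # [2, 2, 4, 4, 4, 6, 6, 7]
```

The crowded low values are pushed apart across the available range, which is exactly the contrast stretch that makes low-light images more legible.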
