1

En jämförelse av maskininlärningsalgoritmer för uppskattning av cykelflöden baserat på cykelbarometer- och väderdata / A comparison of machine learning algorithms for estimating bicycle flows based on bicycle barometer and weather data

Aspegren, Sebastian, Dahlström, Jonas January 2016 (has links)
Context. Machine learning algorithms can be used to make predictions based on a variety of data. In our research we use data from a bicycle barometer located at a bike path in Malmö; the barometer counts the number of passing bicycles per day. Together with weather data, consisting of temperature and precipitation, we compare the accuracy of algorithms for estimating the number of cyclists. In this study we implement and test a variety of machine learning algorithms available in the software Weka. We rely on previous research to identify which algorithms are best suited for our type of data, and then select the three algorithms with the best accuracy for closer examination.
Goal. The goal of the study is to identify the machine learning algorithm that provides the most reliable results for estimating the number of cyclists from our bicycle barometer and weather data.
Methods. We process the data from the bicycle barometer and the weather station to filter out days that could distort the results, such as public holidays and school holidays. With the filtered data we implement a number of machine learning algorithms to estimate how many cyclists will pass the barometer in the near future. The results from the algorithms are then compared to see which algorithm gives the most reliable estimate for this application.
Results. According to our results, Random SubSpace and Bagging are the superior algorithms for estimating bicycle flow: in all of our experiments they outperform the other algorithms available in Weka. The ranking below these two varies from experiment to experiment, but on average Weka's REPTree algorithm is the third most accurate. The variable that contributes most to the estimate is the date; without it, the correlation is reduced by half for all algorithms. When the temperature variable is removed, however, the algorithms perform better and give a higher correlation.
Analysis. We have found a correlation between date and bicycle flow and have been able to predict bicycle flows from date and weather. We did not expect the temperature variable to make it harder for the algorithms to estimate the number of cyclists; we assume this is because people decide to cycle based on the date rather than the temperature.
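For readers who want to see what such a comparison looks like in code, here is a minimal sketch. It is not the thesis's setup: it uses scikit-learn instead of Weka, and the file names, column names, and parameters are assumptions made for the example.

```python
# Illustrative sketch only; the thesis uses Weka, not scikit-learn. This shows the same
# kind of comparison: Bagging versus a random-subspace ensemble on date and weather
# features. File names, column names, and parameters are assumed for the example.
import pandas as pd
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import cross_val_score

# Assumed daily columns: date, temperature, precipitation, cyclists (barometer count).
df = pd.read_csv("barometer_weather.csv", parse_dates=["date"])

# Filter out days that could distort the estimate (public holidays, school breaks),
# mirroring the preprocessing described in the abstract; the file is an assumption.
excluded = pd.read_csv("excluded_days.csv", parse_dates=["date"])["date"]
df = df[~df["date"].isin(excluded)]

# Encode the date (the predictor the study found most important) as day of year and weekday.
X = pd.DataFrame({
    "day_of_year": df["date"].dt.dayofyear,
    "weekday": df["date"].dt.weekday,
    "temperature": df["temperature"],
    "precipitation": df["precipitation"],
})
y = df["cyclists"]

models = {
    # Bagging: each tree sees a bootstrap sample of the days and all features.
    "Bagging": BaggingRegressor(n_estimators=50, random_state=0),
    # Random-subspace style: no bootstrapping of rows, each tree sees half of the features.
    "RandomSubSpace": BaggingRegressor(n_estimators=50, bootstrap=False,
                                       max_features=0.5, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```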
2

Sledování vybraného objektu v dynamickém obraze / Object tracking in a video feed

Klvaňa, Marek January 2011 (has links)
The aim of this thesis is to describe and implement algorithms for tracking a selected object in a video feed. The thesis introduces the Mean Shift and Continuously Adaptive Mean Shift (CAMShift) algorithms, which belong to the category of kernel-based tracking. A three-dimensional color histogram is used to construct the target model, and its construction is also described in this thesis. The performance of the described algorithms is compared on test image sequences and evaluated in detail.
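As an illustration of the kernel-based tracking category described above (not the author's implementation), a minimal OpenCV sketch: a three-dimensional HSV color histogram models the target, its back-projection gives a per-pixel probability image, and CAMShift moves the search window. The video file name and the initial window are assumptions made for the example.

```python
# Illustrative sketch only; not the thesis implementation.
import cv2
import numpy as np

cap = cv2.VideoCapture("input.avi")      # assumed input video
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 80            # assumed initial window around the target
track_window = (x, y, w, h)

# Build the 3D color histogram of the target region (16 bins per HSV channel).
roi_hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi_hsv], [0, 1, 2], None, [16, 16, 16],
                    [0, 180, 0, 256, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Probability of each pixel belonging to the model, then one CAMShift step.
    back_proj = cv2.calcBackProject([hsv], [0, 1, 2], hist,
                                    [0, 180, 0, 256, 0, 256], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:     # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```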
3

An Optical Flow Implementation Comparison Study

Bodily, John M. 12 March 2009 (has links) (PDF)
Optical flow is the apparent motion of brightness patterns within an image scene. Algorithms used to calculate the optical flow for a sequence of images are useful in a variety of applications, including motion detection and obstacle avoidance. Typical optical flow algorithms are computationally intense and run slowly when implemented in software, which is problematic since many potential applications of the algorithm require real-time calculation in order to be useful. To increase performance of the calculation, optical flow has recently been implemented on FPGA and GPU platforms. These devices are able to process optical flow in real-time, but are generally less accurate than software solutions. For this thesis, two different optical flow algorithms have been implemented to run on a GPU using NVIDIA's CUDA SDK. Previous FPGA implementations of the algorithms exist and are used to make a comparison between the FPGA and GPU devices for the optical flow calculation. The first algorithm calculates optical flow using 3D gradient tensors and is able to process 640x480 images at about 238 frames per second with an average angular error of 12.1 degrees when run on a GeForce 8800 GTX GPU. The second algorithm uses increased smoothing and a ridge regression calculation to produce a more accurate result. It reduces the average angular error by about 2.3x, but the additional computational complexity of the algorithm also reduces the frame rate by about 1.5x. Overall, the GPU outperforms the FPGA in frame rate and accuracy, but requires much more power and is not as flexible. The most significant advantage of the GPU is the reduced design time and effort needed to implement the algorithms, with the FPGA designs requiring 10x to 12x the effort.
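To make the gradient-based flow computation concrete, here is a minimal CPU sketch in NumPy; it is not the thesis's FPGA or CUDA code. At every pixel a 2x2 least-squares system built from windowed gradient products is solved, with a small ridge term added for stability (a nod to the regularised solve in the second algorithm). The window size and ridge weight are assumptions made for the example.

```python
# Illustrative sketch only; not the thesis's FPGA or GPU implementation.
import numpy as np
from scipy.ndimage import uniform_filter

def optical_flow(frame1, frame2, win=7, ridge=1e-2):
    """Estimate per-pixel motion (u, v) from two consecutive grayscale float frames."""
    Iy, Ix = np.gradient(frame1)          # spatial image gradients
    It = frame2 - frame1                  # temporal gradient

    # Window averages of the gradient products (entries of the local structure tensor).
    Sxx = uniform_filter(Ix * Ix, win)
    Sxy = uniform_filter(Ix * Iy, win)
    Syy = uniform_filter(Iy * Iy, win)
    Sxt = uniform_filter(Ix * It, win)
    Syt = uniform_filter(Iy * It, win)

    # Solve [[Sxx + r, Sxy], [Sxy, Syy + r]] @ [u, v] = -[Sxt, Syt] at every pixel.
    det = (Sxx + ridge) * (Syy + ridge) - Sxy * Sxy
    u = (-(Syy + ridge) * Sxt + Sxy * Syt) / det
    v = (Sxy * Sxt - (Sxx + ridge) * Syt) / det
    return u, v
```

Because the solve is independent per pixel, this is exactly the kind of arithmetic that maps naturally onto GPU threads or FPGA pipelines, which is what gives the implementations discussed above their real-time frame rates.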
4

Offline Approximate String Matching for Information Retrieval: An experiment on technical documentation

Dubois, Simon January 2013 (has links)
Approximate string matching consists in identifying strings as similar even if there are a number of mismatches between them. This technique is one of the solutions for relaxing the strictness of exact matching in data comparison. In many cases it is useful for identifying stream variation (e.g. audio) or word declension (e.g. prefix, suffix, plural). Approximate string matching can be used to score terms in Information Retrieval (IR) systems. The benefit is that results are returned even if query terms do not exactly match indexed terms. However, as approximate string matching algorithms only consider characters (neither context nor meaning), there is no guarantee that the additional matches are relevant. This paper presents the effects of some approximate string matching algorithms on search results in IR systems. An experimental research design has been conducted to evaluate these effects from two perspectives. First, result relevance is analysed with precision and recall. Second, performance is measured by the execution time required to compute matches. Six approximate string matching algorithms are studied. Levenshtein and Damerau-Levenshtein compute the edit distance between two terms. Soundex and Metaphone index terms based on their pronunciation. Jaccard similarity calculates the overlap coefficient between two strings. Tests are performed through IR scenarios with different contexts, information needs, and search queries, designed to query technical documentation related to software development (man pages from Ubuntu). A purposive sample is selected to assess document relevance to the IR scenarios and to compute IR metrics (precision, recall, F-measure). Experiments reveal that all tested approximate matching methods increase recall on average but, except for Metaphone, they also decrease precision. Soundex and Jaccard similarity are not advised because they fail on too many IR scenarios. The highest recall is obtained by the edit distance algorithms, which are also the most time consuming. Because Damerau-Levenshtein offers no significant improvement over Levenshtein but costs much more time, the latter is recommended for use with a specialised documentation. Finally, some other related recommendations are given to practitioners implementing IR systems on technical documentation.
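A minimal sketch (not the thesis code) of two of the measures discussed: Levenshtein edit distance computed by dynamic programming, and a Jaccard similarity over character n-grams. The choice of bigrams is an assumption made for the example.

```python
# Illustrative sketch only; plain-Python versions of two measures named in the abstract.
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def jaccard(a: str, b: str, n: int = 2) -> float:
    """Jaccard similarity of the sets of character n-grams of a and b."""
    grams = lambda s: {s[i:i + n] for i in range(len(s) - n + 1)}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

print(levenshtein("kitten", "sitting"))     # 3
print(round(jaccard("colour", "color"), 2)) # 0.5
```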
5

Price Prediction of Vinyl Records Using Machine Learning Algorithms

Johansson, David January 2020 (has links)
Machine learning algorithms have been used for price prediction in several application areas, including real estate, the stock market, tourist accommodation, electricity, art, cryptocurrencies, and fine wine. Common approaches in studies are to evaluate the accuracy of predictions and to compare different algorithms, such as Linear Regression or Neural Networks. There is a thriving global second-hand market for vinyl records, but research on price prediction in this area is very limited. The purpose of this project was to build on existing knowledge of price prediction in general in order to evaluate some aspects of price prediction for vinyl records, including the achievable level of accuracy and a comparison of the efficiency of different algorithms. A dataset of 37,000 samples of vinyl records was created with data from the Discogs website, and multiple machine learning algorithms were utilized in a controlled experiment. Among the conclusions drawn from the results were that the Random Forest algorithm generally generated the strongest results, that results can vary substantially between different artists or genres, and that a large share of the predictions had a good level of accuracy, but that a relatively small number of large errors had a considerable effect on the overall results.
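For illustration (not the study's actual experiment), a minimal scikit-learn sketch of a Random Forest price regressor on Discogs-style listing features. The file name, column names, and train/test split are assumptions made for the example.

```python
# Illustrative sketch only; feature set and data layout are assumed for the example.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Assumed columns: artist, genre, year, condition, want/have counts, price.
df = pd.read_csv("vinyl_listings.csv")
X = pd.get_dummies(df[["artist", "genre", "year", "condition", "want", "have"]],
                   columns=["artist", "genre", "condition"])
y = df["price"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE:", mean_absolute_error(y_test, pred))
```

Evaluating the same model separately per artist or per genre would be the natural way to reproduce the study's observation that accuracy varies substantially between those groups.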
