  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Building Predictive Models for Stock Market Performance : En studie om maskininlärning och deras prestanda

Wennmark, Gabriel, Lindgren, Felix January 2023 (has links)
Today it is important for investors to identify which stocks will result in positive returns so that the right decisions can be made when trading on the stock market. For decades this has been an area of interest for academics, and it remains challenging due to many difficulties and problems. A large number of studies have been carried out on machine learning and stock trading, and many of them have produced promising results despite these challenges. The aim of this study was to develop and evaluate predictive models for identifying stocks that outperform the Swedish market index OMXSPI. The research utilized a dataset of historical stock data and applied three machine learning algorithms, Support Vector Machine, Logistic Regression and Decision Trees, to predict whether excess performance was achieved. With the help of ten-fold cross-validation and hyperparameter tuning, the result was an IT artefact that produced satisfying results. The results showed that hyperparameter tuning marginally improved the metrics in focus, namely accuracy and precision. The support vector machine model achieved an accuracy of 58.52% and a precision of 57.51%. The logistic regression model achieved an accuracy of 55.75% and a precision of 54.81%. Finally, the decision tree model, which was the best performer, achieved an accuracy of 64.84% and a precision of 65.00%.
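The evaluation setup described above (three algorithms, ten-fold cross-validation, hyperparameter tuning) can be sketched as follows; the synthetic features, parameter grids, and scoring choices are assumptions for illustration, not the thesis's actual configuration:

```python
# Sketch of three-classifier comparison with ten-fold CV and grid search.
# Synthetic features stand in for the historical stock data.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Each entry: (estimator, hypothetical hyperparameter grid)
models = {
    "SVM": (SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}),
    "LogisticRegression": (LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]}),
    "DecisionTree": (DecisionTreeClassifier(random_state=0),
                     {"max_depth": [3, 5, 10]}),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, (model, grid) in models.items():
    search = GridSearchCV(model, grid, cv=cv, scoring="accuracy")
    search.fit(X, y)
    print(f"{name}: best accuracy={search.best_score_:.3f}, "
          f"params={search.best_params_}")
```

On real stock data the features would be lagged returns or indicators rather than random draws, but the cross-validation and tuning loop is the same.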
152

Assessment of a prediction-based strategy for mixing autonomous and manually driven vehicles in an intersection / Utvärdering av en prediktionsbaserad metod för att blanda autonoma och manuella bilar i en korsning

NADI, ADRIAN, STEFFNER, YLVA January 2017 (has links)
The introduction of autonomous vehicles in traffic is driven by expected gains in multiple areas, such as improvement of health and safety, better resource utilization, pollution reduction and greater convenience. The development of more competent algorithms will determine the rate and level of success for the ambitions around autonomous vehicles. In this thesis work, an intersection management system for a mix of autonomous and manually driven vehicles is created. The purpose is to investigate the strategy of combining turn intention prediction for manually driven vehicles with scheduling of autonomous vehicles. The prediction method used is the support vector machine (SVM), and scheduling of vehicles has been done by dividing the intersection into an occupancy grid and applying different safety levels. Real-life data comprising recordings of large volumes of traffic through an intersection has been combined with simulated vehicles to assess the relevance of the new algorithms. Measurements of collision rate and traffic flow showed that the algorithms behaved as expected. A miniature vehicle based on a prototype for an autonomous RC car has been designed with the purpose of testing the algorithms in a laboratory setting. / Införandet av autonoma fordon i trafiken drivs av förväntade vinster i flera områden, såsom förbättring av hälsa och säkerhet, bättre resursutnyttjande, minskning av föroreningar och ökad bekvämlighet. Utvecklingen av mer kompetenta algoritmer kommer att bestämma hastigheten och nivån på framgång för ambitionerna kring autonoma fordon. I detta examensarbete skapas ett korsningshanteringssystem för en blandning av autonoma och manuellt körda bilar. Syftet är att undersöka strategin att kombinera prediktion av hur manuellt styrda bilar kommer att svänga med att schemalägga autonoma bilar utifrån detta.
Prediktionsmetoden som använts är support vector machine (SVM) och schemaläggning av bilar har gjorts genom att dela upp korsningen i ett occupancy grid och tillämpa olika säkerhetsmarginaler. Verklig data från inspelningar av stora volymer trafik genom en korsning har kombinerats med simulerade fordon för att bedöma relevansen av de nya algoritmerna. Mätningar av kollisioner och trafikflöde visade att algoritmerna uppträdde som förväntat. Ett miniatyrfordon baserat på en prototyp av en självkörande radiostyrd bil har tagits fram i syfte att testa algoritmerna i laboratoriemiljö.
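The occupancy-grid scheduling idea above can be illustrated with a toy sketch; the grid size, time horizon, and safety padding are invented, and the thesis's actual scheduler and SVM turn predictor are not reproduced here:

```python
# Illustrative sketch (not the thesis code) of occupancy-grid scheduling:
# the intersection is a grid of cells, and a vehicle is only scheduled
# through it if every cell on its path is free during the time slots it
# would occupy, padded by a safety margin.
import numpy as np

GRID = (4, 4)      # intersection divided into 4x4 cells (assumed size)
HORIZON = 10       # number of discrete time steps considered

# occupancy[t, row, col] is True when a cell is reserved at time step t
occupancy = np.zeros((HORIZON, *GRID), dtype=bool)

def try_schedule(path, t_start, safety=1):
    """Reserve (cell, time) pairs along `path` if free; `safety` pads
    each reservation with extra time steps before and after."""
    slots = []
    for offset, (r, c) in enumerate(path):
        for t in range(max(0, t_start + offset - safety),
                       min(HORIZON, t_start + offset + safety + 1)):
            slots.append((t, r, c))
    if any(occupancy[t, r, c] for t, r, c in slots):
        return False              # conflict: the vehicle must wait
    for t, r, c in slots:
        occupancy[t, r, c] = True
    return True

# A predicted left-turning vehicle reserves its cells first ...
assert try_schedule([(3, 1), (2, 1), (1, 1), (1, 0)], t_start=0)
# ... so a conflicting straight-through path is rejected.
assert not try_schedule([(1, 3), (1, 2), (1, 1), (1, 0)], t_start=0)
```

In the thesis, the predicted turn intention of a manually driven vehicle would determine which path is reserved, and the safety level would vary per vehicle type.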
153

Ärendehantering genom maskininlärning

Bennheden, Daniel January 2023 (has links)
Det här examensarbetet undersöker hur artificiell intelligens kan användas för att automatiskt kategorisera felanmälningar som behandlas i ett ärendehanteringssystem genom att använda maskininlärning och tekniker som text mining. Studien utgår från Design Science Research Methodology och Peffers sex steg för designmetodologi, som utöver design även berör kravställning och utvärdering av funktion. Maskininlärningsmodellerna som tagits fram tränades på historiska data från ärendehanteringssystemet Infracontrol Online med fyra typer av algoritmer: Naive Bayes, Support Vector Machine, Neural Network och Random Forest. En webbapplikation togs fram för att demonstrera hur en av de tränade maskininlärningsmodellerna fungerar och kan användas för att kategorisera text. Olika användare av systemet har därefter haft möjlighet att testa funktionen och utvärdera hur den fungerar genom att markera när kategoriseringen av textprompter träffar rätt respektive fel. Resultatet visar att det är möjligt att lösa uppgiften med hjälp av maskininlärning. En avgörande del av utvecklingsarbetet för att göra modellen användbar var urvalet av data som användes för att träna modellen. Olika kunder som använder systemet använder det på olika sätt, vilket gjorde det fördelaktigt att separera dem och träna modeller för olika kunder individuellt. En källa till inkonsistenta resultat är hur organisationer förändrar sina processer och sin ärendehantering över tid; problemet hanterades genom att begränsa hur långt tillbaka i tiden modellen hämtar data för träning. Dessa två strategier har nackdelen att mängden historiska data som finns tillgänglig att träna modellen på minskar, men resultaten visar inte någon tydlig nackdel för de maskininlärningsmodeller som tränats på mindre datamängder, utan även de har en godtagbar träffsäkerhet.
/ This thesis investigates how artificial intelligence can be used to automatically categorize fault reports that are processed in a case management system by using machine learning and techniques such as text mining. The study is based on Design Science Research Methodology and Peffers' six steps of design methodology, which in addition to the design of an artifact concern requirements and evaluation. The machine learning models that were developed were trained on historical data from the case management system Infracontrol Online, using four types of algorithms: Naive Bayes, Support Vector Machine, Neural Network, and Random Forest. A web application was developed to demonstrate how one of the trained machine learning models works and can be used to categorize text. Regular users of the system have then had the opportunity to test the performance of the model and evaluate how it works by marking where it categorizes text prompts correctly. The results show that it is possible to solve the task using machine learning. A crucial part of the development was the selection of data used to train the model. Different customers using the system use it in different ways, which made it advantageous to separate them and train models for different customers independently. Another source of inconsistent results is how organizations change their processes, and thus their case management, over time. This issue was addressed by limiting how far back in time the model retrieves data for training. The two strategies for solving the issues mentioned have the disadvantage that the amount of historical data available for training decreases, but the results do not show any clear disadvantage for the machine learning models trained on smaller data sets. They perform well, and tests show an acceptable level of accuracy for their predictions.
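As an illustration only (the thesis's data and code are not public), training the four algorithm families mentioned above on a few invented fault-report texts might look like:

```python
# Minimal sketch: the four classifier families from the study applied to
# invented fault-report tickets represented as TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

tickets = ["streetlight is broken on main street",
           "pothole in the road near the school",
           "lamp post not working at night",
           "large hole in the asphalt"]
labels = ["lighting", "road", "lighting", "road"]

X = TfidfVectorizer().fit_transform(tickets)

for model in (MultinomialNB(), LinearSVC(),
              MLPClassifier(max_iter=500, random_state=0),
              RandomForestClassifier(random_state=0)):
    model.fit(X, labels)
    print(type(model).__name__, model.predict(X[:1]))
```

In the study each customer would get its own model trained on its own recent tickets, per the data-selection strategy described above.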
154

The Role of Data in Projected Quantum Kernels: The Higgs Boson Discrimination / Datans roll i projicerade kvantkärnor: Higgs Boson-diskriminering

Di Marcantonio, Francesco January 2022 (has links)
The development of quantum machine learning is bridging the way to fault-tolerant quantum computation by providing algorithms that run on the current noisy intermediate-scale quantum devices. However, it is difficult to find use cases where quantum computers exceed their classical counterparts. The high energy physics community is experiencing rapid growth in the amount of data physicists need to collect, store, and analyze as more complex experiments are conceived. Our work approaches the study of a particle physics event involving the Higgs boson from a quantum machine learning perspective. We compare the quantum support vector machine with the best classical kernel method, grounding our study in a new theoretical framework based on metrics observing three different aspects: the geometry between the classical and quantum learning spaces, the dimensionality of the feature space, and the complexity of the ML models. We exploit these metrics as a compass in the parameter space because of their predictive power.
Hence, we can exclude those areas where we do not expect any advantage in using quantum models and guide our study towards the best parameter configurations. Indeed, how to select the number of qubits in a quantum circuit and the number of datapoints in a dataset has so far been left to trial and error. We observe, in a vast parameter region, that the classical RBF kernel model overtakes the performance of the devised quantum kernels. We include in this study the projected quantum kernel, a kernel able to reduce the expressivity of the traditional fidelity quantum kernel by projecting its quantum state back to an approximate classical representation through the measurement of local quantum systems. The Higgs dataset has proven to be low-dimensional in the quantum feature space, meaning that the selected quantum encoding is not expressive enough for the dataset under study. Nonetheless, the optimization of the parameters of all the kernels proposed, classical and quantum, revealed a quantum advantage for the projected kernel, which classifies the Higgs boson events well and surpasses the classical ML model.
På så vis kan vi utesluta regioner i rummet där kvantalgoritmer inte förväntas överprestera klassiska algoritmer. Det finns en godtycklighet i hur antalet qubits och antalet datapunkter bestäms, och resultatet beror på dessa parametrar. I en utbredd region av parameterrummet observerar vi dock att den klassiska rbf-kärnmodellen överpresterar de studerade kvantkärnorna. I denna studie inkluderar vi en projicerad kvantkärna, en kärna som reducerar det totala kvanttillståndet till en ungefärlig klassisk representation genom att mäta en lokal del av kvantsystemet. Den studerade Higgs-datamängden har visat sig vara av låg dimension i kvantegenskapsrummet. Men optimering av parametrarna för alla kärnor som undersökts, klassiska såväl som kvantmekaniska, visade på ett visst kvantövertag för den projicerade kärnan, som klassificerar de undersökta Higgs-händelserna väl och överträffar de klassiska maskininlärningsmodellerna.
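The mechanic shared by the classical and quantum kernel methods compared above can be sketched classically: a Gram matrix is computed by some kernel (on a quantum device for the fidelity or projected quantum kernels, by an ordinary RBF kernel here) and then handed to a standard SVM. Data and kernel parameters below are invented:

```python
# Sketch of a kernel SVM with an externally computed Gram matrix -- the
# interface a (projected) quantum kernel would plug into. Synthetic data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # XOR-like toy labels

def rbf_gram(A, B, gamma=0.5):
    # K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K_train = rbf_gram(X, X)
clf = SVC(kernel="precomputed").fit(K_train, y)
print("train accuracy:", clf.score(rbf_gram(X, X), y))
```

Replacing `rbf_gram` with a quantum kernel estimator changes only how the matrix is filled; the downstream SVM is unchanged, which is what makes the geometry and dimensionality metrics above directly comparable between the two families.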
155

Maskininlärning för dokumentklassificering av finansiella dokument med fokus på fakturor / Machine Learning for Document Classification of Financial Documents with Focus on Invoices

Khalid Saeed, Nawar January 2022 (has links)
Automatiserad dokumentklassificering är en process eller metod som syftar till att bearbeta och hantera dokument i digitala former. Många företag strävar efter en textklassificeringsmetodik som kan lösa olika problem. Ett av dessa problem är att klassificera och organisera ett stort antal dokument baserat på en uppsättning fördefinierade kategorier. Detta examensarbete syftar till att hjälpa Medius, ett företag som arbetar med fakturaarbetsflöden, att klassificera dokumenten som behandlas i deras fakturaarbetsflöde i fakturor och icke-fakturor. Detta har åstadkommits genom att implementera och utvärdera olika klassificeringsmetoder för maskininlärning med avseende på deras noggrannhet och effektivitet i att klassificera finansiella dokument, där endast fakturor är av intresse. I denna avhandling har två dokumentrepresentationsmetoder, Term Frequency Inverse Document Frequency (TF-IDF) och Doc2Vec, använts för att representera dokumenten som vektorer. Representationen syftar till att minska komplexiteten i dokumenten och göra dem lättare att hantera. Dessutom har tre klassificeringsmetoder använts för att automatisera dokumentklassificeringsprocessen för fakturor: Logistic Regression, Multinomial Naïve Bayes och Support Vector Machine. Resultaten från denna avhandling visade att alla klassificeringsmetoder som använde TF-IDF för att representera dokumenten som vektorer gav goda resultat i form av prestanda och noggrannhet. Noggrannheten för alla tre klassificeringsmetoderna var över 90 %, vilket var kravet för att denna studie skulle anses vara lyckad. Dessutom verkade Logistic Regression ha lättare att klassificera dokumenten jämfört med de andra metoderna. Ett test på riktiga data, dokument som flödar in i Medius fakturaarbetsflöde, visade att Logistic Regression lyckades korrekt klassificera nästan 96 % av dokumenten. Avslutningsvis fastställdes Logistic Regression tillsammans med TF-IDF som den övergripande och mest lämpliga metoden för att klara av problemet med dokumentklassificering. Dessvärre kunde Doc2Vec inte ge ett bra resultat eftersom datamängden inte var anpassad och tillräcklig för att metoden skulle fungera bra. / Automated document classification is an essential technique that aims to process and manage documents in digital forms. Many companies strive for a text classification methodology that can solve a plethora of problems. One of these problems is classifying and organizing a massive amount of documents based on a set of predefined categories. This thesis aims to help Medius, a company that works with invoice workflows, to classify their documents into invoices and non-invoices. This has been accomplished by implementing and evaluating various machine learning classification methods in terms of their accuracy and efficiency for the task of financial document classification, where only invoices are of interest. Furthermore, the necessary pre-processing steps for achieving good performance are considered when evaluating the mentioned classification methods. In this study, two document representation methods, Term Frequency Inverse Document Frequency (TF-IDF) and Doc2Vec, were used to represent the documents as fixed-length vectors. The representation aims to reduce the complexity of the documents and make them easier to handle. In addition, three classification methods have been used to automate the document classification process for invoices: Logistic Regression, Multinomial Naïve Bayes and Support Vector Machine. The results from this thesis indicate that all classification methods using TF-IDF to represent the documents as vectors give high performance and accuracy. The accuracy of all three classification methods is over 90%, which was the prerequisite for the success of this study. Moreover, Logistic Regression appears to cope with this task very easily, since it classifies the documents more efficiently compared to the other methods. A test on real data flowing into Medius' invoice workflow shows that Logistic Regression is able to correctly classify up to 96% of the data. In conclusion, Logistic Regression together with TF-IDF is determined to be the most appropriate of the tested methods. In addition, Doc2Vec fails to provide a good result because the data set was not suited to and sufficient for the method to work well.
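A minimal sketch of the winning TF-IDF plus Logistic Regression combination, with invented example documents standing in for Medius' invoice data:

```python
# Toy invoice vs. non-invoice classifier: TF-IDF features feeding a
# logistic regression, wrapped in a single pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["invoice number 1042 amount due 300 EUR payment terms 30 days",
        "invoice no 87 total incl VAT 125 EUR due date 2022-05-01",
        "meeting minutes for the quarterly project review",
        "delivery note for order 55 no amount payable"]
labels = ["invoice", "invoice", "non-invoice", "non-invoice"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["invoice 99 amount due 50 EUR"]))
```

A real deployment would train on thousands of OCR-extracted documents and validate on a held-out set rather than the four toy examples used here.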
156

GIS-baserad analys och validering av habitattyper efter dammutrivning

Edlund, Fredrik January 2021 (has links)
After the EU introduced a framework in 2000 governing the region's water use, the Water Framework Directive, the Swedish government decided that, from the summer of 2020, the country's water dams would be re-examined. In cases where current water use does not meet the requirements of the framework, dam removal may become relevant. The purpose of this study is to investigate and develop a method for evaluating changes in stream habitats upstream in a watercourse after a dam removal. The study area is defined and limited by the dataset, consisting of aerial photos collected by UAV on two occasions over the same area. Bathymetric data of the river bed from a bottom scan has also been used, as well as Lantmäteriet's national elevation model. Two photogrammetry programs were used, both to create an orthomosaic from the aerial photos and to perform image normalization. The GIS software ArcGIS Pro provides several algorithms for raster classification. The algorithms SVM and RT were weighed against each other, and SVM was used in the method. With various generalization tools, stream habitats could be identified and enhanced. Different terrain models were also created from the aerial photos and Lantmäteriet's national elevation model. These were examined against each other with respect to aspects such as variations in level of detail, degree of generalization and the rendering of the water surface. The conclusion of the study is that classification of stream habitats can be performed in a GIS program with a positional uncertainty of between 25 and 40%, depending on which stream habitats are to be classified. After the removal, 17 zones with changed stream habitats emerged, two more than forecasts had predicted. Furthermore, the water volume was markedly affected, with a decrease of about 40% from 2018 to 2020. An area of about 1.5 hectares was affected when old river bed was left dry in connection with the dam removal. A correlation was seen between distance from the power plant and dry river bed, as these areas decreased in size as the distance increased. Investigating where the water level was affected the most was not possible due to lack of data. The study has developed a method for analyzing the impact of a dam removal on a watercourse using data from UAV and bottom scanning.
157

Real-time Hand Gesture Detection and Recognition for Human Computer Interaction

Dardas, Nasser Hasan Abdel-Qader 08 November 2012 (has links)
This thesis focuses on bare hand gesture recognition by proposing a new architecture to solve the problem of real-time vision-based hand detection, tracking, and gesture recognition for interaction with an application via hand gestures. The first stage of our system allows detecting and tracking a bare hand in a cluttered background using face subtraction, skin detection and contour comparison. The second stage allows recognizing hand gestures using bag-of-features and multi-class Support Vector Machine (SVM) algorithms. Finally, a grammar has been developed to generate gesture commands for application control. Our hand gesture recognition system consists of two steps: offline training and online testing. In the training stage, after extracting the keypoints for every training image using the Scale Invariant Feature Transform (SIFT), a vector quantization technique maps the keypoints from every training image into a unified dimensional histogram vector (bag-of-words) after K-means clustering. This histogram is treated as an input vector for a multi-class SVM to build the classifier. In the testing stage, for every frame captured from a webcam, the hand is detected using my algorithm. Then, the keypoints are extracted for every small image that contains the detected hand posture and fed into the cluster model to map them into a bag-of-words vector, which is fed into the multi-class SVM classifier to recognize the hand gesture. Another hand gesture recognition system was proposed using Principal Component Analysis (PCA). The most significant eigenvectors and the weights of the training images are determined. In the testing stage, the hand posture is detected for every frame using my algorithm. Then, the small image that contains the detected hand is projected onto the most significant eigenvectors of the training images to form its test weights. Finally, the minimum Euclidean distance between the test weights and the training weights of each training image is determined to recognize the hand gesture.
Two applications of gesture-based interaction with a 3D gaming virtual environment were implemented. The exertion videogame makes use of a stationary bicycle as one of the main inputs for game playing. The user can control and direct left-right movement and shooting actions in the game by a set of hand gesture commands, while in the second game, the user can control and direct a helicopter over the city by a set of hand gesture commands.
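The PCA-based recognition pipeline in the second system can be sketched as follows; the image size, component count, and gesture labels are invented, and random vectors stand in for the hand-posture images:

```python
# Sketch of PCA recognition: project training images onto the principal
# eigenvectors, then label a test image by the nearest training image in
# that weight space (minimum Euclidean distance).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
train = rng.normal(size=(20, 64))      # 20 flattened training "images"
labels = [f"gesture_{i % 4}" for i in range(20)]

pca = PCA(n_components=8).fit(train)
train_w = pca.transform(train)         # training weights

def recognize(img):
    w = pca.transform(img.reshape(1, -1))
    dists = np.linalg.norm(train_w - w, axis=1)
    return labels[int(np.argmin(dists))]

# A training image recognizes as itself (distance zero in weight space).
print(recognize(train[3]))  # → gesture_3
```

With real hand images, `train` would hold the cropped detected-hand crops, and the Euclidean comparison would run once per webcam frame.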
158

A novel hybrid technique for short-term electricity price forecasting in deregulated electricity markets

Hu, Linlin January 2010 (has links)
Short-term electricity price forecasting is now a crucial practice in deregulated electricity markets, as it forms the basis for maximizing the profits of the market participants. In this thesis, short-term electricity prices are forecast using three different predictor schemes: Artificial Neural Networks (ANNs), the Support Vector Machine (SVM), and a hybrid scheme. ANNs are popular and successful tools for practical forecasting, and a hidden-layered feed-forward neural network with back-propagation has been adopted here for detailed comparison with the other forecasting models. SVM is a more recently developed technique that has many attractive features and good prediction performance. In order to overcome the limitations of individual forecasting models, a hybrid technique that combines Fuzzy C-Means (FCM) clustering and SVM regression algorithms is proposed to forecast the half-hour electricity prices in the UK electricity markets. According to the value of their power prices, thousands of training data points are classified by the unsupervised learning method of FCM clustering. An SVM regression model is then applied to each cluster, taking advantage of the aggregated data information, which reduces the noise in each training program. In order to demonstrate the predictive capability of the proposed model, ANN and SVM models are presented and compared with the hybrid technique based on the same training and testing data sets in case studies using real electricity market data. The data was obtained upon request from APX Power UK for the year 2007. The Mean Absolute Percentage Error (MAPE) is used to analyze the forecasting errors of the different models, and the results presented clearly show that the proposed hybrid technique considerably improves the electricity price forecasting.
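A rough sketch of the cluster-then-regress idea follows, with KMeans standing in for the FCM step (hard memberships instead of fuzzy ones, to keep the example dependency-free) and synthetic data in place of the APX Power UK prices:

```python
# Sketch of the hybrid scheme: cluster training examples by price level,
# then fit one SVM regressor per cluster. KMeans replaces Fuzzy C-Means
# here purely for brevity.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 3))   # e.g. lagged prices / demand features
y = X[:, 0] * 2 + np.sin(X[:, 1]) + rng.normal(0, 0.1, 200)

# Cluster on the price values themselves, as the thesis describes.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(y.reshape(-1, 1))
models = {c: SVR().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(3)}

def predict(x, price_guess):
    # Route by estimated price level, then apply that cluster's regressor.
    c = km.predict(np.array([[price_guess]]))[0]
    return models[c].predict(x.reshape(1, -1))[0]

print(predict(X[0], y[0]))
```

With true FCM, a sample would carry graded memberships in all clusters rather than a single hard assignment, so predictions could blend the per-cluster regressors.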
159

Non-intrusive driver drowsiness detection system

Abas, Ashardi B. January 2011 (has links)
The development of technologies for preventing drowsiness at the wheel is a major challenge in the field of accident avoidance systems. Preventing drowsiness during driving requires a method for accurately detecting a decline in driver alertness and a method for alerting and refreshing the driver. As a detection method, the authors have developed a system that uses image processing technology to analyse images of the road lane with a video camera, integrated with steering wheel angle data collected from a car simulation system. The main contribution of this study is a novel algorithm for drowsiness detection and tracking, which is based on the incorporation of information from a road vision system and vehicle performance parameters. A refinement of the algorithm detects the level of drowsiness more precisely through the implementation of support vector machine classification, yielding a robust and accurate drowsiness warning system. The SVM classification approach is non-intrusive and uses standard equipment sensors, with the aim of reducing road accidents caused by drowsy drivers. This detection system provides a non-contact technique for judging various levels of driver alertness and facilitates early detection of a decline in alertness during driving. The presented results are based on a drowsiness database covering almost 60 hours of driving data collection measurements. All the parameters extracted from vehicle parameter data were collected in a driving simulator. With all the features from a real vehicle, an SVM drowsiness detection model is constructed. After several improvements, the classification results showed a very good indication of drowsiness using these systems.
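A toy sketch of such an SVM-based alertness classifier; the two features and all values are invented (the thesis's real features come from lane images and steering wheel angles):

```python
# Illustrative SVM separating alert from drowsy driving using two
# hypothetical signals: lane-position variance and steering activity.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# columns: [lane-position variance, steering reversals per second]
alert = np.c_[rng.normal(0.1, 0.03, 50), rng.normal(2.0, 0.3, 50)]
drowsy = np.c_[rng.normal(0.4, 0.08, 50), rng.normal(0.8, 0.3, 50)]
X = np.vstack([alert, drowsy])
y = np.array([0] * 50 + [1] * 50)       # 0 = alert, 1 = drowsy

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.45, 0.7]]))       # high lane variance, low steering
```

In the real system the features are extracted continuously from the camera and steering stream, and the SVM output would drive the warning logic.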
160

Multi-Criteria Mapping Based on Support Vector Machine and Cluster Distance

Eerla, Vishwa Shanthi 01 November 2016 (has links) (PDF)
The number of applications for master's degree programs has been increasing over time. Processing all the application documents of every applicant manually takes a great deal of time and requires a large workforce. This can be reduced if the process is automated. Before that, however, an analysis of the complete steps involved in processing had to be carried out to determine precisely where automation should be applied to reduce time and workforce. The application process involves several steps. First, the applicant sends the complete scanned documents to uni-assist; from there the applications are received by the student assistant team at the particular university to which the applicant has applied, and then they are sent to the individual departments. At the individual departments, each application is handled by conducting a thorough study to determine whether the applicant, given their past qualifications, fulfills the prerequisites of the study program to which they have applied. Furthermore, by considering the required details of the applicant without investigating every single document, and in order to condense the information and reduce the processing time for the specific department, in this thesis project a single web tool is developed that can process the applications and is highly reliable in the decision-making process.
