About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
511

Měření výšky postavy v obraze / Height Measurement in Digital Image

Olejár, Adam January 2015 (has links)
The aim of this paper is to summarize the theory needed to detect a person in an image and to calculate the detected person's height; this theory was then used to implement the algorithm. The first half covers the theoretical problems and their solutions: it presents the basic methods of image preprocessing, discusses the fundamental concepts of plane and projective geometry and their transformations, describes the distortion that imperfections in camera optical systems introduce into the image and the possibilities of removing it, and explains the HOG algorithm together with the actual method of calculating the height of a person detected in the image. The second half describes the structure of the algorithm and its statistical evaluation.
512

Webový portál pro správu a klasifikaci informací z distribuovaných zdrojů / Web Application for Managing and Classifying Information from Distributed Sources

Vrána, Pavel January 2011 (has links)
This master's thesis deals with data mining techniques and with classifying data into specified categories. The goal of the thesis is to implement a web portal for the administration and classification of data from distributed sources. To achieve this goal, it is necessary to test different methods and find the one most appropriate for classifying web articles. Based on the results obtained, an automated application for downloading and classifying data from different sources is developed, which could ultimately substitute for a user who would otherwise process all the tasks manually.
513

Získávání znalostí z obrazových databází / Knowledge Discovery in Image Databases

Jaroš, Ondřej January 2010 (has links)
This thesis focuses on knowledge discovery in databases, especially on methods of classification and prediction, which are described in detail. Furthermore, the work deals with multimedia databases and the way these databases store data; in particular, the processing of low-level image and video data is described. The practical part of the thesis focuses on the implementation of a GMM method used for extracting low-level features from video data and images. Further sections describe the input data and the tools against which the implemented method was compared. The last section presents experiments comparing the efficiency of extracting high-level attributes from low-level data between the implemented method and the selected classification tool, LibSVM.
514

Analýza experimentálních EKG / Analysis of experimental ECG

Mackových, Marek January 2016 (has links)
This thesis focuses on the analysis of experimental ECG records acquired from isolated rabbit hearts and aims to describe the changes in the ECG caused by ischemia and by left ventricular hypertrophy. It consists of a theoretical analysis of the problems of evaluating the ECG during ischemia and hypertrophy and a description of the experimental ECG recording. The theoretical part is followed by a practical section that describes the method for calculating morphological parameters, applies ROC analysis to evaluate their suitability for classifying hypertrophy, and finally turns to the classification itself.
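The ROC analysis the abstract mentions reduces, for a single parameter, to the area under the ROC curve. A minimal stdlib sketch using the rank-based (Mann–Whitney) formulation could look like this; the scores and labels are illustrative, not the thesis's data.

```python
# Minimal ROC-AUC computation (Mann-Whitney formulation): AUC is the
# fraction of positive/negative score pairs in which the positive case
# scores higher, counting ties as half. Illustrative toy data only.

def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A morphological parameter that separates hypertrophy (1) from control (0):
print(roc_auc([0.9, 0.8, 0.4, 0.7, 0.2, 0.1], [1, 1, 0, 1, 0, 0]))  # 1.0
```

An AUC near 1.0 marks a parameter as a good candidate for the subsequent classification step; an AUC near 0.5 marks it as uninformative.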
515

Automatic Patent Classification

Yehe, Nala January 2020 (has links)
Patents have great research value and benefit industrial, commercial, legal, and policymaking communities alike. Effective analysis of patent literature can reveal important technical details and relationships, explain business trends, suggest novel industrial solutions, and inform crucial investment decisions; patent documents therefore deserve careful analysis that exploits their value. Patent analysts generally need a certain degree of expertise across several research fields, including information retrieval, data processing, text mining, field-specific technology, and business intelligence. In practice it is difficult to find, or to train in a relatively short time, an analyst who meets the requirements of all these disciplines. Patent classification is also crucial in processing patent applications because it allows patent texts to be managed and maintained better and more flexibly. In recent years the number of patents worldwide has increased dramatically, which makes designing an automatic patent classification system very important: such a system can replace time-consuming manual classification and give patent analysis managers an effective way of managing patent texts.

This thesis designs a patent classification system based on data mining methods and machine learning techniques and uses the KNIME platform to conduct a comparative analysis across different machine learning methods and different parts of a patent. The purpose of the thesis is to classify patents automatically using text data processing methods and machine learning techniques. The work has two main parts: data preprocessing and the application of machine learning techniques. The research questions are: Which part of a patent performs best as input data for automatic classification? And which of the implemented machine learning algorithms performs best at classifying IPC keywords? The thesis follows design science research as its methodology and uses the KNIME platform to apply the machine learning techniques, which include decision tree, XGBoost linear, XGBoost tree, SVM, and random forest. The implementation covers data collection, preprocessing, feature word extraction, and the application of classification techniques. A patent document consists of several parts, such as the description, abstract, and claims; these three groups of input data are fed separately to the models and their performance compared. Based on the results of the three experiments, we suggest using the description part in the classification system because it shows the best performance in English patent text classification; the abstract can serve as an auxiliary criterion. The classification based on the claims part proposed by some scholars did not achieve good performance in our research. In addition, the BoW and TF-IDF methods can be used together to extract feature words efficiently, and the SVM and XGBoost techniques showed the best performance in our automatic patent classification system.
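The BoW-plus-TF-IDF feature extraction the abstract recommends can be sketched with the standard library alone; the toy "patent" texts below are illustrative assumptions, not the study's data, and a real pipeline (e.g. in KNIME) would add tokenization, stop-word removal, and smoothing.

```python
import math
from collections import Counter

# Stdlib-only sketch of TF-IDF weighting on top of bag-of-words counts,
# the feature-extraction combination the thesis recommends. Toy corpus:
docs = [
    "rotor blade assembly for a wind turbine".split(),
    "battery cell assembly with cooling plate".split(),
    "wind turbine control system".split(),
]

def tfidf(doc, corpus):
    tf = Counter(doc)  # bag-of-words term counts for this document
    n = len(corpus)
    return {
        term: (count / len(doc)) * math.log(n / sum(term in d for d in corpus))
        for term, count in tf.items()
    }

weights = tfidf(docs[0], docs)
# "turbine" appears in two documents, "rotor" in only one,
# so "rotor" gets the higher weight:
assert weights["rotor"] > weights["turbine"]
```

The resulting weight vectors are what a downstream classifier (SVM, XGBoost, etc.) would consume.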
516

UAV DETECTION AND LOCALIZATION SYSTEM USING AN INTERCONNECTED ARRAY OF ACOUSTIC SENSORS AND MACHINE LEARNING ALGORITHMS

Facundo Ramiro Esquivel Fagiani (10716747) 06 May 2021 (has links)
Unmanned Aerial Vehicle (UAV) technology has evolved exponentially in recent years. Smaller and less expensive devices enable a world of new applications in different areas, but while this progress can be beneficial, the use of UAVs with malicious intentions also poses a threat: UAVs can carry weapons or explosives and access restricted zones undetected, representing a real threat to civilians and institutions. Acoustic detection combined with machine learning models emerges as a viable solution since, despite its limitations related to environmental noise, it has produced promising results in classifying UAV sounds, is adaptable to multiple environments, and, especially, can be cost-effective, something much needed in a counter-UAV market with high projections for the coming years. The problem addressed by this project is the need for a real-world-adaptable solution showing that an array of acoustic sensors can be implemented for the detection and localization of UAVs with minimal cost and competitive performance.

In this research, a low-cost acoustic detection system that can report, in real time, the presence and direction of arrival of a UAV approaching a target was engineered and validated. The model comprises an array of acoustic sensors remotely connected to a central server, which uses the sound signals to estimate the direction of arrival of the UAV. The model works with a single microphone per node, which estimates the position from the change in acoustic intensity produced by the UAV, reducing implementation costs and allowing the nodes to work asynchronously. The project included collecting data from UAVs flying both indoors and outdoors and a performance analysis under realistic conditions.

The results demonstrated that the solution provides real-time UAV detection and localization information to protect a target from an attacking UAV, and that it can be applied in real-world scenarios.
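The abstract does not publish the estimator, but the single-microphone-per-node idea can be illustrated with a simple intensity-weighted scheme: under free-field propagation, intensity falls off with distance, so an intensity-weighted centroid of the node positions points toward the source. The node layout and readings below are illustrative assumptions, not the project's data.

```python
import math

# Coarse direction-of-arrival sketch for an array with one microphone
# per node. Nodes closer to the source report higher intensity, so the
# intensity-weighted centroid of node positions leans toward the UAV.
# (x, y) node positions in metres -> measured intensity (arbitrary units)
nodes = {(-1.0, 0.0): 0.2, (1.0, 0.0): 0.8, (0.0, 1.0): 0.5}

def doa_bearing_deg(readings):
    total = sum(readings.values())
    cx = sum(x * w for (x, _), w in readings.items()) / total
    cy = sum(y * w for (_, y), w in readings.items()) / total
    return math.degrees(math.atan2(cy, cx))  # bearing from the array centre

bearing = doa_bearing_deg(nodes)
print(round(bearing, 1))  # 39.8 -- toward the loudest (+x) node, tilted toward +y
```

A deployed system would smooth these readings over time and calibrate per-node gain, since raw intensity is sensitive to environmental noise, as the abstract notes.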
517

Získávání znalostí z objektově relačních databází / Knowledge Discovery in Object Relational Databases

Chytka, Karel Unknown Date (has links)
The goal of this master's thesis is to introduce the problem of knowledge discovery and the classification of object-relational data. It summarizes the problems connected with mining spatio-temporal data and describes SVM, the core data mining algorithm used. The second part covers the implementation of the classification method, which mines the Caretaker trajectory database. The thesis also covers the implementation of an application for spatio-temporal data preprocessing, its organization in the database, and its presentation.
518

Metody klasifikace www stránek / Methods for Classification of WWW Pages

Svoboda, Pavel January 2009 (has links)
The main goal of this master's thesis was to study the main principles of classification methods. The basic principles of the knowledge discovery process and data mining, and the use of the external CSSBox class, are described. Special attention was paid to the implementation of a "k-nearest neighbors" classification method. The first objective of this work was to create training and testing data described by n attributes; the second was to perform an experimental analysis to determine a good value for k, the number of neighbors.
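A minimal sketch of the k-nearest-neighbors method the thesis implements, using the standard library; the points and labels are toy data, not the page features described by n attributes in the thesis.

```python
from collections import Counter

# Minimal k-nearest-neighbours classifier: find the k training points
# closest to the query (squared Euclidean distance) and return the
# majority label among them. Toy data for illustration only.

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; returns the majority label."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), "news"), ((0, 1), "news"), ((1, 0), "news"),
         ((5, 5), "shop"), ((6, 5), "shop")]
print(knn_predict(train, (0.5, 0.5), k=3))  # news
```

The experimental question the thesis poses, choosing a good k, amounts to re-running this prediction over the test set for several values of k and comparing accuracy.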
519

Vytvoření nových predikčních modulů v systému pro dolování z dat na platformě NetBeans / Creation of New Prediction Units in Data Mining System on NetBeans Platform

Havlíček, David January 2009 (has links)
The subject of this master's thesis is the creation of a new prediction unit for an existing knowledge discovery in databases system. The first part deals with the general problems of knowledge discovery in databases and predictive analysis. The second part deals with the system developed at FIT for which the module is implemented, the technologies used, and the design and implementation of the mining module for this system. The solution is implemented in Java and built on the NetBeans platform.
520

Performance Benchmarking and Cost Analysis of Machine Learning Techniques : An Investigation into Traditional and State-Of-The-Art Models in Business Operations / Prestandajämförelse och kostnadsanalys av maskininlärningstekniker : en undersökning av traditionella och toppmoderna modeller inom affärsverksamhet

Lundgren, Jacob, Taheri, Sam January 2023 (has links)
As society is becoming more data-driven, Artificial Intelligence (AI) and Machine Learning are revolutionizing how companies operate and evolve. This study explores the use of AI, Big Data, and Natural Language Processing (NLP) in improving business operations and intelligence in enterprises. The primary objective of this thesis is to examine whether the current classification process at the host company can be maintained with reduced operating costs, specifically lower cloud GPU costs. This could improve the classification method, enhance the product the company offers its customers through increased classification accuracy, and strengthen its value proposition. Furthermore, three approaches are evaluated against each other, and the implementations showcase the evolution within the field. The models compared in this study include traditional machine learning methods such as Support Vector Machine (SVM) and Logistic Regression, alongside state-of-the-art transformer models like BERT, both pre-trained and fine-tuned. The thesis shows a trade-off between performance and cost, illustrating the problem many companies like Valu8 face when evaluating which approach to implement. This trade-off is discussed and analyzed in further detail to explore possible compromises from each perspective and strike a balanced solution that combines performance efficiency and cost-effectiveness.
