About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
291

Identifikace objektů v obraze / The Identification of Objects in the Image

Zavalina, Viktoriia January 2014 (has links)
This master's thesis deals with methods of object detection in images. It contains theoretical, practical and experimental parts. The theoretical part describes image representation, image preprocessing methods, and methods for the detection and identification of objects. The practical part describes the created application and the algorithms it uses. The application was created in MATLAB and offers an intuitive graphical user interface and three different methods for the detection and identification of objects in an image. The experimental part contains test results for the implemented program.
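The detect-and-identify pipeline this abstract outlines can be illustrated in its most basic form: threshold the image, then count connected foreground regions. This is a standalone Python sketch of that idea only, not the thesis's MATLAB code; the image and threshold below are invented.

```python
# Minimal object detection: binarize a grayscale image, then count
# 4-connected foreground components with an iterative flood fill.
def detect_objects(image, threshold):
    """Return the number of connected foreground regions at or above `threshold`."""
    h, w = len(image), len(image[0])
    mask = [[pix >= threshold for pix in row] for row in image]
    seen = [[False] * w for _ in range(h)]
    objects = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                objects += 1
                stack = [(y, x)]  # flood-fill this region
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return objects

# Toy 3x5 "image" with two bright blobs on a dark background.
img = [
    [0, 200, 200, 0, 0],
    [0, 200, 0, 0, 180],
    [0, 0, 0, 0, 180],
]
n_objects = detect_objects(img, threshold=128)  # two separate regions
```

A real pipeline, like the one the thesis describes, would add preprocessing (noise filtering, adaptive thresholding) before this step and feature extraction per region after it.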
292

Umělá inteligence ve hře Bang! / Artificial Intelligence in Bang! Game

Kolář, Vít January 2010 (has links)
The goal of this master's thesis is to create an artificial intelligence for the card game Bang!. It includes a full description of the game, its complete rules, the strategy principles players use, and an analysis of the game from an AI point of view. The thesis also surveys methods of artificial intelligence and summarizes basic information from the domain of game theory. The next part describes the implementation in the C++ language, which uses Bayes classification and decision trees based on expert systems. The last part presents an analysis of the overall positive results and a conclusion with possible further extensions.
293

Adaptivní klient pro sociální síť Twitter / Adaptive Client for Twitter Social Network

Guňka, Jiří January 2011 (has links)
The goal of this term project is to create a user-friendly Twitter client. It applies machine learning methods, such as the naive Bayes classifier, to flag new tweets of interest to the user. Hyperbolic trees and other methods are used to visualize these tweets.
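The naive Bayes idea mentioned here can be sketched in pure Python. The tweets and labels below are invented toy data; the client's actual features are not specified in the abstract.

```python
import math
from collections import Counter

# Toy training tweets labeled "interesting" or "other" for one user
# (illustrative data, not from the thesis).
train = [
    ("new machine learning paper on classifiers", "interesting"),
    ("naive bayes tutorial with examples", "interesting"),
    ("traffic jam on the highway again", "other"),
    ("great weather for a walk today", "other"),
]

def fit_naive_bayes(samples):
    """Collect per-class word counts, class document counts and the vocabulary."""
    counts = {}               # class -> Counter of words
    class_totals = Counter()  # class -> number of documents
    vocab = set()
    for text, label in samples:
        words = text.split()
        counts.setdefault(label, Counter()).update(words)
        class_totals[label] += 1
        vocab.update(words)
    return counts, class_totals, vocab

def predict(text, counts, class_totals, vocab):
    """Pick the class with the highest log-posterior, using Laplace smoothing."""
    n_docs = sum(class_totals.values())
    best_label, best_score = None, -math.inf
    for label, word_counts in counts.items():
        total_words = sum(word_counts.values())
        score = math.log(class_totals[label] / n_docs)  # log prior
        for w in text.split():
            score += math.log((word_counts[w] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

counts, class_totals, vocab = fit_naive_bayes(train)
label = predict("bayes classifiers paper", counts, class_totals, vocab)
```

With this training set, a tweet about classifiers scores highest under the "interesting" class; a production client would train on the user's own history instead.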
294

Prise en compte économique du long terme dans les choix énergétiques relatifs à la gestion des déchets radioactifs / Economic analysis of long-term energy choices related to the radioactive waste management

Doan, Phuong Hoai Linh 07 December 2017 (has links)
Nowadays, the deep geological repository is generally considered the reference solution for the definitive management of spent nuclear fuel and high-level waste, but different countries have adopted different disposal deployment schedules. Via economic calculation, we hope to offer some answers to the following question: in terms of disposal time management, how should the present generations, who benefit from nuclear power generation, bear the costs of radioactive waste management while taking future generations into account? This thesis analyzes the French decision specifically, taking its context into account. We propose a set of tools to evaluate the utility of the deep geological repository project according to the deployment schedule choices. Our thesis also studies the influence of disposal choices on the nuclear fuel cycle. Beyond that, we take into account the interactions between the deep geological repository, the nuclear fleet and fuel-cycle choices, which together constitute a "complete system".
295

Automatic Patent Classification

Yehe, Nala January 2020 (has links)
Patents have great research value and benefit industry, commerce, law and policymaking. Effective analysis of patent literature can reveal important technical details and relationships, explain business trends, suggest novel industrial solutions, and inform crucial investment decisions. Patent documents should therefore be analyzed carefully so that their value can be exploited. Generally, patent analysts need a certain degree of expertise in several fields, including information retrieval, data processing, text mining, field-specific technology, and business intelligence. In practice it is difficult to find, or to train in a reasonably short time, an analyst who meets the requirements of all these disciplines. Patent classification is also crucial in processing patent applications, because it allows patent texts to be managed and maintained better and more flexibly. In recent years the number of patents worldwide has increased dramatically, which makes an automatic patent classification system very desirable: such a system can replace time-consuming manual classification and provide patent analysts with an effective way of managing patent texts. This thesis designs a patent classification system based on data mining methods and machine learning techniques, and uses the KNIME platform to conduct a comparative analysis across different machine learning methods and different parts of a patent. The purpose of the thesis is to classify patents automatically using text data processing methods and machine learning techniques. It has two main parts: data preprocessing and the application of machine learning techniques. The research questions are: which part of a patent performs best as input data for automatic classification, and which of the implemented machine learning algorithms performs best at classifying IPC keywords? The thesis uses design science research as its method. The KNIME platform is used to apply the machine learning techniques, which include decision tree, XGBoost linear, XGBoost tree, SVM, and random forest. The implementation comprises data collection, data preprocessing, feature word extraction, and the application of classification techniques. A patent document consists of several parts, such as the description, abstract, and claims; we feed these three parts separately to our models and compare their performance. Based on the results of these three experiments, we suggest using the description part in the classification system, because it shows the best performance on English patent text classification; the abstract can serve as an auxiliary criterion. The classification based on the claims part, proposed by some scholars, did not achieve good performance in our research. Furthermore, the BoW and TF-IDF methods can be used together to extract feature words efficiently. In addition, we found that the SVM and XGBoost techniques performed best in our automatic patent classification system.
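The BoW and TF-IDF feature extraction this abstract credits reduces to a few lines of arithmetic. The documents below are invented stand-ins for patent descriptions; a real pipeline (such as the KNIME workflow in the thesis) would add tokenization, stemming and stop-word removal first.

```python
import math
from collections import Counter

# Three tiny stand-ins for patent descriptions (illustrative text only).
docs = [
    "rotor blade assembly for wind turbine",
    "turbine control system with sensor feedback",
    "battery cell electrode coating method",
]

def bow(doc):
    """Bag-of-words: raw term counts for one document."""
    return Counter(doc.split())

def tfidf(docs):
    """Per-document TF-IDF weights, using a smoothed IDF."""
    n = len(docs)
    counts = [bow(d) for d in docs]
    df = Counter()                     # document frequency per term
    for c in counts:
        df.update(c.keys())
    weights = []
    for c in counts:
        total = sum(c.values())
        weights.append({
            t: (f / total) * math.log((1 + n) / (1 + df[t]))
            for t, f in c.items()
        })
    return weights

weights = tfidf(docs)
```

The effect is the one the thesis relies on: a term like "rotor", unique to one document, outweighs "turbine", which appears in two, so discriminative words dominate the feature vector.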
296

Spatial patterns of humus forms, soil organisms and soil biological activity at high mountain forest sites in the Italian Alps

Hellwig, Niels 24 October 2018 (has links)
The objective of the thesis is the model-based analysis of spatial patterns of decomposition properties on the forested slopes of the montane level (ca. 1200-2200 m a.s.l.) in a study area in the Italian Alps (Val di Sole / Val di Rabbi, Autonomous Province of Trento). The analysis includes humus forms and enchytraeid assemblages as well as pH values, activities of extracellular enzymes and C/N ratios of the topsoil. The first aim is to develop, test and apply data-based techniques for the spatial modelling of soil ecological parameters; this methodological approach is based on the concept of digital soil mapping. The second aim is to reveal the relationships between humus forms, soil organisms and soil microbiological parameters in the study area. The third aim is to analyze whether the spatial patterns of indicators of decomposition differ between the landscape scale and the slope scale. At the landscape scale, sample data from six sites are used, covering three elevation levels on both north- and south-facing slopes. A knowledge-based approach that combines a decision tree analysis with the construction of fuzzy membership functions is introduced for spatial modelling; according to the sampling design, elevation and slope exposure are the explanatory variables. The investigations at the slope scale refer to one north-facing and one south-facing slope, with 30 sites on each slope. These sites were derived using conditioned Latin Hypercube Sampling and thus reasonably represent the environmental conditions within the study area. Predictive maps were produced in a purely data-based approach with random forests. At both scales, the models indicate a high variability of spatial decomposition patterns depending on elevation and slope exposure. In general, sites at high elevation on north-facing slopes almost exclusively exhibit the humus forms Moder and Mor, while sites on south-facing slopes and at low elevation also exhibit Mull and Amphimull.
The predictions of those enchytraeid species characterized as Mull and Moder indicators match the occurrence of the corresponding humus forms well. Furthermore, referencing the mineral topsoil, the predictive models show increasing pH values, an increasing leucine-aminopeptidase activity, an increasing ratio alkaline/acid phosphomonoesterase activity and a decreasing C/N ratio from north-facing to south-facing slopes and from high to low elevation. The predicted spatial patterns of indicators of decomposition are basically similar at both scales. However, the patterns are predicted in more detail at the slope scale because of a larger data basis and a higher spatial precision of the environmental covariates. These factors enable the observation of additional correlations between the spatial patterns of indicators of decomposition and environmental influences, for example slope angle and curvature. Both the corresponding results and broad model evaluations have shown that the applied methods are generally suitable for modelling spatial patterns of indicators of decomposition in a heterogeneous high mountain environment. The overall results suggest that the humus form can be used as indicator of organic matter decomposition processes in the investigated high mountain area.
297

Vytvoření modulu pro dolování dat z databází / Creation of Unit for Datamining

Krásenský, David Unknown Date (has links)
The goal of this work is to create a data mining module for the Belinda information system. Data from a client database are analyzed using SAS Enterprise Miner, and the results obtained with several data mining methods are compared. In the second phase, the selected data mining method is implemented as a module of the Belinda information system. The final part of the work evaluates the results obtained and the possible uses of this module.
298

Metody klasifikace www stránek / Methods for Classification of WWW Pages

Svoboda, Pavel January 2009 (has links)
The main goal of this master's thesis was to study the main principles of classification methods. The basic principles of the knowledge discovery process and data mining, and the use of the external class CSSBox, are described. Special attention was paid to the implementation of a "k-nearest neighbors" classification method. The first objective of this work was to create training and testing data described by n attributes. The second objective was to perform an experimental analysis to determine a good value for k, the number of neighbors.
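The "k-nearest neighbors" method the thesis implements can be sketched compactly: classify a point by majority vote among its k closest training points, then try several values of k experimentally. The 2-D points below are toy data, not the WWW-page attribute vectors used in the thesis.

```python
import math
from collections import Counter

# Toy 2-D training points with class labels (illustrative data).
train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"), ((0.9, 1.1), "a"),
         ((3.0, 3.0), "b"), ((3.2, 2.9), "b"), ((2.8, 3.1), "b")]

def knn_predict(x, k):
    """Classify x by majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

pred = knn_predict((1.1, 1.0), k=3)
```

Choosing k, the thesis's second objective, would mean running `knn_predict` over a held-out test set for each candidate k and keeping the value with the best accuracy.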
299

Peeking Through the Leaves : Improving Default Estimation with Machine Learning : A transparent approach using tree-based models

Hadad, Elias, Wigton, Angus January 2023 (has links)
In recent years the development and implementation of AI and machine learning models have increased dramatically, with the availability of quality data paving the way for sophisticated models. Financial institutions use many models in their daily operations. They are, however, heavily regulated and must follow the rules set by central bank auditing standards and the financial supervisory authorities. One of these standards, IFRS 9, governs the disclosure of expected credit losses in banks' financial statements: banks must measure the expected credit shortfall in line with regulations set up by the EBA and the FSA. In this master's thesis, we collaborate with a Swedish bank to evaluate different machine learning models for predicting defaults in an unsecured credit portfolio. The default probability is a key variable in the expected credit loss equation. The goal is not only to develop a valid model to predict these defaults but also to create and evaluate different models based on both their performance and their transparency. Given the regulatory challenges around AI, introducing transparency into models is part of the process: when banks use models there is a transparency requirement, which refers to how easily a model can be understood in terms of its architecture, calculations, feature importance and the logic behind its decision-making process. We compare the commonly used logistic regression to three machine learning models, decision tree, random forest and XGBoost, to show the differences in performance and transparency between the machine learning models and the industry standard. We introduce a transparency evaluation tool called the transparency matrix to shed light on the different transparency requirements of machine learning models. The results show that all of the tree-based machine learning models are a better choice of algorithm for estimating defaults than the traditional logistic regression, as shown by both the AUC score and the R2 metric. We also show that as models increase in complexity there is a performance-transparency trade-off: the more complex our models get, the better their predictions.
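The AUC score cited in these results has a direct interpretation: the probability that a randomly chosen defaulted account is scored above a randomly chosen non-defaulted one. A pure-Python sketch of that computation, with invented toy scores standing in for the bank's model outputs:

```python
# AUC via the Mann-Whitney pairwise formulation: count how often a
# positive (default) case outranks a negative one, with ties as 0.5.
def auc(labels, scores):
    """labels: 1 = default, 0 = no default; scores: model default scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 0, 1, 1, 1]
logistic_scores = [0.2, 0.5, 0.3, 0.4, 0.7, 0.9]  # toy "logistic" outputs
forest_scores = [0.1, 0.3, 0.2, 0.6, 0.8, 0.9]    # toy "random forest" outputs
auc_logistic = auc(labels, logistic_scores)
auc_forest = auc(labels, forest_scores)
```

In this toy example the "forest" scores rank every default above every non-default (AUC 1.0) while the "logistic" scores misrank one pair, mirroring the kind of comparison the thesis reports.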
300

Neural Networks for Modeling of Electrical Parameters and Losses in Electric Vehicle

Fujimoto, Yo January 2023 (has links)
Permanent magnet synchronous machines have various advantages and have shown superior performance for electric vehicles. However, modeling them is difficult because of their nonlinearity. To deal with this complexity, an artificial neural network and several machine learning models, including k-nearest neighbors, decision tree, random forest, and multiple linear regression with a quadratic model, are developed to predict electrical parameters and losses, as new prediction approaches for the performance of Volvo Cars' electric vehicles, and their performance is evaluated. Test operation data from the Volvo Car Corporation were used to extract and calculate the input and output data for each prediction model. To balance the effects of the input variables, the input data were normalized. In addition, correlation matrices of the normalized inputs were produced; they showed a high correlation between rotor temperature and winding resistance in the electrical-parameter prediction dataset, and a strong correlation between winding temperature and rotor temperature in the loss prediction dataset. Grid search with 5-fold cross-validation was used to optimize the hyperparameters of the artificial neural network and the machine learning models. The artificial neural network models performed best in MSE and R-squared scores for all the electrical parameter and loss predictions. The results indicate that artificial neural networks handle complicated nonlinear relationships, like those seen in electrical systems, better than the other machine learning algorithms. Among the remaining algorithms (decision tree, k-nearest neighbors, and multiple linear regression with a quadratic model), random forest produced the best results. With the exception of q-axis voltage, the decision tree model outperformed the k-nearest neighbors model in parameter prediction, as measured by MSE and R-squared score.
Multiple linear regression with a quadratic model produced the worst results for the electrical parameter prediction, because the relationship between input and output was too complex for a quadratic equation to capture. Random forest models performed better than decision tree models because a random forest ensembles hundreds of decision trees built on data subsets and averages their results. The k-nearest neighbors model performed worse than the decision tree for almost all electrical parameters: it simply chooses the closest points and uses their average as the predicted output, which makes complex nonlinear relationships hard to forecast, although it remains helpful for handling simple relationships and for understanding relationships in data. In terms of loss prediction, k-nearest neighbors and decision tree produced similar MSE and R-squared scores for the electric machine loss and the inverter loss; their predictions were worse than those of the multiple linear regression with a quadratic model for these losses, but better for forecasting the difference between electromagnetic power and mechanical power.
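Grid search with 5-fold cross-validation, as used here for hyperparameter tuning, reduces to a small loop: score each candidate value by its mean validation accuracy across folds and keep the best. A minimal sketch with an invented one-parameter threshold "model" standing in for the thesis's networks and tree models:

```python
# Minimal grid search over one hyperparameter, scored by k-fold
# cross-validation. All data and the threshold model are illustrative.
def kfold_indices(n, k):
    """Split range(n) into k contiguous folds (last fold takes the remainder)."""
    fold = n // k
    return [list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
            for i in range(k)]

def cv_score(threshold, xs, ys, k=5):
    """Mean accuracy of the rule `x >= threshold` over k validation folds."""
    folds = kfold_indices(len(xs), k)
    accs = []
    for val in folds:
        correct = sum((xs[i] >= threshold) == ys[i] for i in val)
        accs.append(correct / len(val))
    return sum(accs) / len(accs)

# Toy dataset: inputs below 0.5 are labeled False, the rest True.
xs = [0.1, 0.2, 0.3, 0.4, 0.45, 0.55, 0.6, 0.7, 0.8, 0.9]
ys = [False] * 5 + [True] * 5

# The "grid": evaluate every candidate threshold and keep the best scorer.
best = max([0.3, 0.5, 0.7], key=lambda t: cv_score(t, xs, ys))
```

A real search, like the one in the thesis, would retrain the model on the training folds for each candidate and typically sweep a grid over several hyperparameters at once; the select-by-mean-fold-score logic is the same.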
