1. Optimization of combine processes using expert knowledge and methods of artificial intelligence / Optimierung von Mähdruschprozessen unter Nutzung von Expertenwissen und Methoden der Künstlichen Intelligenz
Eggerl, Anja, 10 January 2018
Combine harvesters are used to gather plants from the field and separate them into the components of value: the grain and the straw. Optimal utilization of the existing combine potential is essential to maximize harvest efficiency and hence profit. The only way to optimize the threshing and separation processes during harvest is to adjust the combine settings to the existing conditions. Operating permanently at optimal harvest efficiency can only be achieved by an automatic control system. However, for reasons of transparency and due to a lack of sensors, the approach in this thesis is the combined development of an interactive and an automatic control system for combine process optimization.
The optimization of combine processes is a multi-dimensional, multi-objective optimization problem. The objectives of the optimization are the harvest quality parameters; the decision variables, i.e. the parameters that can be modified, are the combine settings. Analytical optimization methods require a model that provides function values as a function of defined input parameters, but a comprehensive quantitative model of the input-output behavior of the combine does not exist. Alternative optimization methods that can handle multi-dimensional, multi-objective optimization problems can be found in the domain of Artificial Intelligence.
In this work, knowledge acquisition was performed to obtain expert knowledge on combine process optimization. The result is a knowledge base with six adjustment matrices for different crop and combine types. The adjustment matrices contain problem-oriented setting adjustment recommendations for solving single issues with quality parameters. Utilizing this acquired expert knowledge, a control algorithm has been developed that is also capable of solving multiple issues at the same time. The basic principle for solving the given multi-objective optimization problem is a transformation into one-dimensional, single-objective optimization problems that are solved iteratively; several methods have been developed that are applied sequentially.
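To make this principle concrete, the following sketch shows such a rule-based control loop in Python; the adjustment-matrix entries, the setting names, and the toy evaluation function standing in for harvest feedback are illustrative assumptions, not the acquired knowledge base or the evaluation model of the thesis.

    # A minimal sketch of the iterative expert-rule control loop described
    # above; all entries below are illustrative, not the thesis's knowledge base.

    # Problem-oriented adjustment matrix: for each single quality issue, the
    # recommended setting changes (setting name, step on a normalized scale).
    ADJUSTMENT_MATRIX = {
        "grain_loss":   [("fan_speed", -1), ("sieve_opening", +1)],
        "broken_grain": [("rotor_speed", -1), ("concave_gap", +1)],
        "tailings":     [("sieve_opening", +1)],
    }

    def toy_evaluate(settings):
        """Stand-in for sensor or operator feedback: severity per quality
        parameter, where 0 means the parameter is acceptable."""
        return {
            "grain_loss":   max(0, settings["fan_speed"] - 5),
            "broken_grain": max(0, settings["rotor_speed"] - 6),
            "tailings":     max(0, 4 - settings["sieve_opening"]),
        }

    def optimize(settings, evaluate, max_iter=20):
        """Transform the multi-objective problem into single-objective steps:
        repeatedly fix the currently worst quality issue, then re-evaluate."""
        for _ in range(max_iter):
            quality = evaluate(settings)
            issue = max(quality, key=quality.get)
            if quality[issue] == 0:
                break                      # every quality parameter acceptable
            for setting, step in ADJUSTMENT_MATRIX[issue]:
                settings[setting] += step  # apply the expert recommendation
        return settings

    initial = {"fan_speed": 8, "rotor_speed": 9, "sieve_opening": 2, "concave_gap": 3}
    print(optimize(dict(initial), toy_evaluate))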
In simulation, the average improvement from initial to optimized settings achieved by the control algorithm is between 34.5 % and 67.6 %, demonstrating the good performance of the control algorithm.
2. Immobilienbewertung in Märkten mit geringen Transaktionen – Möglichkeiten statistischer Auswertungen (Real estate valuation in markets with few transactions – possibilities of statistical analyses)
Soot, Matthias, 28 July 2021
In Germany, market transparency is provided by the official expert committees (Gutachterausschüsse) as well as by various private players in the real estate market. Especially in sub-markets with low transaction numbers, market transparency is a challenge, because not enough data is available to analyse the respective markets. These markets therefore require a more in-depth investigation to achieve sufficient market transparency, and the diversity of sub-markets with low transaction numbers must be considered in a differentiated way.
The work first examines differences in the characteristics of markets with low transaction numbers. Using a qualitative analysis of guided interviews as well as the literature on the topic, a theory for systematising these markets is developed. With this structuring, a suitable evaluation strategy can be derived for each individual market.
Subsequently, various data already used in markets with low transaction numbers are examined. Purchase cases that are recorded incompletely are currently excluded entirely from evaluations (case-wise exclusion), although often only a single piece of information needed for a multivariate analysis is missing. The work investigates whether, and with which methods, these data gaps can be filled appropriately in order to achieve higher accuracy in the analyses even with little data. Besides case-wise exclusion, mean-value imputation as well as filling the data gaps by means of expectation maximization and random forest regression are investigated.
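As an illustration of these four strategies, the following sketch applies them to synthetic purchase cases; the attribute names, the missingness rate, and the use of scikit-learn's IterativeImputer as an EM-like imputer and as a random-forest imputer are assumptions for the example, not the implementation used in the thesis.

    # Compare case-wise exclusion with three imputation strategies on
    # synthetic, incompletely recorded purchase cases (illustrative data).
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import SimpleImputer, IterativeImputer

    rng = np.random.default_rng(0)
    n = 200
    cases = pd.DataFrame({
        "living_area": rng.normal(120, 30, n),
        "year_built": rng.integers(1950, 2020, n).astype(float),
        "plot_size": rng.normal(600, 150, n),
    })
    # Knock out about 15 % of the entries to simulate incomplete recording.
    incomplete = cases.mask(rng.random(cases.shape) < 0.15)

    # Case-wise exclusion: drop every case with at least one missing value.
    complete_only = incomplete.dropna()

    # Mean-value imputation: replace each gap with the column mean.
    mean_imputed = SimpleImputer(strategy="mean").fit_transform(incomplete)

    # EM-like iterative imputation: model each feature from the others in turn.
    em_like = IterativeImputer(max_iter=10, random_state=0).fit_transform(incomplete)

    # Random-forest imputation: the same loop with a non-linear estimator.
    rf_imputed = IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=50, random_state=0),
        max_iter=10, random_state=0,
    ).fit_transform(incomplete)

    print(f"cases kept by case-wise exclusion: {len(complete_only)} of {n}")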
Furthermore, the expert knowledge that can be expressed in various forms of expertise (surveys, offer prices, expert reports) is examined. To gain an overview, expert knowledge is first studied in a quantitative survey in order to uncover patterns of action and differences between experts from different groups. Subsequently, intersubjective expert and layman surveys are evaluated in the context of real estate valuation, and offer prices of properties marketed with and without estate agents are compared with the realised purchase prices.
Since the additional data examined, such as offer data or expert surveys, is not available in some sub-markets or can only be generated at great expense, alternative approaches are necessary. For this purpose, two methods are tested for their suitability for spatially aggregated evaluations, compared against the multiple linear regression analysis established in practice: geographically weighted regression, which can represent local markets more accurately, and artificial neural networks, which can represent non-linearities more accurately.
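For the first of the two methods, the following from-scratch sketch shows the core idea of geographically weighted regression, a separate weighted least-squares fit per location with a Gaussian distance kernel; the synthetic sales data and the fixed bandwidth are assumptions for the example, not the model estimated in the thesis.

    # Geographically weighted regression (GWR) from scratch on synthetic sales.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    coords = rng.uniform(0, 10, size=(n, 2))        # location of each sale
    living_area = rng.normal(120, 30, n)
    # Spatially varying market: the living-area coefficient drifts east-west.
    local_coef = 1500 + 100 * coords[:, 0]
    price = 50_000 + local_coef * living_area + rng.normal(0, 10_000, n)

    X = np.column_stack([np.ones(n), living_area])  # intercept + living area
    bandwidth = 2.0                                 # kernel bandwidth (assumed)

    def gwr_coefficients(target):
        """Weighted least squares centred on one location."""
        d = np.linalg.norm(coords - target, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)     # Gaussian spatial kernel
        Xw = X * w[:, None]                         # solve X^T W X beta = X^T W y
        beta, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ price, rcond=None)
        return beta

    west = gwr_coefficients(np.array([1.0, 5.0]))
    east = gwr_coefficients(np.array([9.0, 5.0]))
    print(f"living-area coefficient west: {west[1]:.0f}, east: {east[1]:.0f}")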
The results show that a structuring of markets with a low number of transactions is possible. A meaningful structuring is based on the population of the respective functional and spatial sub-market; a differentiation between rural and urban areas is also possible.
With imputation methods, the results of regression analyses can be improved significantly. Even with a large number of data gaps across different parameters, an evaluation can still deliver good results in the order of magnitude of the complete purchase cases, and even the simple method of mean-value imputation achieves good results.
Experts in the field of real estate valuation have a wide variety of professional backgrounds, yet no substantial systematics can be identified in their working methods; systematic differences appear only in their use of data. Expert surveys generally show a high degree of dispersion, which is reduced when the surveys are constrained, for example by a given scale or by suggested values; further investigations are necessary here. The discounts between offer prices and purchase prices, as well as the adjustment of offer prices during the marketing period, also show a wide spread. A significant difference between marketing with or without an agent cannot be demonstrated in the examined sample.
Both geographically weighted regression (GWR) and artificial neural networks (ANN) offer an advantage in the cross-validated evaluation of spatially aggregated data. This suggests that the markets are both spatially inhomogeneous and non-linear. A combination of the geographic component with non-parametric approaches such as the learning procedure of the ANN therefore appears promising.
3. Integration of Auxiliary Data Knowledge in Prototype Based Vector Quantization and Classification Models
Kaden, Marika, 23 May 2016
This thesis deals with the integration of auxiliary data knowledge into machine learning methods, especially prototype-based classification models. The problem of classification is diverse, and evaluating the result by accuracy alone is not adequate in many applications. Therefore, the classification tasks are analyzed more deeply, and possibilities to extend prototype-based methods to integrate extra knowledge about the data or the classification goal are presented in order to obtain problem-adequate models. One of the proposed extensions is a Generalized Learning Vector Quantization (GLVQ) for the direct optimization of statistical measures besides the classification accuracy; modifying the metric adaptation of the Generalized Learning Vector Quantization for functional data, i.e. data with lateral dependencies in the features, is also considered.
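As a reference point for these extensions, here is a minimal sketch of the underlying GLVQ update with one prototype per class; the toy data, the learning rate, and the use of the identity transfer function are simplifying assumptions for the example.

    # Minimal Generalized LVQ (GLVQ): minimize mu = (d+ - d-) / (d+ + d-)
    # per sample by gradient steps on the closest correct/wrong prototypes.
    import numpy as np

    rng = np.random.default_rng(0)
    # Two Gaussian classes in 2-D (toy data).
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)

    prototypes = np.array([[0.0, 0.0], [3.0, 3.0]])  # one prototype per class
    proto_labels = np.array([0, 1])
    lr = 0.05

    for _ in range(30):  # epochs
        for xi, yi in zip(X, y):
            d = np.sum((prototypes - xi) ** 2, axis=1)              # squared distances
            j = np.argmin(np.where(proto_labels == yi, d, np.inf))  # closest correct
            k = np.argmin(np.where(proto_labels != yi, d, np.inf))  # closest wrong
            dp, dm = d[j], d[k]
            denom = (dp + dm) ** 2
            # Gradient of mu w.r.t. the two prototypes (identity transfer function).
            prototypes[j] += lr * (4 * dm / denom) * (xi - prototypes[j])
            prototypes[k] -= lr * (4 * dp / denom) * (xi - prototypes[k])

    pred = proto_labels[np.argmin(((X[:, None, :] - prototypes) ** 2).sum(-1), axis=1)]
    print(f"training accuracy: {(pred == y).mean():.2f}")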
Table of Contents:
Symbols and Abbreviations
1 Introduction
  1.1 Motivation and Problem Description
  1.2 Utilized Data Sets
2 Prototype Based Methods
  2.1 Unsupervised Vector Quantization
    2.1.1 C-means
    2.1.2 Self-Organizing Map
    2.1.3 Neural Gas
    2.1.4 Common Generalizations
  2.2 Supervised Vector Quantization
    2.2.1 The Family of Learning Vector Quantizers - LVQ
    2.2.2 Generalized Learning Vector Quantization
  2.3 Semi-Supervised Vector Quantization
    2.3.1 Learning Associations by Self-Organization
    2.3.2 Fuzzy Labeled Self-Organizing Map
    2.3.3 Fuzzy Labeled Neural Gas
  2.4 Dissimilarity Measures
    2.4.1 Differentiable Kernels in Generalized LVQ
    2.4.2 Dissimilarity Adaptation for Performance Improvement
3 Deeper Insights into Classification Problems - From the Perspective of Generalized LVQ
  3.1 Classification Models
  3.2 The Classification Task
  3.3 Evaluation of Classification Results
  3.4 The Classification Task as an Ill-Posed Problem
4 Auxiliary Structure Information and Appropriate Dissimilarity Adaptation in Prototype Based Methods
  4.1 Supervised Vector Quantization for Functional Data
    4.1.1 Functional Relevance/Matrix LVQ
    4.1.2 Enhancement Generalized Relevance/Matrix LVQ
  4.2 Fuzzy Information About the Labels
    4.2.1 Fuzzy Semi-Supervised Self-Organizing Maps
    4.2.2 Fuzzy Semi-Supervised Neural Gas
5 Variants of Classification Costs and Class Sensitive Learning
  5.1 Border Sensitive Learning in Generalized LVQ
    5.1.1 Border Sensitivity by Additive Penalty Function
    5.1.2 Border Sensitivity by Parameterized Transfer Function
  5.2 Optimizing Different Validation Measures by the Generalized LVQ
    5.2.1 Attention Based Learning Strategy
    5.2.2 Optimizing Statistical Validation Measurements for Binary Class Problems in the GLVQ
  5.3 Integration of Structural Knowledge about the Labeling in Fuzzy Supervised Neural Gas
6 Conclusion and Future Work
My Publications
A Appendix
  A.1 Stochastic Gradient Descent (SGD)
  A.2 Support Vector Machine
  A.3 Fuzzy Supervised Neural Gas Algorithm Solved by SGD
Bibliography
Acknowledgements
4. Brücken bauen! Grundlagen des transdisziplinären Forschens zu Human-Cyber-Physical Systems am Beispiel der Modellierung von Expertenwissen zur Elastizität von Materialien mit versteh- und erklärbarer Künstlicher Intelligenz (Building bridges! Foundations of transdisciplinary research on human-cyber-physical systems, exemplified by the modelling of expert knowledge about the elasticity of materials with comprehensible and explainable artificial intelligence)
Bocklisch, Franziska, 8 December 2021
The article describes a procedure for transdisciplinary research teams that want to work on joint research questions on the basis of the integrative framework concept of the "Mensch-Cyber-Technik-System" (human-cyber-physical system). A systematic approach is outlined and explained step by step using the example of comprehensible and explainable Artificial Intelligence for the selected application of modelling human expert knowledge about the elasticity of materials.