141.
Das Analysekompetenz-Marktpriorität-Portfolio zum Vergleich von Datenanalyseprojekten in der Produktentwicklung. Klement, Sebastian; Saske, Bernhard; Arndt, Stephan; Stelzer, Ralph. 03 January 2020
Artificial intelligence (AI), with its subordinate research fields such as machine learning (ML), speech recognition, and robotics, is on everyone's lips. The performance and stability of applications that use AI in the broader sense to handle tasks have increased, and such applications permeate society more and more. Worldwide, AI is perceived as a key technology that will continue to gain importance over the coming years (Bitkom, DFKI 2017). The call for proposals of the Bundesministerium für Wirtschaft und Energie of 02/2019 likewise aims to promote AI as a pacemaker technology for "[...] economically relevant ecosystems" (BMWi 2019). As production equipment is increasingly fitted with sensors, and these sensors are increasingly interconnected, the amount of available data that can be used to generate knowledge grows as well (Fraunhofer 2018). Machine learning, as a subfield of AI, benefits particularly from this. The tasks that can be tackled with these data within mechanical engineering are as diverse as the data themselves. Goals associated with the use of ML include, for example, self-optimizing production systems or demand-driven maintenance of plants based on the most accurate possible prediction of component failure times. Like any other technology, the use of ML requires resources that are only available to a limited extent in companies. The decision for or against using ML in mechanical engineering products is currently clearly a strategic one and requires the involvement of various departments up to the company's management (Saltz et al. 2017). A strategic discussion and decision-making tool is therefore needed that can present a project from a technological and an economic perspective, can be used across disciplines, and enables a structured procedure.
The authors propose the Analysekompetenz-Marktpriorität portfolio (analysis competence vs. market priority) introduced here as a decision-making aid, tailored specifically to the question of ML use in mechanical engineering. Evaluation tables are presented and their use explained; they are oriented toward the process steps to be carried out for complex data analyses (Shearer 2000, Klement et al. 2018). The derivation of standard strategies is discussed on the basis of the final portfolio representation. [... from the introduction]
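Sketched in code, the basic mechanics of such a two-axis portfolio could look as follows. This is a hypothetical illustration only: the criteria, weights, and quadrant thresholds below are made up and are not the evaluation tables from the paper.

```python
# Illustrative sketch of a two-axis portfolio evaluation. All criterion
# names, weights, and thresholds are hypothetical placeholders.

def axis_score(ratings, weights):
    """Weighted average of criterion ratings (each rated 1..5)."""
    assert len(ratings) == len(weights)
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Axis 1: analysis competence (e.g., data availability, ML expertise, tooling)
competence = axis_score(ratings=[4, 2, 3], weights=[0.5, 0.3, 0.2])

# Axis 2: market priority (e.g., customer demand, competitive pressure)
priority = axis_score(ratings=[5, 3], weights=[0.6, 0.4])

# A simple norm-strategy rule derived from the resulting quadrant:
if competence >= 3 and priority >= 3:
    strategy = "invest"
elif competence < 3 and priority >= 3:
    strategy = "build competence"
else:
    strategy = "observe"

print(competence, priority, strategy)
```

Placing several candidate projects into the same two-dimensional chart in this way is what makes the portfolio usable as a cross-disciplinary discussion tool.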
142.
On Teaching Quality Improvement of a Mathematical Topic Using Artificial Neural Networks Modeling (With a Case Study). Mustafa, Hassan M.; Al-Hamadi, Ayoub. 07 May 2012
This paper is inspired by recent Artificial Neural Network (ANN) simulations applied to the evaluation of phonics methodology for teaching children how to read. It presents a novel approach for teaching a mathematical topic using a computer-aided learning (CAL) package applied in an educational setting (a children's classroom). Interesting practical results were obtained after field application of the suggested CAL package with and without an accompanying teacher's voice. The study strongly recommends the application of a novel teaching approach based on behaviorism and individual learning styles in order to improve the quality of children's mathematical learning performance.
143.
Towards Efficient Convolutional Neural Architecture Design. Richter, Mats L. 10 May 2022
The design and adjustment of convolutional neural network architectures is an opaque and mostly trial-and-error-driven process.
The main reason for this is the lack of proper paradigms, beyond general conventions, for the development of neural network architectures, and the lack of effective insights into the models that can be propagated back into design decisions.
In order for the task-specific design of deep learning solutions to become more efficient and goal-oriented, novel design strategies need to be developed that are founded on an understanding of convolutional neural network models.
This work develops tools for the analysis of the inference process in trained neural network models.
Based on these tools, characteristics of convolutional neural network models are identified that can be linked to inefficiencies in predictive and computational performance.
Based on these insights, this work presents methods for effectively diagnosing these design faults before and during training with little computational overhead.
These findings are empirically tested and demonstrated on architectures with sequential and multi-pathway structures, covering all the common types of convolutional neural network architectures used for classification.
Furthermore, this work proposes simple optimization strategies that allow for goal-oriented and informed adjustment of the neural architecture, opening the potential for a less trial-and-error-driven design process.
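One concrete example of such an analysis quantity is a saturation-style metric: the fraction of eigendirections of a layer's feature covariance that is needed to capture most of the variance, which can flag layers whose capacity is barely used. The sketch below is a simplified illustration of that idea, not the exact definition used in the thesis.

```python
import numpy as np

def saturation(activations, var_threshold=0.99):
    """Fraction of eigendirections of a layer's feature covariance needed
    to explain `var_threshold` of the total variance.
    activations: (n_samples, n_features) array of one layer's outputs."""
    centered = activations - activations.mean(axis=0)
    cov = centered.T @ centered / (len(activations) - 1)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]     # descending
    ratio = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(ratio, var_threshold) + 1)
    return k / activations.shape[1]

rng = np.random.default_rng(0)
# A layer whose 64 features are driven by only 4 latent directions:
latent = rng.normal(size=(1000, 4))
low_rank = latent @ rng.normal(size=(4, 64))
print(saturation(low_rank))                       # low: few directions carry the variance
print(saturation(rng.normal(size=(1000, 64))))    # high: nearly all directions are used
```

A consistently low value in a layer hints at redundant width or an unnecessarily deep stage, which is exactly the kind of design fault the diagnosis methods above aim to expose before or during training.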
144.
Optimierung der Standort- und Betriebsparameter von Infiltrationsbecken zur künstlichen Grundwasseranreicherung hinsichtlich quantitativer und qualitativer Effizienz. Fichtner, Thomas. 08 November 2021
Ein kontinuierlich ansteigender Wasserbedarf, verursacht durch verstärktes Bevölkerungswachstum, zunehmende Urbanisierung und Industrialisierung, einhergehend mit einer Übernutzung der verfügbaren Wasserressourcen, führt weltweit zu einem dauerhaften Absinken der Grundwasserstände. Um das zeitliche Ungleichgewicht zwischen lokalem Wasserbedarf und Verfügbarkeit zu überwinden und die daraus resultierenden negativen Auswirkungen abzumildern, erfolgt im Rahmen einer künstlichen Grundwasseranreicherung die gezielte Anreicherung oder Wiederaufladung eines Aquifers. Dazu wird überschüssiges Oberflächenwasser unter kontrollierten Bedingungen versickert oder infiltriert, um es in Zeiten von Wassermangel zur Verfügung zu stellen oder die ökologischen Randbedingungen zu verbessern.
Beim Betrieb der dafür häufig eingesetzten Infiltrationsbecken kommt es in Abhängigkeit von den Standort- (Boden/Klima/Wasserqualität) und den Betriebsparametern (Hydraulische Beladungsrate, Hydraulischer Beladungszyklus) allerdings durch verschiedene Prozesse (Kolmation, Sauerstoff- und Nährstofftransport) häufig zur negativen Beeinflussung der quantitativen und qualitativen Effizienz solcher Anlagen.
Bisher durchgeführte Untersuchungen im Labor- und Feldmaßstab sowie die im Zuge des Betriebes bestehender Infiltrationsbecken gewonnenen Daten liefern hauptsächlich Informationen zum Einfluss einzelner Randbedingungen auf die Veränderung der Infiltrationskapazität bzw. die quantitative Effizienz. Allerdings können auf Basis dieser Daten nicht alle offenen Fragen hinsichtlich des Einflusses der Standort- und Betriebsparameter auf die quantitative und qualitative Effizienz von Infiltrationsbecken vollumfänglich und abschließend beantwortet werden. Aufgrund nicht untersuchter Aspekte sowie widersprüchlicher Daten existieren Unsicherheiten bezüglich der Bewertung hinsichtlich des Einflusses der einzelnen Standort- und Betriebsparameter auf die Effizienz solcher Anlagen.
Zur Generierung von weiterem Wissen über den Einfluss von Standort- und Betriebsparametern auf die Effizienz von Infiltrationsbecken und zur anschließenden Formulierung von Empfehlungen für eine optimierte Standortauswahl sowie Betriebsweise von Infiltrationsbecken erfolgt die Durchführung von Laborversuchen mittels kleinskaliger und großskaliger, physikalischer Modelle. Es werden verschiedene Infiltrationsszenarien bei wechselnden Randbedingungen (Bodenart, Temperatur, Wasserqualität, Hydraulische Beladungsrate, Hydraulischer Beladungszyklus) durchgeführt.
Anhand der gewonnenen Daten kann die Beeinflussung der quantitativen und qualitativen Effizienz durch die verschiedenen Standort- und Betriebsparameter sowie die dadurch beeinflussten Prozesse sehr gut aufgezeigt werden. Das bisher existierende Wissen kann dabei zum Teil bestätigt und um zusätzliche Erkenntnisse erweitert werden.
Es zeigt sich, dass eine höhere hydraulische Durchlässigkeit des anstehenden Bodens eine geringere Reduzierung der Infiltrationskapazität durch Kolmationsprozesse verursacht und zudem für eine bessere Sauerstoffverfügbarkeit sorgt. Darüber hinaus wird ersichtlich, dass Bodentexturen mit einem mittleren Porendurchmesser von 230 µm optimale Bedingungen für eine hohe biologische Aktivität einhergehend mit einem Abbau infiltrierter Substanzen bieten.
Der Nachweis einer verstärkten Reduzierung der Infiltrationskapazität durch Kolmationsprozesse bei erhöhten Temperaturen, aber nicht vorhandener Sonneneinstrahlung, kann nicht erbracht werden, da das Fließen des infiltrierten Wassers signifikant durch die erhöhte Viskosität beeinflusst wird.
Eine schlechtere Wasserqualität, gleichbedeutend mit erhöhten Konzentrationen an abfiltrierbaren Stoffen sowie gelöstem organischen Kohlenstoff, verursacht in den simulierten Infiltrationsszenarien eine stärkere Reduzierung der Infiltrationskapazität. Die physikalischen Kolmationsprozesse tragen dabei den Hauptanteil an der Reduzierung der Infiltrationskapazität.
Des Weiteren wird nachgewiesen, dass eine erhöhte hydraulische Beladungsrate (HBR) zu einer verstärkten Reduzierung der Infiltrationskapazität und zu einer verschlechterten Sauerstoffverfügbarkeit führt.
Die Länge der Infiltrations- und Trockenphasen während des simulierten Betriebes von Infiltrationsbecken beeinflusst entscheidend die Reduzierung der Infiltrationskapazität sowie die Sauerstoffverfügbarkeit. Dabei kann gezeigt werden, dass unabhängig von der Länge der Infiltrations- und Trockenphasen eine vollständige Wiederherstellung der Sauerstoffverfügbarkeit innerhalb von 24 h im Anschluss an eine Infiltrationsphase gewährleistet wird. Das Verhältnis von Infiltrations- und Trockenphasen, auch als Hydraulischer Beladungszyklus bezeichnet, hat hingegen nahezu keinen Einfluss auf die quantitative Effizienz.
Bei der Betrachtung aller simulierten Infiltrationsszenarien inklusive der Wechselwirkungen zwischen den verschiedenen Standort- und Betriebsparametern können die optimalen Bedingungen für eine hohe quantitative und qualitative Effizienz von Infiltrationsbecken identifiziert werden. Diese sind gegeben beim Vorhandensein eines gut durchlässigen Bodens (hydraulische Leitfähigkeit > 10⁻⁴ m s⁻¹), idealerweise mit einem mittleren Porendurchmesser von 230 µm, gepaart mit einer intermittierenden Infiltration von Wasser höherer Qualität (AFS ≤ 10 mg L⁻¹, BDOC ≤ 10 mg L⁻¹) und der Vermeidung von Infiltrationsphasen länger als 24 h.
Eine Widerspiegelung der experimentellen Ergebnisse sowie eine Vorhersage der Reduzierung der Infiltrationskapazität ist mit dem ausgewählten, analytischen Modell nach Pedretti et al., 2012 aufgrund der unzureichend implementierten Berücksichtigung veränderlicher Eingangsparameter nur bedingt möglich.
Auf Basis der gewonnenen Daten und dem damit einhergehenden erweiterten Wissen über den Einfluss von Standort- und Betriebsparametern auf die Effizienz von Infiltrationsbecken können schlussendlich Empfehlungen für die Standortauswahl und die optimale Betriebsweise ausgesprochen werden.

Inhaltsverzeichnis:
1 Einleitung ... 1
2 Grundlagen der künstlichen Grundwasseranreicherung ... 7
3 Vorliegende Erkenntnisse zur Beeinflussung der quantitativen und qualitativen Effizienz durch Standort- und Betriebsparameter ... 38
4 Methoden ... 49
5 Gewonnene Erkenntnisse hinsichtlich der Beeinflussung der quantitativen und qualitativen Effizienz durch Standort- und Betriebsparameter ... 87
6 Empfehlungen zur Optimierung von Standort- und Betriebsbedingungen von Infiltrationsbecken zur künstlichen Grundwasseranreicherung ... 128
7 Schlussfolgerung und Ausblick ... 136

A continuously rising demand for water, caused by increased population growth, growing urbanization and industrialization, accompanied by overuse of available water resources, is leading to a permanent drop in groundwater levels worldwide. In order to overcome the temporal imbalance between local water demand and availability and to mitigate the resulting negative effects, artificial groundwater recharge involves the managed enrichment or recharging of an aquifer. For this purpose, excess surface water is percolated or infiltrated under controlled conditions in order to make it available in times of water shortage or to improve the ecological boundary conditions.
However, the quantitative and qualitative efficiency of frequently used infiltration basins during the operation is often negatively influenced by a wide variety of processes (clogging, oxygen and nutrient transport), depending on the location (soil/climate/water quality) and the operating parameters (loading rate, loading cycle).
Investigations conducted to date on laboratory and field scale, as well as data obtained during the operation of existing infiltration basins, provide information on the influence of individual boundary conditions on the change in infiltration capacity or quantitative efficiency. However, not all open questions regarding the influence of site-specific and operating parameters on the quantitative and qualitative efficiency of infiltration basins can be answered completely and conclusively on the basis of these data. Due to aspects that have not been investigated and contradictory data, there are uncertainties in the evaluation of the influence of the individual site and operating parameters on the efficiency of such plants.
Laboratory tests using small-scale and large-scale physical models were carried out, in order to generate further knowledge about the influence of site specific and operating parameters on the efficiency of infiltration basins and to formulate subsequently recommendations for an optimised site selection and operation of these plants. Various infiltration scenarios were carried out under changing boundary conditions (soil type, temperature, water quality, hydraulic loading rate, hydraulic loading cycle).
Based on the data obtained, the influence on the quantitative and qualitative efficiency by the various site specific and operating parameters and the processes influenced by them can be demonstrated very well. The existing knowledge can be partially confirmed and extended by additional findings.
It shows that a higher hydraulic permeability of the existing soil causes a lower reduction of the infiltration capacity by clogging processes and provides also a better oxygen availability. Furthermore, it can be observed that soil textures with an average pore diameter of 230 µm offer optimal conditions for high biological activity combined with a strong degradation of infiltrated substances.
In case of higher temperatures but without solar radiation, an increased reduction of the infiltration capacity by clogging processes cannot be observed, since the flow of the infiltrated water is significantly influenced by the increased viscosity.
In the simulated infiltration scenarios, poorer water quality, synonymous with increased concentrations of filterable substances as well as dissolved organic carbon, cause a stronger reduction of the infiltration capacity. Physical clogging processes are contributing the major part to the reduction of the infiltration capacity.
Furthermore, it can be shown that an increased hydraulic loading rate leads to an increased reduction of the infiltration capacity and to a decreased oxygen availability.
The length of the infiltration and drying phases during the simulated operation of infiltration basins has a decisive influence on the reduction of the infiltration capacity and the oxygen availability. It is demonstrated that regardless of the length of the infiltration and drying phases, a complete restoration of oxygen availability can be guaranteed within 24 h following an infiltration phase. In contrast, the ratio of infiltration and dry phases, also known as the hydraulic loading cycle, has almost no influence on the quantitative efficiency.
Optimal conditions for a high quantitative and qualitative efficiency of infiltration basins can be identified, when considering all simulated infiltration scenarios including the interactions between the different site specific and operating parameters. These are given in the presence of a well-permeable soil (hydraulic conductivity > 10⁻⁴ m s⁻¹), ideally with an average pore diameter of 230 µm, coupled with an intermittent infiltration of water of higher quality (AFS ≤ 10 mg L⁻¹, BDOC ≤ 10 mg L⁻¹) and the prevention of infiltration phases longer than 24 h.
A reflection of the experimental results as well as a prediction of the reduction of the infiltration capacity with the selected analytical model according to Pedretti et al., 2012 is only conditionally possible due to the insufficiently implemented consideration of variable input parameters.
Recommendations for site selection and optimal operation were finally made on the basis of the data obtained and the resulting extended knowledge about the influence of site specific and operating parameters on the efficiency of infiltration basins.
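The clogging-driven decline of infiltration capacity discussed above is often described analytically. The sketch below uses a generic exponential decay over one infiltration phase as a stand-in; the rate constants are made up for illustration, and this is explicitly not the formulation of Pedretti et al. (2012).

```python
import math

def infiltration_capacity(t_hours, k0, decay_rate):
    """Generic exponential clogging model: infiltration capacity declines
    during an infiltration phase as a clogging layer builds up.
    k0: initial capacity (m/h); decay_rate: clogging rate constant (1/h)."""
    return k0 * math.exp(-decay_rate * t_hours)

# Hypothetical parameters: a well-permeable soil (K ~ 1e-4 m/s = 0.36 m/h)
k0 = 0.36
fast = infiltration_capacity(24, k0, decay_rate=0.05)  # poorer water quality
slow = infiltration_capacity(24, k0, decay_rate=0.01)  # better water quality
print(f"after 24 h: {fast:.3f} m/h vs {slow:.3f} m/h")
```

Comparing the two runs mirrors the qualitative finding above: higher loads of suspended solids and organic carbon (a larger effective decay rate) leave markedly less capacity at the end of a 24 h infiltration phase.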
145.
Improving nuclear medicine with deep learning and explainability: two real-world use cases in parkinsonian syndrome and safety dosimetry. Nazari, Mahmood. 17 March 2022
Computer vision in the area of medical imaging has rapidly improved during recent years as a consequence of developments in deep learning and explainability algorithms. In addition, imaging in nuclear medicine is becoming increasingly sophisticated, with the emergence of targeted radiotherapies that enable treatment and imaging on a molecular level ("theranostics"), where radiolabeled targeted molecules are directly injected into the bloodstream. Based on our recent work, we present two use cases in nuclear medicine: first, the impact of automated organ segmentation required for personalized dosimetry in patients with neuroendocrine tumors, and second, purely data-driven identification and verification of brain regions for the diagnosis of Parkinson's disease. A convolutional neural network was used for automated organ segmentation on computed tomography images. The segmented organs were used to calculate the energy deposited into the organs at risk for patients treated with a radiopharmaceutical. Our method resulted in faster and cheaper dosimetry and differed by only 7% from dosimetry performed by two medical physicists. The identification of brain regions, in turn, was analyzed on dopamine-transporter single-photon emission computed tomography (SPECT) images using a convolutional neural network and an explainability method, the layer-wise relevance propagation algorithm. Our findings confirm that the extra-striatal brain regions, i.e., insula, amygdala, ventromedial prefrontal cortex, thalamus, anterior temporal cortex, superior frontal lobe, and pons, contribute to the interpretation of images beyond the striatal regions. In current common diagnostic practice, however, only the striatum serves as the reference region, while extra-striatal regions are neglected.
We further demonstrate that deep learning-based diagnosis combined with explainability algorithm can be recommended to support interpretation of this image modality in clinical routine for parkinsonian syndromes, with a total computation time of three seconds which is compatible with busy clinical workflow.
Overall, this thesis shows for the first time that deep learning with explainability can achieve results competitive with human performance and generate novel hypotheses, thus paving the way towards improved diagnosis and treatment in nuclear medicine.
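The layer-wise relevance propagation (LRP) algorithm mentioned above redistributes a network's output score backwards through the layers such that relevance is (approximately) conserved at every step. Below is a minimal numpy sketch of the LRP-epsilon rule for dense layers; the networks in the thesis are convolutional, so this is a toy-scale illustration of the rule, not the thesis pipeline.

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance, eps=1e-6):
    """One LRP-epsilon backward step through a linear layer.
    weights: (n_in, n_out); activations: (n_in,); relevance: (n_out,)."""
    z = activations @ weights                              # pre-activations
    s = relevance / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratio
    return activations * (weights @ s)                     # relevance per input

rng = np.random.default_rng(1)
w1, w2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))
x = rng.normal(size=4)
h = np.maximum(x @ w1, 0)                  # ReLU hidden layer
out = h @ w2

r_out = out * (out == out.max())           # start relevance at the top class
r_hidden = lrp_epsilon(w2, h, r_out)
r_input = lrp_epsilon(w1, x, r_hidden)
# Total relevance is (approximately) conserved across layers:
print(r_out.sum(), r_hidden.sum(), r_input.sum())
```

The per-input relevance values are what, aggregated over brain voxels, single out regions such as the insula or thalamus as contributing to the network's decision.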
146.
Fusing DL Reasoning with HTN Planning as a Deliberative Layer in Mobile Robotics. Hartanto, Ronny. 08 March 2010
Action planning has been used in the field of robotics for solving long-running tasks. In the field of robot architectures, it is also known as the deliberative layer. However, there is still a gap between the symbolic representation on the one hand and the low-level control and sensor representation on the other. In addition, the definition of a planning problem for a complex, real-world robot is not trivial, and the planning process can become intractable as its search space grows large. Since the defined planning problem determines the complexity and the tractability of solving it, it should contain only relevant states. In this work, a novel approach is introduced which amalgamates Description Logic (DL) reasoning with Hierarchical Task Network (HTN) planning.
The planning domain description as well as fundamental HTN planning concepts are represented in DL and can therefore be subject to DL reasoning; from these representations, concise planning problems are generated for HTN planning. The method is presented through an
example in the robot navigation domain. In addition, a case study of the RoboCup@Home domain is given. As proof of concept, a well-known planning problem that often serves as a benchmark, namely that of the blocks-world, is modeled and solved using this approach.
An analysis of the performance of the approach has been conducted and the results show that this approach yields significantly smaller planning problem descriptions than those generated by current representations in HTN planning.
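To make the HTN side concrete, the following is a deliberately minimal SHOP-style decomposition sketch for a toy blocks-world task: compound tasks are expanded by methods into primitive actions until an executable sequence is found. It contains none of the DL reasoning or problem-generation machinery of the thesis, and all task and method names are made up.

```python
# Minimal HTN-style planner sketch (illustrative, not the thesis system).
def plan(state, tasks):
    """Depth-first HTN decomposition; returns a list of primitive actions."""
    if not tasks:
        return []
    task, *rest = tasks
    if task[0] in actions:                        # primitive task: execute
        new_state = actions[task[0]](state, *task[1:])
        if new_state is None:
            return None                           # precondition failed
        tail = plan(new_state, rest)
        return None if tail is None else [task] + tail
    for method in methods.get(task[0], []):       # compound task: decompose
        subtasks = method(state, *task[1:])
        if subtasks is not None:
            result = plan(state, subtasks + rest)
            if result is not None:
                return result
    return None

# Primitive actions for a tiny blocks world; state maps block -> position.
def pickup(state, b):
    if state.get(b) == "table" and state.get("holding") is None:
        s = dict(state); s[b] = "hand"; s["holding"] = b; return s

def putdown(state, b, dest):
    if state.get("holding") == b:
        s = dict(state); s[b] = dest; s["holding"] = None; return s

actions = {"pickup": pickup, "putdown": putdown}
# One method: "move b to dest" decomposes into pickup followed by putdown.
methods = {"move": [lambda state, b, dest: [("pickup", b), ("putdown", b, dest)]]}

state0 = {"A": "table", "B": "table", "holding": None}
print(plan(state0, [("move", "A", "B")]))
```

The thesis's contribution sits upstream of such a planner: DL reasoning prunes the domain so that the state dictionary and method set handed to the HTN search stay small.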
147.
Multi-wavelength laser line profile sensing for agricultural applications. Strothmann, Wolfram. 03 November 2016
This dissertation elaborates on the novel sensing approach of multi-wavelength laser line profiling (MWLP), a sensor concept that expands on the well-known and broadly adopted laser line profile sensing concept for triangulation-based range imaging. The MWLP concept does not use just one line laser but multiple line lasers at different wavelengths, scanned by a single monochrome imager. Moreover, it collects not only the 3D distance values; the reflection intensity and backscattering of the laser lines are evaluated as well. The system collects spectrally selective image-based data in an active manner and can thus be geared toward an application-specific wavelength configuration by mounting a set of lasers of the required wavelengths. Consequently, image-based 3D range data can be collected along with reflection intensity and backscattering data at multiple, selectable wavelengths using just a single monochrome image sensor. Starting from a basic draft of the idea, the approach was realized in terms of hardware and software design and implementation. The approach was shown to be feasible, and the prototype performed well compared with other state-of-the-art sensor systems. The sensor raw data can be visualized and accessed as overlaid distance images, point clouds or meshes. Further, for selected example applications it was demonstrated that the sensor data gathered by the system can serve as descriptive input for real-world agricultural classification problems. The sensor data was classified in a pixel-based manner, which allows very flexible, quick and easy adaptation of the classification toward new field situations.
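The triangulation principle underlying laser line profile sensing can be stated in one line: the pixel displacement of the observed laser line is inversely proportional to distance. A small sketch with hypothetical camera parameters:

```python
# Sketch of the triangulation principle behind laser line profiling: a laser
# line observed by a camera offset by a baseline shifts in the image as a
# function of distance. All parameter values below are hypothetical.

def depth_from_offset(pixel_offset, focal_length_px, baseline_m):
    """Classic triangulation relation z = f * b / d, per image column."""
    return focal_length_px * baseline_m / pixel_offset

f_px = 800.0      # focal length in pixels
baseline = 0.10   # laser-camera baseline in metres

# Laser line detected 40 px from its reference position in one column:
print(depth_from_offset(40.0, f_px, baseline))   # 2.0 metres
```

In MWLP this relation is evaluated per column and per laser wavelength, while the line's brightness in the same image simultaneously yields the reflection intensity channel.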
148.
Transparent Object Reconstruction and Registration Confidence Measures for 3D Point Clouds based on Data Inconsistency and Viewpoint Analysis. Albrecht, Sven. 28 February 2018
A large number of current mobile robots use 3D sensors as part of their sensor setup. Common 3D sensors, i.e., laser scanners or RGB-D cameras, emit a signal (laser light or infrared light, for instance), and its reflection is recorded in order to estimate the depth to a surface. The resulting set of measurement points is commonly referred to as a 'point cloud'. The first part of this dissertation addresses an inherent problem of sensors that emit a light signal: these signals can be reflected and/or refracted by transparent or highly specular surfaces, causing erroneous or missing measurements. A novel heuristic approach is introduced by which such objects may nevertheless be identified and their size and shape reconstructed by fusing information from several viewpoints of the scene. In contrast to other existing approaches, no prior knowledge about the objects is required, nor is the shape of the reconstructed objects restricted to a limited set of geometric primitives. The thesis proceeds to illustrate problems caused by sensor noise and registration errors and introduces mechanisms to address them. Finally, a quantitative comparison between equivalent directly measured objects, the reconstructions and 'ground truth' is provided. The second part of the thesis addresses the problem of automatically determining the quality of the registration for a pair of point clouds. Although a different topic, the two problems are closely related when modeled in the fashion of this thesis. After illustrating why the output parameters of a popular registration algorithm (ICP) are not suitable for deducing registration quality, several heuristic measures are developed that provide better insight. Experiments on different datasets were performed to showcase the applicability of the proposed measures in different scenarios.
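As a point of reference for what a registration quality indicator can look like, the sketch below computes the mean nearest-neighbour residual between two point clouds. This is a naive baseline for illustration only, not one of the measures proposed in the thesis, and the example clouds are synthetic.

```python
import numpy as np

def mean_nn_residual(cloud_a, cloud_b):
    """Mean nearest-neighbour distance from cloud_a to cloud_b: a simple
    registration-quality indicator (smaller = better overlap).
    Brute force for clarity; a real system would use a k-d tree."""
    diffs = cloud_a[:, None, :] - cloud_b[None, :, :]   # (n, m, 3)
    dists = np.linalg.norm(diffs, axis=2)
    return dists.min(axis=1).mean()

rng = np.random.default_rng(0)
scene = rng.uniform(size=(200, 3))
good = scene + rng.normal(scale=0.001, size=scene.shape)  # well registered
bad = scene + np.array([0.2, 0.0, 0.0])                   # misaligned copy
print(mean_nn_residual(good, scene), mean_nn_residual(bad, scene))
```

A low residual is necessary but not sufficient for a good registration (partially overlapping clouds can score well while being badly aligned), which is one reason richer, viewpoint-aware measures are worth developing.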
149.
Relevance-based Online Planning in Complex POMDPs. Saborío Morales, Juan Carlos. 17 July 2020
Planning under uncertainty is a central topic at the intersection of disciplines such as artificial intelligence, cognitive science and robotics, and its aim is to enable artificial agents to solve challenging problems through a systematic approach to decision-making. Some of these challenges include generating expectations about different outcomes governed by a probability distribution and estimating the utility of actions based only on partial information. In addition, an agent must incorporate observations or information from the environment into its deliberation process and produce the next best action to execute, based on an updated understanding of the world. This process is commonly modeled as a POMDP, a discrete stochastic system that becomes intractable very quickly. Many real-world problems, however, can be simplified following cues derived from contextual information about the relative expected value of actions. Based on an intuitive approach to problem solving, and relying on ideas related to attention and relevance estimation, we propose a new approach to planning supported by our two main contributions: PGS grants an agent the ability to generate internal preferences and biases to guide action selection, and IRE allows the agent to reduce the dimensionality of complex problems while planning online. Unlike existing work that improves the performance of planning on POMDPs, PGS and IRE do not rely on detailed heuristics or domain knowledge, explicit action hierarchies or manually designed dependencies for state factoring. Our results show that this level of autonomy is important to solve increasingly more challenging problems, where manually designed simplifications scale poorly.
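The incorporation of observations into the deliberation process described above is the belief update at the heart of every POMDP solver: predict with the transition model, then reweight by the likelihood of the observation actually received. A minimal discrete sketch with made-up numbers:

```python
import numpy as np

def belief_update(belief, transition, observation_likelihood):
    """One POMDP belief update: predict with the transition model, then
    weight each state by the likelihood of the received observation."""
    predicted = transition.T @ belief
    posterior = observation_likelihood * predicted
    return posterior / posterior.sum()

# Two hidden states, hypothetical dynamics and sensor model:
b = np.array([0.5, 0.5])        # initial belief
T = np.array([[0.9, 0.1],       # row: current state, column: next state
              [0.2, 0.8]])
obs_lik = np.array([0.7, 0.1])  # P(observation | state)
print(belief_update(b, T, obs_lik))
```

Each belief point is itself a probability distribution, which is why the belief space grows so quickly and why relevance-based pruning of states and actions, as proposed here, pays off.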
150.
On Cognitive Aspects of Human-Level Artificial Intelligence. Besold, Tarek R. 26 January 2015
Following an introduction to the context of Human-Level Artificial Intelligence (HLAI) and (computational) analogy research, a formal analysis assessing and qualifying the suitability of the Heuristic-Driven Theory Projection (HDTP) analogy-making framework for HLAI purposes is presented. An account of the application of HDTP (and analogy-based approaches in general) to the study and computational modeling of conceptual blending is outlined, before a proposal and initial proofs of concept for the application of computational analogy engines to modeling and analysis questions in education studies, teaching research, and the learning sciences are described.
Subsequently, the focus is changed from analogy-related aspects in learning and concept generation to rationality as another HLAI-relevant cognitive capacity. After outlining the relation between AI and rationality research, a new conceptual proposal for understanding and modeling rationality in a more human-adequate way is presented, together with a more specific analogy-centered account and an architectural sketch for the (re)implementation of certain aspects of rationality using HDTP.
The methods and formal framework used for the initial analysis of HDTP are then applied for proposing general guiding principles for models and approaches in HLAI, together with a proposal for a formal characterization grounding the notion of heuristics as used in cognitive and HLAI systems as additional application example.
Finally, work is reported trying to clarify the scientific status of HLAI and participating in the debate about (in)adequate means for assessing the progress of a computational system towards reaching (human-level) intelligence.
Two main objectives are achieved: Using analogy as starting point, examples are given as inductive evidence for how a cognitively-inspired approach to questions in HLAI can be fruitful by and within itself. Secondly, several advantages of this approach also with respect to overcoming certain intrinsic problems currently characterizing HLAI research in its entirety are exposed. Concerning individual outcomes, an analogy-based proposal for theory blending as special form of conceptual blending is exemplified; the usefulness of computational analogy frameworks for understanding learning and education is shown and a corresponding research program is suggested; a subject-centered notion of rationality and a sketch for how the resulting theory could computationally be modeled using an analogy framework is discussed; computational complexity and approximability considerations are introduced as guiding principles for work in HLAI; and the scientific status of HLAI, as well as two possible tests for assessing progress in HLAI, are addressed.