About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11

Is this unit created in the image of God? : Artificial intelligence and Lutheran anthropology

Ahlberg, Erik January 2024 (has links)
In this study, the potential for artificially intelligent sapient life to be integrated into a Lutheran theological anthropology is investigated. The investigation proceeds by means of a reconstruction and reactualisation of Lutheran anthropology, applied to the hypothetical scenario of artificial general intelligences having been created. The study takes its roots in questions of how intelligent life made by human artifice would interact with the Lutheran narrative-relational imago Dei paradigm, and what room there is within the Lutheran framework to integrate such intelligent life. The analysis is threefold: the first chapter presents the basis within Lutheran theology on which the rest of the study rests, the second identifies core points of conflict that may arise were artificial life to be introduced, and the third proposes preliminary solutions to these. Although the study is and must be hypothetical-speculative in nature, the conclusion is reached that there seems to be some room for artificial intelligences to be integrated into a Lutheran way of understanding the imago Dei paradigm, albeit with some lingering issues that can hardly be solved entirely before the actual dawn of artificial intelligence. Although some reservations remain, the study therefore points towards the possibility of future artificial intelligences being humanity's theological equals, and leaves it to future studies to reach a more elaborate understanding of what that means and implies in practice, both ethically and dogmatically.
12

Naturally Generated Decision Trees for Image Classification

Ravi, Sumved Reddy 31 August 2021 (has links)
Image classification has been a pivotal area of research in Deep Learning, with a vast body of literature working to tackle the problem, constantly striving to achieve higher accuracies. This push to achieve greater prediction accuracy, however, has further exacerbated the black-box phenomenon inherent to neural networks, and more so to CNN-style deep architectures. Likewise, it has led to the development of highly tuned methods, suitable only for specific data sets and requiring significant work to alter given new data. Although these models are capable of producing highly accurate predictions, we have little to no ability to understand the decision process a network takes to reach a conclusion. This poses a difficulty in use cases such as medical diagnostic tools or autonomous vehicles, which require insight into prediction reasoning to validate a conclusion or to debug a system. In essence, modern applications which utilize deep networks are able to learn to produce predictions, but lack interpretability and a deeper understanding of the data. Given this key point, we look to decision trees, opposite in nature to deep networks, with a high level of interpretability but a low capacity for learning. In our work we strive to merge these two techniques so as to maintain the capacity for learning while providing insight into the decision process. More importantly, we look to expand the understanding of class relationships through a tree architecture. Our ultimate goal in this work is to create a technique able to automatically build a visual-feature-based knowledge hierarchy for class relations, applicable broadly to any data set or combination thereof. We maintain these goals in an effort to move away from specialized systems and instead toward artificial general intelligence (AGI). AGI requires a deeper understanding over a broad range of information and, more so, the ability to learn new information over time.
In our work we embed networks of varying sizes and complexity within decision trees at the node level, where each node network is responsible for selecting the next branch path in the tree. Each leaf node represents a single class, and all parent and ancestor nodes represent groups of classes. We designed the method such that classes are reasonably grouped by their visual features, where parent and ancestor nodes represent hidden super-classes. Our work aims to introduce this method as a small step towards AGI, where class relations are understood through an automatically generated decision tree (representing a class hierarchy) capable of accurate image classification. / Master of Science / Many modern-day applications make use of deep networks for image classification. Often these networks are incredibly complex in architecture, and applicable only to specific tasks and data. Standard approaches use just a neural network to produce predictions. However, the internal decision process of the network remains a black box due to the nature of the technique. As more complex human-related applications, such as medical image diagnostic tools or autonomous driving software, are being created, they require an understanding of the reasoning behind a prediction. To provide this insight into the prediction reasoning, we propose a technique which merges decision trees and deep networks. Tested on the MNIST image data set, we were able to achieve an accuracy of over 99.0%. We also achieved an accuracy of over 73.0% on the CIFAR-10 image data set. Our method is found to create decision trees that are easily understood and are reasonably capable of image classification.
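The node-level routing described in the abstract can be sketched as a tree whose internal nodes each hold a small classifier that picks a branch, with leaves carrying final class labels. The sketch below is illustrative only: the class names, the single-statistic "routers" standing in for the thesis's node networks, and the interface are assumptions, not the thesis's implementation.

```python
# Minimal sketch of a decision tree with a classifier at each internal
# node. An image descends from the root; each node's router chooses a
# child until a leaf (single class) is reached.

class TreeNode:
    def __init__(self, label=None, children=None, router=None):
        self.label = label              # class label if this is a leaf
        self.children = children or []  # child nodes (class groups)
        self.router = router            # callable: image -> child index

    def classify(self, image):
        node = self
        while node.children:             # descend until a leaf
            branch = node.router(image)  # node "network" picks a branch
            node = node.children[branch]
        return node.label

# Toy routers: route by a pixel statistic (a stand-in for a trained CNN).
animals = TreeNode(children=[TreeNode(label="cat"), TreeNode(label="dog")],
                   router=lambda img: 0 if sum(img) < 10 else 1)
root = TreeNode(children=[animals, TreeNode(label="car")],
                router=lambda img: 0 if max(img) < 5 else 1)

print(root.classify([1, 1, 1]))  # routed root -> animals -> cat
```

Here "animals" plays the role of a hidden super-class: the root only decides the group, and a deeper node refines it to a single class.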
13

How Video Games Raise Awareness on the Technological Singularity

Brunet, Gabriel January 2024 (has links)
Most scientists agree that, by the year 2060, research on artificial intelligence will result in the creation of constructs that exceed human intellect, starting the most impactful event in our history: the technological singularity. Raising awareness of this concept is important, and video games have their role to play, but no research has been conducted on the design principles they should follow to succeed. This paper aims to rectify that and explain how games raise awareness of the systemic consequences of the technological singularity. To do so, a selection of games which explore singularity-related concepts were analyzed to establish how, and how well, they represent the theory, following design methodologies highlighted by other studies. This review provided the design principles necessary to create an awareness-raising game on the singularity, but also exposed how games fail in tackling the subject, as they do not clearly incorporate or discuss the singularity in their narrative. This research paper is helpful for researchers and game designers, as it provides them with the information necessary to analyze or create educational games on the systemic impacts of the singularity.
14

Longitudinal studies of executive and cognitive development after preterm birth

Lundequist, Aiko January 2012 (has links)
Stockholm Neonatal Project is a longitudinal population-based study of children born prematurely in 1988-93, with a very low birth weight (<1500 g), who have been followed prospectively from birth through adolescence. A matched control group was recruited at age 5 ½ years. The overall aim was to investigate long-term developmental outcome, paying particular attention to executive functions (EF) in relation to degree of prematurity, birth weight and medical risks. Study I showed a disadvantage in visuo-motor development at 5 ½ years, especially among the preterm boys. Visuo-motor skills were highly related to IQ, and also to EF. In Study II, neuropsychological profiles typical of preterm children and term born children, respectively, were identified through cluster analysis. The general level of performance corresponded well with IQ, motor functions and parental education in both groups, but preterm children had overall lower results and exhibited greater variability across domains. Study III showed that extremely preterm birth (w. 23-27) per se poses a risk for cognitive outcome at age 18, particularly for EF, and that perinatal medical complications add to the risk. By contrast, adolescents born very preterm (w. 28-31) performed just as well as term-born controls in all cognitive domains. However, adolescents born moderately preterm (w. 32-36) and small for gestational age showed general cognitive deficits. Study IV found that cognitive development was stable over time, with parental education and EF at 5 ½ years as significant predictors for cognitive outcome at age 18. Among preterm children, perinatal medical risks and being small for gestational age had a continued negative impact on cognitive development from 5 ½ to 18 years. Study V demonstrated that neuropsychological scoring of Bender drawings, developed in study I, predicted cognitive outcome in adolescence, indicating that the method may be useful in developmental screening around school entry.
/ At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 3: Manuscript. Paper 4: Manuscript. Paper 5: Submitted.
15

Development of matrices abstract reasoning items to assess fluid intelligence

Chan, Fiona January 2018 (has links)
Matrices reasoning tests, in which participants attempt to figure out the missing piece of a matrix, are one of the most popular types of tests for measuring general intelligence. This thesis introduces several methods to develop matrices items, and presents them in different test forms to assess general intelligence. Part 1 introduces the development of a matrices test with reference to Carpenter's five rules of Raven's Progressive Matrices. The test items developed were administered together with the Standard Raven's Progressive Matrices (SPM). Results based on confirmatory factor analysis and inter-item correlation demonstrate good construct validity and reliability. Item characteristics are explored with Item Response Theory (IRT) analyses. Part 2 introduces the development of a large item bank with multiple alternatives for each SPM item, with reference to the item components of the original SPM. Results showed satisfactory test validity and reliability when using the alternative items in a test. Findings also support the hypothesis that the combination of item components accounts for item difficulty. The work lays the foundation for the future development of computer-adaptive versions of Raven's Progressive Matrices. Part 3 introduces the development of an automatic matrix item generator and illustrates the results of analyses of items generated using the distribution-of-three rule. Psychometric properties of the generated items are explored to support the validity of the generator. Figural complexity, features, and the frequency at which certain rules were used are discussed to account for item difficulty. Results of further analyses exploring the underlying factors of generated-item difficulty are presented and discussed. The suggested factors explain a substantial amount of the variance in item difficulty, but are insufficient to predict it.
Adaptive on-the-fly item generation is yet to be possible for the test at this stage. Overall, the methods for creating matrices reasoning tests introduced in the dissertation provide a useful reference for research on abstract reasoning and fluid intelligence measurements. Implications for other areas of psychometric research are also discussed.
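As an illustration of the distribution-of-three rule used by such generators, the sketch below builds a toy 3x3 matrix item in which each attribute value appears exactly once per row and per column (a Latin square), with the bottom-right cell removed as the missing piece. The function name and representation are hypothetical, not taken from the thesis's actual generator.

```python
import random

# Toy distribution-of-three item generator: cyclic shifts of a shuffled
# first row yield a 3x3 Latin square over the attribute values; the
# bottom-right cell is blanked as the piece the test-taker must infer.

def distribution_of_three(values, rng=random):
    base = list(values)
    rng.shuffle(base)
    # row r, column c holds base[(r + c) % 3] -> each value once per
    # row and once per column
    matrix = [[base[(row + col) % 3] for col in range(3)] for row in range(3)]
    answer = matrix[2][2]
    matrix[2][2] = None   # the missing piece
    return matrix, answer

matrix, answer = distribution_of_three(["circle", "square", "triangle"])
for row in matrix:
    print(row)
```

A real generator varies several attributes at once (shape, shading, size) and combines rules, which is where the difficulty factors discussed above come in; this sketch shows only the single-rule core.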
16

A general purpose artificial intelligence framework for the analysis of complex biological systems

Kalantari, John I. 15 December 2017 (has links)
This thesis encompasses research on Artificial Intelligence in support of automating scientific discovery in the fields of biology and medicine. At the core of this research is the ongoing development of a general-purpose artificial intelligence framework emulating various facets of human-level intelligence necessary for building cross-domain knowledge that may lead to new insights and discoveries. To learn and build models in a data-driven manner, we develop a general-purpose learning framework called Syntactic Nonparametric Analysis of Complex Systems (SYNACX), which uses tools from Bayesian nonparametric inference to learn the statistical and syntactic properties of biological phenomena from sequence data. We show that the models learned by SYNACX offer performance comparable to that of standard neural network architectures. For complex biological systems or processes consisting of several heterogeneous components with spatio-temporal interdependencies across multiple scales, learning frameworks like SYNACX can become unwieldy due to the resultant combinatorial complexity. Thus we also investigate ways to robustly reduce data dimensionality by introducing a new data abstraction. In particular, we extend traditional string and graph grammars in a new modeling formalism which we call Simplicial Grammar. This formalism integrates the topological properties of the simplicial complex with the expressive power of stochastic grammars in a computational abstraction with which we can decompose complex system behavior into a finite set of modular grammar rules which parsimoniously describe the spatial/temporal structure and dynamics of patterns inferred from sequence data.
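To make the grammar idea concrete, the toy below shows a stochastic string grammar in the general spirit described above (it is not SYNACX or Simplicial Grammar): a small set of probabilistic rewrite rules parsimoniously describes the whole family of sequences it can generate. The rule set and probabilities are invented for illustration.

```python
import random

# Toy stochastic grammar: S -> A S (p=0.5) | empty (p=0.5); A -> "a" "b".
# Two compact rules describe every sequence of the form (ab)^n.
RULES = {
    "S": [(0.5, ["A", "S"]), (0.5, [])],
    "A": [(1.0, ["a", "b"])],
}

def generate(symbol, rng):
    if symbol not in RULES:            # terminal symbol: emit it
        return [symbol]
    r, acc = rng.random(), 0.0
    for p, expansion in RULES[symbol]: # sample an expansion by weight
        acc += p
        if r <= acc:
            return [t for s in expansion for t in generate(s, rng)]
    return []

print("".join(generate("S", random.Random(3))))
```

The parsimony argument runs in the other direction too: parsing an observed sequence back into rule applications compresses it into a short structural description, which is the sense in which a grammar "describes" patterns in data.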
17

Utilizing Cross-Domain Cognitive Mechanisms for Modeling Aspects of Artificial General Intelligence

Abdel-Fattah, Ahmed M. H. 31 March 2014 (has links)
In this era of increasingly rapid availability of resources of all kinds, a widespread need to characterize, filter, use, and evaluate what could be necessary and useful becomes a crucially vital everyday task. Neither research in artificial intelligence (AI) nor in cognitive science (CogSci) is an exception (let alone work at the crossing of both paths). A promised goal of AI was to focus primarily on the study and design of intelligent artifacts that show aspects of human-like general intelligence (GI), that is, facets of intelligence similar to those exhibited by human beings in solving problems related to cognition. However, the focus on achieving AI's original goal has become scattered over time. The initial ambitions of the 1960s and 1970s had grown by the 1980s into an "industry", where not only researchers and engineers but also entire companies developed AI technologies and built specialized hardware. The result is that technology has afforded us many devices that allegedly work like humans, though they can at best be considered life facilitators. This is mainly due, I propose, to basic changes in views of which true essences of intelligence should be considered within scientific research when modeling systems with GI capacities. A modern scientific approach to achieving AI by simulating cognition is mainly based on representations and implementations of higher cognition in artificial systems. Such systems are essentially designed with the intention of endowing them with a "human-like" level of GI, so that their functionalities are supported by results (and solution methodologies) from many cognitive scientific disciplines.
In classical AI, only a few attempts have been made to integrate forms of higher cognitive abilities in a uniform framework that models, in particular, cross-domain reasoning abilities and solves baffling cognition problems, the kind of problems that only a cognitive being (endowed with traits of GI) could solve. Unlike classical AI, the intersection between the recent research disciplines of artificial general intelligence (AGI) and CogSci is promising in this regard. The new direction is mostly concerned with studying, modeling, and computing AI capabilities that simulate facets of GI and the functioning of higher cognitive mechanisms. Hence, the focus in this thesis is on examining general problem-solving capabilities of cognitive beings that are both "human-comparable" and "cognitively inspired", in order to contribute to answering two substantial research questions. The first asks whether it is still necessary to model higher cognitive abilities in models of AGI; the second asks about the possibility of utilizing cognitive mechanisms to enable cognitive agents to demonstrate clear signs of human-like (general) intelligence. Solutions to cross-domain reasoning problems (which characterize human-like thinking) need to be modeled in a way that reflects the essences of cognition and the GI of the reasoner. This can be achieved (among other ways) by utilizing cross-domain, higher cognitive mechanisms. Examples of such mechanisms include analogy-making and concept blending (CB), both active areas of recent research in cognitive science, though not enough attention has been given to the rewards and benefits gained when they interact. A basic claim of the thesis is that several aspects of a human-comparable level of GI are based on forms of (cross-domain) representations and (creative) productions of conceptions. The thesis shows that computing these aspects within AGI-based systems is indispensable for their modeling.
In addition, these aspects can be modeled by employing certain cognitive mechanisms. The specific mechanisms most relevant to the current text are the computation of generalizations (i.e. abstractions) using analogy-making (i.e. transferring a conceptualization from one domain into another) and CB (i.e. merging parts of the conceptualizations of two domains into a new domain). Several ideas are presented and discussed in the thesis to support this claim, by showing how the utilization of these mechanisms can be modeled within a logic-based framework. The framework used is Heuristic-Driven Theory Projection (HDTP), which can model solutions to a concrete set of cognition problems (including creativity, rationality, noun-noun combinations, and the analysis of counterfactual conditionals). The resulting contributions may be considered a necessary, though by no means sufficient, step towards achieving intelligence on a human-comparable scale in AGI-based systems. The thesis thus fills an important gap in models of AGI, because computing intelligence on a human-comparable scale (which is, indeed, an ultimate goal of AGI) requires modeling solutions to, in particular, the aforementioned problems.
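The intuition behind concept blending can be shown with a deliberately tiny toy, far simpler than HDTP: two "domains" are attribute maps, and a blend keeps selected attributes of each. The houseboat example is a classic from the blending literature; all names and the representation here are illustrative assumptions, not HDTP's formalism.

```python
# Toy concept blending: merge chosen attributes of two input concepts
# into a new blended concept (dicts stand in for domain theories).

def blend(concept_a, concept_b, keep_a, keep_b):
    blended = {k: concept_a[k] for k in keep_a}
    blended.update({k: concept_b[k] for k in keep_b})
    return blended

house = {"function": "dwelling", "medium": "land", "mobility": "static"}
boat = {"function": "transport", "medium": "water", "mobility": "mobile"}

# "houseboat": the function of a house, the medium and mobility of a boat
houseboat = blend(house, boat, keep_a=["function"], keep_b=["medium", "mobility"])
print(houseboat)
```

The hard part HDTP addresses, and this toy does not, is deciding *which* parts to keep: that selection is driven by a computed generalization (analogy) between the two domains rather than hand-picked keys.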
18

Predicate Calculus for Perception-led Automata

Byrne, Thomas J. January 2023 (has links)
Artificial Intelligence is a fuzzy concept. My role, as I see it, is to put down a working definition, a criterion, and a set of assumptions to set up equations for a workable methodology. This research introduces the notion of Artificial Intelligent Agency, denoting the application of Artificial General Intelligence. The problem being handled by mathematics and logic, and only thereafter semantics, is Self-Supervised Machine Learning (SSML) towards Intuitive Vehicle Health Management, in the domain of cybernetic-physical science. The present work stems from a broader engagement with a major multinational automotive OEM, where Intelligent Vehicle Health Management will dynamically choose suitable variants only to realise predefined variation points. Physics-based models infer properties of a model of the system, not properties of the implemented system itself. The validity of their inference depends on the models’ degree of fidelity, which is always an approximate localised engineering abstraction. In sum, people are not very good at establishing causality. To deduce new truths from implicit patterns in the data about the physical processes that generate the data, the kernel of this transformative technology is the intersystem architecture, occurring in-between and involving the physical and engineered system and the construct thereof, through the communication core at their interface. In this thesis it is shown that the most practicable way to establish causality is by transforming application models into actual implementation. The hypothesis being that the ideal source of training data for SSML, is an isomorphic monoid of indexical facts, trace-preserving events of natural kind.
19

Vztah mezi obecným inteligenčním faktorem g, širokými kognitivními schopnostmi a pracovní pamětí / The relationship between the general intelligence factor g, broad cognitive abilities and working memory

Čeplová, Zuzana January 2011 (has links)
This thesis deals with the relationships between Working Memory, Working Memory Span tasks, the general factor g, and Broad cognitive abilities. The measured constructs are introduced in the theoretical part, together with their evolution, various methods of their measurement, and studies investigating the relations between them. The empirical part of the research was conducted to verify the relationship between Working Memory and the general intelligence factor g, and to reveal the relationship between Working Memory Span tasks and Broad cognitive abilities. The question of whether the use of strategy influences performance on the automatic version of Working Memory Span tasks was investigated as well.
20

Detektering av fusk vid användning av AI : En studie av detektionsmetoder / Detection of cheating when using AI : A study of detection methods

Ennajib, Karim, Liang, Tommy January 2023 (has links)
Denna rapport analyserar och testar olika metoder som syftar till att särskilja mänskligt genererade lösningar på uppgifter och texter från de som genereras av artificiell intelligens. På senare tid har användningen av artificiell intelligens sett en betydande ökning, särskilt bland studenter. Syftet med denna studie är att avgöra om det för närvarande är möjligt att upptäcka fusk från högskolestudenter inom elektroteknik som använder sig av AI. I rapporten testas lösningar på uppgifter och texter genererade av programmet ChatGPT med hjälp av en generell metod och externa AI-verktyg. Undersökningen omfattar områdena matematik, programmering och skriven text. Resultatet av undersökningen tyder på att det inte är möjligt att upptäcka fusk med hjälp av AI i ämnena matematik och programmering. Dock när det gäller text kan i viss utsträckning fusk vid användning av en AI upptäckas. / This report analyzes and tests various methods aimed at distinguishing human-generated solutions to tasks and texts from those generated by artificial intelligence. Recently, the use of artificial intelligence has seen a significant increase, especially among students. The purpose of this study is to determine whether it is currently possible to detect if a college student in electrical engineering is using AI to cheat. In this report, solutions to tasks and texts generated by the program ChatGPT are tested using a general methodology and external AI-based tools. The research covers the areas of mathematics, programming and written text. The results of the investigation suggest that it is not possible to detect cheating with the help of an AI in the subjects of mathematics and programming. In the case of text, cheating by using an AI can be detected to some extent.
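For readers unfamiliar with how such detectors work, the sketch below shows one crude stylometric signal of the kind text-detection tools build on (this is not the methodology or the tools used in the study above): machine-generated text often shows lower variance in sentence length ("burstiness") than human writing. The function name and the signal's usefulness as a standalone detector are illustrative assumptions.

```python
import re
import statistics

# Burstiness as sentence-length variation: split text into sentences,
# count words per sentence, and return the standard deviation. A very
# uniform text (a weak hint of machine generation) scores low.

def burstiness(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0   # not enough sentences to measure variation
    return statistics.stdev(lengths)

human = "Short one. Then a much, much longer rambling sentence follows here. Ok."
print(burstiness(human))
```

Real detectors combine many such signals (and model-based perplexity scores), and even then, as the study found, reliability is limited outside plain prose.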
