81

Towards the Prevention of Dyslexia

Geiger, Gadi, Amara, Domenic G 18 October 2005 (has links)
Previous studies have shown that dyslexic individuals who supplement windowed reading practice with intensive small-scale hand-eye coordination tasks exhibit marked improvement in their reading skills. Here we examine whether similar hand-eye coordination activities, in the form of artwork performed by children in kindergarten, first and second grades, could reduce the number of students at risk of reading problems. Our results suggest that daily hand-eye coordination activities significantly reduce the number of students at risk. We believe that the effectiveness of these activities derives from their ability to prepare the students perceptually for reading.
82

Asymptotics of Gaussian Regularized Least-Squares

Lippert, Ross, Rifkin, Ryan 20 October 2005 (has links)
We consider regularized least-squares (RLS) with a Gaussian kernel. We prove that if we let the Gaussian bandwidth $\sigma \rightarrow \infty$ while letting the regularization parameter $\lambda \rightarrow 0$, the RLS solution tends to a polynomial whose order is controlled by the relative rates of decay of $\frac{1}{\sigma^2}$ and $\lambda$: if $\lambda = \sigma^{-(2k+1)}$, then, as $\sigma \rightarrow \infty$, the RLS solution tends to the $k$th order polynomial with minimal empirical error. We illustrate the result with an example.
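As an aside, the finite-sample computation behind this result can be sketched directly: under one common convention, the RLS coefficients solve $(K + \lambda n I)c = y$, where $K$ is the Gaussian kernel matrix. The Python sketch below is not from the paper; the toy data, the regularizer convention, and the chosen $(\sigma, \lambda)$ pairs are illustrative assumptions, meant only to show how the fit behaves when $\sigma$ grows while $\lambda = \sigma^{-(2k+1)}$ shrinks.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-||X[i] - Z[j]||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Z**2, axis=1)[None, :] - 2 * X @ Z.T
    return np.exp(-sq / (2.0 * sigma**2))

def rls_fit(X, y, sigma, lam):
    """Solve Gaussian-kernel RLS, c = (K + lam * n * I)^{-1} y (one common convention)."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def rls_predict(X_train, c, X_test, sigma):
    """Evaluate f(x) = sum_i c_i k(x, x_i) at the test points."""
    return gaussian_kernel(X_test, X_train, sigma) @ c

# Toy illustration: let sigma grow while lambda = sigma^{-(2k+1)} shrinks (here k = 1).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=30)
grid = np.linspace(-1, 1, 5)[:, None]
for sigma in (2.0, 5.0, 20.0):
    k = 1
    lam = sigma ** -(2 * k + 1)
    c = rls_fit(X, y, sigma, lam)
    print(f"sigma={sigma:5.1f}  predictions: {np.round(rls_predict(X, c, grid, sigma), 3)}")
```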
83

Joint intentions as a model of multi-agent cooperation in complex dynamic environments

Jennings, Nick R. January 1992 (has links)
Computer-based systems are being used to tackle increasingly complex problems in ever more demanding domains. The size and amount of knowledge needed by such systems means they are becoming unwieldy and difficult to engineer into reliable, consistent products. One paradigm for overcoming this barrier is to decompose the problem into smaller, more manageable components which can communicate and cooperate at the level of sharing processing responsibilities and information. Until recently, research in multi-agent systems has been based on ad hoc models of action and interaction; however, the notion of intentions is beginning to emerge as a prime candidate upon which a sound theory could be based. This research develops a new model of joint intentions as a means of describing the activities of groups of agents working collaboratively. The model stresses the role of intentions in controlling agents' current and future actions: it defines preconditions which must be satisfied before joint problem solving can commence and prescribes how individual agents should behave once it has been established. Such a model becomes especially important in dynamic environments in which agents may possess neither complete nor correct beliefs about their world or other agents, may have changeable goals and fallible actions, and may be subject to interruption from external events. The theory has been implemented in a general purpose cooperation framework, called GRATE*, and applied to the real-world problem of electricity transportation management. In this application, individual problem solvers have to take decisions using partial, imprecise information and respond to an ever changing external world. This fertile environment enabled the quantitative benefits of the theory to be assessed and comparisons with other models of collaborative problem solving to be undertaken. These experiments highlighted the high degree of coherence attained by GRATE* problem solving groups, even in the most dynamic and unpredictable application contexts.
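To make the shape of such a model concrete, the sketch below is a loose, illustrative Python rendering; it is not the thesis's formalism or the GRATE* data structures, and the class, field and message names are invented. A joint intention records the shared goal and each member's commitment, its preconditions require every member to have committed before joint problem solving commences, and the prescribed behaviour obliges an agent that privately concludes the goal is achieved, unachievable or irrelevant to inform the rest of the team rather than silently dropping out.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class GoalStatus(Enum):
    ACTIVE = auto()
    ACHIEVED = auto()
    UNACHIEVABLE = auto()
    IRRELEVANT = auto()

@dataclass
class JointIntention:
    """Illustrative joint-intention record (not the GRATE* implementation)."""
    goal: str
    members: set
    commitments: dict = field(default_factory=dict)   # agent -> has committed?
    status: GoalStatus = GoalStatus.ACTIVE

    def preconditions_hold(self) -> bool:
        # Joint problem solving may only commence once every member has
        # explicitly committed to the shared goal.
        return all(self.commitments.get(a, False) for a in self.members)

    def on_private_belief(self, agent: str, status: GoalStatus, broadcast) -> None:
        # Prescribed behaviour: an agent that privately concludes the goal is
        # achieved, unachievable or irrelevant must make this mutually known.
        if status is not GoalStatus.ACTIVE:
            self.status = status
            broadcast(sender=agent, goal=self.goal, status=status)

# Usage sketch
ji = JointIntention(goal="restore-line-7", members={"A1", "A2"})
ji.commitments = {"A1": True, "A2": True}
assert ji.preconditions_hold()
ji.on_private_belief("A1", GoalStatus.ACHIEVED,
                     broadcast=lambda **msg: print("inform team:", msg))
```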
84

Improving Companion AI Behavior in MimicA

Toy, Daniel 01 December 2017 (has links)
Companion characters are an important aspect of video games and appear in many different genres. Their role is typically to support the player as they progress through the game by helping to complete tasks or assisting in combat. However, these companion characters are often limited in their ability to dynamically react to new situations and fail to properly assist the player. In this paper, we present a solution by improving upon the MimicA framework, which allows companion characters to emulate the human player. The framework takes a learn-by-observation approach, storing the game state whenever the player performs an action. This record is then used by machine learning classifiers to determine what action the companion should take and where it should be done. Because the framework makes few assumptions about the rules of the game and focuses on a single-session experience, it is flexible enough to apply to a variety of different games and requires no prior training data. We have further improved the original MimicA framework by adding feature selection, n-gram analysis, an improved feedback system, a random forest classifier, and a new system for picking a location for actions. In addition, we refactored and updated the original framework to make it easier for game developers to use, along with the game, Lord of Towers, which was used as a proof of concept. Further, we created another game, Lord of Caves, to demonstrate the flexibility of the new version of the framework. We validated our work using automated simulations and a user study. In our automated simulations, we found that random forest was a consistently strong performer. Our user study found that our implementation of n-grams was successful, and 19 of 26 participants believed our framework would be useful to a game developer.
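The learn-by-observation loop described above can be illustrated with a small sketch. The snippet below is not the MimicA code; the feature names, action labels and toy observations are assumptions, and scikit-learn's random forest stands in for the framework's classifier. It shows the basic pipeline: record (game state, player action) pairs during play, train a classifier, then query it with the current state to choose the companion's action. The framework's location picker and n-gram analysis of recent action sequences would sit on top of the same recorded data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each observation pairs a snapshot of the game state with the action the
# player took in that state. The feature names are invented for illustration.
observations = [
    # [player_hp, enemy_count, resources, dist_to_base], action
    ([0.90, 2, 50, 3.0], "attack"),
    ([0.40, 5, 10, 1.0], "retreat"),
    ([1.00, 0, 80, 0.5], "build_tower"),
    ([0.70, 3, 30, 2.0], "attack"),
    ([0.30, 4,  5, 1.5], "retreat"),
    ([0.95, 0, 60, 0.8], "build_tower"),
]

X = np.array([state for state, _ in observations])
y = np.array([action for _, action in observations])

# A random forest learns the mapping from observed game state to player action.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At runtime the companion queries the model with the current game state.
current_state = np.array([[0.85, 1, 45, 2.5]])
print("companion action:", clf.predict(current_state)[0])
```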
85

the Next Library

Daitha, Maithreyi 24 July 2023 (has links)
In a world where knowledge is the driving force behind human progress, it becomes imperative to understand the intricate dynamics of its creation, preservation, and distribution. This architectural thesis delves into the essence of knowledge and aims to unravel the profound meaning behind these fundamental aspects. By examining the Great Library of Alexandria as a symbol of global knowledge and fragility, we embark on a transformative journey. The thesis investigates the nature of knowledge itself, posing essential questions about its essence and significance. What does knowledge truly represent, and how do we acknowledge its value in our lives? Through a comprehensive exploration, we aim to comprehend the creation of knowledge and its transformative potential in various domains. Furthermore, the Great Library of Alexandria stands as a compelling symbol of fragility, emphasizing the delicate nature of the artifacts we create. This iconic institution serves as a poignant reminder of the impermanence that surrounds human achievements. By studying the library's historical significance, architectural intricacies, and ultimate demise, we gain profound insights into the precarious nature of preserving knowledge. By embarking on this journey, we seek to understand not only the importance of preserving knowledge but also the means to achieve effective preservation. Ultimately, this research aims to use AI text-to-image tools (Midjourney) and traditional architectural inquiry methods to deepen our appreciation for the vast wealth of knowledge we have generated and to highlight the responsibility we bear in safeguarding and sharing it. By understanding the fragility of knowledge, we can foster a collective consciousness that recognizes the transformative power of knowledge. / Master of Architecture / Knowledge propels human progress, shaping our world in remarkable ways. In this thesis, we embark on a transformative exploration of the creation, preservation, and distribution of knowledge, unraveling its profound meaning. Our investigation centers around the Great Library of Alexandria, a symbol of global knowledge and fragility. We delve into the very nature of knowledge, posing essential questions about its essence and significance. What does knowledge truly represent, and how does it enrich our lives? Through a comprehensive exploration, we aim to understand the creation of knowledge and its potential to transform various domains. Moreover, the Great Library of Alexandria serves as a poignant symbol of fragility, highlighting the delicate nature of human achievements. By studying its historical significance, architectural intricacies, and eventual demise, we gain profound insights into the precarious task of preserving knowledge. Our journey goes beyond mere preservation; it seeks to uncover effective means of safeguarding knowledge. By understanding the importance of preserving knowledge, we can nurture a collective consciousness that recognizes its transformative power. Ultimately, this research aims to deepen our appreciation for the vast wealth of knowledge we have generated and emphasizes our responsibility to protect and share it. Through an understanding of knowledge's fragility, we can foster a society that values and harnesses its transformative potential for the betterment of all.
86

QUANTIFYING TRUST IN DEEP LEARNING WITH OBJECTIVE EXPLAINABLE AI METHODS FOR ECG CLASSIFICATION / EVALUATING TRUST AND EXPLAINABILITY FOR DEEP LEARNING MODELS

Siddiqui, Mohammad Kashif 11 1900 (has links)
Trustworthiness is a roadblock to mass adoption of artificial intelligence (AI) in medicine. This thesis developed a framework to explore trustworthiness as it applies to AI in medicine with respect to common stakeholders in medical device development. Within this framework, the element of explainability of AI models was explored by evaluating explainable AI (XAI) methods. In the current literature, a wide range of XAI methods is available that provide a variety of insights into the learning and function of AI models. XAI methods provide a human-readable output for the AI's learning process. These XAI methods tend to be bespoke and provide very subjective outputs with varying degrees of quality. Currently, there are no metrics or methods for objectively evaluating XAI outputs against outputs from different types of XAI methods. This thesis presents a set of constituent elements (similarity, stability and novelty) to explore the concept of explainability and then presents a series of metrics to evaluate those constituent elements, thus providing a repeatable and testable framework to evaluate XAI methods and their generated explanations. This is accomplished by presenting subject matter expert (SME) annotated ECG signals (time-series signals), represented as images, to AI models and XAI methods. A small subset of the available XAI methods, Vanilla Saliency, SmoothGrad, GradCAM and GradCAM++, were used to generate XAI outputs for a VGG-16 based deep learning classification model. The framework provides insights into the explanations an XAI method generates for the AI and how closely that learning corresponds to SME decision making. It also objectively evaluates how closely explanations generated by any XAI method resemble outputs from other XAI methods. Lastly, the framework provides insights about possible novel learning done by the deep learning model beyond what was identified by the SMEs in their decision making. / Thesis / Master of Applied Science (MASc) / The goal of this thesis was to develop a framework for how trustworthiness can be improved for a variety of stakeholders in the use of AI in medical applications. Trust was broken down into basic elements (Explainability, Verifiability, Fairness & Robustness) and 'Explainability' was further explored. This was done by determining how explainability (offered by XAI methods) can address the needs (Accuracy, Safety, and Performance) of stakeholders and how those needs can be evaluated. Methods of comparison (similarity, stability, and novelty) were developed that allow an objective evaluation of the explanations from various XAI methods using repeatable metrics (Jaccard, Hamming, Pearson Correlation, and TF-IDF). Combining the results of these measurements within the framework of trust works towards improving AI trustworthiness and provides a way to evaluate and compare the utility of explanations.
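As a rough illustration of how such metrics can compare two explanations, the sketch below computes Jaccard and Hamming scores on thresholded (binarized) saliency maps and a Pearson correlation on the raw values. It is not the thesis's pipeline: the synthetic maps, the quantile threshold and the omission of the TF-IDF novelty measure are simplifying assumptions.

```python
import numpy as np

def binarize(saliency, quantile=0.9):
    """Keep the top fraction of pixels (above the given quantile) as the 'salient' region."""
    return saliency >= np.quantile(saliency, quantile)

def jaccard(a, b):
    """Intersection-over-union of two binary salient regions."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def hamming(a, b):
    """Fraction of pixels on which the two binary maps disagree."""
    return np.mean(a != b)

def pearson(a, b):
    """Pearson correlation between the raw (unthresholded) saliency values."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Two synthetic saliency maps standing in for, e.g., GradCAM and SmoothGrad output.
rng = np.random.default_rng(0)
map_a = rng.random((224, 224))
map_b = 0.7 * map_a + 0.3 * rng.random((224, 224))   # partially overlapping explanation

bin_a, bin_b = binarize(map_a), binarize(map_b)
print(f"Jaccard: {jaccard(bin_a, bin_b):.3f}")
print(f"Hamming: {hamming(bin_a, bin_b):.3f}")
print(f"Pearson: {pearson(map_a, map_b):.3f}")
```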
87

Developing services based on Artificial Intelligence

Karlsson, Marcus January 2019 (has links)
This thesis explores the development process of services based on artificial intelligence (AI) technology within an industrial setting. There has been a renewed interest in the technology, and leading technology companies as well as many start-ups have integrated it into their market offerings. The technology's general application potential for enhancing products and services, along with the possibility of task automation for improved operational excellence, makes it a valuable asset for companies. However, the implementation rate of AI services is still low for many industrial actors. Research in the area has been dominated by technical work, with little contribution from other disciplines. The purpose of this thesis is therefore to identify development challenges of AI services and, drawing on service development and value theory, to propose a process framework that promotes implementation. The work has two main contributions: first, comparing differences between theoretical and practical development challenges, and second, combining AI with service development and value theory. The empirical research is done through a single case study based on a systematic combining research approach, moving iteratively between theory and empirical findings to direct and support the thesis throughout the work process. The data was collected through semi-structured interviews with a purposive sample consisting of two groups of interview participants: one AI expert group and one case-internal group. This was supported by participant observation of the case environment. The data analysis was done through flexible pattern matching. The results were divided into two sections, practical challenges and development aspects of AI service development. These were combined with the selected theories, and a process framework was generated. The study revealed a currently understudied area concerning business and organisational aspects of AI service development. Several such challenges were identified with limited theoretical research as support. For a wider industrial adoption of AI technology, more research is needed to understand its integration into the organisation. Further, sustainability and ethical aspects were found not to be a primary concern, being mentioned in only one of the interviews, despite the plethora of theory and identified risks found in the literature. Lastly, the interdisciplinary research approach was found to be beneficial for integrating the technology into an industrial setting. The developed framework could draw from existing service development models to help manage the identified challenges. / This thesis explores the development process of services based on artificial intelligence (AI) in an industrial setting. The technology has received renewed interest, which has led more and more leading technology companies and start-ups to integrate AI into their market offerings. The technology's general application potential for improving products and services, together with its automation potential for increased operational efficiency, makes it a valuable asset for companies. However, the implementation rate is still low for the majority of industrial actors. Research in the AI field has been strongly dominated by technical work, with few contributions from other research disciplines. This thesis therefore aims to identify development challenges with AI services and, by drawing on parts of service development and value theory, to generate a process framework that promotes implementation.
The thesis has two main research contributions: first, comparing differences between theoretical and practical development challenges, and second, combining AI with service development and value theory. The empirical research was carried out as a case study based on a systematic combining approach, in which the research moves iteratively between theory and empirical data to shape and support the thesis throughout the work. The data were collected through semi-structured interviews with two separate, purposively selected interview groups, one an AI expert group and the other an internal group for the case study. This was supported by participant observation in the case study environment. The data analysis was carried out with the flexible pattern matching method. The results were divided into two sections, the first covering practical challenges and the second covering development aspects of AI service development. These were combined with the selected theories to create a process framework. The thesis reveals an understudied area concerning business and organisation in relation to AI service development. Several such challenges were identified with limited support in the existing research literature. For a broader adoption of AI technology, more research is needed to understand how AI should be integrated into organisations. Further, sustainability and ethical aspects were not a primary concern in the results, being addressed in only one of the interviews despite the body of articles and identified risks in the literature. Lastly, the interdisciplinary approach was valuable for the AI field in better integrating the technology into an industrial setting. The developed process framework could build on existing service development models to manage the identified challenges.
88

Generalization over contrast and mirror reversal, but not figure-ground reversal, in an "edge-based

Riesenhuber, Maximilian 10 December 2001 (has links)
Baylis & Driver (Nature Neuroscience, 2001) have recently presented data on the response of neurons in macaque inferotemporal cortex (IT) to various stimulus transformations. They report that neurons can generalize over contrast and mirror reversal, but not over figure-ground reversal. This finding is taken to demonstrate that "the selectivity of IT neurons is not determined simply by the distinctive contours in a display, contrary to simple edge-based models of shape recognition", citing our recently presented model of object recognition in cortex (Riesenhuber & Poggio, Nature Neuroscience, 1999). In this memo, I show that the main effects of the experiment can be obtained by performing the appropriate simulations in our simple feedforward model. This suggests that the contributions to IT cell tuning of the explicit edge-assignment processes postulated in (Baylis & Driver, 2001) might be smaller than expected.
89

ARTIFICIAL INTELLIGENCE (AI) APPLICATIONS IN AUTOMATING CUSTOMER SERVICES AND EMPLOYEE SUPERVISION

Tong, Siliang, 0000-0002-1730-1075 January 2020 (has links)
Across two essays, I explore how artificial intelligence (AI) applications can help businesses automate customer service with deep learning-driven natural conversation and improve employee performance with work supervision. I apply machine learning methods such as audio analytics and text mining, as well as field experiments, to explore these new AI-driven capabilities in customer service and employee supervision automation. Substantively, this research tackles emerging business questions regarding how AI applications can assist customer purchases and employee job performance. In Essay One, I conduct two experiments to investigate when and how AI voicebots work or struggle in persuading customers relative to human agents. In Experiment 1, I apply audio analytics to extract agents' voice features (i.e., pitch, amplitude, and speed) and speech content (i.e., selling adaptivity). My analyses suggest two distinct routes to explain how agents' speech patterns account for their performance. Analyses in Experiment 2 demonstrate that, relative to human agents, AI bots can backfire and lead to worse performance when the customer persuasion task is more complex. In my second essay, I explore the coexistence of performance improvement and employee resistance to AI supervision. Specifically, I develop a novel two-by-two field experiment, which randomly assigns the AI or human supervision entity and either discloses the entity or not, to separate the economic gain from negative reactance to AI. In addition, I uncover the underlying mechanism by identifying employees' subjective bias toward the AI feedback quality and heightened fear of job replacement once they know the supervision entity is AI rather than a human manager. I propose two strategies to alleviate employees' resistance to AI supervision. / Business Administration/Marketing
90

Hanteringen av etiska dilemman : Vid implementeringen av AI-algoritmer

Koskelainen, Amanda, Höglund, David January 2024 (has links)
This study aims to examine how the people who create and implement AI algorithms relate to the ethical dilemmas they encounter in their work. The goal was to determine which ethical dilemmas exist, the difficulties and opinions connected to them, and how they can be handled. The study is based on 5 semi-structured qualitative interviews with IT architects whose daily work involves AI, together with a literature review. The literature review clarifies the concepts of AI, AI algorithms and ethics, presents established ethical principles related to AI, and describes the human relationship to AI and ethics. The interviews aim to shed light on what the work looks like in practice and to explain how ethical dilemmas can arise in the creation and implementation of AI algorithms; the results are analysed against the literature review. The results show that humans have an important but difficult role when it comes to taking ethics into account in their work. The ethical principles are all of great importance and interest, but they differ from one another in complexity and relevance. The people behind the machine are aware of the difficulties of their daily work and try, to the best of their ability, to apply all the ethical principles in it. It is not currently possible to follow the ethical principles fully. This creates a need for decision-making about the degree to which the technology's efficiency can be exploited without creating excessive ethical risks in society.
