1 |
A Semi-Automatic Grading Experience for Digital Ink Quizzes
Rhees, Brooke Ellen, 01 January 2017
Teachers who want to assess student learning and provide quality feedback face a challenge when trying to grade assignments quickly. There is currently no system that provides both a fast-to-grade quiz and a rich testing experience. Previous attempts to speed up grading include NLP-based text analysis to automate grading, and scanning in documents for manual grading with recyclable feedback. However, automated NLP systems focus solely on text-based problems, and manual grading is still linear in the number of students. Machine learning algorithms exist which can interactively train a computer to quickly classify digital ink strokes. We used stroke recognition and interactive machine learning concepts to build a grading interface for digital ink quizzes, allowing non-text open-ended questions that can then be semi-automatically graded. We tested this system on a Computer Science class with 361 students using a set of quiz questions which their teacher provided, evaluated its effectiveness, and determined some of its limitations. Adaptations to the interface and the training process, as well as further work to resolve intrinsic stroke perversity, are required to make this a truly effective system. However, using the system we were able to reduce grading time by as much as 10x for open-ended responses.
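As a loose illustration of the interactive-classification idea behind such semi-automatic grading (not the thesis's actual implementation), the sketch below trains a simple classifier on teacher-graded answers, auto-grades confident predictions, and routes uncertain ones back to the teacher; the stroke feature representation, the classifier, and the 0.8 confidence threshold are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def grade_interactively(features, label_fn, confidence=0.8, seed_size=10):
    """features: (n_answers, n_features) array of per-answer stroke features;
    label_fn(i) asks the teacher to grade answer i (e.g. 0 = wrong, 1 = correct)."""
    labels = np.full(len(features), -1)            # -1 means "not graded yet"
    for i in range(seed_size):                     # teacher grades a small seed set
        labels[i] = label_fn(i)                    # (assumed to contain both grades)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features[:seed_size], labels[:seed_size])
    for i in range(seed_size, len(features)):
        proba = clf.predict_proba(features[i:i + 1])[0]
        if proba.max() >= confidence:              # confident: accept the prediction
            labels[i] = clf.classes_[proba.argmax()]
        else:                                      # uncertain: ask the teacher, retrain
            labels[i] = label_fn(i)
            graded = labels != -1
            clf.fit(features[graded], labels[graded])
    return labels
```

The teacher only grades the seed set plus whatever the classifier is unsure about, which is where the reported reduction in grading time would come from.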
|
2 |
Personalization of home rehabilitation training by incorporating interactive machine learning into the design
Li, Yinchu, January 2022
Home rehabilitation training has become an important way for patients to recover and maintain their physical condition, given high health care costs and the limited supervision available in the clinic. Various technologies have been designed to assist rehabilitation training, but most of them cannot provide personalized feedback and support adapted to differences in patients' physical condition and movement capability. As part of a larger research project, this thesis explores what information a technology should provide to help personalize rehabilitation by incorporating interactive machine learning, which has been discussed as an effective tool in motion interaction design for building a conversation with the user and providing personalized information. A participatory design methodology, using bodystorming and role-playing in workshops, was applied to collect people's opinions on the role of technology, the design requirements, and how to present personalized feedback in rehabilitation training. The author collaborated with the research group to apply thematic analysis to the workshop videos and derived design spaces for future interaction design, including three roles for integrating technology, five design concepts, and a set of design takeaways for presenting feedback. Two interactive prototypes were envisioned based on the analysis results as an explorative design for incorporating the interplay between patients and machine learning into rehabilitation training.
|
3 |
Teaching robots social autonomy from in situ human supervision
Senft, Emmanuel, January 2018
Traditionally the behaviour of social robots has been programmed. However, increasingly there has been a focus on letting robots learn their behaviour to some extent from example or through trial and error. This on the one hand removes the need for programming, and on the other allows the robot to adapt to circumstances not foreseen at the time of programming. One such occasion is when the user wants to tailor or fully specify the robot's behaviour. The engineer often has limited knowledge of what the user wants or what the deployment circumstances specifically require. Instead, the user does know what is expected from the robot, and consequently the social robot should be equipped with a mechanism to learn from its user. This work explores how a social robot can learn to interact meaningfully with people in an efficient and safe way by learning from supervision by a human teacher in control of the robot's behaviour. To this end we propose a new machine learning framework called Supervised Progressively Autonomous Robot Competencies (SPARC). SPARC enables non-technical users to control and teach a robot, and we evaluate its effectiveness in Human-Robot Interaction (HRI). The core idea is that the user initially operates the robot remotely, while an algorithm associates actions to states and gradually learns. Over time, the robot takes over control from the user while still giving the user oversight of the robot's behaviour, by ensuring that every action executed by the robot has been actively or passively approved by the user. This is particularly important in HRI, as interacting with people, and especially vulnerable users, is a complex and multidimensional problem, and any errors by the robot may have negative consequences for the people involved in the interaction. Through the development and evaluation of SPARC, this work contributes to both HRI and Interactive Machine Learning, especially regarding how autonomous agents, such as social robots, can learn from people and how this specific teacher-robot interaction impacts the learning process. We showed that a supervised robot learning from its user can reduce the workload of this person, and that providing the user with the opportunity to control the robot's behaviour substantially improves the teaching process. Finally, this work also demonstrated that a robot supervised by a user could learn rich social behaviours in the real world, in a large, multidimensional, multimodal and sensitive environment, as a robot learned quickly (25 interactions across 4 sessions, averaging 1.9 minutes) to tutor children in an educational game, achieving behaviours and educational outcomes similar to a robot fully controlled by the user, with both providing a 10 to 30% improvement in game metrics compared to a passive robot.
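A minimal sketch of the suggest-then-approve loop SPARC describes, under the assumption of a nearest-neighbour action suggester; the state encoding, memory size, and approval interface are placeholders, not Senft's implementation.

```python
from collections import deque

class SparcPolicy:
    """Nearest-neighbour stand-in for the learner: remember approved
    (state, action) pairs and suggest the action from the closest state."""
    def __init__(self, memory_size=500):
        self.memory = deque(maxlen=memory_size)

    def suggest(self, state):
        if not self.memory:
            return None                                  # nothing learned yet
        dist = lambda s: sum((a - b) ** 2 for a, b in zip(s, state))
        return min(self.memory, key=lambda sa: dist(sa[0]))[1]

    def learn(self, state, action):
        self.memory.append((tuple(state), action))

def sparc_step(policy, state, ask_user):
    """ask_user(suggestion) returns the user's decision: the suggestion itself
    (passive approval), a different action (correction), or None (veto)."""
    suggestion = policy.suggest(state)
    action = ask_user(suggestion)
    if action is not None:                               # every executed action was approved
        policy.learn(state, action)
    return action                                        # None means nothing is executed
```

Because every executed action passes through `ask_user`, the robot gains autonomy only as its suggestions start matching what the supervisor would have chosen anyway.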
|
4 |
Robot learners: interactive instance-based learning with social robots
Park, Hae Won, 08 June 2015
On one hand, academic and industrial researchers have been developing and deploying robots that are used as educational tutors, mediators, and motivational tools. On the other hand, an increasing amount of interest has been placed on enabling non-expert users to program robots intuitively, which has led to promising research efforts in the fields of machine learning and human-robot interaction. This dissertation focuses on bridging the gap between these two subfields of robotics to provide a personalized experience for users during educational, entertainment, and therapeutic sessions with social robots. In order to make the interaction continuously engaging, the workspace shared between the user and the robot should provide personalized contexts for interaction while the robot learns to participate in new tasks that arise.
This dissertation aims to solve the task-learning problem using an instance-based framework that stores human demonstrations as task instances. These instances are retrieved when the system is confronted with a similar task, for which it generates predictions of task behaviors based on prior solutions. The main issues associated with the instance-based approach, i.e., knowledge encoding and acquisition, are addressed in this dissertation research using interactive methods of machine learning. This approach, further referred to as interactive instance-based learning (IIBL), utilizes the keywords people use to convey task knowledge to others to formulate task instances. The key features suggested by the human teacher are extracted during the demonstrations of the task. Regression approaches have been developed in this dissertation to model similarities between cases for instance retrieval, including multivariate linear regression and sensitivity analysis using neural networks. The learning performance of the IIBL methods was then evaluated while participants engaged in various block stacking and inserting scenarios and tasks on a touchscreen tablet with the humanoid robot Darwin.
With regard to end-users programming robots, the main benefit of the IIBL framework is that the approach fully utilizes the explanatory behavior of the instance-based method, which makes the learning process transparent to the human teacher. Such an environment not only encourages the user to produce better demonstrations, but also prompts the user to intervene at the moment a new instance is needed. It was shown through user studies that participants naturally adapt their teaching behavior to the robot learner's progress and adjust the timing and the number of demonstrations. It was also observed that the human-robot teaching and learning scenarios facilitate the emergence of various social behaviors from participants. Encouraging social interaction is often an objective of the task, especially with children with cognitive disabilities, and a pilot study with children with autism spectrum disorder revealed promising results comparable to the typically developing group.
Finally, this dissertation investigated the necessity of renewable context for prolonged interaction with robot companions. Providing personalized tasks that match each individual's preferences and developmental stages enhances the quality of the user experience with robot learners. Confronted with the limitations of the physical workspace, this research proposes utilizing commercially available touchscreen smart devices as a shared platform for engaging the user in educational, entertainment, and therapeutic tasks with the robot learners.
To summarize, this dissertation attempts to defend the thesis statement that a robot learner that utilizes an IIBL approach improves the performance and efficiency of general task learning and, when combined with state-of-the-art mobile technology that provides personalized context for interaction, enhances the user's experience for prolonged engagement with the task.
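To make the instance-retrieval idea concrete, here is a loose sketch of instance storage and feature-weighted retrieval; using linear-regression coefficient magnitudes as relevance weights is an assumption standing in for the regression-based similarity modelling described above, not Park's exact formulation, and the feature vectors are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

class InstanceMemory:
    def __init__(self):
        self.X, self.y = [], []                  # task features and demonstrated outcomes

    def add_demonstration(self, features, outcome):
        self.X.append(list(features))
        self.y.append(outcome)

    def _feature_weights(self):
        # Magnitudes of linear-regression coefficients as crude relevance weights.
        reg = LinearRegression().fit(np.array(self.X), np.array(self.y))
        w = np.abs(reg.coef_)
        return w / (w.sum() + 1e-9)

    def retrieve(self, query):
        # Return the stored instance closest to the query under a weighted distance.
        X = np.array(self.X)
        w = self._feature_weights()
        d = np.sqrt((((X - np.array(query)) ** 2) * w).sum(axis=1))
        best = int(d.argmin())
        return self.X[best], self.y[best]

# Example: two toy demonstrations, then retrieval for a new task description.
mem = InstanceMemory()
mem.add_demonstration([2, 0.5, 1], outcome=3.0)   # hypothetical feature vectors
mem.add_demonstration([5, 0.1, 0], outcome=7.0)
print(mem.retrieve([4, 0.2, 0]))                  # closest stored instance: the second one
```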
|
5 |
Assisting physiotherapists by designing a system utilising Interactive Machine Learning
Georgiev, Nikolay, January 2021
Millions of people throughout the world suffer from physical injuries and impairments and require physiotherapy to recover successfully. There are numerous obstacles in the way of accessing the necessary care: high costs, a shortage of medical personnel, and the need to travel to the appropriate medical facilities, something made even more challenging by the Covid-19 pandemic. One approach to addressing this issue is to incorporate technology into the practice of physiotherapists, allowing them to help more patients. Using research through design, this thesis explores how interactive machine learning can be utilised in a system designed to aid physiotherapists. To this end, after a literature review, an informal case study was conducted. In order to explore what functionality the suggested system would need, an interface prototype was iteratively developed and subsequently evaluated through formative testing with three physiotherapists. All participants found value in the proposed system and were interested in how such a system could be implemented and potentially used in practice; in particular, they valued the system's ability to monitor the patient's correct execution of the exercises and the increased engagement during rehabilitative training brought by the sonification. Several suggestions for future development of the topic are presented at the end of this work.
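As one hypothetical illustration of the sonification the participants valued, exercise-execution feedback could map the deviation of a measured joint angle from a target range onto the pitch of a feedback tone; the angle ranges and frequency mapping below are invented for this sketch and are not taken from the thesis.

```python
import numpy as np

def feedback_frequency(angle_deg, target=(80.0, 100.0),
                       base_hz=220.0, max_hz=880.0, max_error_deg=45.0):
    """Map deviation from the target joint-angle range to a tone frequency."""
    lo, hi = target
    error = max(lo - angle_deg, angle_deg - hi, 0.0)   # 0 while inside the target range
    t = min(error / max_error_deg, 1.0)
    return base_hz + t * (max_hz - base_hz)

def feedback_tone(angle_deg, duration_s=0.3, sample_rate=44100):
    """Return a sine-wave buffer that a playback library could stream."""
    f = feedback_frequency(angle_deg)
    t = np.linspace(0.0, duration_s, int(sample_rate * duration_s), endpoint=False)
    return 0.2 * np.sin(2.0 * np.pi * f * t)

print(feedback_frequency(90.0))    # inside the target range -> 220.0 (base tone)
print(feedback_frequency(130.0))   # 30 degrees past the range -> a noticeably higher pitch
```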
|
6 |
A Machine Learning Approach to Controlling Musical Synthesizer Parameters in Real-Time Live Performance
Sommer, Nathan, 16 June 2020
No description available.
|
7 |
SAMPLS: A prompt engineering approach using Segment-Anything-Model for PLant Science research
Sivaramakrishnan, Upasana, 30 May 2024
Comparative anatomical studies of diverse plant species are vital for understanding changes in gene functions such as those involved in solute transport and hormone signaling in plant roots. The state-of-the-art method for confocal image analysis, PlantSeg, utilized U-Net for cell wall segmentation. U-Net is a neural network model that requires training with a large number of manually labeled confocal images and lacks generalizability. In this research, we test a foundation model called the Segment Anything Model (SAM) to evaluate its zero-shot learning capability and whether prompt engineering can reduce the effort and time consumed in dataset annotation, facilitating a semi-automated training process. Our proposed method improved the detection rate of cells and reduced the error rate compared to state-of-the-art segmentation tools. We also estimated the IoU scores between the proposed method and PlantSeg to reveal the trade-off between accuracy and detection rate for different data qualities. By addressing the challenges specific to confocal images, our approach offers a robust solution for studying plant structure. Our findings demonstrate the efficiency of SAM in confocal image segmentation, showcasing its adaptability and performance compared to existing tools. Overall, our research highlights the potential of foundation models like SAM in specialized domains and underscores the importance of tailored approaches for achieving accurate semantic segmentation in confocal imaging. / Master of Science / Studying the anatomy of different plant species is crucial for understanding how genes work, especially those related to moving substances and signaling in plant roots. Scientists often use advanced techniques like confocal microscopy to examine plant tissues in detail. Traditional techniques like PlantSeg for automatically segmenting plant cells require a lot of computational resources and manual effort in preparing the dataset and training the model. In this study, we develop a novel technique using the Segment-Anything-Model that can learn to identify cells without needing as much training data. We found that SAM performed better than other methods, detecting cells more accurately and making fewer mistakes. By comparing SAM with PlantSeg, we could see how well they worked with different types of images. Our results show that SAM is a reliable option for studying plant structures using confocal imaging. This research highlights the importance of using tailored approaches like SAM to get accurate results from complex images, offering a promising solution for plant scientists.
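For reference, the IoU score mentioned above reduces to an overlap ratio between binary masks; a minimal, library-agnostic version (independent of the SAM and PlantSeg pipelines themselves) might look like this, with the toy 4x4 masks invented for illustration.

```python
# Minimal intersection-over-union between two binary segmentation masks,
# the metric used above to compare the SAM-based and PlantSeg segmentations.
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0                                # both masks empty: perfect agreement
    return np.logical_and(pred, true).sum() / union

# Example with two toy 4x4 masks:
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 2:4] = True
print(iou(a, b))                                  # 2 overlapping pixels / 6 total = 0.333...
```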
|
8 |
Towards improving automation with user input
Åström, Joakim, January 2021
As complex systems become more available, the possibility to leverage human intelligence to continuously train these systems is becoming increasingly valuable. Collecting and incorporating feedback from end-users into system development processes could hold great potential for the future development of autonomous systems, but it is not without difficulties. A literature review was conducted with the aim of reviewing and helping to categorize the different dynamics relevant to the act of collecting and implementing user feedback in system development processes. Practical examples of such systems are commonly found in active and interactive learning systems, which were studied with particular interest in possible novel applications in the industrial sector. This review was complemented by an exploratory experiment aimed at testing how system accuracy affected the feedback provided by users for a simulated people-recognition system. The findings from these studies indicate that when and how feedback is given, along with the context of use, is important for the interplay between system and user. The findings are discussed in relation to current directions in machine learning and interactive learning systems. The study concludes that factors such as system criticality, the phase in which feedback is given, how feedback is given, and the user's understanding of the learning process all have a large impact on the interactions and outcomes of the user-automation interplay. Suggestions for how to design feedback collection for increased user engagement and increased data assimilation are given.
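As a concrete, simplified instance of the active/interactive-learning dynamic discussed above, a system can ask users for feedback only on predictions it is uncertain about and retrain on their answers; the model choice, pool setup, and 0.7 confidence threshold below are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def feedback_round(model, labelled_X, labelled_y, pool_X, ask_user, threshold=0.7):
    """Ask the user only about low-confidence predictions, then retrain."""
    model.fit(labelled_X, labelled_y)
    proba = model.predict_proba(pool_X)
    uncertain = np.where(proba.max(axis=1) < threshold)[0]
    new_X, new_y = [], []
    for i in uncertain:
        new_X.append(pool_X[i])
        new_y.append(ask_user(pool_X[i]))      # user confirms or corrects the label
    if new_X:
        labelled_X = np.vstack([labelled_X, np.array(new_X)])
        labelled_y = np.concatenate([labelled_y, np.array(new_y)])
        model.fit(labelled_X, labelled_y)      # incorporate the feedback
    return model, labelled_X, labelled_y

# One round on toy data, with a user who always answers "1":
X0, y0 = np.random.rand(20, 3), np.random.randint(0, 2, 20)
pool = np.random.rand(50, 3)
feedback_round(RandomForestClassifier(n_estimators=50), X0, y0, pool, lambda x: 1)
```

The threshold is exactly the kind of design choice the thesis discusses: set too low, users are rarely consulted and little is learned; set too high, the feedback burden grows and engagement may drop.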
|
9 |
DESIGNING FOR THE IMAGINATION OF SONIC NATURAL INTERFACES
Knudsen, Tore, January 2018
In this thesis I present explorative work that shows how sounds beyond speech can be used on the input side in the design of interactive experiences and natural interfaces. By engaging in explorative approaches with a material view on sound and interactive machine learning, I've shown how these two counterparts may be combined with the goal of envisioning new possibilities and perspectives on sonic natural interfaces beyond speech. This exploration has been guided by a theoretical background of design materials, machine learning and sonic interaction design, and with a research-through-design driven process I've used iterative prototyping and workshops with participants to construct knowledge and guide the explorative process. My design work has resulted in new prototyping tools for designers to work with sound and interactive machine learning, as well as a prototype concept for kids that aims to manifest the material findings around sound and interactive machine learning made in this project. By evaluating my design work in contextual settings with participants, I've conducted both analytical and productive investigations that can construct new perspectives on how sound-based interfaces beyond speech can be designed to support new interactive experiences with artefacts. Here my focus has been to engage with sound as a design material from both contextual and individual perspectives, and on how this can be explored by end-users empowered by interactive machine learning to foster new forms of creative engagement with our physical world.
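As a bare-bones sketch of the kind of interactive machine learning with sound that such prototyping tools support (the two audio features and the k-NN model are stand-ins, not the tools built in the thesis), a designer might record a few labelled sound snippets and map new ones to interaction events.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def sound_features(snippet):
    """Two crude descriptors of a mono audio snippet (a 1-D float array)."""
    rms = np.sqrt(np.mean(snippet ** 2))                              # loudness
    zero_cross_rate = np.mean(np.abs(np.diff(np.sign(snippet)))) / 2  # rough brightness
    return np.array([rms, zero_cross_rate])

class SoundTrainer:
    def __init__(self, k=3):
        self.model = KNeighborsClassifier(n_neighbors=k)
        self.X, self.y = [], []
        self.trained = False

    def add_example(self, snippet, label):          # e.g. "tap", "scratch", "blow"
        self.X.append(sound_features(snippet))
        self.y.append(label)
        if len(self.y) >= self.model.n_neighbors and len(set(self.y)) > 1:
            self.model.fit(self.X, self.y)
            self.trained = True

    def classify(self, snippet):
        if not self.trained:
            return None
        return self.model.predict([sound_features(snippet)])[0]
```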
|
10 |
Designing User Interfaces for Interaction with Machine Learning Models / Designandet av användargränssnitt för interaktion med maskininlärningsmodeller
Sundberg, Nils, January 2021
Antagning.se and universityadmissions.se are two websites that enable people to apply for higher education in Sweden. These websites are developed and maintained by ITS, a department at Umeå University. Antagning.se and universityadmissions.se allow applicants to add documents such as grades through a document upload function. Recently, there has been some experimentation with machine learning as a way to read the documents that are uploaded. This study explores the possibility of using machine learning in the user interface of the document upload function in a way that assists users in uploading documents correctly. The objective is to determine whether doing this affects users' confidence that they uploaded a document successfully. The Double Diamond method was used to design a lo-fi, a mid-fi and a hi-fi prototype. The lo-fi prototype was developed during an innovation sprint at ITS, with the purpose of developing a user interface for a Swedish folk high school validation system. The mid-fi prototype was tested using a qualitative user test to find issues that had to be addressed in the hi-fi prototype. A quantitative user test was conducted to determine whether the prototype affected users' confidence that they had completed a task successfully, compared with performing the same task on the current system in use by Antagning and University Admissions. The results from the user testing of the hi-fi prototype were analyzed. Using hypothesis testing, it could not be determined that there was a significant difference in user confidence between the hi-fi prototype and the current system.
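The hypothesis test mentioned above can be illustrated with a standard two-sample comparison of confidence ratings; the numbers below are invented, and the use of a Mann-Whitney U test (a common choice for ordinal ratings) is an assumption about the study design, not a detail reported in the abstract.

```python
from scipy.stats import mannwhitneyu

prototype_confidence = [5, 4, 5, 3, 4, 5, 4, 4]    # hypothetical 1-5 ratings
current_confidence   = [4, 4, 3, 4, 5, 3, 4, 4]    # hypothetical 1-5 ratings

stat, p_value = mannwhitneyu(prototype_confidence, current_confidence,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
# A p-value above the chosen significance level (e.g. 0.05) would correspond to the
# thesis's conclusion that no significant difference in confidence could be shown.
```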
|