
Hybrid classical-quantum algorithms for optimization and machine learning

Zardini, Enrico 30 April 2024 (has links)
Quantum computing is a form of computation that exploits quantum mechanical phenomena for information processing, with promising applications in optimization and machine learning, among others. Indeed, quantum machine learning is currently one of the most popular directions of research in quantum computing, offering solutions with at least a theoretical advantage over their classical counterparts. Nevertheless, the quantum devices available in the current Noisy Intermediate-Scale Quantum (NISQ) era are limited in the number of qubits and significantly affected by noise. An interesting alternative to the current prototypes of general-purpose quantum devices is represented by quantum annealers, special-purpose quantum machines implementing quantum annealing, a heuristic search for solving optimization problems. However, despite their higher number of qubits, current quantum annealers are characterised by very sparse topologies. These practical issues have led to the development of hybrid classical-quantum schemes, which aim to leverage the strengths of both paradigms while circumventing some of the limitations of the available devices. In this thesis, several hybrid classical-quantum algorithms for optimization and machine learning are introduced and/or empirically assessed, empirical evaluation being a fundamental part of algorithmic research. The quantum computing models taken into account are both quantum annealing and circuit-based universal quantum computing. The results obtained show the effectiveness of most of the proposed approaches.
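Quantum annealers natively minimize quadratic unconstrained binary optimization (QUBO) objectives. As a rough illustration of the kind of problem handed to such a device, the sketch below builds a tiny QUBO matrix and minimizes it by classical brute force; the matrix values and the brute-force stand-in are assumptions for this example, not material from the thesis.

```python
# Hedged sketch: a toy QUBO instance of the kind submitted to a quantum
# annealer, minimized here by classical brute force (illustrative only).
import itertools
import numpy as np

# QUBO objective: minimize x^T Q x over binary vectors x.
# This Q loosely encodes "pick exactly one of three items" penalties plus
# item costs -- an assumed example, not a problem from the thesis.
Q = np.array([
    [-1.0,  2.0,  2.0],
    [ 0.0, -1.5,  2.0],
    [ 0.0,  0.0, -0.5],
])

best_x, best_e = None, float("inf")
for bits in itertools.product([0, 1], repeat=Q.shape[0]):
    x = np.array(bits)
    energy = x @ Q @ x  # the "energy" the annealer would minimize
    if energy < best_e:
        best_x, best_e = x, energy

print(f"minimum-energy assignment: {best_x}, energy = {best_e}")
```

On real hardware the same matrix would additionally have to be embedded onto the annealer's sparse qubit topology, which is precisely the limitation the hybrid schemes above work around.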

Incremental Linearization for Satisfiability and Verification Modulo Nonlinear Arithmetic and Transcendental Functions

Irfan, Ahmed January 2018 (has links)
Satisfiability Modulo Theories (SMT) is the problem of deciding the satisfiability of a first-order formula with respect to some theory or combination of theories; Verification Modulo Theories (VMT) is the problem of analyzing reachability for transition systems represented in terms of SMT formulae. In this thesis, we tackle the problems of SMT and VMT over the theories of polynomials over the reals (NRA), over the integers (NIA), and of NRA augmented with transcendental functions (NTA). We propose a new abstraction-refinement approach called Incremental Linearization. The idea is to abstract nonlinear multiplication and transcendental functions as uninterpreted functions in an abstract domain limited to linear arithmetic with uninterpreted functions. The uninterpreted functions are incrementally axiomatized by means of upper- and lower-bounding piecewise-linear constraints. In the case of transcendental functions, particular care is required to ensure the soundness of the abstraction. The method has been implemented in the MathSAT SMT solver and in the nuXmv VMT model checker. An extensive experimental evaluation on a wide set of benchmarks from verification and mathematics demonstrates the generality and effectiveness of our approach. Moreover, the proposed technique enables tackling the (nonlinear) VMT problems arising in practical scenarios involving design environments such as Simulink. This capability has been achieved by integrating nuXmv with Simulink using a compilation-based approach, and is evaluated on an industrial-level case study.
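As a concrete, hedged sketch of one refinement loop in this scheme, the snippet below uses Z3's Python API as an assumed stand-in for MathSAT, with an invented toy constraint: nonlinear multiplication is abstracted as an uninterpreted function, spurious abstract models are detected, and tangent-plane lemmas are added to block them.

```python
# Hedged sketch of incremental linearization for NRA: abstract x*y as an
# uninterpreted function, then refine spurious models with tangent lemmas.
from z3 import (And, Function, Implies, Or, Real, RealSort, RealVal,
                Solver, sat)

x, y = Real("x"), Real("y")
mul = Function("mul", RealSort(), RealSort(), RealSort())  # abstracts x*y

s = Solver()
s.add(mul(x, y) == 2, x + y == 3)  # abstract version of: x*y = 2, x + y = 3

for _ in range(20):  # refinement budget (illustrative)
    if s.check() != sat:
        print("unsat in the abstraction => unsat in NRA")
        break
    m = s.model()
    a = m.eval(x, model_completion=True).as_fraction()
    b = m.eval(y, model_completion=True).as_fraction()
    v = m.eval(mul(x, y), model_completion=True).as_fraction()
    if v == a * b:  # abstract model already respects real multiplication
        print(f"model found: x={a}, y={b}")
        break
    ta, tb = RealVal(str(a)), RealVal(str(b))
    # Tangent-plane lemmas for z = x*y around the spurious point (a, b).
    s.add(Implies(x == ta, mul(x, y) == ta * y),
          Implies(y == tb, mul(x, y) == tb * x),
          Implies(Or(And(x >= ta, y >= tb), And(x <= ta, y <= tb)),
                  mul(x, y) >= tb * x + ta * y - ta * tb),
          Implies(Or(And(x <= ta, y >= tb), And(x >= ta, y <= tb)),
                  mul(x, y) <= tb * x + ta * y - ta * tb))
```

The lemmas are linear in x and y once the model point (a, b) is fixed, which is what keeps the refined problem inside linear arithmetic with uninterpreted functions.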

Energy-efficient, Large-scale Ultra-wideband Communication and Localization

Vecchia, Davide 08 July 2022 (has links)
Among the low-power wireless technologies that have emerged in recent years, ultra-wideband (UWB) has successfully established itself as the reference for accurate ranging and localization, both outdoors and indoors. Due to its unprecedented performance, paired with relatively low energy consumption, UWB is going to play a central role in the next wave of location-based applications. As the trend of integration in smartphones continues, UWB is also expected to reach ordinary users, revolutionizing our lives the same way GPS and similar technologies have done. But the impact of UWB may not be limited to ranging and localization. Because of its considerable data rate and its robustness to obstacles and interference, UWB communication may hold untapped potential for sensing and control applications. Nevertheless, several research questions still need to be answered to assess whether UWB can be adopted widely in the communication and localization landscapes. The rapid evolution of UWB radios and the release of ever more efficient chips are a clear indication of the growing market for this technology. However, for it to become pervasive, full-fledged communication and localization systems must be developed and evaluated, tackling the shortcomings affecting current prototypes. UWB systems are typically single-hop networks designed for small areas, making them impractical for large-scale coverage. This limitation is found in communication and localization systems alike. For communication systems specifically, energy-efficient multi-hop protocols have so far remained unexplored. As for localization systems, they rely on mains-powered anchors to circumvent the issue of energy consumption, in addition to only supporting small areas. Very few options are available for light, easy-to-deploy infrastructures using battery-powered anchors. Yet large-scale systems are required in common settings like industrial facilities and agricultural fields, as well as office spaces and museums. The general goal of enabling UWB in spaces like these entails a number of issues. Large multi-hop infrastructures exacerbate the known limitations of small, single-hop networks; notably, reliability and latency requirements clash with the need to reduce energy consumption. Finally, when device mobility is a factor, continuity of operations across the covered area is a challenge in itself. In this thesis, we design energy-efficient UWB systems for large-scale areas, supporting device mobility across multi-hop infrastructures. As our opening contribution, we study the unique interference rejection properties of the radio to inform our design. This analysis yields a number of findings on the impact of interference on communication and distance estimation that developers can directly use to improve UWB solutions. These findings also suggest that concurrent transmissions in the same frequency channel are a practical option in UWB. While the overlapping of frames is typically avoided to prevent collisions, concurrent transmissions have counterintuitively been used to provide highly reliable communication primitives for a variety of traffic patterns in narrowband radios. In our first effort to use concurrent transmissions in a full system, we introduce the UWB version of Glossy, a renowned protocol for efficient network-wide synchronization and data dissemination.
Inspired by the success of concurrency-based protocols in narrowband, we then apply the same principles to define a novel data collection protocol, Weaver. Instead of relying on independent Glossy floods like state-of-the-art systems, we weave multiple data flows together to make our collection engine faster, more reliable, and more energy-efficient. With Glossy and Weaver supporting the communication aspect in large-scale networks, we then propose techniques for large-scale localization systems. We introduce TALLA, a TDoA solution for continuous position estimation based on wireless synchronization. We evaluate TALLA in a UWB testbed and in simulations, for which we accurately replicate the behavior of the clocks in our real-world platforms. We then offer a glimpse of what TALLA can be employed for, deploying an infrastructure in a science museum to track visitors. The collected movement traces allow us to analyze fine-grained stop-move mobility patterns and infer the sequence of visited exhibits, which is only possible because of the high spatio-temporal granularity offered by TALLA. Finally, with SONAR, we tackle the issue of large-scale ranging and localization when the infrastructure cannot be mains-powered. By blending synchronization and scheduling operations into neighbor discovery and ranging, we drastically reduce energy consumption and ensure years-long system lifetime. Overall, this thesis enhances UWB applicability in scenarios previously out of reach for this technology, by providing the missing communication and localization support for large areas and battery-powered devices. Throughout the thesis, we follow an experiment-driven approach to validate our protocol models and simulations. Based on the evidence collected during this research endeavor, we develop full systems that operate in a large testbed at our premises, showing that our solutions are immediately applicable in real settings.
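To illustrate the kind of computation a TDoA system such as TALLA performs once its anchors are wirelessly synchronized, here is a minimal sketch that recovers a tag position from time differences of arrival via nonlinear least squares; the anchor layout, noise-free measurements, and solver choice are assumptions of the example, not details from the thesis.

```python
# Hedged sketch: TDoA multilateration with synchronized anchors.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
true_pos = np.array([12.0, 7.0])  # ground truth used to fake measurements

# TDoA of each anchor i > 0 w.r.t. reference anchor 0 (here: noise-free).
dists = np.linalg.norm(anchors - true_pos, axis=1)
tdoa = (dists[1:] - dists[0]) / C

def residuals(p):
    # Mismatch between predicted and measured range differences.
    d = np.linalg.norm(anchors - p, axis=1)
    return (d[1:] - d[0]) - C * tdoa

est = least_squares(residuals, x0=np.array([15.0, 15.0])).x
print(f"estimated position: {est}")  # ~ [12, 7]
```

In a real deployment the measurements carry clock and ranging noise, which is why the quality of the wireless synchronization directly bounds the achievable accuracy.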

Deep Learning for Brain Structural Connectivity Analysis: From Tissue Segmentation to Tractogram Alignment

Amorosino, Gabriele 22 July 2024 (has links)
Magnetic Resonance Imaging (MRI) is a cornerstone in neuroimaging for studying brain anatomy and functions. Anatomical MRI images, such as T1-weighted (T1-w) scans, allow the non-invasive visualization of the brain tissues, enabling the investigation of the brain morphology and facilitating the diagnosis of both acquired (e.g., tumors, stroke lesions, infections) and congenital (e.g., malformations) brain disorders. T1-w images provide a detailed representation of brain anatomy and accurate differentiation between the main brain structures, such as white matter (WM) and gray matter (GM); therefore, they are frequently used in combination with advanced sequences such as diffusion MRI (dMRI) for the computation of the structural connectivity of the brain. In particular, from the processing of dMRI data, it is possible to investigate the structures of WM through tractography techniques, obtaining a virtual representation of the WM pathways called a tractogram. Since the tractogram is a collection of digital fibers representing the neuronal axons connecting the brain's cortical areas, it is the fundamental element for studying the brain's structural connectivity. A critical step for processing the tractography data is the accurate labeling of the brain tissues, usually performed through brain tissue segmentation of T1-w images. Even though the gold standard is manual segmentation, it is time-consuming and prone to intra/inter-operator variability. Automated model-based methods produce more consistent and reliable results; however, they struggle with accuracy in the case of pathological brains due to reliance on priors based on normal anatomy. Recently, deep learning (DL) has shown the potential of supervised data-driven approaches for brain tissue segmentation by leveraging the information encoded in the signal intensity of T1-w images. As a first contribution of this thesis, we reported empirical evidence that a data-driven approach is effective for brain tissue segmentation in pathological brains. By implementing a DL network trained on a large dataset of only healthy subjects, we demonstrated improvements in segmenting the brain tissues compared to models based on healthy anatomical priors, especially on severely distorted brains. Additionally, we published a benchmark for enabling an open investigation into improving tissue segmentation of distorted brains, providing a training dataset of about one thousand healthy individuals with T1-w MR images and corresponding brain tissue labels, and a test dataset including several tens of individuals with severe brain distortions. Another crucial aspect of processing tractography data for brain connectivity analysis is the correct alignment of the WM structures across different subjects or their normalization into a common reference space, usually performed via tractography alignment. The best practice is to perform the registration using T1-w images and then apply the resulting transformation to align the tractography, despite T1-w images lacking fiber orientation information. In light of this, various methods have been proposed to leverage the information of the WM from dMRI data, ranging from scalar diffusion maps to more complex models encoding fiber orientation in the voxels. As a second contribution of the thesis, we provide a comprehensive survey of methods for conducting tractogram alignment. Additionally, we include an empirical study with the results of a quantitative comparison among the main methods for which an implementation is available.
Our findings indicate that the use of increasingly complex diffusion models does not significantly improve the alignment of tractograms. Conversely, correspondence methods that use the fibers directly to compute the alignment outperform voxel-based methods, albeit with some limitations: they do not produce a deformation field, operate in an unsupervised manner, and do not exploit anatomical information. Recently, geometric deep learning (GDL) models have shown promising results in handling non-grid data like tractograms, offering new possibilities for WM structure alignment. The third main contribution of this thesis is the implementation of a GDL model for tractogram alignment through a supervised approach guided by fiber correspondence. The alignment is predicted as the displacement of fiber points, based on a GDL registration framework that combines graph convolutional networks and differentiable loopy belief propagation, incorporating the definition of fiber structure into the encoding of the graph. Our empirical analysis demonstrates the advantages of utilizing the proposed GDL framework over traditional volumetric registration, showcasing high alignment accuracy, low inference time, and good generalization capabilities. Overall, this thesis advances the methodology for processing MRI data for brain structural connectivity, addressing the challenges of tissue segmentation and tractography alignment, and demonstrating the potential of DL approaches even in the case of pathological brains.
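As an illustration of what "using the fibers directly" can mean, the sketch below computes the mean-direct-flip (MDF) streamline distance, a standard metric for establishing fiber correspondences between tractograms; it is offered as an assumed example, not as the exact correspondence criterion used in the thesis.

```python
# Hedged sketch: mean-direct-flip (MDF) distance between two streamlines,
# a common fiber-to-fiber metric usable to establish correspondences.
import numpy as np

def mdf(fiber_a: np.ndarray, fiber_b: np.ndarray) -> float:
    """fiber_a, fiber_b: (N, 3) arrays resampled to the same N points."""
    direct = np.linalg.norm(fiber_a - fiber_b, axis=1).mean()
    flipped = np.linalg.norm(fiber_a - fiber_b[::-1], axis=1).mean()
    return min(direct, flipped)  # streamlines have no preferred orientation

# Toy example: two nearby "fibers" with opposite point orderings.
t = np.linspace(0.0, 1.0, 20)
f1 = np.stack([t, t**2, np.zeros_like(t)], axis=1)
f2 = f1[::-1] + 0.05  # same curve, reversed and slightly shifted
print(f"MDF distance: {mdf(f1, f2):.3f}")
```

Pairing each fiber with its nearest neighbor under such a metric is one simple way to obtain the supervision signal that a correspondence-guided alignment model can be trained against.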

Real-time adaptation of stimulation protocols for neuroimaging studies

Kalinina, Elena January 2018 (has links)
Neuroimaging techniques allow the acquisition of images of the brain involved in cognitive tasks. In traditional neuroimaging studies, the brain response to external stimulation is investigated. The stimulation categories, the order in which they are presented to the subject, and the presentation duration are defined in the stimulation protocol. The protocol is fixed before the beginning of the study and does not change in the course of the experiment. Recently, there has been a major rise in the number of real-time neuroscientific experiments where the incoming brain data is analysed in an online mode. Real-time neuroimaging studies open an avenue for approaching a whole new broad range of questions, like, for instance, how the outcome of a cognitive task depends on the current brain state. Real-time experiments need a different protocol type that can be flexibly and interactively adjusted in line with the experimental scope, e.g. hypothesis testing or optimising the design for an individual subject's parameters. A plethora of methods is currently deployed for protocol adaptation: information theory, optimisation algorithms, genetic algorithms. What is lacking, however, is a paradigm for interacting with the subject's state, the brain state in particular. I address this problem in my research. I have concentrated on two types of real-time experiments: closed-loop stimulation experiments and brain-state-dependent stimulation (BSDS). As the first contribution, I put forward a method for closed-loop stimulation adaptation and apply it in a real-time Galvanic Skin Response (GSR) experimental setting. The second contribution is an unsupervised method for brain state detection and a real-time functional Magnetic Resonance Imaging (rtfMRI) setup making use of this method. In a neurofeedback setting, the goal is for the subject to achieve a target state. Ideally, the stimulation protocol should be adapted to the subject to better guide them towards that state. One way to do this would be to model the subject's activity so that we can evaluate the effect of various stimulation options and choose the optimal ones, maximising the reward or minimising the error. However, developing such models for neuroimaging neurofeedback experiments currently presents a number of challenges, namely the complex dynamics of a very noisy neural signal and the non-trivial mapping between neural and cognitive processes. We designed a simpler experiment as a proof of concept using the GSR signal. We showed that if it is possible to model the subject's state and the dynamics of the system, it is also possible to steer the subject towards the desired state. In BSDS, there is no target state, but the challenge lies in the most accurate identification of the subject's state at any given moment. The reference, state-of-the-art method for determining the current brain state is the use of machine learning classifiers, or multivariate decoding. However, running supervised machine learning classifiers on neuroimaging data has a number of issues that might seriously limit their application, especially in real-time scenarios. For BSDS, we show how an unsupervised machine learning algorithm (clustering in real-time) can be employed with fMRI data to determine the onset of the activated brain state. We also developed a real-time fMRI setup for BSDS that uses this method. In an initial attempt to base BSDS on brain decoding, we encountered a set of issues related to classifier use.
These issues prompted us to develop a new set of methods based on statistical inference that help address fundamental neuroscientific questions. These methods are presented as a secondary contribution of the thesis.
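A minimal sketch of the unsupervised ingredient is given below: incoming (simulated) fMRI volumes are clustered online with scikit-learn's MiniBatchKMeans, and a change in cluster assignment flags a state transition. The feature representation, cluster count, and synthetic data are assumptions made for illustration, not the thesis's actual setup.

```python
# Hedged sketch: incremental clustering of incoming fMRI volumes to flag
# a brain-state change in real time (illustrative, not the thesis code).
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
n_voxels = 500  # stand-in for a masked, vectorized volume

def acquire(t):
    # Simulated volume; the "activated" state begins at volume 30.
    return rng.normal(0.0, 1.0, n_voxels) + (2.0 if t >= 30 else 0.0)

km = MiniBatchKMeans(n_clusters=2, random_state=0)
warmup = np.stack([acquire(t) for t in range(10)])
km.partial_fit(warmup)  # initialize the centroids on a short buffer

prev = None
for t in range(10, 60):
    v = acquire(t).reshape(1, -1)
    km.partial_fit(v)             # online centroid update, one volume at a time
    label = int(km.predict(v)[0])
    if prev is not None and label != prev:
        print(f"state change detected at volume {t}")
    prev = label
```

In a real rtfMRI loop the vectorized volume would come from the scanner's online reconstruction after motion correction, and the detected onset would trigger the stimulation.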

From Legal Contracts to Formal Specifications

Soavi, Michele 27 October 2022 (has links)
The challenge of implementing and executing a legal contract in a machine has been gaining significant interest recently with the advent of blockchain, smart contracts, LegalTech and IoT technologies. Popular software engineering methods, including agile ones, are unsuitable for such outcome-critical software. Instead, formal specifications are crucial for implementing smart contracts, both to ensure they capture the intentions of the stakeholders and to ensure that their execution complies with the terms and conditions of the original natural-language legal contract. This thesis concerns supporting the semi-automatic generation of formal specifications of legal contracts written in Natural Language (NL). The main contribution is a framework, named Contratto, where the transformation process from NL to a formal specification is subdivided into five steps: (1) identification of ambiguous terms in the contract and manual disambiguation; (2) structural and semantic annotation of the legal contract; (3) discovery of relationships among the concepts identified in step (2); (4) formalization of the terms used in the NL text into a domain model; (5) generation of formal expressions that describe what programmers should implement in a smart contract. A systematic literature review on the main topic of the thesis was performed to support the definition of the framework. Requirements were derived from standard business contracts for a preliminary implementation of tools that support the transformation process, particularly concerning step (2). A prototype environment was proposed to semi-automate the transformation process, although significant manual intervention is still required. The preliminary evaluation confirms that the annotation tool can perform the annotation as well as human annotators, albeit novice ones.
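As a purely hypothetical illustration of steps (4) and (5), the sketch below casts one invented clause into a toy domain model and a machine-checkable expression; the class and field names are assumptions for the example and are not drawn from Contratto.

```python
# Hedged sketch: a toy domain model (step 4) and a checkable formal
# expression (step 5) for one contract clause -- names are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Obligation:
    debtor: str      # party that owes the performance
    action: str      # what must be done
    deadline: date   # by when

    def violated(self, performed_on: Optional[date], today: date) -> bool:
        # The obligation is violated once the deadline passes unperformed.
        return performed_on is None and today > self.deadline

# Annotated clause (invented): "The Buyer shall pay 500 EUR by 2023-06-30."
pay = Obligation(debtor="Buyer", action="pay 500 EUR",
                 deadline=date(2023, 6, 30))
print(pay.violated(performed_on=None, today=date(2023, 7, 1)))  # True
```

A smart-contract programmer would then implement exactly this check on-chain, which is the sense in which the generated expressions tell programmers what must be implemented.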

Towards reconnecting Computer Science Education with the World out there

Angeli, Lorenzo 10 December 2021 (has links)
Computing is becoming exponentially more pervasive, and the so-called process of "Digital Transformation" is only beginning. As computers become ever more relevant, our societies will need computing professionals who are well equipped to face the many challenges their own discipline has amplified. The education of computer scientists has so far mostly focused on equipping them with technical skills. Society and academia, however, increasingly recognise computing as a field where disciplines collide and intersect. An example that we investigate is Innovation and Entrepreneurship (I&E), a field that has often been used to equip computer science students with soft skills and non-technical competences. Computer science faces some unique problems, among them lower student interest in non-technical subjects and a constant process of epistemic and technological obsolescence. This thesis showcases some experiences that aim to address these challenges, going towards (re)connecting the Humans and Machines participating in computer science education with the needs of the World of today and tomorrow. Our work combines theoretical reflections with pedagogical experiments, to ensure that it has both descriptive power and empirical validation. To aid teachers and learners in the change process, these experiments share a pedagogical approach rooted in Active Learning, ranging from Challenge-Based Learning to Peer Education to custom-tailored teaching methodologies. In designing each experiment, we start by asking ourselves: how is what we want to teach practiced in the real world? Theoretically, this thesis contributes to the state of the art by conducting a horizontal exploration of how computer science education can enter an age ever more dominated by so-called ambiguity. Methodologically, we propose lightweight techniques for qualitative measurement that are rigorous but introduce little methodological burden, emphasising our work's reflective and exploratory dimension. Our work aims to show how, using the same broad design process, courses can be flexibly adapted to fit an ever-changing world, including significant disruptions such as the transition to online education.

Fast, Reliable, Low-power Wireless Monitoring and Control with Concurrent Transmissions

Trobinger, Matteo 27 July 2021 (has links)
Low-power wireless technology is part and parcel of our daily life, shaping the way in which we behave, interact, and more generally live. The ubiquity of cheap, tiny, battery-powered devices augmented with sensing, actuation, and wireless communication capabilities has given rise to a "smart" society, where people, machines, and objects are seamlessly interconnected, among themselves and with the environment. Behind the scenes, low-power wireless protocols are what enables and rules all interactions, organising these embedded devices into wireless networks and orchestrating their communications. Recent years have witnessed a persistent increase in the pervasiveness and impact of low-power wireless. After having spawned a wide spectrum of powerful applications in the consumer domain, low-power wireless solutions are extending their influence over the industrial context, where their adoption as part of feedback control loops is envisioned to revolutionise the production process, paving the way for the Fourth Industrial Revolution. However, as the scale and relevance of low-power wireless systems continue to grow, so do the challenges posed to the communication substrates, required to satisfy ever stricter requirements in terms of reliability, responsiveness, and energy consumption. Harmonising these conflicting demands is far beyond what current network stacks and control architectures enable; the need to timely bridge this gap has spurred a new wave of interest in low-power wireless networking, and directly motivated our work. In this thesis, we take on this challenge with a main conceptual and technical tool: concurrent transmissions (CTX), a technique in which nodes are made to transmit concurrently, and which has been shown to unlock unprecedentedly fast, reliable, and energy-efficient multi-hop communication in low-power wireless networks, opening new opportunities for protocol design. We first direct our research endeavour towards industrial applications, focusing on the popular IEEE 802.15.4 narrowband PHY layer, and advance the state of the art along two different directions: interference resilience and aperiodic wireless control. We tackle radio-frequency noise by extensively analysing, for the first time, the dependability of CTX under different types, intensities, and distributions of reproducible interference patterns, and by devising techniques to push it further. Specifically, we concentrate on CRYSTAL, a recently proposed communication protocol that relies on CTX to rapidly and dependably collect aperiodic traffic. By integrating channel hopping and noise detection into the protocol operation, we provide a novel communication stack capable of supporting aperiodic transmissions with near-perfect reliability and a per-mille radio duty cycle despite harsh external interference. These results lay the ground for the exploitation of CTX in aperiodic wireless control; we explore this research direction by co-designing the Wireless Control Bus (WCB), our second contribution. WCB is a clean-slate CTX-based communication stack tailored to event-triggered control (ETC), an aperiodic control strategy with the potential to significantly improve the efficiency of wireless control systems, but whose real-world impact has been hampered by the lack of appropriate networking support. Operating in conjunction with ETC, WCB timely and dynamically adapts the network operation to the control demands, unlocking an order-of-magnitude reduction in energy costs w.r.t.
traditional periodic approaches while retaining the same control performance, thereby unleashing and concretely demonstrating the true potential of ETC for the first time. Nevertheless, low-power wireless communications are rapidly evolving, and new radios striking novel trade-offs are emerging. Among these, in the second part of the thesis we focus on ultra-wideband (UWB). By providing hitherto missing networking primitives for multi-hop dissemination and collection over UWB, we shed light on the communication potential opened up by the high data throughput, clock precision, and noise resilience offered by this technology. Specifically, as a third contribution, we demonstrate that CTX not only can be successfully exploited for multi-hop UWB communications but, once embodied in a full-fledged system, provides reliability and energy performance akin to narrowband. Furthermore, the higher data rate and clock resolution of UWB chips unlock up to 80% latency reduction w.r.t. narrowband CTX, along with orders-of-magnitude improvements in network-wide time synchronization. These results showcase how UWB CTX could significantly benefit a multitude of applications, notably including low-power wireless control. With WEAVER, our last contribution, we make an additional step in this direction, by supporting the key functionality of data collection with an ultra-fast convergecast stack for UWB. Challenging the internal mechanics of CTX, WEAVER interleaves data and acknowledgement flows in a single, self-terminating network-wide flood, enabling the concurrent collection of different packets from multiple senders with unprecedented latency, reliability, and energy efficiency. Overall, this thesis pushes forward the applicability and performance of low-power wireless, contributing techniques and protocols that enhance the dependability, timeliness, energy efficiency, and interference resilience of this technology. Our research is characterized by a strong experimental slant, where the design of the systems we propose meets the reality of testbed experiments and evaluation. Via our open-source implementations, researchers and practitioners can directly use, extend, and build upon our contributions, fostering future work and research on the topic.
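To give a feel for the CTX flooding primitive that Glossy-style protocols build on, here is a minimal slot-level simulation: nodes holding the packet transmit concurrently, newly reached nodes start relaying in the following slot, and the flood self-terminates once every node has transmitted N times. The topology, the value of N, and the assumption that concurrent transmissions are always received are illustrative simplifications, not a model from the thesis.

```python
# Hedged sketch: slot-level simulation of a Glossy-style CTX flood
# (idealized: concurrently transmitted packets are always received).
N_TX = 2  # transmissions per node before it goes back to sleep

# Assumed 5-node line topology: node -> neighbors.
topo = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

first_rx = {0: -1}                  # initiator holds the packet from the start
tx_left = {n: 0 for n in topo}
tx_left[0] = N_TX

for slot in range(10):
    senders = [n for n in topo
               if tx_left[n] > 0 and first_rx.get(n, slot) < slot]
    if not senders:
        break                       # flood has self-terminated
    for s in senders:
        tx_left[s] -= 1
    for s in senders:
        for nb in topo[s]:
            if nb not in first_rx:  # first reception: relay from next slot on
                first_rx[nb] = slot
                tx_left[nb] = N_TX
    print(f"slot {slot}: concurrent senders {senders}")

print("first-reception slot per node:", first_rx)
```

Even this toy run shows the appeal of the primitive: the packet crosses one hop per slot with no routing state at all, which is the property protocols like CRYSTAL and WEAVER exploit and extend.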

EFFETTI DELLA RICERCA DI INFORMAZIONI DI SALUTE ONLINE SULLE AZIONI DEL MEDICO E DEL PAZIENTE / EFFECTS OF ONLINE HEALTH INFORMATION SEEKING ON PHYSICIAN/PATIENT'S ACTIONS

AFFINITO, LETIZIA 25 March 2013 (has links)
We conducted a national online survey about health care experiences associated with digital communication of prescription drugs. 46 percent of the sample (265 adults) had found information about prescription drugs during their online searches in the previous 12 months. 40 percent of respondents said they did not find exhaustive information about risks and benefits, while 52 percent said the information helped them follow their physician's indications and advice. Among the respondents who had a physician visit during which health information found online was discussed, 84 percent received a drug prescription, with only 17 percent reporting that the prescribed drug was the same one found online; 74 percent were referred to a specialist and 80 percent received a prescription for diagnostic tests. More than half also reported actions taken by their physician other than prescribing the drug brand found online. 20 percent of respondents stated that the information found online about prescription drugs reduced their trust in their physician, while 41 percent stated it helped them communicate better with their physician. Despite concerns about the negative consequences of online health communication, we found no differences in health effects between patients who took the "advocated"/"mentioned" drugs and those who took other prescription drugs.

Automatic Assessment of L2 Spoken English

Bannò, Stefano 18 May 2023 (has links)
In an increasingly interconnected world where English has become the lingua franca of business, culture, entertainment, and academia, the number of learners of English as a second language (L2) has been growing steadily. This has contributed to an increasing demand for automatic spoken language assessment systems, both for formal settings and for practice situations in Computer-Assisted Language Learning. One common misunderstanding about automated assessment is the assumption that machines should replicate the human process of assessment. Instead, computers are programmed to identify, extract, and quantify features in learners' productions, which are subsequently combined and weighted in a multidimensional space to predict a proficiency level or grade. In this regard, transferring human assessment knowledge and skills into an automatic system is a challenging task, since this operation should take into account the complexity and the specificities of the proficiency construct. This PhD thesis presents research on methods and techniques for the automatic assessment of, and feedback on, L2 spoken English, mainly focusing on the application of deep learning approaches. In addition to overall proficiency grades, the main forms of feedback explored in this thesis are feedback on grammatical accuracy and assessment of particular aspects of proficiency (e.g., grammar, pronunciation, rhythm, fluency). The first study explores the use of written data and the impact of features extracted through grammatical error detection on proficiency assessment, while the second illustrates a pipeline which starts from disfluency detection and removal, passes through grammatical error correction, and ends with proficiency assessment. Grammar, as well as rhythm, pronunciation, and lexical and semantic aspects, is also considered in the third study, which investigates whether systems targeting specific facets of proficiency can be used analytically when only holistic scores are available. Finally, in the last two studies, we investigate the use of self-supervised speech representations for both holistic and analytic proficiency assessment. While aiming at enhancing the performance of state-of-the-art automatic systems, the present work pays particular attention to the validity and interpretability of assessment, both holistic and analytic, and intends to pave the way to a deeper and more insightful understanding of automatic systems for speaking assessment and feedback.
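As a hedged sketch of the idea behind the last two studies, the snippet below mean-pools self-supervised speech representations and regresses a single holistic score with a linear head; the specific checkpoint (facebook/wav2vec2-base via Hugging Face Transformers), the pooling strategy, the untrained head, and the random audio are all assumptions made for illustration, not the thesis's actual architecture.

```python
# Hedged sketch: self-supervised speech features -> holistic grade.
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
grader = torch.nn.Linear(encoder.config.hidden_size, 1)  # untrained head

waveform = torch.randn(16000 * 5)  # placeholder: 5 s of 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000,
                   return_tensors="pt")
with torch.no_grad():
    frames = encoder(**inputs).last_hidden_state  # (1, T, hidden_size)

score = grader(frames.mean(dim=1))  # mean-pool over time, then regress
print(f"predicted (untrained) grade: {score.item():.2f}")
```

In practice the head would be trained on graded learner speech, and analytic variants would attach separate heads (or separate models) for facets such as pronunciation or fluency.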
