11

Deep-learning for high dimensional sequential observations: application to continuous gesture recognition / Deep neural network modelling for continual learning of objects and gestures by a robot

Granger, Nicolas 10 January 2019
This thesis aims to improve the intuitiveness of human-computer interfaces. In particular, machines should try to replicate humans' ability to process streams of information continuously. However, the sub-domain of machine learning dedicated to recognition on time series still faces numerous challenges. Our studies use gesture recognition as an exemplar application: gestures intermix static body poses and movements in a complex manner, using widely different modalities. The first part of our work compares two state-of-the-art temporal models for continuous sequence recognition, namely hybrid Neural Network-Hidden Markov Models (NN-HMM) and Bidirectional Recurrent Neural Networks (BDRNN) with gated units. To do so, we reimplemented both within a shared test-bed that is more amenable to a fair comparative study. We propose adjustments to the neural network training losses and to the hybrid NN-HMM expressions to accommodate highly imbalanced data classes. Although recent publications tend to prefer BDRNNs, we demonstrate that hybrid NN-HMMs remain competitive. However, they rely significantly on their input layers to model short-term patterns. Finally, we show that input representations learned via both approaches are largely inter-compatible.
The second part of our work studies one-shot learning, which has received relatively little attention so far, in particular for sequential inputs such as gestures. We propose a model built around a Bidirectional Recurrent Neural Network. Its effectiveness is demonstrated on the recognition of isolated gestures from a sign language lexicon. We propose several improvements over this baseline by drawing inspiration from related work, and we evaluate their performance, exhibiting different advantages and disadvantages for each.
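The loss adjustment for imbalanced classes mentioned in this abstract can be illustrated with a minimal sketch. This is a hypothetical example, not the thesis's implementation: it weights a framewise cross-entropy by inverse class frequency, a common remedy when a blank "no gesture" class dominates continuous streams (class counts and shapes are assumed).

```python
import torch
import torch.nn as nn

# Hypothetical setup: framewise gesture labels dominated by a
# "no gesture" blank class (index 0), as in continuous gesture streams.
num_classes = 21
frame_counts = torch.tensor([50_000] + [500] * 20, dtype=torch.float)

# Inverse-frequency weights, normalized so their average is 1.
weights = frame_counts.sum() / (num_classes * frame_counts)

criterion = nn.CrossEntropyLoss(weight=weights)

# logits: (batch * time, classes); targets: (batch * time,)
logits = torch.randn(128, num_classes)
targets = torch.randint(0, num_classes, (128,))
loss = criterion(logits, targets)
print(loss.item())
```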
12

Methods of Handling Missing Data in One Shot Response Based Power System Control

Dahal, Niraj 08 1900
Indiana University-Purdue University Indianapolis (IUPUI) / The thesis extends the work done in [1] [2] by Rovnyak et al., where the authors describe transient event prediction and response-based one-shot control using decision trees trained and tested on a 176-bus model of the WECC power system network. This thesis contains results from rigorous simulations performed to measure the robustness of the existing one-shot control when subjected to missing PMU data ranging from 0-10%. The thesis can be divided into two parts: the first covers the work done in [2] using another set of one-shot control combinations labelled CC2, and the second measures their robustness under missing PMU data. Previous work in [2] uses decision trees for event detection based on different indices to classify a contingency as 'Fault' or 'No fault', and another set of decision trees that decides whether to actuate 'Control' or 'No control'. Actuation of control here means applying the one-shot control combination to possibly bring the system to a new equilibrium point where it would otherwise lose synchronism. The work in [2] also assesses the performance of the one-shot control without event detection. The thesis is organized as follows: Chapter 1 highlights the effect of missing PMU data in a power system network and the need to address it appropriately. It also provides a general idea of transient stability and the response to a transient fault in a power system. Chapter 2 forms the foundation of the thesis, as it describes the work done in [1] [2] in detail. It describes the power system model used, the contingency set, and the different indices used for the decision trees. It also describes the one-shot control combination (CC1) deduced by Rovnyak et al., whose performance is later tested in this thesis under different missing-data scenarios. In addition to CC1, the chapter describes another control combination (CC2) whose performance is tested under the same missing-data scenarios. The chapter also explains the control methodology used in [2]. Finally, the performance metrics of the DTs are explained at the end of the chapter; these are the same performance metrics used in [2] to measure the robustness of the one-shot control. Chapter 2 is thus largely a literature review of previous work, plus a few simulation results obtained from CC2 using exactly the same model and control methodology. Chapter 3 describes different techniques for handling missing PMU data, most of which have been used in, and are referred from, previous papers. Finally, Chapter 4 presents the results and analysis of the simulations. The thesis wraps up by explaining future enhancements and room for improvement.
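A rough sketch of the kind of experiment this abstract describes: train a decision-tree classifier, then evaluate it with a fraction of feature entries masked out and imputed. All data and names here are made up; this is not the thesis's code nor the WECC model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(0)

# Synthetic stand-in for PMU-derived indices (angles, frequencies, etc.).
X_train = rng.normal(size=(1000, 20))
y_train = (X_train[:, :3].sum(axis=1) > 0).astype(int)  # 'Control' vs 'No control'

clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)

# Simulate 0-10% missing PMU data on the test side, then mean-impute.
X_test = rng.normal(size=(200, 20))
y_test = (X_test[:, :3].sum(axis=1) > 0).astype(int)
for frac in (0.0, 0.05, 0.10):
    X_miss = X_test.copy()
    mask = rng.random(X_miss.shape) < frac
    X_miss[mask] = np.nan
    X_imp = SimpleImputer(strategy="mean").fit_transform(X_miss)
    print(f"missing={frac:.0%}  accuracy={clf.score(X_imp, y_test):.3f}")
```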
13

One Shot Object Detection : For Tracking Purposes

Verhulsdonck, Tijmen January 2017
One of the things augmented reality depends on is object tracking, a problem classically found in cinematography and security. However, the algorithms designed for the classical applications are often too computationally expensive or too complex to run on simpler mobile hardware. One way to do object tracking is with a trained neural network; this has already led to great results but unfortunately still runs into some of the same problems as the classical algorithms. For this reason a neural network designed specifically for object tracking on mobile hardware needs to be developed. This thesis proposes two different neural networks designed for object tracking on mobile hardware. Both are based on a siamese network structure, and methods to improve their accuracy using filtering are also introduced. The first network is a modified version of “CNN architecture for geometric matching” that utilizes an affine regression to perform object tracking. This network was shown to underperform in the MOT benchmark as well as the VOT benchmark and was therefore not developed further. The second network is an object detector based on “SqueezeDet” in a siamese network structure utilizing the performance-optimized layers of “MobileNets”. The accuracy of the object detector network is shown to be competitive in the VOT benchmark, placing 16th compared to trackers from the 2016 challenge. It was also shown to run in real time on mobile hardware. Thus the one-shot object detection network used for a tracking application can improve the experience of augmented reality applications on mobile hardware.
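To make the siamese structure concrete, here is a generic SiamFC-style sketch: embed a template crop and a search region with a shared backbone, then cross-correlate the template embedding over the search embedding to get a score map. This is an illustrative assumption, not the thesis's SqueezeDet/MobileNets detector; the toy backbone and sizes are invented.

```python
import torch
import torch.nn.functional as F

# Toy shared embedding network standing in for a mobile-friendly backbone.
embed = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU(),
)

template = torch.randn(1, 3, 64, 64)   # exemplar crop of the target
search = torch.randn(1, 3, 128, 128)   # larger search region

z = embed(template)                    # (1, 32, 16, 16)
x = embed(search)                      # (1, 32, 32, 32)

# Cross-correlate: the template embedding acts as a convolution kernel;
# the peak of the score map locates the target in the search region.
score_map = F.conv2d(x, z)             # (1, 1, 17, 17)
peak = score_map.flatten().argmax()
print(score_map.shape, peak)
```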
14

Detecting, Tracking, And Recognizing Activities In Aerial Video

Reilly, Vladimir 01 January 2012
In this dissertation, we address the problem of detecting humans and vehicles, tracking them in crowded scenes, and finally determining their activities in aerial video. Even though this is a well explored problem in the field of computer vision, many challenges still remain when one is presented with realistic data. These challenges include large camera motion, strong scene parallax, fast object motion, large object density, strong shadows, and insufficiently large action datasets. Therefore, we propose a number of novel methods based on exploiting scene constraints from the imagery itself to aid in the detection and tracking of objects. We show, via experiments on several datasets, that superior performance is achieved with the use of the proposed constraints. First, we tackle the problem of detecting moving, as well as stationary, objects in scenes that contain parallax and shadows. We do this on both regular aerial video and the new and challenging domain of wide area surveillance. This problem poses several challenges: large camera motion, strong parallax, a large number of moving objects, a small number of pixels on target, single channel data, and low video frame rate. We propose a method for detecting moving and stationary objects that overcomes these challenges, and evaluate it on the CLIF and VIVID datasets. In order to find moving objects, we use median background modelling, which requires few frames to obtain a workable model and is very robust when there is a large number of moving objects in the scene while the model is being constructed. We then remove false detections from parallax and registration errors using gradient information from the background image. Relying merely on motion to detect objects in aerial video may not be sufficient to provide complete information about the observed scene. First of all, objects that are permanently stationary may be of interest as well, for example to determine how long a particular vehicle has been parked at a certain location. Secondly, moving vehicles that are being tracked through the scene may sometimes stop and remain stationary at traffic lights and railroad crossings. These prolonged periods of non-motion make it very difficult for the tracker to maintain the identities of the vehicles. Therefore, there is a clear need for a method that can detect stationary pedestrians and vehicles in UAV imagery. This is a challenging problem due to the small number of pixels on the target, which makes it difficult to distinguish objects from background clutter and results in a much larger search space. We propose a method for constraining the search based on a number of geometric constraints obtained from the metadata. Specifically, we obtain the orientation of the ground plane normal, the orientation of the shadows cast by out of plane objects in the scene, and the relationship between object heights and the size of their corresponding shadows. We utilize the above information in a geometry-based shadow and ground plane normal blob detector, which provides an initial estimation for the locations of shadow casting out of plane (SCOOP) objects in the scene. These SCOOP candidate locations are then classified as either human or clutter using a combination of wavelet features and a Support Vector Machine. Additionally, we combine regular SCOOP and inverted SCOOP candidates to obtain vehicle candidates. We show impressive results on sequences from the VIVID and CLIF datasets, and provide comparative quantitative and qualitative analysis.
We also show that we can extend the SCOOP detection method to automatically estimate the orientation of the shadow in the image without relying on metadata. This is useful in cases where metadata is either unavailable or erroneous. Simply detecting objects in every frame does not provide sufficient understanding of the nature of their existence in the scene. It may be necessary to know how the objects have travelled through the scene over time and which areas they have visited. Hence, there is a need to maintain the identities of the objects across different time instances. The task of object tracking can be very challenging in videos that have low frame rate, high density, and a very large number of objects, as is the case in the WAAS data. Therefore, we propose a novel method for tracking a large number of densely moving objects in aerial video. In order to keep the complexity of the tracking problem manageable when dealing with a large number of objects, we divide the scene into grid cells, solve the tracking problem optimally within each cell using bipartite graph matching, and then link the tracks across the cells. Besides tractability, grid cells also allow us to define a set of local scene constraints, such as road orientation and object context. We use these constraints as part of the cost function to solve the tracking problem; this allows us to track fast-moving objects in low frame rate videos. In addition to moving through the scene, the humans that are present may be performing individual actions that should be detected and recognized by the system. A number of different approaches exist for action recognition in both aerial and ground level video. One of the requirements for the majority of these approaches is the existence of a sizeable dataset of examples of a particular action from which a model of the action can be constructed. Such a luxury is not always possible in aerial scenarios, since it may be difficult to fly a large number of missions to observe a particular event multiple times. Therefore, we propose a method for recognizing human actions in aerial video from as few examples as possible (a single example in the extreme case). We use the bag-of-words action representation and a 1vsAll multi-class classification framework. We assume that most of the classes have many examples, and construct Support Vector Machine models for each class. Then, we use the Support Vector Machines that were trained for classes with many examples to improve the decision function of the Support Vector Machine that was trained using few examples, via late weighted fusion of decision values.
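A minimal sketch of the per-cell assignment step described above: within one grid cell, detections in consecutive frames are matched by solving an optimal bipartite assignment. Here the cost is plain Euclidean distance; the dissertation's cost function additionally encodes constraints such as road orientation and object context.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Detections (x, y) inside one grid cell at frames t and t+1 (toy data).
prev_pts = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])
curr_pts = np.array([[42.0, 9.0], [11.0, 14.0], [26.0, 33.0]])

# Pairwise Euclidean distances as the assignment cost matrix.
cost = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=2)

rows, cols = linear_sum_assignment(cost)  # optimal bipartite matching
for r, c in zip(rows, cols):
    print(f"track {r} -> detection {c} (cost {cost[r, c]:.1f})")
```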
15

SELF-SUPERVISED ONE-SHOT LEARNING FOR AUTOMATIC SEGMENTATION OF GAN-GENERATED IMAGES

Ankit V Manerikar (16523988) 11 July 2023
Generative Adversarial Networks (GANs) have consistently defined the state-of-the-art in the generative modelling of high-quality images in several applications. The images generated using GANs, however, do not lend themselves to being directly used in supervised learning tasks without first being curated through annotations. This dissertation investigates how to carry out automatic on-the-fly segmentation of GAN-generated images and how this can be applied to the problem of producing high-quality simulated data for X-ray based security screening. The research exploits the hidden layer properties of GAN models in a self-supervised learning framework for the automatic one-shot segmentation of images created by a style-based GAN. The framework consists of a novel contrastive learner that is based on a Sinkhorn distance-based clustering algorithm and that learns a compact feature space for per-pixel classification of the GAN-generated images. This facilitates faster learning of the feature vectors for one-shot segmentation and allows on-the-fly automatic annotation of the GAN images. We have tested our framework on a number of standard benchmarks (CelebA, PASCAL, LSUN) to yield a segmentation performance that not only exceeds the semi-supervised baselines by an average wIoU margin of 1.02% but also improves the inference speeds by a factor of 4.5. This dissertation also presents BagGAN, an extension of our framework to the problem domain of X-ray based baggage screening. BagGAN produces annotated synthetic baggage X-ray scans to train machine-learning algorithms for the detection of prohibited items during security screening. We have compared the images generated by BagGAN with those created by deterministic ray-tracing models for X-ray simulation and have observed that our GAN-based baggage simulator yields a significantly improved performance in terms of image fidelity and diversity. The BagGAN framework is also tested on the PIDRay and other baggage screening benchmarks to produce segmentation results comparable to their respective baseline segmenters based on manual annotations.
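For background on the Sinkhorn distance underlying the clustering algorithm mentioned above, here is a generic Sinkhorn-Knopp sketch for entropy-regularized optimal transport between features and cluster centroids. This is a textbook illustration under assumed toy data, not the dissertation's contrastive learner.

```python
import numpy as np

def sinkhorn(cost, eps=0.1, iters=200):
    """Entropy-regularized optimal transport between two uniform marginals
    via Sinkhorn-Knopp iterations; returns the plan and the distance."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / eps)              # Gibbs kernel
    u = np.ones(n)
    for _ in range(iters):
        v = b / (K.T @ u)                # alternate marginal projections
        u = a / (K @ v)
    P = np.diag(u) @ K @ np.diag(v)      # coupling with desired marginals
    return P, float(np.sum(P * cost))

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))         # toy per-pixel features
cents = rng.normal(size=(4, 16))         # toy cluster centroids
cost = ((feats[:, None] - cents[None]) ** 2).sum(-1)
cost /= cost.max()                       # normalize to avoid underflow in exp
P, d = sinkhorn(cost)
print(P.sum(axis=1))                     # each row sums to 1/8: soft assignments
print(f"Sinkhorn distance: {d:.4f}")
```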
16

INFERENCE FOR ONE-SHOT DEVICE TESTING DATA

Ling, Man Ho 10 1900
In this thesis, inferential methods for one-shot device testing data from accelerated life-tests are developed. Due to constraints on time and budget, accelerated life-tests are commonly used to induce more failures within a reasonable amount of test time, thereby obtaining more lifetime information that is especially useful in reliability analysis. One-shot devices, which can be used only once as they are destroyed immediately after testing, yield observations only on their condition and not on their actual lifetimes. So, only binary response data are observed from a one-shot device testing experiment. Since no failure times of units are observed, we use the EM algorithm for determining the maximum likelihood estimates of the model parameters. Inference for the reliability at a mission time and the mean lifetime under normal operating conditions is also developed.

The thesis proceeds as follows. Chapter 2 considers the exponential distribution with a single-stress relationship and develops inferential methods for the model parameters, the reliability, and the mean lifetime. The results obtained by the EM algorithm are compared with those obtained from the Bayesian approach. A one-shot device testing dataset is analyzed by the proposed method and presented as an illustrative example. Next, in Chapter 3, the exponential distribution with a multiple-stress relationship is considered and the corresponding inferential results are developed. The jackknife technique is described for bias reduction in the developed estimates. Interval estimation for the reliability and the mean lifetime is also discussed based on the observed information matrix, the jackknife technique, the parametric bootstrap method, and a transformation technique. Again, we present an example to illustrate all the inferential methods developed in this chapter. Chapter 4 considers point and interval estimation for one-shot device testing data under the Weibull distribution with a multiple-stress relationship and illustrates the application of the proposed methods in a study involving the development of tumors in mice with respect to risk factors such as sex, strain of offspring, and dose effects of benzidine dihydrochloride. A Monte Carlo simulation study is also carried out to evaluate the performance of the EM estimates for different levels of reliability and different sample sizes. Chapter 5 describes a general algorithm for determining the optimal design of an accelerated life-test plan for a one-shot device testing experiment, based on the asymptotic variance of the estimated reliability at a specific mission time. A numerical example is presented to illustrate the application of the algorithm. Finally, Chapter 6 presents some concluding remarks and additional research problems that would be of interest for further study. / Doctor of Philosophy (PhD)
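A worked toy version of the EM idea in this abstract, stripped of the stress-factor relationships the thesis actually models: all devices share one exponential rate and are inspected once at time tau, so only failure/survival is seen. The E-step replaces each unobserved lifetime by its conditional expectation; the M-step re-estimates the rate.

```python
import numpy as np

# Toy one-shot test: n devices inspected once at time tau; we observe
# only failure (True) or survival (False), never the lifetime itself.
rng = np.random.default_rng(1)
true_rate, tau, n = 0.5, 2.0, 5000
failed = rng.exponential(1 / true_rate, n) <= tau   # binary outcomes

lam = 1.0                                           # initial guess
for _ in range(100):
    # E-step: expected lifetime given the binary outcome (exponential case):
    #   E[T | T <= tau] = 1/lam - tau*exp(-lam*tau) / (1 - exp(-lam*tau))
    #   E[T | T >  tau] = tau + 1/lam   (memorylessness)
    e_fail = 1 / lam - tau * np.exp(-lam * tau) / (1 - np.exp(-lam * tau))
    e_surv = tau + 1 / lam
    total = failed.sum() * e_fail + (~failed).sum() * e_surv
    # M-step: MLE of the exponential rate given the expected complete data.
    lam = n / total

print(f"true rate {true_rate}, EM estimate {lam:.3f}")
```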
17

One-shot pattern projection for dense and accurate 3D reconstruction in structured light

Fernández Navarro, Sergio 22 June 2012
This thesis focuses on the problem of 3D acquisition using coded structured light (CSL). In CSL, a projected pattern imprints artificial texture onto the object surface, increasing the number of correspondences in the retrieved image; 3D acquisition is then obtained by triangulation. Active research is being done on CSL techniques for moving scenarios. In this thesis, a review of the main CSL approaches is presented. Afterwards, we perform a deep study of the two most used frequency-based techniques and propose a new method for automatic selection of the window width in the Windowed Fourier Transform (WFT). Building on this analysis, we implemented a new technique for one-shot dense acquisition, able to work in moving scenarios. The technique is based on an adaptive WFT and DeBruijn coding. The results show that the proposed method obtains dense acquisition with accuracy levels comparable to DeBruijn algorithms. Finally, the thesis addresses the problem of registration in SL.
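For reference, the windowed Fourier transform at the heart of the frequency analysis can be written as below; the window width sigma is the quantity whose automatic selection the thesis addresses. The notation is assumed here, not taken from the thesis.

```latex
% Windowed Fourier transform of the captured pattern f along a scanline,
% with window g_sigma centered at position u and analysis frequency xi.
Sf(u,\xi) = \int_{-\infty}^{\infty} f(x)\, g_{\sigma}(x-u)\, e^{-i\xi x}\, dx
```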
18

Time Management In Partitioned Systems

Kodancha, A Hariprasad 10 1900
Time management is one of the critical modules of safety-critical systems. Applications need strong assurance from the operating system that their hard real-time requirements are met. Partitioned systems have recently evolved as a means to protect safety-critical applications running on an avionics computer resource. Each partition has an application running strictly for a specified duration, and these applications use the CPU on a cyclic basis. Applications running on real-time systems request the services of time management in one way or another: an application may request a time-out while waiting for a resource, may voluntarily relinquish the CPU for some delay time, or may have a deadline before which it is expected to complete its tasks. These requests must be handled in a deterministic and accurate way with low overheads. Time management within an operating system uses hardware timers to service time-out requests. The three well-known approaches for handling timer requests are tick-based, one-shot, and firm timers. Traditionally, the tick-based approach has been the most popular; it relies on a periodic interrupt timer but has poor accuracy. The one-shot timer approach provides better accuracy, as the timer interrupt can be generated exactly when required. Firm timers use soft timers in combination with a one-shot timer, wherein expired timers are checked at strategic points in the kernel. The thesis compares the performance of these three approaches for partitioned systems and provides insight into their suitability. It presents tick-based and one-shot timer algorithms that handle time-out requests of real-time applications running on a partitioned system while adhering to time-partitioning rules, and compares their performance. It also presents a one-shot timer algorithm named hierarchical multiple linked lists, and the experimental results prove that this algorithm performs better than other conventional linked-list-based one-shot timer algorithms. The thesis also analyzes the timing behavior of real-time applications in partitioned systems. The hard real-time system under consideration is an avionics system, and an indigenously developed ARINC-653 compliant real-time operating system has been used to measure the performance.
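To illustrate the one-shot approach described above, here is a generic sketch of a one-shot timer queue: pending time-outs sit in a min-heap and the hardware timer is programmed only for the earliest expiry, so interrupts fire exactly when needed rather than on every tick. This is a simplified stand-in, not the thesis's hierarchical multiple-linked-list algorithm, and `program_hw_timer` is an assumed platform hook.

```python
import heapq

class OneShotTimerQueue:
    """Keep pending time-outs in a min-heap and (re)program the hardware
    timer only for the earliest expiry."""

    def __init__(self, program_hw_timer):
        self._heap = []                    # entries: (expiry, seq, callback)
        self._seq = 0                      # tie-breaker for equal expiries
        self._program = program_hw_timer   # platform hook (assumed)

    def add(self, expiry, callback):
        rearm = not self._heap or expiry < self._heap[0][0]
        heapq.heappush(self._heap, (expiry, self._seq, callback))
        self._seq += 1
        if rearm:
            self._program(expiry)          # fire exactly when needed

    def on_interrupt(self, now):
        # Service every expired request, then re-arm for the next deadline.
        while self._heap and self._heap[0][0] <= now:
            _, _, cb = heapq.heappop(self._heap)
            cb()
        if self._heap:
            self._program(self._heap[0][0])

q = OneShotTimerQueue(lambda t: print(f"hardware timer armed for t={t}"))
q.add(5, lambda: print("partition A: time-out on resource wait"))
q.add(3, lambda: print("partition B: delay elapsed"))
q.on_interrupt(now=3)
```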
19

Some Inferential Results for One-Shot Device Testing Data Analysis

So, Hon Yiu January 2016
In this thesis, we develop some inferential results for one-shot device testing data analysis. These extend and generalize existing methods in the literature. First, a competing-risk model is introduced for one-shot testing data under accelerated life-tests. One-shot devices are products which will be destroyed immediately after use. Therefore, we can observe only a binary status, success or failure, of such products instead of their lifetimes. Many one-shot devices contain multiple components, and failure of any one of them will lead to the failure of the device. Failed devices are inspected to identify the specific cause of failure. Since the exact lifetime is not observed, the EM algorithm becomes a natural tool to obtain the maximum likelihood estimates of the model parameters. Here, we develop the EM algorithm for competing exponential and Weibull cases. Second, a semi-parametric approach is developed for simple one-shot device testing data. A semi-parametric model consists of parametric and non-parametric components. For this purpose, we only assume that the hazards at different stress levels are proportional to each other, but no distributional assumption is made on the lifetimes. This provides greater flexibility in model fitting and enables us to examine the relationship between the reliability of devices and the stress factors. Third, Bayesian inference is developed for one-shot device testing data under the exponential distribution and the Weibull distribution with non-constant shape parameters for competing risks. The Bayesian framework provides statistical inference from another perspective. It assumes the model parameters to be random and then improves the inference by incorporating experts' experience as prior information. This method is shown to be very useful if we have limited failure observations, wherein the maximum likelihood estimator may not exist. The thesis proceeds as follows. In Chapter 2, we assume the one-shot devices to have two components with lifetimes having exponential distributions with multiple stress factors. We then develop an EM algorithm for likelihood inference for the model parameters as well as some useful reliability characteristics. In Chapter 3, we generalize to the situation when lifetimes follow a Weibull distribution with non-constant shape parameters. In Chapter 4, we propose a semi-parametric model for simple one-shot device test data based on the proportional hazards model and develop associated inferential results. In Chapter 5, we consider the competing-risk model with exponential lifetimes and develop inference by adopting the Bayesian approach. In Chapter 6, we generalize these results on Bayesian inference to the situation when the lifetimes have a Weibull distribution. Finally, we provide some concluding remarks and indicate some future research directions in Chapter 7. / Thesis / Doctor of Philosophy (PhD)
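The proportional hazards assumption in the semi-parametric part can be made concrete: the hazard at stress level x scales a common baseline, which links reliabilities across stress levels without specifying any lifetime distribution. This is standard Cox-type notation, assumed here rather than quoted from the thesis.

```latex
% Hazard under stress covariates x: a baseline h_0(t) scaled by the
% stress effect; reliabilities are then linked by a power relation.
h(t \mid x) = h_0(t)\, e^{\beta^\top x}
\qquad\Longrightarrow\qquad
R(t \mid x) = R_0(t)^{\exp(\beta^\top x)}
```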
20

Interactive quantum information theory

Touchette, Dave 04 1900
Quantum information theory has developed tremendously over the past two decades, with analogues and extensions of the source coding and channel coding theorems for unidirectional communication. Meanwhile, for interactive communication, a quantum analogue of communication complexity has been developed, for which quantum protocols can provide exponential savings over the best possible classical protocols for some classical tasks. However, quantum information is much more sensitive to noise than classical information. It is therefore essential to make the best use possible of quantum resources. In this thesis, we take an information-theoretic point of view on interactive quantum protocols and study the interactive analogues of source compression and noisy channel coding. The setting we consider is that of quantum communication complexity: Alice and Bob want to perform some joint quantum computation while minimizing the required amount of communication. Local computation is deemed free. Our results are split into three distinct chapters, and these are organized in such a way that each can be read independently. Given its central role in the context of interactive compression, we devote a chapter to the task of quantum state redistribution. In particular, we prove lower bounds on its communication cost that are robust in the context of interactive communication. We also prove one-shot, one-message achievability bounds. In a subsequent chapter, we define a new, fully quantum notion of information cost for interactive protocols and a corresponding notion of information complexity for bipartite tasks. It characterizes how much quantum information, rather than quantum communication, Alice and Bob must exchange in order to implement a given bipartite task. We prove many structural properties for these quantities, and provide an operational interpretation for quantum information complexity as the amortized quantum communication complexity. In the special case of classical inputs, we provide an alternate characterization of information cost that provides an answer to the following question about quantum protocols: what is the cost of forgetting classical information? Two applications are presented: the first general multi-round direct-sum theorem for quantum protocols, and a tight lower bound, up to polylogarithmic terms, for the bounded-round quantum communication complexity of the disjointness function. In a final chapter, we initiate the study of the interactive quantum capacity of noisy channels. Since techniques to distribute entanglement are well-studied, we focus on a model with perfect pre-shared entanglement and noisy classical communication. We show that even in the harder setting of adversarial errors, we can tolerate a provably maximal error rate of one half minus epsilon, for an arbitrarily small epsilon greater than zero, at positive communication rates. It then follows that random noise channels with positive capacity for unidirectional transmission also have positive interactive quantum capacity. We conclude with a discussion of our results and further research directions in interactive quantum information theory.
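The operational interpretation mentioned in the abstract can be stated compactly: the quantum information complexity of a task equals its amortized quantum communication complexity. The notation below is assumed for illustration (T a bipartite task, epsilon the allowed error), not copied from the thesis.

```latex
% Quantum information complexity as amortized quantum communication:
% the per-copy cost of implementing T over many parallel copies.
\mathrm{QIC}(T, \varepsilon)
  \;=\; \lim_{n \to \infty} \frac{\mathrm{QCC}\!\left(T^{\otimes n}, \varepsilon\right)}{n}
```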
