11

Early Social Communication Vulnerabilities of Children at Genetic Risk for Autism Spectrum Disorder

Lisa R. Hamrick (8941913) 26 July 2022 (has links)
<p>Early detection and characterization of autism spectrum disorder (ASD) may be improved by incorporating ecologically valid methods into ASD screening and assessment, capitalizing on prospective monitoring of high-risk populations, and targeting highly informative ASD features that emerge early in development. The present study aims to address these needs by characterizing early vocal and pre-linguistic communication features present during naturalistic behavior samples of young children with neurogenetic syndromes (NGS). Participants were 39 children aged 5-30 months diagnosed with an NGS and 39 children aged 4-26 months at low risk for developmental delays. Participants completed a daylong audio recording of child vocalizations from which measures of early vocal features (child vocalization rate, canonical babbling ratio, and pitch variability) were obtained. Participants and their mothers also completed an unstructured play-based task during which pre-linguistic communicative features (communication complexity and function) were coded. We first used Bayesian analyses to compare the early vocal and pre-linguistic communication features of children with NGS to those of children at low risk for developmental delays. Children with NGS produced less canonical babble and showed lower communication complexity, both overall and for behaviors serving joint attention. Next, we conducted a cluster analysis of early vocal and pre-linguistic communication features using the full sample of NGS and low-risk participants. The selected model identified 6 clusters that were primarily differentiated by canonical babbling and communicative function. These clusters differentiated participants beyond risk status, chronological age, and adaptive age. Furthermore, certain clusters reflected differences in adaptive communication and socialization skills that may be relevant to early ASD profiles. 
These findings suggest that canonical babble and communicative function provide meaningful information about early developmental risk and may be useful to incorporate into the ASD screening and diagnostic processes.</p>
12

Space in Proof Complexity

Vinyals, Marc January 2017 (has links)
Propositional proof complexity is the study of the resources that are needed to prove formulas in propositional logic. In this thesis we are concerned with the size and space of proofs, and in particular with the latter. Different approaches to reasoning are captured by corresponding proof systems. The simplest and most well studied proof system is resolution, and we try to get our understanding of other proof systems closer to that of resolution. In resolution we can prove a space lower bound just by showing that any proof must have a large clause. We prove a similar relation between resolution width and polynomial calculus space that lets us derive space lower bounds, and we use it to separate degree and space. For cutting planes we show length-space trade-offs. That is, there are formulas that have a proof in small space and a proof in small length, but there is no proof that can optimize both measures at the same time. We introduce a new measure of space, cumulative space, that accounts for the space used throughout a proof rather than only its maximum. This is exploratory work, but we can also prove new results for the usual space measure. We define a new proof system that aims to capture the power of current SAT solvers, and we show a landscape of length-space trade-offs comparable to those in resolution. To prove these results we build and use tools from other areas of computational complexity. One area is pebble games, very simple computational models that are useful for modelling space. In addition to results with applications to proof complexity, we show that pebble game cost is PSPACE-hard to approximate. Another area is communication complexity, the study of the amount of communication that is needed to solve a problem when its input is shared by multiple parties. We prove a simulation theorem that relates the query complexity of a function with the communication complexity of a composed function. / <p>QC 20170509</p>
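The pebble games mentioned above are concrete enough to sketch in code. The following is a minimal, hypothetical illustration (not from the thesis) of the black pebble game: a brute-force search for the pebbling price of a small DAG, the quantity whose approximation the thesis shows to be PSPACE-hard.

```python
from collections import deque

def pebbling_price(preds, target):
    """Minimum number of pebbles needed to pebble `target` in the
    black pebble game on a DAG.

    `preds` maps each vertex to the tuple of its predecessors (sources
    map to the empty tuple). A pebble may be placed on a vertex only
    when all its predecessors carry pebbles; any pebble may be removed
    at any time. BFS over configurations with increasing pebble
    budgets, so the first budget that reaches `target` is optimal.
    """
    vertices = list(preds)
    for budget in range(1, len(vertices) + 1):
        seen = {frozenset()}
        queue = deque(seen)
        while queue:
            conf = queue.popleft()
            if target in conf:
                return budget
            successors = []
            for v in vertices:          # place a pebble
                if v not in conf and all(p in conf for p in preds[v]):
                    successors.append(conf | {v})
            for v in conf:              # remove a pebble
                successors.append(conf - {v})
            for nxt in successors:
                if len(nxt) <= budget and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return None
```

On the two-level pyramid (two sources feeding one sink) the price is 3; larger pyramid graphs are the classical family on which pebbling price grows, which is what makes pebbling useful for modelling space.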
13

New bounds for information complexity and quantum query complexity via convex optimization tools

Brandeho, Mathieu 28 September 2018 (has links) (PDF)
This thesis collects three works on information complexity and quantum query complexity. These fields share the mathematical tools used to study them, namely optimization problems. The first two works concern quantum query complexity and generalize the following important result: in [LMRSS11], the authors characterize quantum query complexity by means of the adversary method, a semidefinite program introduced by A. Ambainis in [Ambainis2000]. However, this characterization is restricted to discrete-time models with bounded error. The first work therefore generalizes their result to continuous-time models, while the second is a (not yet complete) attempt to characterize quantum query complexity in the exact and unbounded-error settings. In the first work, to characterize quantum query complexity in continuous-time models, we adapt the discrete-time proof by constructing a universal adiabatic query algorithm. The algorithm rests on the adiabatic theorem [Born1928] together with an optimal solution of the dual of the adversary method. Notably, the running-time analysis of our adiabatic algorithm is based on a proof that does not require a gap in the spectrum of the Hamiltonian. In the second work, we seek to characterize quantum query complexity with unbounded or zero error. To that end we revisit and improve the adversary method through a Lagrangian-mechanics approach, constructing a Lagrangian that indicates the number of queries needed to move through phase space, which allows us to define a "query action". Since this Lagrangian is expressed as a semidefinite program, its classical study via the Euler-Lagrange equations requires the envelope theorem, a powerful tool from mathematical economics. The last work, further afield, concerns information complexity (and, by extension, communication complexity) for simulating nonlocal correlations: more precisely, the amount of (Shannon) information that two parties must exchange to obtain these correlations. To this end we define a new complexity measure, the zero information complexity IC_0, via the model without communication. This complexity has the advantage of being expressed as a convex optimization problem. For the CHSH correlations, we solve the optimization problem in the one-direction case, recovering a known result. For the two-direction scenario, we give numerical evidence for the validity of this bound and solve a relaxed form of IC_0, which is a new result. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
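For context, the adversary bound that recurs above can be stated compactly (a standard formulation, not a restatement of the thesis's proofs). For a function f on a domain D contained in {0,1}^n, the negative-weight adversary bound is

```latex
\mathrm{ADV}^{\pm}(f) \;=\; \max_{\Gamma \neq 0} \frac{\lVert \Gamma \rVert}{\max_{i}\, \lVert \Gamma \circ \Delta_i \rVert},
```

where Γ ranges over nonzero symmetric matrices indexed by the inputs with Γ[x,y] = 0 whenever f(x) = f(y); Δ_i[x,y] = 1 exactly when x_i ≠ y_i; ∘ is the entrywise product; and ‖·‖ is the spectral norm. The result of [LMRSS11] generalized in the first work is that the bounded-error quantum query complexity satisfies Q(f) = Θ(ADV±(f)), and the maximization above can be written as a semidefinite program.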
14

Échantillonnage des distributions continues non uniformes en précision arbitraire et protocole pour l'échantillonnage exact distribué des distributions discrètes quantiques

Gravel, Claude 03 1900 (has links)
The thesis is divided mainly into two parts. Chapters 2 and 3 contain the first part; chapters 4 and 5 contain the second. The first part is about sampling non-uniform continuous distributions with a given level of precision. Knuth and Yao showed in 1976 how to sample exactly any discrete distribution using a source of unbiased, independently and identically distributed bits. 
The first part of this thesis extends the theory of Knuth and Yao to non-uniform continuous distributions once the precision is fixed. A lower bound and upper bounds for generic algorithms based on discretization or inversion are given as well. In addition, a new simple proof of the original result of Knuth and Yao is given here. The second part is about the solution of a problem in communication complexity that originally appeared within the field of quantum information science. Given a network of N computers with a server capable of generating random unbiased bits and a parametric discrete distribution with a vector of N real parameters, where each computer owns one and only one parameter, a protocol to sample the distribution exactly in a distributed manner is given here.
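The Knuth-Yao construction referred to above is simple enough to sketch. The following is a minimal illustration of the original discrete case (not the thesis's continuous generalization): it walks the discrete distribution generating (DDG) tree implicitly, consuming one unbiased bit per level; outcome i owns one leaf at depth d for each 1 in the binary expansion of p_i.

```python
from fractions import Fraction

def knuth_yao_sample(probs, next_bit):
    """Sample index i with probability probs[i] (exact Fractions summing
    to 1) using only unbiased random bits from `next_bit`.

    At each depth, outcomes whose current binary digit is 1 claim the
    top-most states as leaves; any remaining state descends one more
    level. Expected bit consumption is within 2 bits of the entropy
    of the distribution.
    """
    assert sum(probs) == 1
    state, nodes = 0, 1
    depth = 0
    while True:
        state = 2 * state + next_bit()   # descend one level
        nodes *= 2
        depth += 1
        for i, p in enumerate(probs):
            # d-th binary digit of p (p is a nonnegative Fraction < 1)
            if int(p * 2 ** depth) % 2 == 1:
                nodes -= 1               # outcome i claims this leaf
                if state == nodes:
                    return i
```

For example, with probs = [3/4, 1/4], feeding the bit sequence 1 yields outcome 0 after a single bit, while 0,0 yields outcome 1 and 0,1 yields outcome 0, reproducing the probabilities 3/4 and 1/4 exactly.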
15

Computations on Massive Data Sets : Streaming Algorithms and Two-party Communication / Calculs sur des grosses données : algorithmes de streaming et communication entre deux joueurs

Konrad, Christian 05 July 2013 (has links)
In this PhD thesis, we consider two computational models that address problems that arise when processing massive data sets. The first model is the Data Streaming Model. When processing massive data sets, random access to the input data is very costly. Therefore, streaming algorithms only have restricted access to the input data: They sequentially scan the input data once or only a few times. In addition, streaming algorithms use a random access memory of sublinear size in the length of the input. Sequential input access and sublinear memory are drastic limitations when designing algorithms. The major goal of this PhD thesis is to explore the limitations and the strengths of the streaming model. The second model is the Communication Model. When data is processed by multiple computational units at different locations, then the message exchange of the participating parties for synchronizing their calculations is often a bottleneck. The amount of communication should hence be as little as possible. A particular setting is the one-way two-party communication setting. Here, two parties collectively compute a function of the input data that is split among the two parties, and the whole message exchange reduces to a single message from one party to the other one. We study the following four problems in the context of streaming algorithms and one-way two-party communication: (1) Matchings in the Streaming Model. We are given a stream of edges of a graph G=(V,E) with n=|V|, and the goal is to design a streaming algorithm that computes a matching using a random access memory of size O(n polylog n). 
The Greedy matching algorithm fits into this setting and computes a matching of size at least 1/2 times the size of a maximum matching. A long standing open question is whether the Greedy algorithm is optimal if no assumption about the order of the input stream is made. We show that it is possible to improve on the Greedy algorithm if the input stream is in uniform random order. Furthermore, we show that with two passes an approximation ratio strictly larger than 1/2 can be obtained if no assumption on the order of the input stream is made. (2) Semi-matchings in Streaming and in Two-party Communication. A semi-matching in a bipartite graph G=(A,B,E) is a subset of edges that matches all A vertices exactly once to B vertices, not necessarily in an injective way. The goal is to minimize the maximal number of A vertices that are matched to the same B vertex. We show that for any 0<=ε<=1, there is a one-pass streaming algorithm that computes an O(n^((1-ε)/2))-approximation using Ô(n^(1+ε)) space. Furthermore, we provide upper and lower bounds on the two-party communication complexity of this problem, as well as new results on the structure of semi-matchings. (3) Validity of XML Documents in the Streaming Model. An XML document of length n is a sequence of opening and closing tags. A DTD is a set of local validity constraints of an XML document. We study streaming algorithms for checking whether an XML document fulfills the validity constraints of a given DTD. Our main result is an O(log n)-pass streaming algorithm with 3 auxiliary streams and O(log^2 n) space for this problem. Furthermore, we present one-pass and two-pass sublinear space streaming algorithms for checking validity of XML documents that encode binary trees. (4) Budget-Error-Correcting under Earth-Mover-Distance. We study the following one-way two-party communication problem. Alice and Bob have sets of n points on a d-dimensional grid [Δ]^d for an integer Δ. 
Alice sends a small sketch of her points to Bob and Bob adjusts his point set towards Alice's point set so that the Earth-Mover-Distance of Bob's points and Alice's points decreases. For any k>0, we show that there is an almost tight randomized protocol with communication cost Ô(kd) such that Bob's adjustments lead to an O(d)-approximation compared to the k best possible adjustments that Bob could make.
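The Greedy baseline discussed in (1) fits in a few lines; the sketch below is an illustration of that baseline, not the thesis's improved algorithm. It keeps an edge whenever both endpoints are still free, so the output is a maximal matching and hence a 1/2-approximation, using memory proportional to n rather than to the stream length.

```python
def greedy_stream_matching(edge_stream):
    """One-pass Greedy matching over an edge stream.

    Keep an edge iff both endpoints are currently unmatched. The result
    is a maximal matching, so its size is at least half the size of a
    maximum matching; only the set of matched vertices is stored.
    """
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching
```

On the path 1-2-3-4, the stream order decides whether Greedy attains the optimum (the order (1,2),(3,4),(2,3) yields two edges) or exactly half of it (the order (2,3),(1,2),(3,4) yields only (2,3)); this order sensitivity is what the random-order result above exploits.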
17

Secret Key Generation in the Multiterminal Source Model : Communication and Other Aspects

Mukherjee, Manuj January 2017 (has links) (PDF)
This dissertation is primarily concerned with the communication required to achieve secret key (SK) capacity in a multiterminal source model. The multiterminal source model introduced by Csiszár and Narayan consists of a group of remotely located terminals with access to correlated sources and a noiseless public channel. The terminals wish to secure their communication by agreeing upon a group secret key. The key agreement protocol involves communicating over the public channel, and agreeing upon an SK secured from eavesdroppers listening to the public communication. The SK capacity, i.e., the maximum rate of an SK that can be agreed upon by the terminals, has been characterized by Csiszár and Narayan. Their capacity-achieving key generation protocol involved terminals communicating to attain omniscience, i.e., every terminal gets to recover the sources of the other terminals. While this is a very general protocol, it often requires larger rates of public communication than is necessary to achieve SK capacity. The primary focus of this dissertation is to characterize the communication complexity, i.e., the minimum rate of public discussion needed to achieve SK capacity. A lower bound to communication complexity is derived for a general multiterminal source, although it turns out to be loose in general. While the minimum rate of communication for omniscience is always an upper bound to the communication complexity, we derive tighter upper bounds to communication complexity for a special class of multiterminal sources, namely, the hypergraphical sources. This upper bound yields a complete characterization of hypergraphical sources where communication for omniscience is a rate-optimal protocol for SK generation, i.e., the communication complexity equals the minimum rate of communication for omniscience. 
Another aspect of the public communication touched upon by this dissertation is the necessity of omnivocality, i.e., all terminals communicating, to achieve the SK capacity. It is well known that in two-terminal sources, only one terminal communicating suffices to generate a maximum-rate secret key. However, we are able to show that for three or more terminals, omnivocality is indeed required to achieve SK capacity if a certain condition is met. For the specific case of three terminals, we show that this condition is also necessary to ensure omnivocality is essential in generating an SK of maximal rate. However, this condition is no longer necessary when there are four or more terminals. A certain notion of common information, namely, the Wyner common information, plays a central role in the communication complexity problem. This dissertation thus includes a study of multiparty versions of the two widely used notions of common information, namely, Wyner common information and Gács-Körner (GK) common information. While evaluating these quantities is difficult in general, we are able to derive explicit expressions for both types of common information in the case of hypergraphical sources. We also study fault-tolerant SK capacity in this dissertation. The maximum rate of SK that can be generated even if an arbitrary subset of terminals drops out is called the fault-tolerant SK capacity. Now, suppose we have a fixed number of pairwise SKs. How should one distribute them among pairs of terminals to ensure good fault-tolerance behavior in generating a group SK? We show that the distribution of the pairwise keys according to a Harary graph provides a certain degree of fault tolerance, and bounds are obtained on its fault-tolerant SK capacity.
18

Feasibility, Efficiency, and Robustness of Secure Computation

Hai H Nguyen (14206922) 02 December 2022 (has links)
<p>Secure computation allows mutually distrusting parties to compute over private data. Such collaborations have widespread applications in social, scientific, commercial, and security domains. However, the overhead of achieving security is a major bottleneck to the adoption of such technologies. In this context, this thesis aims to design the most secure protocol within budgeted computational or network resources by mathematically formulating it as an optimization problem. </p> <p>With the rise in CPU power and cheap RAM, the offline-online model for secure computation has become the prominent model for real-world security systems. This thesis investigates the above-mentioned optimization problem in the information-theoretic offline-online model. In particular, this thesis presents the following selected sample of our research in greater detail. </p> <p>Round and Communication Complexity: Chor-Kushilevitz-Beaver characterized the round and communication complexity of secure two-party computation. Since then, the case of functions with randomized output remained unexplored. We proved the decidability of determining these complexities. Next, if such a protocol exists, we construct the optimal protocol; otherwise, we present an obstruction to achieving security. </p> <p>Rate and Capacity of secure computation: The efficiency of converting the offline samples into secure computation during the online phase is essential. However, investigating this "production rate" for general secure computations seems analytically intractable. Towards this objective, we introduce a new model of secure computation -- one without any communication -- that has several practical applications. We lay the mathematical foundations of formulating rate and capacity questions in this framework. Our research identifies the first tight rate and capacity results (a la Shannon) in secure computation. 
</p> <p>Reverse multiplication embedding: We identify a new problem in algebraic complexity theory that unifies several efficiency objectives in cryptography. Reverse multiplication embedding seeks to implement as many (base field) multiplications as possible using one extension field multiplication. We present an optimal construction using algebraic function fields. This embedding has subsequently led to efficiency improvements in secure computation, homomorphic encryption, proof systems, and leakage-resilient cryptography. </p> <p>Characterizing the robustness to side-channel attacks: Side-channel attacks present a significant threat to the offline phase. We introduce the cryptographic analog of common information to characterize the offline phase's robustness quantitatively. We build a framework for security and attack analysis. In the context of robust threshold cryptography, we present a state-of-the-art attack, threat assessment, and security fix for Shamir's secret-sharing. </p>
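As background for the threshold-cryptography result mentioned last, here is a textbook sketch of Shamir's secret sharing, the scheme whose attack and fix the thesis presents. This is the classical construction only (the prime and parameters are illustrative, and the thesis's attack and repair are not reproduced here).

```python
import random

P = 2**61 - 1  # prime modulus; illustrative choice, not from the thesis

def share(secret, t, n, rng=random):
    """Split `secret` into n shares; any t of them reconstruct it, while
    t-1 reveal nothing. Shares are points on a random degree-(t-1)
    polynomial over GF(P) whose constant term is the secret."""
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret by Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xj, yj in shares:
        num = den = 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat)
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

With t = 3 and n = 5, any three of the five shares recover the secret; this redundancy is exactly what gives threshold schemes their fault-tolerance, and also what a side-channel adversary tries to erode.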
19

Impacts of Complexity and Timing of Communication Interruptions on Visual Detection Tasks

Stader, Sally 01 January 2014 (has links)
Auditory preemption theory suggests two competing assumptions for the attention-capturing and performance-altering properties of auditory tasks. In onset preemption, attention is immediately diverted to the auditory channel. Strategic preemption involves a decision process in which the operator maintains focus on more complex auditory messages. The limitation in this process is that the human auditory, or echoic, memory store has a limit of 2 to 5 seconds, after which the message must be processed or it decays. In contrast, multiple resource theory suggests that visual and auditory tasks may be efficiently time-shared because two different pools of cognitive resources are used. Previous research regarding these competing assumptions has been limited and equivocal. Thus, the current research focused on systematically examining the effects of complexity and timing of communication interruptions on visual detection tasks. It was hypothesized that both timing and complexity levels would impact detection performance in a multi-task environment. Study 1 evaluated the impact of complexity and timing of communications occurring before malfunctions in an ongoing visual detection task. Twenty-four participants were required to complete each of the eight timing blocks that included simple or complex communications occurring simultaneously, and at 2, 5, or 8 seconds before detection events. For simple communications, participants repeated three pre-recorded words. However, for complex communications, they generated three words beginning with the same last letter of a word prompt. Results indicated that complex communications at two seconds or less occurring before a visual detection event significantly impacted response time with a 1.3 to 1.6 second delay compared to all the other timings. Detection accuracy for complex communication tasks under the simultaneous condition was significantly degraded compared to simple communications at five seconds or more prior to the task. 
This resulted in a 20% decline in detection accuracy. Additionally, participants' workload ratings for complex communications were significantly higher than for simple communications. Study 2 examined the timing of communications occurring at the corresponding seconds after the visual detection event. Twenty-four participants were randomly assigned to the communication complexity and timing blocks as in study 1. The results showed no significant effects of communication timing or complexity on detection performance. However, the workload ratings for the 2 and 5 second complex communication presentations were higher compared to the same simple communication conditions. Overall, these findings support the strategic preemption assumption for well-defined, complex communications. The onset preemption assumption for simple communications was not supported. These results also suggest that the boundaries of the multiple resource theory assumption may exist up to the limits of the echoic memory store. Figures of merit for task performance under the varying levels of timing and complexity are presented. Several theoretical and practical implications are discussed.
20

Information theory for multi-party peer-to-peer communication protocols / Théorie de l’information pour protocoles de communication peer-to-peer

Urrutia, Florent 25 May 2018 (has links)
This thesis is concerned with the study of multi-party communication protocols in the asynchronous message-passing peer-to-peer model. We introduce two new information measures, the Public Information Complexity (PIC) and the Multi-party Information Complexity (MIC), study their properties and how they are related to other fundamental quantities in distributed computing such as communication complexity and randomness complexity. We then use these two measures to study the parity function and the disjointness function.
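As a toy illustration of the communication costs such measures account for (not a construction from the thesis), the n-party parity function admits a simple chain protocol in which each party forwards a single bit, for n - 1 bits of communication in total:

```python
def chain_parity(bits):
    """Toy chain protocol for the n-party parity (XOR) function.

    Party i receives the running XOR from party i-1, folds in its own
    input bit, and forwards a single bit; the last party outputs the
    parity. Returns (parity, number of one-bit messages sent)."""
    running = 0
    messages = []            # the one-bit message each party forwards
    for b in bits[:-1]:
        running ^= b
        messages.append(running)
    parity = running ^ bits[-1]
    return parity, len(messages)
```

For four parties holding 1, 0, 1, 1, the chain forwards three bits and the last party outputs parity 1; information-theoretic measures like PIC and MIC bound what any such protocol must leak or exchange.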
