  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Analysing the information contributions and anatomical arrangement of neurons in population codes

Yarrow, Stuart James January 2015 (has links)
Population coding—the transmission of information by the combined activity of many neurons—is a feature of many neural systems. Identifying the role played by individual neurons within a population code is vital for the understanding of neural codes. In this thesis I examine which stimuli are best encoded by a given neuron within a population and how this depends on the informational measure used, on commonly-measured neuronal properties, and on the population size and the spacing between stimuli. I also show how correlative measures of topography can be used to test for significant topography in the anatomical arrangement of arbitrary neuronal properties.

The neurons involved in a population code are generally clustered together in one region of the brain, and moreover their response selectivity is often reflected in their anatomical arrangement within that region. Although such topographic maps are an often-encountered feature in the brains of many species, there are no standard, objective procedures for quantifying topography. Topography in neural maps is typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective statistical test for detecting topography would be advantageous. To address these issues, I assess seven measures by quantifying topography in simulated neural maps, and show that all but one of these are effective at detecting statistically significant topography even in weakly topographic maps.

The precision of the neural code is commonly investigated using two different families of statistical measures: (i) Shannon mutual information and derived quantities when investigating very small populations of neurons and (ii) Fisher information when studying large populations.
The Fisher information always predicts that neurons convey most information about stimuli coinciding with the steepest regions of the tuning curve, but it is known that information theoretic measures can give very different predictions. Using a Monte Carlo approach to compute a stimulus-specific decomposition of the mutual information (the stimulus-specific information, or SSI) for populations up to hundreds of neurons in size, I address the following questions: (i) Under what conditions can Fisher information accurately predict the information transmitted by a neuron within a population code? (ii) What are the effects of the level of trial-to-trial variability (noise), of correlations in the noise, and of population size on the best-encoded stimulus? (iii) How does the type of task in a behavioural experiment (i.e. fine and coarse discrimination, classification) affect the best-encoded stimulus? I show that, for both unimodal and monotonic tuning curves, the shape of the SSI is dependent upon trial-to-trial variability, population size and stimulus spacing, in addition to the shape of the tuning curve. It is therefore important to take these factors into account when assessing which stimuli a neuron is informative about; just knowing the tuning curve may not be sufficient.
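The Fisher-information prediction discussed above can be made concrete with a minimal sketch. Assuming a Gaussian tuning curve and Poisson spiking noise (standard modelling choices for illustration, not specifics of this thesis), the Fisher information J(s) = f'(s)²/f(s) vanishes at the tuning-curve peak and is largest on the flanks, where the slope is steepest:

```python
import math

def tuning_curve(s, pref=0.0, width=1.0, rmax=10.0):
    # Gaussian tuning curve: mean firing rate as a function of the stimulus s
    return rmax * math.exp(-0.5 * ((s - pref) / width) ** 2)

def fisher_info(s, pref=0.0, width=1.0, rmax=10.0):
    # For Poisson spiking noise, J(s) = f'(s)^2 / f(s)
    f = tuning_curve(s, pref, width, rmax)
    fprime = -f * (s - pref) / width ** 2
    return fprime ** 2 / f

# Fisher information is zero at the tuning-curve peak (zero slope) and
# largest on the flanks, near s = pref +/- sqrt(2) * width
stimuli = [i / 10 for i in range(-30, 31)]
best = max(stimuli, key=fisher_info)
```

On this grid the most informative stimulus lands at |s| ≈ 1.4 ≈ √2·width, away from the preferred stimulus at s = 0 — exactly the behaviour that the SSI analysis in the thesis shows need not hold once noise level, population size and stimulus spacing are taken into account.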
2

Exploring the Composition of Coding Theory and Cryptography through Secure Computation, Succinct Arguments, and Local Codes

Alexander R Block (13154601) 26 July 2022 (has links)
<p>We examine new ways in which coding theory and cryptography continue to be composed together, and show that the composition of these two fields yields new constructions in the areas of Secure Computation Protocols, Succinct Interactive Arguments, and Locally Decodable Codes. This dissertation is a continuation of several decades of research in composing coding theory and cryptography; examples include secret sharing, encryption schemes, randomness extraction, pseudo-random number generation, and the PCP theorem, to name a few.</p> <p>In Part I of this dissertation, we examine the composition of coding theory with cryptography, explicitly and implicitly. On the explicit side, we construct a new family of linear error-correcting codes, based on algebraic geometric codes, and use this family to construct new correlation extractors (Ishai et al., FOCS 2009). Correlation extractors are two-party secure computation protocols for distilling samples of a leaky correlation (e.g., pre-processed secret shares that have been exposed to side-channel attacks) into secure and fresh shares of another correlation (e.g., shares of oblivious transfer). Our correlation extractors are (nearly) optimal in all parameters. On the implicit side, we use coding-theoretic arguments to show the security of succinct interactive arguments (Micali, FOCS 1994). Succinct interactive arguments are a restriction of interactive proofs (Goldwasser, Micali, Rackoff, STOC 1985) for which security only holds against computationally bounded provers (i.e., probabilistic polynomial time), and where the proofs are sub-linear in the size of the statement being proven. Our new succinct interactive arguments are the first public-coin, zero-knowledge arguments with time- and space-efficient provers: we give two protocols where any NP statement that is verifiable by a time-T space-S RAM program is provable in time O~(T) and space S * polylog(T).<br> In Part II of this dissertation, we examine the composition of cryptography with coding theory, again explicitly and implicitly, focusing specifically on locally decodable codes (Katz and Trevisan, STOC 2000). Locally decodable codes, or LDCs, are error-correcting codes with super-efficient probabilistic decoding procedures that allow for decoding individual symbols of the encoded message without decoding the entire codeword. On the implicit side, we utilize cryptographic analysis tools to give a conceptually simpler proof of the so-called "Hamming-to-InsDel" compiler (Ostrovsky and Paskin-Cherniavsky, ITS 2015). This compiler transforms any Hamming LDC (i.e., a code that is resilient to bit-flip errors) into another LDC that is resilient to the broad class of insertion-deletion errors, approximately preserving the rate and error-tolerance of the code at the cost of a poly-logarithmic increase in the query complexity. We further extend this compiler to both the private LDC (Ostrovsky, Pandey, and Sahai, ICALP 2007) setting, where the encoder and decoder are assumed to share a secret key unknown to the adversarial channel, and the resource-bounded LDC (Blocki, Kulkarni, and Zhou, ITC 2020) setting, where the adversarial channel is assumed to be resource constrained. On the explicit side, we utilize two cryptographic primitives to give new constructions of alternative notions of LDCs. First, we use cryptographic puzzles (Bitansky et al., ITCS 2016) to construct resource-bounded Hamming LDCs in the standard model without random oracles, answering an open question of Blocki, Kulkarni, and Zhou (ITC 2020); we then naturally extend these LDCs to the InsDel setting via our previously mentioned compiler. Second, we use digital signature schemes to directly construct computationally relaxed LDCs (Blocki et al., ITIT 2021) that are resilient to insertion-deletion errors. Computationally relaxed LDCs allow the decoder to output an extra symbol signifying that it does not know the correct output, and are only secure against probabilistic polynomial-time adversarial channels. To the best of our knowledge, this is the first such LDC (of any type) resilient against insertion-deletion errors that does not rely on the aforementioned compiler.</p>
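To make the notion of local decoding concrete, here is a sketch of the textbook example of a Hamming LDC, the binary Hadamard code (not one of the dissertation's constructions): the codeword lists the parity ⟨x, a⟩ for every a ∈ {0,1}^k, and any message bit can be recovered from just two codeword positions.

```python
import random

def hadamard_encode(msg_bits):
    # Codeword lists <msg, a> mod 2 for every a in {0,1}^k (length 2^k)
    k = len(msg_bits)
    code = []
    for a in range(2 ** k):
        parity = 0
        for j in range(k):
            if (a >> j) & 1:
                parity ^= msg_bits[j]
        code.append(parity)
    return code

def local_decode(codeword, i, k, rng=random):
    # Recover message bit i with only two queries:
    # C[a] xor C[a ^ e_i] = x_i for any a, so a random a tolerates a small
    # fraction of corrupted positions with high probability
    a = rng.randrange(2 ** k)
    b = a ^ (1 << i)
    return codeword[a] ^ codeword[b]
```

Since each query pair is individually uniform, a codeword with a δ fraction of corruptions yields the correct bit with probability at least 1 − 2δ, which is the "super-efficient probabilistic decoding" the abstract refers to.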
3

Reconstruction and Local Recovery of Data from Synchronization Errors

Minshen Zhu (15334783) 21 April 2023 (has links)
<p>In this thesis we study the complexity of data recovery from synchronization errors, namely insertion and deletion (insdel) errors.</p> <p>Insdel Locally Decodable Codes (Insdel LDCs) are error-correcting codes that admit super-efficient decoding algorithms even in the presence of many insdel errors. The study of such codes for Hamming errors has spanned several decades, whereas work on the insdel analogue had amounted to only a few papers before our work. This work initiates a systematic study of insdel LDCs, seeking to bridge this gap through designing codes and proving limitations. Our upper bounds essentially match those for Hamming LDCs in important ranges of parameters, even though insdel LDCs are more general than Hamming LDCs. Our main results are lower bounds that are exponentially stronger than the ones inherited from the Hamming LDCs. These results also have implications for the well-studied variant of relaxed LDCs. For this variant, besides showing the first results in the insdel setting, we also answer an open question for the Hamming variant by showing a strong lower bound.</p> <p>In the trace reconstruction problem, the goal is to recover an unknown source string x ∈ {0,1}^n from random traces, which are obtained by hitting the source string with random deletions/insertions at a fixed rate. Mean-based algorithms are a class of reconstruction algorithms whose outputs depend only on the empirical estimates of individual bits. The number of traces needed for mean-based trace reconstruction has already been settled. We further study the performance of mean-based algorithms in a scenario where one wants to distinguish between two source strings parameterized by their edit distance, and we also provide explicit construction of strings that are hard to distinguish.
We further establish an equivalence to the Prouhet-Tarry-Escott problem from number theory, which ends up being an obstacle to constructing explicit hard instances against mean-based algorithms.</p>
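The trace model and the statistic available to a mean-based algorithm can be sketched in a few lines — a hedged illustration of the deletion-only case, not the thesis's analysis:

```python
import random

def trace(source_bits, deletion_rate, rng):
    # One trace: every source bit is deleted independently with the given rate
    return [b for b in source_bits if rng.random() >= deletion_rate]

def mean_based_estimates(source_bits, deletion_rate, num_traces, rng):
    # Empirical per-position bit averages over many traces (short traces are
    # implicitly zero-padded) -- the only statistic a mean-based
    # reconstruction algorithm is allowed to use
    n = len(source_bits)
    sums = [0.0] * n
    for _ in range(num_traces):
        t = trace(source_bits, deletion_rate, rng)
        for j, b in enumerate(t[:n]):
            sums[j] += b
    return [s / num_traces for s in sums]
```

Two strings are hard for mean-based algorithms exactly when these per-position averages are close for both, which is where the connection to the Prouhet-Tarry-Escott problem enters.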
4

On communication with Perfect Feedback against Bit-flips and Erasures

Shreya Nasa (18432009) 29 April 2024 (has links)
<p dir="ltr">We study the communication model with perfect feedback considered by Berlekamp (PhD Thesis, 1964), in which Alice wishes to communicate a binary message to Bob through a noisy adversarial channel, and has the ability to receive feedback from Bob via an additional noiseless channel. Berlekamp showed that in this model one can tolerate a 1/3 fraction of errors (a.k.a., bit-flips or substitutions) with non-vanishing communication rate, which strictly improves upon the 1/4 error rate that is tolerable in the classical one-way communication setting without feedback. In the case when the channel is corrupted by erasures, it is easy to show that a fraction of erasures tending to 1 can be tolerated in the noiseless feedback setting, which also beats the 1/2 fraction that is maximally correctable in the no-feedback setting. In this thesis, we consider a more general perfect feedback channel that may introduce both errors and erasures. We show the following results:</p><p dir="ltr">1. If α, β ∈ [0, 1) are such that 3α + β < 1, then there exists a code that achieves a positive communication rate tolerating α fraction of errors and β fraction of erasures. Furthermore, no code can achieve a positive rate in this channel when 3α + β ≥ 1.</p><p dir="ltr">2. For the case when 3α + β < 1, we compute the maximal asymptotic communication rate achievable in this setting.</p>
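The threshold in result 1 can be stated as a one-line feasibility check; this tiny sketch only restates the stated condition and shows how it subsumes the two classical special cases:

```python
def positive_rate_achievable(alpha, beta):
    # Per the stated result: a positive-rate feedback code exists iff
    # 3*alpha + beta < 1, for error fraction alpha and erasure fraction beta
    # in [0, 1)
    return 3 * alpha + beta < 1
```

At β = 0 this recovers Berlekamp's 1/3 error bound, and at α = 0 it recovers the erasure fraction approaching 1.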
5

Storing information through complex dynamics in recurrent neural networks

Molter, Colin C 20 May 2005 (has links)
The neural net computer simulations presented here are based on the acceptance of a set of assumptions that have been expressed over the last twenty years in the fields of information processing, neurophysiology and cognitive science. First of all, neural networks and their dynamical behavior in terms of attractors are the natural way adopted by the brain to encode information. Any information item to be stored in the neural net should be coded in some way or another in one of the dynamical attractors of the brain and retrieved by stimulating the net so as to trap its dynamics in the desired item's basin of attraction. The second view shared by neural net researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The last assumption is the presence of chaos and the benefit gained from it. Chaos, although very simply produced, inherently possesses an infinite number of cyclic regimes that can be exploited for coding information. Moreover, the network spontaneously wanders around these unstable regimes, rapidly proposing alternative responses to external stimuli and easily switching from one of these potential attractors to another in response to an incoming stimulus. In this thesis, it is shown experimentally that the more information is stored in robust cyclic attractors, the more chaos appears as a background regime, erratically itinerating among brief appearances of these attractors. Chaos does not appear to be the cause but the consequence of the learning. However, it is a helpful consequence that widens the net's encoding capacity. To learn the information to be stored, an unsupervised Hebbian learning algorithm is introduced. By leaving unprescribed the semantics of the attractors to be associated with the input data, promising results have been obtained in terms of storage capacity.
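The two core assumptions — attractor-based storage and local Hebbian learning — can be illustrated with a minimal Hopfield-style sketch. Note this toy uses fixed-point attractors rather than the cyclic and chaotic regimes studied in the thesis:

```python
def hebbian_weights(patterns):
    # Local Hebbian rule: w_ij accumulates the correlation p_i * p_j over the
    # +/-1 patterns to be stored; no self-connections
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def settle(w, state, max_steps=20):
    # Iterate the network dynamics; a stored pattern acts as an attractor
    # whose basin of attraction captures nearby (noisy) states
    n = len(state)
    for _ in range(max_steps):
        nxt = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
        if nxt == state:
            break
        state = nxt
    return state
```

Feeding the network a corrupted version of a stored pattern and letting the dynamics settle retrieves the original — the "trap its dynamics in the desired item's basin of attraction" mechanism described above.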
6

Avaliação de desempenho de esquemas de modulação e codificação na presença de interferência de co-canal / Performance evaluation of modulation and coding schemes in the presence of co-channel interference

Altamirano Carrillo, Carlos Daniel 19 August 2018 (has links)
Advisor: Celso de Almeida / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: This work evaluates the effects of co-channel interference on the bit error rate (BER) of digital wireless transmission systems. The system model considers additive white Gaussian noise (AWGN) channels and Rayleigh fading channels in the presence of a dominant co-channel interferer, where the users employ BPSK and M-QAM modulation schemes together with error-correcting codes. For systems with bandwidth expansion, the error control codes considered are convolutional and turbo codes; for systems without bandwidth expansion, trellis-coded modulation (TCM) and turbo trellis-coded modulation (TTCM) are considered. The effects of co-channel interference on the bit error rate are evaluated by deriving theoretical expressions and via Monte Carlo simulation, varying the channel type and the modulation and coding schemes. This work shows that co-channel interference introduces floors in the bit error rate, that systems without bandwidth expansion are more susceptible to interference, and that error-correcting codes are a good tool for mitigating the effects of co-channel interference / Master's program: Telecommunications and Telematics / Master in Electrical Engineering
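The BER-floor finding can be reproduced with a small Monte Carlo experiment. This is a hedged sketch for uncoded BPSK over AWGN with one dominant BPSK co-channel interferer — no fading and no coding, unlike the full study:

```python
import math
import random

def ber_bpsk(snr_db, sir_db=None, nbits=200_000, seed=0):
    # Monte Carlo BER of BPSK over AWGN; if sir_db is given, add one dominant
    # co-channel BPSK interferer at that signal-to-interference ratio
    rng = random.Random(seed)
    noise_std = math.sqrt(1.0 / (2.0 * 10.0 ** (snr_db / 10.0)))
    interferer_amp = 0.0 if sir_db is None else math.sqrt(10.0 ** (-sir_db / 10.0))
    errors = 0
    for _ in range(nbits):
        s = rng.choice((-1.0, 1.0))
        r = s + interferer_amp * rng.choice((-1.0, 1.0)) + rng.gauss(0.0, noise_std)
        errors += (r >= 0.0) != (s > 0.0)  # hard-decision error
    return errors / nbits
```

With a fixed interferer, raising the SNR no longer drives the BER toward zero: the residual error rate is the interference-induced floor the abstract describes.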
7

Determining Performance of Channel Decoders / Одређивање перформанси декодера заштитних кодова / Određivanje performansi dekodera zaštitnih kodova

Minja Aleksandar 23 January 2019 (has links)
<p>This thesis contains some of the results obtained by the author in the course of his postgraduate research in the fields of communication system modeling and information and coding theory. The results are presented in mathematical form and are verified by numerical simulations. Most of them are motivated by challenges arising in the design and standardization of 5G communication systems and are of practical and scientific relevance. The main contributions of the thesis are divided into two parts. In the first part of this thesis we introduce a novel SNR-invariant quasi-analytical technique for estimating the error rate of a communication link over the geodesic channel. We compared this technique to the Monte Carlo and importance sampling methods and found that it outperforms both in accuracy and speed. In the second part of the thesis we introduce an optimization procedure, based on the variable force repulsion method, for the design of spherical codes that are tailored to TCM and achieve lower error rates at high SNR than their counterparts optimized for minimum distance. The performance of these codes is verified using the method developed in the first part of this thesis, which is suitable for simulating error rates at high SNR.</p>
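The quasi-analytical technique itself is not specified in this abstract, but the importance-sampling baseline it is compared against is standard and easy to sketch for a toy BPSK/AWGN link: sample the noise from a widened distribution so errors become frequent, then reweight each error event by the likelihood ratio to keep the estimator unbiased.

```python
import math
import random

def is_ber_bpsk(snr_db, samples=20_000, widen=2.0, seed=1):
    # Importance sampling for rare bit errors: draw noise from a widened
    # Gaussian q (std = widen * sigma) so that errors are common, and weight
    # each error event by the likelihood ratio p(n)/q(n)
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2.0 * 10.0 ** (snr_db / 10.0)))
    q_sigma = widen * sigma
    total = 0.0
    for _ in range(samples):
        n = rng.gauss(0.0, q_sigma)
        if 1.0 + n < 0.0:  # decision error for a transmitted +1
            ratio = (q_sigma / sigma) * math.exp(
                -0.5 * n * n * (1.0 / sigma ** 2 - 1.0 / q_sigma ** 2))
            total += ratio
    return total / samples
```

At high SNR a plain Monte Carlo run needs on the order of 100/BER samples to see any errors at all, which is why biased-sampling or quasi-analytical estimators such as the one in this thesis are used instead.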
8

Storing information through complex dynamics in recurrent neural networks

Molter, Colin 20 May 2005 (has links)
The neural net computer simulations presented here are based on the acceptance of a set of assumptions that have been expressed over the last twenty years in the fields of information processing, neurophysiology and cognitive science. First of all, neural networks and their dynamical behavior in terms of attractors are the natural way adopted by the brain to encode information. Any information item to be stored in the neural net should be coded in some way or another in one of the dynamical attractors of the brain and retrieved by stimulating the net so as to trap its dynamics in the desired item's basin of attraction. The second view shared by neural net researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The last assumption is the presence of chaos and the benefit gained from it. Chaos, although very simply produced, inherently possesses an infinite number of cyclic regimes that can be exploited for coding information. Moreover, the network spontaneously wanders around these unstable regimes, rapidly proposing alternative responses to external stimuli and easily switching from one of these potential attractors to another in response to an incoming stimulus.

In this thesis, it is shown experimentally that the more information is stored in robust cyclic attractors, the more chaos appears as a background regime, erratically itinerating among brief appearances of these attractors. Chaos does not appear to be the cause but the consequence of the learning. However, it is a helpful consequence that widens the net's encoding capacity. To learn the information to be stored, an unsupervised Hebbian learning algorithm is introduced. By leaving unprescribed the semantics of the attractors to be associated with the input data, promising results have been obtained in terms of storage capacity. / Doctorate in Applied Sciences
9

Multimedia Forensics Using Metadata

Ziyue Xiang (17989381) 21 February 2024 (has links)
<p dir="ltr">The rapid development of machine learning techniques makes it possible to manipulate or synthesize video and audio information while introducing nearly undetectable artifacts. Most media forensics methods analyze the high-level data (e.g., pixels from videos, temporal signals from audio) decoded from compressed media data. Since media manipulation or synthesis methods usually aim to improve the quality of such high-level data directly, acquiring forensic evidence from these data has become increasingly challenging. In this work, we focus on media forensics techniques using the metadata in media formats, which include container metadata and coding parameters in the encoded bitstream. Since many media manipulation and synthesis methods do not attempt to hide metadata traces, it is possible to use them for forensics tasks. First, we present a video forensics technique using metadata embedded in MP4/MOV video containers. Our proposed method achieved high performance in video manipulation detection, source device attribution, social media attribution, and manipulation tool identification on publicly available datasets. Second, we present a transformer neural network based MP3 audio forensics technique using low-level codec information. Our proposed method can localize multiple compressed segments in MP3 files, with higher localization accuracy than other methods. Third, we present an H.264-based video device matching method. This method can determine whether two video sequences were captured by the same device, even if it has never encountered the device before. Our proposed method achieved good performance in a three-fold cross-validation scheme on a publicly available video forensics dataset containing 35 devices. Fourth, we present a Graph Neural Network (GNN) based approach for the analysis of MP4/MOV metadata trees. The proposed method is trained using Self-Supervised Learning (SSL), which increases its robustness and makes it capable of handling missing or unseen data. Fifth, we present an efficient approach to compute spectrogram features from MP3 compressed audio signals. The proposed approach decreases the complexity of speech feature computation by ~77.6% and saves ~37.87% of MP3 decoding time, and the resulting spectrogram features lead to higher synthetic speech detection performance.</p>
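The container-metadata layer that the first and fourth techniques draw on is a tree of MP4/MOV "boxes" (atoms). A hedged sketch of walking that tree follows; for simplicity it ignores 64-bit box sizes and treats only a few well-known box types as containers:

```python
import io
import struct

def parse_boxes(stream, end=None):
    # Walk an MP4/MOV box tree: each box starts with a 4-byte big-endian size
    # (which includes the 8-byte header) and a 4-byte type; container boxes
    # such as 'moov' hold nested boxes.  64-bit sizes (size == 1) and other
    # container types are omitted in this sketch.
    containers = {"moov", "trak", "mdia", "minf", "stbl"}
    boxes = []
    while end is None or stream.tell() < end:
        header = stream.read(8)
        if len(header) < 8:
            break
        size, raw_type = struct.unpack(">I4s", header)
        if size < 8:
            break  # malformed (or unsupported 64-bit) size; stop cleanly
        btype = raw_type.decode("ascii", "replace")
        payload_end = stream.tell() + size - 8
        if btype in containers:
            boxes.append((btype, parse_boxes(stream, payload_end)))
        else:
            stream.seek(payload_end)
            boxes.append((btype, size))
    return boxes
```

The ordering, presence, and sizes of these boxes are exactly the kind of encoder-specific traces that survive manipulation pipelines which only re-optimize the decoded pixels or samples.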
