211

Vilka tv-glasögon har du? : En studie i hur partipolitiskt aktiva personer tolkar tv-serien Scooby Doo / What TV glasses do you wear? : A study of how people active in party politics decode the TV series Scooby Doo

Landstedt, Christopher January 2008
Abstract Title: What TV glasses do you wear? A study of how people active in party politics decode the TV series Scooby Doo (Vilka tv-glasögon har du? En studie i hur partipolitiskt aktiva personer tolkar tv-serien Scooby Doo) Number of pages: 47 (54 including enclosures) Author: Christopher Landstedt Tutor: Amelie Hössjer Course: Media and Communication Studies C Period: Autumn term 2007 University: Division of Media and Communication, Department of Information Science, Uppsala University. Purpose/Aim: The aim of this essay is to study how people active in party politics, 18-25 years old, both female and male, decode the messages in the TV series Scooby Doo from 1969. Do they decode the show differently because of their political views, their gender or their social background? Is there a pattern in the decoding, or does it operate on a more individual level? Material/Method: A qualitative method comprising 16 individual interviews with young adults, 18-25 years old, half of them female and half male, was used. All of the participants are members of political youth parties/organizations, equally divided between left-wing and right-wing parties. Scooby Doo was chosen because of the lack of explicit political meanings and messages in the show and its ostensibly neutral aura. The respondents watched a preselected episode from the first season of Scooby Doo, after which the interview took place. The interview contained in-depth questions about the episode. Stuart Hall's classic encoding/decoding theory is used as the main theory, with support from other theories in the same field. Main results: The degree of active reading is roughly equal among the young adults who participated in the study. Differences can be found in the way they decode the sender's messages and in the values they read into them. The left-wing respondents tended to decode the show in a more oppositional way, whereas the right-wing respondents tended to read the messages in a dominant way, although there were exceptions. To sum it up in one sentence: some people's personal values shine through more obviously than others'. Keywords: encoding-decoding, gender, television, interpretation, Scooby Doo, political view, left-wing and right-wing
212

Reklamfilm eller skräckfilm? : En kvalitativ studie om unga flickors tolkning av Apolivas TV-reklam. / Commercial or horror film? : A qualitative study of young girls' interpretation of Apoliva's TV commercial

Thor, Isabelle, Turesson, Urban January 2010
During the summer and autumn of 2009, Apoliva's TV commercial received a great deal of attention from both the Swedish public and the media. The commotion stemmed from the fact that a large share of the people who saw the commercial perceived it as frightening, whereas Apoliva's intention was to depict the somewhat melancholic Swedish weather. The purpose of this thesis is to examine how girls in the ninth grade interpret Apoliva's commercial. Our specific research question became whether the selected group interprets the message and feeling conveyed in the commercial in the way Apoliva intended, taking Hall's (2009) encoding/decoding theory as the point of departure. According to this model, there are three ways to interpret a message: a dominant, an oppositional, or a negotiated reading. In a dominant reading the receiver accepts the message, while an oppositional reading means that the receiver rejects it. In a negotiated reading the receiver accepts the message in part, mixed with doubt. The interviews began with a screening of Apoliva's commercial, followed by questions focusing on which feelings and messages the respondents experienced from the commercial. The study was carried out through qualitative interviews with eight girls in the ninth grade. The respondents' answers were analysed using Hall's (2009) encoding/decoding theory, and in the course of the work we realised that the model needed to be modified. This became necessary when the analysis showed that a large share of the interviewees had misunderstood the encoded message. Even though some respondents did not interpret the message as the sender had planned, it was not inconceivable that they nevertheless partly accepted the message as they perceived it. The model was therefore divided into categories based on the respondents' own perceptions. The main question posed to the material was: based on Hall's theory of encoding/decoding, do female pupils in the ninth grade interpret the message and feeling communicated in the commercial in the way the sender intends? The answer is that the girls cannot be considered to interpret the commercial's message and feeling as intended. Only one respondent could be judged to make a dominant reading of Apoliva's commercial. What stands out instead is how many actually made a different interpretation of the message than the one Apoliva intended to send.
213

Hybrid ARQ Using Serially Concatenated Block Codes for Real-Time Communication : An Iterative Decoding Approach

Uhlemann, Elisabeth January 2001
The ongoing wireless communication evolution offers improvements for industrial applications where traditional wireline solutions cause prohibitive problems in terms of cost and feasibility. Many of these new wireless applications are packet oriented and time-critical. The deadline dependent coding (DDC) communication protocol presented here is explicitly intended for wireless real-time applications. The objective of the work described in this thesis is therefore to develop the foundation for an efficient and reliable real-time communication protocol for critical deadline dependent communication over unreliable wireless channels. Since the communication is packet oriented, block codes are suitable for error control. Reed-Solomon codes are chosen and incorporated in a concatenated coding scheme using iterative detection with trellis-based decoding algorithms. Performance bounds are given for parallel and serially concatenated Reed-Solomon codes using BPSK. The convergence behavior of the iterative decoding process for serially concatenated block codes is examined, and two different stopping criteria are employed based on the log-likelihood ratio of the information bits. The stopping criteria are also used as a retransmission criterion, incorporating the serially concatenated block codes in a type-I hybrid ARQ (HARQ) protocol. Different packet combining techniques specifically adapted to the concatenated HARQ (CHARQ) scheme are used. The extrinsic information used in the iterative decoding process is saved and used when decoding after a retransmission. This technique can be seen as turbo code combining or concatenated code combining and is shown to improve performance. Saving the extrinsic information may also be seen as a doping criterion yielding faster convergence. As such, the extrinsic information can be used in conjunction with traditional diversity combining schemes. The performance in terms of bit error rate and convergence speed is improved with only negligible additional complexity. Consequently, CHARQ based on serially concatenated block codes using iterative detection creates a flexible and reliable scheme capable of meeting specified real-time constraints.
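The idea of reusing an iterative decoder's stopping criterion as the retransmission criterion can be illustrated with a short sketch. The Python fragment below is not Uhlemann's actual CHARQ algorithm; it is a generic type-I HARQ loop, under the assumption that the caller supplies a `channel_llrs` source and a `decode_iteration` routine (both hypothetical placeholders). Iterations stop once the smallest information-bit LLR magnitude clears a threshold, a retransmission is requested otherwise, and the last soft output is retained as extrinsic information for combining on the next attempt.

```python
import numpy as np

def harq_with_llr_stopping(channel_llrs, decode_iteration,
                           max_iters=10, llr_threshold=5.0, max_retx=3):
    """Illustrative type-I HARQ loop with an LLR-based stopping criterion.

    channel_llrs:     callable returning a fresh vector of channel LLRs
                      (one per information bit) for each (re)transmission.
    decode_iteration: callable performing one iteration of the iterative
                      decoder; takes and returns an LLR vector.
    """
    extrinsic = None  # extrinsic information carried across retransmissions
    for attempt in range(max_retx + 1):
        llrs = channel_llrs()
        if extrinsic is not None:
            # "code combining": reuse soft information from the previous attempt
            llrs = llrs + extrinsic
        for _ in range(max_iters):
            llrs = decode_iteration(llrs)
            # stopping criterion: every information bit is decided reliably
            if np.min(np.abs(llrs)) >= llr_threshold:
                return (llrs > 0).astype(int), attempt  # hard decisions
        extrinsic = llrs  # save soft output and request a retransmission
    return None, max_retx  # retry budget (deadline) exhausted
```

Retaining `extrinsic` across attempts mirrors the turbo/concatenated code combining described in the abstract; in a real-time system the threshold and iteration budget would be chosen against the deadline constraints.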
214

Kodavimas-dekodavimas uždaros kilpos principu gyvosiose sistemose / Closed-loop coding-decoding of living systems

Kirvelis, Dobilas Jonas 26 May 2009
The concept of closed-loop coding-decoding as the informational principle of the functional organization of living systems is presented, based on theoretical and experimental research. The coding-decoding scheme is applied to interpret 1) biogenesis as the dynamic duality of genotype and phenotype, and 2) the functional organization of the mammalian, and especially the human, neocortex. It is argued that analysis by synthesis, as the functional basis of perception and thinking, is the most developed coding-decoding procedure of living systems: a special problem-solving technology able to produce information individually. The main attention is paid to interpreting the functioning of neural networks through the principles of fuzzy logic (more-less-equal logic) and the concept of neurolayers, which function as multidimensional (space and time) signal filters carrying out integral transformations. A hypothetical scheme of the functional organization of the neurolayer structures of the visual analyzer is presented; it involves quasi-orthogonal image coding-decoding procedures close to Hermite-Laguerre and quasi-holographic transformations. A possible neurochaotic principle underlying memory mechanisms is also discussed.
215

Robust Lossy Source Coding for Correlated Fading Channels

Shahidi, Shervin 28 September 2011
Most conventional communication systems use channel interleaving as well as hard-decision decoding in their designs, which leads to discarding channel memory and soft-decision information. This simplification is usually made because the complexity of handling the memory or soft-decision information is rather high. In this work, we design two lossy joint source-channel coding (JSCC) schemes that do not use explicit algebraic channel coding for a recently introduced channel model, in order to take advantage of both channel memory and soft-decision information. The channel model, called the non-binary noise discrete channel with queue-based noise (NBNDC-QB), admits closed-form expressions for the channel transition distribution, correlation coefficient, and many other channel properties. The channel has binary input and $2^q$-ary output, and the noise is a $2^q$-ary Markovian stationary ergodic process based on a finite queue, where $q$ is the output's soft-decision resolution. We also numerically show that the NBNDC-QB model can effectively approximate correlated Rayleigh fading channels without losing its analytical tractability. The first JSCC scheme is the so-called channel-optimized vector quantizer (COVQ), and the second scheme consists of a scalar quantizer, a proper index assignment, and a sequence maximum a posteriori (MAP) decoder, designed to harness the redundancy left in the quantizer's indices, the channel's soft-decision output, and the noise time correlation. We also find a necessary and sufficient condition under which the sequence MAP decoder reduces to an instantaneous symbol-by-symbol decoder, i.e., a simple instantaneous mapping. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2011-09-25 19:43:28.785
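To make the flavour of such a channel concrete, here is a deliberately simplified toy: a binary-input channel whose $2^q$-ary noise symbol is either drawn fresh or copied from a short queue of recent noise symbols, so that noise correlation grows with the reuse probability. The marginal distribution, the reuse rule, and the input-to-output mapping are all illustrative assumptions; the precise NBNDC-QB transition law is the one derived in the thesis, not what this sketch implements.

```python
import numpy as np

def queue_noise_channel(bits, q=2, queue_len=3, p_reuse=0.6, rng=None):
    """Toy binary-input channel with 2^q-ary output and queue-induced noise memory.

    With probability p_reuse the next noise symbol is copied from a finite
    queue of recent noise symbols (creating time correlation); otherwise it
    is drawn i.i.d. from a fixed marginal distribution. This only mimics the
    general idea of queue-based noise memory, not the NBNDC-QB model itself.
    """
    rng = rng or np.random.default_rng()
    levels = 2 ** q
    marginal = np.array([0.7] + [0.3 / (levels - 1)] * (levels - 1))
    queue = list(rng.choice(levels, size=queue_len, p=marginal))
    out = np.empty(len(bits), dtype=int)
    for t, b in enumerate(bits):
        if rng.random() < p_reuse:
            z = queue[rng.integers(queue_len)]    # reuse a queued noise symbol
        else:
            z = rng.choice(levels, p=marginal)    # draw a fresh noise symbol
        queue.pop(0)
        queue.append(z)
        # illustrative mapping: send each bit as an extreme soft level,
        # then perturb it by the noise symbol modulo 2^q
        x = 0 if b == 0 else levels - 1
        out[t] = (x + z) % levels
    return out
```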
216

Belief Propagation Decoding of Finite-Length Polar Codes

Rajaie, Tarannom 01 February 2012
Polar codes, recently invented by Arikan, are the first class of codes known to achieve the symmetric capacity for a large class of channels. The symmetric capacity is the highest achievable rate subject to using the binary input letters of the channel with equal probability. Polar code construction is based on a phenomenon called channel polarization. The encoding as well as the decoding operation of polar codes can be implemented with O(N log N) complexity, where N is the blocklength of the code. In this work, we study the factor graph representations of finite-length polar codes and their effect on the belief propagation (BP) decoding process over the Binary Erasure Channel (BEC). In particular, we study the parity-check-based (H-based) as well as the generator-based (G-based) factor graphs of polar codes. As these factor graphs are not unique for a code, we study and compare the performance of BP decoders on a number of well-known graphs. Error rates and complexities are reported for a number of cases. Comparisons are also made with the Successive Cancellation (SC) decoder. High error rates are related to the so-called stopping sets of the underlying graphs. We discuss the pros and cons of the BP decoder over the SC decoder for various code lengths. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2012-01-31 17:10:59.955
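The link between BP decoding over the BEC and stopping sets is easy to see in code. The sketch below runs generic erasure message passing (a peeling decoder) on an arbitrary parity-check matrix `H`; it is not tied to the H-based or G-based polar factor graphs studied in the thesis, but it shows why decoding stalls exactly when the remaining erased positions form a stopping set of the graph.

```python
import numpy as np

def bp_erasure_decode(H, y, max_iters=50):
    """Peeling-style BP decoding over the binary erasure channel.

    H: parity-check matrix (0/1 numpy array) of the code.
    y: received word with erasures marked as -1 and known bits as 0/1.
    Over the BEC, BP reduces to repeatedly resolving any check node that
    has exactly one erased neighbour; decoding stalls on a stopping set.
    """
    y = np.array(y, dtype=int)
    for _ in range(max_iters):
        progress = False
        for row in H:
            idx = np.flatnonzero(row)          # variable nodes in this check
            erased = idx[y[idx] == -1]
            if len(erased) == 1:
                known = idx[y[idx] != -1]
                # the single unknown bit is fixed by the parity constraint
                y[erased[0]] = int(np.bitwise_xor.reduce(y[known])) if len(known) else 0
                progress = True
        if not progress or not np.any(y == -1):
            break
    return y  # any remaining -1 entries lie on a stopping set
```

Running it on any small code's parity-check matrix with a few positions set to -1 reproduces the behaviour described in the abstract: each pass resolves checks with a single erased neighbour until either no erasures remain or the residual erasures form a stopping set and the error floor appears.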
217

Statistical Models and Algorithms for Studying Hand and Finger Kinematics and their Neural Mechanisms

Castellanos, Lucia 01 August 2013
The primate hand, a biomechanical structure with over twenty kinematic degrees of freedom, has an elaborate anatomical architecture. Although the hand requires complex, coordinated neural control, it endows its owner with an astonishing range of dexterous finger movements. Despite a century of research, however, the neural mechanisms that enable finger and grasping movements in primates are largely unknown. In this thesis, we investigate statistical models of finger movement that can provide insights into the mechanics of the hand, and that can have applications in neural-motor prostheses, enabling people with limb loss to regain natural function of the hands. There are many challenges associated with (1) the understanding and modeling of the kinematics of fingers, and (2) the mapping of intracortical neural recordings into motor commands that can be used to control a Brain-Machine Interface. These challenges include: potential nonlinearities; confounded sources of variation in experimental datasets; and dealing with high degrees of kinematic freedom. In this work we analyze kinematic and neural datasets from repeated-trial experiments of hand motion, with the following contributions:
• We identified static, nonlinear, low-dimensional representations of grasping finger motion, with accompanying evidence that these nonlinear representations are better than linear representations at predicting the type of object being grasped over the course of a reach-to-grasp movement. In addition, we show evidence of better encoding of these nonlinear (versus linear) representations in the firing of some neurons collected from the primary motor cortex of rhesus monkeys.
• A functional alignment of grasping trajectories, based on total kinetic energy, as a strategy to account for temporal variation and to exploit a repeated-trial experiment structure.
• An interpretable model for extracting dynamic synergies of finger motion, based on Gaussian Processes, that decomposes and reduces the dimensionality of variance in the dataset. We derive efficient algorithms for parameter estimation, show accurate reconstruction of grasping trajectories, and illustrate the interpretation of the model parameters.
• Sound evidence of single-neuron decoding of interpretable grasping events, plus insights about the amount of grasping information extractable from just a single neuron.
• The Laplace Gaussian Filter (LGF), a deterministic approximation to the posterior mean that is more accurate than Monte Carlo approximations for the same computational cost, and that in an off-line decoding task is more accurate than the standard Population Vector Algorithm.
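For context on the decoding baseline mentioned in the last contribution, a minimal sketch of the standard Population Vector Algorithm is given below (the Laplace Gaussian Filter itself is not reproduced here). The preferred directions, baselines, and single-time-bin framing are illustrative assumptions, and normalization conventions for the PVA vary across the literature.

```python
import numpy as np

def population_vector_decode(rates, preferred_dirs, baselines=None):
    """Classic Population Vector Algorithm (PVA) for movement decoding.

    rates:          (n_neurons,) observed firing rates for one time bin.
    preferred_dirs: (n_neurons, d) unit vectors, each neuron's preferred
                    movement direction (fit beforehand from tuning curves).
    baselines:      optional (n_neurons,) baseline rates subtracted before weighting.
    Returns the decoded movement direction as a unit vector in R^d.
    """
    rates = np.asarray(rates, dtype=float)
    dirs = np.asarray(preferred_dirs, dtype=float)
    if baselines is not None:
        rates = rates - np.asarray(baselines, dtype=float)
    # weight each neuron's preferred direction by its (baseline-corrected) rate
    vec = (rates[:, None] * dirs).sum(axis=0)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```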
218

Understanding the Form and Function of Neuronal Physiological Diversity

Tripathy, Shreejoy J. 31 October 2013
For decades electrophysiologists have recorded and characterized the biophysical properties of a rich diversity of neuron types. This diversity of neuron types is critical for generating functionally important patterns of brain activity and implementing neural computations. In this thesis, I developed computational methods towards quantifying neuron diversity and applied these methods for understanding the functional implications of within-type neuron variability and across-type neuron diversity. First, I developed a means for defining the functional role of differences among neurons of the same type. Namely, I adapted statistical neuron models, termed generalized linear models, to precisely capture how the membranes of individual olfactory bulb mitral cells transform afferent stimuli to spiking responses. I then used computational simulations to construct virtual populations of biophysically variable mitral cells to study the functional implications of within-type neuron variability. I demonstrate that an intermediate amount of intrinsic variability enhances coding of noisy afferent stimuli by groups of biophysically variable mitral cells. These results suggest that within-type neuron variability, long considered to be a disadvantageous consequence of biological imprecision, may serve a functional role in the brain. Second, I developed a methodology for quantifying the rich electrophysiological diversity across the majority of the neuron types throughout the mammalian brain. Using semi-automated text-mining, I built a database, NeuroElectro, of neuron-type-specific biophysical properties extracted from the primary research literature. This data is available at http://neuroelectro.org, which provides a publicly accessible interface where this information can be viewed. Though the extracted physiological data is highly variable across studies, I demonstrate that knowledge of article-specific experimental conditions can significantly explain the observed variance. By applying simple analyses to the dataset, I find that there exist 5-7 major neuron super-classes which segregate on the basis of known functional roles. Moreover, by integrating the NeuroElectro dataset with brain-wide gene expression data from the Allen Brain Atlas, I show that biophysically-based neuron classes correlate highly with patterns of gene expression among voltage gated ion channels and neurotransmitters. Furthermore, this work lays the conceptual and methodological foundations for substantially enhanced data sharing in neurophysiological investigations in the future.
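The generalized linear model referred to here has a standard form: a linear stimulus filter and a post-spike history filter feed an exponential nonlinearity whose output drives Poisson spiking. The sketch below simulates that generic form; the filter shapes, bin size, and nonlinearity are assumptions for illustration, not the fitted mitral-cell parameters from the thesis.

```python
import numpy as np

def simulate_glm_neuron(stimulus, stim_filter, hist_filter, bias, dt=0.001, rng=None):
    """Generate spikes from a generic generalized linear model (GLM) neuron.

    Conditional intensity: lambda(t) = exp(k . x_recent + h . spike_history + b),
    i.e. a linear stimulus filter k, a post-spike history filter h, an
    exponential nonlinearity, and Poisson spike counts per time bin.
    """
    rng = rng or np.random.default_rng()
    n, kd, hd = len(stimulus), len(stim_filter), len(hist_filter)
    spikes = np.zeros(n, dtype=int)
    for t in range(n):
        x = np.asarray(stimulus[max(0, t - kd):t])[::-1]  # recent stimulus, newest first
        s = spikes[max(0, t - hd):t][::-1]                # recent spikes, newest first
        drive = bias
        drive += float(np.dot(stim_filter[:len(x)], x))   # stimulus filtering
        drive += float(np.dot(hist_filter[:len(s)], s))   # spike-history feedback
        rate = np.exp(drive)                              # exponential nonlinearity
        spikes[t] = rng.poisson(rate * dt)                # Poisson count in this bin
    return spikes
```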
219

Design of effective decoding techniques in network coding networks / Suné von Solms

Von Solms, Suné January 2013
Random linear network coding is widely proposed as the solution for practical network coding applications due to its robustness to random packet loss, packet delays, and network topology and capacity changes. In order to implement random linear network coding in practical scenarios where the encoding and decoding methods perform efficiently, the computationally complex coding algorithms associated with random linear network coding must be overcome. This research contributes to the field of practical random linear network coding by presenting new, low-complexity coding algorithms with low decoding delay. In this thesis we contribute to this research field by building on the current solutions available in the literature through the utilisation of familiar coding schemes combined with methods from other research areas, as well as by developing innovative coding methods. We show that by transmitting source symbols in predetermined and constrained patterns from the source node, the causality of the random linear network coding network can be used to create structure at the receiver nodes. This structure enables us to introduce an innovative decoding scheme with low decoding delay. The decoding method also proves resilient to the effects of packet loss on the structure of the received packets; its low decoding delay and resilience to packet erasures make it an attractive option for use in multimedia multicasting. We show that fountain codes can be implemented in RLNC networks without changing the complete coding structure of RLNC networks. By implementing an adapted encoding algorithm at strategic intermediate nodes in the network, the receiver nodes can obtain encoded packets that approximate the degree distribution of encoded packets required for successful belief propagation decoding. Previous work showed that the redundant packets generated by RLNC networks can be used for error detection at the receiver nodes. This error detection method can be implemented without an outer code; thus, it does not require any additional network resources. We analyse this method and show that it is only effective for single error detection, not correction. In this thesis the current body of knowledge and technology in practical random linear network coding is extended through the contribution of effective decoding techniques in practical network coding networks. We present both analytical and simulation results to show that the developed techniques can render low-complexity coding algorithms with low decoding delay in RLNC networks. / Thesis (PhD (Computer Engineering))--North-West University, Potchefstroom Campus, 2013
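As background for the decoding techniques discussed, the sketch below shows baseline random linear network coding over GF(2): a node transmits random linear combinations of the source packets, and a receiver recovers them by Gaussian elimination once the received coefficient vectors reach full rank. This is the standard RLNC building block, not the thesis's structured low-delay scheme or its fountain-code adaptation.

```python
import numpy as np

def rlnc_encode(source_packets, n_coded, rng=None):
    """Random linear network coding over GF(2).

    source_packets: (k, L) array of k source packets of L bits each.
    Returns n_coded coded packets together with their GF(2) coefficient vectors.
    """
    rng = rng or np.random.default_rng()
    k = source_packets.shape[0]
    coeffs = rng.integers(0, 2, size=(n_coded, k))   # random GF(2) coefficients
    coded = coeffs.dot(source_packets) % 2            # linear combinations mod 2
    return coeffs, coded

def rlnc_decode(coeffs, coded):
    """Gaussian elimination over GF(2); returns the k source packets if the
    received coefficient vectors have full rank, otherwise None."""
    A = np.hstack([coeffs % 2, coded % 2]).astype(int)
    k = coeffs.shape[1]
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            return None                               # rank deficient: wait for more packets
        A[[row, pivot]] = A[[pivot, row]]             # move pivot row into place
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2            # eliminate this column elsewhere
        row += 1
    return A[:k, k:]                                  # decoded source packets
```

For example, `rlnc_decode(*rlnc_encode(np.eye(3, 4, dtype=int), n_coded=5))` recovers the three source packets whenever the five random coefficient vectors span GF(2)^3.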
