71 |
10 sekunder i rampljuset : Hur företag använder Snapchat som strategisk kommunikation och varumärkesbyggande [10 seconds in the spotlight: How companies use Snapchat for strategic communication and brand building]. Nodbrant, Ellen; Parfält, Maria. January 2017 (has links)
Social media play a significant role in people's daily communication and life, and companies, too, use them to reach a broader audience. This study explores whether Snapchat can be a useful platform for companies in their strategic communication, especially regarding corporate branding. Snapchat is the fastest-growing social medium today, and it handles content in a completely different way than other major social media. This paper shows how Arla Sweden uses Snapchat to communicate with a specific target group, create engagement, and strengthen brand identity. The study focuses on how users understand Arla's content and is divided into two parts, using two qualitative methods. The first part focuses on Arla's use of Snapchat and is based on an interview with an expert from Arla's social media department. The second aims to understand the user experience and draws on four focus group interviews. The study builds on theories of corporate branding and Stuart Hall's theory of encoding/decoding. The results show that users are critical of companies using Snapchat: they view it as a private medium for friends and are negative toward companies using it as a marketing tool. However, the study shows a positive result regarding users' first experience with Arla's content. The conclusion is that Snapchat can be used for corporate branding as long as companies make sure the content is interesting enough for the target group.
|
72 |
Who you gonna Call? Not Ghostbusters! : En genusmedveten analys av varför remaken Ghostbusters blivit hatad [A gender-aware analysis of why the remake Ghostbusters is hated]. Hamrin, Linnea; Holmstedt, Amanda. January 2017 (has links)
This essay is about the hate that Ghostbusters (2016) has received. To find out why the film is hated, a reception study was conducted, using YouTube and IMDb to collect hate comments and reviews. Ghostbusters (2016) is a remake of the original film of the same name from 1984, which had only men in the main roles; a major difference is that the remake casts women in the main roles instead. The trailer for Ghostbusters (2016) received many "downvotes" on YouTube, and thereafter the hatred streamed in. Why is the film hated? The theories used in this study concern representation, attraction, and fat studies. Stuart Hall's encoding/decoding model was used as the method to analyze the reception of the film. The essay contains content that can be connected to cultural studies and feminism.
|
73 |
Combining outputs from machine translation systems. Salim, Fahim. January 2011 (has links)
Combining Outputs from Machine Translation Systems. By Fahim A. Salim. Supervised by Ing. Zdenek Zabokrtsky, Ph.D., Institute of Formal and Applied Linguistics, Charles University in Prague, 2010. Abstract: Due to massive ongoing research, there are many paradigms of Machine Translation (MT) systems with diverse characteristics. Even systems designed on the same paradigm may perform differently in different scenarios, depending on the training data used and other design decisions. All MT systems have their strengths and weaknesses, and often the weakness of one system is the strength of another. No single approach or system seems to always perform best; combining different approaches or systems, i.e. creating hybrid systems that capitalize on their strengths and minimize their weaknesses, is therefore an ongoing trend in MT research. But even hybrid systems have limitations, and they too tend to perform differently in different scenarios. Thanks to the World Wide Web and open source, one can nowadays access many different and diverse MT systems, so it is practical to have techniques that combine the translations of different MT systems and produce a translation better than any of the individual systems...
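The abstract describes system combination only in general terms; as a hedged illustration (not the thesis's actual method), the sketch below selects, from several systems' outputs, the hypothesis that agrees most with the others, a crude consensus-style selection using unigram F1. All sentences and system labels are made up.

```python
# Illustrative consensus selection over MT system outputs (not the thesis's
# actual combination method). Sentences and system labels are made up.
from collections import Counter

def unigram_f1(hyp, ref):
    """Token-level F1 overlap between two translations."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(h.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def select_by_consensus(hypotheses):
    """Pick the hypothesis with the highest mean agreement with the rest."""
    def score(h):
        others = [o for o in hypotheses if o is not h]
        return sum(unigram_f1(h, o) for o in others) / len(others)
    return max(hypotheses, key=score)

outputs = [
    "the cat sat on the mat",     # system A
    "the cat is sitting on mat",  # system B
    "cat sat on mat",             # system C
]
print(select_by_consensus(outputs))  # -> the cat sat on the mat
```

In practice, combination schemes range from such sentence-level selection up to phrase-level confusion-network combination; the selection step here merely shows how agreement among systems can beat trusting any single one.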
|
74 |
High-Performance Decoder Architectures for Low-Density Parity-Check Codes. Zhang, Kai. 09 January 2012 (has links)
Low-Density Parity-Check (LDPC) codes, invented by Gallager back in the 1960s, have attracted considerable attention recently. Compared with other error-correcting codes, LDPC codes are well suited to wireless, optical, and magnetic recording systems due to their near-Shannon-limit error-correcting capacity, high intrinsic parallelism, and high throughput potential. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMax), 802.15.3c (WPAN), DVB-S2, and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors in sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput, and rate flexibility. In this work we investigate tradeoffs between these four aspects and develop several decoder architectures that improve one or more of them while maintaining acceptable values for the others. Firstly, we present a high-throughput decoder design for Quasi-Cyclic (QC) LDPC codes. Two new techniques are proposed: parallel layered decoding architecture (PLDA) and critical path splitting. PLDA enables parallel processing of all layers by establishing dedicated message-passing paths among them, so the decoder avoids a large crossbar-based interconnect network. Critical path splitting carefully adjusts the starting point of each layer to maximize the time intervals between adjacent layers, so that the critical path delay can be split into pipeline stages. Furthermore, min-sum and loosely coupled algorithms are employed for area efficiency. As a case study, a rate-1/2 2304-bit irregular LDPC decoder is implemented as an ASIC in a 90 nm CMOS process.
The decoder achieves an input throughput of 1.1 Gbps, a 3- to 4-fold improvement over state-of-the-art LDPC decoders, while maintaining a comparable chip size of 2.9 mm^2. Secondly, we present a high-throughput decoder architecture for rate-compatible (RC) LDPC codes that supports arbitrary code rates between the rate of the mother code and 1. While the original PLDA lacks rate flexibility, the problem is solved gracefully by incorporating a puncturing scheme. Simulation results show that the selected puncturing scheme introduces a BER degradation of less than 0.2 dB compared with the dedicated codes for different rates specified in the IEEE 802.16e (WiMax) standard. PLDA is then employed for the high-throughput decoder design. As a case study, an RC-LDPC decoder based on the rate-1/2 WiMax LDPC code is implemented in a 90 nm CMOS process; it achieves an input throughput of 975 Mbps and supports any rate between 1/2 and 1. Thirdly, we develop a low-complexity VLSI architecture and implementation for the LDPC decoder used in China Multimedia Mobile Broadcasting (CMMB) systems. An area-efficient layered decoding architecture based on the min-sum algorithm is incorporated in the design. A novel split-memory architecture efficiently handles the weight-2 submatrices that are rarely seen in conventional LDPC decoders. In addition, the check-node processing unit is highly optimized to minimize complexity and computing latency while facilitating a reconfigurable decoding core. Finally, we propose an LDPC-decoder-like channel detector for sparse ISI channels using belief propagation (BP). The complexity of BP-based detection depends only on the number of nonzero interferers, making it well suited to sparse ISI channels, which are characterized by long delay but a small fraction of nonzero interferers. The layered decoding algorithm, popular in LDPC decoding, is also adopted in this work; simulation results show that it doubles the convergence speed of the iterative belief propagation process. Exploiting the special structure of the connections between check nodes and variable nodes on the factor graph, we propose an effective detector architecture for generic sparse ISI channels to facilitate practical application of the proposed detection algorithm. The architecture is also reconfigurable, so that the connections on the factor graph can be switched flexibly in time-varying ISI channels.
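The min-sum algorithm that this abstract repeatedly builds on can be sketched in a few lines of software. This is an illustrative floating-point model only, not the dissertation's hardware architecture; the (7,4) Hamming parity-check matrix and the LLR values are made-up demo inputs.

```python
# Software sketch of flooding min-sum decoding (illustration only; the
# dissertation's contribution is the hardware architecture, not this algorithm).
import numpy as np

def min_sum_decode(H, llr, iters=20):
    """H: binary parity-check matrix (m x n); llr: channel LLRs, where a
    positive value means bit 0 is more likely. Returns hard decisions."""
    m, n = H.shape
    v2c = H * llr  # initial variable-to-check messages = channel LLRs
    hard = (llr < 0).astype(int)
    for _ in range(iters):
        c2v = np.zeros((m, n))
        for i in range(m):
            idx = np.nonzero(H[i])[0]
            mags = np.abs(v2c[i, idx])
            signs = np.sign(v2c[i, idx])
            total_sign = np.prod(signs)
            for t, j in enumerate(idx):
                # sign product and min magnitude over the *other* inputs
                c2v[i, j] = total_sign * signs[t] * np.delete(mags, t).min()
        totals = llr + c2v.sum(axis=0)
        hard = (totals < 0).astype(int)
        if not np.any((H @ hard) % 2):  # all parity checks satisfied
            break
        v2c = H * (totals - c2v)  # extrinsic variable-to-check update
    return hard

# Demo: (7,4) Hamming parity-check matrix, all-zero codeword transmitted,
# first bit received with a weak wrong sign.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([-1.0, 2, 2, 2, 2, 2, 2])
print(min_sum_decode(H, llr))  # recovers the all-zero codeword
```

A layered schedule, as used in the dissertation, would process the rows of H sequentially and update `totals` after each row, which is what roughly halves the number of iterations needed to converge.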
|
75 |
Fluency as a bridge to comprehension: an efficacy study of the RAVE-O literacy program. Schmidt, Maxine Katarina. 30 April 2019 (has links)
The purpose of this study was to investigate the effectiveness of a theoretically grounded reading intervention for children with reading difficulties. Participants were 8 to 10 years old, drawn from a community-based program for children with learning disabilities, and a single-case research (SCR) design was employed. An adapted version of the RAVE-O intervention was delivered, focusing on instruction in phonology, orthography, semantics, syntax, and morphology to build children's word-level fluency skills. Norm-referenced word-level reading, decoding, and reading comprehension measures were collected at pre- and post-test, and progress-monitoring data were collected via curriculum-based measures. Overall results based on percentage of non-overlapping data (PND) analyses indicated moderate effects for decoding fluency and reading comprehension and small effects for decoding accuracy and reading fluency. Implications for educators and professionals working with elementary school students identified with reading difficulties are discussed.
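The PND metric used above is simple to compute: the share of treatment-phase data points that exceed the best baseline point. A minimal sketch with made-up progress-monitoring numbers (not the study's data):

```python
# Minimal PND computation with made-up progress-monitoring scores
# (not data from the study).

def pnd(baseline, treatment, higher_is_better=True):
    """Percentage of treatment-phase points that do not overlap with the
    baseline, i.e. that beat the most extreme baseline point."""
    if higher_is_better:
        best = max(baseline)
        non_overlap = sum(x > best for x in treatment)
    else:
        best = min(baseline)
        non_overlap = sum(x < best for x in treatment)
    return 100.0 * non_overlap / len(treatment)

baseline = [12, 14, 13]          # e.g. words correct per minute
treatment = [15, 18, 13, 20, 17]
print(pnd(baseline, treatment))  # -> 80.0 (4 of 5 points exceed 14)
```

Conventionally, PND above 90 is read as a large effect, 70-90 as moderate, and 50-70 as small, which is the scale the study's "moderate" and "small" labels refer to.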
|
76 |
Armazenamento e reconstrução de imagens comprimidas via codificação e decodificação / Storage and reconstruction of images by coding and decoding. Travassos, Natalia Caroline Lopes [UNESP]. 19 December 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / This work presents an encoding algorithm for compressed images that represents each pixel
of an image and its coordinates by a single value. For each pixel and its coordinates, this
unique value is stored in a vector that is used in the reconstruction of the image without
its quality being compromised. The proposed method improves on two previously proposed algorithms, one of which is itself an improvement of the first. The algorithm presented in this work differs from the other two in the following respects: a significant reduction of the space required for image storage, the determination of an exact compression rate, and a reduction of the decoding processing time. A further advance was the compression of color images using the wavemenu tool in combination with the algorithm that determines the compression ratio.
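The abstract does not spell out how a pixel and its coordinates map to a single value, so the following is only one plausible, lossless scheme (an assumption, not the thesis's method): bit-packing row, column, and an 8-bit gray value into one integer, which is trivially invertible.

```python
# One plausible lossless mapping (an assumption, not the thesis's scheme):
# pack row, column, and an 8-bit gray value into a single integer.

ROW_BITS, COL_BITS, VAL_BITS = 16, 16, 8

def encode_pixel(row, col, value):
    """Combine coordinates and pixel value into one integer."""
    assert 0 <= value < (1 << VAL_BITS) and 0 <= col < (1 << COL_BITS)
    return (row << (COL_BITS + VAL_BITS)) | (col << VAL_BITS) | value

def decode_pixel(code):
    """Invert encode_pixel exactly; no information is lost."""
    value = code & ((1 << VAL_BITS) - 1)
    col = (code >> VAL_BITS) & ((1 << COL_BITS) - 1)
    row = code >> (COL_BITS + VAL_BITS)
    return row, col, value

print(decode_pixel(encode_pixel(123, 456, 78)))  # -> (123, 456, 78)
```

Storing such codes in a vector preserves exactly the information needed to rebuild the image, which matches the abstract's claim that reconstruction does not compromise quality.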
|
77 |
The Differential Contributions of Auditory-verbal and Visuospatial Working Memory on Decoding Skills in Children Who Are Poor Decoders. Squires, Katie E. 01 May 2013 (has links)
This study investigated the differential contribution of auditory-verbal and visuospatial working memory (WM) on decoding skills in second- and fifth-grade children identified with poor decoding. Thirty-two second-grade students and 22 fifth-grade students completed measures that assessed simple and complex auditory-verbal and visuospatial memory, phonological awareness, orthographic knowledge, listening comprehension and verbal and nonverbal intelligence. Bivariate correlations revealed that complex auditory-verbal WM was moderately and significantly correlated to word attack at second grade. The simple auditory-verbal WM measure was moderately and significantly correlated to word identification in fifth grade. The complex visuospatial WM measures were not correlated to word identification or word attack for second-grade students. However, for fifth-grade participants, there was a negative correlation between a complex visuospatial WM measure and word attack and a positive correlation between orthographic knowledge and word identification. Different types of WM measures predicted word identification and word attack ability in second and fifth graders. We wondered whether the processes involved in visuospatial memory (the visuospatial sketchpad) or auditory-verbal memory (the phonological loop), acting alone, would predict decoding skills. They did not. Similarly, the cognitive control abilities related to executive functions (measured by our complex memory tasks), acting alone, did not predict decoding at either grade. The optimal prediction models for each grade involved various combinations of storage, cognitive control, and retrieval processes. Second graders appeared to rely more on the processes involved in auditory-verbal WM when identifying words, while fifth-grade students relied on the visuospatial domains to identify words. 
For second-grade students, both complex visuospatial and auditory-verbal WM predicted word attack ability, but by fifth grade, only the visual domains predicted word attack. This study has implications for reading instruction: it was not the individual contribution of auditory-verbal or visuospatial WM that best predicted reading ability in second- and fifth-grade decoders, but rather a combination of factors. Training WM in isolation from other skills does not increase reading ability; in fact, for young students, too much WM storage can interfere with learning to decode.
|
78 |
Aspects of List-of-Two Decoding. Eriksson, Jonas. January 2006 (has links)
We study the problem of list decoding with focus on the case when the list size is limited to two. Under this restriction we derive general lower bounds on the maximum possible size of a list-of-2-decodable code. We study the set of correctable error patterns in an attempt to obtain a characterization. For a special family of Reed-Solomon codes, which we identify and name 'class-I codes', we give a weight-based characterization of the correctable error patterns under list-of-2 decoding. As a tool in this analysis we use the theoretical framework of Sudan's algorithm. The characterization is used in an exact calculation of the probability of transmission error in the symmetric channel when list-of-2 decoding is used. The results of the analysis and complementary simulations for QAM systems show that a list-of-2 decoding gain of nearly 1 dB can be achieved.

Further, we study Sudan's algorithm for list decoding of Reed-Solomon codes for the special case of class-I codes. For these codes, algorithms are suggested for both the first and second step of Sudan's algorithm, and hardware solutions for both steps based on the derived algorithms are presented.
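Sudan's algorithm itself is too involved for a short sketch, but the notion of list decoding it implements can be illustrated by brute force on a toy Reed-Solomon code over GF(7). The parameters are chosen for illustration and are not the thesis's class-I codes: with minimum distance n - k + 1 = 5, any received word has at most one codeword within Hamming distance 2, and a list decoder enumerates all codewords within a chosen radius.

```python
# Brute-force list decoding of a toy Reed-Solomon code over GF(7); parameters
# are illustrative, not the thesis's class-I codes.
from itertools import product

P, N, K = 7, 6, 2               # field size, code length, dimension
POINTS = list(range(1, N + 1))  # distinct evaluation points in GF(7)

def rs_encode(msg):
    """Evaluate the message polynomial (coefficients msg) at POINTS."""
    return tuple(sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
                 for x in POINTS)

def list_decode(received, radius):
    """All codewords (as messages) within Hamming distance `radius`."""
    hits = []
    for msg in product(range(P), repeat=K):
        distance = sum(a != b for a, b in zip(rs_encode(msg), received))
        if distance <= radius:
            hits.append(msg)
    return hits

cw = rs_encode((1, 1))          # the polynomial 1 + x
garbled = (0, 0) + cw[2:]       # introduce two symbol errors
print(list_decode(garbled, 2))  # -> [(1, 1)] : unique within radius 2
```

Sudan's algorithm achieves the same enumeration in polynomial time, via bivariate interpolation and factorization, which is why it matters for radii beyond the unique-decoding bound where the list can legitimately contain more than one codeword.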
|
79 |
Hardware Accelerator for Duo-binary CTC decoding : Algorithm Selection, HW/SW Partitioning and FPGA Implementation. Bjärmark, Joakim; Strandberg, Marco. January 2006 (has links)
Wireless communication always struggles with errors in the transmission. The digital data received from the radio channel are often erroneous due to thermal noise and fading. The error rate can be lowered by using higher transmission power or by using an effective error-correcting code. Power consumption and limits on electromagnetic radiation are two of the main problems with handheld devices today, and an efficient error-correcting code lowers the required transmission power and therefore also the power consumption of the device.

Duo-binary CTC is an improvement on the innovative turbo codes presented in 1996 by Berrou and Glavieux and is in use in many of today's standards for radio communication, e.g. IEEE 802.16 (WiMAX) and DVB-RCS. This report describes the development of a duo-binary CTC decoder and the problems encountered during the process, including design issues and algorithm choices made during the design.

An implementation in VHDL has been written for Altera's Stratix II S90 FPGA, and a reference model has been made in Matlab. The model has been used to simulate bit error rates for different implementation alternatives and as a bit-true reference for hardware verification.

The final result is a duo-binary CTC decoder compatible with Altera's Stratix II designs and a reference model that can be used when simulating the decoder alone or the whole signal processing chain. Block sizes, puncturing rates, and the number of iterations are dynamically configured between blocks. Before synthesis it is possible to choose how many decoders will work in parallel and with how many bits the soft input will be represented. The circuit has been run at 100 MHz in the lab, which gives a throughput of around 50 Mbit/s with four decoders working in parallel. This report describes the implementation, including its development, background, and future possibilities.
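One of the pre-synthesis choices mentioned above is how many bits represent the soft input. A hedged sketch of the usual uniform LLR quantization that such a trade-off study would simulate follows; the step size and default bit width here are assumptions, not values from the report.

```python
# Uniform quantization of a soft input (LLR) to a signed fixed-point value.
# Bit width and step size are assumed design parameters, not from the report.

def quantize_llr(llr, bits=4, step=0.5):
    """Round to the nearest level and saturate to the representable range."""
    max_level = 2 ** (bits - 1) - 1          # e.g. levels -7..7 for 4 bits
    level = round(llr / step)
    level = max(-max_level, min(max_level, level))
    return level * step

print(quantize_llr(2.3))    # -> 2.5
print(quantize_llr(-10.0))  # -> -3.5 (saturated)
```

Running the bit-true Matlab-style model with different `bits` values and comparing BER curves is exactly the kind of experiment that fixes this parameter before the VHDL is frozen.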
|
80 |
Elever från samma klass? : En studie av hur elever i en skolklass på Komvux tolkar filmen Crash [Students from the same class? A study of how students in a Komvux school class interpret the film Crash]. Bergström, Ola; Strömvall, Johan. January 2010 (has links)
In this essay we have studied how a film activates people's social and cultural positions. We showed the film Crash to eight adult-education (Komvux) students, followed by a qualitative interview with them. The students answered questions about their own background, the film's message and characters, how they perceived the film and the reality it proposes, as well as questions about their own future. The informants' answers helped us make visible how relations of superiority and subordination are constructed in the film. With the help of our background and analytical tools we interacted with our informants, and they contributed perspectives we could never have seen by applying theory alone. One prominent finding was that they applied the problems raised by the American-made film to Swedish society. We concluded that the film, as a popular-cultural medium, helps reproduce old power structures and prejudices. Although this was not the filmmaker's intention, the result is prejudices and stereotyped notions that racial differences exist. With this study we have made visible that the film contributes to reproducing power structures. Film, with its popular influence, could become a tool for rethinking class, gender, ethnicity, and more.
|