111

The Effect of Encoding Specificity on Learning in a Multimedia Environment

LaBoone, Emet L. 09 May 2006 (has links)
The purpose of this study was to examine the effect of encoding specificity on learning in a multimedia environment. According to the theory of encoding specificity, there should be a relationship between the modality in which a learner encodes information into memory and the modality used to assess the learner's knowledge. Modality attributes for the purposes of this study included visual information (animation) and verbal information (narration and text). Two hundred and fifteen students viewed a computer animation on lightning formation presented in one of three modalities (animation with narration, animation with text, text only). Following the instruction, students were assessed on recall and transfer in one of the same three modalities. A 3 Encoding/Study x 3 Retrieval/Test (animation with narration, animation with text, text only) full-factorial, posttest-only design was used to assess the effects of matched and mismatched encoding and retrieval modalities in a multimedia environment. Encoding specificity posits an interaction between the conditions at encoding and retrieval: a to-be-remembered item is effective at retrieval only if its cue was specifically encoded at the time of storage. The present study, however, found little support for the claim of encoding specificity based on modality. Support was found only in the animation-with-text (AT-AT) matched recall group relative to the mismatched groups, and no significant differences were found in any of the matched/mismatched transfer conditions. / Ph. D.
112

Auf dem Weg zu einem TEI-Austauschformat für ägyptisch-koptische Texte / Towards a TEI exchange format for Egyptian-Coptic texts

Gerhards, Simone, Schweitzer, Simon 20 April 2016 (has links) (PDF)
Several large-scale Egyptological projects (TLA: http://aaew.bbaw.de/tla; Ramses: http://ramses.ulg.ac.be/; Rubensohn: http://elephantine.smb.museum/; Karnak: http://www.cfeetk.cnrs.fr/karnak/) are producing annotated corpora. For data exchange between them, a standardized interchange format based on TEI is urgently needed, and these projects have joined forces to develop a common proposal. In our talk we present the current state of the discussion: What serves as the base text for the markup, the hieroglyphic annotation or the transliteration of the text? How should the different scripts be handled? Can the metadata in the header be standardized using shared thesauri? What should be annotated inline, and what stand-off?
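
To make the inline versus stand-off question concrete, the following Python sketch builds a minimal TEI fragment in both styles using only the standard library. The TEI namespace, <w>, <standOff>, <spanGrp>, and <span> are real TEI constructs, but the specific word, attributes, and annotation content are illustrative assumptions, not the working group's actual proposal.

    import xml.etree.ElementTree as ET

    TEI = "http://www.tei-c.org/ns/1.0"
    XML_ID = "{http://www.w3.org/XML/1998/namespace}id"
    ET.register_namespace("", TEI)

    def tag(name):
        return f"{{{TEI}}}{name}"

    # Inline annotation: the transliteration is the base text, and each
    # word element carries its analysis directly as attributes.
    text = ET.Element(tag("text"))
    body = ET.SubElement(text, tag("body"))
    w = ET.SubElement(body, tag("w"), {XML_ID: "w1", "lemma": "nfr"})
    w.text = "nfr"

    # Stand-off annotation: the base text is left untouched, and a
    # separate block points back to it by xml:id reference.
    standoff = ET.Element(tag("standOff"))
    span_grp = ET.SubElement(standoff, tag("spanGrp"))
    span = ET.SubElement(span_grp, tag("span"), {"target": "#w1"})
    span.text = "gloss: 'good, beautiful'"

    print(ET.tostring(text, encoding="unicode"))
    print(ET.tostring(standoff, encoding="unicode"))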
113

Pucken glider in... : En jämförelse av två hockeyklubbars varumärkesidentitet och varumärkesimage / The puck slides in...: A comparison of two hockey clubs' brand identity and brand image

Christiansson, Josefine, Jansson, Josephine, Lindster, Julia January 2016 (has links)
That people do not understand what others are saying can have many causes. One obvious reason is that two individuals do not speak the same language. But even when two individuals understand the words in a conversation, it is not certain that the sender's message is interpreted by the receiver in the intended way; communication is decisive for creating understanding between two parties. Stuart Hall (1973) problematizes the communication process and the chain from the moment a message is produced, through its distribution, to its final consumption. The agreement between encoder and decoder is never given: speaking the same language consequently does not mean that the parties fully understand each other, and many other factors influence how a message is decoded by the receiver. We have chosen to examine how two hockey clubs communicate with their supporters. The aim of the study is to compare the two brands, looking for similarities and differences between each brand's identity and its image, and to study the possible brand congruence from a communicative perspective. The intention is to find out how communicative processes can affect brand identity, brand image, and the relationship between them. The theoretical framework consists of marketing communication, encoding and decoding, and brand theory, the latter supplemented with material on sports and brands, brand identity, and brand image. The study combines a qualitative and a quantitative method: group interviews were conducted to establish the brands' identities, while questionnaires were used to identify the brands' images. One of the conclusions presented in the study is that there is both congruence and difference in the match between brand identity and brand image. In summary, resources constitute the major difference between the clubs in their capacity to serve their audiences well.
114

Ultra High Compression For Weather Radar Reflectivity Data

Makkapati, Vishnu Vardhan 11 1900 (has links)
Weather is a major contributing factor in aviation accidents, incidents, and delays. Doppler weather radar has emerged as a potent tool to observe weather. Aircraft carry an onboard radar, but its range and angular resolution are limited, whereas networks of ground-based weather radars provide extensive coverage of weather over large geographic regions. It would therefore be helpful if these data could be transmitted to the pilot. However, the data are highly voluminous, and the bandwidth of ground-air communication links is limited and expensive, so the data must be compressed to an extent where they are suitable for transmission over low-bandwidth links. Several methods have been developed to compress pictorial data, but general-purpose schemes do not take the nature of the data into account and hence do not yield high compression ratios. This thesis develops a scheme for extreme compression of weather radar data that does not significantly degrade the meteorological information contained in the data. The method is based on contour encoding: it approximates a contour by a set of systematically chosen 'control' points that preserve its fine structure up to a certain level. The contours may be obtained by thresholding at National Weather Service (NWS) or custom reflectivity levels. This process may yield region and hole contours, enclosing 'high' or 'low' areas, which may be nested; a tag bit is used to label region and hole contours. The control-point extraction method first obtains a smoothed reference contour by averaging the original contour. The points on the original contour with maximum deviation from the smoothed contour between the crossings of the two contours are then designated as control points. Additional control points are added midway between a control point and the crossing points on either side of it if the segment between the crossing points exceeds a certain length. The control points, referenced with respect to the top-left corner of each contour for compact quantification, are transmitted to the receiving end, where the contour is retrieved using spline interpolation. Region and hole contours are identified using the tag bit, and the pixels between the region and hole contours at a given threshold level are filled with the corresponding color. This is repeated until all contours at a given threshold level are exhausted, and the process is carried out for all other thresholds, resulting in a composite picture of the reconstructed field. Extensive studies have been conducted using metrics such as compression ratio, fidelity of reconstruction, and visual perception; in particular, the effects of the smoothing factor, the degree of spline interpolation, and the choice of thresholds are studied. A smoothing percentage of about 10% is shown to be optimal for most data, and spline interpolation of degree 2 is found to be best suited for smooth contour reconstruction. Augmenting the NWS thresholds improves visual perception, but at the expense of a decrease in compression ratio. Two enhancements to the basic method are proposed: adjustments to the control points to achieve better reconstruction, and bit manipulations on the control points to obtain higher compression. The spline interpolation inherently tends to move the reconstructed contour away from the control points; this is compensated to some extent by stretching the control points away from the smoothed reference contour, with the amount and direction of stretch optimized against actual data fields. In the bit-manipulation study, the effects of discarding the least significant bits of the control-point addresses are analyzed in detail. Simple bit truncation introduces a bias in the contour description and reconstruction, which is removed to a great extent by a bias-compensation mechanism. The results are compared with other methods devised for encoding weather radar contours.
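
A minimal sketch of the control-point idea in Python with NumPy: smooth a closed contour with a circular moving average, find where the original crosses the smoothed reference, and keep the point of maximum deviation between consecutive crossings. Representing the contour as a 1-D radial-distance signal around its centroid is a simplifying assumption, and the window size is arbitrary; the thesis's segment-length test, midpoint insertion, and stretching steps are omitted.

    import numpy as np

    def control_points(radius, window=11):
        """radius: closed contour sampled as radial distance from its
        centroid (1-D signal). Returns indices of control points."""
        n = len(radius)
        # Smoothed reference contour: circular moving average.
        kernel = np.ones(window) / window
        smooth = np.convolve(np.tile(radius, 3), kernel, mode="same")[n:2 * n]
        dev = radius - smooth
        # Crossings of original and reference: sign changes of the deviation.
        cross = np.where(np.diff(np.signbit(dev).astype(int)) != 0)[0]
        points = []
        for k in range(len(cross)):
            a, b = cross[k], cross[(k + 1) % len(cross)]
            seg = np.arange(a, b if b > a else b + n) % n  # wrap around
            # Keep the point of maximum deviation within this segment.
            points.append(int(seg[np.argmax(np.abs(dev[seg]))]))
        return sorted(set(points))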
115

Algoritmos e desenvolvimento de arquitetura para codificação binária adaptativa ao contexto para o decodificador H.264/AVC / Algorithms and architecture design for context-adaptive binary arithmetic coder for the H.264/AVC decoder

Depra, Dieison Antonello January 2009 (has links)
The technological innovations of recent decades have transformed the ways people interact and, above all, communicate. Advances in information technology and communications have opened new horizons, creating demands that did not exist before. In this context, high-definition digital video for real-time applications has gained prominence. However, the challenges involved in handling the amount of information needed to represent such video drive research in industry and academia to minimize the impact on the bandwidth required for transmission and on the space required for storage. Several video compression standards have been developed to address these problems, and the H.264/AVC standard is the state of the art. H.264/AVC introduces significant gains in compression rate over its predecessors, but these gains come with an increase in the computational complexity of the tools it employs, such as Context-Adaptive Binary Arithmetic Coding (CABAC). The computational requirements of H.264/AVC are high enough that a pure software implementation (on current general-purpose processors, at least) is impractical for real-time encoding or decoding of high-definition video sequences. This dissertation presents a hardware architecture for the CABAC decoding process (CABAD) as specified by the H.264/AVC standard, aiming to help solve the problems involved in decoding high-definition video in real time. An introduction to fundamental concepts of data compression and digital video is presented, along with a discussion of the main features of the H.264/AVC standard. The CABAC algorithms and the CABAD decoding flow are described in detail. To support the design decisions, a large set of experiments was conducted to analyze the static and dynamic behavior of the bitstream during CABAC decoding. The proposed hardware architecture is presented in detail, and its performance is compared with other proposals found in the literature. The results show that the architecture achieves its goal, processing high-definition video (HD1080p) in real time. Furthermore, the experiments yielded novel observations that identify the key points for minimizing the bottlenecks inherent in the set of algorithms that make up CABAD.
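
To illustrate the principle behind CABAC decoding, here is a deliberately simplified adaptive binary arithmetic decoder in Python. The control flow mirrors the H.264/AVC regular decoding process (initial range of 510, 9-bit offset, MPS/LPS decision, renormalization at 256), but a plain probability with an exponential update rule stands in for the standard's rangeTabLPS and state-transition tables, so this is a sketch of the idea, not the standard's table-driven algorithm.

    class Context:
        """One adaptive context: LPS probability plus current MPS value."""
        def __init__(self):
            self.p_lps = 0.5
            self.mps = 0

        def update(self, was_mps):
            # Exponential adaptation in place of the H.264 state machine.
            if was_mps:
                self.p_lps = max(0.01, self.p_lps * 0.95)
            else:
                self.p_lps += 0.05 * (1.0 - self.p_lps)
                if self.p_lps > 0.5:       # LPS became more likely: swap roles
                    self.mps ^= 1
                    self.p_lps = 1.0 - self.p_lps

    class ArithmeticDecoder:
        def __init__(self, bitstream):
            self.bits = iter(bitstream)    # iterable of 0/1 ints
            self.range_ = 510              # H.264 initial codIRange
            self.offset = 0
            for _ in range(9):             # H.264 initializes with 9 bits
                self.offset = (self.offset << 1) | next(self.bits, 0)

        def decode(self, ctx):
            # Subdivide the range according to the context's LPS probability.
            r_lps = max(2, int(self.range_ * ctx.p_lps))
            self.range_ -= r_lps
            if self.offset >= self.range_:     # LPS decoded
                bit = ctx.mps ^ 1
                self.offset -= self.range_
                self.range_ = r_lps
                ctx.update(False)
            else:                              # MPS decoded
                bit = ctx.mps
                ctx.update(True)
            while self.range_ < 256:           # renormalization
                self.range_ <<= 1
                self.offset = (self.offset << 1) | next(self.bits, 0)
            return bit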
118

Making Video Streaming More Efficient Using Per-Shot Encoding

Gådin, Douglas, Hermanson, Fanny, Marhold, Anton, Sikström, Joel, Winman, Johan January 2022 (has links)
The demand for streaming high-quality video increases each year, and the energy used by consumers is estimated to increase by 23% from 2020 to 2030, with increased data transmission as the largest contributor. To minimise data transmission, a video encoding method called per-shot encoding can be used, which splits a video into smaller segments called shots and processes each one separately. With this method, the bitrate of a video can be reduced without compromising quality, so less data needs to be transmitted and less energy is consumed. In this project, a website that interfaces with a per-shot encoder is implemented. To evaluate the per-shot encoder, both visual quality and bitrate are measured quantitatively. The evaluation shows that the bitrate is reduced by up to 2.5% for a selection of videos without compromising the viewing experience, a substantial decrease compared to alternative methods.
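
A sketch of the per-shot idea in Python: split the source at detected shot boundaries, encode each shot at several quality settings, and keep the cheapest encode that still meets a quality target. The helpers detect_shots, encode, measure_vmaf, and concatenate are hypothetical stand-ins for a shot detector, an encoder invocation, a quality metric, and a muxer; the CRF ladder and VMAF target are assumed values, and none of this is the project's actual implementation.

    def per_shot_encode(source, crf_ladder=(18, 23, 28, 33), vmaf_target=93.0):
        """Encode each shot at the highest CRF (lowest bitrate) that still
        meets the quality target, then concatenate the results."""
        encoded_shots = []
        for shot in detect_shots(source):                # hypothetical detector
            best = None
            for crf in sorted(crf_ladder, reverse=True): # try cheapest first
                candidate = encode(shot, crf=crf)        # hypothetical encoder
                if measure_vmaf(candidate, shot) >= vmaf_target:
                    best = candidate
                    break
            if best is None:                             # nothing met the target:
                best = encode(shot, crf=min(crf_ladder)) # fall back to best quality
            encoded_shots.append(best)
        return concatenate(encoded_shots)                # hypothetical muxer

The point of the per-shot structure is visible in the loop: an easy, static shot passes the quality gate at a high CRF and costs few bits, while a complex shot falls through to a lower CRF, instead of one conservative setting being applied to the whole video.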
119

Reference frames for planning reach movement in the parietal and premotor cortices

Taghizadeh, Bahareh 17 February 2015 (has links)
No description available.
120

Komprese videa v obvodu FPGA / Implementation of video compression into FPGA chip

Tomko, Jakub January 2014 (has links)
This thesis focuses on the analysis of the MJPEG format's compression algorithm and its implementation in an FPGA chip. Three additional bitstream-reduction methods were evaluated for real-time, low-latency MJPEG applications: noise filtering, inter-frame encoding, and lowering the video quality. Based on this analysis, an MJPEG codec was designed for implementation in the XC6SLX45 FPGA from the Spartan-6 family.
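
The abstract does not detail the inter-frame encoding method, but a common way to add inter-frame reduction to an intra-only format like MJPEG is conditional replenishment: transmit only the blocks that changed since the previous frame. The NumPy sketch below works under that assumption; the block size and threshold are illustrative, not values from the thesis.

    import numpy as np

    def changed_blocks(prev, curr, block=16, threshold=6.0):
        """Yield (row, col) of blocks whose mean absolute difference from
        the previous frame exceeds the threshold; only these blocks would
        be JPEG-encoded and transmitted, the rest reuse the prior frame."""
        h, w = curr.shape
        for r in range(0, h - block + 1, block):
            for c in range(0, w - block + 1, block):
                diff = np.abs(curr[r:r+block, c:c+block].astype(np.int16)
                              - prev[r:r+block, c:c+block].astype(np.int16))
                if diff.mean() > threshold:
                    yield r, c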
