341

Functional understanding of space : Representing spatial knowledge using concepts grounded in an agent's purpose

Sjöö, Kristoffer January 2011
This thesis examines the role of function in representations of space by robots - that is, dealing directly and explicitly with those aspects of space and objects in space that serve some purpose for the robot. It is suggested that taking function into account helps increase the generality and robustness of solutions in an unpredictable and complex world, and the suggestion is affirmed by several instantiations of functionally conceived spatial models. These include perceptual models for the "on" and "in" relations based on support and containment; context-sensitive segmentation of 2-D maps into regions distinguished by functional criteria; and learned predictive models of the causal relationships between objects in physics simulation. Practical application of these models is also demonstrated in the context of object search on a mobile robotic platform. / QC 20111125
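To give a concrete, if greatly simplified, flavour of what grounding the "on" and "in" relations in support and containment might look like, the sketch below tests the two relations with axis-aligned bounding boxes. The Box class and both predicates are hypothetical illustrations, not the perceptual models developed in the thesis.

    from dataclasses import dataclass

    @dataclass
    class Box:
        lo: tuple  # minimum corner (x, y, z) -- illustrative representation
        hi: tuple  # maximum corner (x, y, z)

    def contains(container, obj, margin=0.0):
        # Crude geometric test for the functional "in" relation: the object's
        # bounding box lies inside the container's box, allowing an optional margin.
        return all(c_lo - margin <= o_lo and o_hi <= c_hi + margin
                   for c_lo, c_hi, o_lo, o_hi
                   in zip(container.lo, container.hi, obj.lo, obj.hi))

    def supports(below, above, tol=0.01):
        # Crude test for the functional "on" relation: the upper object's bottom face
        # touches the lower object's top face and their footprints overlap.
        vertical_contact = abs(above.lo[2] - below.hi[2]) <= tol
        horizontal_overlap = (above.lo[0] < below.hi[0] and above.hi[0] > below.lo[0]
                              and above.lo[1] < below.hi[1] and above.hi[1] > below.lo[1])
        return vertical_contact and horizontal_overlap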
342

Contextual information retrieval from the WWW

Limbu, Dilip Kumar January 2008
Contextual information retrieval (CIR) is a critical technique for today's search engines in terms of facilitating queries and returning relevant information. Despite its importance, little progress has been made in its application, due to the difficulty of capturing and representing contextual information about users. This thesis details the development and evaluation of the contextual SERL search, designed to tackle some of the challenges associated with CIR from the World Wide Web. The contextual SERL search utilises a rich contextual model that exploits implicit and explicit data to modify queries so that they more accurately reflect the user's interests, as well as to continually build the user's contextual profile and a shared contextual knowledge base. These profiles are used to filter results from a standard search engine to improve the relevance of the pages displayed to the user. The contextual SERL search has been tested in an observational study that captured both qualitative and quantitative data about the ability of the framework to improve the user's web search experience. A total of 30 subjects, with different levels of search experience, participated in the observational study experiment. The results demonstrate that when the contextual profile and the shared contextual knowledge base are used, the contextual SERL search improves search effectiveness, efficiency and subjective satisfaction. Effectiveness improved because subjects entered fewer queries to reach the target information than with the contemporary search engine. In the case of a particularly complex search task, efficiency improved because subjects browsed fewer hits, visited fewer URLs, made fewer clicks and took less time to reach the target information than with the contemporary search engine. Finally, subjects expressed a higher degree of satisfaction with the quality of contextual support when using the shared contextual knowledge base than when using their contextual profile alone. These results suggest that integrating a user's contextual factors and information-seeking behaviours is very important for the successful development of a CIR framework. It is believed that this framework and other similar projects will help provide the basis for the next generation of contextual information retrieval from the Web.
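As a rough illustration of the kind of profile-based filtering described above (re-scoring results from a standard search engine against a user's contextual profile), consider the sketch below. The profile format, scoring rule and example data are assumptions made for this example, not the actual SERL implementation.

    def rerank(results, profile):
        # results: list of (url, snippet); profile: dict mapping term -> weight.
        # Score each result by the total weight of profile terms found in its snippet,
        # then return results ordered by that score (ties keep the engine's order).
        def score(snippet):
            words = snippet.lower().split()
            return sum(weight for term, weight in profile.items() if term in words)
        return sorted(results, key=lambda r: score(r[1]), reverse=True)

    # Hypothetical usage: boost pages matching the user's inferred interests.
    profile = {"python": 2.0, "retrieval": 1.5, "tutorial": 0.5}
    hits = [("http://example.org/a", "An introduction to information retrieval"),
            ("http://example.org/b", "Python retrieval tutorial for beginners")]
    print(rerank(hits, profile))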
343

Evaluation of Internet search tools instrument design

Saunders, Tana March 2004
Thesis (MPhil)--Stellenbosch University, 2004. / This study investigated Internet search tools / engines to identify desirable features that can be used as a benchmark or standard to evaluate web search engines. In the past, the Internet was thought of as a big spider's web, ultimately connecting all the bits of information. It has now become clear that this is not the case, and that the bow tie analogy is more accurate. This analogy suggests that there is a central core of well-connected pages, with links IN and OUT to other pages, tendrils and orphan pages. This emphasizes the importance of selecting a search tool that is well connected and linked to the central core. Searchers must take into account that not all search tools search the Invisible Web, and this will affect the choice of search tool. Not all information found on the Web and Internet is reliable, current and accurate, and Web information must be evaluated in terms of authority, currency, bias, purpose of the Web site, etc. Different kinds of search tools are available on the Internet, such as search engines, directories, library gateways, portals, intelligent agents, etc. These search tools were studied and explored. A new categorization for online search tools consisting of Intelligent Agents, Search Engines, Directories and Portals / Hubs is suggested. This categorization distinguishes the major differences between the 21 kinds of search tools studied. Search tools / engines consist of spiders, crawlers, robots, indexes and search tool software. These search tools can be further distinguished by their scope, internal or external searches and whether they search Web pages or Web sites. Most search tools operate in a relationship with other search tools, and they often share results, spiders and databases. This relationship is very dynamic. The major international search engines have identifiable search features. The features of Google, Yahoo, Lycos and Excite were studied in detail. Search engines search for information in different ways, and present their results differently. These characteristics are critical to the precision/recall ratio. A well-planned search strategy that takes the web user's capabilities and needs into account will improve this ratio. Internet search tools / engines are not a panacea for all information needs, and have both pros and cons. The Internet search tool evaluation instrument was developed based on desirable features of the major search tools, and is offered as a benchmark or standard for Internet search tools. This instrument, applied to three South African search tools, provided insight into the capabilities of the local search tools compared to the benchmark suggested in this study. The study concludes that the local search engines compare favorably with the major ones, but not sufficiently so to justify using them exclusively. Further research into this aspect is needed. Intelligent agents are likely to become more popular, but the only certainty in the future of Internet search tools is change, change, and change.
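Because the evaluation discussion above turns on the precision/recall ratio, a short worked example of the two measures may help; the numbers below are invented purely for illustration.

    def precision_recall(retrieved, relevant):
        # Precision = |retrieved AND relevant| / |retrieved|
        # Recall    = |retrieved AND relevant| / |relevant|
        hits = len(set(retrieved) & set(relevant))
        return hits / len(retrieved), hits / len(relevant)

    # A query returns 10 pages, 4 of which are among the 8 truly relevant pages.
    retrieved = ["page%d" % i for i in range(10)]
    relevant = ["page%d" % i for i in (0, 1, 2, 3, 10, 11, 12, 13)]
    print(precision_recall(retrieved, relevant))  # (0.4, 0.5)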
344

Is trust in SEM an intergenerational trait? : A study of sponsored links and generational attitudes towards them

Fredlund, Jesper, Biedron, Timmy January 2018
Title: Is trust in SEM an intergenerational trait? Date: 2018-05-22 Level: Bachelor Thesis in International Marketing Author: Jesper Fredlund 930427 & Timmy Biedron 961128 Supervisor: Henrietta Nilson Problem formulation: How does age correlate with trust in and attitude towards SEM on Google in Sweden? Purpose: The purpose of this study is to see if the Swedish Digital Natives are more likely to trust search engine marketing than the older generations of Digital Immigrants, and by doing so to gain a better understanding of attitudes towards search engines and search engine marketing in Sweden. Theoretical framework: The theoretical framework of this paper consists of theories about Banner Blindness, Text Blindness, EHS Theory, Search Engine Marketing, Sponsored Links, Organic Links and Generations. Methodology: This is a quantitative study with 429 respondents in an online survey. It covers Swedish users of search engines divided into groups of those born before 1980 and those born after. Empirical findings: Our study found that Digital Natives are slightly more likely to favour Search Engine Marketing than Digital Immigrants are. Conclusion: Whatever the target of a Search Engine Marketing campaign, it should be approached cautiously, since both Digital Natives and Digital Immigrants have been shown to hold a negative bias against these campaigns relative to organic links. Keywords: SEM, SEA, Search Engines, Search Behaviour, Organic links, Sponsored links.
345

Online hashing for fast similarity search

Cakir, Fatih 02 February 2018
In this thesis, the problem of online adaptive hashing for fast similarity search is studied. Similarity search is a central problem in many computer vision applications. The ever-growing size of available data collections and the increasing usage of high-dimensional representations in describing data have increased the computational cost of performing similarity search, requiring search strategies that can explore such collections in an efficient and effective manner. One promising family of approaches is based on hashing, in which the goal is to map the data into the Hamming space where fast search mechanisms exist, while preserving the original neighborhood structure of the data. We first present a novel online hashing algorithm in which the hash mapping is updated in an iterative manner with streaming data. Being online, our method is amenable to variations of the data. Moreover, our formulation is orders of magnitude faster to train than state-of-the-art hashing solutions. Secondly, we propose an online supervised hashing framework in which the goal is to map data associated with similar labels to nearby binary representations. For this purpose, we utilize Error Correcting Output Codes (ECOCs) and consider an online boosting formulation in learning the hash mapping. Our formulation does not require any prior assumptions on the label space and is well-suited for expanding datasets that have new label inclusions. We also introduce a flexible framework that allows us to reduce hash table entry updates. This is critical, especially when frequent updates may occur as the hash table grows larger and larger. Thirdly, we propose a novel mutual information measure to efficiently infer the quality of a hash mapping and retrieval performance. This measure has lower complexity than standard retrieval metrics. With this measure, we first address a key challenge in online hashing that has often been ignored: the binary representations of the data must be recomputed to keep pace with updates to the hash mapping. Based on our novel mutual information measure, we propose an efficient quality measure for hash functions, and use it to determine when to update the hash table. Next, we show that this mutual information criterion can be used as an objective in learning hash functions, using gradient-based optimization. Experiments on image retrieval benchmarks confirm the effectiveness of our formulation, both in reducing hash table recomputations and in learning high-quality hash functions.
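To make the hashing idea above concrete, the sketch below maps vectors to short binary codes with random hyperplanes and ranks database items by Hamming distance. This is a generic LSH-style stand-in, not the online, supervised or mutual-information-based methods contributed by the thesis, and all names and sizes are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def hash_codes(X, hyperplanes):
        # Map each row of X to a binary code: the sign of its projection onto each hyperplane.
        return (X @ hyperplanes.T > 0).astype(np.uint8)

    def hamming_search(query_code, db_codes, k=5):
        # Rank database items by Hamming distance to the query code; return the top-k indices.
        dists = np.count_nonzero(db_codes != query_code, axis=1)
        return np.argsort(dists)[:k]

    # Hypothetical usage: 32-bit codes for 1000 points in a 128-dimensional feature space.
    X = rng.standard_normal((1000, 128))
    planes = rng.standard_normal((32, 128))
    codes = hash_codes(X, planes)
    query = hash_codes(rng.standard_normal((1, 128)), planes)[0]
    print(hamming_search(query, codes))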
346

Implementação e análise de algoritmos para estimação de movimento em processadores paralelos tipo GPU (Graphics Processing Units) / Implementation and analysis of motion estimation algorithms on GPU-type (Graphics Processing Units) parallel processors

Monteiro, Eduarda Rodrigues January 2012
The demand for applications that process digital video has attracted attention in both industry and academia. Given the high volume of data contained in high-resolution digital video, video compression is a fundamental tool for reducing the amount of information while maintaining quality, thus enabling transmission and storage. To drive the development of advanced video coding techniques, different video coding standards were developed, for example H.264/AVC. This standard is considered the state of the art, as it provides higher coding efficiency than previous standards (MPEG-4). Among all the innovative tools featured in the latest video coding standards, Motion Estimation (ME) is the technique that provides the largest share of the coding gains. ME obtains the similarity relation between neighboring frames of a scene; however, these gains come at an elevated computational cost, representing the greater part of the total complexity of current encoders. The goal of this work is to accelerate the Motion Estimation process, especially when high-resolution video is encoded. This acceleration focuses on the use of a massively parallel platform, the GPU (Graphics Processing Unit). ME block-matching algorithms present a high potential for parallelization and are suitable for implementation on parallel architectures, and different algorithms have been proposed to decrease the computational complexity of this module. This work presents the implementation and parallelism exploitation of two motion estimation algorithms on GPU, focused on high-definition video encoding and real-time processing. The Full Search (FS) algorithm is known as the optimal algorithm, since it finds the best match by exhaustively searching between frames. The fast Diamond Search (DS) algorithm significantly reduces the ME complexity while keeping video quality close to FS performance. By exploiting the maximum inherent parallelism of FS and DS and the parallel processing capability available in GPUs, this work presents a method to map these algorithms onto GPU using the CUDA (Compute Unified Device Architecture) architecture. For performance evaluation, the CUDA solutions are compared with the corresponding multi-core (using the OpenMP library) and distributed (using MPI as supporting infrastructure) versions. All versions were evaluated at different video resolutions and the results were compared with algorithms from the literature. The proposed GPU implementations show significant performance increases over the H.264/AVC encoder reference software and, moreover, show expressive gains over the corresponding multi-core and distributed versions and the GPGPU alternatives proposed in the literature.
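For readers unfamiliar with block-matching motion estimation, the sketch below shows the exhaustive Full Search the abstract refers to: every candidate displacement inside a search window is scored with the sum of absolute differences (SAD), and the best one becomes the motion vector. It is a plain serial sketch with invented names, not the CUDA mapping developed in this work; Diamond Search would instead evaluate a small diamond-shaped pattern of candidates and re-centre it on the best match, trading a little quality for far fewer SAD evaluations.

    import numpy as np

    def full_search(cur_block, ref_frame, top, left, search_range=8):
        # Exhaustively test every (dy, dx) displacement within +/- search_range and
        # return the motion vector with the lowest sum of absolute differences (SAD).
        n = cur_block.shape[0]
        best_sad, best_mv = float("inf"), (0, 0)
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                    continue  # candidate block falls outside the reference frame
                candidate = ref_frame[y:y + n, x:x + n].astype(int)
                sad = np.abs(cur_block.astype(int) - candidate).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        return best_mv, best_sad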
347

Learning Search Strategies from Human Demonstration for Robotic Assembly Tasks

Ehlers, Dennis January 2018
Learning from Demonstration (LfD) has been used in robotics research for the last decades to solve issues pertaining to conventional programming of robots. This framework enables a robot to learn a task simply from a human demonstration. However, it is unfeasible to teach a robot all possible scenarios, which may lead to, for example, the robot getting stuck. In order to solve this, a search is necessary; however, no current work provides a search approach that is both simple and general. This thesis develops and evaluates a new framework based on LfD that combines both of these aspects. A single demonstration of a human search is made and a model of it is learned. From this model a search trajectory is sampled and optimized. Based on that trajectory, a prediction of the encountered environmental forces is made. An impedance controller with feed-forward of the predicted forces is then used to evaluate the algorithm on a Peg-in-Hole task. The final results show that the framework is able to successfully learn and reproduce a search from just one single human demonstration. Finally, some suggestions are made for further benchmarks and development.
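A minimal sketch of an impedance control law with force feed-forward of the kind the abstract mentions: the commanded Cartesian force is a spring-damper response to the trajectory error plus the predicted environmental force. The gains and function name are assumptions for illustration, not the controller used in the thesis.

    import numpy as np

    def impedance_command(x, x_dot, x_des, x_des_dot, f_pred,
                          K=np.diag([400.0, 400.0, 400.0]),
                          D=np.diag([40.0, 40.0, 40.0])):
        # Cartesian impedance law with feed-forward of predicted contact forces:
        # F_cmd = K (x_des - x) + D (x_des_dot - x_dot) + f_pred
        return K @ (x_des - x) + D @ (x_des_dot - x_dot) + f_pred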
348

Locality Sensitive Indexing for Efficient High-Dimensional Query Answering in the Presence of Excluded Regions

January 2016
Similarity search in high-dimensional spaces is popular for applications like image processing, time series, and genome data. In higher dimensions, the curse of dimensionality kills the effectiveness of most index structures, giving way to approximate methods like Locality Sensitive Hashing (LSH) for answering similarity searches. In addition to range searches and k-nearest neighbor searches, there is a need to answer negative queries formed by excluded regions in high-dimensional data. Though there has been a slew of LSH variants to improve efficiency, reduce storage, and provide better accuracy, none of these techniques are capable of answering queries in the presence of excluded regions. This thesis provides a novel approach to handle such negative queries. This is achieved by creating a prefix-based hierarchical index structure. First, the higher-dimensional space is projected to a lower-dimensional space. Then, a one-dimensional ordering is developed, while retaining the hierarchical traits. The algorithm intelligently prunes the irrelevant candidates while answering queries in the presence of excluded regions. While naive LSH would need to filter the negative query results out of the main results, the new algorithm minimizes the need to fetch the redundant results in the first place. Experimental results show that this reduces post-processing cost, thereby reducing query processing time. / Dissertation/Thesis / Masters Thesis Computer Science 2016
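The naive baseline that the abstract contrasts its index against (fetching LSH candidates first and filtering out those that fall in excluded regions afterwards) is easy to sketch. The region representation and names below are assumptions made for illustration, not the prefix-based index proposed in the thesis.

    import numpy as np

    def filter_excluded(candidate_ids, data, excluded_regions):
        # Naive post-filter: drop any candidate whose vector lies inside an excluded
        # axis-aligned region, given as (lower_corner, upper_corner) pairs.
        def inside(v, lo, hi):
            return bool(np.all(v >= lo) and np.all(v <= hi))
        return [i for i in candidate_ids
                if not any(inside(data[i], lo, hi) for lo, hi in excluded_regions)]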
349

Development of a search engine marketing model using the application of a dual strategy

Kritzinger, Wouter Thomas January 2017
Thesis (DTech (Informatics))--Cape Peninsula University of Technology, 2017. / Any e-commerce venture using a website as its main shop-front should invest in marketing that website. Previous empirical evidence shows that most Search Engine Marketing (SEM) spending (approximately 82%) is allocated to Pay Per Click (PPC) campaigns, while only 12% is spent on Search Engine Optimisation (SEO). The remaining 6% of total spending is allocated to other SEM strategies. No empirical work was found on how marketing expenses compare when used solely for one or the other of the two main types of SEM. In this study, a model will be designed to guide the development of a dual SEM strategy.
