  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Vision Based Obstacle Detection And Avoidance Using Low Level Image Features

Senlet, Turgay 01 April 2006 (has links) (PDF)
This study proposes a new method for obstacle detection and avoidance using low-level MPEG-7 visual descriptors. The method involves training a neural network with a subset of MPEG-7 visual descriptors extracted from outdoor scenes. The trained network is then used to estimate obstacle presence in real outdoor videos and to perform obstacle avoidance. In the proposed method, obstacle avoidance depends solely on the estimated obstacle presence data. The backpropagation algorithm on a multi-layer perceptron neural network is utilized as the learning method. MPEG-7 visual descriptors describe basic features of the given scene image, and further processing of these features yields the input data for the neural network. The learning/training phase is carried out on a specially constructed synthetic video sequence with known obstacles. Validation and testing of the algorithms are performed on actual outdoor videos; tests on indoor videos are also performed to evaluate the performance of the proposed algorithms in indoor scenes. Throughout the study, the OdBot 2 robot platform, developed by the author, is used as the reference platform, and a simulation environment is used for the final testing of the obstacle detection and avoidance algorithms. The simulation results and the tests on video sequences indicate that the proposed obstacle detection and avoidance methods are robust against the visual changes in the environment that are common to most outdoor videos. Findings concerning the methods used are presented and discussed as an outcome of this study.
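A minimal sketch of the general idea described in this abstract: a hand-rolled low-level descriptor (a coarse per-region intensity histogram, standing in for the MPEG-7 visual descriptors) feeding a multi-layer perceptron trained with backpropagation to estimate obstacle presence. All names, shapes, and parameters below are illustrative assumptions, not the thesis's actual descriptor set or the OdBot 2 pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # backpropagation-trained MLP

def low_level_descriptor(frame, grid=(4, 4), bins=8):
    """Coarse per-cell histogram: a stand-in for MPEG-7 visual descriptors."""
    h, w, _ = frame.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = frame[i * h // grid[0]:(i + 1) * h // grid[0],
                         j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 255), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# Hypothetical training data: frames from a synthetic sequence with known obstacle labels.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(200, 120, 160, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=200)          # 1 = obstacle present in the frame

X = np.stack([low_level_descriptor(f) for f in frames])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)

# At run time, the estimated obstacle presence would drive the avoidance behaviour.
presence = clf.predict_proba(X[:1])[0, 1]
print(f"estimated obstacle presence: {presence:.2f}")
```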
302

Scheduling algorithms for resilient packet ring networks with video transport applications /

Zhu, Jian, January 1900 (has links)
Thesis (M. App. Sc.)--Carleton University, 2005. / Includes bibliographical references (p. 71-76). Also available in electronic format on the Internet.
303

Scalable internet video-on-demand systems

Zink, Michael. Unknown Date (has links)
Dissertation, Technische Universität Darmstadt, 2003.
304

Implementation and Evaluation of Encoder Tools for Multi-Channel Audio

Malmelöv, Tomas January 2019 (has links)
The increasing interest in immersive experiences in areas such as augmented and virtual reality makes high-quality 3D sound more important than ever before. A technique for capturing and rendering 3D audio that has received growing attention during the last twenty years is Higher Order Ambisonics (HOA). Higher Order Ambisonics is a scene-based audio format with many advantages over other standard formats. However, one problem with HOA is that it requires a lot of bandwidth: sending an uncoded high-quality HOA signal requires 49 channels to be transmitted at the same time, which amounts to a bandwidth of about 40 Mbps. Considerable effort has therefore been put into coding HOA signals over the last ten years. In this thesis, two different approaches to coding HOA signals are investigated. In the first approach, called Sound Field Rotation (SFR) in this thesis, the microphone that records the sound field is virtually rotated to see whether some of the channels can be made zero. The second approach, called Sound Field Decomposition (SFD) in this thesis, uses principal component analysis to decompose a sound field into foreground and background components. The Sound Field Decomposition approach is inspired by the emerging MPEG-H 3D Audio standard for coding HOA signals. The results show that the Sound Field Rotation method only works for very simple sound scenes; it was shown that a 49-channel HOA signal can be reduced to as few as 7 channels if the sound scene consists of a single point source. The Sound Field Decomposition method worked for more complex sound scenes, and it was shown that an MPEG-like system could be improved. Results from MUSHRA (Multiple Stimuli with Hidden Reference and Anchor) listening tests showed that the improved MPEG-like system reached a MUSHRA score of about 78, while the baseline MPEG-like system reached 55, at a bitrate of 256 kbps. Without coding each mono channel with the 3GPP EVS (Enhanced Voice Services) codec, the improved MPEG-like system reached a MUSHRA score of 85. At 256 kbps, the improved MPEG-like system coded the HOA signal into six channels, compared to 49 for the uncoded signal. Objective results showed that the improved MPEG-like system had the largest effect at low bitrates.
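To make the second approach concrete, here is a minimal sketch of a PCA-style decomposition of one frame of a 49-channel (order 6: (6+1)² = 49) HOA signal into a few dominant "foreground" components plus a low-energy "background" residual, roughly in the spirit of the MPEG-H HOA tools. The frame length, number of retained components, and synthetic input are assumptions for illustration, not the thesis's actual coder.

```python
import numpy as np

rng = np.random.default_rng(1)

# One frame of a 49-channel HOA signal (order 6: (6 + 1)**2 = 49 channels).
n_channels, frame_len = 49, 1024
hoa_frame = rng.standard_normal((n_channels, frame_len))

# PCA via SVD of the channel-by-time frame.
mean = hoa_frame.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(hoa_frame - mean, full_matrices=False)

# Keep the k strongest components as "foreground" signals; the rest stays in a
# low-energy "background" residual (which could be coded at lower order/bitrate).
k = 6
foreground = np.diag(s[:k]) @ Vt[:k]            # k mono signals to code individually
mixing = U[:, :k]                               # side info to re-render them spatially
background = hoa_frame - mean - mixing @ foreground

print("foreground shape:", foreground.shape)    # (6, 1024) instead of (49, 1024)
print("residual energy ratio:",
      np.linalg.norm(background) / np.linalg.norm(hoa_frame - mean))
```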
305

Study of the audio coding algorithm of the MPEG-4 AAC standard and comparison among implementations of modules of the algorithm

Hoffmann, Gustavo André January 2002 (has links)
Audio coding is used to compress digital audio signals, thereby reducing the number of bits needed to transmit or store an audio signal. This is useful when network bandwidth or storage capacity is very limited. Audio compression algorithms are based on an encoding and decoding process. In the encoding step, the uncompressed audio signal is transformed into a coded representation, thereby compressing the audio signal. The coded audio signal eventually needs to be restored (e.g. for playback) through decoding: the decoder receives the bitstream and converts it back into an uncompressed signal. ISO-MPEG is a standard for high-quality, low-bit-rate video and audio coding. The audio part of the standard is composed of algorithms for high-quality, low-bit-rate audio coding, i.e. algorithms that reduce the original bit rate while guaranteeing high quality of the audio signal. The audio coding algorithms comprise MPEG-1 (with three different layers), MPEG-2, MPEG-2 AAC, and MPEG-4. This work presents a study of the MPEG-4 AAC audio coding algorithm. In addition, it presents implementations of modules of the AAC algorithm on different platforms and compares them. The implementations are in C, in Intel Pentium assembly, in C for a DSP processor, and in HDL. Since each implementation has its own application niche, each one is valid as a final solution. A further purpose of this work is the comparison among these implementations in terms of estimated cost, execution time, and the advantages and disadvantages of each one.
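The encode/decode structure described above can be illustrated with a toy transform coder: a block transform plus coarse quantization on the encoder side, and dequantization plus the inverse transform on the decoder side. This is only a sketch of the generic principle (a plain DCT with uniform quantization), not the MPEG-4 AAC algorithm itself, which adds an MDCT filterbank with window switching, a psychoacoustic model, non-uniform quantization, and entropy coding.

```python
import numpy as np
from scipy.fft import dct, idct

def encode(signal, frame_len=1024, step=0.02):
    """Toy encoder: frame the signal, transform each frame, quantize coarsely."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    coeffs = dct(frames, type=2, norm="ortho", axis=1)
    return np.round(coeffs / step).astype(np.int32)      # integer "bitstream" symbols

def decode(symbols, step=0.02):
    """Toy decoder: dequantize and invert the transform to reconstruct the audio."""
    coeffs = symbols.astype(np.float64) * step
    return idct(coeffs, type=2, norm="ortho", axis=1).reshape(-1)

# Round trip on a synthetic tone.
t = np.arange(48000) / 48000.0
x = 0.5 * np.sin(2 * np.pi * 440 * t)
x_hat = decode(encode(x))
print("SNR (dB):", 10 * np.log10(np.sum(x[:len(x_hat)]**2) /
                                 np.sum((x[:len(x_hat)] - x_hat)**2)))
```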
307

Coprojeto de um decodificador de áudio AAC-LC em FPGA / Co-design of an AAC-LC audio decoder on FPGA

Sampaio, Renato Coral 07 1900 (has links)
Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2013. / Audio coding is present today in many electronic devices: it can be found in radios, TVs, computers, portable audio players, and mobile phones. In 2007, the Brazilian government defined the Brazilian Digital TV System standard (SBTVD) and adopted AAC (Advanced Audio Coding) as its audio codec. In this work, we use a hardware/software co-design approach to implement a high-performance, low-energy solution on an FPGA, able to decode up to 6 channels of audio in real time. The architecture and details of the solution are presented, along with performance and quality tests. Finally, hardware usage and performance results are presented and compared with other solutions found in the literature.
308

Redes Neurais Probabilísticas para Classificação de Imagens Binárias / Probabilistic Neural Networks for Binary Image Classification

PIRES, Glauber Magalhães 31 January 2009 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / This work proposes a new approach for classifying objects in two-dimensional binary images using curvature descriptors, moment descriptors, and an artificial neural network. The proposed model classifies objects with a supervised neural network and, by means of a probability distribution, assigns a certainty coefficient to each classification. The image descriptors known as Hu Moments and the Curvature Scale Space were used to provide a representation invariant to image transformations, while the proposed neural model uses the maximum correlation between object representations to perform the classification and a probability distribution to compute the certainty coefficient of each classification. Robustness was evaluated by measuring classification accuracy on rotated, scaled, and non-linearly transformed images from a standard image set used by the MPEG group in the creation of the MPEG-7 standard, demonstrating the applicability of the method.
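As a rough illustration of the kind of pipeline this abstract describes, the sketch below computes Hu moments of binary shapes with OpenCV and classifies a query shape by maximum correlation against per-class template descriptors, turning the correlations into a probability-like certainty coefficient. It omits the Curvature Scale Space descriptor and the thesis's actual probabilistic neural network; the toy shapes and all names are assumptions.

```python
import numpy as np
import cv2

def hu_descriptor(binary_img):
    """Log-scaled Hu moments of a binary image (rotation/scale tolerant)."""
    hu = cv2.HuMoments(cv2.moments(binary_img, binaryImage=True)).flatten()
    return np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def make_shape(kind, size=64):
    img = np.zeros((size, size), dtype=np.uint8)
    if kind == "square":
        cv2.rectangle(img, (16, 16), (48, 48), 255, -1)
    else:  # "disk"
        cv2.circle(img, (32, 32), 18, 255, -1)
    return img

# One template descriptor per class (a real system would average many examples).
templates = {k: hu_descriptor(make_shape(k)) for k in ("square", "disk")}

def classify(binary_img):
    d = hu_descriptor(binary_img)
    # Correlation of the query descriptor with each class template.
    corr = {k: np.corrcoef(d, t)[0, 1] for k, t in templates.items()}
    # Softmax over correlations as a probability-like certainty coefficient.
    vals = np.array(list(corr.values()))
    probs = np.exp(vals * 10) / np.exp(vals * 10).sum()
    best = int(np.argmax(probs))
    return list(corr.keys())[best], float(probs[best])

label, certainty = classify(make_shape("disk"))
print(label, f"certainty={certainty:.2f}")
```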
309

Metadados em multimídia : aplicações e conceitos em MPEG-7 / Multimedia metadata concepts and applications in MPEG-7

Ferreira, Luis Andre Villanueva da Costa 16 March 2007 (has links)
Advisor: Luiz Cesar Martini / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação, 2007. / Abstract: In this work, research was carried out in order to present metadata (data about data), and specifically multimedia metadata applications, with the MPEG-7 standard as the basis. First, the theory of metadata in general and multimedia metadata in particular is presented, with the purpose of preparing the reader for its uses, especially in view of the current explosion of media being made available online. The user must be able to locate the media he or she requires efficiently amid this sea of information, and that is the main objective of metadata.
With that in mind, MPEG-7 was developed by ISO (International Organization for Standardization). MPEG-7 is an open standard based on XML (Extensible Markup Language) that allows improvements to be developed continuously, which is very helpful considering that new media is made available to users every day. This work then presents the two types of multimedia metadata semantics: low level and high level. Low-level semantics include concrete information about a movie or image, such as color, contrast, and pixels. High-level semantics, on the other hand, take a more abstract approach and consider what is actually shown in a scene or image, such as rocks, sky, trees, or actions. The semantic gap between these two metadata types presents a problem for most metadata programming. The main objective of this work is to show how, using the IBM VideoAnnex application, developed on the basis of the MPEG-7 standard, multimedia metadata of both low- and high-level semantics can be created and attached to an MPEG-2 video file. Most of the annotation process can be carried out directly in the VideoAnnex application, as will be shown, but several high-level metadata elements must be inserted directly into the XML file. Since the uses of multimedia metadata are as vast as the media they represent, this work first presents a more general approach applicable to any movie (and, obviously, much less complete) and then a more focused approach using a soccer match as the target. Studies of multimedia metadata are still at an early stage, so this work does not present a definitive approach, but rather a further option for the use of multimedia metadata, especially important given the current spread of the Internet and Digital TV. / Master's degree in Electrical Engineering (Telecommunications and Telematics)
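For a flavour of what such an annotation looks like, the sketch below builds a small MPEG-7-style XML description for one video segment using Python's standard library. The element names are simplified, illustrative stand-ins; the normative MPEG-7 schema (and the files VideoAnnex actually produces) is considerably richer and stricter.

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative element names -- not the normative MPEG-7 schema.
root = ET.Element("Mpeg7")
desc = ET.SubElement(root, "Description")
video = ET.SubElement(desc, "Video", {"id": "match01"})

segment = ET.SubElement(video, "VideoSegment", {"id": "shot_0001"})
ET.SubElement(segment, "MediaTime", {"start": "00:00:12", "duration": "PT8S"})

# High-level (semantic) annotation: what is actually happening in the shot.
ET.SubElement(segment, "FreeTextAnnotation").text = "Corner kick, left side of the pitch"
ET.SubElement(segment, "Keyword").text = "soccer"

# Low-level annotation slot: e.g. a dominant-colour style descriptor value.
ET.SubElement(segment, "DominantColor").text = "34 102 51"

ET.ElementTree(root).write("shot_0001_mpeg7.xml", encoding="utf-8",
                           xml_declaration=True)
print(ET.tostring(root, encoding="unicode"))
```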
