51

Transform-Domain Adaptive Constrained Filtering Algorithms for Time Delay Estimation

Hou, Jui-Hsiang 27 June 2002 (has links)
The convergence speed of the conventional approaches, viz., the time-domain adaptive constrained and unconstrained LMS algorithms, becomes slow when dealing with correlated source signals. As a consequence, the performance of time delay estimation (TDE) degrades dramatically. To alleviate this problem, the so-called transform-domain adaptive constrained filtering scheme, i.e., the adaptive constrained discrete-cosine-transform (DCT) LMS algorithm, has been proposed in [15]. However, no single orthogonal transform completely diagonalizes the input-signal autocorrelation matrix for all types of input signals; in fact, the significant off-diagonal entries in the transform-domain autocorrelation matrix deteriorate the convergence performance of the algorithm. To overcome this problem, this thesis devises a modified approach, referred to as the adaptive constrained modified DCT-LMS (CMDCT-LMS) algorithm, for TDE under a wide class of input processes. In addition, based on the orthogonal discrete wavelet transform (DWT), an adaptive constrained modified DWT-LMS (CMDWT-LMS) algorithm is also devised and applied to the TDE problem. We show that the two proposed modified constrained approaches perform better than the unmodified approaches under different source signal models. Moreover, the simulation results show that the adaptive CMDCT-LMS filtering algorithm performs slightly better than the adaptive CMDWT-LMS filtering algorithm.
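As a rough illustration of the transform-domain idea (not the constrained algorithm devised in the thesis), the sketch below implements a power-normalized DCT-domain LMS filter in Python; the tap count, step size and forgetting factor are arbitrary choices. In LMS-based TDE the delay estimate is typically read off the peak of the adapted filter's impulse response.

```python
import numpy as np
from scipy.fft import dct

def dct_lms(x, d, num_taps=16, mu=0.05, eps=1e-6):
    """Power-normalized DCT-domain LMS filter (illustrative sketch).

    x : reference input signal, d : desired (delayed) signal.
    Returns the error signal and the final transform-domain weights.
    """
    w = np.zeros(num_taps)          # adaptive weights in the DCT domain
    p = np.full(num_taps, eps)      # running power estimate per DCT bin
    e = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        tap = x[n - num_taps:n][::-1]        # most recent samples first
        u = dct(tap, type=2, norm='ortho')   # orthogonal DCT of the tap vector
        p = 0.9 * p + 0.1 * u**2             # per-bin power tracking
        y = w @ u                            # filter output
        e[n] = d[n] - y                      # estimation error
        w += mu * e[n] * u / p               # power-normalized LMS update
    return e, w
```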
52

DCT-based Image/Video Compression: New Design Perspectives

Sun, Chang January 2014 (has links)
To push the envelope of DCT-based lossy image/video compression, this thesis revisits the design of some fundamental blocks in image/video coding, ranging from source modelling, the quantization table and quantizers to entropy coding. Firstly, to better handle the heavy-tail phenomenon commonly seen in DCT coefficients, a new model dubbed the transparent composite model (TCM) is developed and justified. Given a sequence of DCT coefficients, the TCM first separates the tail from the main body of the sequence, then uses a uniform distribution to model DCT coefficients in the heavy tail and a parametric distribution to model DCT coefficients in the main body. The separation boundary and other distribution parameters are estimated online via maximum likelihood (ML) estimation. Efficient online algorithms are proposed for parameter estimation, and their convergence is proved. When the parametric distribution is a truncated Laplacian, the resulting TCM, dubbed the Laplacian TCM (LPTCM), not only achieves superior modelling accuracy with low estimation complexity, but also has a good capability for nonlinear data reduction by identifying and separating DCT coefficients in the heavy tail (referred to as outliers) from DCT coefficients in the main body (referred to as inliers). This in turn opens up opportunities for its use in DCT-based image compression. Secondly, quantization table design is revisited for image/video coding where soft decision quantization (SDQ) is considered. Unlike conventional approaches, where quantization table design is bundled with a specific encoding method, we assume optimal SDQ encoding and design a quantization table for the purpose of reconstruction. Under this assumption, we model transform coefficients across different frequencies as independently distributed random sources and apply the Shannon lower bound to approximate the rate-distortion function of each source. We then show that a quantization table can be optimized so that the resulting distortion complies with a certain behaviour, yielding the so-called optimal distortion profile scheme (OptD). Guided by this theoretical result, we present an efficient statistical-model-based algorithm, using the Laplacian model, to design quantization tables for DCT-based image compression. When applied to standard JPEG encoding, it provides more than 1.5 dB of performance gain (in PSNR) with almost no extra burden on complexity. Compared with the state-of-the-art JPEG quantization table optimizer, the proposed algorithm offers an average 0.5 dB gain with computational complexity reduced by a factor of more than 2000 when SDQ is off, and a 0.1 dB or larger performance gain with 85% of the complexity removed when SDQ is on. Thirdly, based on the LPTCM and OptD, we further propose an efficient non-predictive DCT-based image compression system, in which the quantizers and entropy coding are completely redesigned and the corresponding SDQ algorithm is developed. In terms of rate versus visual quality, the proposed system achieves overall coding results that are among the best, similar to those of H.264 or HEVC intra (predictive) coding. In terms of rate versus objective quality, it significantly outperforms baseline JPEG, by more than 4.3 dB on average with a moderate increase in complexity, and ECEB, the state-of-the-art non-predictive image codec, by 0.75 dB when SDQ is off, at the same level of computational complexity, and by 1 dB when SDQ is on, at the cost of extra complexity.
In comparison with H.264 intra coding, our system provides an overall gain of about 0.4 dB with dramatically reduced computational complexity. It offers comparable or even better coding performance than HEVC intra coding in the high-rate region or for complicated images, with less than 5% of the latter's encoding complexity. In addition, the proposed DCT-based image compression system offers a multiresolution capability, which, together with its comparatively high coding efficiency and low complexity, makes it a good alternative for real-time image-processing applications.
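The following is a minimal offline sketch of LPTCM-style fitting, assuming a simple grid search over the separation boundary and an approximate Laplacian scale estimate; the thesis derives efficient online ML estimators instead, so this is illustrative only.

```python
import numpy as np

def lptcm_fit(coeffs, n_grid=64):
    """Grid-search sketch of a Laplacian transparent composite model (LPTCM):
    pick a separation boundary b, model magnitudes <= b with a truncated
    (folded) Laplacian and the tail (b, y_max] with a uniform distribution,
    and keep the boundary maximizing the composite likelihood."""
    y = np.abs(np.asarray(coeffs, dtype=float))
    n, y_max = len(y), y.max() + 1e-12
    best_ll, best_b, best_lam = -np.inf, None, None
    for b in np.linspace(np.percentile(y, 50), y_max, n_grid):
        inl, out = y[y <= b], y[y > b]
        if len(inl) == 0:
            continue
        lam = inl.mean() + 1e-12            # approximate Laplacian scale
        p = len(inl) / n                    # probability mass of the main body
        # truncated Laplacian log-likelihood for the main body (inliers)
        ll = (len(inl) * np.log(p)
              - len(inl) * np.log(lam * (1.0 - np.exp(-b / lam)))
              - inl.sum() / lam)
        # uniform log-likelihood for the heavy tail (outliers)
        if len(out) > 0:
            ll += len(out) * (np.log(1.0 - p) - np.log(y_max - b))
        if ll > best_ll:
            best_ll, best_b, best_lam = ll, b, lam
    return best_b, best_lam   # separation boundary, Laplacian scale
```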
53

Análise do conteúdo de um sistema de informação destinado à microempresa brasileira por meio de aplicação da descoberta de conhecimento em textos

Ramos, Hélia de Sousa Chaves 28 February 2008 (has links)
Dissertação (mestrado) [Master's dissertation]—Universidade de Brasília, Faculdade de Economia, Administração, Contabilidade e Ciência da Informação e Documentação, Departamento de Ciência da Informação e Documentação, 2008. / ABSTRACT: This research addresses the application of Knowledge Discovery in Texts (KDT) to textual databases (of unstructured content), repositories of non-evident information that can prove to be important sources of information for many purposes involving decision-making processes.
The main objective of the research is to verify the effectiveness of KDT in discovering information that can support the construction of indicators useful for strategic decision making, as well as the definition of public policies for microenterprises. The case study was the textual content of the Brazilian Service for Technical Answers (Serviço Brasileiro de Respostas Técnicas – SBRT), a Web-based technological information system aimed at the Brazilian production sector, notably entrepreneurs and micro and small enterprises, built as a shared effort of government, research institutions, universities and the private sector. The methodology comprises applying KDT to 6,041 documents extracted from the SBRT system using the SAS Data Mining Solution software package. The technique adopted was the clustering of documents based on terms mined from the database. Comparative analyses of similar clusters were carried out, and one cluster was selected for deeper analysis. The results demonstrate the efficacy of KDT for extracting hidden information from textual documents, information that could not be obtained with traditional information-retrieval resources. An important discovery was that concern for the environment is a strong component of the demands posted by users of the SBRT service. The analyses also showed that useful information can be extracted to support the construction of indicators and to guide policies both within the SBRT network and for the micro and small enterprise sector, and they evidenced the potential of KDT to support decision making in organizations, including its use for competitive-intelligence purposes.
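The thesis performed the mining with the SAS Data Mining Solution package; as a rough analog only, the sketch below shows term-based document clustering with scikit-learn on hypothetical input texts, not the SBRT data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_documents(texts, n_clusters=10, top_terms=10):
    """Cluster raw text documents by their mined terms (TF-IDF) and print the
    most representative terms of each cluster, mirroring the term-based
    clustering step described above."""
    vec = TfidfVectorizer(max_features=5000)
    X = vec.fit_transform(texts)                      # documents x terms matrix
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    terms = np.array(vec.get_feature_names_out())
    for k in range(n_clusters):
        top = terms[np.argsort(km.cluster_centers_[k])[::-1][:top_terms]]
        print(f"cluster {k}: {', '.join(top)}")
    return km.labels_
```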
54

Descoberta de conhecimento em texto aplicada a um sistema de atendimento ao consumidor

Schiessl, José Marcelo 12 April 2007 (has links)
Dissertação (mestrado) [Master's dissertation]—Universidade de Brasília, Faculdade de Economia, Administração, Contabilidade e Ciência da Informação e Documentação, Departamento de Ciência da Informação e Documentação, 2007. / ABSTRACT: This work analyses the customer-service system of a financial institution that centralizes, in textual form, customers' questions, complaints, compliments and suggestions, whether spoken or written. It discusses the complexity of information stored in natural language in this kind of system. It presents an alternative for extracting knowledge from textual databases by creating clusters and an automatic text-classification model, in order to speed up a task currently performed by people. A literature review presents Knowledge Discovery in Text as an extension of Knowledge Discovery in Data that uses Natural Language Processing techniques to convert text into a format suitable for data mining, and highlights the importance of this process within Information Science. Knowledge Discovery in Text is then applied to the customer-service database with the goal of automatically creating clusters of documents and, from them, building an automatic categorization model for the new documents received daily. These steps are validated by domain specialists, who attest to the quality of the clusters and of the model.
Finally, performance indicators are created to assess customer satisfaction with the products and services offered, providing input for management's customer-service policy.
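As an illustrative sketch of the categorization stage described above (routing newly received records to the clusters found earlier), assuming scikit-learn and hypothetical training data rather than the actual customer-service base:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def build_text_classifier(train_texts, train_labels):
    """Train a simple categorizer on documents already grouped by clustering,
    so that new customer-service records can be classified automatically."""
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(train_texts, train_labels)
    return model

# Usage with hypothetical data:
# clf = build_text_classifier(old_records, cluster_labels)
# predicted = clf.predict(["cliente reclama de tarifa indevida na conta"])
```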
55

Studium sekvencí genů zbarvení u činčil na základě homologie se sekvencemi vybraných savců / The study of coat-colour gene sequences in chinchillas based on homology with sequences of selected mammals

Poslušná, Michala January 2016 (has links)
Domesticated animals show many different coat colours and colour mutations, often connected with pleiotropic effects. The aim of this thesis, titled The study of colour gene sequences in chinchilla based on homology with human and mouse sequences, was to describe the molecular-genetic principles of pigmentation, to introduce the genes involved in melanogenesis and influencing melanin function, together with their structure and mutations, and to mention other mutations that change the phenotype. Information about the alleles TYR, TYRP1, TYRP2/DCT, agouti, AGRP, the MCR gene group (MC1R-MC5R) and others is focused on human (Homo sapiens), mouse (Mus musculus) and chinchilla (Chinchilla lanigera). Selected gene sequences, TYR and TYRP2/DCT, were compared across these three species; the results were studied and discussed, and species-specific mutations, as well as some analogies between human and mouse, were highlighted.
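A small helper of the kind such a cross-species comparison relies on, assuming the sequences have already been aligned by an external tool; the function and the example fragments are illustrative, not taken from the thesis.

```python
def percent_identity(aligned_a, aligned_b):
    """Percent identity between two pre-aligned sequences of equal length
    (e.g. TYR cDNA from human vs. chinchilla exported from any aligner).
    Gap-only columns are skipped."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("sequences must be aligned to the same length")
    compared = matches = 0
    for a, b in zip(aligned_a.upper(), aligned_b.upper()):
        if a == '-' and b == '-':
            continue                      # skip columns that are gaps in both
        compared += 1
        if a == b and a != '-':
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

# Example with hypothetical fragments:
# print(percent_identity("ATGCTG--CGT", "ATGCTGAACGT"))
```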
56

Reconhecimento de face utilizando transformada discreta do cosseno bidimensional, análise de componentes principais bidimensional e mapas auto-organizáveis concorrentes / Face recognition using the two-dimensional discrete cosine transform, two-dimensional principal component analysis and concurrent self-organizing maps

Guimarães, Thayso Silva 14 May 2010 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Identifying a person by the face is one of the most effective non-intrusive methods in biometrics; it is also one of the greatest challenges for researchers in the area, drawing on psychophysics, neuroscience, engineering, pattern recognition, image analysis and processing, and computer vision, applied to face recognition both by humans and by machines. The face-recognition algorithm proposed in this dissertation was developed in three stages. In the first stage, feature matrices of the faces are obtained using the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Two-Dimensional Principal Component Analysis (2D-PCA). In the second stage, a Concurrent Self-Organizing Map (CSOM) is trained using these feature matrices. Finally, in the third stage, the feature matrix of the query image is obtained and classified using the CSOM network trained in the second stage. To assess the performance of the proposed face-recognition algorithm, tests were carried out on three image databases well known in the image-processing field: ORL, YaleA and Face94. / Mestre em Ciências (Master of Science)
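A minimal sketch of the first stage, assuming a grayscale face image and an arbitrary 16x16 low-frequency block; the 2D-PCA and CSOM stages are not shown.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(image, block=16):
    """Low-frequency 2D-DCT feature matrix of a grayscale face image: compute
    the orthogonal 2D DCT and keep the top-left block x block coefficients,
    which carry most of the image energy.  The block size is an illustrative
    assumption, not a value from the dissertation."""
    img = np.asarray(image, dtype=float)
    coeffs = dctn(img, type=2, norm='ortho')   # 2D DCT-II of the whole image
    return coeffs[:block, :block]
```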
57

Parallel JPEG Processing with a Hardware Accelerated DSP Processor / Parallell JPEG-behandling med en hårdvaruaccelererad DSP-processor

Andersson, Mikael, Karlström, Per January 2004 (has links)
This thesis describes the design of fast JPEG-processing accelerators for a DSP processor. Certain computation tasks are moved from the DSP processor to hardware accelerators. The accelerators are slave co-processing machines controlled via a new instruction set. Clock-cycle count and power consumption are reduced by using the custom-built hardware: the hardware can perform the tasks in fewer clock cycles, and several tasks can run in parallel, which reduces the total number of clock cycles needed. First, a decoder and an encoder were implemented in DSP assembler. The cycle consumption of their parts was measured, and the hardware/software partitioning was derived from these measurements. Behavioural models of the accelerators were then written in C++, and the assembly code was modified to work with the new hardware. Finally, the accelerators were implemented in Verilog, and the accelerator instruction-set extension was specified following a custom design flow.
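For orientation, here is a sketch of the per-block forward path that such accelerators typically offload (level shift, 2D DCT, quantization); the quantization table is a parameter, and this is an illustrative reference model, not the thesis's DSP/Verilog implementation.

```python
import numpy as np
from scipy.fft import dctn

def jpeg_forward_block(block8x8, qtable):
    """Forward path for one 8x8 JPEG block: level shift, 2D DCT-II and
    quantization.  `qtable` is any 8x8 quantization table (e.g. the example
    luminance table from JPEG Annex K); it is a parameter here, not a value
    taken from the thesis."""
    shifted = np.asarray(block8x8, dtype=float) - 128.0   # level shift
    coeffs = dctn(shifted, type=2, norm='ortho')          # 2D DCT of the block
    return np.round(coeffs / np.asarray(qtable, dtype=float)).astype(int)
```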
58

A Selection of H.264 Encoder Components Implemented and Benchmarked on a Multi-core DSP Processor

Einemo, Jonas, Lundqvist, Magnus January 2010 (has links)
H.264 is a video coding standard which offers a high data-compression rate at the cost of a high computational load. This thesis evaluates how well parts of the H.264 standard can be implemented on a new multi-core digital signal processing (DSP) processor architecture called ePUMA, and investigates whether real-time encoding of high-definition video sequences can be performed. The implementation consists of the motion estimation, motion compensation, discrete cosine transform, inverse discrete cosine transform, quantization and rescaling parts of the H.264 standard. Benchmarking is done using the ePUMA system simulator, and the results are compared to an implementation of an existing H.264 encoder for another multi-core processor architecture called STI Cell. The results show that the selected parts of the H.264 encoder can be run on 6 calculation cores in 5 million cycles per frame, leaving 2 calculation cores to run the remaining parts of the encoder.
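One of the listed building blocks, the 4x4 forward integer core transform of H.264, can be sketched as follows; the post-scaling and quantization that H.264 folds together are omitted, and this is a reference model, not the ePUMA kernel from the thesis.

```python
import numpy as np

# H.264 4x4 forward core transform matrix (scaling is folded into quantization).
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

def h264_forward_transform(residual4x4):
    """Integer core transform of one 4x4 residual block, Y = C X C^T."""
    X = np.asarray(residual4x4, dtype=np.int64)
    return C @ X @ C.T
```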
59

Neuronové sítě v algoritmech vodoznačení audio signálů / Neural networks in audio signal watermarking algorithms

Kaňa, Ondřej January 2010 (has links)
Digital watermarking is a technique for copyright protection of digital multimedia. Robustness and imperceptibility are the main requirements on a watermark. This thesis deals with watermarking of audio signals using artificial neural networks. An audio watermarking method in the DCT domain is described; the method is based on a human psychoacoustic model and neural-network techniques.
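A bare-bones sketch of DCT-domain embedding by quantization index modulation is shown below; the psychoacoustic shaping and the neural-network detector that the thesis adds are not modelled, and the frame length, coefficient index and step size are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

def embed_bits(signal, bits, frame=1024, coeff_idx=200, delta=0.05):
    """Embed one watermark bit per frame by forcing the parity of a quantized
    mid-band DCT coefficient (quantization index modulation)."""
    out = np.array(signal, dtype=float)
    for i, bit in enumerate(bits):
        start = i * frame
        block = out[start:start + frame]
        if len(block) < frame:
            break
        c = dct(block, type=2, norm='ortho')
        q = np.round(c[coeff_idx] / delta)      # quantize the chosen coefficient
        if int(q) % 2 != int(bit):              # force parity to encode the bit
            q += 1
        c[coeff_idx] = q * delta
        out[start:start + frame] = idct(c, type=2, norm='ortho')
    return out
```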
60

CAE Tool for Evaluation of Park Lock Mechanism in a DCT Transmission / CAE verktyg för utvärdering av parkeringslåsmekanism i en växellåda med dubbla kopplingar

Andersson, Rasmus January 2017 (has links)
A park lock mechanism is a device fitted to an automatic transmission of a vehicle. The mechanism locks up the transmission so that the vehicle cannot roll when it is put in the park position. The aim of this thesis is to develop a method to evaluate designs of a park lock mechanism (PLM) found in a dual clutch transmission (DCT). A Computer Aided Engineering (CAE) tool is created to calculate the output required for evaluating a park lock mechanism design. The CAE tool calculates the static, dynamic and snap torque on the ratchet wheel on a gradient, with or without a trailer, as well as the minimum and maximum coefficient of friction between the pawl and the cone, the pull-out force, the maximum amount of rollback, the torque needed from the return spring, the preload force from the actuator spring, and the engagement speed. The tool uses an Excel Visual Basic for Applications (VBA) workbook for all calculations and allows the user to choose different vehicles, with the required specifications, to evaluate the values for that PLM. The CAE tool will save time and cost when many different PLMs are to be designed, and it has potential for future work, since further calculations useful for evaluating the PLM can be added. The tool developed by the master's thesis student calculates all the values required for evaluating a PLM design in a fast, efficient and easy-to-use program.
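As an illustration of one such calculation, the sketch below estimates the static holding torque on the ratchet wheel for a vehicle parked on a gradient, assuming the ratchet sits upstream of the final drive and neglecting drivetrain friction; the parameter names and simplifications are not taken from the actual tool, which uses an Excel VBA workbook.

```python
import math

def static_ratchet_torque(mass_kg, gradient_pct, wheel_radius_m,
                          final_drive_ratio, trailer_kg=0.0):
    """Static holding torque on the park-lock ratchet wheel for a vehicle
    (optionally with a trailer) parked on a gradient, under the simplifying
    assumptions stated above."""
    g = 9.81
    theta = math.atan(gradient_pct / 100.0)            # road gradient angle
    downhill_force = (mass_kg + trailer_kg) * g * math.sin(theta)
    wheel_torque = downhill_force * wheel_radius_m     # torque at the wheels
    return wheel_torque / final_drive_ratio            # torque at the ratchet

# Example: 2000 kg car, 30 % gradient, 0.35 m wheel radius, final drive 3.9
# print(static_ratchet_torque(2000, 30, 0.35, 3.9))
```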
