  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Challenging the Dual Coding Theory: Does Affective Information Play a Greater Role in Abstract Compared to Concrete Word Processing?

Almgren, Ingrid January 2018 (has links)
It has long been held that concrete material has a processing advantage over abstract material, as predicted by Dual Coding Theory (Paivio, 1991), although this has been challenged. For example, based on evidence from behavioural and neuroscientific studies, Kousta, Vigliocco, Vinson, and Del Campo (2011) proposed that emotional valence has a greater influence on the processing of abstract words, and that under some circumstances there may be no concreteness effect and there may even be an abstractness effect. This would not be predicted by Dual Coding Theory. In addition, Isen and Daubman (1984) claimed that emotional valence, and particularly positive emotion, can influence cognitive processing. Specifically, they demonstrated that positive emotion was associated with more inclusive categorization of ambiguous category members. The current study used a 2 x 2 between-groups design to investigate the effect of positive and negative valence on recognition memory for concrete and abstract words and on categorization. Contrary to what is predicted by Dual Coding Theory, abstract words were generally better recognized than concrete words, with an additional interaction with valence. A significant interaction between word type and valence on categorization was also found. The results partially support Kousta et al. (2011).
82

Design and analysis of Discrete Cosine Transform-based watermarking algorithms for digital images. Development and evaluation of blind Discrete Cosine Transform-based watermarking algorithms for copyright protection of digital images using handwritten signatures and mobile phone numbers.

Al-Gindy, Ahmed M.N. January 2011 (has links)
This thesis deals with the development and evaluation of blind discrete cosine transform-based watermarking algorithms for copyright protection of digital still images using handwritten signatures and mobile phone numbers. The new algorithms take into account the perceptual capacity of each low-frequency coefficient inside the Discrete Cosine Transform (DCT) blocks before embedding the watermark information. They are suitable for grey-scale and colour images. Handwritten signatures are used instead of pseudo-random numbers. The watermark is inserted in the green channel of RGB colour images and the luminance channel of YCrCb images. Mobile phone numbers are used as watermarks for images captured by mobile phone cameras. The information is embedded multiple times, and a shuffling scheme is applied to ensure that no spatial correlation exists between the original host image and the multiple watermark copies. Multiple embedding increases the robustness of the watermark against attacks, since each watermark copy is individually reconstructed and verified before an averaging process is applied. The averaging process reduces the number of errors in the extracted information. The developed watermarking methods are shown to be robust against JPEG compression, removal attacks, additive noise, cropping, scaling, small degrees of rotation, affine transformations, contrast enhancement, low-pass and median filtering, and Stirmark attacks. The algorithms have been examined using a library of approximately 40 colour images of size 512 x 512 with 24 bits per pixel and their grey-scale versions. Several evaluation techniques were used in the experiments with different watermarking strengths and different signature sizes, including the peak signal-to-noise ratio, normalized correlation and the structural similarity index measure. The performance of the proposed algorithms has been compared with other algorithms, achieving better invisibility and stronger robustness.
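For illustration only, the sketch below embeds and extracts a bit string in the low-frequency DCT coefficients of 8x8 blocks using simple parity quantization. The coefficient position, quantization step and embedding rule are assumptions chosen for a minimal example; they are not the thesis's algorithm, which additionally uses perceptual weighting, channel selection, shuffling and multiple embedding.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bits(gray, bits, pos=(2, 1), delta=24.0):
    """Embed one bit per 8x8 block by quantizing a low-frequency DCT
    coefficient to an even (bit 0) or odd (bit 1) multiple of delta."""
    img = gray.astype(float)
    h, w = img.shape
    k = 0
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            if k >= len(bits):
                return img
            c = dct2(img[y:y+8, x:x+8])
            q = np.round(c[pos] / delta)
            if int(q) % 2 != bits[k]:      # flip parity if it encodes the wrong bit
                q += 1
            c[pos] = q * delta
            img[y:y+8, x:x+8] = idct2(c)
            k += 1
    return img

def extract_bits(gray, n, pos=(2, 1), delta=24.0):
    """Blind extraction: re-read the parity of the marked coefficient."""
    img = gray.astype(float)
    out = []
    for y in range(0, img.shape[0] - 7, 8):
        for x in range(0, img.shape[1] - 7, 8):
            if len(out) >= n:
                return out
            c = dct2(img[y:y+8, x:x+8])
            out.append(int(np.round(c[pos] / delta)) % 2)
    return out
```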
83

Reconfigurable Computing For Video Coding

Huang, Jian 01 January 2010 (has links)
Video coding is widely used in our daily life. Due to its high computational complexity, hardware implementation is usually preferred. In this research, we investigate both the ASIC hardware design approach and the reconfigurable hardware design approach for video coding applications. First, we present a unified architecture that can perform the Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), and DCT-domain motion estimation and compensation (DCT-ME/MC). The proposed architecture is a wavefront-array-based processor with a highly modular structure consisting of 8*8 Processing Elements (PEs). By utilizing statistical properties and arithmetic operations, it can be used as a high-performance hardware accelerator for video transcoding applications. We show how different core algorithms can be mapped onto the same hardware fabric and executed through the pre-defined PEs. In addition to the simplified design process of the proposed architecture and the savings in hardware resources, we also demonstrate that a high throughput rate can be achieved for IDCT and DCT-MC by fully exploiting the sparseness of the DCT coefficient matrix. Compared to a fixed hardware architecture using the ASIC design approach, the reconfigurable hardware design approach has higher flexibility, lower cost, and faster time-to-market. We propose a self-reconfigurable platform which can reconfigure the architecture of the DCT computations during run-time using dynamic partial reconfiguration. The scalable architecture for DCT computations can compute different numbers of DCT coefficients in zig-zag scan order to adapt to different requirements, such as power consumption, hardware resources, and performance. We propose a configuration manager, implemented in the embedded processor, to adaptively control the reconfiguration of the scalable DCT architecture during run-time. In addition, we use the LZSS algorithm to compress the partial bitstreams and on-chip BlockRAM as a cache to reduce the latency overhead of loading partial bitstreams from off-chip memory for run-time reconfiguration. A hardware module is designed for parallel reconfiguration of the partial bitstreams. The experimental results show that our approach can reduce external memory accesses by 69% and achieve a 400 MBytes/s reconfiguration rate. A prediction algorithm for zero-quantized DCT (ZQDCT) coefficients is used to control the run-time reconfiguration of the proposed scalable architecture, and 12 different modes of DCT computation, including zonal coding, multi-block processing, and parallel-sequential stage modes, are supported to reduce power consumption, hardware resources, and computation time with only a small quality degradation. Detailed trade-offs of power, throughput, and quality are investigated and used as the criterion for self-reconfiguration to meet the requirements set by the users.
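As a software illustration of the scalable DCT idea (computing only the first N coefficients in zig-zag scan order), the following sketch generates the 8x8 zig-zag order and masks a coefficient block down to its first N entries. It is only a functional model; the thesis realizes this in reconfigurable hardware with dynamic partial reconfiguration.

```python
import numpy as np
from scipy.fftpack import dct

def zigzag_order(n=8):
    """Return (row, col) index pairs of an n x n block in JPEG zig-zag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def keep_first_n(coeffs, n_keep):
    """Zero all but the first n_keep coefficients in zig-zag order, mimicking
    a scalable DCT mode that trades quality for power at run time."""
    out = np.zeros_like(coeffs)
    for r, c in zigzag_order(coeffs.shape[0])[:n_keep]:
        out[r, c] = coeffs[r, c]
    return out

block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
low_power_mode = keep_first_n(coeffs, 10)   # e.g. compute only 10 of 64 coefficients
print(zigzag_order()[:10])
```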
84

From content-based to semantic image retrieval: low-level feature extraction, classification using image processing and neural networks, content-based image retrieval, hybrid low-level and high-level based image retrieval in the compressed DCT domain

Mohamed, Aamer Saleh Sahel January 2010 (has links)
Digital image archiving urgently requires advanced techniques for more efficient storage and retrieval because of the increasing volume of digital images. Although JPEG provides an efficient means of compressing image data, the problems of how to organize the image database structure for efficient indexing and retrieval, how to index and retrieve image data from the DCT compressed domain, and how to interpret image data semantically are major obstacles to the further development of digital image database systems. In content-based image retrieval, image analysis is the primary step for extracting useful information from image databases. The difficulty in content-based image retrieval is how to summarize low-level features into high-level or semantic descriptors to facilitate the retrieval procedure. Such a shift toward semantic visual data learning, or the detection of semantic objects, generates an urgent need to link low-level features with a semantic understanding of the observed visual information. To solve this 'semantic gap' problem, an efficient way is to develop a number of classifiers that identify the presence of semantic image components which can be connected to semantic descriptors. Among the various semantic objects, the human face is a very important example, which is usually also the most significant element in many images and photos. The presence of faces can usually be correlated with specific scenes, with semantic inference according to a given ontology. Therefore, face detection can be an efficient tool to annotate images with semantic descriptors. In this thesis, a paradigm to process, analyse and interpret digital images is proposed. In order to speed up access to desired images, image features are extracted and presented for analysis after the image data is accessed. This analysis provides not only a structure for content-based image retrieval but also the basic units for high-level semantic image interpretation. Finally, images are interpreted and classified into semantic categories by a semantic object detection and categorization algorithm.
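As a rough illustration of indexing in the compressed DCT domain, the sketch below builds a simple block-DCT descriptor (a DC-term histogram plus average low-frequency AC energy) and a cosine similarity for matching. The particular features, bin count and similarity measure are assumptions for demonstration, not the descriptors developed in the thesis.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct_features(gray, block=8, bins=16):
    """Crude compressed-domain descriptor: histogram of per-block DC terms
    plus the average low-frequency AC energy (illustrative only)."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    img = gray[:h, :w].astype(float)
    dc_vals, ac_energy = [], []
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = img[y:y+block, x:x+block]
            c = dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
            dc_vals.append(c[0, 0])
            ac = c.copy()
            ac[0, 0] = 0.0
            ac_energy.append(np.sum(ac[:4, :4] ** 2))   # low-frequency zone
    hist, _ = np.histogram(dc_vals, bins=bins, range=(0, 8 * 255))
    hist = hist / max(1, hist.sum())
    return np.concatenate([hist, [np.mean(ac_energy)]])

def cosine_similarity(f1, f2):
    """Rank database images by similarity of their descriptors to a query."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))
```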
85

English for everybody and everywhere: connections and convergences. 2019

Rossi, Heloyse 06 February 2019 (has links)
Smartphones have become a vital part of human life; their great expansion and modernization keep people connected in cyberspace from the moment they wake up until the time they go to sleep. Taking this into account, educational processes can create strategies so that this everyday use of smartphones is converted, at least in part, into learning activities. This dissertation, called English for Everybody and Everywhere: Connections and Convergences, linked to the research line Language: Linguistic, Cultural and Teaching Practices of the Master's programme in Letters at the State University of the West of Paraná (UNIOESTE), proposes the use of four smartphone applications (Let's Learn English, Duolingo, LyTrans English, and WhatsApp) in the process of English language learning, aiming to bring learning closer to the reality of 21st-century students and to provide a more creative, interactive and dynamic classroom environment, in a process of convergence that establishes connections between students, teachers, cyberspace and their knowledge. This study is action research with a qualitative approach, in which the researcher assumed the role of teacher, developing practical activities with the smartphone applications mentioned above with a group of second-year high-school students from a public school in the city of Cascavel - PR. At the end of the activities with the applications in the school, the research used as data generation, besides reflections on the practice, an interview with the regular teacher of the selected class, who attended all the classes as a listener; a diagnostic questionnaire with the students involved; and a journal containing all the observations and impressions of the practice developed with the students. With this research, we sought to verify whether smartphones can be used as another means of accessing information which, through integration and interactivity, can become additional knowledge in the students' lives.
In the activities developed in the classroom with the selected group, we drew on the theoretical framework described in the initial part of the text and on concepts and theories of renowned authors in the area, such as: the concept of Learning (Aprendência) of Assmann (1999) and Dal Molin (2003); the importance of planning activities based on the advances of cyberspace and cyberculture, as pointed out by Lévy (1999), and the influence of convergence culture on this scenario, according to Jenkins (2009); the characteristics of a rhizomatic teaching that creates maps and escapes from tree models and tracings, based on the theory of Deleuze and Guattari (1995); the new relations with knowledge that emerge in the age of cyberspace, with a collective intelligence and a teacher who moves from being the sole holder of knowledge to being the animator and supervisor of that intelligence, in a process of flipped learning, based on the studies of Lévy (1998b), Moran (2015), Prensky (2001) and others; and the advantages that mobile learning can bring both within the classroom and beyond the school environment, following UNESCO (2014), Motter (2013) and Souza (2012), among other authors who contributed to the present study.
86

Low-power high-linearity digital-to-analog converters

Kuo, Ming-Hung 09 March 2012 (has links)
In this thesis, the design of a 14-bit, 20 MS/s segmented digital-to-analog converter (DAC) is presented. The segmented DAC uses a switched-capacitor configuration to implement an 8-bit (LSB) + 6-bit (MSB) segmented architecture, achieving high performance with minimum area. The implemented LSB DAC is based on a quasi-passive pipelined DAC, which has been shown to provide low-power and high-speed operation. Capacitor matching is typically the best among all integrated-circuit components, but mismatch among nominally equal-valued capacitors still introduces nonlinear distortion. By using a dynamic element matching (DEM) technique in the MSB DAC, the nonlinearity caused by capacitor mismatch is greatly reduced. The output buffer employs a direct charge transfer (DCT) technique that minimizes kT/C noise without increasing power dissipation. This segmented DAC is designed and simulated in 0.18 μm CMOS technology, and the simulated core DAC block consumes only 403 μW. / Graduation date: 2012
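The benefit of dynamic element matching can be illustrated with a small Monte-Carlo model of a unary capacitor bank: fixed element selection turns mismatch into a repeatable, code-dependent error (distortion), whereas random selection turns it into noise. The element count and mismatch level below are arbitrary assumptions, not the thesis design values.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ELEM = 63            # unit capacitors of a 6-bit unary MSB bank (assumed)
SIGMA = 0.01           # 1% random capacitor mismatch (assumed)
caps = 1.0 + rng.normal(0.0, SIGMA, N_ELEM)

def dac_fixed(code):
    """Fixed selection: the same elements are always used for a given code,
    so the mismatch error is a repeatable function of the code (distortion)."""
    return caps[:code].sum()

def dac_dem(code):
    """Dynamic element matching: a random subset is used on each conversion,
    so the mismatch error is decorrelated from the code (noise-like)."""
    idx = rng.choice(N_ELEM, size=code, replace=False)
    return caps[idx].sum()

code = 32
fixed_err = np.array([dac_fixed(code) - code for _ in range(1000)])
dem_err = np.array([dac_dem(code) - code for _ in range(1000)])
print(f"fixed: mean {fixed_err.mean():+.4f}, std {fixed_err.std():.4f} "
      "(same error on every conversion, i.e. code-dependent distortion)")
print(f"DEM:   mean {dem_err.mean():+.4f}, std {dem_err.std():.4f} "
      "(small gain error plus noise, no code-dependent distortion)")
```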
87

Image compression in wireless sensor networks

Makkaoui, Leila 26 November 2012 (has links) (PDF)
This thesis is a contribution to the problem of energy conservation in the particular case of image sensor networks, where some or all of the network nodes are equipped with a small CMOS camera. Images involve data volumes far larger than classical scalar measurements such as temperature, and therefore higher energy expenditure. Since the radio transmitter is one of the most power-hungry components, compressing the image at the source can significantly reduce the energy spent transmitting it, both at the camera node and at the nodes forming the path to the collection point. However, well-known compression methods (JPEG, JPEG2000, SPIHT) are poorly suited to the limited computing and memory resources that characterize sensor nodes. On some hardware platforms, these algorithms even have an energy cost greater than the gain they bring on transmission; in other words, the camera node drains its battery faster by sending compressed images than uncompressed ones. The complexity of the compression algorithm is therefore a performance criterion as important as the rate-distortion trade-off. The contributions of this thesis are threefold. First, we propose a reduced-complexity compression algorithm based on the 8-point discrete cosine transform, combining the most efficient fast DCT method in the literature (the Cordic-Loeffler DCT) with a computation restricted to the coefficients inside a square zone of size k<8, which matter most for visual reconstruction. With this zonal approach, the number of coefficients to compute, quantize and encode per 8x8-pixel block is reduced to k^2 instead of 64, which mechanically lowers the cost of compression. Second, we study the impact of k, i.e. of the number of selected coefficients, on the quality of the final image. The study was carried out on a set of about sixty reference images, with image quality assessed using several metrics: PSNR, PSNR-HVS and MSSIM. The results were used to identify, for a given bit rate, the limiting value of k that can be chosen (statistically) without perceptible quality degradation, and consequently the bounds on the energy reduction achievable at constant rate and quality. Finally, we report performance results obtained from experiments on a real platform composed of a Mica2 node and a Cyclops camera to demonstrate the validity of our proposals. In a scenario with 128x128-pixel images encoded at 0.5 bpp, for example, the energy expenditure of the camera node (including compression and transmission) is divided by 6 compared with the uncompressed case, and by 2 compared with the standard JPEG algorithm.
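To illustrate the zonal approach, the sketch below applies a k x k zonal mask to a reference full DCT and reports how the per-block coefficient count shrinks from 64 to k^2. A real sensor node would compute only those k^2 coefficients with a fast butterfly such as the Cordic-Loeffler DCT rather than masking a full transform; the code is a functional illustration only.

```python
import numpy as np
from scipy.fftpack import dct, idct

def zonal_dct(block, k):
    """Reference zonal DCT: keep only the k x k low-frequency corner of an
    8x8 block (the coefficients that matter most for visual reconstruction)."""
    c = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    zone = np.zeros_like(c)
    zone[:k, :k] = c[:k, :k]
    return zone

def reconstruct(zone):
    """Inverse transform of the zonal coefficients at the decoder side."""
    return idct(idct(zone, axis=0, norm='ortho'), axis=1, norm='ortho')

for k in (2, 3, 4, 8):
    print(f"k={k}: {k * k:2d} coefficients per 8x8 block instead of 64 "
          f"({100 * k * k / 64:.0f}% of the full-DCT workload)")
```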
88

Reducing Communication Through Buffers on a SIMD Architecture

Choi, Jee W. 13 May 2004 (has links)
Advances in wireless technology and the growing popularity of multimedia applications have brought about a need for energy-efficient and cost-effective portable supercomputers capable of delivering performance beyond the capabilities of current microprocessors and DSP chips. The SIMPil architecture currently being developed at the Georgia Institute of Technology is a promising candidate for this task. In order to develop applications for SIMPil, a high-level language and an optimizing compiler for that language are essential. However, with interconnect latency becoming a major bottleneck in computer systems, optimizations focused on reducing latency are becoming more important, especially for SIMPil, as it is highly scalable. The compiler tracks the path of data through the network and buffers data in each processor to eliminate redundant communication. With a buffer size of 5, the compiler was able to eliminate 96 percent of the redundant communication for the 9x9 convolution and 8x8 DCT algorithms. With the 5x5 convolution, only 89 percent elimination was observed. In terms of performance, a 106 percent speedup was observed for the 9x9 convolution at a buffer size of 5, while the 5x5 convolution and the 8x8 DCT, which involve far less communication, showed only a 101 percent speedup.
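The principle of eliminating redundant communication through per-processor buffering can be sketched with a toy model that counts remote fetches for a sliding-window access pattern, with and without an LRU buffer of previously received values. The access pattern, buffer policy and sizes are illustrative assumptions and do not reproduce SIMPil's communication model or the figures quoted above.

```python
from collections import OrderedDict

def count_transfers(accesses, buffer_size):
    """Count remote fetches when each PE remembers the last `buffer_size`
    received values in an LRU buffer (buffer_size=0 means no buffering)."""
    buf = OrderedDict()
    transfers = 0
    for item in accesses:
        if item in buf:
            buf.move_to_end(item)         # value already held locally
            continue
        transfers += 1                    # fetch from a neighbouring PE
        buf[item] = True
        if len(buf) > buffer_size:
            buf.popitem(last=False)       # evict the least recently used value
    return transfers

# A sliding 9x9 window revisits most neighbour values at the next position;
# model the accesses as (row, column) offsets for 16 window positions.
accesses = [(step + dy, dx)
            for step in range(16) for dy in range(9) for dx in range(9)]

no_buf = count_transfers(accesses, 0)
buffered = count_transfers(accesses, 81)   # hold one window's worth of values
print(f"no buffering  : {no_buf} transfers")
print(f"with buffering: {buffered} transfers "
      f"({100 * (no_buf - buffered) / no_buf:.0f}% eliminated)")
```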
89

Data-driven transform optimization for next generation multimedia applications

Sezer, Osman Gokhan 25 August 2011 (has links)
The objective of this thesis is to formulate a generic dictionary learning method with the guiding principle that efficient representations lead to efficient estimations. The fundamental idea behind using transforms or dictionaries for signal representation is to exploit the regularity within data samples such that the redundancy of the representation is minimized subject to a level of fidelity. This observation translates to the rate-distortion cost in the compression literature, where the transform with the lowest rate-distortion cost provides a more efficient representation than the others. In our work, rather than being used as an analysis tool, the rate-distortion cost is used to improve the efficiency of transforms. To this end, an iterative optimization method is proposed that seeks an orthonormal transform reducing the expected rate-distortion cost of an ensemble of data. Due to the generic nature of the new optimization method, one can design a set of orthonormal transforms either in the original signal domain or on top of a transform-domain representation. To test this claim, several image codecs are designed that use block-, lapped- and wavelet-transform structures. Significant increases in compression performance are observed compared to the original methods. An extension of the proposed optimization method to video coding gave state-of-the-art compression results with separable transforms. Using robust statistics, an explanation of the superiority of the new design over other learning-based methods, such as the Karhunen-Loeve transform, is also provided. Finally, the new optimization method is shown to be equivalent to minimizing the "oracle" risk of diagonal estimators in signal estimation. With the design of new diagonal estimators and risk-minimization-based adaptation, a new image denoising algorithm is proposed. While these diagonal estimators denoise local image patches, by formulating the optimal fusion of overlapping local denoised estimates, the new denoising algorithm is scaled to operate on large images. In our experiments, state-of-the-art results for transform-domain denoising are achieved.
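The design criterion throughout is an expected Lagrangian rate-distortion cost. As a minimal illustration of what a lower cost means, the sketch below scores two orthonormal transforms on synthetic correlated data using J = D + lambda*R, with the rate crudely proxied by the number of nonzero quantized coefficients; the quantizer step, lambda and rate proxy are assumptions, not the thesis's codec model.

```python
import numpy as np

def rd_cost(X, T, step=8.0, lam=1.0):
    """Empirical Lagrangian cost J = D + lam * R for representing the rows of
    X (one sample per row) with the orthonormal transform T (rows = basis).
    D: mean squared reconstruction error after uniform quantization.
    R: crude rate proxy = average number of nonzero quantized coefficients."""
    C = X @ T.T                        # analysis: coefficients per sample
    Q = np.round(C / step)             # uniform scalar quantization
    Xhat = (Q * step) @ T              # synthesis from dequantized coefficients
    D = np.mean(np.sum((X - Xhat) ** 2, axis=1))
    R = np.mean(np.count_nonzero(Q, axis=1))
    return D, R, D + lam * R

# Compare the identity transform with a data-adapted (KLT-like) transform
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
X = rng.normal(size=(5000, 8)) @ A     # correlated 8-dimensional samples
eigvecs, _, _ = np.linalg.svd(np.cov(X, rowvar=False))
for name, T in [("identity", np.eye(8)), ("KLT", eigvecs.T)]:
    D, R, J = rd_cost(X, T)
    print(f"{name:8s}: D={D:8.3f}  R={R:5.2f}  J={J:8.3f}")
```

With correlated data, the adapted transform typically attains the lower J, since energy compaction drives more coefficients to zero at the same quantization step.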
90

Noncontact Diffuse Correlation Tomography of Breast Tumor

He, Lian 01 January 2015 (has links)
Since aggressive cancers are frequently hypermetabolic with angiogenic vessels, quantification of blood flow (BF) can be vital for cancer diagnosis. Our laboratory has developed a noncontact diffuse correlation tomography (ncDCT) system for 3-D imaging of BF distribution in deep tissues (up to centimeters). The ncDCT system employs two sets of optical lenses to project source and detector fibers, respectively, onto the tissue surface, and applies a finite element framework to model light transport in complex tissue geometries. This thesis reports our first step in adapting the ncDCT system for 3-D imaging of BF contrasts in human breast tumors. A commercial 3-D camera was used to obtain the breast surface geometry, which was then converted to a solid volume mesh. An ncDCT probe scanned a region of interest on the breast mesh surface, and the measured boundary data were used for 3-D image reconstruction of the BF distribution. This technique was tested with computer simulations and in 28 patients with breast tumors. Results from computer simulations suggest that relatively high accuracy can be achieved when the entire tumor is within the sensitive region of diffuse light. Image reconstruction with a priori knowledge of the tumor volume and location can significantly improve the accuracy in recovering tumor BF contrasts. In vivo ncDCT imaging results from the majority of breast tumors showed higher BF contrasts in the tumor regions compared with the surrounding tissues. Reconstructed tumor depths and dimensions matched ultrasound imaging results when the tumors were within the sensitive region of light propagation. These results demonstrate that the ncDCT system has the potential to image BF distributions in soft and vulnerable tissues without distorting tissue hemodynamics. In addition to this primary study, detector fibers with different modes (i.e., single-mode, few-mode, multimode) for photon collection were experimentally explored to improve the signal-to-noise ratio of diffuse correlation spectroscopy flow-oximeter measurements.
