  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
811

Effects of Applied Loads, Effective Contact Area and Surface Roughness on the Dicing Yield of 3D Cu Bonded Interconnects

Leong, Hoi Liong, Gan, C.L., Pey, Kin Leong, Thompson, Carl V., Li, Hongyu 01 1900 (has links)
Bonded copper interconnects were created using thermo-compression bonding, and the dicing yield was used as an indication of bond quality. SEM images indicated that the Cu was plastically deformed. Our experimental and modeling results indicate that the effective contact area is directly proportional to the applied load. Furthermore, for the first time, results have been obtained indicating that the dicing yield is proportional to the measured bond strength, and that the bond strength is proportional to the effective contact area. It is also shown that films with rougher surfaces (and correspondingly lower effective bonding areas) have lower bond strengths and dicing yields. A quantitative model for the relationship between measured surface roughness and the corresponding dicing yield has been developed, along with an appropriate surface-roughness data acquisition methodology. The maximum possible applied load and the minimum possible surface roughness are required to obtain the maximum effective contact area, and hence to achieve optimum yields (both mechanical and electrical). / Singapore-MIT Alliance (SMA)
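The proportional chain reported above (dicing yield ∝ bond strength ∝ effective contact area ∝ applied load, with roughness reducing the effective area) can be sketched as a toy model; the functional forms and coefficients below are illustrative assumptions, not values fitted in the thesis:

```python
# Toy model of the proportional chain. The coefficients k_area, k_strength
# and k_yield are illustrative placeholders, NOT fitted values from the work.

def effective_contact_area(load, roughness, k_area=1.0):
    """Hypothetical: area grows with load, shrinks with surface roughness."""
    return k_area * load / (1.0 + roughness)

def bond_strength(area, k_strength=1.0):
    """Bond strength proportional to effective contact area."""
    return k_strength * area

def dicing_yield(strength, k_yield=1.0):
    """Dicing yield proportional to bond strength, capped at 100 percent."""
    return min(1.0, k_yield * strength)

# A smoother surface at the same applied load gives a higher predicted yield.
smooth = dicing_yield(bond_strength(effective_contact_area(0.5, roughness=0.1)))
rough = dicing_yield(bond_strength(effective_contact_area(0.5, roughness=0.8)))
assert smooth > rough
```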
812

A comparative study of data transformations for efficient XML and JSON data compression : an in-depth analysis of data transformation techniques, including tag and capital conversions, character and word N-gram transformations, and domain-specific data transforms using SMILES data as a case study

Scanlon, Shagufta Anjum January 2015 (has links)
XML is a widely used data exchange format. Its verbose nature leads to a requirement to store and process this type of data efficiently using compression. Various general-purpose transforms and compression techniques exist that can be used to transform and compress XML data. Because of XML's verbosity, more compact alternatives such as JSON have also been developed. Similarly, there is a requirement to efficiently store and process the SMILES data used in chemoinformatics. General-purpose transforms and compressors can compress this type of data to a certain extent; however, these techniques are not specific to SMILES data. The primary contribution of this research is to provide developers who use XML, JSON, or SMILES data with key knowledge of the best transformation techniques to use with certain types of data, and of which compression techniques would provide the best compressed output size and processing times, depending on their requirements. The main study in this thesis investigates the extent to which applying data transforms prior to compression can further improve the compression of XML and JSON data. It provides a comparative analysis of applying a variety of data transforms and their variations to a number of XML and equivalent JSON datasets of various sizes, and of applying different general-purpose compression techniques to the transformed data. A case study is also conducted to investigate the use of data transforms prior to compression within a specific data domain.
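As an illustrative sketch of the kind of tag transform studied here, the toy preprocessor below replaces frequent XML tags with single-byte codes before a general-purpose compressor is applied; the dictionary, the sample document, and the use of zlib are assumptions for demonstration, not the thesis's actual transform pipeline:

```python
import zlib

# Toy tag-conversion transform: frequent tags are mapped to unused
# single-byte codes, then a general-purpose compressor runs on the result.
DICTIONARY = {"<person>": "\x01", "</person>": "\x02",
              "<name>": "\x03", "</name>": "\x04"}
REVERSE = {v: k for k, v in DICTIONARY.items()}

def transform(text: str) -> str:
    for tag, code in DICTIONARY.items():
        text = text.replace(tag, code)
    return text

def untransform(text: str) -> str:
    for code, tag in REVERSE.items():
        text = text.replace(code, tag)
    return text

xml = "<person><name>Ada</name></person>" * 100
transformed = transform(xml)

# The transform must be lossless: compression is applied afterwards.
assert untransform(transformed) == xml
assert len(transformed) < len(xml)
assert len(zlib.compress(transformed.encode())) <= len(zlib.compress(xml.encode()))
```

The design point the sketch illustrates is that the transform itself is reversible; any compression gain comes from the shorter, more uniform input handed to the back-end compressor.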
813

The effect of transient dynamics of the internal combustion compression ring upon its tribological performance

Baker, Christopher E. January 2014 (has links)
The losses in an internal combustion engine are dominated by thermal and parasitic sources. The latter arise from mechanical inefficiencies inherent in the system, particularly friction in load-bearing conjunctions such as the piston assembly. During idle and at low engine speeds, frictional losses are the major contributor to overall engine losses, as opposed to the dominant contribution of thermal losses under other driving conditions. Given its relatively small size and simple structure, the top compression ring makes a disproportionate contribution to the total frictional losses. This suggests that further analysis is required to understand the underlying causes of compression ring behaviour throughout the engine cycle. The available literature on tribological analyses of compression rings does not account for transient ring elastodynamics; it usually assumes a rigid ring for film thickness and power loss predictions, which is not representative of the ring's dynamic response. A combined study of ring elastodynamic behaviour and its tribological conjunction therefore offers a more comprehensive approach.
814

Compaction conventionnelle et compaction grande vitesse : application aux produits multimatériaux et multifonctions / Conventional and high-velocity compaction : application to multimaterial and multifunction products

Le Guennec, Yann 25 May 2011 (has links)
Parmi les procédés de mise en forme de pièces industrielles, la métallurgie des poudres autorise une haute cadence de production avec une faible perte de matière première. L'élaboration de composants multi-matériaux par compression et frittage permet de minimiser le nombre d'étapes de conception afin de combiner des propriétés complémentaires. L'objet de cette thèse est l'étude d'un procédé de compression innovant qui peut augmenter la cadence de production et diminuer les contraintes sur l'outillage : la CGV (compression grande vitesse), et de l'appliquer à la mise en forme de pièces multimatériaux. Une presse à grande vitesse a été développée au laboratoire afin d'étudier l'influence de la CGV sur deux couples de matériaux : un couple base Fe / base WC, associant dureté et ténacité, un couple Acier 1.4313 / Stellite 6 associant résistance mécanique et résistance à la corrosion. Des modélisations numériques des procédés de compression conventionnelle et CGV ont été réalisées dans le but de mieux analyser les phénomènes observés et de prévoir le comportement en compression de plusieurs poudres simultanément. Ce travail aboutit à des recommandations pour la mise en oeuvre de la compression de composants multi-matériaux et met au jour quelques caractéristiques de la CGV. / Among industrial shaping processes, powder metallurgy offers high production rates with little raw-material waste. Manufacturing multi-material components by die pressing and sintering reduces the number of production steps while combining complementary properties. The aim of this thesis is to study an innovative compaction process, high-velocity compaction (HVC), which can increase production rates and reduce the stresses applied to the tooling, and to apply it to the shaping of multi-material components.
An HVC device was developed in our laboratory to investigate the influence of HVC on two powder couples: an Fe-based / WC-based couple combining toughness and hardness, and a 1.4313 steel / Stellite 6 couple combining mechanical strength with corrosion resistance. Numerical simulations of both conventional and HVC compaction were carried out to analyze the observed phenomena and to predict the behavior of multi-material components during compaction. This work concludes with recommendations for the compaction of multi-material components and brings to light several characteristics of HVC.
815

Using semantic knowledge to improve compression on log files

Otten, Frederick John 19 November 2008 (has links)
With the move towards global and multi-national companies, information technology infrastructure requirements are increasing. As the size of these computer networks increases, it becomes more and more difficult to monitor, control, and secure them. Networks consist of a number of diverse devices, sensors, and gateways which are often spread over large geographical areas. Each of these devices produces log files which need to be analysed and monitored to provide network security and satisfy regulations. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data for archival purposes after the log files have been rotated. However, many other compression programs exist, each with its own advantages and disadvantages. These programs use different amounts of memory and take different compression and decompression times to achieve different compression ratios. System log files also contain redundancy which is not necessarily exploited by standard compression programs. Log messages usually follow a similar format with a defined syntax: not all ASCII characters are used, and the messages contain certain "phrases" which are often repeated. This thesis investigates the use of compression as a means of data reduction and examines how semantic knowledge can improve data compression, also applying the results to different scenarios that can occur in a distributed computing environment. It presents the results of a series of tests performed on different log files. It also examines the semantic knowledge which exists in maillog files and how it can be exploited to improve compression results. The results from a series of text preprocessors which exploit this knowledge are presented and evaluated. These preprocessors include one which replaces timestamps and IP addresses with their binary equivalents, and one which replaces words from a dictionary with unused ASCII characters.
In this thesis, data compression is shown to be an effective method of data reduction, producing up to 98 percent reduction in file size on a corpus of log files. The use of preprocessors which exploit semantic knowledge results in up to 56 percent improvement in overall compression time and up to 32 percent reduction in compressed size.
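A minimal sketch of one such preprocessor, replacing IPv4 addresses with their 4-byte binary form before compression; the marker byte, sample log line, and lack of an escaping scheme are simplifications for illustration, not the thesis's implementation:

```python
import re
import socket

# Sketch: IPv4 addresses in log lines are replaced by a marker byte plus
# their 4-byte packed binary form, so a general-purpose compressor sees a
# shorter, more regular stream. The \x00 marker and the sample log line are
# assumptions; a real preprocessor would need an escaping scheme.

IP_RE = re.compile(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b")

def pack_ips(line: str) -> bytes:
    out, last = bytearray(), 0
    for m in IP_RE.finditer(line):
        out += line[last:m.start()].encode()
        out += b"\x00" + socket.inet_aton(m.group(0))  # 5 bytes vs up to 15
        last = m.end()
    out += line[last:].encode()
    return bytes(out)

log = "Oct 11 22:14:15 sshd: failed login from 192.168.100.234\n" * 50
packed = b"".join(pack_ips(l) for l in log.splitlines(keepends=True))
assert len(packed) < len(log.encode())  # binary form is strictly shorter
```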
816

Transformation Induite au cours d’un Procédé Industriel (TIPI) de compression directe : transition polymorphique de la caféine et propriétés physiques des comprimés / Processing-Induced-Transformations (PIT) in direct compression : polymorphic transition of caffeine and tablet physical properties

Juban, Audrey 31 August 2016 (has links)
Ce manuscrit est consacré à l'étude des transformations polymorphiques induites au cours du procédé de compression directe, et à son incidence sur les propriétés mécaniques des comprimés. L'objectif principal de ce travail est d'apporter des éléments de compréhension sur la transition polymorphique de la caféine (principe actif modèle) Forme I en Forme II survenant lors du procédé de compression directe, et de déterminer si celle-ci a un impact sur la contrainte à la rupture du comprimé. L'utilisation du simulateur de compression Styl'One Classique (Médel'Pharm) et d'une machine de fatigue (Instron®) pour la fabrication des comprimés a permis d'étudier deux paramètres de procédé (pression et vitesse de fabrication) et deux paramètres de formulation (dilution du principe actif et nature du diluant) représentatifs de conditions industrielles. Les transitions de phase de la caféine ont été évaluées par analyse calorimétrique différentielle (ACD). De plus, des études cinétiques ont été conduites durant plusieurs mois afin d'observer l'influence de ces différents paramètres sur la transition polymorphique de la caféine anhydre Forme I en Forme II dans les comprimés au cours de leur stockage. Enfin, l'analyse du mécanisme de transition de ce principe actif a été réalisée au moyen d'une loi exponentielle étirée, issue du modèle de Johnson-Mehl-Avrami. La contrainte à la rupture des comprimés (caractéristique globale) a été mesurée par un test de rupture diamétrale, la dureté de surface des comprimés (caractéristique locale) par nano-indentation. Un premier modèle de prédiction de la contrainte à la rupture selon la teneur en caféine a été développé.
Les principales caractéristiques du cycle de compression, calculées à partir des données enregistrées par le simulateur de compression, ont permis d'analyser le comportement des formules lors de la compression puis d'établir un second modèle de prédiction de la contrainte à la rupture. Les résultats de transition polymorphique et de propriétés physiques des comprimés sont alors confrontés. / Direct compression is widely used in the pharmaceutical industry for tablet manufacturing. This work is dedicated to the study of the polymorphic transformation induced by a direct compression process and its impact on tablet mechanical properties. The main objective is to improve the understanding of the phase transition of caffeine Form I into Form II occurring during direct compression, and to determine whether it has an impact on tablet tensile strength. To this end, several studies were conducted on the impact of operating conditions on the polymorphic transformation of a model active pharmaceutical ingredient (API) and on several physical properties of the tablets. The use of a compression simulator, the Styl'One Classique (Médel'Pharm), and a fatigue machine (Instron®) for the manufacture of tablets allowed two process parameters (compression load and compression speed) and two formulation parameters (dilution of the API and nature of the diluent) to be studied. Caffeine phase transitions were evaluated by differential scanning calorimetry (DSC). Moreover, for several months after tableting, kinetic studies were conducted in order to observe the influence of these parameters on the polymorphic transition of anhydrous caffeine Form I into Form II in tablets during storage.
Finally, the transition mechanism of this API was analysed by means of a stretched exponential law derived from the Johnson-Mehl-Avrami model. The tensile strength of the tablets (a global property) was measured by a diametral compression test, and their surface hardness (a local property) by nanoindentation. A first model predicting tablet tensile strength as a function of caffeine content was developed. The main characteristics of the compression cycle, calculated from the data recorded by the compression simulator, made it possible to analyse the behaviour of the different blends during compression and to establish a second model predicting the tensile strength. The results obtained for the polymorphic transition and the physical properties of the tablets are then compared.
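A stretched exponential (Johnson-Mehl-Avrami-type) transformation law of the kind mentioned above can be illustrated with a short sketch; the parameter values below are invented for demonstration, not the fitted values from this study:

```python
import math

# Stretched-exponential (JMA-type) law for the transformed fraction:
# alpha(t) = 1 - exp(-(t / tau) ** beta). The values tau = 30 days and
# beta = 0.7 are illustrative assumptions only.

def transformed_fraction(t, tau=30.0, beta=0.7):
    """Fraction of Form I converted to Form II after storage time t (days)."""
    return 1.0 - math.exp(-((t / tau) ** beta))

assert transformed_fraction(0) == 0.0
# The fraction increases monotonically and saturates towards 1.
assert 0.0 < transformed_fraction(10) < transformed_fraction(100) < 1.0
```

Fitting tau and beta to measured conversion data is what lets the kinetics under different process and formulation parameters be compared quantitatively.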
817

GPU Accelerated Light Field Compression

Baravdish, Gabriel January 2018 (has links)
This thesis presents a GPU-accelerated method to compress light fields and light field videos. The implementation is based on earlier work on a full light field compression framework. The large amount of data produced when capturing light fields makes compression a challenge, and we seek to accelerate the encoding part. We compress by projecting each data point onto a set of dictionaries, seeking the sparse representation with the least error. A greedy algorithm optimized for GPU computation is presented. We exploit the structure of the algorithm by encoding the data in parallel segments, achieving faster computation while maintaining quality. The results show a significantly faster encoding time compared to other results in the same research field. We conclude that further speed improvements are possible, and that the method is not far from interactive compression speeds.
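A minimal CPU sketch of the greedy sparse-coding step described above, in the matching-pursuit family: each iteration picks the dictionary atom most correlated with the residual. The dictionary, signal, and atom budget are synthetic, and the thesis's GPU implementation differs in detail:

```python
import numpy as np

# Matching pursuit: greedily approximate a signal as a sparse combination
# of dictionary columns. A GPU version would run this per data segment in
# parallel; here everything is a small synthetic CPU example.

def matching_pursuit(signal, dictionary, n_atoms=5):
    residual = signal.copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual      # match against all atoms
        k = np.argmax(np.abs(correlations))         # most correlated atom
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x = rng.normal(size=64)
coeffs, residual = matching_pursuit(x, D, n_atoms=10)
assert np.linalg.norm(residual) < np.linalg.norm(x)   # error decreased
```

With unit-norm atoms, each iteration strictly reduces the residual norm, which is why the greedy loop converges towards a sparse approximation.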
818

[en] EVALUATING MOTION ESTIMATION ALGORITHMS FOR VIDEO COMPRESSION / [pt] AVALIAÇÃO DE ALGORITMOS DE MOVIMENTO PARA A COMPRESSÃO DE SEQÜÊNCIAS DE IMAGENS

JOSE ANTONIO CASTINEIRA GONZALES 19 July 2006 (has links)
[pt] Este trabalho teve por objetivo estudar algoritmos de estimação de movimento baseados na técnica de casamento de bloco a fim de avaliar a importância da sua escolha na construção de um codificador para uso em compressão de seqüência de imagens. Para isto foram estudados quatro algoritmos baseados na técnica de casamento de bloco, sendo verificada a interdependência existente entre os vários parâmetros que os compõem, tais como, tamanho da área de busca, critérios de medida de distorção entre blocos e tamanhos de blocos, em relação à qualidade da imagem reconstruída. / [en] This work studies motion estimation algorithms based on block matching, in order to evaluate the importance of the choice of motion estimation algorithm in the design of an image-sequence compression coder. To this end, four motion estimation algorithms were studied, and their performance was evaluated with respect to parameters such as search region size, the block-matching distortion measure, and block size, in relation to the quality of the reconstructed image.
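A minimal full-search block-matching sketch in the spirit of the algorithms studied here: for one block of the current frame, a search window in the reference frame is scanned and the displacement with the lowest SAD (sum of absolute differences) is kept. The frame contents, block size, and search radius are synthetic choices for illustration:

```python
import numpy as np

# Full-search block matching with a SAD criterion over a square window.

def full_search(cur, ref, by, bx, block=8, radius=4):
    target = cur[by:by + block, bx:bx + block].astype(int)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate window falls outside the frame
            sad = np.abs(target - ref[y:y + block, x:x + block].astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(32, 32))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))  # shift content down 2, right 3
mv, sad = full_search(cur, ref, by=8, bx=8)
assert mv == (-2, -3) and sad == 0  # the block is found back in the reference
```

Faster algorithms (three-step, logarithmic searches, etc.) prune this exhaustive window at some cost in match quality, which is exactly the trade-off such comparative studies evaluate.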
819

[en] LOSSY LEMPEL-ZIV ALGORITHM AND ITS APPLICATION TO IMAGE COMPRESSION / [pt] ALGORITMO DE LEMPEL-ZIV COM PERDAS E APLICAÇÃO À COMPRESSÃO DE IMAGENS

MURILO BRESCIANI DE CARVALHO 17 August 2006 (has links)
[pt] Neste trabalho, um método de compressão de dados com perdas, baseado no algoritmo de compressão sem perdas de Lempel-Ziv é proposto. Simulações são usadas para caracterizar o desempenho do método, chamado LLZ. É também aplicado à compressão de imagens e os resultados obtidos são analisados. / [en] In this work, a lossy data compression method based on the Lempel-Ziv lossless compression scheme is proposed. Simulations are used to study the performance of the method, called LLZ. The LLZ is also used to compress digital image data, and the results obtained are analyzed.
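As a hedged illustration of the lossy Lempel-Ziv idea, the toy LZ78-style parser below may reuse a dictionary phrase whose symbols each differ from the input by at most a tolerance, trading distortion for a shorter parse. All details are invented and do not reproduce the LLZ scheme itself:

```python
# Toy LZ78-style parse with a distortion tolerance: an input phrase is
# mapped to an existing dictionary phrase when every symbol is within
# `tol`, so the dictionary grows more slowly and phrases get longer.

def lossy_lz78(data, tol=0):
    dictionary = {(): 0}                  # phrase -> index
    output, phrase = [], ()
    for sym in data:
        candidate = phrase + (sym,)
        match = next((p for p in dictionary
                      if len(p) == len(candidate)
                      and all(abs(a - b) <= tol for a, b in zip(p, candidate))),
                     None)
        if match is not None:
            phrase = match                # accept an approximate phrase
        else:
            output.append((dictionary[phrase], sym))
            dictionary[candidate] = len(dictionary)
            phrase = ()
    if phrase:
        output.append((dictionary[phrase], None))
    return output

data = [10, 11, 10, 10, 11, 9, 10, 11, 10]
exact = lossy_lz78(data, tol=0)   # 6 (index, symbol) pairs
lossy = lossy_lz78(data, tol=1)   # 4 pairs: tolerance shortens the parse
assert len(lossy) < len(exact)
```

A decoder holding the same dictionary reconstructs an approximation of the input from the (index, symbol) pairs; the tolerance bounds the per-symbol distortion.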
820

[en] A UNIVERSAL ENCODER FOR CONTINUOUS ALPHABET SOURCE COMPRESSION / [pt] UM ALGORITMO UNIVERSAL PARA COMPRESSÃO DE FONTES COM ALFABETO CONTÍNUO

MARCELO DE ALCANTARA LEISTER 04 September 2006 (has links)
[pt] A tese de mestrado, aqui resumida, tem a meta de propor novos algoritmos para a compressão de dados, em especial imagens, apresentando aplicações e resultados teóricos. Como o título sugere, estes dados serão originados de fontes com alfabeto contínuo, podendo ser particularizado para os casos discretos. O algoritmo a ser proposto (LLZ para o caso contínuo) é baseado no codificador universal Lempel-Ziv, apresentando a característica de admitir a introdução de perdas, mas que conduza a um maior aproveitamento do poder de compressão deste algoritmo. Desta forma, o LLZ se mostra vantajoso em dois pontos: integrar compactador e quantizador, e ser um quantizador universal. / [en] This dissertation introduces new data compression algorithms, especially for images, and presents applications and theoretical results related to these algorithms. The data to be compressed originate from sources with a continuous alphabet, and the algorithms can be specialized to the discrete case. The proposed algorithm (LLZ for the continuous case), which is based on the universal Lempel-Ziv coder (LZ), admits losses, taking advantage of LZ's compression power. As such, the LLZ is an innovative proposal in two ways: first, it couples compaction and quantization in one step; and second, it can be seen as a universal quantizer.
