41

Contribution à l’amélioration des performances de décodage des turbo codes : algorithmes et architecture / Contribution to the improvement of the decoding performance of turbo codes : algorithms and architecture

Tonnellier, Thibaud 05 July 2017
Turbo codes are a class of error-correcting codes that approach the theoretical capacity limit formulated by Claude Shannon. Thanks to their excellent trade-off between computational complexity and decoding performance, they have been adopted in many digital communication standards. One way to characterize an error-correcting code is the evolution of its bit error rate as a function of the signal-to-noise ratio (SNR). A turbo code performance curve is divided into two regions: the waterfall region and the error floor region. In the waterfall region, a slight increase in SNR results in a significant drop in error rate. In the error floor region, the error rate improves only marginally as the SNR grows. This error floor can prevent turbo codes from being used in applications with low error-rate requirements. Various construction optimizations that lower the error floor of turbo codes have therefore been proposed by the scientific community, but these approaches cannot be applied to already standardized turbo codes. This thesis addresses the problem of lowering the error floor of turbo codes without modifying the digital communication chain on the transmitter side. To that end, the state of the art of post-processing decoding methods for turbo codes is reviewed. It appears that the efficient solutions are expensive to implement, because they require a multiplication of computational resources or strongly impact the overall decoding latency. First, two decoding algorithms based on monitoring the decoder's internal metrics are proposed: one improves the convergence of the turbo decoder, while the other lowers the error floor only marginally. The study then shows that, in the error floor region, the frames output by the turbo decoder are very close to the codeword originally transmitted. This is demonstrated by an analytical prediction of the distribution of the number of erroneous bits per erroneous frame, derived from the distance spectrum of the turbo code. Since the error floor is caused by only a few erroneous bits, a metric for identifying them is proposed. This leads to a decoding algorithm able to correct residual errors. The algorithm, called Flip-and-Check, relies on generating candidate words and verifying them successively with an error-detecting code. With this algorithm, the error floor of the turbo codes used in several standards (LTE, CCSDS, DVB-RCS and DVB-RCS2) is lowered by one order of magnitude, at a reasonable computational cost. Finally, a hardware decoding architecture implementing the Flip-and-Check algorithm is presented. A preliminary study of the impact of the algorithm's parameters leads to optimal values for some of them, while others must be tuned according to the targeted decoding-performance gains. This architecture demonstrates that the algorithm can be integrated alongside existing turbo decoders, thereby lowering the error floor of the turbo codes found in current telecommunication standards.
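As a rough illustration of the Flip-and-Check principle described in this abstract, the sketch below enumerates candidate words by flipping the least-reliable bits of a failed frame and checks each candidate with an error-detecting code. It is a minimal sketch, not the thesis's implementation: the use of CRC-style checking, the reliability metric (|LLR|) and all names (`flip_and_check`, `crc_check`, `q`) are assumptions made for illustration.

```python
import itertools
import numpy as np

def flip_and_check(llrs, crc_check, q=6):
    """Try to repair a frame that failed its error-detecting code after turbo decoding.

    llrs      : a-posteriori log-likelihood ratios, one per bit (illustrative input)
    crc_check : callable returning True if a candidate hard decision passes the check
    q         : number of least-reliable bit positions considered for flipping
    """
    llrs = np.asarray(llrs, dtype=float)
    hard = (llrs < 0).astype(np.uint8)          # hard decision per bit
    if crc_check(hard):
        return hard                              # nothing to repair

    # Reliability metric: |LLR|; the smallest magnitudes are the likeliest errors.
    weakest = np.argsort(np.abs(llrs))[:q]

    # Generate candidate words by flipping subsets of the weak positions,
    # smallest subsets first, and verify each candidate with the check.
    for r in range(1, q + 1):
        for subset in itertools.combinations(weakest, r):
            candidate = hard.copy()
            candidate[list(subset)] ^= 1
            if crc_check(candidate):
                return candidate                 # residual errors corrected
    return hard                                  # give up: return the decoder output
```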
42

Generalização de regras de associação utilizando conhecimento de domínio e avaliação do conhecimento generalizado / Generalization of association rules through domain knowledge and generalized knowledge evaluation

Veronica Oliveira de Carvalho 23 August 2007
Among data mining techniques, association mining identifies all the associations intrinsic to a database. While advantageous, this characteristic generates a large number of patterns, many of which, even though statistically valid, are trivial, spurious, or irrelevant to the application. In addition, the traditional association technique generates patterns composed only of items present in the database, which in general yields very specific knowledge. This specificity makes it difficult for end users, who want useful and comprehensible knowledge, to obtain a general view of the domain. Post-processing of the discovered rules therefore becomes an important topic, since the obtained rules need to be validated. In this context, this work presents an approach for post-processing association rules that uses domain knowledge, expressed through taxonomies, to obtain a compact and representative set of generalized association rules. In addition, in order to evaluate the representativeness of generalized patterns, a study of objective interest measures applied to generalized association rules is presented. In this study, the semantics of the generalization is taken into account, since each semantics provides a distinct view of the domain. As results of this thesis, it was observed that an association rule set can be compacted in the presence of a taxonomy set, and that for each generalization semantics there is a set of measures more appropriate for evaluating the generalized rules.
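A minimal sketch of the generalization step this abstract refers to is shown below: items in each rule are replaced by their taxonomy parents and duplicate generalized rules are collapsed, compacting the rule set. The rule representation, the `parent` dictionary encoding of the taxonomy and the toy example are assumptions for illustration only, not the thesis's own data structures or semantics.

```python
def generalize_rules(rules, parent):
    """Replace items by their taxonomy parents and collapse duplicate rules.

    rules  : iterable of (antecedent_items, consequent_items) tuples
    parent : dict mapping an item to its parent in the taxonomy
             (items without an entry stay as they are)
    """
    def lift(item):
        return parent.get(item, item)   # one level of generalization

    generalized = set()
    for antecedent, consequent in rules:
        gen_ant = frozenset(lift(i) for i in antecedent)
        gen_con = frozenset(lift(i) for i in consequent)
        if gen_ant & gen_con:           # discard degenerate rules (X -> X)
            continue
        generalized.add((gen_ant, gen_con))
    return generalized

# Toy example: specific products roll up to categories, so two rules become one.
taxonomy = {"lager": "beer", "stout": "beer", "cheddar": "cheese", "brie": "cheese"}
rules = [({"lager"}, {"cheddar"}), ({"stout"}, {"brie"})]
print(generalize_rules(rules, taxonomy))   # single rule: beer -> cheese
```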
43

Construção semi-automática de taxonomias para generalização de regras de associação / Semi-automatic construction of taxonomies for association rule generalization

Camila Delefrate Martins 14 July 2006
For the data mining process to succeed, the extracted knowledge must be understandable and interesting, so that the end user can apply it in an intelligent system or in decision making. A well-known problem arises, however, when the association task is used: a large volume of rules is generated. Taxonomies can be used to ease the analysis and interpretation of association rules, since they provide a view of how items can be hierarchically classified. Based on this hierarchy, it is possible to obtain more general rules that represent sets of items. In this context, this work presents a methodology for the semi-automatic construction of taxonomies, combining automatic and interactive procedures. This combination makes it possible to use the specialist's knowledge while also assisting in the identification of groups. Among the main results of this work are the proposal and implementation of the SACT algorithm (Semi-automatic Construction of Taxonomies), which supports the proposed methodology, and the computational module RulEE-SACT, developed to make the algorithm usable. To assess the feasibility and quality of the methodology and of the module, a case study was carried out in which taxonomies were built for two databases using RulEE-SACT. One of the taxonomies was analyzed and validated by a domain specialist. The taxonomies and the transaction databases were then supplied to two association rule generalization algorithms in order to analyze the use of the generated taxonomies.
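The SACT algorithm itself is not reproduced here; the sketch below only illustrates the kind of automatic step the abstract mentions, grouping items whose transaction co-occurrence is similar so that a specialist can interactively confirm, rename or split the proposed groups. The Jaccard-similarity criterion, the threshold and all names are assumptions made for illustration.

```python
from collections import defaultdict
from itertools import combinations

def propose_item_groups(transactions, min_similarity=0.5):
    """Suggest candidate taxonomy groups from transaction data (illustrative only)."""
    occurrences = defaultdict(set)
    for tid, items in enumerate(transactions):
        for item in items:
            occurrences[item].add(tid)

    # Union-find style merging of items with similar occurrence sets.
    group_of = {item: item for item in occurrences}

    def find(x):
        while group_of[x] != x:
            x = group_of[x]
        return x

    for a, b in combinations(occurrences, 2):
        inter = len(occurrences[a] & occurrences[b])
        union = len(occurrences[a] | occurrences[b])
        if union and inter / union >= min_similarity:
            group_of[find(a)] = find(b)

    groups = defaultdict(list)
    for item in occurrences:
        groups[find(item)].append(item)
    # Only multi-item groups are worth showing to the specialist.
    return [sorted(g) for g in groups.values() if len(g) > 1]
```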
44

Advanced Real-time Post-Processing using GPGPU techniques

Lönroth, Per, Unger, Mattias January 2008
Post-processing techniques are used to change a rendered image as a last step before presentation and include, but are not limited to, operations such as changes of saturation or contrast, as well as more advanced effects like depth of field and tone mapping. Depth-of-field effects are created by changing the focus in an image: the parts close to the focus point are perfectly sharp while the rest of the image is blurred to a varying degree. The effect is widely used in photography and film as a depth cue and has in recent years also been introduced into computer games. Today's graphics hardware offers new possibilities in terms of computational capacity: shaders and GPGPU languages can be used to perform massively parallel operations on graphics hardware and are well suited for game developers. This thesis presents the theoretical background of some of the most recent and most valuable depth-of-field algorithms and describes the implementation of various solutions in the shader domain as well as with GPGPU techniques. The main objective is to analyze various depth-of-field approaches, looking at their visual quality and at how the methods scale performance-wise across the different techniques.
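As a rough sketch of the kind of post-processing pass this abstract discusses, the code below computes a per-pixel circle of confusion from a depth buffer and blends between the sharp image and a pre-blurred copy. In practice this would run in a fragment shader or GPGPU kernel; the simplified linear circle-of-confusion model and the parameter names are assumptions, not the thesis's algorithms.

```python
import numpy as np

def depth_of_field(sharp, blurred, depth, focus_depth, focus_range, max_coc=1.0):
    """Blend a sharp and a blurred image according to a per-pixel circle of confusion.

    sharp, blurred : HxWx3 float arrays (the blurred copy is precomputed,
                     e.g. with a separable Gaussian pass)
    depth          : HxW array of scene depths from the depth buffer
    focus_depth    : depth that should stay perfectly sharp
    focus_range    : depth interval over which blurriness ramps up to max_coc
    """
    # Circle of confusion: 0 at the focus plane, saturating to max_coc far from it.
    coc = np.clip(np.abs(depth - focus_depth) / focus_range, 0.0, max_coc)
    alpha = coc[..., None]                      # broadcast over colour channels
    return (1.0 - alpha) * sharp + alpha * blurred
```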
45

"Pós-processamento de regras de associação" / Post-processing of association rules

Edson Augusto Melanda 30 November 2004
The growing demand for methods of analysis and knowledge discovery in large databases has strengthened research in data mining. Among the tasks in this area are association rules. Several algorithms have been proposed for mining association rules, but they typically produce a very large number of rules, which makes post-processing of the discovered knowledge a complex and challenging step. Measures exist to support this rule-evaluation step, yet there is no intuitive method for ranking and selecting rules, nor are there specific methodologies for selecting rules while considering more than one measure simultaneously. The objective of this thesis is to propose, develop and implement a methodology for the post-processing of association rules in which small groups of rules identified as potentially interesting are presented to the expert user for evaluation. To this end, methods and techniques used in knowledge post-processing, objective measures for evaluating association rules, and rule-generation algorithms were analyzed. Experiments were carried out to identify the potential of these measures as filters for association rules, and a graphical evaluation supported both the study of the measures and the specification of the proposed methodology. The novel aspect of the methodology is the use of the Pareto method and the combination of measures to select association rules. Finally, an environment for evaluating association rules, named ARInE, was implemented, making the proposed methodology usable.
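The sketch below illustrates the Pareto idea mentioned in this abstract: given several objective interest measures per rule, keep only the rules that no other rule dominates on all measures. It is a generic non-dominated selection, not the thesis's ARInE implementation; the measure names in the example (support, lift) are assumptions.

```python
def pareto_front(rules):
    """Select non-dominated association rules.

    rules : list of (rule_id, measures) where measures is a tuple of
            objective interest values, all oriented so larger is better.
    """
    def dominates(a, b):
        # a dominates b if it is at least as good everywhere and better somewhere.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    front = []
    for rid, m in rules:
        if not any(dominates(other_m, m) for _, other_m in rules if other_m != m):
            front.append(rid)
    return front

# Example with (support, lift) per rule: only non-dominated rules survive.
rules = [("r1", (0.10, 1.8)), ("r2", (0.08, 2.5)), ("r3", (0.05, 1.2))]
print(pareto_front(rules))   # ['r1', 'r2']; r3 is dominated by both
```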
46

Investigations on post-processing of 3D printed thermoplastic polyurethane (TPU) surface

Boualleg, Abdelmadjid January 2019
The reduction of product development cycle time is a major concern for industries that want to remain competitive. Among manufacturing technologies, 3D printing, also known as additive manufacturing (AM), has shown excellent potential to reduce both cycle time and product cost thanks to its lower energy and material consumption compared with conventional manufacturing. Fused deposition modeling (FDM) is one of the most popular additive manufacturing technologies for engineering applications: it can build functional parts with complex geometries in reasonable build times using relatively inexpensive equipment and materials. However, parts produced by FDM still face quality challenges, notably poor surface quality.   The focus of this study is improving the surface quality of parts produced by fused deposition modeling. The investigation includes 3D printing study samples with optimum parameter settings and post-processing the sample surfaces by laser ablation. A Taguchi design of experiments is employed to identify the optimum settings for laser ablation of the FDM surfaces; laser power, laser speed and pulses per inch (PPI) are the laser settings considered. The samples are characterized using Dino-Lite USB camera images, and a GFM Mikro-CAD fringe-projection microscope is used to measure surface roughness. Areal surface parameters are used to characterize and compare the as-printed and laser-ablated surfaces. The effect of laser ablation is observed to vary with the angle at which a surface was printed and with the ablation settings, and the surface roughness of laser-ablated surfaces is found to be lower than that of as-printed FDM surfaces.
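As a small illustration of the areal surface parameters this abstract relies on, the sketch below computes Sa (the arithmetic mean height) from a measured height map such as one produced by fringe projection. The simple mean-subtraction levelling is a simplifying assumption; the thesis's actual parameter set and processing chain are not reproduced.

```python
import numpy as np

def areal_sa(height_map):
    """Arithmetic mean height Sa of a measured surface patch.

    height_map : 2-D array of surface heights (e.g. from fringe projection).
    The surface is levelled here simply by subtracting the mean height,
    then Sa is the mean absolute deviation from that reference.
    """
    z = np.asarray(height_map, dtype=float)
    z = z - z.mean()                 # crude levelling: remove the mean height
    return np.abs(z).mean()          # Sa = mean(|z - mean(z)|)
```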
47

Kernel density estimators as a tool for atmospheric dispersion models

Egelrud, Daniel January 2021
Lagrangian particle models are useful for modelling pollutants in the atmosphere: they simulate the spread of pollutants by modelling the trajectories of individual particles. To be useful, however, these models require a density estimate. The standard method has been box counting, but a kernel density estimator (KDE) is an alternative. How KDE is used varies, as there is no standard implementation; primarily, it is the choice of kernel and bandwidth estimator that determines the model. In this report I have implemented a KDE for FOI's Lagrangian particle model LPELLO. The kernel I have used is a combination of a uniform and a Gaussian kernel. Four different bandwidth estimators have been tested, two global and two variable. The first variable bandwidth estimator is based on the age of the released particles, and the second on the turbulence history of the particles. The methods were then tested against box counting, which, when an exceedingly large number of particles is used, can be regarded as the true concentration. The tests indicate that the KDE method generally performs better than box counting at low particle numbers, and that the variable bandwidth estimators performed better than both global bandwidth estimators. To reach a firmer conclusion, more testing is needed, but the results indicate that KDE in general, and variable bandwidth estimators in particular, are useful tools for concentration estimation.
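A minimal sketch of a kernel density concentration estimate of the kind this abstract compares against box counting is given below, using a plain Gaussian kernel with a single global bandwidth. The combined uniform/Gaussian kernel and the variable (particle-age or turbulence-history) bandwidths described in the thesis are not reproduced; the function and parameter names are illustrative.

```python
import numpy as np

def kde_concentration(particle_pos, particle_mass, grid_points, bandwidth):
    """Estimate concentration at grid points from Lagrangian particle positions.

    particle_pos  : (N, 3) array of particle coordinates
    particle_mass : (N,) array of the mass carried by each particle
    grid_points   : (M, 3) array of receptor / grid coordinates
    bandwidth     : global smoothing length h of the Gaussian kernel
    """
    grid_points = np.asarray(grid_points, dtype=float)
    h = float(bandwidth)
    norm = (2.0 * np.pi * h**2) ** -1.5              # 3-D Gaussian normalisation
    conc = np.zeros(len(grid_points))
    for x_p, m_p in zip(particle_pos, particle_mass):
        d2 = np.sum((grid_points - x_p) ** 2, axis=1)
        conc += m_p * norm * np.exp(-0.5 * d2 / h**2)
    return conc
```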
48

Modelling, Evaluation and Assessment of Welded Joints Subjected to Fatigue

Rajaganesan, Prajeet January 2020
Fatigue assessment of welded joints using finite element methods is becoming very common, and new methods promising more accurate fatigue-life estimates are continually being proposed. Some of these methods are investigated in this thesis to gain a thorough understanding of the weld fatigue evaluation process. The thesis presents several candidate methods for the analysis of selected case studies and compares them. The sensitivity of the methods to FE model properties was studied, and the ease of implementation, with a view to later automation, was considered from the early stages of the project. A comparison among the feasible methods was then performed after the analysis. The three selected case studies provided a wide range of difficulties in terms of geometry and loading, making them suitable for evaluating the methods; only case studies with fillet welds were considered in the literature study and analysis. Applying some methods to a case study on which they had not previously been tested proved a challenging part of the analysis phase. After comparing and ranking the methods on criteria such as accuracy and robustness, the hot spot stress method was proposed. Its main advantages are low computational time, low complexity during both pre- and post-processing, and applicability to both solid and shell models. Finally, the report walks through the functionality of the post-processor tool built to streamline hot-spot-based fatigue assessment of welds. Pseudo-code for some of the tool's functions is given for clarity, and the workflow is summarized in a flowchart. The case-study outputs were evaluated with the tool and compared with manual evaluation to check the tool's effectiveness in different scenarios. The tool handles different types of weld geometry with good agreement to the manually obtained results, but only for welds lying on a flat surface. Among its advantages are the ability to handle multiple welds simultaneously and the flexibility given to the user in choosing how results are presented. Most post-processing steps are automated, while some require user input.
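As a small illustration of the surface-stress extrapolation behind the hot spot stress method named in this abstract, the sketch below uses the commonly cited read-out points at 0.4t and 1.0t from the weld toe with linear extrapolation. The read-out distances and coefficients depend on the fatigue standard being followed and are assumptions here, not details taken from the thesis.

```python
def hot_spot_stress(stress_04t, stress_10t):
    """Linear surface extrapolation of structural stress to the weld toe.

    stress_04t : stress read out at 0.4 * t from the weld toe
    stress_10t : stress read out at 1.0 * t from the weld toe
    (t is the plate thickness; values typically come from shell or solid FE results)
    """
    return 1.67 * stress_04t - 0.67 * stress_10t

# Example: read-out stresses of 180 MPa and 150 MPa give a hot spot stress of ~200 MPa.
print(hot_spot_stress(180.0, 150.0))   # 200.1
```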
49

Investigating Catalyst Composition, Doping, and Salt Treatment for Carbon Nanotube Sheets, and Methods to produce Carbon Hybrid Materials

Pujari, Anuptha 06 June 2023
No description available.
50

Hybrid in-process and post-process qualification for fused filament fabrication

Saleh, Abu Shoaib 21 July 2023
No description available.
