391

Le déni et la minimisation en tant que distorsions cognitives chez les agresseurs sexuels / Denial and minimization as cognitive distortions in sexual offenders

Girard, Julie
Objective: The authors who have examined the relationship between denial, minimization and cognitive distortions have all used different methods and definitions to describe these concepts, resulting in significant variability in the results. The primary aim of the current research is therefore to clarify the measurement of denial, minimization, and cognitive distortions. Method: Participants were 313 male inmates who completed the national sex offender treatment program of the Correctional Service of Canada between 2000 and 2004. These individuals completed a series of psychometric tests before and after their participation in the program, including the SOARS and the Bumby scales. Data analysis followed the construct-validation process established by Nunnally and Bernstein (1994). Results: The statistical analyses indicate that the Sex Offender Acceptance of Responsibility Scales (SOARS; Peacock, 2000) does not effectively measure the construct of denial and minimization, and its psychometric properties are questionable. Reducing the instrument to ten items, however, improves the measure. The resulting scale is composed of two factors, "Acceptance of sexual harm" and "Acceptance of sexual intent." These two factors were then examined in relation to the factors of the Bumby scales to explore the similarities between the concepts of denial, minimization and cognitive distortion. Despite low to moderate correlations, the variables failed to converge on a common factor in a factor analysis, and the SOARS variables correlate very little with the scale total, indicating that denial and minimization on the one hand and the cognitive distortions of sexual offenders on the other are distinct constructs.
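To make the construct-validation workflow above concrete (item reduction, a two-factor solution, internal consistency), here is a hedged sketch on simulated questionnaire data; the item counts, loadings, and respondent data are invented and are not the SOARS or Bumby measures.

# Illustrative sketch only: simulated two-factor item responses, not thesis data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 313, 10

# Simulate two latent factors (e.g. "acceptance of harm", "acceptance of intent")
latent = rng.normal(size=(n_respondents, 2))
loadings = np.zeros((2, n_items))
loadings[0, :5] = rng.uniform(0.6, 0.9, 5)   # first five items load on factor 1
loadings[1, 5:] = rng.uniform(0.6, 0.9, 5)   # last five items load on factor 2
items = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

# Exploratory factor analysis with two factors, as in the reduced scale
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print("Estimated loadings:\n", fa.components_.round(2))

# Internal consistency (Cronbach's alpha) of each five-item subscale
def cronbach_alpha(x):
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print("alpha, factor 1 items:", round(cronbach_alpha(items[:, :5]), 2))
print("alpha, factor 2 items:", round(cronbach_alpha(items[:, 5:]), 2))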
392

Optimisation perceptive de la restitution sonore multicanale par une analyse spatio-temporelle des premières réflexions / Perceptual optimization of multichannel sound reproduction through a spatio-temporal analysis of early reflections

Deprez, Romain 07 December 2012
The goal of this Ph.D. thesis is to optimize the perceived quality of multichannel sound reproduction systems in the context of a domestic listening room. The research work presented has been pursued in two directions. The first deals with the room effect, and more particularly with the physical and perceptual aspects of the first reflections within a room. These reflections are described in detail, and a psychoacoustical experiment was carried out to extend the available data on their perceptibility, i.e. their potency in altering the perception of the direct sound, whether in its timbral or spatial features. The results show how the threshold varies with the type of stimulus and with the spatial configuration of the direct sound and the reflection. For a given condition, the perceptibility threshold is expressed as a directivity function depending on the direction of incidence of the reflection. The second topic deals with room-correction methods. First, state-of-the-art digital methods are investigated; their main drawback is that they do not account for the specific role of the temporal and spatial attributes of first reflections. A new correction method is therefore proposed. It uses an iterative algorithm, derived from the FISTA method, modified to take the perceptibility of the reflections into account. All the processing is carried out in a spatial sound representation in which the spatial information is analysed on a basis of spherical harmonics.
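Because the proposed corrector is built on a FISTA-type iteration, the sketch below shows generic FISTA on an L1-regularized least-squares problem. It only illustrates the algorithm family under invented data; the thesis's version adds perceptual weighting of reflections and operates in a spherical-harmonic representation.

# Generic FISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (illustration only).
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal (shrinkage) step
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)       # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 128))
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=64)
print(np.nonzero(fista(A, b, lam=0.1) > 0.1)[0])   # indices of large recovered coefficients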
393

Graph-based variational optimization and applications in computer vision / Optimisation variationnelle discrète et applications en vision par ordinateur

Couprie, Camille 10 October 2011
Many computer vision applications such as image filtering, segmentation and stereovision can be formulated as optimization problems. Recently, discrete, convex, globally optimal methods have received a lot of attention. Many graph-based methods, however, suffer from metrication artefacts: segmented contours are blocky in areas where contour information is lacking. In the first part of this work, we develop a discrete yet isotropic energy-minimization formulation of the continuous maximum-flow problem that prevents metrication errors. This new convex formulation leads to a provably globally optimal solution, and the interior-point method employed optimizes the problem faster than existing continuous methods. The energy formulation is then adapted and extended to multi-label problems and shows improvements over existing methods; applied to image restoration (denoising, deblurring, fusion), with fast parallel proximal optimization tools, it preserves contrast better than the classical total-variation method. In the second part of this work, we introduce a framework that generalizes several state-of-the-art graph-based segmentation algorithms, namely graph cuts, random walker, shortest paths, and watershed. This generalization allowed us to exhibit a new case, for which we developed a globally optimal optimization method named "power watershed". The proposed power watershed algorithm computes a unique global solution to multi-labeling problems, is very fast, and is less prone to leaking than the classical watershed. We further generalize and extend the framework to applications beyond image segmentation, for example image filtering optimizing an L0-norm energy, stereovision, and fast, smooth surface reconstruction from a noisy cloud of 3D points.
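As a pointer into the unified framework mentioned above, its random walker member reduces to a linear solve in the graph Laplacian. The toy below segments a hypothetical 1-D chain of pixels from two seeds; it is an assumption-laden illustration, not the power watershed implementation.

# Random-walker segmentation on a tiny 1-D "image": solve L_u x_u = -B x_s for
# the probability that each unseeded pixel reaches the foreground seed first.
import numpy as np

intensities = np.array([0.1, 0.15, 0.2, 0.8, 0.85, 0.9])   # hypothetical pixel values
n = len(intensities)
beta = 50.0

# Edge weights between neighbouring pixels: high where intensities are similar
w = np.exp(-beta * np.diff(intensities) ** 2)

# Graph Laplacian of the chain graph
L = np.zeros((n, n))
for i, wi in enumerate(w):
    L[i, i] += wi; L[i + 1, i + 1] += wi
    L[i, i + 1] -= wi; L[i + 1, i] -= wi

seeds = {0: 1.0, n - 1: 0.0}            # pixel 0 = foreground seed, last pixel = background
unseeded = [i for i in range(n) if i not in seeds]

# Partition the Laplacian and solve for the unseeded probabilities
Lu = L[np.ix_(unseeded, unseeded)]
B = L[np.ix_(unseeded, list(seeds))]
xs = np.array(list(seeds.values()))
xu = np.linalg.solve(Lu, -B @ xs)

labels = (xu > 0.5).astype(int)         # 1 = foreground, 0 = background
print(xu.round(3), labels)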
394

Environmental Performance of Copper Slag and Barshot as Abrasives

Potana, Sandhya Naidu 20 May 2005
The basic objective of this study was to evaluate the environmental performance of two abrasives, copper slag and Barshot, in terms of productivity (area cleaned, ft²/hr), consumption and/or spent-abrasive generation rate (ton/2000 ft²; lb/ft²), and particulate emissions (mg/ft²; mg/lb; lb/lb; lb/kg; lb/ton). This helps in evaluating clean technologies for dry abrasive blasting and helps shipyards optimize productivity and minimize emissions by choosing the combinations reported in this study that best suit their conditions. The project was a joint effort between the Gulf Coast Region Maritime Technology Center (GCRMTC) and the USEPA. It was undertaken to simulate actual blasting operations conducted at shipyards under enclosed, uncontrolled conditions, on plates similar to the steel plates commonly blasted at shipyards.
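As a hedged illustration of how the reported metrics relate to one another, the sketch below derives them from a single blasting run; every number is invented and none comes from the study's measurements.

# Hypothetical numbers for illustration only, not measured values from the study.
area_cleaned_ft2 = 400.0          # ft^2 of plate blasted
blast_time_hr = 2.0               # hours of blasting
abrasive_used_lb = 2400.0         # lb of abrasive consumed
particulate_mg = 180000.0         # mg of particulate collected

productivity = area_cleaned_ft2 / blast_time_hr          # ft^2/hr
consumption = abrasive_used_lb / area_cleaned_ft2        # lb/ft^2
emission_per_area = particulate_mg / area_cleaned_ft2    # mg/ft^2
emission_per_lb = particulate_mg / abrasive_used_lb      # mg per lb of abrasive

print(f"productivity    = {productivity:.0f} ft^2/hr")
print(f"consumption     = {consumption:.1f} lb/ft^2")
print(f"emissions       = {emission_per_area:.0f} mg/ft^2")
print(f"emission factor = {emission_per_lb:.1f} mg/lb")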
395

Vehicle detection and tracking using wireless sensors and video cameras

Bandarupalli, Sowmya 06 August 2009
This thesis presents the development of a surveillance testbed using wireless sensors and video cameras for vehicle detection and tracking. The experimental study includes the testbed design and discusses some of the implementation issues in using wireless sensors and video cameras for a practical application. A group of sensor devices equipped with light sensors is used to detect and localize the position of a moving vehicle. A background subtraction method is used to detect the moving vehicle in the video sequences, and the vehicle centroid is calculated in each frame. A non-linear minimization method is used to estimate the perspective transformation that projects 3D points to 2D image points. Vehicle location estimates from three cameras are fused to form a single trajectory representing the vehicle motion. Experimental results using both sensors and cameras are presented; the average error between the vehicle location estimates from the cameras and the wireless sensors is around 0.5 ft.
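A minimal sketch of the video detection step, assuming an OpenCV-style pipeline (the thesis does not state which library it used): background subtraction followed by a centroid computed from image moments. The video file name and parameter values are hypothetical.

# Background subtraction + centroid extraction; input file and settings are assumed.
import cv2

cap = cv2.VideoCapture("traffic.avi")            # hypothetical input video
bg_sub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg_sub.apply(frame)                              # foreground mask of moving pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # suppress speckle noise
    m = cv2.moments(mask, binaryImage=True)                 # image moments of the foreground
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid in pixel coordinates
        print(f"vehicle centroid: ({cx:.1f}, {cy:.1f})")
cap.release()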
396

Factor analysis of dynamic PET images

Cruz Cavalcanti, Yanna 31 October 2018
Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become a ubiquitous analysis tool for quantifying biological processes. Several quantification techniques from the PET imaging literature require a prior estimation of global time-activity curves (TACs), herein called factors, representing the concentration of tracer in a reference tissue or in blood over time. To this end, factor analysis has often appeared as an unsupervised learning solution for the extraction of factors and of their respective fractions in each voxel. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first is the assumption that the elementary response of each tissue to the tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models proposed here introduce an additional degree of freedom in the factors related to specific binding: a spatially variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either previously learned or explicitly modelled with convolutional nonlinearities affecting non-specific-binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images cannot in general be simply modelled by Poisson or Gaussian distributions. We therefore propose to consider a popular and quite general loss function, the β-divergence, which generalizes conventional loss functions such as the least-squares distance and the Kullback-Leibler and Itakura-Saito divergences, corresponding respectively to Gaussian, Poisson and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
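Since the factor analysis is framed as non-negative unmixing under a β-divergence loss, the hedged sketch below applies scikit-learn's NMF with a Kullback-Leibler loss to simulated time-activity curves. Dimensions, curves, and noise are invented, and the thesis's models additionally handle spatially variant specific binding, which this generic call does not.

# Simulated dynamic-PET-like data: each "voxel" TAC is a non-negative mixture
# of elementary factor TACs. All curves and dimensions are hypothetical.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_times, n_voxels, n_factors = 30, 500, 3
t = np.linspace(0.1, 10, n_times)

# Three elementary TACs (blood-like, tissue-like, specific-binding-like shapes)
factors = np.stack([np.exp(-t), t * np.exp(-0.5 * t), 1 - np.exp(-0.3 * t)])
abundances = rng.dirichlet(np.ones(n_factors), size=n_voxels).T   # sum-to-one mixing
Y = factors.T @ abundances + 0.01 * rng.random((n_times, n_voxels))

# beta_loss=1 is the Kullback-Leibler divergence (2 would be Frobenius/Gaussian,
# 0 Itakura-Saito); the multiplicative-update solver is required for these losses.
model = NMF(n_components=n_factors, beta_loss=1, solver="mu",
            init="nndsvda", max_iter=1000)
W = model.fit_transform(Y)        # estimated factor TACs (time x factors)
H = model.components_             # estimated factor proportions per voxel
print(W.shape, H.shape)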
397

Reconfiguração ótima de sistemas de distribuição de energia elétrica baseado no comportamento de colônias de formigas / Optimal reconfiguration of the electric power distribution systems using a modified ant colony system algorithm

Pereira, Fernando Silva 26 February 2010
The objective of this work is to present a new approach for obtaining configurations of electric power distribution systems that minimize active power losses without violating operational constraints. The distribution system is assumed to operate in steady state with balanced, symmetrical phases, so it can be represented by a one-line diagram. The reconfiguration redistributes the current flows in the lines, transferring loads among the feeders and improving the voltage profile along the system. The reconfiguration problem can be formulated as a mixed-integer nonlinear programming problem; because of the combinatorial explosion inherent to this kind of problem, solving it with classical optimization techniques is unattractive, which opens the way for heuristic and metaheuristic techniques. Although the latter do not guarantee the global optimum, they are able to find good solutions in a relatively short time. To solve the reconfiguration problem, a new methodology is used, based on the foraging behaviour of ant colonies in nature: artificial ants (agents) explore the environment (the distribution system) and exchange information in order to find the topology with the smallest active losses. For the loss calculation, this work also presents a new approach to solving the power flow problem in radial distribution systems. The power flow is a basic tool used by control centres to determine the states and operating conditions of power systems. The methodologies usually employed for the power flow calculation are based on the classical Newton or Gauss methods, but in distribution systems, owing to particular characteristics such as the high resistance-to-reactance (r/x) ratio of the lines and the radial operation, these methods present convergence problems and are often inefficient. The proposed approach combines the penalty-function and Newton methods; the ill-conditioning of Newton's Jacobian matrix is resolved by the association with the penalty-function method. Tests performed on 5-bus, 16-bus, 33-bus, 69-bus and 136-bus systems are presented to evaluate the potential of the proposed techniques, and the results compare well with existing techniques.
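To make the metaheuristic concrete, here is a toy ant-colony search over which switch to open in each loop of a small meshed network. The loss table, parameters, and network are invented and unrelated to the thesis's test systems; a real implementation would evaluate each configuration with the radial power flow described above.

# Toy ant-colony search: choose one switch to open per loop so that the
# remaining radial topology has minimal "losses". All values are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical losses (kW) associated with opening each candidate switch in each loop
loss_table = [np.array([12.0, 9.5, 14.0, 8.8]),          # loop 1: 4 candidate switches
              np.array([7.2, 6.1, 9.9]),                 # loop 2: 3 candidate switches
              np.array([11.3, 10.0, 10.7, 9.1, 13.4])]   # loop 3: 5 candidate switches

def total_losses(choice):
    return sum(loss_table[k][s] for k, s in enumerate(choice))

alpha, rho, n_ants, n_iter = 1.0, 0.1, 10, 50
pheromone = [np.ones_like(t) for t in loss_table]
best_choice, best_loss = None, np.inf

for _ in range(n_iter):
    for _ in range(n_ants):
        # Each ant opens one switch per loop with probability proportional to pheromone^alpha
        choice = []
        for tau in pheromone:
            p = tau**alpha / (tau**alpha).sum()
            choice.append(rng.choice(len(tau), p=p))
        loss = total_losses(choice)
        if loss < best_loss:
            best_choice, best_loss = choice, loss
    # Evaporation plus reinforcement of the best-so-far configuration
    for k, tau in enumerate(pheromone):
        tau *= (1 - rho)
        tau[best_choice[k]] += 1.0 / best_loss

print("best open switches per loop:", best_choice, "losses:", best_loss)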
398

Modelos matemáticos e heurísticas baseadas em técnicas de programação matemática para o problema de minimização de perdas e reconfiguração de redes elétricas / Mathematical models and heuristic based on mathematical programming techniques for the problem of minimization of losses and reconfiguration of electrical networks

Spatti, Karla Barbosa de Freitas 04 April 2018
The reconfiguration of electric power distribution networks consists in altering the network topology by means of switching operations in the primary circuits. It is a combinatorial optimization problem in which the objectives are normally the minimization of active losses and/or of the number of switching operations performed, subject to constraints such as fault isolation, load balancing among the feeders and improvement of voltage levels. The difficulties in modelling and exactly solving reconfiguration problems come from the size of real systems, represented by a large number of switches and feeders, and from the combinatorial nature of the problem. To address these issues, several models and computational techniques have been developed, in particular improvement heuristics that start from a feasible solution and improve the results by reducing the search space until a new solution with a better objective value is found. In this sense, two mathematical formulations are proposed that describe new constraints in order to improve the description of the problem. The first, more simplified formulation considers only the active part of the instances; in the second, a complete model is described that refines some of the constraints of the first model and also considers the reactive part of the instances. Two heuristics are also adapted for the first time to the network reconfiguration problem: the Fix-and-Optimize improvement heuristic is configured in two different ways, with its key parameters determined through a sensitivity analysis. The results of the two proposed models and of the adapted heuristics on 13 reference systems are described and compared with other methods from the literature. To verify the efficiency and robustness of the developed methods and heuristics, replications of two reference systems are proposed: 9 replications of the 72-bus system and 4 replications of the 10560-bus system. Their results, as well as the performance of the methods, are described and evaluated.
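A hedged sketch of the Fix-and-Optimize idea on a toy binary problem rather than the thesis's reconfiguration model: variables are split into blocks, and each pass re-optimizes one block (here by exhaustive search) while the others stay fixed at their current values.

# Toy Fix-and-Optimize over binary variables; the objective and sizes are invented.
import itertools
import numpy as np

rng = np.random.default_rng(7)
n = 12
Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2      # symmetric cost matrix
c = rng.normal(size=n)

def cost(x):
    return x @ Q @ x + c @ x

x = rng.integers(0, 2, n).astype(float)             # initial feasible solution
blocks = [list(range(i, min(i + 4, n))) for i in range(0, n, 4)]

improved = True
while improved:
    improved = False
    for block in blocks:                            # free one block at a time
        best_assign, best_cost = None, cost(x)
        for assign in itertools.product([0.0, 1.0], repeat=len(block)):
            x_try = x.copy()
            x_try[block] = assign                   # other variables stay fixed
            if cost(x_try) < best_cost - 1e-9:
                best_assign, best_cost = assign, cost(x_try)
        if best_assign is not None:
            x[block] = best_assign
            improved = True

print("solution:", x.astype(int), "cost:", round(cost(x), 3))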
399

Vyhodnocení exploatačních parametrů vybraných pluhů v porovnatelných podmínkách / Evaluation of the operating parameters of selected plows under comparable conditions

KUKLA, Martin January 2019
This diploma thesis describes different types of soil tillage and the options for their use. It focuses on comparing selected plows under comparable conditions. The plows were compared in terms of efficiency, the possible types of turn on the headlands, fuel consumption and the durability of wearing parts. For the measurements, fields were chosen with as little slope as possible and with furrows as long as possible, to minimize distortion of the results.
400

Apoio à tomada de decisão e minimização da perda de matéria prima em processos de manufatura / Decision support and raw-material waste minimization in manufacturing processes

Ferrary, Felipe Rodrigues 10 March 2015
This work aims to optimize and automate a manufacturing system that uses metal plates as raw material. The preparation of the manufacturing process through the CAM system analysed currently involves several steps that do not communicate with each other and requires a high level of user intervention in decision making. This process should be unified so as to reach an improved result based on the proposed optimization criteria. The stages of the plate-based manufacturing process are analysed throughout the work: initial stages such as the definition of the parts to be produced, intermediate stages such as the part-layout optimization (nesting) and its parametrization, and the final stage, obtaining the NC code for the production of the parts. To optimize these steps, a decision support system with hybrid characteristics is proposed, formed by an expert system and optimization techniques such as metaheuristics. The proposed method improves the results through automated parametrization using the decision support system, defining the best parameters on the basis of the products to be manufactured, thereby reducing the need for manual decisions and hence the user's interference in the process, and eliminating the need for the user to be an expert. This automation analyses the plates available in stock as well as the parameters offered by the nesting process, and searches for the best configuration by examining possible permutations. In addition, a new component is proposed for the manufacturing workflow, responsible for analysing the usable remnants of the process and organizing the scrap it generates, making it available for future reuse. With the implemented decision support system, the results obtained were satisfactory and, in many cases, superior to those reported in other tests in the literature. The system's acceptance by the users who carried out the performance and usability tests was considered excellent; as they pointed out, the number of parameters to be selected was drastically reduced, making the system much simpler to use.
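As a hedged illustration of the stock-selection decision being automated, the sketch below assigns parts to stock plates by area with a first-fit-decreasing rule. Real nesting is a 2-D irregular packing problem, and all plate sizes and part areas here are invented.

# Greedy, area-based sketch of stock-plate selection for a nesting job.
# Assumes every part fits on at least one unused plate; geometry is ignored.
from dataclasses import dataclass, field

@dataclass
class Plate:
    name: str
    area: float                                   # usable area of the stock plate
    remaining: float = field(init=False)
    parts: list = field(default_factory=list)
    def __post_init__(self):
        self.remaining = self.area

def assign_parts(part_areas, stock):
    plates_used = []
    for area in sorted(part_areas, reverse=True):              # largest parts first
        target = next((p for p in plates_used if p.remaining >= area), None)
        if target is None:                                     # open the smallest plate that fits
            candidates = [p for p in stock if p.area >= area and p not in plates_used]
            target = min(candidates, key=lambda p: p.area)
            plates_used.append(target)
        target.parts.append(area)
        target.remaining -= area
    return plates_used

stock = [Plate("small", 2.0), Plate("medium", 4.0), Plate("large", 8.0)]
parts = [1.2, 0.8, 0.5, 2.5, 0.9, 0.4]            # hypothetical part areas (m^2)
for p in assign_parts(parts, stock):
    print(p.name, "scrap:", round(p.remaining, 2), "parts:", p.parts)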
