51 |
Prospecção de componentes bioativos em resíduos do processamento do pescado visando a sustentabilidade da cadeia produtiva / Prospecting of bioactive components in fish processing waste for the sustainability of the production chain. Anbe, Lika, 13 October 2011 (has links)
O resíduo gerado nas indústrias de processamento de pescado representa sérios problemas de poluição ambiental pela falta de destino adequado a este material. As espécies que alcançaram melhor rendimento produzem cerca de 30 a 40% da fração comestível na forma de filés. O ideal seria utilizar a matéria-prima em toda a sua extensão para obtenção de co-produtos, evitando a própria formação do resíduo. Elaborou-se a silagem ácida de pescado através do resíduo do processamento de sardinha (Sardinella brasiliensis) como forma de aproveitamento integral da matéria-prima, constituído por brânquias, vísceras, cabeças, escamas, espinhas dorsais e descartes de tecido muscular, incentivando a sustentabilidade desde a escolha do ácido e o aproveitamento de resíduos. A utilização do ácido cítrico (T1) como agente acidificador apresentou bons resultados em relação à mistura fórmico:propiônico (T2). Avaliou-se a estabilização das silagens, a aplicação da centrifugação (modelo 5810R, Eppendorf), sob rotação de 4840 x g; 0 ºC; 20min para obtenção das frações e o rendimento de cada parcela. Para T1 foram obtidos 17,1% de fração lipídica, 27,2% de fração aquosa (F1) e 55,7% de fração sedimentada. Para T2 foram obtidos 15,1% de fração lipídica, 31,8% de fração aquosa (F2) e 53,1% de fração sedimentada. A silagem ao ser fracionada se torna uma alternativa tecnológica com possível utilização em diferentes áreas de atuação, pois em sua porção aquosa (F), há presença de todos os aminoácidos essenciais. O aminoácido em maior concentração foi o ácido glutâmico em T1 e F1, sendo 12,3 e 11,53 g/100g de proteína, respectivamente. Para T2 o maior valor encontrado foi para glicina, da ordem de 11,94 g/100g de proteína; e para F2 o ácido glutâmico, 11,25 g/100g de proteína. Os resultados indicaram a possibilidade das frações aquosas serem empregadas como peptonas, devido aos teores de aminoácidos existentes serem semelhantes e/ou superiores aos presentes em produtos comerciais. Buscou-se quantificar o resíduo gerado em um dia de processamento em uma unidade de beneficiamento de tilápias (Oreochromis niloticus), bem como verificar o custo para o possível aproveitamento deste. Obteve-se 61,15% de resíduo, sendo que, 28,23%; 17,12%; 7,97% e 7,83% eram constituídos de carcaças, cabeças, vísceras e peles. Sugeriu-se para a unidade de processamento, o encaminhamento dos resíduos para produção de co-produtos, como forma de aumentar a sustentabilidade sócio-econômica e ambiental da unidade de processamento. / The waste generated in the fish processing industries poses serious environmental pollution problems due to the lack of a suitable destination for this material. The species with the best yields produce only about 30 to 40% of edible fraction in the form of fillets. The ideal would be to use the raw material in its entirety to obtain co-products, avoiding the very formation of the residue. We developed acid fish silage from sardine (Sardinella brasiliensis) processing waste, consisting of gills, viscera, heads, scales, backbones and discarded muscle tissue, as a way to use the raw material in full, encouraging sustainability from the choice of acid through to waste recovery. The use of citric acid (T1) as acidifying agent showed good results compared with the formic:propionic mixture (T2). We evaluated the stabilization of the silages and the application of centrifugation (model 5810R, Eppendorf) at 4840 x g, 0 ºC, for 20 min to obtain the fractions and the yield of each portion.
Once fractionated, the silage becomes a technological alternative with potential use in different areas, since its aqueous portion (F) contains all the essential amino acids. For T1, 17.1% of lipid fraction, 27.2% of aqueous fraction (F1) and 55.7% of sedimented fraction were obtained. For T2, 15.1% of lipid fraction, 31.8% of aqueous fraction (F2) and 53.1% of sedimented fraction were obtained. The amino acid present in the highest concentration was glutamic acid in both T1 and F1, at 12.30 and 11.53 g/100 g of protein, respectively. For T2 the highest value was found for glycine, on the order of 11.94 g/100 g of protein; for F2 it was glutamic acid, at 11.25 g/100 g of protein. The results indicate that the aqueous fractions could be employed as peptones, since their amino acid contents are similar to and/or greater than those present in commercial products. We also sought to quantify the waste generated in one day of processing at a tilapia (Oreochromis niloticus) processing unit and to estimate the cost of its possible recovery. The waste amounted to 61.15% of the raw material, of which 28.23%, 17.12%, 7.97% and 7.83% consisted of carcasses, heads, viscera and skins, respectively. It was suggested that the processing unit route its waste to the production of co-products as a way to increase its socio-economic and environmental sustainability.
|
52 |
Computação paralela em GPU para resolução de sistemas de equações algébricas resultantes da aplicação do método de elementos finitos em eletromagnetismo. / Parallel computing on GPU for solving systems of algebraic equations resulting from application of finite element method in electromagnetism. Ana Flávia Peixoto de Camargos, 04 August 2014 (has links)
Este trabalho apresenta a aplicação de técnicas de processamento paralelo na resolução de equações algébricas oriundas do Método de Elementos Finitos aplicado ao Eletromagnetismo, nos regimes estático e harmônico. As técnicas de programação paralelas utilizadas foram OpenMP, CUDA e GPUDirect, sendo esta última para as plataformas do tipo Multi-GPU. Os métodos iterativos abordados incluem aqueles do subespaço Krylov: Gradientes Conjugados, Gradientes Biconjugados, Conjugado Residual, Gradientes Biconjugados Estabilizados, Gradientes Conjugados para equações normais (CGNE e CGNR) e Gradientes Conjugados ao Quadrado. Todas as implementações fizeram uso das bibliotecas CUSP, CUSPARSE e CUBLAS. Para problemas estáticos, os seguintes pré-condicionadores foram adotados, todos eles com implementações paralelizadas e executadas na GPU: Decomposições Incompletas LU e de Cholesky, Multigrid Algébrico, Diagonal e Inversa Aproximada. Para os problemas harmônicos, apenas os dois primeiros pré-condicionadores foram utilizados, porém na sua versão sequencial, com execução na CPU, resultando em uma implementação híbrida CPU-GPU. As ferramentas computacionais desenvolvidas foram testadas na simulação de problemas de aterramento elétrico. No caso do regime harmônico, em que o fenômeno é regido pela Equação de Onda completa com perdas e não homogênea, a formulação adotada foi aquela em dois potenciais, A-V aresta-nodal. Em todas as situações, os aplicativos desenvolvidos para GPU apresentaram speedups apreciáveis, demonstrando a potencialidade dessa tecnologia para a simulação de problemas de larga escala na Engenharia Elétrica, com excelente relação custo-benefício. / This work presents the use of parallel processing techniques on Graphics Processing Units (GPU) for the solution of algebraic equations arising from the Finite Element modeling of electromagnetic phenomena, both in the steady-state and the time-harmonic regime. The parallel programming techniques used were OpenMP, CUDA and GPUDirect, the latter for multi-GPU platforms. The iterative methods discussed include those of the Krylov subspace: Conjugate Gradients, Bi-conjugate Gradients, Conjugate Residual, Bi-conjugate Gradients Stabilized, Conjugate Gradients for Normal Equations (CGNE and CGNR) and Conjugate Gradients Squared. All implementations made use of the CUSP, CUSPARSE and CUBLAS libraries. For the static problems, the following preconditioners were adopted, all with parallelized implementations executed on the GPU: incomplete decompositions, both LU and Cholesky, Algebraic Multigrid, Diagonal and Approximate Inverse. For the time-harmonic problems, only the first two preconditioners were used, but in their sequential version running on the CPU, which yielded a hybrid CPU-GPU implementation. The developed computational tools were tested in the simulation of electrical grounding systems. In the case of the harmonic regime, in which the phenomenon is governed by the driven, lossy wave equation, the formulation adopted was the two-potential, ungauged edge-nodal A-V formulation. In all cases, the developed GPU-based tools exhibited considerable speedups, showing that this is a promising technology for the simulation of large-scale Electrical Engineering problems with an excellent cost-benefit ratio.
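As a rough illustration of the kind of Krylov solver listed above, the sketch below implements a Jacobi (diagonal) preconditioned conjugate gradient in plain Python/NumPy on the CPU. It is not the thesis implementation, which runs these methods on the GPU through the CUSP, CUSPARSE and CUBLAS libraries; the tridiagonal test matrix and tolerance are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=1000):
    """Jacobi-preconditioned CG for a symmetric positive-definite CSR matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply the diagonal (Jacobi) preconditioner
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Small SPD test system standing in for an FEM stiffness matrix (illustrative).
n = 100
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = csr_matrix(diags([off, main, off], [-1, 0, 1]))
b = np.ones(n)
x, iters = pcg(A, b, M_inv_diag=1.0 / A.diagonal())
print(iters, np.linalg.norm(A @ x - b))
```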
|
53 |
Navegação de robôs móveis utilizando visão estéreo / Mobile robot navigation using stereo vision. Caio César Teodoro Mendes, 26 April 2012 (has links)
Navegação autônoma é um tópico abrangente cuja atenção por parte da comunidade de robôs móveis vem aumentando ao longo dos anos. O problema consiste em guiar um robô de forma inteligente por um determinado percurso sem ajuda humana. Esta dissertação apresenta um sistema de navegação para ambientes abertos baseado em visão estéreo. Uma câmera estéreo é utilizada na captação de imagens do ambiente e, utilizando o mapa de disparidades gerado por um método estéreo semi-global, dois métodos de detecção de obstáculos são utilizados para segmentar as imagens em regiões navegáveis e não navegáveis. Posteriormente esta classificação é utilizada em conjunto com um método de desvio de obstáculos, resultando em um sistema completo de navegação autônoma. Os resultados obtidos por esta dissertação incluem a avaliação de dois métodos estéreo, esta sendo favorável ao método estéreo empregado (semi-global). Foram feitos testes visando avaliar a qualidade e custo computacional de dois métodos para detecção de obstáculos, um baseado em plano e outro baseado em cone. Tais testes deixaram claras as limitações de ambos os métodos e levaram a uma implementação paralela do método baseado em cone. Utilizando uma unidade de processamento gráfico, a versão paralelizada do método baseado em cone atingiu um ganho no tempo computacional de aproximadamente dez vezes. Por fim, os resultados demonstram o sistema completo em funcionamento, onde a plataforma robótica utilizada, um veículo elétrico, foi capaz de desviar de pessoas e cones alcançando seu objetivo seguramente. / Autonomous navigation is a broad topic that has received increasing attention from the mobile robotics community over the years. The problem consists of intelligently guiding a robot along a given route without human assistance. This dissertation presents a navigation system for open environments based on stereo vision. A stereo camera is used to capture images of the environment and, based on the disparity map generated by a semi-global stereo method, two obstacle detection methods are used to segment the images into navigable and non-navigable regions. Subsequently, this classification is employed in conjunction with an obstacle avoidance method, resulting in a complete autonomous navigation system. The results include an evaluation of two stereo methods, which favored the employed (semi-global) method. Tests were performed to evaluate the quality and computational cost of two methods for obstacle detection, one plane-based and one cone-based. These tests made clear the limitations of both methods and led to a parallel implementation of the cone-based method. Using a graphics processing unit, the parallel version of the cone-based method achieved a gain in computational time of approximately ten times. Finally, the results demonstrate the complete system in operation, where the robotic platform used, an electric vehicle, was able to avoid people and cones, reaching its goal safely.
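As a sketch of the plane-based obstacle detection idea mentioned above (not the dissertation's implementation, which works on disparity maps from the stereo camera and runs the cone-based variant on the GPU), the fragment below fits a ground plane to a 3D point cloud with RANSAC and labels points lying well above it as non-navigable. The synthetic scene and thresholds are illustrative assumptions.

```python
import numpy as np

def fit_ground_plane_ransac(points, n_iters=200, inlier_tol=0.05, seed=None):
    """Fit a plane n·p + d = 0 to 3D points via RANSAC; returns (n, d)."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_count = None, None, -1
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                          # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        dist = np.abs(points @ n + d)
        count = int((dist < inlier_tol).sum())
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d

def label_obstacles(points, n, d, height_tol=0.15):
    """Points farther than height_tol above the plane are flagged as obstacles."""
    signed = points @ n + d
    signed = signed if n[2] >= 0 else -signed  # make "up" positive (assumes z-up)
    return signed > height_tol                 # True = obstacle, False = navigable

# Synthetic scene: flat ground plus a box-shaped obstacle (illustrative data).
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(-5, 5, (500, 2)), rng.normal(0, 0.01, 500)]
box = np.c_[rng.uniform(1, 2, (100, 2)), rng.uniform(0.3, 0.8, 100)]
pts = np.vstack([ground, box])
n, d = fit_ground_plane_ransac(pts)
print("obstacle points:", int(label_obstacles(pts, n, d).sum()))
```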
|
54 |
Proton Computed Tomography: Matrix Data Generation Through General Purpose Graphics Processing Unit Reconstruction. Witt, Micah, 01 March 2014 (has links)
Proton computed tomography (pCT) is an imaging modality that will improve treatment planning for patients receiving proton radiation therapy compared with current techniques, which are based on X-ray CT. Images are reconstructed in pCT by solving a large and sparse system of linear equations. The size of the system necessitates matrix partitioning and parallel reconstruction algorithms implemented across a cluster computing architecture. The prototypical algorithm to solve the pCT system is the algebraic reconstruction technique (ART), which has been modified into parallel versions called block-iterative-projection (BIP) methods and string-averaging-projection (SAP) methods. General purpose graphics processing units (GPGPUs) have hundreds of stream processors for massively parallel calculations. A GPGPU cluster is a set of nodes, with each node containing a set of GPGPUs. This thesis describes a proton simulator that was developed to generate realistic pCT data sets. Simulated data sets were used to compare the performance of a BIP implementation against a SAP implementation on a single GPGPU, with the data stored in a sparse matrix structure called the compressed sparse row (CSR) format. Both BIP and SAP algorithms allow for parallel computation by creating row partitions of the pCT linear system. The difference between these two general classes of algorithms is that BIP permits parallel computations within the row partitions yet sequential computations between the row partitions, whereas SAP permits parallel computations between the row partitions yet sequential computations within the row partitions. This thesis also introduces a general partitioning scheme to be applied to a GPGPU cluster to achieve a pure parallel ART algorithm while providing a framework for column partitioning of the pCT system, as well as showing sparsity patterns that can be revealed by specific orderings of the equations within the matrix.
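A minimal NumPy/SciPy sketch of the row-partitioned iterations described above, written for readability rather than for a GPGPU cluster: the BIP-style sweep averages Kaczmarz (ART) projections inside each row block and applies the blocks sequentially, while the SAP-style sweep runs the blocks as independent strings of sequential projections and averages their results. The toy system, block count and relaxation are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def kaczmarz_update(A, b, x, i, lam=1.0):
    """Relaxed projection of x onto the hyperplane of equation i (CSR row i)."""
    ai = A.getrow(i).toarray().ravel()
    denom = ai @ ai
    if denom == 0.0:
        return x
    return x + lam * (b[i] - ai @ x) / denom * ai

def bip_sweep(A, b, x, blocks, lam=1.0):
    """BIP: projections inside a block are independent (parallelizable),
    blocks are applied one after another (sequential between blocks)."""
    for block in blocks:
        updates = [kaczmarz_update(A, b, x, i, lam) - x for i in block]
        x = x + sum(updates) / len(block)     # average of in-block projections
    return x

def sap_sweep(A, b, x, blocks, lam=1.0):
    """SAP: each block ('string') is processed sequentially on its own copy
    of x (strings are independent, hence parallelizable), then averaged."""
    results = []
    for block in blocks:
        y = x.copy()
        for i in block:                       # sequential inside the string
            y = kaczmarz_update(A, b, y, i, lam)
        results.append(y)
    return sum(results) / len(results)

# Toy sparse consistent system standing in for the pCT matrix (illustrative).
rng = np.random.default_rng(1)
m, n = 200, 50
A = sparse_random(m, n, density=0.1, format="csr", random_state=1)
x_true = rng.normal(size=n)
b = A @ x_true
blocks = np.array_split(np.arange(m), 8)      # row partition into 8 blocks
x = np.zeros(n)
for _ in range(200):
    x = sap_sweep(A, b, x, blocks)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```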
|
55 |
Métaheuristiques hybrides distribuées et massivement parallèles / Distributed and massively parallel hybrid metaheuristics. Abdelkafi, Omar, 07 November 2016 (has links)
De nombreux problèmes d'optimisation propres à différents secteurs industriels et académiques (énergie, chimie, transport, etc.) nécessitent de concevoir des méthodes de plus en plus efficaces pour les résoudre. Afin de répondre à ces besoins, l'objectif de cette thèse est de développer une bibliothèque composée de plusieurs métaheuristiques hybrides distribuées et massivement parallèles. Dans un premier temps, nous avons étudié le problème du voyageur de commerce et sa résolution par la méthode colonie de fourmis afin de mettre en place les techniques d'hybridation et de parallélisation. Ensuite, deux autres problèmes d'optimisation ont été traités, à savoir, le problème d'affectation quadratique (QAP) et le problème de la résolution structurale des zéolithes (ZSP). Pour le QAP, plusieurs variantes basées sur une recherche taboue itérative avec des diversifications adaptatives ont été proposées. Le but de ces propositions est d'étudier l'impact de : l'échange des données, des stratégies de diversification et des méthodes de coopération. Notre meilleure variante est comparée à six des meilleurs travaux de la littérature. En ce qui concerne le ZSP, deux nouvelles formulations de la fonction objective sont proposées pour évaluer le potentiel des structures zéolitiques trouvées. Ces formulations sont basées sur le principe de pénalisation et de récompense. Deux algorithmes génétiques hybrides et parallèles sont proposés pour générer des structures zéolitiques stables. Nos algorithmes ont généré actuellement six topologies stables, parmi lesquelles trois ne sont pas répertoriées sur le site Web du SC-IZA ou dans l'Atlas of Prospective Zeolite Structures. / Many optimization problems specific to different industrial and academic sectors (energy, chemistry, transportation, etc.) require the development of increasingly effective methods to solve them. To meet these needs, the aim of this thesis is to develop a library of several distributed and massively parallel hybrid metaheuristics. First, we studied the traveling salesman problem and its resolution by the ant colony method to establish the hybridization and parallelization techniques. Two other optimization problems were then addressed, namely the quadratic assignment problem (QAP) and the zeolite structure problem (ZSP). For the QAP, several variants based on an iterative tabu search with adaptive diversification have been proposed. The aim of these proposals is to study the impact of data exchange, diversification strategies and cooperation methods. Our best variant is compared with six of the leading works in the literature. For the ZSP, two new formulations of the objective function are proposed to evaluate the potential of the zeolite structures found. These formulations are based on a penalty and reward principle. Two hybrid and parallel genetic algorithms are proposed to generate stable zeolite structures. Our algorithms have so far generated six stable topologies, three of which are not listed on the SC-IZA website or in the Atlas of Prospective Zeolite Structures.
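The sketch below is a plain, single-process tabu search for the QAP with a crude restart-style diversification; it reflects the iterated tabu scheme mentioned above only in spirit and carries none of the distributed, massively parallel machinery of the thesis. The random instance, tabu tenure and iteration budget are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def qap_cost(F, D, p):
    """Cost of assignment p: facility i is placed at location p[i]."""
    return float((F * D[np.ix_(p, p)]).sum())

def tabu_search_qap(F, D, n_iters=500, tenure=15, diversify_every=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(F)
    p = rng.permutation(n)
    best_p, best_cost = p.copy(), qap_cost(F, D, p)
    tabu = {}                                  # (i, j) -> iteration until which it is tabu
    for it in range(n_iters):
        # Simple diversification: reshuffle part of the permutation periodically.
        if it > 0 and it % diversify_every == 0:
            idx = rng.choice(n, size=max(2, n // 3), replace=False)
            p[idx] = rng.permutation(p[idx])
        # Best admissible swap in the 2-exchange neighborhood.
        best_move, best_move_cost = None, np.inf
        for i, j in combinations(range(n), 2):
            q = p.copy()
            q[i], q[j] = q[j], q[i]
            c = qap_cost(F, D, q)
            is_tabu = tabu.get((i, j), -1) >= it
            admissible = (not is_tabu) or c < best_cost   # aspiration criterion
            if admissible and c < best_move_cost:
                best_move, best_move_cost = (i, j), c
        if best_move is None:
            continue
        i, j = best_move
        p[i], p[j] = p[j], p[i]
        tabu[(i, j)] = it + tenure
        if best_move_cost < best_cost:
            best_p, best_cost = p.copy(), best_move_cost
    return best_p, best_cost

# Random symmetric instance standing in for a QAPLIB problem (illustrative).
rng = np.random.default_rng(3)
n = 12
F = rng.integers(0, 10, (n, n)); F = (F + F.T) // 2; np.fill_diagonal(F, 0)
D = rng.integers(1, 10, (n, n)); D = (D + D.T) // 2; np.fill_diagonal(D, 0)
p, c = tabu_search_qap(F, D)
print("best cost:", c)
```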
|
56 |
Modélisation multi-échelles et calculs parallèles appliqués à la simulation de l'activité neuronale / Multiscale modeling and parallel computations applied to the simulation of neuronal activity. Bedez, Mathieu, 18 December 2015 (has links)
Les neurosciences computationnelles ont permis de développer des outils mathématiques et informatiques permettant la création, puis la simulation de modèles représentant le comportement de certaines composantes de notre cerveau à l'échelle cellulaire. Ces derniers sont utiles dans la compréhension des interactions physiques et biochimiques entre les différents neurones, au lieu d'une reproduction fidèle des différentes fonctions cognitives comme dans les travaux sur l'intelligence artificielle. La construction de modèles décrivant le cerveau dans sa globalité, en utilisant une homogénéisation des données microscopiques est plus récent, car il faut prendre en compte la complexité géométrique des différentes structures constituant le cerveau. Il y a donc un long travail de reconstitution à effectuer pour parvenir à des simulations. D'un point de vue mathématique, les différents modèles sont décrits à l'aide de systèmes d'équations différentielles ordinaires, et d'équations aux dérivées partielles. Le problème majeur de ces simulations vient du fait que le temps de résolution peut devenir très important, lorsque des précisions importantes sur les solutions sont requises sur les échelles temporelles mais également spatiales. L'objet de cette étude est d'étudier les différents modèles décrivant l'activité électrique du cerveau, en utilisant des techniques innovantes de parallélisation des calculs, permettant ainsi de gagner du temps, tout en obtenant des résultats très précis. Quatre axes majeurs permettront de répondre à cette problématique : description des modèles, explication des outils de parallélisation, applications sur deux modèles macroscopiques. / Computational neuroscience has helped develop mathematical and computational tools for creating, and then simulating, models that represent the behavior of certain components of our brain at the cellular level. These are useful for understanding the physical and biochemical interactions between neurons, rather than for faithfully reproducing various cognitive functions as in work on artificial intelligence. The construction of models describing the brain as a whole, using a homogenization of microscopic data, is more recent, because the geometric complexity of the various structures composing the brain must be taken into account. A long reconstruction process is therefore required before simulations can be run. From a mathematical point of view, the various models are described using systems of ordinary differential equations and partial differential equations. The major problem with these simulations is that the solution time can become very long when high precision is required on both the temporal and spatial scales. The purpose of this study is to investigate the various models describing the electrical activity of the brain, using innovative parallel computing techniques, thereby saving time while obtaining highly accurate results. Four major axes address this issue: a description of the models, an explanation of the parallelization tools, and applications to two macroscopic models.
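To make the "systems of ordinary differential equations" concrete, here is a generic FitzHugh-Nagumo network integrated with an explicit Euler scheme and vectorized over neurons, the kind of identical per-cell update that parallelizes naturally. It is not one of the thesis's multiscale models; the coupling matrix and parameters are arbitrary illustrative choices.

```python
import numpy as np

def simulate_fhn_network(W, I_ext, T=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08):
    """Explicit-Euler integration of a FitzHugh-Nagumo network.
    W[i, j] is the coupling weight from neuron j to neuron i."""
    n = W.shape[0]
    v = -1.0 * np.ones(n)               # membrane-potential-like variable
    w = 1.0 * np.ones(n)                # recovery variable
    row_sum = W.sum(axis=1)
    n_steps = int(T / dt)
    trace = np.empty((n_steps, n))
    for k in range(n_steps):
        coupling = W @ v - row_sum * v  # diffusive coupling: sum_j W_ij * (v_j - v_i)
        dv = v - v**3 / 3.0 - w + I_ext + coupling
        dw = eps * (v + a - b * w)
        v = v + dt * dv                 # identical, independent per-neuron updates:
        w = w + dt * dw                 # this structure maps naturally onto a GPU
        trace[k] = v
    return trace

# Arbitrary random coupling and drive (illustrative only).
rng = np.random.default_rng(0)
n = 50
W = 0.05 * rng.random((n, n))
np.fill_diagonal(W, 0.0)
I_ext = rng.uniform(0.3, 0.6, n)        # constant external drive per neuron
trace = simulate_fhn_network(W, I_ext)
print(trace.shape, float(trace[-1].mean()))
```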
|
57 |
Fluid Mechanics of Vertical Axis Turbines: Simulations and Model Development. Goude, Anders, January 2012 (has links)
Two computationally fast fluid mechanical models for vertical axis turbines are the streamtube and the vortex model. The streamtube model is the fastest, allowing three-dimensional modeling of the turbine, but lacks a proper time-dependent description of the flow through the turbine. The vortex model used is two-dimensional, but gives a more complete time-dependent description of the flow. Effects of a velocity profile and the inclusion of struts have been investigated with the streamtube model. Simulations with an inhomogeneous velocity profile predict that the power coefficient of a vertical axis turbine is relatively insensitive to the velocity profile. For the struts, structural mechanical loads have been computed and the calculations show that if turbines are designed for high flow velocities, additional struts are required, reducing the efficiency for lower flow velocities.

Turbines in channels and turbine arrays have been studied with the vortex model. The channel study shows that smaller channels give higher power coefficients and convergence is obtained in fewer time steps. Simulations on a turbine array were performed on five turbines in a row and in a zigzag configuration, where better performance is predicted for the row configuration. The row configuration was extended to ten turbines and it has been shown that the turbine spacing needs to be increased if the misalignment in flow direction is large.

A control system for the turbine with only the rotational velocity as input has been studied using the vortex model coupled with an electrical model. According to simulations, this system can obtain power coefficients close to the theoretical peak values. This control system study has been extended to a turbine farm. Individual control of each turbine has been compared to a less costly control system where all turbines are connected to a mutual DC bus through passive rectifiers. The individual control performs best for aerodynamically independent turbines, but for aerodynamically coupled turbines, the results show that a mutual DC bus can be a viable option.

Finally, an implementation of the fast multipole method has been made on a graphics processing unit (GPU) and the performance gain from this platform is demonstrated.
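The abstract does not spell out the control law; one common way to build a controller that needs only the rotational velocity is the k·ω² optimal-torque law, sketched below on a toy one-degree-of-freedom rotor with an assumed bell-shaped Cp(λ) curve standing in for the vortex-model aerodynamics. All parameter values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def optimal_torque_gain(rho, area, radius, cp_max, lambda_opt):
    """Gain of the k*omega^2 torque law; at equilibrium the rotor settles at
    the tip-speed ratio lambda_opt where the power coefficient Cp peaks."""
    return 0.5 * rho * area * radius**3 * cp_max / lambda_opt**3

def cp_curve(lam, cp_max=0.35, lambda_opt=3.5):
    """Illustrative bell-shaped Cp(lambda), a stand-in for real aerodynamics."""
    return np.maximum(cp_max * (1.0 - ((lam - lambda_opt) / 2.0) ** 2), 0.0)

def simulate(U=1.5, rho=1000.0, area=6.0, radius=1.5, J=500.0,
             cp_max=0.35, lambda_opt=3.5, T=60.0, dt=0.01):
    """One-degree-of-freedom rotor: J * domega/dt = T_aero - T_gen."""
    k = optimal_torque_gain(rho, area, radius, cp_max, lambda_opt)
    omega = 3.0                                   # initial rotor speed [rad/s]
    for _ in range(int(T / dt)):
        lam = omega * radius / U
        p_aero = 0.5 * rho * area * cp_curve(lam, cp_max, lambda_opt) * U**3
        t_aero = p_aero / omega
        t_gen = k * omega**2                      # control law: only omega is needed
        omega += dt * (t_aero - t_gen) / J
    lam = omega * radius / U
    return lam, cp_curve(lam, cp_max, lambda_opt)

lam_final, cp_final = simulate()
print(f"settled tip-speed ratio {lam_final:.2f}, Cp {cp_final:.3f}")
```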
|
58 |
Applying Contact Angle to a Two-dimensional Smoothed Particle Hydrodynamics (SPH) model on a Graphics Processing Unit (GPU) Platform. Farrokhpanah, Amirsaman, 22 November 2012 (has links)
A parallel GPU compatible Lagrangian mesh free particle solver for multiphase fluid flow based on SPH scheme is developed and used to capture the interface evolution during droplet impact. Surface tension is modeled employing the multiphase scheme of Hu et al. (2006). In order to precisely simulate the wetting phenomena, a method based on the work of Šikalo et al. (2005) is jointly used with the model proposed by Afkhami et al. (2009) to ensure accurate dynamic contact angle calculations. Accurate predictions were obtained for droplet contact angle during spreading.
A two-dimensional analytical model is developed as an extension of the work of Chandra et al. (1991). Results obtained from the solver agree well with these analytical results.
Effects of memory management techniques, along with a variety of task-assignment algorithms on the GPU, are studied. GPU speedups of up to 120 times over a single-processor CPU were obtained.
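As a generic, CPU-only illustration of the basic SPH machinery named above (not the thesis's GPU multiphase solver), the following computes SPH densities with a standard 2D cubic spline kernel; the particle spacing, smoothing length and brute-force neighbor search are illustrative simplifications.

```python
import numpy as np

def cubic_spline_2d(r, h):
    """Standard 2D cubic spline smoothing kernel W(r, h)."""
    sigma = 10.0 / (7.0 * np.pi * h**2)
    q = r / h
    w = np.zeros_like(q)
    m1 = q < 1.0
    m2 = (q >= 1.0) & (q < 2.0)
    w[m1] = sigma * (1.0 - 1.5 * q[m1]**2 + 0.75 * q[m1]**3)
    w[m2] = sigma * 0.25 * (2.0 - q[m2])**3
    return w

def sph_density(positions, masses, h):
    """Density by the SPH summation rho_i = sum_j m_j W(|r_i - r_j|, h).
    Brute-force O(N^2) neighbor search, fine for a small demo."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_2d(dist, h)).sum(axis=1)

# Particles on a uniform lattice of a fluid with nominal density 1000 kg/m^3
# (illustrative setup).
dx = 0.01
xs, ys = np.meshgrid(np.arange(0, 0.2, dx), np.arange(0, 0.2, dx))
pos = np.c_[xs.ravel(), ys.ravel()]
m = np.full(len(pos), 1000.0 * dx * dx)       # mass per particle
rho = sph_density(pos, m, h=1.3 * dx)
print(f"max density {rho.max():.1f}, min (lattice edges) {rho.min():.1f}")
```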
|
59 |
Parallel Sorting on the Heterogeneous AMD Fusion Accelerated Processing Unit. Delorme, Michael Christopher, 18 March 2013 (has links)
We explore efficient parallel radix sort for the AMD Fusion Accelerated Processing Unit (APU). Two challenges arise: efficiently partitioning data between the CPU and GPU, and allocating data among memory regions. Our coarse-grained implementation utilizes both the GPU and CPU by sharing data at the beginning and end of the sort. Our fine-grained implementation utilizes the APU’s integrated memory system to share data throughout the sort. Both these implementations outperform the current state-of-the-art GPU radix sort from NVIDIA. We therefore demonstrate that the CPU can be efficiently used to speed up radix sort on the APU.
Our fine-grained implementation slightly outperforms our coarse-grained implementation. This demonstrates the benefit of the APU’s integrated architecture. This performance benefit is hindered by limitations in the APU’s architecture and programming model. We believe that the performance benefits will increase once these limitations are addressed in future generations of the APU.
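The thesis abstract describes the CPU/GPU split rather than the algorithm itself; for reference, the serial skeleton of least-significant-digit radix sort is sketched below, since its per-pass histogram, prefix-sum and stable-scatter steps are exactly what GPU implementations parallelize. The 8-bit digit width is an illustrative choice.

```python
import random

def lsd_radix_sort(keys, digit_bits=8, key_bits=32):
    """Least-significant-digit radix sort of unsigned integers.
    Each pass: histogram of the current digit, exclusive prefix sum,
    then a stable scatter (the three steps a GPU version parallelizes)."""
    radix = 1 << digit_bits
    mask = radix - 1
    out = [0] * len(keys)
    for shift in range(0, key_bits, digit_bits):
        # 1) Histogram of digit occurrences.
        counts = [0] * radix
        for k in keys:
            counts[(k >> shift) & mask] += 1
        # 2) Exclusive prefix sum -> starting offset of each digit bucket.
        offsets = [0] * radix
        for d in range(1, radix):
            offsets[d] = offsets[d - 1] + counts[d - 1]
        # 3) Stable scatter into the output buffer.
        for k in keys:
            d = (k >> shift) & mask
            out[offsets[d]] = k
            offsets[d] += 1
        keys, out = out, keys       # ping-pong buffers between passes
    return keys

data = [random.getrandbits(32) for _ in range(10_000)]
assert lsd_radix_sort(list(data)) == sorted(data)
print("sorted", len(data), "keys")
```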
|