531 |
Fracionamento de carboidratos e proteínas e a predição da proteína bruta e suas frações e das fibras em detergentes neutro e ácido de Brachiaria brizantha cv. Marandu por uma rede neural artificial / Fractions of carbohydrates and proteins and prediction of the crude protein and its fractions and of the neutral and acid detergent fibres of Brachiaria brizantha cv. Marandu by an artificial neural network
Käthery Brennecke 28 February 2007 (has links)
Numa área experimental de 25,2 ha formada com o capim-braquiarão (Brachiaria brizantha (Hochst) Stapf.) cv. Marandu e localizada no Campus da USP em Pirassununga/SP, durante o período de janeiro a julho de 2004, conduziu-se a presente pesquisa pela Faculdade de Zootecnia e Engenharia de Alimentos (FZEA/USP) com os seguintes objetivos: 1) Determinar as frações de carboidratos (A - açúcares solúveis com rápida degradação ruminal; B1- amido e pectina; B2 - parede celular com taxa de degradação mais lenta; C - fração não digerida) e as frações protéicas (A - NNP; B1 - peptídeos e oligopeptídeos; B2 - proteína verdadeira; B3 - NFDN; C - NIDA) na forragem da gramínea, baseados nas equações utilizadas pelo método de Cornell; 2) Relacionar outras variáveis com as medições em campo de experimentos paralelos e dados de elementos de clima com as frações protéicas e de carboidratos com o auxílio de um modelo computacional baseado em redes neurais artificiais (RNA). O delineamento foi em blocos completos e casualizados, com quatro tratamentos (ofertas de forragem de 5, 10, 15 e 20% - kg de massa seca por 100 kg de peso animal.dia) e quatro repetições. Cada bloco era dividido em quatro unidades experimentais de 1,575 ha, com cinco piquetes de 0,315 ha cada. Os animais eram manejados em cada unidade experimental em lotação rotacionada, com períodos de descanso de 28 dias no verão e 56 dias no inverno e período de ocupação de 7 dias, respectivamente. As amostras eram colhidas 2 dias antes da entrada dos animais à altura do resíduo do pastejo anterior. 
Foram determinadas a produção de massa seca (MS), as alturas de pré e pós-pastejo, as fibras em detergente ácido (FDA) e neutro (FDN), sacarose, amido, lignina, extrato etéreo (EE), carboidratos totais (CHO), carboidratos não estruturais (CNE), as frações A, B1, B2 e C de carboidratos, a proteína bruta (PB) e as frações A, B1, B2, B3 e C de proteínas, além da análise de uma rede neural artificial para a predição dos teores de FDA, FDN, PB e das frações protéicas. A produção de massa seca (MS) foi significativa quando se estudaram os efeitos da oferta de forragem (p<0,05), do ciclo de pastejo (p<0,05) e da interação oferta de forragem x ciclo de pastejo (p<0,05). A maior produção foi no mês de março, quando se alcançou a média de 16.140 kg MS/ha para a oferta de 20%. Os teores de FDA foram significativos quando se estudaram a oferta de forragem e o ciclo de pastejo (p<0,05), com médias de 34,8% no verão e 35,9% no inverno. Os teores médios da fibra em detergente neutro foram de 66,3 e 64,7% no verão e inverno, respectivamente. Houve diferenças significativas para PB quando se estudou a oferta de forragem (p<0,05), sendo seus teores médios de maior valor na OF a 5%. Observou-se aumento dos CNE em lâminas e colmos ao longo das estações do ano, com interação CP x OF (p<0,05), e seus maiores valores foram encontrados no ciclo de pastejo 3 na oferta de forragem 5%. Os teores de CHO totais apresentaram diferenças (p<0,10) em função da oferta de forragem, sendo os maiores teores médios encontrados na oferta de forragem de 20%. As frações A e B2 de CHO foram significativas em função da oferta de forragem (p<0,05), enquanto os maiores teores médios da fração A foram encontrados nos ciclos de pastejo 3 e 4 e os da fração B2 (%CHO) no ciclo de pastejo 1. As frações B2 e C de CHO apresentaram-se diferentes (p<0,05) nos ciclos de pastejo, sendo decrescentes para a fração B1 e crescentes para a fração C. As frações A (47%), B1 (11%) e B3 (10%) de proteínas foram significativas nos ciclos de pastejo. 
Os teores médios da fração B2 de proteínas apresentaram-se semelhantes (p>0,05) e os da fração C de proteínas foram diferentes (p<0,05) nas ofertas de forragem e ciclos de pastejo. Conclui-se que os ciclos de pastejo interferiram em todas as variáveis estudadas e que os teores das frações de proteínas e carboidratos estão dentro da variação (%) encontrada na literatura. A rede neural artificial conseguiu vincular as interações existentes nos dados de campo e estimar os valores laboratoriais dentro dos erros esperados, permitindo desvincular as análises laboratoriais de qualidade da planta forrageira da pesquisa agropecuária e, com isso, obter resultados mais rápidos a um menor custo de pesquisa. / In an experimental area of 25.2 ha established with palisade grass (Brachiaria brizantha (Hochst) Stapf.) cv. Marandu, located on the University of São Paulo campus in Pirassununga/SP, this research was carried out by the Faculdade de Zootecnia e Engenharia de Alimentos (FZEA/USP) from January to July 2004, with the following objectives: 1) to determine the protein fractions (A - NPN; B1 - peptides and oligopeptides; B2 - true protein; B3 - neutral detergent insoluble N; C - acid detergent insoluble N) and the carbohydrate fractions (A - soluble sugars with fast ruminal degradation; B1 - starch and pectin; B2 - cell wall with a slower degradation rate; C - undigested fraction) of the grass herbage, based on the equations of the Cornell model; 2) to relate other variables measured in parallel field experiments, together with climate data, to the protein and carbohydrate fractions using a computational model based on artificial neural networks (ANN). The experiment followed a randomized complete block design with four treatments (herbage allowances of 5, 10, 15 and 20% - kg of dry matter per 100 kg of animal live weight per day) and four replicates. Each block was divided into four experimental units of 1.575 ha, with five paddocks of 0.315 ha each. 
The animals were managed in each experimental unit under rotational stocking, with rest periods of 28 days in summer and 56 days in winter and an occupation period of 7 days. Samples were harvested 2 days before the animals entered, at the height of the residue of the previous grazing. Dry matter (DM) production, pre- and post-grazing heights, acid (ADF) and neutral (NDF) detergent fibres, sucrose, starch, lignin, ether extract (EE), total carbohydrates (CHO), non-structural carbohydrates (NSC), carbohydrate fractions A, B1, B2 and C, crude protein (CP) and protein fractions A, B1, B2, B3 and C were determined, and an artificial neural network was analysed for the prediction of ADF, NDF, CP and the protein fractions. DM production was significant for herbage allowance (p<0.05), grazing cycle (p<0.05) and the allowance x grazing cycle interaction (p<0.05). The highest production occurred in February, at 13,352 kg DM/ha. ADF was significant for allowance and grazing cycle (p<0.05), averaging 34.8% in summer and 35.9% in winter. Mean NDF contents in summer and winter were 66.3 and 64.7%, respectively. CP differed significantly with herbage allowance (p<0.05), with mean contents of 8.3 and 8.1% in summer and winter, respectively. NSC contents in leaf blades and stems increased over the seasons, with a grazing cycle x herbage allowance interaction, and the highest values were found in grazing cycle 3 at the 5% allowance. Total CHO contents differed (p<0.10) with herbage allowance, with the highest mean contents found at the 20% allowance. 
Carbohydrate fractions A and B2 were significant for herbage allowance (p<0.05); the highest mean contents (%CHO) of fraction A were found in grazing cycles 3 and 4 and of fraction B2 in grazing cycle 1. CHO fractions B2 and C differed significantly (p<0.05) across grazing cycles, with contents decreasing for fraction B1 and increasing for fraction C as the grazing cycles advanced. Protein fractions A, B1 and B3 were significant across grazing cycles, with values of 0.47, 0.11 and 0.10, respectively. Fraction B2 was not significant. Fraction C was significant for allowance (p<0.05) and grazing cycle (p<0.05). It was concluded that the grazing cycles affected all the studied variables and that the contents of the protein and carbohydrate fractions fall within the range (%) reported in the literature. The laboratory results were used to train and test a neural network: a multilayer perceptron able to predict nutritional and feeding value parameters from intrinsic and extrinsic forage plant parameters. This makes it possible to decouple laboratory analyses of forage quality from on-farm research, yielding faster results at a lower research cost.
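The prediction step described in this abstract — a multilayer perceptron mapping field and climate variables to laboratory quality measures — can be sketched as follows. This is a minimal illustration on synthetic data, not the thesis's network or dataset: the input variables, their scaling and the target relation are all invented for the example.

```python
import math
import random

random.seed(7)

# Synthetic stand-ins for the field/climate inputs (herbage allowance, grazing
# cycle, a climate proxy) and the laboratory target (a "crude protein" content).
def synth_sample():
    allowance = random.uniform(5, 20)     # % herbage allowance
    cycle = random.uniform(1, 4)          # grazing cycle index
    climate = random.uniform(0, 1)        # normalized climate variable
    cp = 9.0 - 0.08 * allowance + 0.3 * climate - 0.1 * cycle  # invented relation
    return [allowance / 20.0, cycle / 4.0, climate], cp

DATA = [synth_sample() for _ in range(200)]

H = 4                                      # hidden tanh units, linear output
w1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in DATA) / len(DATA)

initial = mse()
lr = 0.05
for _ in range(200):                       # plain stochastic gradient descent
    for x, y in DATA:
        out, h = forward(x)
        err = out - y
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * grad_h
            for i in range(3):
                w1[j][i] -= lr * grad_h * x[i]
        b2 -= lr * err

print(round(mse(), 4))
```

The training loss drops well below the variance of the synthetic target, illustrating how a small perceptron can stand in for a laboratory assay once trained on paired field/lab data.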
|
532 |
Novo método de mapeamento de espaços de cor através de redes neurais artificiais especializadas / New method for mapping color spaces using specialized artificial neural networks
Robson Barcellos 24 August 2011 (has links)
Este trabalho apresenta uma nova metodologia para mapeamento, no espaço de cor colorimétrico CIEXYZ, dos valores de triestímulo obtidos em um espaço de cor não colorimétrico definido pelas curvas de sensibilidade de um sensor eletrônico. A inovação do método proposto é realizar o mapeamento através de três redes neurais artificiais, sendo que cada uma é especializada em mapear cores com um determinado triestímulo dominante. É feita a comparação dos resultados do mapeamento com vários trabalhos publicados sobre mapeamento de um espaço de cor em outro usando diversas técnicas. Os resultados mostram a eficiência do método proposto e permitem sua utilização em equipamentos para medir cores, incrementando sua precisão. / This work presents a new method for mapping a non-colorimetric color space, defined by the sensitivity curves of an electronic color sensor, to the colorimetric color space CIEXYZ. The novelty of the proposed method is to perform the mapping with a set of three artificial neural networks, each one specialized in mapping colors with a specific dominant tristimulus. The results are compared with those obtained in published works on the mapping of color spaces using several methods. The results show that the proposed method is efficient and can be used in color-measuring equipment, improving its precision.
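The routing idea — three networks, each specialized in colors with a given dominant tristimulus — can be sketched as below. The three "experts" here are stand-in 3x3 linear maps, not trained networks; the matrices and the simple argmax selector are assumptions for illustration only.

```python
# Stand-in "experts": 3x3 linear maps in place of the three trained specialized
# ANNs. The matrix entries are invented for illustration.
EXPERTS = {
    0: [[1.10, 0.02, 0.01], [0.05, 0.95, 0.03], [0.00, 0.04, 0.90]],  # red-dominant
    1: [[0.95, 0.05, 0.02], [0.03, 1.05, 0.02], [0.01, 0.06, 0.92]],  # green-dominant
    2: [[0.90, 0.04, 0.05], [0.02, 0.97, 0.05], [0.02, 0.03, 1.08]],  # blue-dominant
}

def dominant(tristimulus):
    """Index of the dominant tristimulus component."""
    return max(range(3), key=lambda i: tristimulus[i])

def map_to_xyz(sensor):
    """Route the sensor triple to the expert specialized in its dominant component."""
    m = EXPERTS[dominant(sensor)]
    return [sum(m[r][c] * sensor[c] for c in range(3)) for r in range(3)]

XYZ = map_to_xyz([1.0, 0.0, 0.0])   # handled by the red-dominant expert
```

The point of the design is that each specialized mapper only ever sees the subregion of color space it was trained on, which is easier to fit accurately than one global map.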
|
533 |
Uma metodologia de binarização para áreas de imagens de cheque utilizando algoritmos de aprendizagem supervisionada
Alves, Rafael Félix 23 June 2015 (has links)
The process of image binarization consists of transforming a color image into a new one with only two colors: black and white. This process is an important step in many modern applications such as check clearance, Optical Character Recognition and Handwriting Recognition. Improvements in the automatic process of image binarization have a direct impact on the applications that rely on this step. The present work proposes a methodology for automatic image binarization. This methodology applies supervised learning algorithms to binarize images and consists of the following steps: image database construction; extraction of the region of interest; patterns matrix construction; pattern labelling; database sampling; and classifier training. Experimental results are presented using a database of Brazilian bank check images and the DIBCO 2009 competition database. In conclusion, the proposal proved superior to some of its competitors in terms of accuracy and F-measure. / O processo de binarização de imagens consiste na transformação de uma imagem colorida em uma nova imagem com apenas duas cores: uma que representa o fundo, outra o objeto de interesse. Este processo é uma importante etapa de diversas aplicações modernas, como a Compensação de Cheque, o Reconhecimento Ótico de Caracteres (do inglês Optical Character Recognition) e o Reconhecimento de Texto Manuscrito (do inglês Handwriting Recognition, HWR). Dado que melhorias no processo automático de binarização de imagens representam impactos diretos nas aplicações que dependem desta etapa, o presente trabalho propõe uma metodologia para realizar a binarização automática de imagens. A proposta realiza a binarização de forma automática com base no uso de algoritmos de aprendizagem supervisionada, tais como redes neurais artificiais e árvores de decisão. 
O processo como um todo consiste nas seguintes etapas: construção do banco de imagens; extração da região de interesse; construção da matriz de padrões; rotulação dos padrões; amostragem da base; e treinamento do classificador. Resultados experimentais são apresentados utilizando uma base de imagens de cheques de bancos brasileiros (CMC-7 e montante de cortesia) e a base de imagens da competição DIBCO 2009. Em conclusão, a metodologia proposta mostrou-se competitiva com os métodos da literatura, destacando-se em aplicações onde o processamento está restrito a uma categoria de imagens, como é o caso das imagens de cheques de bancos brasileiros. A metodologia apresenta resultados experimentais entre as três primeiras posições e melhores resultados em relação à medida F-Measure quando comparada com as demais.
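The pipeline steps above (patterns built from labelled pixels, then a trained classifier) can be sketched with a deliberately simple stand-in classifier: a decision stump on pixel intensity. The thesis uses richer pixel patterns and classifiers such as neural networks and decision trees; this sketch only illustrates the supervised-binarization flow, and the data values are hypothetical.

```python
# Hypothetical minimal example: labels mark "ink" pixels (True) vs background.
def train_stump(pixels, labels):
    """Pick the intensity threshold that misclassifies the fewest labelled pixels."""
    best_t, best_err = 0, len(pixels) + 1
    for t in range(256):
        err = sum((p < t) != lab for p, lab in zip(pixels, labels))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def binarize(image, t):
    """Apply the learned rule: 1 = object of interest (ink), 0 = background."""
    return [[1 if p < t else 0 for p in row] for row in image]

# Train on a few labelled pixels, then binarize a tiny 2x2 "image".
T = train_stump([10, 20, 200, 220], [True, True, False, False])
BIN = binarize([[10, 200], [220, 15]], T)
```

Swapping the stump for a neural network or decision tree, and the raw intensity for a neighbourhood pattern vector, recovers the shape of the methodology described above.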
|
534 |
Estudo de equilíbrio de troca iônica de sistemas binários e ternários por meio de redes neurais / Ion exchange equilibrium of the binary and ternary systems using neural network and mass action law
Zanella Junior, Eliseu Avelino 13 February 2009 (has links)
In most applications of ion exchange in the chemical industry, several ionic species are present that compete with each other for the active sites of the ion exchanger. The design of these systems therefore requires an analysis of the selectivity coefficients of the ions in solution, which determine the separation process. Ion exchange equilibrium data are generally described by the Mass Action Law, since this approach accounts for the non-idealities of the aqueous and solid phases. Equilibrium calculations for multicomponent ion exchange systems require the solution of a system of nonlinear equations and, depending on the number of species involved, can demand a long computational time. An alternative to conventional modelling is the use of Artificial Neural Networks. In this context, the objective of the present work was to evaluate the application of Artificial Neural Networks to the modelling of binary and ternary ion exchange equilibrium data, and also to assess the feasibility of applying Artificial Neural Networks to predict the equilibrium data of ternary systems from binary-system information. To evaluate the efficiency of the Artificial Neural Networks in describing ion exchange equilibrium data, the results were compared with the values calculated by applying the Mass Action Law. Two experimental ion exchange data sets were used. The first consisted of the binary and ternary systems of the sulphate, chloride and nitrate ions with the resin AMBERLITE IRA 400 as the exchanger, at a total concentration of 0.2 N and 298 K, obtained by SMITH and WOODBURN (1978). 
The second consisted of binary and ternary data for the lead, copper and sodium ions with clinoptilolite as the exchanger, at a concentration of 0.005 eq/L and a temperature of 303 K, obtained by FERNANDEZ (2004). The network inputs were the ion compositions in solution and the outputs were the resin compositions. Several ANN structures were trained. Different architectures were tested, varying the number of neurons in the input layer (from 2 to 20) and in the hidden layer (1 or 2), always seeking the structure with the lowest objective function value. The Powell and Simplex methods were used to determine the network weights. The Mass Action Law proved effective in describing the following binary systems: SO42--NO3-, SO42--Cl-, NO3--Cl-, Pb2+-Na+ and Cu2+-Na+; however, the results for the Na+-Pb2+ system were not satisfactory. In modelling the binary data, the Artificial Neural Networks proved effective in all the investigated cases. In predicting the ternary systems, the Mass Action Law was effective only for the SO42--NO3-, SO42--Cl- and NO3--Cl- systems. The prediction of ternary equilibrium data for the two evaluated systems using Artificial Neural Networks trained on binary data generated by the Mass Action Law was not effective. For the ternary system (SO42-, NO3-, Cl-), the Artificial Neural Networks trained with the binary data set plus a few ternary experimental equilibrium data (three and seven points) were able to represent the behaviour of the system accurately. For the ternary system (Pb2+, Cu2+, Na+), networks trained on the binary data set plus all the experimental ternary data gave satisfactory results, with errors in the range of 2% to 6%. The Artificial Neural Networks did not show predictive capacity to describe the equilibrium of the ion exchange process. However, the networks have one advantage over the Mass Action Law: they allow the equilibrium compositions of the resin to be calculated explicitly. / Na maioria das aplicações do processo de troca iônica na indústria química estão presentes várias espécies iônicas que competem entre si pelos sítios ativos do trocador iônico. Portanto, o projeto destes sistemas requer uma análise dos coeficientes de seletividade dos íons presentes na solução que determina a influência do processo de separação. Os dados de equilíbrio de processos de troca iônica geralmente são descritos pela Lei da Ação das Massas, pois nesta abordagem são consideradas as não-idealidades das fases aquosa e sólida. 
O cálculo de Equilíbrio em sistemas de troca iônica em sistemas multicomponentes requer a resolução de um sistema de equações não lineares, e dependendo do número de espécies envolvidas pode-se requerer um elevado tempo computacional. Uma alternativa à modelagem convencional é o emprego das Redes Neurais Artificiais. Dentro deste contexto, o objetivo do presente trabalho foi avaliar a aplicação das Redes Neurais Artificiais na modelagem dos dados binários e ternários de equilíbrio em sistemas de troca iônica, e também avaliar a viabilidade de aplicar as Redes Neurais Artificiais na predição dos dados de equilíbrio dos sistemas ternários a partir de informações dos sistemas binários. Para avaliar a eficiência das Redes Neurais Artificiais na descrição dos dados de equilíbrio de sistemas de troca iônica, os resultados obtidos foram comparados com os valores calculados pela aplicação da Lei da Ação das Massas. Foram utilizados dois conjuntos de dados experimentais de troca iônica. O primeiro conjunto era constituído pelos sistemas binários e ternários dos íons sulfato, cloreto e nitrato e como trocador iônico a resina AMBERLITE IRA 400, com concentração total de 0,2N a 298K e foram obtidos por SMITH e WOODBURN (1978). O segundo conjunto era constituído dos dados binários e ternários dos íons de chumbo, cobre e sódio e como trocador iônico a clinoptilolita, com concentração 0,005 eq/L e temperatura de 303K, obtidos por FERNANDEZ (2004). Os dados de entrada da rede foram a composição dos íons em solução e de saída foram a composição da resina. Efetuou-se o treinamento de diversas estruturas de RNAs. Foram testadas diferentes arquiteturas variando o número de neurônios da camada de entrada e da camada oculta. O número de neurônios da camada de entrada variou de 2 até 20 e da camada oculta de 1 até 2, buscando sempre uma estrutura com o menor valor da função objetivo. Os métodos Powell e Simplex foram utilizados para determinar os pesos da rede. 
A Lei da Ação das Massas mostrou-se eficiente na descrição dos seguintes sistemas binários: SO42--NO3-, SO42--Cl-, NO3--Cl-, Pb2+-Na+ e Cu2+-Na+; entretanto, os resultados para o sistema Na+-Pb2+ não foram satisfatórios. Na modelagem dos dados binários, as Redes Neurais Artificiais se mostraram eficientes em todos os casos investigados. Na predição do sistema ternário, a Lei da Ação das Massas mostrou-se eficiente somente para os sistemas SO42--NO3-, SO42--Cl- e NO3--Cl-. A predição dos dados de equilíbrio ternário para os dois sistemas avaliados, empregando as Redes Neurais Artificiais a partir dos dados binários gerados pela Lei da Ação das Massas, não se mostrou eficiente. No sistema ternário (SO42-, NO3-, Cl-), as Redes Neurais Artificiais treinadas com o conjunto de dados binários e com a inclusão de dados experimentais ternários de equilíbrio (três e sete dados) conseguiram representar com precisão o comportamento do sistema. No sistema ternário (Pb2+, Cu2+, Na+), as redes treinadas a partir do conjunto de dados binários e com a inclusão de todos os dados experimentais do sistema ternário forneceram resultados satisfatórios, pois apresentaram erros na faixa de 2% a 6%. As Redes Neurais Artificiais não apresentaram capacidade preditiva de descrever o equilíbrio no processo de troca iônica. Entretanto, as redes apresentam uma vantagem em relação à Lei da Ação das Massas: permitem que as composições de equilíbrio da resina sejam calculadas explicitamente.
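For intuition, the binary calculation that this abstract contrasts with the networks can be sketched for the simplest case: two equal-charge ions with ideal phases, where the Mass Action Law reduces to y/(1-y) = K·x/(1-x), so the resin-phase equivalent fraction y is explicit in the solution-phase fraction x. The thesis uses activity-corrected, non-ideal forms; the selectivity coefficient K and the values below are illustrative only.

```python
# Ideal binary exchange of equal-charge ions: both directions are explicit,
# which is the convenience the abstract attributes to the trained networks.
def resin_fraction(x, K):
    """Resin-phase equivalent fraction y for solution-phase fraction x."""
    r = K * x / (1.0 - x)
    return r / (1.0 + r)

def solution_fraction(y, K):
    """Inverse relation: solution-phase fraction x for resin-phase fraction y."""
    s = y / (1.0 - y) / K
    return s / (1.0 + s)
```

With K = 1 the isotherm is the non-selective diagonal (y = x); K > 1 bends it upward, favouring the ion in the resin phase. In the multicomponent, non-ideal case this closed form disappears and a nonlinear system must be solved, which is the computational cost the networks avoid.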
|
535 |
Metodologia para detecção e localização de áreas de defeitos de alta impedância com a presença da geração distribuída
Ledesma, Jorge Javier Giménez 12 February 2017 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Este trabalho propõe o desenvolvimento de modelos e métodos numéricos, baseados em redes neurais artificiais, para a detecção e localização de áreas com defeitos de alta impedância em sistemas de distribuição. De forma paralela, também é avaliada a eficiência da utilização de diferentes tipos de medição de dados no desempenho do método, que é implementado em duas etapas.
A primeira etapa consiste na adaptação de um programa existente para cálculo de faltas, tendo como objetivo gerar de forma aleatória vários tipos de defeitos, assim como a localização dos mesmos. A metodologia de cálculo de defeitos foi desenvolvida utilizando as equações de injeção de correntes em coordenadas retangulares. Neste programa, também serão considerados os modelos de carga variantes com a tensão durante os defeitos e modelos de diversas gerações distribuídas, convencionais e não convencionais.
Em seguida, foi desenvolvido e implementado um método baseado em redes neurais artificiais, para detecção e identificação de faltas, assim como para estimar a localização de faltas em um sistema de distribuição. Esta rede neural possui como entrada módulos e ângulos das tensões e correntes do sistema elétrico, obtidas através das medições fasoriais dos PMUs e/ou IEDs. As saídas da rede neural correspondem à detecção e localização de áreas de defeitos.
O método proposto foi desenvolvido no ambiente MatLab® e, com o intuito de avaliar sua eficiência, foi testado em alguns sistemas IEEE e em um sistema real. Os resultados obtidos dos estudos são apresentados sob a forma de tabelas e gráficos com suas respectivas acurácias, números de neurônios e as diferentes configurações adotadas. / This work proposes the development of numerical models and methods, based on artificial neural networks, for the detection and location of high impedance fault areas in distribution systems. In parallel, the effect of different types of measurement data on the performance of the method, which is implemented in two steps, is also evaluated.
The first step consists in adapting an existing fault-calculation program so as to randomly generate several types of faults, as well as their locations. The fault calculation methodology was developed using current injection equations in rectangular coordinates. The program also considers voltage-dependent load models during faults and a variety of conventional and unconventional distributed generation models.
Next, a method based on artificial neural networks is developed and implemented for the detection and identification of faults, as well as to estimate the fault location within a distribution system. The neural network inputs are the magnitudes and angles of the voltages and currents of the electrical system, obtained from PMU and/or IED phasor measurements. The outputs of the neural network correspond to the detection and location of fault areas.
The proposed method was developed in the MatLab® environment and, in order to evaluate its efficiency, tested on several IEEE systems and on a real system. The results of the studies are presented in the form of tables and graphs with their respective accuracies, numbers of neurons and the different configurations adopted.
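The input layer described in this abstract — magnitudes and angles of measured voltage and current phasors — can be sketched as a feature-assembly step. The helper below is an assumption for illustration (the function name, the per-unit base and the example values are not from the thesis).

```python
import cmath

# Hypothetical preprocessing: flatten the complex phasors reported by PMUs/IEDs
# into the [magnitude, angle, magnitude, angle, ...] vector fed to the network.
def phasors_to_features(phasors, base=1.0):
    feats = []
    for p in phasors:
        feats.append(abs(p) / base)        # magnitude, per unit
        feats.append(cmath.phase(p))       # angle in radians
    return feats

# Example: one voltage phasor at nominal magnitude, one current phasor at 90 deg.
FEATS = phasors_to_features([1.0 + 0.0j, 0.0 + 1.0j])
```

During a high-impedance fault, the small distortions in these magnitudes and angles across several measurement points are exactly the pattern the trained network is asked to recognize and localize.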
|
536 |
Estratégia computacional para avaliação de propriedades mecânicas de concreto de agregado leve
Bonifácio, Aldemon Lage 16 March 2017 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / O concreto feito com agregados leves, ou concreto leve estrutural, é considerado um material de construção versátil, bastante usado em todo o mundo, em diversas áreas da construção civil, tais como, edificações pré-fabricadas, plataformas marítimas, pontes, entre outros. Porém, a modelagem das propriedades mecânicas deste tipo de concreto, tais como o módulo de elasticidade e a resistência a compressão, é complexa devido, principalmente, à heterogeneidade intrínseca aos componentes do material. Um modelo de predição das propriedades mecânicas do concreto de agregado leve pode ajudar a diminuir o tempo e o custo de projetos ao prover dados essenciais para os cálculos estruturais. Para esse fim, este trabalho visa desenvolver uma estratégia computacional para a avaliação de propriedades mecânicas do concreto de agregado leve, por meio da combinação da modelagem computacional do concreto via MEF (Método de Elementos Finitos), do método de inteligência computacional via SVR (Máquina de vetores suporte com regressão, do inglês Support Vector Regression) e via RNA (Redes Neurais Artificiais). Além disso, com base na abordagem de workflow científico e many-task computing, uma ferramenta computacional foi desenvolvida com o propósito de facilitar e automatizar a execução dos experimentos científicos numéricos de predição das propriedades mecânicas. / Concrete made from lightweight aggregates, or lightweight structural concrete, is considered a versatile construction material, widely used throughout the world, in many areas of civil construction, such as prefabricated buildings, offshore platforms, bridges, among others. However, the modeling of the mechanical properties of this type of concrete, such as the modulus of elasticity and the compressive strength, is complex due mainly to the intrinsic heterogeneity of the components of the material.
A predictive model of the mechanical properties of lightweight aggregate concrete can help reduce project time and cost by providing essential data for structural calculations. To this end, this work develops a computational strategy for the evaluation of the mechanical properties of lightweight concrete by combining computational modelling of the concrete via the Finite Element Method with the computational intelligence methods Support Vector Regression and Artificial Neural Networks. In addition, based on the scientific workflow and many-task computing approaches, a computational tool was developed with the purpose of facilitating and automating the execution of the numerical scientific experiments for predicting the mechanical properties.
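As a rough sketch of the regression step, the code below fits a kernel model to a toy "mix parameter → strength" relation using kernel ridge regression, a close relative of the SVR named above (both are kernel methods; the data, kernel width and regularization here are invented, and this is not the thesis's implementation).

```python
import math

# Toy data (invented): a single mix parameter against a synthetic strength y = x**2.
XS = [0.0, 1.0, 2.0, 3.0, 4.0]
YS = [x ** 2 for x in XS]

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel, as commonly paired with SVR."""
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit(xs, ys, lam=1e-6):
    """Dual coefficients alpha = (K + lam*I)^-1 y."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return solve(K, ys)

def predict(xs, alpha, x):
    return sum(a * rbf(xi, x) for a, xi in zip(alpha, xs))

ALPHA = fit(XS, YS)
```

In the thesis's setting, the training pairs would come from finite element simulations or laboratory tests rather than a closed-form toy function, and the many-task tool would dispatch those simulations in bulk.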
|
537 |
Automatic control of a marine loading arm for offshore LNG offloading / Commande d’un bras de chargement de gaz naturel liquéfié en milieu marin
Besset, Pierre 27 April 2017 (has links)
Un bras de chargement de gaz est une structure articulée dans laquelle du méthane peut s’écouler à température cryogénique. En haute mer, ces bras sont installés sur le pont de navires-usines et se connectent à des méthaniers pour leur transférer du gaz. En raison de problèmes de sécurité et de performances, il est souhaité que le bras de chargement soit robotisé pour qu’il se connecte automatiquement. Cette thèse a pour objectif l‘automatisation de la connexion. Cette opération nécessite un pilotage de grande précision vis-à-vis de la taille du bras. Pour cette raison le bras est d’abord étalonné pour augmenter sa précision statique. Ensuite, des analyses modales expérimentales mettent en évidence l’importante souplesse de la structure des bras de chargement. Pour cette raison un générateur de trajectoires « douces », à jerk limité, est développé afin de piloter le bras sans le faire vibrer. Enfin, un système de compensation actif visant à compenser les mouvements relatifs des deux navires est mis en place. Cette compensation combine la génération de trajectoires douces avec une composante prédictive basée sur des réseaux de neurones. Cette dernière permet de prédire et d’anticiper les mouvements des navires sur l’océan, afin d’annuler tout retard dans la compensation. Finalement, cette thèse présente la première connexion automatique d’un bras de chargement, et démontre la validité de cette approche. / Marine loading arms are articulated structures that transfer liquefied gas between two vessels. The flanging operation of the loading arm to the receiving tanker is very sensitive. This thesis aims to robotize a loading arm so it can flange automatically. The required accuracy for the connection is very high. A calibration procedure is thus proposed to increase the accuracy of loading arms. Moreover a jerk-limited trajectory generator is developed to smoothly drive the arm without inducing oscillation. 
This element is important because the structures of loading arms have a very low stiffness and easily oscillate, as highlighted by modal analyses.A predictive active compensation algorithm is developed to track without delay the relative motion between the two vessels. This algorithm relies on an artificial neural network able to predict the evolution of this relative motion. Finally this thesis presents the first automatic connection of an offshore loading arm. The success of the final tests validate the feasibility the automatic connection and the validity of this approach.
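The jerk-limited ("smooth") trajectory generation described in this abstract can be sketched with a minimal example. This is an illustration, not the author's implementation: it assumes the simplest case where only the jerk limit is active, giving a four-phase bang-bang jerk profile (+J, -J, -J, +J) with per-phase duration T = (D / 2J)^(1/3) that moves the arm a distance D with zero boundary velocity and acceleration.

```python
# Minimal jerk-limited point-to-point profile (illustrative sketch).
# Assumes only the jerk limit binds: four equal phases of bang-bang
# jerk +J, -J, -J, +J trace a smooth S-curve that starts and ends at rest.

def jerk_limited_profile(distance, j_max, dt=1e-4):
    """Integrate the bang-bang jerk profile; return sampled position/velocity."""
    T = (distance / (2.0 * j_max)) ** (1.0 / 3.0)  # per-phase duration
    phases = [+j_max, -j_max, -j_max, +j_max]
    x = v = a = 0.0
    xs, vs = [x], [v]
    for j in phases:
        t = 0.0
        while t < T:
            a += j * dt
            v += a * dt
            x += v * dt
            t += dt
            xs.append(x)
            vs.append(v)
    return xs, vs

xs, vs = jerk_limited_profile(distance=1.0, j_max=2.0)
print(round(xs[-1], 2))  # close to the commanded 1.0 m travel
print(round(vs[-1], 3))  # arm comes back to rest
```

In practice the generator in the thesis also enforces velocity and acceleration limits (a full seven-segment S-curve); the sketch keeps only the jerk constraint to show why the resulting motion does not excite the flexible structure.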
|
538 |
Flutter Susceptibility Assessment of Airplanes in Sub-critical Regime using Ameliorated Flutter Margin and Neural Network Based Methods Kumar, Brijesh January 2014 (has links) (PDF)
As flight flutter testing on an airplane progresses to high dynamic pressures and the high Mach number region, it becomes very difficult for engineers to predict the level of remaining stability in a flutter-prone mode or mechanism when the response data is infested with uncertainty. Uncertainty and the ensuing scatter in modal data trends always lead to diminished confidence amidst the possibility of a sudden decrease in modal damping of a flutter-prone mode. Since the safety of the instrumented prototype and the crew cannot be compromised, a large number of test-points are planned, which eventually results in increased development time and associated costs. There has been a constant demand from the flight test community to improve understanding of the conventional methods and develop new methods that could enable ground-station engineers to make better decisions with regard to flutter susceptibility of structural components on the airframe. An extensive literature survey has been done to take due cognizance of the ground realities, historical developments, and the state of the art. In addition, a discussion of the results of a survey carried out on occurrences of flutter among general aviation airplanes is provided at the very outset.
Data for the research comprises results of Computational Aeroelasticity Analysis (CAA) and limited Flight Flutter Tests (FFTs) on two slightly different structural designs of the airframe of a supersonic fixed-wing airplane. Detailed discussion is provided with regard to the nature of the data, the certification requirements for an airplane to be flutter-free in the flight envelope, and the adopted process of flight flutter testing. Four flutter-prone modes, with two forming a symmetric bending-pitching flutter mechanism and the other two forming an anti-symmetric bending-pitching mechanism, have been identified based on the analysis of computational data. CAA and FFT raw data of these low-frequency flutter modes is provided, followed by discussion of its quality and the flutter susceptibility of the critical mechanisms. Certain flight conditions, along constant-altitude and constant-Mach-number lines, have been chosen on the basis of the availability of FFT data near the same flight conditions.
Modal damping is often a highly non-linear function of airspeed, and scatter in such trends of modal damping can be very misleading. The flutter margin (FM) parameter, a measure of the remaining stability in a binary flutter mechanism, exhibits smooth and gradual variation with dynamic pressure. First, this thesis brings out the established knowledge of the flutter margin method and marks the continuing knowledge gaps, especially about the applicable form of the flutter margin prediction equation in the transonic region. Further theoretical developments revealed that the coefficients of this equation are largely dependent on the flight condition, and that the equation should only be used in small 'windows' of the flight envelope, making real-time flutter susceptibility assessment 'progressive' in nature. It is brought out, first, that lift curve slope should not be treated as a constant while using the prediction equation at constant altitudes on an airplane capable of transonic flight. Second, it was realized that the effect of the shift in aerodynamic centre must be considered, as it causes a 'transonic hump'. Since the quadratic form of the flutter margin prediction equation, developed 47 years ago, does not provide a valid explanation in that region, a general equation has been derived. Furthermore, flight test data from only the supersonic region must be used for making acceptable predictions in the supersonic region.
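The way the FM parameter is used in practice can be illustrated with a toy calculation (synthetic numbers, not data from the thesis): FM values obtained at sub-critical test-points are fitted with the quadratic prediction equation FM = b0 + b1·q + b2·q², and the fit is extrapolated to FM = 0 to estimate the flutter dynamic pressure.

```python
# Toy illustration of the data-based flutter margin method: fit
# FM = b0 + b1*q + b2*q**2 through sub-critical test-points, then
# extrapolate to FM = 0 to estimate the flutter dynamic pressure.
# The "measurements" below are synthetic, not from the thesis.
import math

q_pts = [20.0, 60.0, 100.0]      # dynamic pressures at the test-points
fm_pts = [115.2, 100.8, 80.0]    # flutter margin values "measured" there

def det3(m):
    """Determinant of a 3x3 matrix (for Cramer's rule)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Solve the 3x3 Vandermonde system for the quadratic coefficients.
A = [[1.0, q, q * q] for q in q_pts]
D = det3(A)
coef = []
for col in range(3):
    Ai = [row[:] for row in A]
    for r in range(3):
        Ai[r][col] = fm_pts[r]
    coef.append(det3(Ai) / D)
b0, b1, b2 = coef

# Positive root of b2*q^2 + b1*q + b0 = 0 -> predicted flutter onset.
q_flutter = (-b1 - math.sqrt(b1 * b1 - 4.0 * b2 * b0)) / (2.0 * b2)
print(round(q_flutter, 1))  # ≈ 200.0
```

The thesis's point is that this extrapolation is only trustworthy when the coefficients are treated as local to a small window of the flight envelope and, near the transonic hump, when the varying lift curve slope and aerodynamic-centre shift are accounted for.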
The 'ameliorated' flutter margin prediction equation also yields poor predictions in the transonic region. This has been attributed to the non-validity of the quasi-steady approximation of aerodynamic loads and other additional non-linear effects. Although the equation with the effect of changing lift curve slope provides inconsistent predictions inside and near the region of the transonic hump, the errors have been acceptable in most cases. No consistent congruency was found with earlier reports that the FM trend is mostly parabolic in the subsonic region and linear in the supersonic region. It was also found that large scatter in the modal frequencies of the constituent modes can lead to scatter in flutter margin values, which can render the flutter margin method as ineffective as polynomial fitting of modal damping ratios. If the modal parameters at a repeated test-point exhibit a Gaussian spread, the distribution in FM is non-Gaussian but close to gamma-type.
Fifteen uncertainty factors that cause scatter in modal data during FFT, and the factors that cause modelling errors in a computational model, have been enumerated. Since scatter in modal data is ineluctable, it was realized that a new predictive tool is needed in which the probable uncertainty can be incorporated proactively. Given the recent shortcomings of NASA's flutter meter, the neural network based approach was recognized as the most suitable one. MLP neural networks have been used successfully in such scenarios for function approximation through input-output mapping, provided the domains of the two remain finite.
A neural network requires ample data for good learning and some relevant testing data for the evaluation of its performance. It was established that additional data can be generated by perturbing the modal mass matrix in the computational model within a symmetric bound. Since FFT is essentially an experimental process, it was realized that such a bound should be obtained from experimental data only, as the full effects of the uncertainty factors manifest only during flight tests. The 'validation FFT program', a flight test procedure for establishing such a bound from repeated tests at five diverse test-points in the safe region, has been devised after careful evaluation of guidelines and international practice. A simple statistical methodology has been devised to calculate the bound-of-uncertainty when modal parameters from repeated tests show a Gaussian distribution. Since no repeated tests were conducted on the applicable airframe, a hypothetical example with compatible data was considered to explain the procedure. Some key assumptions have been made and discussion regarding their plausibility has been provided. Since no updated computational model was made available, the next best option of causing random variation in the nominal values of CAA data was exercised to generate additional data for arriving at the final form of the neural network architecture and making predictions of damping ratios and FM values.
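The data-generation idea above can be sketched as follows. This is a hypothetical illustration with invented numbers, in the spirit of the thesis's own hypothetical example, not its actual procedure: repeated tests at a safe test-point give a Gaussian spread of a modal parameter, a symmetric bound is taken as a multiple of the sample standard deviation, and extra training samples are generated by perturbing the nominal computational value within that bound.

```python
# Hypothetical sketch: derive a symmetric bound-of-uncertainty from
# repeated flight-test measurements, then perturb the nominal CAA value
# within that bound to generate extra training data for the network.
import random
import statistics

random.seed(0)

# Repeated measurements of a modal frequency (Hz) at one safe
# test-point (invented values for illustration).
repeated = [5.12, 5.08, 5.15, 5.10, 5.11, 5.09, 5.14, 5.13]
sigma = statistics.stdev(repeated)
bound = 2.0 * sigma        # symmetric bound, chosen here as 2-sigma

nominal = 5.20             # nominal value from the computational model
training_samples = [nominal + random.uniform(-bound, bound)
                    for _ in range(200)]

# Every generated sample stays within the symmetric bound by construction.
print(all(abs(s - nominal) <= bound for s in training_samples))  # True
```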
The problem of progressive flutter susceptibility assessment was formulated such that the CAA data from four previous test-points were considered as input vectors and CAA data from the next test-point was the corresponding output. General heuristics for optimal learning performance have been developed. Although obtaining an optimal set of network parameters has been relatively easy, there was no single set of network parameters that would lead to consistently good predictions. Therefore some fine-tuning of network parameters about the optimal set was often needed to achieve good generalization.
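The progressive formulation described above amounts to a sliding-window mapping over the flight envelope. The sketch below shows that mapping on an invented scalar sequence (it is not the thesis code): each input vector stacks the data from four consecutive test-points, and the target is the value at the next one.

```python
# Sketch of the progressive input-output formulation: four consecutive
# test-points form the network input, the fifth is the prediction target.
# The "CAA data" is an invented scalar sequence for illustration.

def make_windows(series, window=4):
    """Build (input, target) pairs for progressive prediction."""
    pairs = []
    for i in range(len(series) - window):
        pairs.append((series[i:i + window], series[i + window]))
    return pairs

caa_fm = [118.0, 112.5, 105.9, 98.0, 88.7, 78.1]  # invented FM values
pairs = make_windows(caa_fm)
print(len(pairs))   # 2 training pairs from 6 test-points
print(pairs[0])     # ([118.0, 112.5, 105.9, 98.0], 88.7)
```

Each such pair would then be fed to the MLP; as the thesis notes, good predictions require flight-test values at the four already-flown test-points to lie within the bound about their nominals.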
It was found that data from the four already-flown test-points tends to dominate the network prediction, and the availability of flight-test data from these previous test-points within the bound about the nominal is crucial for good predictions. The performance improves when all five test-points are closer together. If the above requirements were met, the predictive performance of the neural network was much more consistent in flutter margin values than in modal damping ratios. An alternative algorithm for training the MLP network, Particle Swarm Optimization (PSO), has also been tested. It was found that the gradient-descent-based algorithm is much more suitable than PSO in terms of training time, predictive performance, and real-time applicability. In summary, the main intellectual contributions of this thesis are as follows:
• Realization of the fact that secondary causes lead to more incidences of flutter on airplanes than primary causes.
• Completion of theoretical understanding of data-based flutter margin method and flutter margin prediction equation for all ranges of flight Mach number, including the transonic region.
• Vindication of the fact that including lift-curve slope in the flutter margin prediction equation leads to improved predictions of flutter margins in the subsonic and supersonic regions, and that progressive flutter susceptibility assessment is the best way of reaping the benefits of data-based methods.
• Explanation of a plausible recommended process for evaluation of uncertainty in modal damping and flutter margin parameter.
• Realization of the fact that an MLP neural network, which treats a flutter mechanism as a stochastic non-linear system, is indeed a promising approach for real-time flutter susceptibility assessment.
|
539 |
Investigations of calorimeter clustering in ATLAS using machine learning Niedermayer, Graeme 11 January 2018 (has links)
The Large Hadron Collider (LHC) at CERN is designed to search for new physics by colliding protons with a center-of-mass energy of 13 TeV. The ATLAS detector is a multipurpose particle detector built to record these proton-proton collisions. In order to improve sensitivity to new physics at the LHC, luminosity increases are planned for 2018 and beyond. With this greater luminosity comes an increase in the number of simultaneous proton-proton collisions per bunch crossing (pile-up). This extra pile-up has adverse effects on algorithms for clustering the ATLAS detector's calorimeter cells. These adverse effects stem from overlapping energy deposits originating from distinct particles and could lead to difficulties in accurately reconstructing events. Machine learning algorithms provide a new tool that has the potential to improve clustering performance. Recent developments in computer science have given rise to a new set of machine learning algorithms that, in many circumstances, out-perform more conventional algorithms. One of these algorithms, convolutional neural networks, has been shown to have impressive performance when identifying objects in 2D or 3D arrays. This thesis will develop a convolutional neural network model for calorimeter cell clustering and compare it to the standard ATLAS clustering algorithm. / Graduate
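The core operation behind the convolutional network investigated in this thesis can be illustrated with a toy example (pure Python with invented numbers, not the ATLAS code): a small kernel is slid over a grid of calorimeter cell energies, and the strongest response localizes the energy deposit.

```python
# Toy 2D convolution over a grid of calorimeter cell energies.
# A 3x3 summing kernel responds most strongly where deposited energy
# is concentrated, which is the intuition behind using convolutional
# networks for calorimeter cell clustering.

def conv2d(grid, kernel):
    """Valid-mode 2D cross-correlation for plain nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(grid) - kh + 1):
        row = []
        for j in range(len(grid[0]) - kw + 1):
            row.append(sum(grid[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# 7x7 cell grid with an energy deposit centred at cell (3, 3).
grid = [[0.0] * 7 for _ in range(7)]
grid[3][3] = 5.0
for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
    grid[3 + di][3 + dj] = 1.0

kernel = [[1.0] * 3 for _ in range(3)]
response = conv2d(grid, kernel)
peak = max((val, i, j) for i, row in enumerate(response)
           for j, val in enumerate(row))
print(peak)  # (9.0, 2, 2): strongest response maps back to grid cell (3, 3)
```

A trained network replaces the hand-picked kernel with learned filters stacked in layers, which is what lets it separate overlapping deposits from pile-up rather than merely summing energy.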
|
540 |
Metamodel-Based Multidisciplinary Design Optimization of Automotive Structures Ryberg, Ann-Britt January 2017 (has links)
Multidisciplinary design optimization (MDO) can be used in computer aided engineering (CAE) to efficiently improve and balance the performance of automotive structures. However, large-scale MDO is not yet generally integrated within automotive product development due to several challenges, of which excessive computing time is the most important. In this thesis, a metamodel-based MDO process that fits normal company organizations and CAE-based development processes is presented. The introduction of global metamodels offers a means to increase computational efficiency and distribute work without implementing complicated multi-level MDO methods. The presented MDO process is proven to be efficient for thickness optimization studies with the objective to minimize mass. It can also be used for spot weld optimization if the models are prepared correctly. A comparison of different methods reveals that topology optimization, which requires less model preparation and computational effort, is an alternative if load cases involving simulations of linear systems are judged to be of major importance. A technical challenge when performing metamodel-based design optimization is the lack of accuracy of metamodels representing complex responses that include discontinuities, which are common in, for example, crashworthiness applications. The decision boundary from a support vector machine (SVM) can be used to identify the border between different types of deformation behaviour. In this thesis, this information is used to improve the accuracy of feedforward neural network metamodels. Three different approaches are tested: splitting the design space and fitting separate metamodels for the different regions; adding estimated guiding samples along the boundary to the fitting set before a global metamodel is fitted; and using a special SVM-based sequential sampling method.
Substantial improvements in accuracy are observed, and it is found that implementing SVM-based sequential sampling and estimated guiding samples can result in successful optimization studies for cases where more conventional methods fail.
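The guiding-sample idea can be sketched on a one-dimensional toy problem (invented, not the thesis workflow, which uses an SVM boundary in higher-dimensional design spaces): the border between two deformation behaviours is estimated from labelled samples, and extra fitting points are concentrated around that border so a global metamodel can resolve the discontinuity.

```python
# Toy sketch of "guiding samples": locate the boundary between two
# response regimes from labelled samples, then add extra fitting points
# clustered around it before fitting a global metamodel.
# (Invented 1D example; the thesis estimates the boundary with an SVM.)

def discontinuous_response(x):
    # Two deformation regimes with a jump at x = 0.5.
    return 1.0 + x if x < 0.5 else 3.0 + x

xs = [i / 10.0 for i in range(11)]                 # initial samples
labels = [discontinuous_response(x) < 2.0 for x in xs]

# Boundary estimate: midpoint between the last sample of one regime and
# the first of the other (a stand-in for the SVM decision boundary).
flip = next(i for i in range(1, len(xs)) if labels[i] != labels[i - 1])
boundary = 0.5 * (xs[flip - 1] + xs[flip])

# Guiding samples straddling the estimated boundary.
guiding = [boundary - 0.02, boundary + 0.02]
fitting_set = sorted(xs + guiding)
print(round(boundary, 2))  # 0.45
```

With the fitting set densified near the discontinuity, a single feedforward network metamodel is far less likely to smear the jump across the whole design space, which is the failure mode the three approaches in the thesis address.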
|