171

Värdeflödesanalys på DIAB AB Laholm

Mehmedovic, Edin January 2006 (has links)
<p>This report is the result of a 20-point project at the University of Jönköping. The project was carried out as a case study with the objective of analysing the value flow at DIAB AB's confection department in Laholm. The aim of the project is to submit proposals to the production management on how to increase the efficiency of the production flow at the confection department and reduce the capital tied up as work in progress.</p><p>The information in this report was gathered from interviews, observations and measurements. Furthermore, a literature study was carried out with a view to finding suitable theories for analysing the present as well as the proposed future production conditions.</p><p>This report is based on four main questions:</p><p>• What does the existing value flow process for the most produced product family look like?</p><p>• How does the value flow process for GS perform with respect to throughput time?</p><p>o How long is the throughput time of a representative product of the GS family?</p><p>o How long are the value-adding and non-value-adding times for that product along its production flow?</p><p>• Which production-related disturbances and cost drivers exist in the present value flow process?</p><p>• How could the value flow process for GS be made more efficient, less susceptible to disturbances and more competitive?</p><p>The existing value flow process for the most produced product family has been mapped and is illustrated in appendix 3. At present, the process includes nine work stations along the production chain.</p><p>The throughput time of a representative GS product is, according to my survey, 18.5 days. The value-adding time is only 16.1 minutes, that is, 0.061% of the entire throughput time. The remaining time, in other words the non-value-adding time, is 440 hours and consists mainly of storage and transport of products.</p><p>The representative production disturbances and cost drivers that characterise the value flow process include material-related disturbances, a high number of long changeovers, long storage times prior to the customer order point and, with that, high capital tie-up, and finally unnecessary transports.</p><p>The improvement proposals aim to increase the efficiency of the value flow process and reduce the capital tied up by shifting from the present make-to-order production strategy (TMO) to an assemble-to-order strategy (MMO).</p><p>To make this possible, a semi-finished goods store will be introduced after the standard confection, representing the new decoupling point. Production at the standard confection will then be driven by the semi-finished goods store. The standard confection should produce in larger aggregated order quantities based on forecasts in order to benefit from economies of scale, and production must proceed in a continuous flow according to the FIFO principle (First In, First Out). In addition, the special confection must produce according to a pull system, and only when a customer places an order.</p><p>The takt time of the GS products should constitute an upper limit for all cycle times along the production chain, at both the standard and the special confection. This is partly to create a constant and balanced production flow that enables short throughput times, and partly to avoid intermediate storage caused by bottlenecks.</p>
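The throughput figures quoted in the abstract can be cross-checked with a few lines of arithmetic (a sketch using only the numbers reported above; the small gap between the computed non-value-adding time and the stated 440 hours is rounding in the abstract):

```python
# Cross-check the value-adding share of the throughput time.
# Figures from the abstract: 18.5 days total, 16.1 minutes value-adding.
throughput_min = 18.5 * 24 * 60          # total throughput time in minutes
value_adding_min = 16.1                  # value-adding time in minutes

share = value_adding_min / throughput_min * 100
non_value_adding_h = (throughput_min - value_adding_min) / 60

print(f"value-adding share: {share:.3f} %")                  # ≈ 0.060 %
print(f"non-value-adding time: {non_value_adding_h:.0f} h")  # ≈ 444 h
```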
172

The effect of scale on the morphology, mechanics and transmissivity of single rock fractures

Fardin, Nader January 2003 (has links)
<p>This thesis investigates the effect of scale on the morphology, mechanics and transmissivity of single rock fractures using both laboratory and in-situ experiments, as well as numerical simulations. Using a laboratory 3D laser scanner, the surface topography of a large silicon-rubber fracture replica of size 1 m x 1 m, as well as the topography of both surfaces of several high-strength concrete fracture replicas varying in size from 50 mm x 50 mm to 200 mm x 200 mm, were scanned. A geodetic Total Station and an in-situ 3D laser radar were also utilized to scan the surface topography of a large natural road-cut rock face of size 20 m x 15 m in the field. This digital characterization of the fracture samples was then used to investigate the scale dependency of the three-dimensional morphology of the fractures using a fractal approach. The fractal parameters of the surface roughness of all fracture samples, including the geometrical aperture of the concrete fracture samples, were obtained using the Roughness-Length method.</p><p>The results obtained from the fractal characterization of the surface roughness of the fracture samples show that both the fractal dimension, D, and the amplitude parameter, A, of a self-affine surface are scale-dependent, heterogeneous and anisotropic, and their values generally decrease with increasing sample size. However, this scale dependency is limited to a certain size, defined as the stationarity threshold, beyond which the surface roughness parameters of the fracture samples remain essentially constant. The surface roughness and the geometrical aperture of the tested concrete fracture replicas in this study did not reach stationarity, due to the structural non-stationarity of their surfaces at small scales. Although the aperture histogram of the fractures was almost independent of the sample size, below the stationarity threshold both the Hurst exponent, Hb, and the aperture proportionality constant, Gb, decrease with increasing sample size.</p><p>To investigate the scale effect on the mechanical properties of single rock fractures, several normal loading and direct shear tests were performed on the concrete fracture replicas subjected to different normal stresses under Constant Normal Load (CNL) conditions. The results showed that both the normal and shear stiffnesses, as well as the shear strength parameters of the fracture samples, decrease with increasing sample size. It was observed that the structural non-stationarity of surface roughness largely controls the contact areas and damage zones on the fracture surfaces in relation to the direction of shearing.</p><p>The aperture maps of the concrete fracture replicas of varying size and at different shear displacements, obtained from numerical simulation of the aperture evolution during shearing using their digitized surfaces, were used to investigate the effect of scale on the transmissivity of the single rock fractures. A FEM code was utilized to numerically simulate the fluid flow through the single rock fractures of varying size. The results showed that the flow rate not only increases with increasing sample size, but also increases significantly in the direction perpendicular to the shearing, due to the anisotropic roughness of the fractures.</p><p><b>Key words:</b> Anisotropy, Aperture, Asperity degradation, Contact area, Finite Element Method (FEM), Flow analysis, Fractals, Fracture morphology, Heterogeneity, Stress-deformation, Surface roughness, Roughness-Length method, Scale dependency, Stationarity, Transmissivity, 3D laser scanner.</p>
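The Roughness-Length method mentioned above can be sketched in one dimension: the RMS roughness s(w) of linearly detrended windows of size w scales as s(w) ∝ w^H for a self-affine profile, and the profile's fractal dimension follows as D = 2 − H. The following is a minimal illustration on a synthetic self-affine profile (spectral synthesis with a target Hurst exponent of 0.8); it is a sketch of the idea, not the thesis's surface implementation.

```python
import numpy as np

def roughness_length(z, window_sizes):
    """Estimate the Hurst exponent H of a 1-D profile z with the
    Roughness-Length method: the RMS roughness of linearly detrended
    windows of length w scales as s(w) ~ A * w**H for a self-affine profile."""
    s = []
    for w in window_sizes:
        rms = []
        for start in range(0, len(z) - w + 1, w):
            seg = z[start:start + w]
            x = np.arange(w)
            trend = np.polyval(np.polyfit(x, seg, 1), x)  # remove the local trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        s.append(np.mean(rms))
    H, logA = np.polyfit(np.log(window_sizes), np.log(s), 1)  # log-log regression
    return H, np.exp(logA)

# Synthetic self-affine profile via spectral synthesis: for Hurst exponent
# H, the amplitude spectrum decays as f**-(H + 0.5) (here H = 0.8).
rng = np.random.default_rng(0)
n = 4096
freqs = np.fft.rfftfreq(n)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** -(0.8 + 0.5)
phases = np.exp(2j * np.pi * rng.random(len(freqs)))
z = np.fft.irfft(amp * phases, n)

H, A = roughness_length(z, [8, 16, 32, 64, 128, 256])
print(f"estimated H = {H:.2f}")  # fractal dimension of the profile: D = 2 - H
```

For surfaces, the same regression is applied patch-wise and the fractal dimension becomes D = 3 − H.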
173

Turbulent Flow Analysis and Coherent Structure Identification in Experimental Models with Complex Geometries

Amini, Noushin 2011 December 1900 (has links)
Turbulent flows and the coherent structures emerging within turbulent flow fields have been studied extensively over the past few decades, and a wide variety of experimental and numerical techniques have been developed for the measurement and analysis of turbulent flows. The complex nature of turbulence requires methods that can accurately estimate its highly chaotic spatial and temporal behavior. Some of the classical cases of turbulent flows with simpler geometries have been well characterized by means of existing experimental techniques and numerical models. Nevertheless, since most turbulent fields have complex geometries, there is increasing interest in the study of turbulent flows through models with more complicated geometries. In this dissertation, the characteristics of turbulent flows through two different facilities with complex geometries are studied using two different experimental methods. The first study involves the investigation of turbulent impinging jets through a staggered array of rods, with and without crossflow. Such flows are crucial in various engineering disciplines. This experiment aimed to model the coolant flow behavior and mixing phenomena within the lower plenum of a Very High Temperature Reactor (VHTR). Dynamic Particle Image Velocimetry (PIV) and Matched Index of Refraction (MIR) techniques were applied to acquire the turbulent velocity fields within the model. Some key flow features that may significantly enhance flow mixing within the test section or actively affect some of the structural components were identified in the velocity fields. The evolution of coherent structures within the flow field is further investigated using a Snapshot Proper Orthogonal Decomposition (POD) technique. Furthermore, a comparative POD method is proposed and successfully implemented for the identification of the smaller but highly influential coherent structures that may not be captured in the full-field POD analysis.
The second experimental study examines the coolant flow through the core of an annular pebble-bed VHTR. The complex geometry of the core and the highly turbulent nature of the coolant flow passing through the gaps between fuel pebbles make this case quite challenging. In this experiment, a high-frequency Hot Wire Anemometry (HWA) system is applied for velocity measurements and for investigating the bypass flow phenomena within the near-wall gaps of the core. The velocity profiles within the gaps verify the presence of an area of increased velocity close to the outer reflector wall; however, the characteristics of the coolant flow profile are highly dependent on the gap geometry and, to a lesser extent, on the Reynolds number of the flow. The time histories of the velocity are further analyzed using a Power Spectral Density (PSD) technique to acquire information about the energy content and the energy transfer between eddies of different sizes at each point within the gaps.
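The Snapshot POD technique referred to above reduces a set of velocity snapshots to energy-ranked spatial modes. Below is a minimal sketch via the thin SVD of the fluctuation matrix, applied to an invented two-structure toy field (not the dissertation's PIV data):

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot POD: snapshots is an (n_points, n_snapshots) array.
    Returns spatial modes, relative modal energies, and temporal
    coefficients, computed from the thin SVD of the fluctuation matrix."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                      # fluctuations about the mean field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)              # relative energy per mode
    return U, energy, np.diag(s) @ Vt

# Hypothetical toy field: two superposed oscillating structures plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 2 * np.pi, 64)
field = (np.outer(np.sin(x), np.cos(5 * t))
         + 0.3 * np.outer(np.sin(3 * x), np.sin(9 * t))
         + 0.01 * rng.standard_normal((200, 64)))

modes, energy, coeffs = snapshot_pod(field)
print(energy[:2].round(3))  # the two planted structures dominate the energy
```

Truncating to the leading modes gives a low-order reconstruction; the comparative POD mentioned above instead contrasts decompositions of sub-regions to expose weaker structures.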
174

Stochastic Modeling and Analysis of Power Systems with Intermittent Energy Sources

Pirnia, Mehrdad 10 February 2014 (has links)
Electric power systems continue to increase in complexity because of the deployment of market mechanisms, the integration of renewable generation and distributed energy resources (DER) (e.g., wind and solar), and the penetration of electric vehicles and other price-sensitive loads. These revolutionary changes, and the consequent increase in uncertainty and dynamics, call for significant modifications to power system operation models, including unit commitment (UC), economic load dispatch (ELD) and optimal power flow (OPF). Planning and operation of these "smart" electric grids are expected to be impacted significantly because of the intermittent nature of the various supply and demand resources that have recently penetrated the system. The main focus of this thesis is the application of the Affine Arithmetic (AA) method to power system operational problems. The AA method is an efficient and accurate tool for incorporating uncertainties, as it accounts for the correlations amongst dependent variables and hence provides less conservative bounds than the Interval Arithmetic (IA) method. Moreover, the AA method does not require assumptions to approximate the probability distribution function (pdf) of random variables. In order to take advantage of the AA method in power flow analysis, a novel formulation of the power flow problem within an optimization framework that includes complementarity constraints is first proposed. The power flow problem is formulated as a mixed complementarity problem (MCP), which can take advantage of robust and efficient state-of-the-art nonlinear programming (NLP) and complementarity problem solvers. Based on the proposed MCP formulation, it is formally demonstrated that the Newton-Raphson (NR) solution of the power flow problem is essentially a step of the traditional Generalized Reduced Gradient (GRG) algorithm.
The solution of the proposed MCP model is compared with the commonly used NR method using a variety of small-, medium-, and large-sized systems in order to examine the flexibility and robustness of this approach. The MCP-based approach is then used in a power flow problem under uncertainties, in order to obtain the operational ranges for the variables based on the AA method considering active and reactive power demand uncertainties. The proposed approach does not rely on the pdf of the uncertain variables and is therefore shown to be more efficient than the traditional solution methodologies, such as Monte Carlo Simulation (MCS). Also, because of the characteristics of the MCP-based method, the resulting bounds take into consideration the limits of real and reactive power generation. The thesis furthermore proposes a novel AA-based method to solve the OPF problem with uncertain generation sources and hence determine the operating margins of the thermal generators in systems under these conditions. In the AA-based OPF problem, all the state and control variables are treated in affine form, comprising a center value and the corresponding noise magnitudes, to represent forecast, model error, and other sources of uncertainty without the need to assume a pdf. The AA-based approach is benchmarked against the MCS-based intervals, and is shown to obtain bounds close to the ones obtained using the MCS method, although they are slightly more conservative. Furthermore, the proposed algorithm to solve the AA-based OPF problem is shown to be efficient as it does not need the pdf approximations of the random variables and does not rely on iterations to converge to a solution. The applicability of the suggested approach is tested on a large real European power system.
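The contrast between affine and interval arithmetic that motivates the AA-based methods can be illustrated with a toy affine form supporting only addition and subtraction (a sketch; real AA implementations also handle multiplication and nonlinear operations by introducing fresh noise symbols):

```python
class Affine:
    """Minimal affine form: value = center + sum(coef_i * eps_i), eps_i in [-1, 1].
    Shared noise symbols capture correlation between quantities, so e.g.
    x - x is exactly zero, whereas interval arithmetic would widen it."""

    def __init__(self, center, noises=None):
        self.center = center
        self.noises = dict(noises or {})   # noise symbol -> partial deviation

    def __add__(self, other):
        n = dict(self.noises)
        for k, v in other.noises.items():
            n[k] = n.get(k, 0.0) + v
        return Affine(self.center + other.center, n)

    def __sub__(self, other):
        n = dict(self.noises)
        for k, v in other.noises.items():
            n[k] = n.get(k, 0.0) - v
        return Affine(self.center - other.center, n)

    def bounds(self):
        r = sum(abs(v) for v in self.noises.values())
        return self.center - r, self.center + r

# Hypothetical demand of 100 MW with +/-10 MW uncertainty (noise symbol "d")
p = Affine(100.0, {"d": 10.0})
print((p - p).bounds())   # (0.0, 0.0) -- AA tracks the correlation exactly
print((p + p).bounds())   # (180.0, 220.0)
```

Under pure interval arithmetic, [90, 110] − [90, 110] would give [−20, 20]; the cancellation above is the source of AA's tighter operational bounds.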
175

Evaluation environnementale de territoires à travers l'analyse de filières : la comptabilité biophysique pour l'aide à la décision délibérative / Environmental assessment of territories through supply chain analysis : biophysical accounting for deliberative decision-aiding

Courtonne, Jean-Yves 28 June 2016 (has links)
The consequences of our modes of production and consumption on the global environment have been recognized and analyzed for decades: climate change, biodiversity collapse, tensions over numerous strategic resources, etc. Our work follows a line of thought aiming to develop indicators of wealth other than Gross Domestic Product. In a perspective of strong sustainability, we focus on biophysical (non-monetary) accounting, with the objective of pinpointing environmental externalities. Since a large part of existing research in this domain targets national levels, we focus instead on subnational scales, with a strong emphasis on French regions. With decentralization policies, these territories are given increasing jurisdiction and benefit from greater margins of action than national or international levels to implement a transition to sustainability. After studying the characteristics of existing tools used in the fields of ecological economics and industrial ecology, such as the Ecological Footprint, Material Flow Analysis (MFA), Life Cycle Assessment and Input-Output Analysis, we focus on supply chains, which we analyze through the quantities of materials they mobilize during the production, transformation, transport and consumption steps. The method developed, Supply-Chain MFA, produces coherent flow diagrams at the national scale, in every region and, when data allow it, at infra-regional levels. These diagrams are based on a systematic reconciliation process of the available data. We assess the precision of the input data, which makes it possible to provide confidence intervals on the results and, in turn, to highlight gaps in knowledge. 
In particular, we provide a detailed uncertainty assessment of the French domestic road freight survey (TRM), a crucial piece of the Supply-Chain MFA. In doing so, we show that conducting the study over a period of several years not only solves the issue of stocks but also significantly reduces uncertainties on trade flows between regions. We then adapt the Absorbing Markov Chains framework to trace flows to their final destination and to allocate the environmental pressures occurring all along the supply chain. For instance, in the case of cereals, we study energy consumption, greenhouse gas emissions, the blue water footprint, land use and the use of pesticides. Material flows can also be coupled with economic models in order to forecast how they are likely to respond to certain policies. In collaboration with the Laboratory of Forest Economics (LEF), we thus provide the first attempt at representing the whole French forest-wood supply chain, and we analyze the impact of a set of policies on both the economy and physical flows. Finally, we show the opportunities of linking these supply-chain results with the qualitative methods deployed in the field of territorial ecology, stakeholder analysis in particular. We situate our work in the normative framework of deliberative democracy and are therefore interested in the contributions of biophysical accounting to public decision processes that include diverse stakeholders. We propose an overview of decision modes, of the key steps of decision-aiding, of multicriteria methods, and of the various forms taken by citizen participation. We eventually design a deliberation-aiding method based on eliciting each stakeholder's satisfaction and regret regarding a given future. It aims at organizing the discussion in an apparent-consensus mode, which by nature facilitates the respect of minorities. 
Finally, based on the main criticisms addressed to quantification, we propose in conclusion thoughts on the conditions that could put biophysical accounting at the service of democratic emancipation.
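The systematic data-reconciliation step underlying the Supply-Chain MFA can be sketched as a weighted least-squares adjustment: measured flows are corrected, in proportion to their variances, so that mass balances hold exactly. The node and the numbers below are hypothetical, not taken from the thesis:

```python
import numpy as np

def reconcile(measured, sigma, A):
    """Weighted least-squares data reconciliation: adjust the measured
    flows so that the mass-balance constraints A @ x = 0 hold exactly,
    minimizing the deviation weighted by the measurement variances."""
    S = np.diag(sigma**2)
    K = S @ A.T @ np.linalg.inv(A @ S @ A.T)   # gain matrix of the adjustment
    return measured - K @ (A @ measured)

# Hypothetical node: input flow x0 should equal the outputs x1 + x2
measured = np.array([100.0, 60.0, 45.0])   # raw survey data, off by 5 units
sigma = np.array([5.0, 2.0, 2.0])          # survey precisions (std. dev.)
A = np.array([[1.0, -1.0, -1.0]])          # x0 - x1 - x2 = 0

x = reconcile(measured, sigma, A)
print(x.round(2))   # balanced flows; the least precise flow moves the most
```

The per-flow variances are exactly where the TRM precision assessment mentioned above feeds in: better-characterized surveys are adjusted less.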
176

Optimisation multicritère pour une gestion globale des ressources : application au cycle du cuivre en France / Multicriteria optimization for a global resource management : application to French copper cycle

Bonnin, Marie 11 December 2013 (has links)
Improving the management of natural resources is necessary to address the many issues related to their exploitation. This work proposes an optimization methodology for their management, applied to the case of copper in France. 
Four criteria were identified to assess management strategies: cost, environmental impacts, energy consumption and resource losses. The first step of the methodology is the analysis of the current situation, by modelling the French copper cycle from 2000 to 2009. This analysis showed that France imports almost all of the copper it needs as refined copper, and has an underdeveloped recycling industry. Following these initial results, the problem of copper waste treatment, and recycling in particular, was investigated. A strategy for modelling recycled flows, based on the construction of flowsheets, was developed. The general mathematical formulation of the problem was then defined: it is a mixed, non-linear and a priori multiobjective problem, with a strong equality constraint (mass conservation). A review of optimization methods led to the choice of a genetic algorithm (GA). An alternative was also proposed to solve the multiobjective problem with linear programming, by linearizing it "under constraint". This work highlighted the need to develop an effective recycling chain for waste electrical and electronic equipment in France. It also showed that the copper contained in wastes cannot meet demand, so that France needs to import copper, preferably as scrap.
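A genetic algorithm of the kind retained in this work can be sketched as follows; the four criteria functions, the population settings and the penalty handling of the mass-conservation constraint are illustrative stand-ins, not the thesis's actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(x, weights):
    """Weighted-sum aggregation of four hypothetical criteria (stand-ins
    for cost, environmental impacts, energy and losses), plus a penalty
    enforcing the mass-conservation equality sum(x) = 1."""
    criteria = np.array([x @ x,
                         np.sum(np.abs(x - 1)),
                         np.sum(x**4),
                         np.sum(np.abs(x))])
    penalty = 1e3 * abs(np.sum(x) - 1.0)
    return weights @ criteria + penalty

def genetic_algorithm(weights, dim=4, pop=60, gens=200):
    P = rng.uniform(0, 1, (pop, dim))
    for _ in range(gens):
        f = np.array([fitness(x, weights) for x in P])
        elite = P[np.argsort(f)[:pop // 2]]                            # selection
        children = (elite + elite[rng.permutation(len(elite))]) / 2.0  # crossover
        children += rng.normal(0, 0.05, children.shape)                # mutation
        P = np.vstack([elite, children])                               # elitism
    f = np.array([fitness(x, weights) for x in P])
    return P[np.argmin(f)]

best = genetic_algorithm(np.array([1.0, 1.0, 1.0, 1.0]))
print(best.round(3))   # the mass-balance constraint is (nearly) satisfied
```

In a true multiobjective run the weighted sum would be replaced by Pareto ranking; the penalty term is one common way of softening the strong equality constraint noted above.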
177

Detecção de eventos de segurança de redes por intermédio de técnicas estatísticas e associativas aplicadas a fluxos de dados / Detection of network security events by means of statistical and associative techniques applied to data flows

Proto, André. January 2011 (has links)
Advisor: Adriano Mauro Cansian / Committee: Paulo Licio de Geus / Committee: Marcos Antônio Cavenaghi / Abstract: This work develops and consolidates a system for identifying and correlating the behavior of users and services on computer networks. Defining these profiles helps identify behavior that is anomalous with respect to the profile of a group of users and services, and helps detect attacks on computer networks. 
This system is based on the use of the IPFIX standard - IP Flow Information Export - as an exporter of summarized information from a computer network. The project comprises two main steps: the development of a flow collector based on the NetFlow protocol, formalized by the Internet Engineering Task Force (IETF) as the IPFIX standard, which improves the summarization of the provided information and so requires less storage space; and the use of statistical and associative data mining techniques for the detection, correlation and classification of behaviors and events in computer networks. This system model is innovative in analyzing network flows through data mining, adding important features to computer security and monitoring systems, such as scalability to high-speed networks and fast detection of illicit activities, network scans, intrusions, denial-of-service and brute-force attacks, events considered major threats on the Internet. It also gives network administrators better knowledge of the profile of the managed network. / Master's
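The flow-summarization-plus-statistics idea can be illustrated with a toy detector: flow records are aggregated per source, and a simple z-score threshold flags scan-like behavior. The record format, field names and threshold are assumptions for illustration, not the system's actual IPFIX schema:

```python
from collections import defaultdict

def detect_scanners(flows, z_threshold=3.0):
    """Toy flow-based scan detector (hypothetical record format): a source
    contacting far more distinct (destination, port) pairs than the
    population average is flagged as a likely network scanner."""
    targets = defaultdict(set)
    for src, dst, dport in flows:
        targets[src].add((dst, dport))
    counts = {src: len(s) for src, s in targets.items()}
    vals = list(counts.values())
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    std = var ** 0.5 or 1.0   # avoid division by zero for uniform traffic
    return [src for src, c in counts.items() if (c - mean) / std > z_threshold]

# Normal hosts touch a couple of services; 10.0.0.99 sweeps 200 ports
flows = [(f"10.0.0.{i}", "10.0.1.1", p) for i in range(1, 20) for p in (80, 443)]
flows += [("10.0.0.99", "10.0.1.1", p) for p in range(1, 201)]
print(detect_scanners(flows))   # ['10.0.0.99']
```

A production collector would work on real NetFlow/IPFIX records and maintain these aggregates incrementally, but the statistical step is of this shape.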
178

Sistema fluxo-batelada monossegmentado: determinação espectrofotométrica de boro em plantas. / Monosegmented flow-batch system: Spectrophotometric determination of boron in plants.

Barreto, Inakã Silva 30 August 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / This work introduces the monosegmented flow-batch (MSFB) analysis concept. This system combines favourable characteristics of both the flow-batch and the monosegmented analysers, allowing the flow-batch system to be used for slow reaction kinetics without impairing sensitivity or sampling throughput. The MSFB was evaluated through the spectrophotometric determination of boron in plant extracts, a method that involves a slow reaction between boron and azomethine-H. All standard solutions were prepared in-line, and all analytical processes were completed by simply changing the operational parameters in the MSFB control software. The limit of detection was estimated at 0.008 mg L−1. The measurements could be performed at a rate of 120 samples per hour with satisfactory precision. The proposed MSFB was successfully applied to the analysis of 10 plant samples, and the results are in agreement with the reference method at a 95% confidence level.
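The reported limit of detection follows the usual 3·σ(blank)/slope convention for spectrophotometric methods; below is a sketch with a hypothetical blank noise and calibration slope chosen so that the result reproduces the reported 0.008 mg L−1 (neither input value comes from the thesis):

```python
# Detection and quantification limits from the 3*sigma/slope and
# 10*sigma/slope conventions. Both inputs are assumed for illustration.
blank_sd = 0.0004   # std. dev. of blank absorbance readings (hypothetical)
slope = 0.15        # calibration slope, absorbance per mg/L (hypothetical)

lod = 3 * blank_sd / slope    # limit of detection
loq = 10 * blank_sd / slope   # limit of quantification
print(f"LOD = {lod:.3f} mg/L, LOQ = {loq:.3f} mg/L")   # LOD = 0.008 mg/L
```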
179

Materiais inorgânicos associados a sistemas multicomutados para a determinação de espécies químicas em alimentos / Inorganic materials associated with multicommuted systems for the determination of chemical species in food

LEOTERIO, Dilmo Marques da Silva 21 March 2016 (has links)
CNPQ / Two new multicommutation-based analytical methods were developed for the quantification of chemical species in food: the determination of reducing sugars in coconut water and fruit juices, and the potentiometric determination of perchlorates in horticultural products. The reducing-sugar method uses a multicommuted flow system with spectrophotometric detection, employing a copper(II)–4,4′-bipyridyl coordination compound as the solid-phase reagent. The redox reaction in alkaline medium between the blue solid phase and the reducing sugars (glucose + fructose) yields a yellowish product, monitored at 420 nm. The solid phase was fixed at 50 mg and the temperature at 90 °C. With a sample-zone volume of 160 µL, corresponding to 20 cycles, the system gave a linear response between 1.0 and 20.0 g L−1 of reducing sugars with an RSD of 4.47% (n = 7), a detection limit of 0.2250 g L−1, a quantification limit of 0.7496 g L−1, an analytical frequency of 75 determinations per hour, and effluent generation of 320 µL per determination. The reducing-sugar contents found in juices and coconut water ranged from 38.35 to 98.50 g L−1 and from 61.80 to 68.70 g L−1, respectively.

The potentiometric method employed a silica network, 2,5,8,11,14-pentaoxa-1-silacyclotetradecane, as a perchlorate preconcentration column in a multicommuted system coupled to a potentiometric detector. The tubular electrode membrane with the best performance consisted of 1% (w/w) BNIP 4,4 Dapm LC1 solubilized in 68% (w/w) 2-nitrodiphenyl ether as the mediator solvent and 31% (w/w) poly(vinyl chloride) as the polymeric matrix. A detection limit of 2.8 × 10−7 mol L−1 was obtained, with a linear response in the range 1.0 × 10−9 to 1.0 × 10−1 mol L−1, a linear correlation coefficient of 0.9998, and a column retention capacity of 2.86 × 10−3 mol of perchlorate. The system was applied to samples of different vegetables; perchlorate concentrations from 1.30 to 5.08 µg L−1 were found, with recoveries ranging from 96.5 to 110.8%.
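The linear response of a potentiometric electrode like the one described is conventionally checked against the Nernst equation, E = E0 + S·log10(c), where the theoretical slope S for a monovalent anion is about −59.16 mV per decade at 25 °C. A minimal sketch with hypothetical calibration data (not the thesis measurements):

```python
import numpy as np

# Hypothetical perchlorate-electrode calibration:
# concentration (mol/L) vs. measured potential (mV).
conc = np.array([1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2])
emf = np.array([310.0, 251.2, 192.1, 133.0, 74.2, 15.1])

# Fit E = E0 + S * log10(c); a near-Nernstian anion response
# gives a slope close to -59.16 mV/decade at 25 degC.
S, E0 = np.polyfit(np.log10(conc), emf, 1)
print(round(S, 1))  # prints -59.0
```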
180

Cálculo do empuxo ativo com determinação numérica da superfície freática / Calculation of active thrust with numerical determination of the phreatic surface

Santos Junior, Petrucio José dos, 1975- 16 August 2018 (has links)
Advisor: Pérsio Leister de Almeida Barros / Master's dissertation – Universidade Estadual de Campinas, Faculdade de Engenharia Civil, Arquitetura e Urbanismo / Abstract: The determination of active thrust in retaining-wall analysis through limit-equilibrium methods is routine practice in geotechnical engineering, mostly because of its analytic simplicity. However, when a phreatic surface is present in the retained soil, such a determination has no analytic result, and a numerical study is required to obtain a representative value of the pore water pressures in the soil for the analysis.

There are few technical publications on this theme, but its importance is recognized in the stability verification of drained retaining structures. This work proposes a numerical approach using the Boundary Element Method (BEM) to determine the position of the phreatic surface and then to calculate the active thrust by Coulomb's method, considering the influence of this surface. A computer program whose calculation algorithm is based on the BEM was implemented; it outputs the position of the phreatic surface, the total flow that reaches the drainage system, and the active thrust acting on the retaining structure. / Master's / Geotechnics / Master in Civil Engineering
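Coulomb's method, on which the active-thrust calculation above is based, gives the active earth-pressure coefficient in closed form for dry, cohesionless backfill; a minimal sketch of that classical formula (the thesis extends it with BEM-derived pore pressures, which this illustration does not include):

```python
import math

def coulomb_ka(phi, delta=0.0, theta=0.0, beta=0.0):
    """Coulomb active earth-pressure coefficient.

    phi   -- soil friction angle (radians)
    delta -- wall-soil friction angle (radians)
    theta -- wall back-face inclination from vertical (radians)
    beta  -- backfill slope (radians)
    """
    num = math.cos(phi - theta) ** 2
    root = math.sqrt(
        (math.sin(phi + delta) * math.sin(phi - beta))
        / (math.cos(delta + theta) * math.cos(beta - theta))
    )
    den = math.cos(theta) ** 2 * math.cos(delta + theta) * (1.0 + root) ** 2
    return num / den

# For a vertical, smooth wall with horizontal backfill, Coulomb's Ka
# reduces to Rankine's Ka = tan^2(45 - phi/2).
phi = math.radians(30)
print(round(coulomb_ka(phi), 3))                        # prints 0.333
print(round(math.tan(math.radians(45 - 15)) ** 2, 3))   # prints 0.333
```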
