171 |
Värdeflödesanalys på DIAB AB Laholm (Value stream analysis at DIAB AB Laholm). Mehmedovic, Edin, January 2006
This report is the result of a 20-credit project at the University of Jönköping. The project was carried out as a case study with the objective of analysing the value flow at DIAB AB’s confection department in Laholm. The aim of the project is to give the production management proposals on how to increase the efficiency of the production flow at the confection department and reduce the capital tied up in work in progress.

The information in this report was gathered from interviews, observations and measurements. Furthermore, a literature study was carried out in order to find suitable theories for analysing both the present and the proposed future production conditions.

This report is based on four main questions:

• What does the existing value flow process for the most produced product family look like?
• How does the value flow process for GS perform with respect to throughput time?
  o How long is the throughput time of a representative product of the GS family?
  o How long are the value-adding and non-value-adding times for that product along its production flow?
• Which production-related disturbances and cost drivers exist in the present value flow process?
• How could the value flow process for GS be made more efficient, less sensitive to disturbances and more competitive?

The existing value flow process for the most produced product family has been mapped and is illustrated in appendix 3. At present, the process includes nine workstations along the production chain.

According to this study, the throughput time of a representative GS product is 18.5 days. The value-adding time is only 16.1 minutes, that is, 0.061% of the entire throughput time. The remaining time, in other words the non-value-adding time, is 440 hours and consists mainly of storage and transport of products.

The production disturbances and cost drivers that characterise the value flow process include material-related disturbances, a high number of long shifts, long storage times before the customer order point and, with that, high capital tie-up, and finally unnecessary transports.

The improvement proposals aim to increase the efficiency of the value flow process and reduce the amount of tied-up capital by shifting from the present make-to-order production strategy (TMO) to assemble-to-order (MMO).

To make this possible, a store of semi-finished goods will be introduced after the standard confection, representing the new decoupling point. Production at the standard confection will then be governed by this semi-finished stock. The standard confection should produce in larger, aggregated order quantities based on forecasts in order to benefit from economies of scale, and production must proceed in a continuous flow according to the FIFO principle (First In, First Out). In addition, the special confection must produce according to a pull system, and only when a customer places an order.

The takt time of the GS products should constitute an upper limit for all cycle times along the production chain, at both the standard and the special confection. This is partly to create a constant and balanced production flow that enables short throughput times, and partly to avoid intermediate storage caused by bottlenecks.
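For illustration, the throughput figures above can be checked with a few lines of arithmetic. The sketch below assumes the 18.5 days are counted as calendar time (24 h/day), which the abstract does not state explicitly.

```python
# Rough reproduction of the value-adding share reported above.
# Assumption: the 18.5-day throughput time is counted in calendar hours (24 h/day).
throughput_days = 18.5
value_adding_min = 16.1

throughput_min = throughput_days * 24 * 60                    # 26,640 min
value_adding_share = value_adding_min / throughput_min
non_value_adding_h = (throughput_min - value_adding_min) / 60

print(f"value-adding share:    {value_adding_share:.3%}")     # ~0.060 %, the abstract rounds to 0.061 %
print(f"non-value-adding time: {non_value_adding_h:.0f} h")   # ~444 h, close to the 440 h reported
```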
172 |
Two Dimensional Numerical Modelling Of Variably Saturated Flows. Muthineni, Srinivas, 01 1900
The prediction of moisture and contaminant transport through unsaturated soil to ground water is becoming increasingly important in the fields of hydrology, agriculture and environmental engineering. Computer-aided simulation techniques enable one to conduct a series of systematic numerical experiments to analyze flow phenomena in subsurface hydrology under various physical and chemical processes. The flow movement depends upon the medium characteristics and the initial and boundary conditions, which reflect the physical processes occurring below the ground. To understand the effects of these physical processes, an efficient and accurate model is needed; the model developed must therefore be able to handle varied initial and boundary conditions. In this regard, infiltration into a very dry soil becomes a very important problem of study.
Most of the earlier numerical models concentrate either on the development of an efficient algorithm or on the modelling of a particular process governing the flow in an unsaturated or saturated-unsaturated homogeneous medium. Not much work has been done on the analysis of variably saturated flow in layered soil media, and models to simulate unsaturated flow through dry soils, especially through layered soils with varied boundary conditions, are very limited. Further, few studies have been reported in the literature on the prediction of seepage face development and phreatic surface movement in variably saturated media with layering. These aspects are very important in determining the flow field and the discharge from the domain. A detailed literature review covering the above aspects has been made and is reported in this thesis.
In the present work, two-dimensional numerical models are developed, based on finite difference and finite volume techniques, to predict the movement of the wetting front in an unsaturated domain and the movement of the phreatic surface in homogeneous and layered porous media under various initial and boundary conditions. These models can handle flow in both rectangular and radial flow domains. The initial and boundary condition settings include the handling of a very dry soil medium without any transformation of the governing equation, and of infiltration and constant head conditions at the boundaries under steady-state as well as transient scenarios. The models are also able to handle various soil moisture characteristics, which describe the nonlinear relationships between hydraulic conductivity, moisture content and pressure head in a soil medium.
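The nonlinear relationships between pressure head, moisture content and hydraulic conductivity mentioned above are commonly described with the van Genuchten-Mualem closure. The abstract does not say which characteristic curves the thesis actually uses, so the sketch below is only an illustrative assumption, with typical loam parameters rather than values taken from the thesis.

```python
import numpy as np

def van_genuchten(h, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56, Ks=24.96):
    """Illustrative van Genuchten-Mualem closure (typical loam parameters, not
    values from the thesis). h is the pressure head in cm, negative when unsaturated.
    Returns (theta, K, C): moisture content, conductivity [cm/day], capacity dtheta/dh."""
    h = np.asarray(h, dtype=float)
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0.0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)   # effective saturation
    theta = theta_r + (theta_s - theta_r) * Se
    K = Ks * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2        # Mualem conductivity
    C = np.where(                                                           # specific moisture capacity
        h < 0.0,
        (theta_s - theta_r) * alpha * m * n * (alpha * np.abs(h)) ** (n - 1.0)
        * (1.0 + (alpha * np.abs(h)) ** n) ** (-m - 1.0),
        0.0,
    )
    return theta, K, C

# Example: the very dry initial state of -15000 cm pressure head used in the study
print(van_genuchten(-15000.0))
```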
A mixed form of the governing partial differential equation is used in the present study as it leads to better mass conservation. The finite difference model uses a central difference approximation for the space derivatives and a backward Euler approximation for the time derivative. The fully implicit formulation is solved iteratively using the Strongly Implicit Procedure after making a Picard approximation for the highly nonlinear coefficients. The process of infiltration into an initially dry soil leads to the development of a steep wetting front, and since the finite volume technique is naturally an upwind method, it is expected to play a positive role in modelling such processes accurately. Hence, a finite volume model is also developed, approximating the convective part of the governing equation with a MUSCL approach and the diffusive part with a fully implicit central difference method.
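For reference, the mixed (theta-h) form referred to above, together with the modified Picard linearisation that freezes the nonlinear coefficients at the previous iterate, is commonly written as follows. This is the standard textbook formulation; the exact 2D discretisation used in the thesis may differ in detail.

```latex
% Mixed form of Richards' equation and one modified Picard step
% (standard formulation; the thesis's own scheme may differ in detail).
\begin{align}
  \frac{\partial \theta(h)}{\partial t}
    &= \nabla \cdot \bigl( K(h)\,\nabla h \bigr) + \frac{\partial K(h)}{\partial z}, \\
  \frac{\theta^{\,n+1,m} - \theta^{\,n}}{\Delta t}
    + C\!\left(h^{\,n+1,m}\right) \frac{h^{\,n+1,m+1} - h^{\,n+1,m}}{\Delta t}
    &= \nabla \cdot \Bigl( K\!\left(h^{\,n+1,m}\right) \nabla h^{\,n+1,m+1} \Bigr)
    + \frac{\partial K\!\left(h^{\,n+1,m}\right)}{\partial z},
\end{align}
```

where n is the time level, m the Picard iteration and C = dtheta/dh the specific moisture capacity; iterating on m until the head increment becomes small gives the mass-conservative update for which the mixed form is preferred.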
The models developed are validated using both experimental data and numerical solutions for problems reported in the literature. The five validation problems cover a wide range of physical scenarios: infiltration into a very dry soil, infiltration into a dry soil column with gravity drainage, development of a water table mound, steady-state drainage in a sand-filled wedge-shaped tank with seepage face development, and transient seepage face development in a rectangular domain. The developed models perform very well for the test problems considered, indicating their capability in handling such situations. The results obtained with the present models for simulating flow through highly unsaturated (very dry) soils compare well with those of models that use transformation techniques to handle such problems, and the agreement with the experimental data and numerical models available in the literature confirms the suitability of the present models for such situations.
The present models are also used to analyse various types of unsaturated flow problems with varying initial and boundary conditions. The boundary conditions considered are no-flow and/or free-flow conditions along the left, right and bottom boundaries, with an infiltration condition along part of the top boundary. For the various cases considered in the present study, the infiltration rate varies from 5 cm/day to 50 cm/day into an initially very dry soil at -15000 cm pressure head, in both homogeneous and layered soils. The soil media considered range over sandy loam, loam and clay, with horizontal and vertical layering of these soils. A total of 14 cases are analysed. The results are discussed in terms of the pressure head variation in the flow domain along with the moisture redistribution for all the cases under consideration. It is observed from these studies that the infiltration rate plays an important role in the wetting front movement through layered soils, depending on the type of layering and the boundary conditions considered. The soil properties of the various layers affect the movement of the wetting front by changing its direction. Even though the wetting front movement is predominantly vertical, the wetting front tends to move in the horizontal direction as it passes from a coarse soil to a fine soil. It is also observed that vertical layering of soils with different hydraulic conductivities helps in redirecting the flow towards the bottom boundary through the neighbouring coarser layers.
As the finite volume method is more suitable for simulating sharp fronts, it is expected to perform better than the finite difference method for simulating infiltration into very dry soils. A comparison is therefore made between the two models using the above test problems. It is observed that the performance of the two models is essentially the same, except that the finite volume model requires much more CPU time than the finite difference model. For the type of problems tested, the finite difference method is therefore preferable to the finite volume method for modelling unsaturated and saturated-unsaturated flows. This may be due to the predominantly diffusive character of the governing equation, even when modelling flow through very dry soils.
Proper estimation of the seepage face height is an important aspect of finding the discharge through a porous medium. It is observed from the literature that the use of a saturated flow model in such situations can lead to an underestimation of the discharge, and this effect is more important when dealing with small-dimension problems. Various parameters governing the moisture movement through saturated-unsaturated regions affect the proper estimation of the seepage face height and thereby the discharge. The effects of factors such as the boundary conditions, the type of soil layering, the problem dimensions and the aspect ratio on seepage face development and the associated phreatic surface formation are studied in the present work. It is seen from the present study that the seepage face development is greater in rectangular flow domains than in radial flow domains for both homogeneous and layered soils, and that the seepage face in rectangular problems is more sensitive to the various factors considered than in radial flow problems. The seepage height is also influenced by the tailwater level: as the tailwater level increases, the seepage face height decreases, with no seepage face developing in some of the cases studied; this influence is relatively small for radial flow problems. As the length of the domain increases the seepage height decreases, while for cases with the same horizontal dimension the seepage face height increases with the height of the domain. This is observed for both homogeneous and layered soil media. The influence of the aspect ratio, defined as the ratio of the length to the height of the domain, indicates that the seepage height decreases as the aspect ratio increases. The type of soil layering is observed to have a very strong influence on the seepage face development. The study of the effect of soil layering on the development of the seepage face and the phreatic surface suggests that the phreatic surface becomes flatter as the coarseness of the material increases and steeper as the soil becomes finer.
The present model is also used to study the transient phreatic surface movement and seepage face development in homogeneous and layered rectangular soil media, and in particular the effect of specific storage on both. The studies indicate that the influence of specific storage on the seepage face development is insignificant in homogeneous soils, with only a very small effect at early times for longer domains. In the case of layered soils, however, the influence of specific storage is significant; it depends on the type of layering and the problem dimensions and persists for a relatively long period, which underlines the importance of specific storage for transient seepage face development. When the specific storage effect is considered, the drainage of the soil becomes faster, resulting in a faster decline of the phreatic surface with time. The influence of specific storage is also studied in relation to the problem dimensions: as the aspect ratio increases, the effect of specific storage on the phreatic surface development decreases. The studies with a change in the upstream boundary condition from a constant head to a no-flow condition indicate that specific storage has no significant influence on the phreatic surface development for either homogeneous or layered soils.
174 |
The effect of scale on the morphology, mechanics and transmissivity of single rock fractures. Fardin, Nader, January 2003
This thesis investigates the effect of scale on the morphology, mechanics and transmissivity of single rock fractures using both laboratory and in-situ experiments, as well as numerical simulations. Using a laboratory 3D laser scanner, the surface topography of a large silicon-rubber fracture replica of size 1 m x 1 m, as well as the topography of both surfaces of several high-strength concrete fracture replicas varying in size from 50 mm x 50 mm to 200 mm x 200 mm, were scanned. A geodetic Total Station and an in-situ 3D laser radar were also utilized to scan the surface topography of a large natural road-cut rock face of size 20 m x 15 m in the field. This digital characterization of the fracture samples was then used to investigate the scale dependency of the three-dimensional morphology of the fractures using a fractal approach. The fractal parameters of the surface roughness of all fracture samples, including the geometrical aperture of the concrete fracture samples, were obtained using the Roughness-Length method.

The results obtained from the fractal characterization of the surface roughness of the fracture samples show that both the fractal dimension, D, and the amplitude parameter, A, of a self-affine surface are scale-dependent, heterogeneous and anisotropic, and their values generally decrease with increasing size of the sample. However, this scale dependency is limited to a certain size, defined as the stationarity threshold, beyond which the surface roughness parameters of the fracture samples remain essentially constant. The surface roughness and the geometrical aperture of the concrete fracture replicas tested in this study did not reach stationarity, due to the structural non-stationarity of their surfaces at small scales. Although the aperture histogram of the fractures was almost independent of the sample size, below their stationarity threshold both the Hurst exponent, Hb, and the aperture proportionality constant, Gb, decrease on increasing the sample size.

To investigate the scale effect on the mechanical properties of single rock fractures, several normal loading and direct shear tests were performed on the concrete fracture replicas subjected to different normal stresses under Constant Normal Load (CNL) conditions. The results showed that both the normal and shear stiffnesses, as well as the shear strength parameters of the fracture samples, decrease on increasing the sample size. It was observed that the structural non-stationarity of surface roughness largely controls the contact areas and damage zones on the fracture surfaces in relation to the direction of shearing.

The aperture maps of the concrete fracture replicas of varying size and at different shear displacements, obtained from numerical simulation of the aperture evolution during shearing using their digitized surfaces, were used to investigate the effect of scale on the transmissivity of the single rock fractures. A FEM code was utilized to numerically simulate the fluid flow through the single rock fractures of varying size. The results showed that the flow rate not only increases on increasing the sample size, but also increases significantly in the direction perpendicular to the shearing, due to the anisotropic roughness of the fractures.

Key words: Anisotropy, Aperture, Asperity degradation, Contact area, Finite Element Method (FEM), Flow analysis, Fractals, Fracture morphology, Heterogeneity, Stress-deformation, Surface roughness, Roughness-Length method, Scale dependency, Stationarity, Transmissivity, 3D laser scanner.
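The Roughness-Length method named in the abstract estimates the Hurst exponent H from how the RMS roughness of locally detrended windows scales with window length; the fractal dimension then follows from H. The sketch below is a generic 1D-profile version of that idea (D = 2 - H for a profile, whereas the thesis works with surfaces, where D = 3 - H) and is not the thesis's own implementation.

```python
import numpy as np

def roughness_length(z, dx=1.0, window_sizes=(8, 16, 32, 64, 128)):
    """Generic 1D Roughness-Length estimate of the Hurst exponent H.
    For each window length w the profile is detrended window by window, the RMS
    roughness s(w) is averaged, and a power law s(w) ~ w^H is fitted."""
    z = np.asarray(z, dtype=float)
    lengths, rms = [], []
    for w in window_sizes:
        residuals = []
        for start in range(0, len(z) - w + 1, w):
            seg = z[start:start + w]
            x = np.arange(w) * dx
            a, b = np.polyfit(x, seg, 1)                   # local linear detrend
            residuals.append(np.std(seg - (a * x + b), ddof=1))
        lengths.append(w * dx)
        rms.append(np.mean(residuals))
    H = np.polyfit(np.log(lengths), np.log(rms), 1)[0]     # slope of the log-log fit
    return H, 2.0 - H                                      # Hurst exponent, fractal dimension of the profile

# Example on a synthetic Brownian profile, for which H should be close to 0.5
rng = np.random.default_rng(0)
profile = np.cumsum(rng.normal(size=4096))
print(roughness_length(profile))
```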
175 |
Turbulent Flow Analysis and Coherent Structure Identification in Experimental Models with Complex Geometries. Amini, Noushin, December 2011
Turbulent flows and coherent structures emerging within turbulent flow fields have been extensively studied for the past few decades, and a wide variety of experimental and numerical techniques have been developed for the measurement and analysis of turbulent flows. The complex nature of turbulence requires methods that can accurately estimate its highly chaotic spatial and temporal behavior. Some of the classical cases of turbulent flows with simpler geometries have been well characterized by means of existing experimental techniques and numerical models. Nevertheless, since most turbulent flow fields involve complex geometries, there is an increasing interest in the study of turbulent flows through models with more complicated geometries.
In this dissertation, characteristics of turbulent flows through two different facilities with complex geometries are studied by applying two different experimental methods. The first study involves the investigation of turbulent impinging jets through a staggered array of rods with or without crossflow. Such flows are crucial in various engineering disciplines. This experiment aimed at modeling the coolant flow behavior and mixing phenomena within the lower plenum of a Very High Temperature Reactor (VHTR). Dynamic Particle Image Velocimetry (PIV) and Matched Index of Refraction (MIR) techniques were applied to acquire the turbulent velocity fields within the model. Some key flow features that may significantly enhance the flow mixing within the test section or actively affect some of the structural components were identified in the velocity fields. The evolution of coherent structures within the flow field is further investigated using a Snapshot Proper Orthogonal Decomposition (POD) technique. Furthermore, a comparative POD method is proposed and successfully implemented for the identification of smaller but highly influential coherent structures which may not be captured in the full-field POD analysis.
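Snapshot POD, as used above, is conveniently computed from the singular value decomposition of a mean-subtracted snapshot matrix. The sketch below is a generic version of that procedure with random data standing in for the PIV fields; it does not reproduce the comparative POD variant proposed in the dissertation.

```python
import numpy as np

def snapshot_pod(snapshots):
    """Generic snapshot POD via the thin SVD.
    snapshots: (n_points, n_snapshots) array, one velocity field per column.
    Returns the mean field, spatial modes, modal energies and temporal coefficients."""
    mean_field = snapshots.mean(axis=1, keepdims=True)
    fluctuations = snapshots - mean_field                   # subtract the mean flow
    U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
    energies = s**2 / (snapshots.shape[1] - 1)              # modal (variance) energies
    coeffs = np.diag(s) @ Vt                                # temporal coefficients
    return mean_field, U, energies, coeffs

# Synthetic stand-in for a PIV series: 500 grid points, 200 snapshots
rng = np.random.default_rng(1)
data = rng.normal(size=(500, 200))
mean_field, modes, energies, coeffs = snapshot_pod(data)
print(energies[:5] / energies.sum())                        # energy fraction of the leading modes
```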
The second experimental study examines the coolant flow through the core of an annular pebble bed VHTR. The complex geometry of the core and the highly turbulent nature of the coolant flow passing through the gaps between fuel pebbles make this case quite challenging. In this experiment, a high-frequency Hot Wire Anemometry (HWA) system is applied for velocity measurements and investigation of the bypass flow phenomena within the near-wall gaps of the core. The velocity profiles within the gaps verify the presence of an area of increased velocity close to the outer reflector wall; however, the characteristics of the coolant flow profile are highly dependent on the gap geometry and, to a lesser extent, on the Reynolds number of the flow. The time histories of the velocity are further analyzed using a Power Spectral Density (PSD) technique to acquire information about the energy content and the energy transfer between eddies of different sizes at each point within the gaps.
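Power spectral densities of hot-wire velocity records are commonly estimated with Welch's averaged-periodogram method. The sampling rate and signal below are invented for illustration and are not the experiment's actual settings.

```python
import numpy as np
from scipy.signal import welch

fs = 10_000.0                       # assumed sampling frequency [Hz]
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(2)
# Synthetic velocity record: mean flow + a coherent 150 Hz component + broadband noise
u = 5.0 + 0.3 * np.sin(2 * np.pi * 150.0 * t) + 0.2 * rng.normal(size=t.size)

# Welch PSD of the velocity fluctuations
f, Puu = welch(u - u.mean(), fs=fs, nperseg=2048)
print(f[np.argmax(Puu)])            # dominant frequency, close to 150 Hz
```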
176 |
Stochastic Modeling and Analysis of Power Systems with Intermittent Energy Sources. Pirnia, Mehrdad, 10 February 2014
Electric power systems continue to increase in complexity because of the deployment of market mechanisms, the integration of renewable generation and distributed energy resources (DER) (e.g., wind and solar), the penetration of electric vehicles and other price sensitive loads. These revolutionary changes and the consequent increase in uncertainty and dynamicity call for significant modifications to power system operation models including unit commitment (UC), economic load dispatch (ELD) and optimal power flow (OPF). Planning and operation of these "smart" electric grids are expected to be impacted significantly, because of the intermittent nature of various supply and demand resources that have penetrated into the system with the recent advances.
The main focus of this thesis is on the application of the Affine Arithmetic (AA) method to power system operational problems. The AA method is a very efficient and accurate tool to incorporate uncertainties, as it takes into account all the information amongst dependent variables, by considering their correlations, and hence provides less conservative bounds compared to the Interval Arithmetic (IA) method. Moreover, the AA method does not require assumptions to approximate the probability distribution function (pdf) of random variables.
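An affine form represents an uncertain quantity as a centre value plus partial deviations attached to shared noise symbols in [-1, 1], which is what lets AA track correlations between variables. The minimal sketch below shows only the exact linear operations and the resulting bounds; it is not the AA power flow or OPF formulation developed in the thesis.

```python
class AffineForm:
    """Minimal affine form: x = x0 + sum_i x_i * eps_i, with eps_i in [-1, 1].
    Only exact (linear) operations are shown; a nonlinear operation would add a
    new noise symbol carrying its approximation error."""

    def __init__(self, center, noise=None):
        self.center = float(center)
        self.noise = dict(noise or {})                  # noise symbol -> partial deviation

    def __add__(self, other):
        if isinstance(other, AffineForm):
            noise = dict(self.noise)
            for k, v in other.noise.items():
                noise[k] = noise.get(k, 0.0) + v
            return AffineForm(self.center + other.center, noise)
        return AffineForm(self.center + other, self.noise)

    def scale(self, a):
        return AffineForm(a * self.center, {k: a * v for k, v in self.noise.items()})

    def bounds(self):
        radius = sum(abs(v) for v in self.noise.values())
        return self.center - radius, self.center + radius

# Two loads sharing the demand-uncertainty symbol "e1"
p1 = AffineForm(100.0, {"e1": 10.0})
p2 = AffineForm(50.0, {"e1": -5.0, "e2": 2.0})
print((p1 + p2).bounds())                               # (143.0, 157.0)
```

Interval arithmetic on the same two quantities would give [133, 167]; the shared symbol "e1" partially cancels in the affine sum, which is the less conservative behaviour referred to above.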
In order to take advantage of the AA method in power flow analysis problems, first a novel formulation of the power flow problem within an optimization framework that includes complementarity constraints is proposed. The power flow problem is formulated as a mixed complementarity problem (MCP), which can take advantage of robust and efficient state-of-the-art nonlinear programming (NLP) and complementarity problems solvers. Based on the proposed MCP formulation, it is formally demonstrated that the Newton-Raphson (NR) solution of the power flow problem is essentially a step of the traditional General Reduced Gradient (GRG) algorithm. The solution of the proposed MCP model is compared with the commonly used NR method using a variety of small-, medium-, and large-sized systems in order to examine the flexibility and robustness of this approach.
The MCP-based approach is then used in a power flow problem under uncertainties, in order to obtain the operational ranges for the variables based on the AA method considering active and reactive power demand uncertainties. The proposed approach does not rely on the pdf of the uncertain variables and is therefore shown to be more efficient than the traditional solution methodologies, such as Monte Carlo Simulation (MCS). Also, because of the characteristics of the MCP-based method, the resulting bounds take into consideration the limits of real and reactive power generation.
The thesis furthermore proposes a novel AA-based method to solve the OPF problem with uncertain generation sources and hence determine the operating margins of the thermal generators in systems under these conditions. In the AA-based OPF problem, all the state and control variables are treated in affine form, comprising a center value and the corresponding noise magnitudes, to represent forecast, model error, and other sources of uncertainty without the need to assume a pdf. The AA-based approach is benchmarked against the MCS-based intervals, and is shown to obtain bounds close to the ones obtained using the MCS method, although they are slightly more conservative. Furthermore, the proposed algorithm to solve the AA-based OPF problem is shown to be efficient as it does not need the pdf approximations of the random variables and does not rely on iterations to converge to a solution. The applicability of the suggested approach is tested on a large real European power system.
177 |
Evaluation environnementale de territoires à travers l'analyse de filières : la comptabilité biophysique pour l'aide à la décision délibérative / Environmental assessment of territories through supply chain analysis: biophysical accounting for deliberative decision-aiding. Courtonne, Jean-Yves, 28 June 2016
The consequences of our modes of production and consumption on the global environment have been recognized and analyzed for many decades: climate change, biodiversity collapse, tensions on numerous strategic resources, and so on. Our work follows a line of thought aiming at developing other indicators of wealth, alternatives to Gross Domestic Product. In particular, in a perspective of strong sustainability, we focus on biophysical (non-monetary) accounting, with the objective of pinpointing environmental externalities. Since a large part of existing research in this domain targets the national level, we focus instead on subnational scales, with a strong emphasis on French regions. With decentralization policies, these territories are given increasing jurisdiction and also benefit from greater margins of action than the national or international levels to implement a transition to sustainability. After studying the characteristics of existing tools used in the fields of ecological economics and industrial ecology, such as the Ecological Footprint, Material Flow Analysis (MFA), Life Cycle Assessment and Input-Output Analysis, we focus on supply chains, which we analyze through the quantities of materials they mobilize during the production, transformation, transport and consumption steps. The method developed, Supply-Chain MFA, provides coherent flow diagrams at the national scale, in every region and, when data allow it, at infra-regional levels. These diagrams are based on a systematic reconciliation of the available data. We assess the precision of the input data, which makes it possible to provide confidence intervals on the results and, in turn, to highlight gaps in knowledge. In particular, we provide a detailed uncertainty assessment of the French domestic road freight survey (TRM), a crucial piece of the Supply-Chain MFA. In doing so, we show that carrying out the material balance over a period of several years not only removes the issue of stocks but also significantly reduces the uncertainty on trade flows between regions. We then adapt the absorbing Markov chain framework to trace flows to their final destination and to allocate the environmental pressures occurring all along the supply chain; in the case of cereals, for instance, we study energy consumption, greenhouse gas emissions, the blue water footprint, land use and the use of pesticides. Material flows can also be coupled with economic models in order to forecast how they will respond to certain policies. In collaboration with the Laboratoire d'Economie Forestière (LEF), we thus provide the first attempt at representing the whole French forest-wood supply chain, and we analyze the impact of policies restricting raw-wood exports on both the economy and the physical flows. Finally, we show how these supply-chain analyses could be linked with the qualitative methods deployed in the field of territorial ecology, stakeholder analysis in particular.

We situate our work in the normative framework of deliberative democracy and are therefore interested in the contributions of biophysical accounting to public decision processes that include diverse stakeholders. We propose an overview of decision modes, of the key steps of a decision-aiding process, of multicriteria methods, and of the various forms that citizen participation can take. We eventually design a deliberation-aiding method based on the elicitation of each stakeholder's satisfaction and regret regarding a given future; it aims at organizing the discussion in an apparent-consensus mode, which by nature facilitates respect for minorities. Finally, starting from the main criticisms addressed to quantification, we conclude with thoughts on the conditions under which biophysical accounting could be put at the service of democratic emancipation.
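Tracing flows to their final destination with absorbing Markov chains, as mentioned in the abstract above, comes down to computing the fundamental matrix of the chain. The sketch below uses an invented two-stage chain purely to show the mechanics; it is not the Supply-Chain MFA model itself.

```python
import numpy as np

# Hypothetical chain. Transient states: extraction, processing.
# Absorbing states: domestic use, export. Rows of [Q | R] sum to 1.
Q = np.array([[0.0, 0.9],     # extraction  -> (extraction, processing)
              [0.0, 0.0]])    # processing  -> (extraction, processing)
R = np.array([[0.02, 0.08],   # extraction  -> (domestic use, export)
              [0.70, 0.30]])  # processing  -> (domestic use, export)

# Fundamental matrix N = (I - Q)^-1: expected passages through each transient state.
N = np.linalg.inv(np.eye(2) - Q)
# B[i, j]: share of material starting in transient state i that ends in absorbing state j.
B = N @ R
print(B[0])                   # final destinations of extracted material: [0.65, 0.35]

# Invented pressures emitted per tonne handled at each transient stage (e.g. kg CO2e/t)
pressures = np.array([1.2, 0.4])
print(N @ pressures)          # cumulative pressure per tonne entering each stage: [1.56, 0.4]
```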
178 |
Optimisation multicritère pour une gestion globale des ressources : application au cycle du cuivre en France / Multicriteria optimization for global resource management: application to the French copper cycle. Bonnin, Marie, 11 December 2013
L'amélioration de la gestion des ressources naturelles est nécessaire pour répondre aux nombreux enjeux liés à leur exploitation. Ce travail propose une méthodologie d'optimisation de leur gestion, appliquée au cas du cuivre en France. Quatre critères permettant de juger les stratégies de gestion ont été retenus : le coût, les impacts environnementaux, la consommation énergétique et les pertes de ressources. La première étape de cette méthodologie est l'analyse de la situation actuelle, grâce à une modélisation du cycle français du cuivre de 2000 à 2009. Cet examen a montré que la France importe la quasi-totalité de ses besoins sous forme de cuivre raffiné, et a une industrie de recyclage peu développée. Suite à ces premiers résultats, la problématique du traitement des déchets de cuivre, et notamment de leur recyclage, a été étudiée. Une stratégie de modélisation des flux recyclés, basée sur la construction de flowsheets, a été développée. La formulation mathématique générale du problème a ensuite été définie : il s'agit d'un problème mixte, non-linéaire et a priori multiobjectif, qui a une contrainte égalité forte (la conservation de la masse). Une étude des méthodes d'optimisation a conduit à choisir un algorithme génétique (AG). Une alternative a également été envisagée pour résoudre le problème multiobjectif par programmation linéaire en le linéarisant "sous contrainte". Ce travail a mis en évidence la nécessité de développer une filière de recyclage efficace des déchets électriques et électroniques en France. Il a de plus montré que le cuivre contenu dans les déchets ne permet pas de couvrir la demande et qu'il est nécessaire d'importer du cuivre, de préférence sous forme de débris. / Improving the natural resources management is necessary to address the many issues related to their exploitation. This work proposes an optimization methodology for their management, applied to the case of copper in France. Four criteria are identified to assess management strategies: cost, environmental impacts, energy consumption and resource losses. The first step of this methodology is the analysis of the current situation, by modelling the French copper cycle from 2000 to 2009. This analysis showed that France imports almost all of its needs as refined copper, and has an underdeveloped recycling industry. Following these initial results, the problematic of copper wastes, including recycling, has been investigated. A recycled flow modelling strategy has been developed, based on the construction of flowsheets. The general mathematical formulation of the problem is then defined. It is a non-linear, mixed and a priori multiobjective problem, with a strong equality constraint (mass conservation). A review of optimization methods has led to choose a genetic algorithm (GA). An alternative was also proposed to solve the multiobjective problem with linear programming, by linearizing it under constraint. This work has highlighted the necessity of developing an effective recycling field of wastes from electric and electronic equipment in France. It also showed that the copper contained in wastes does not meet the demand, so that France needs to import copper, preferably as scraps.
179 |
Detecção de eventos de segurança de redes por intermédio de técnicas estatísticas e associativas aplicadas a fluxos de dados (Detection of network security events by means of statistical and associative techniques applied to data flows). Proto, André, January 2011
Advisor: Adriano Mauro Cansian / Examination committee: Paulo Licio de Geus, Marcos Antônio Cavenaghi / Abstract: This work develops and consolidates a system for identifying and correlating the behavior of users and services in computer networks. The definition of these profiles helps to identify behavior that deviates from the profile of a group of users and services and to detect attacks on computer networks. The system is based on the IPFIX (IP Flow Information Export) standard as an exporter of summarized information from a computer network. The project comprises two main steps: the development of a flow collector based on the NetFlow protocol, formalized by the Internet Engineering Task Force (IETF) as the IPFIX standard, which improves the summarization of the exported information and thus requires less storage space; and the use of statistical and associative data mining techniques for the detection, correlation and classification of behaviors and events in computer networks. This system model is innovative in analyzing network flows through data mining, adding important features to monitoring and computer security systems, such as scalability for high-speed networks and fast detection of illicit activities, network scans, intrusions, denial-of-service and brute-force attacks, events considered major threats on the Internet. It also gives network administrators better knowledge of the profile of the managed network. / Master's degree (Mestre)
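A collector of the kind described feeds per-host flow summaries into statistical checks. The sketch below is a generic illustration with made-up records and a plain z-score rule on port fan-out; it is not the detection system built in this dissertation.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Made-up NetFlow/IPFIX-style records: (source IP, destination IP, destination port, bytes)
flows = [
    ("10.0.0.5", "10.0.0.80", 80, 1200),
    ("10.0.0.5", "10.0.0.80", 443, 900),
    ("10.0.0.7", "10.0.0.80", 80, 1500),
    # one host touching many distinct ports, as a port scan would
    *[("10.0.0.9", "10.0.0.80", p, 60) for p in range(1, 200)],
]

# Summarize per source host: number of distinct destination ports contacted
ports_per_host = defaultdict(set)
for src, dst, dport, nbytes in flows:
    ports_per_host[src].add(dport)
counts = {src: len(ports) for src, ports in ports_per_host.items()}

# Flag hosts whose port fan-out sits far above the population mean (simple z-score rule)
mu, sigma = mean(counts.values()), pstdev(counts.values())
suspects = [src for src, c in counts.items() if sigma and (c - mu) / sigma > 1.0]
print(suspects)   # ['10.0.0.9']
```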
180 |
Sistema fluxo-batelada monossegmentado: determinação espectrofotométrica de boro em plantas / Monosegmented flow-batch system: spectrophotometric determination of boron in plants. Barreto, Inakã Silva, 30 August 2012
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).

This work introduces the monosegmented flow-batch (MSFB) analysis concept. The system combines favourable characteristics of both the flow-batch and the monosegmented analysers, allowing the flow-batch system to be used for slow reaction kinetics without impairing sensitivity or sampling throughput. The MSFB was evaluated in the spectrophotometric determination of boron in plant extracts, a method that involves a slow reaction between boron and azomethine-H. All standard solutions were prepared in-line, and all analytical steps were carried out simply by changing the operational parameters in the MSFB control software. The limit of detection was estimated at 0.008 mg L−1. The measurements could be performed at a rate of 120 samples per hour with satisfactory precision. The proposed MSFB was successfully applied to the analysis of 10 plant samples and the results are in agreement with the reference method at the 95% confidence level.
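A detection limit like the 0.008 mg L−1 quoted above is typically derived from the calibration slope and the blank noise (the common 3·s_blank/slope convention). The data below are invented solely to show the calculation; they are not the thesis's measurements.

```python
import numpy as np

# Invented azomethine-H calibration data: boron standards [mg/L] vs absorbance
conc = np.array([0.0, 0.2, 0.4, 0.8, 1.6])
absorbance = np.array([0.012, 0.095, 0.180, 0.345, 0.690])
blank_replicates = np.array([0.011, 0.013, 0.012, 0.010, 0.014,
                             0.012, 0.013, 0.011, 0.012, 0.013])

slope, intercept = np.polyfit(conc, absorbance, 1)   # least-squares calibration line

s_blank = blank_replicates.std(ddof=1)               # standard deviation of the blank
lod = 3.0 * s_blank / slope                          # 3*s_blank / slope convention
print(f"slope = {slope:.3f} AU L/mg, LOD ~ {lod:.4f} mg/L")
```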