401
Characterization, Stabilization, and Utilization of Waste-to-Energy Residues in Civil Engineering Applications. Tian, Yixi. January 2022.
About 27 million metric tons of municipal solid waste are used annually as fuel in U.S. Waste-to-Energy (WTE) power plants, which generate seven million tons of bottom ash (BA) and fly ash (FA) per year. In the U.S., bottom ash and fly ash residues are mixed into "combined ash" (CA) in an approximate ratio of 6 to 1 and, after metal separation, are disposed of in landfills. The disposal of WTE ash is a significant cost and land-use burden for waste management.
This dissertation aims to (i) comprehensively characterize WTE ash and its properties; (ii) provide practical and economical stabilization technologies that reduce the leachability of heavy metals in WTE ash and assess whether the ash can then be beneficially used as a secondary material; and (iii) utilize the stabilized/processed WTE ash as secondary construction material in civil engineering applications, thus diverting material from landfills and contributing to the circular economy.
The Characterization section provides a comprehensive assessment of WTE bottom ash, fly ash, and combined ash, including chemical composition (XRF, ICP-OES, IC), mineral composition (X-ray diffraction, XRD, quantification), thermogravimetric analysis (TGA), particle size distribution, and scanning electron microscopy (SEM). The physical properties of the WTE residues were also investigated, including moisture, bulk density, specific gravity, void content, and water absorption. Leaching Environmental Assessment Framework (LEAF) Method 1313 of the U.S. Environmental Protection Agency (EPA) was used to understand the effect of eluate pH on the leachability of heavy metals. A combination of the above methods was applied to quantify the crystalline and amorphous phases present in the WTE residues and the produced specimens.
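As a rough illustration of how Method 1313 pH-dependence results can be handled downstream, the sketch below interpolates a leached concentration at an arbitrary eluate pH and compares it with a screening value. The element, pH grid, concentrations, and limit are hypothetical placeholders, not data from this dissertation.

```python
import numpy as np

# LEAF Method 1313 style results: leached concentration at a series of target eluate pH values.
# These numbers are hypothetical placeholders, not measurements from the dissertation.
eluate_ph = np.array([2.0, 4.0, 5.5, 7.0, 8.0, 9.0, 10.5, 12.0, 13.0])
pb_mg_per_l = np.array([4.1, 1.2, 0.35, 0.08, 0.05, 0.06, 0.12, 0.9, 2.3])   # illustrative Pb data

def leached_at(ph, ph_grid, conc):
    """Linearly interpolate the leached concentration at an arbitrary eluate pH."""
    return float(np.interp(ph, ph_grid, conc))

screening_limit = 0.75   # mg/L, placeholder value, not a regulatory figure
for ph in (4.0, 7.0, 11.0):
    c = leached_at(ph, eluate_ph, pb_mg_per_l)
    status = "above" if c > screening_limit else "below"
    print(f"eluate pH {ph:4.1f}: {c:.2f} mg/L ({status} the placeholder limit)")
```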
In the U.S., WTE BA is discharged from the combustion chamber into a water tank. The BA comprises 50-70% mineral fraction, 15-30% glass and ceramics, 5-13% ferrous metals, 2-5% non-ferrous metals, and 1-5% unburned organics. The BA samples studied in this thesis were received after ferrous and non-ferrous metal recovery. The major chemical constituents are SiO2 (34%), CaO (21%), Al2O3 (9%), and Fe2O3 (11%). According to the XRD quantification results, BA consists of 76% amorphous phases (glass and metastable minerals), and the dominant crystalline mineral is quartz (SiO2, 12%). The calcium silicate (aluminate) hydrate (C-S-(A)-H) gel formed during water quenching embeds fine particles in the amorphous phases.
U.S. WTE air pollution control systems commonly include semi-dry scrubbers, with a few plants using dry scrubbers. FA consists of two kinds of particles: furnace particles carried in the process gas and particles newly formed in the scrubber. The major constituents of FA are CaO (40%), Cl (15%), SO3 (8%), CO2 (8%), and activated carbon/organic matter (3%), reflecting the injection of sorbents (hydrated lime and activated carbon) and the effects of flue gas scrubbing. The empirical formulae of the constituent crystalline (40-50%) and amorphous (50-60%) phases were derived. The excess water in semi-dry scrubbers promotes the hydration reaction between newly formed particles and furnace particles, transforming amorphous phases into calcium silicate hydrate (C-S-H). The hydration products of semi-dry FA immobilized some heavy metals and reduced their leachability, as measured by the Toxicity Characteristic Leaching Procedure (TCLP), to below the Resource Conservation and Recovery Act (RCRA) limits, whereas dry-scrubber FA exceeded those limits.
U.S. combined ash can pass the TCLP test and comply with the RCRA standards for non-hazardous landfill disposal. The Stabilization section examines the effects of processing combined ash. CA undergoes water washing, crushing, and size separation into three fractions, identified from particle size distribution results: coarse (27%, CCA, 9.5-25 mm), medium (37%, MCA, 2-9.5 mm), and fine (25%, FCA, < 2 mm). The by-products of the washing process are an extra-fine filter cake ash (EFFCA, 8% of CA) collected from the water treatment system and ash dissolved in the wastewater (3% of CA). Characterization of the ash fractions (CCA, MCA, and FCA), covering chemical composition, mineral composition, and leachability, showed that their mineralogy changed during processing and that they exhibited significantly lower leachability (LEAF Method 1313 pH dependence) than as-received CA. The processed ash fractions, with reduced heavy metal leachability, can be further beneficially used as secondary materials.
The effect of the pH of the washing agents (water, acid, and alkaline solutions) on the chemical/mineral transformation and the heavy metal leachability of FA, BA, and CA was assessed. A novel technique for determining the distribution of elements among washed ash (product), filter cake (by-product), and wastewater (dissolved fraction) during ash processing was developed to compare the effectiveness of the washing process, which is dominated by dissolution and precipitation reactions. As-received FA, BA, and CA contained 50-75% amorphous phases in a metastable state, which transform to crystalline phases during washing. It was concluded that water washing is the most practical method for transforming WTE CA into a construction material.
The Utilization section examined the use of WTE ash in civil engineering applications, namely:
(i) Using the CCA and MCA fractions as stone aggregate substitute in structural concrete;
(ii) Using FCA as sand substitute or using the milled FCA (MFCA) powder as mineral addition in cement mortar;
(iii) Using FCA and EFFCA powder as metakaolin substitute in artificial aggregate;
(iv) Using FA and phosphate FA (PFA) as cement substitute in cement mortar.
In conclusion, the CA size fractions, i.e., MCA and CCA, are suitable for use as aggregate substitutes in the production of structural concrete. Up to 100 wt.% of the stone aggregate in concrete can be substituted by MCA or CCA. The compressive strength of the optimal products exceeded 28 MPa after 28 days of curing, comparable to commercial concrete products made with natural stone aggregate. The optimum concrete mixture was 40 wt.% MCA or CCA, 30 wt.% sand, 20 wt.% cement, 10 wt.% water, plus superplasticizer, giving a compressive strength of 28-30 MPa and an elastic modulus of 6,300-6,600 MPa. The optimal products complied with stringent leaching standards, and their properties were comparable to those of conventional civil engineering materials.
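To make the reported proportions tangible, the sketch below converts the 40/30/20/10 wt.% mixture into batch masses. The 25 kg batch size and the superplasticizer dosage are illustrative assumptions, not values from the dissertation.

```python
# Reported mixture: 40 wt.% MCA/CCA aggregate, 30 wt.% sand, 20 wt.% cement, 10 wt.% water.
mix_wt_fraction = {"MCA/CCA aggregate": 0.40, "sand": 0.30, "cement": 0.20, "water": 0.10}
batch_mass_kg = 25.0                      # assumed laboratory batch size
superplasticizer_frac_of_cement = 0.01    # assumed dosage, expressed on cement mass

batch = {name: frac * batch_mass_kg for name, frac in mix_wt_fraction.items()}
batch["superplasticizer"] = superplasticizer_frac_of_cement * batch["cement"]

for name, kg in batch.items():
    print(f"{name:20s} {kg:6.2f} kg")
print(f"water/cement ratio : {batch['water'] / batch['cement']:.2f}")
```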
All FCA and MFCA products were effectively stabilized/solidified and transformed into non-hazardous materials that can be used in construction. The main challenge in utilizing FCA or MFCA in cement mortar is expansion of the cementitious phase caused by the metallic aluminum present in these fractions. It was concluded that up to 50 vol.% of the sand in cement mortar can be directly substituted by FCA, and that up to 25 vol.% of MFCA can be used as a mineral addition to replace cement in the production of cement mortar.
In the production of artificial aggregates, up to 15% of FCA or up to 10% of EFFCA can replace metakaolin by volume. The produced samples exhibited crushing strengths of 4 and 1.5 MPa, respectively, and the optimal ash aggregate had a specific gravity of 1.3 and a water absorption of 30%. The FCA and EFFCA aggregates showed good chemical stability and reduced the cracking observed in the fire resistance test. These ash aggregates can be used as lightweight aggregate for non-structural applications. FCA improves the workability of the metakaolin mixture and extends the setting time, which is beneficial for geopolymer aggregate manufacturing. Heavy metals from FCA and EFFCA can be effectively stabilized/solidified in the artificial aggregate.
Phosphoric acid can effectively stabilize as-received FA, so that even dry-scrubber FA passes the TCLP test and complies with the RCRA standards. The mineral transformations of the individual ashes and of ash-cement pastes were investigated by XRD quantification analysis. FA and PFA enhanced the degree of cement hydration, and mixtures with 0-25 vol.% cement replacement achieved higher mechanical performance than the reference. The leachability of heavy metals was effectively reduced over a wide eluate pH range (0-12.5), achieving stabilization/solidification under strict non-hazardous landfill standards.
402
Selective Audio Filtering for Enabling Acoustic Intelligence in Mobile, Embedded, and Cyber-Physical Systems. Xia, Stephen. January 2022.
We are seeing a revolution in computing and artificial intelligence; intelligent machines have become ingrained in, and have improved, every aspect of our lives. Despite the increasing number of intelligent devices and breakthroughs in artificial intelligence, we have yet to achieve truly intelligent environments. Audio is one of the most common sensing and actuation modalities used in intelligent devices. In this thesis, we focus on how to more robustly integrate audio intelligence into a wide array of resource-constrained platforms that enable more intelligent environments. We present systems and methods for adaptive audio filtering that let us embed acoustic intelligence into real-time, resource-constrained mobile, embedded, and cyber-physical systems, and that adapt to a wide range of applications, environments, and scenarios.
First, we introduce methods for embedding audio intelligence into wearables, such as headsets and helmets, to improve pedestrian safety in urban environments by using sound to detect and localize vehicles and to alert pedestrians early enough to avoid a collision. We create a segmented architecture and data processing pipeline that partitions computation between an embedded front-end platform and a smartphone. The embedded front-end hardware platform consists of a microcontroller and commercial off-the-shelf (COTS) components embedded into a headset, and it samples audio from an array of four MEMS microphones. The front end computes a series of spatiotemporal features used to localize vehicles: relative delay, relative power, and zero-crossing rate. These features are computed on the headset and transmitted wirelessly to the smartphone because standard wireless protocols, such as Bluetooth Low Energy, lack the bandwidth to transmit more than two channels of raw audio with low latency. The smartphone runs machine learning algorithms to detect vehicles, localize them, and alert pedestrians. To reduce power consumption, we integrate an application-specific integrated circuit into the embedded front end and create a new localization algorithm, angle via polygonal regression (AvPR), that combines the physics of audio waves, the geometry of the microphone array, and a data-driven training and calibration process, enabling high-resolution estimation of vehicle direction while remaining robust to noise from microphone-array movement as the wearer walks the streets.
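The sketch below illustrates, in simplified form, the three features named above for a pair of channels: relative delay from the cross-correlation peak, relative power, and zero-crossing rate. The sampling rate and the synthetic delayed tone are assumptions for the example, not the thesis implementation.

```python
import numpy as np

def relative_delay(ref, other, fs):
    """Lag (seconds) of `other` relative to `ref`, taken from the cross-correlation peak."""
    corr = np.correlate(other, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)
    return lag / fs

def relative_power(ref, other):
    """Power of `other` relative to `ref`, in dB."""
    eps = 1e-12
    return 10 * np.log10((np.mean(other**2) + eps) / (np.mean(ref**2) + eps))

def zero_crossing_rate(x):
    """Fraction of consecutive samples at which the signal changes sign."""
    return float(np.mean(np.abs(np.diff(np.signbit(x).astype(int)))))

# Synthetic two-channel frame: the second microphone hears the same tone about 0.5 ms later.
fs = 48_000
t = np.arange(0, 0.02, 1 / fs)
mic0 = np.sin(2 * np.pi * 400 * t)
mic1 = 0.7 * np.sin(2 * np.pi * 400 * (t - 0.0005))

print("relative delay (ms):", round(1e3 * relative_delay(mic0, mic1, fs), 3))
print("relative power (dB):", round(relative_power(mic0, mic1), 1))
print("zero-crossing rate :", round(zero_crossing_rate(mic0), 3))
```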
Second, we explore the challenges of adapting our pedestrian-safety platform to more general and noisier scenarios, namely construction worker safety, where the sounds of nearby power tools and machinery are orders of magnitude louder than those of a distant vehicle. We introduce an adaptive noise filtering architecture that filters out construction tool sounds and reveals low-energy vehicle sounds so they can be detected. Our architecture combines the strengths of the physics of audio waves and of data-driven methods to robustly filter out construction sounds while running on a resource-limited mobile and embedded platform. Within this adaptive filtering architecture, we introduce a data-driven filtering algorithm, probabilistic template matching (PTM), that leverages pre-trained statistical models of construction tools to perform content-based filtering. We demonstrate the improvements that our adaptive filtering architecture brings to our audio-based urban safety wearable in real construction site scenarios and against state-of-the-art audio filtering algorithms, while having minimal impact on the power consumption and latency of the overall system. We also explore how these methods can be used to improve audio privacy by removing privacy-sensitive speech from applications that have no need to detect or analyze speech.
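As a loose, hypothetical take on content-based filtering with a pre-trained statistical model (not the PTM algorithm itself), the sketch below scores each frame's log band energies under a diagonal-Gaussian template of a tool class and attenuates frames that match it. The template, threshold, and frames are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands = 16

# "Pre-trained" template: mean/variance of log band energies for one tool class (synthetic here).
template_mean = rng.normal(0.0, 1.0, n_bands)
template_var = np.full(n_bands, 0.5)

def log_likelihood(frame_bands):
    """Diagonal-Gaussian log-likelihood of a frame's log band energies under the template."""
    diff = frame_bands - template_mean
    return -0.5 * np.sum(diff**2 / template_var + np.log(2 * np.pi * template_var))

def filter_frames(frames, threshold=-30.0, attenuation=0.1):
    """Attenuate frames whose band energies match the tool template (assumed threshold)."""
    return np.array([(attenuation if log_likelihood(f) > threshold else 1.0) * f for f in frames])

# Two synthetic frames: one drawn near the template (tool-like), one far from it.
tool_frame = template_mean + rng.normal(0.0, 0.1, n_bands)
other_frame = template_mean + 5.0
filtered = filter_frames(np.stack([tool_frame, other_frame]))
print("tool-like frame attenuated:", bool(np.allclose(filtered[0], 0.1 * tool_frame)))
print("other frame passed through:", bool(np.allclose(filtered[1], other_frame)))
```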
Finally, we introduce a common selective audio filtering platform that builds on our adaptive filtering architecture for a wide range of real-time mobile, embedded, and cyber-physical applications. The architecture accommodates a wide range of sounds, model types, and signal representations by integrating an algorithm we call content-informed beamforming (CIBF). CIBF combines traditional beamforming (spatial filtering using the physics of audio waves) with the data-driven sound detectors and models that developers may already create for their own applications, in order to enhance specified sounds and filter out noise. Alternatively, developers can select sounds and models from a library we provide. We demonstrate how our selective filtering architecture improves the detection of specific target sounds and filters out noise across a wide range of application scenarios. Through two case studies, we also show how it integrates easily into real mobile and embedded applications and improves their performance over existing state-of-the-art solutions, with minimal impact on latency and power consumption. Ultimately, this selective filtering architecture lets developers and engineers embed robust audio intelligence into the common objects around us and into resource-constrained systems, creating more intelligent environments.
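A hedged illustration of the idea of pairing spatial filtering with a learned detector: a plain delay-and-sum beamformer steered toward a chosen direction, gated by a detector confidence for the target sound. The array geometry, the integer-sample steering, and the detector stub are assumptions for this sketch, not the CIBF implementation.

```python
import numpy as np

fs = 16_000
c = 343.0                                    # speed of sound in air, m/s
mic_x = np.array([0.00, 0.05, 0.10, 0.15])   # 4-microphone linear array, 5 cm spacing

def delay_and_sum(channels, angle_deg):
    """Align channels for a source at angle_deg (from the array axis) and average them."""
    delays_s = mic_x * np.cos(np.radians(angle_deg)) / c
    delays_n = np.round(delays_s * fs).astype(int)
    out = np.zeros(channels.shape[1])
    for ch, d in zip(channels, delays_n):
        out += np.roll(ch, -d)               # crude integer-sample alignment is enough for a sketch
    return out / len(channels)

def detector(frame):
    """Stub for an application-supplied sound detector; returns a confidence in [0, 1]."""
    return float(np.clip(np.mean(frame**2) * 50.0, 0.0, 1.0))

# Synthetic 4-channel frame: one 500 Hz tone arriving from 60 degrees.
t = np.arange(0, 0.03, 1 / fs)
true_delays = mic_x * np.cos(np.radians(60)) / c
frame = np.stack([np.sin(2 * np.pi * 500 * (t - d)) for d in true_delays])

enhanced = delay_and_sum(frame, angle_deg=60)
conf = detector(enhanced)
output = conf * enhanced                     # pass the beamformed signal only when the detector fires
print(f"detector confidence: {conf:.2f}, output RMS: {np.sqrt(np.mean(output**2)):.2f}")
```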
403
Detect and Repair Errors for DNN-based Software. Tian, Yuchi. January 2021.
Nowadays, deep neural network (DNN) based software is widely applied in many areas, including safety-critical ones such as traffic control, medical diagnosis, and malware detection. However, the software engineering techniques that should guarantee its functionality, safety, and fairness are not well studied. For example, several serious crashes of DNN-based autonomous cars have been reported; these crashes could have been avoided if the DNN-based software had been well tested. Traditional software testing, debugging, and repair techniques do not work well on DNN-based software because deep neural networks have no control flow, data flow, or AST (Abstract Syntax Tree). Proposing software engineering techniques targeted at DNN-based software is therefore imperative. In this thesis, we first introduce the development of the SE (Software Engineering) for AI (Artificial Intelligence) area and how our work has influenced its advancement. We then summarize related work and some important concepts in the SE for AI area. Finally, we discuss four of our projects.
Our first project, DeepTest, is one of the first works to propose systematic software testing techniques for DNN-based software. We proposed neuron-coverage-guided image synthesis techniques for DNN-based autonomous cars and leveraged domain-specific metamorphic relations to generate oracles for newly generated test cases, automatically testing DNN-based software. We applied DeepTest to three top-performing self-driving car models from the Udacity self-driving car challenge, and our tool identified thousands of erroneous behaviors that could lead to potentially fatal crashes.
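The sketch below shows the basic neuron-coverage measurement that this style of testing builds on: a neuron counts as covered once its scaled activation exceeds a threshold for some test input. A tiny random NumPy network stands in for the model under test, and the 0.25 threshold is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# A tiny random feed-forward network standing in for the DNN under test: (weights, biases) per layer.
layers = [(rng.normal(size=(8, 16)), rng.normal(size=16)),
          (rng.normal(size=(16, 10)), rng.normal(size=10))]

def forward_activations(x):
    """Return the post-ReLU activation of every neuron for one input."""
    acts, h = [], x
    for w, b in layers:
        h = np.maximum(h @ w + b, 0.0)
        acts.append(h)
    return np.concatenate(acts)

def neuron_coverage(test_inputs, threshold=0.25):
    """Fraction of neurons whose scaled activation exceeds `threshold` for at least one test input."""
    covered = None
    for x in test_inputs:
        a = forward_activations(x)
        hit = a / (a.max() + 1e-12) > threshold    # scale activations per input
        covered = hit if covered is None else (covered | hit)
    return float(covered.mean())

tests = rng.normal(size=(50, 8))
print(f"neuron coverage on 50 random test inputs: {neuron_coverage(tests):.1%}")
```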
In the DeepTest project, we found that natural variations such as spatial transformations or rain/fog effects lead to problematic corner cases for DNN-based self-driving cars. In the follow-up project DeepRobust, we studied the per-point robustness of deep neural networks under natural variation. We found that, for a given DNN model, some specific weak points are more likely than others to cause erroneous outputs under natural variation. We proposed a white-box approach and a black-box approach to identify these weak data points, and implemented and evaluated them on 9 DNN-based image classifiers and 3 DNN-based self-driving car models. Our approaches detect weak points with good precision and recall for both DNN-based image classifiers and self-driving cars.
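A black-box-flavored sketch of the per-point robustness idea: perturb a single input many times with mild "natural" variations (here brightness shifts and pixel noise) and measure how often the model's prediction matches the clean prediction. The toy classifier, the perturbation ranges, and the 0.8 cutoff are illustrative assumptions, not the DeepRobust procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def model_predict(img):
    """Toy stand-in for a DNN classifier: class 1 if the image is 'bright enough'."""
    return int(img.mean() > 0.5)

def natural_variation(img):
    """One mild, random perturbation: a brightness shift plus pixel noise."""
    out = img + rng.uniform(-0.15, 0.15) + rng.normal(0.0, 0.02, img.shape)
    return np.clip(out, 0.0, 1.0)

def per_point_robustness(img, n_trials=200):
    """Fraction of perturbed copies whose prediction matches the clean prediction."""
    clean = model_predict(img)
    same = sum(model_predict(natural_variation(img)) == clean for _ in range(n_trials))
    return same / n_trials

borderline = np.full((16, 16), 0.52)   # near the decision boundary: a likely weak point
confident = np.full((16, 16), 0.90)
for name, img in [("borderline", borderline), ("confident", confident)]:
    r = per_point_robustness(img)
    print(f"{name:10s} robustness = {r:.2f}   weak point: {r < 0.8}")   # 0.8 cutoff is an assumption
```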
Most existing work in the SE for AI area, including our DeepTest and DeepRobust, focuses on instance-wise errors: single inputs that cause a DNN model to produce erroneous outputs. In contrast, group-level errors reflect a DNN model's weak performance in differentiating among certain classes or its inconsistent performance across classes. This type of error is concerning because it has been linked to many notorious real-world failures that involve no malicious attacker. In our third project, DeepInspect, we first introduce group-level errors for DNN-based software and categorize them into confusion errors and bias errors based on real-world reports. We then propose a neuron-coverage-based distance metric to detect group-level errors for DNN-based software without requiring labels. We applied DeepInspect to 8 pretrained DNN models trained on 6 popular image classification datasets, including three adversarially trained models, and showed that DeepInspect detects group-level violations for both single-label and multi-label classification models with high precision.
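A loose sketch of the group-level idea, not the DeepInspect metric itself: build a per-class neuron activation profile, compute pairwise distances between profiles, and flag unusually close pairs as candidate confusion errors. The synthetic activations and the mean-minus-one-standard-deviation threshold are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
n_classes, n_neurons = 6, 64

# Synthetic per-class neuron activation profiles; classes 2 and 5 are made nearly identical on purpose.
profiles = rng.normal(size=(n_classes, n_neurons))
profiles[5] = profiles[2] + rng.normal(0.0, 0.05, n_neurons)

def pairwise_distances(p):
    """Euclidean distance between every pair of class activation profiles."""
    diff = p[:, None, :] - p[None, :, :]
    return np.sqrt((diff**2).sum(-1))

d = pairwise_distances(profiles)
pairs = [(i, j) for i in range(n_classes) for j in range(i + 1, n_classes)]
dists = np.array([d[i, j] for i, j in pairs])
threshold = dists.mean() - dists.std()      # assumed cutoff for "unusually close" profiles

for (i, j), dist in zip(pairs, dists):
    if dist < threshold:
        print(f"possible confusion between classes {i} and {j}: profile distance {dist:.2f}")
```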
As a follow-up and more challenging research project, we proposed five WR (weighted regularization) techniques to repair group-level errors in DNN-based software. These techniques act at different stages of DNN retraining or inference: the input phase, layer phase, loss phase, and output phase. We compared and evaluated the five WR techniques on both single-label and multi-label classification, across five combinations of four DNN architectures and four datasets. We showed that WR can effectively fix confusion and bias errors, and that each method has its own pros, cons, and applicable scenarios.
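As one hypothetical instance of loss-phase weighted regularization (the thesis evaluates five WR variants across the input, layer, loss, and output phases), the sketch below upweights the cross-entropy of examples from a confused class pair during retraining. The weights and toy predictions are assumptions.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean cross-entropy in which each example is scaled by its class's weight."""
    eps = 1e-12
    nll = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(class_weights[labels] * nll))

# Three classes; suppose classes 0 and 1 are frequently confused, so they get weight 2 (assumed).
class_weights = np.array([2.0, 2.0, 1.0])
probs = np.array([[0.6, 0.3, 0.1],     # toy softmax outputs
                  [0.2, 0.7, 0.1],
                  [0.1, 0.2, 0.7]])
labels = np.array([0, 1, 2])

print("plain loss   :", round(weighted_cross_entropy(probs, labels, np.ones(3)), 3))
print("weighted loss:", round(weighted_cross_entropy(probs, labels, class_weights), 3))
```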
The four projects discussed in this thesis address important problems in ensuring the functionality, safety, and fairness of DNN-based software and have significantly influenced the advancement of the SE for AI area.
404
The role of model implementation in neuroscientific applications of machine learning. Abe, Taiga. January 2024.
In modern neuroscience, large scale machine learning models are becoming increasingly critical components of data analysis. Despite the accelerating adoption of these large scale machine learning tools, there are fundamental challenges to their use in scientific applications that remain largely unaddressed. In this thesis, I focus on one such challenge: variability in the predictions of large scale machine learning models relative to seemingly trivial differences in their implementation.
Existing research has shown that the performance of large scale machine learning models (more so than traditional models like linear regression) is meaningfully entangled with design choices such as the hardware components, operating system, software dependencies, and random seed that the model depends upon. Within the bounds of current practice, there are few ways to control this kind of implementation variability across the broad community of neuroscience researchers (making data analysis less reproducible), and little understanding of how data analyses might be designed to mitigate these issues (making data analysis unreliable). This dissertation presents two broad research directions that address these shortcomings.
First, I will describe a novel, cloud-based platform for sharing data analysis tools reproducibly and at scale. This platform, called NeuroCAAS, enables developers of novel data analyses to precisely specify an implementation of their entire data analysis, which can then be used automatically by any other user on custom built cloud resources. I show that this approach is able to efficiently support a wide variety of existing data analysis tools, as well as novel tools which would not be feasible to build and share outside of a platform like NeuroCAAS.
Second, I conduct two large-scale studies on the behavior of deep ensembles. Deep ensembles are a class of machine learning model that uses implementation variability to improve the quality of model predictions, in particular by aggregating the predictions of deep networks over stochastic initialization and training. Deep ensembles simultaneously provide a way to control the impact of implementation variability (by aggregating predictions across random seeds) and to understand what kind of predictive diversity this particular form of implementation variability generates. I present a number of surprising results that contradict widely held intuitions about the performance of deep ensembles and the mechanisms behind their success, and show that in many respects the behavior of deep ensembles is similar to that of an appropriately chosen single neural network. As a whole, this dissertation presents novel methods and insights on the role of implementation variability in large scale machine learning models, and more generally on the challenges of working with such large models in neuroscience data analysis. I conclude by discussing other ongoing efforts to improve the reproducibility and accessibility of large scale machine learning in neuroscience, as well as longer-term goals to speed the adoption and improve the reliability of such methods in a scientific context.
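A scaled-down sketch of the ensemble recipe described above, with logistic-regression "members" standing in for deep networks: each member is trained from a different random seed, and their predicted probabilities are averaged. The synthetic data, training schedule, and number of members are assumptions for illustration.

```python
import numpy as np

def train_member(x, y, seed, epochs=30, lr=0.3):
    """Train one logistic-regression 'member' from a seed-dependent random initialization."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 1.0, x.shape[1])           # only the initialization differs across members
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-x @ w))
        w -= lr * x.T @ (p - y) / len(y)           # full-batch gradient step on the logistic loss
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = (x[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)

members = [train_member(x, y, seed) for seed in range(5)]
probs = np.stack([1.0 / (1.0 + np.exp(-x @ w)) for w in members])   # shape: (member, example)
ensemble = probs.mean(axis=0)                       # aggregate predictions across random seeds

acc = lambda p: float(np.mean((p > 0.5) == y))
print("member accuracies :", [round(acc(p), 3) for p in probs])
print("ensemble accuracy :", round(acc(ensemble), 3))
print("seed-to-seed spread of member predictions:", round(float(probs.std(axis=0).mean()), 3))
```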
405
Performance Analysis and Improvement of 5G based Mission Critical Motion Control Applications. Bhimavarapu, Koushik. January 2022.
Industrial requirements for producing goods and controlling processes within the factory keep growing daily, driven by the needs of an ever-growing population. In recent times, industries have been looking to Industry 4.0 to improve their overall productivity and scalability. One of the key requirements of Industry 4.0 is the communication network connecting industrial applications. Industries across markets are now looking to replace their existing wired networks with wireless networks, which would enable many new use cases and business models. To make these options possible, wireless networks need to meet the stringent requirements of industrial applications in terms of reliability, latency, and service availability.
This thesis presents a systematic methodology for integrating wireless networks such as 5G and Wi-Fi 6 into real-life automation devices. It also describes a methodology for evaluating their communication and control performance while varying parameters such as topology, cycle time, and network type, and it devises techniques that can improve the overall (control and communication) performance of the control applications. The work is carried out as a case study: industrial applications are integrated and tested in a real-life scenario, in a best effort to bring together the perspectives of communication engineers and control engineers on the performance of industrial applications. The work verifies the suitability of wireless networks in mission-critical control scenarios with respect to their communication and control performance. Software for data analysis and visualization, and a methodology for analyzing the traffic flow of control applications over different wireless networks, are demonstrated while varying control parameters.
It is shown that supporting short cycle times is challenging for 5G, and that performance becomes better and more stable as the cycle time of the control application increases. It is also found that 1-hop wireless topologies provide comparatively better control performance than 2-hop topologies. Finally, the communication and control performance of the motion control application can be improved by using a hybrid topology, a mixture of 5G and Wi-Fi 6, with some key aspects modified. The thesis thus introduces a novel systematic methodology for measuring and analyzing communication and control applications over different wireless networks, gives control engineers a better idea of which cycle times different wireless networks and topologies can support when integrated with industrial automation devices, indicates which wireless networks best support industrial applications, and ends with a methodology that could improve the performance of mission-critical motion control applications using existing wireless technologies.
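A hedged sketch of the kind of traffic analysis this methodology involves: from logged send and receive timestamps of cyclic control frames, compute latency, jitter, and the share of frames missing the configured cycle-time deadline. The latency distribution and cycle times below are synthetic, not measurements from the thesis testbed.

```python
import numpy as np

rng = np.random.default_rng(4)

def analyze(cycle_time_ms, n_frames=2000, mean_latency_ms=6.0, jitter_ms=2.5):
    """Latency statistics for synthetic cyclic traffic with the given configured cycle time."""
    send = np.arange(n_frames) * cycle_time_ms
    recv = send + np.maximum(rng.normal(mean_latency_ms, jitter_ms, n_frames), 0.1)
    latency = recv - send
    return {
        "cycle_time_ms": cycle_time_ms,
        "mean_latency_ms": round(float(latency.mean()), 2),
        "jitter_ms_std": round(float(latency.std()), 2),
        "deadline_miss_pct": round(100 * float(np.mean(latency > cycle_time_ms)), 1),
    }

for ct in (4, 8, 16, 32):     # candidate cycle times in milliseconds (illustrative)
    print(analyze(ct))
```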
406
Laser Additive Manufacturing of Magnetic Materials. Mikler, Calvin V. 08 1900.
A matrix of variably processed Fe-30at%Ni was deposited with variations in laser travel speed as well as laser power. A complete shift in phase stability occurred as a function of laser travel speed. At slow travel speeds, the microstructure was dominated by a columnar fcc phase. Intermediate travel speeds yielded a mixed microstructure comprising both the columnar fcc phase and a martensite-like bcc phase. At the fastest travel speed, the microstructure was dominated by the bcc phase. This shift in phase stability in turn affected the magnetic properties, specifically the saturation magnetization. Ni-Fe-Mo and Ni-Fe-V permalloys were also deposited from an elemental blend of powders. Both systems exhibited featureless microstructures dominated by an fcc phase. Magnetic measurements yielded saturation magnetizations on par with conventionally processed permalloys; however, coercivities were significantly larger, a difference attributed to microstructural defects that arise during the additive manufacturing process.
407
Intuitive programming of mobile manipulation applications: A functional and modular GUI architecture for End-User robot programming. De Martini, Alessandro. January 2021.
Mobile manipulators are changing the way companies and industries complete their work. Untrained end users, especially those without programming knowledge, risk facing unfunctional and non-user-friendly graphical user interfaces. Recently, there have been shortages of people and talent in the healthcare industry, where such applications would be beneficial for accomplishing simple, low-level tasks. All these reasons contribute to the need for functional ways for robots and users to communicate, which would allow mobile manipulation applications to expand. This thesis addresses the problem of finding an intuitive way to deploy a mobile manipulator in a laboratory environment. It analyzes whether a functional graphical user interface can let the user work with a manipulator efficiently and without excessive effort. The innovative value of this work lies in creating a modular interface based on user needs, which allows mobile manipulator applications to expand and increases the number of possible users. To accomplish this, a graphical user interface application is proposed using an explanatory research strategy. First, user data was acquired through an ad hoc research survey and combined with implementations from the literature to create the right application design. Then, an iterative implementation based on code creation and testing was used to design a valuable solution. Finally, the results from an observational user study with non-roboticist programmers are presented. The results were validated with the help of 10 potential end users and a validation matrix based on three parameters, demonstrating that the system is both functional and user-friendly for novices, yet expressive for experts.
408
Electron beam techniques for testing and restructuring of wafer-scale integrated circuits. Shaver, David Carl. January 1981.
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1981. Includes bibliographical references.
409
Local resilience, canola cropping, and biodiesel production. Bates, Christopher Allen. 27 January 2006.
New technology may have negative, as well as positive, effects on a sociocultural system. Biodiesel is growing in popularity as a fuel alternative that addresses global warming and reduces dependency on petroleum. The biodiesel innovation fits well into the existing behavioral infrastructure of Linn and Benton Counties, Oregon. The introduction of this technology fuels two community-based biodiesel initiatives: the Corvallis Biodiesel Cooperative (CBC) and the OSU Biodiesel Initiative (OBI). However, the increasing demand for biodiesel increases the demand for vegetable oil. Canola is the most efficient oil-producing crop suggested for the southern Willamette Valley of Oregon. Canola cropping fits into the behavioral infrastructure of the local grass seed growers' tradition. However, canola cropping presents outcrossing risks to neighboring specialty seed and organic growers. This calls into question the resilience and sustainability of canola cropping. The decisions made about biodiesel production and oilseed cropping will impact the future environment, culture, political autonomy, and sustainability of this local community. The dominant values that serve this community will determine the resilience of the culture and identity that is maintained or emerges in the face of social-ecological challenges and technological innovations.
The research methodology includes interviews, participant observation, and informational media to triangulate data. These methods inform an integrated framework of holistic, values analysis, social-ecological, and cultural materialism theoretical approaches. The holistic approach provides the behavioral components, and the values analysis approach provides the mental components, that are integrated into a cultural materialism framework; these components are evaluated by the social-ecological approach. Evaluation of the CBC and OBI suggests that values play a greater role in cultural materialism than previously believed, and a new theoretical perspective emerges to explain resilience and causal effects. The social-ecological approach, illustrated by panarchy theory, is also integrated into the cultural materialism approach. The integration of the four theoretical approaches, and the emergence of a new theoretical perspective, provides a means to explain resilience and sustainability for the CBC and OBI. This integrated approach also examines three potential paths of resilience and sustainability for the grass seed, specialty seed, and organic growing traditions.
Path A predicts long-term resilience and sustainability for grass seed growers and canola cropping, but collapse for the specialty seed and organic growing traditions. Path B predicts that a proposed regulated canola cropping compromise will only prolong the inevitable collapse of the specialty seed and organic growing traditions. Along both Paths A and B, diversity is lost from the sociocultural system as the specialty seed and organic growing traditions decline; canola cropping increases the potential for energy security, but food security is reduced. Path C suggests how to maintain the current sociocultural system of grass seed, specialty seed, and organic growing traditions and promote long-term resilience and sustainability.
410
Application of polyaniline containing zinc nanoparticles in automotive body coatings. Santos, Claudia Joanita. 07 December 2017.
Protection against corrosion in the automotive industry is a problem with high economic impact. Besides anticorrosive performance, the coatings used in the automotive industry must have good appearance and physical resistance. The multilayer coatings normally used consist of e-coat, primer, basecoat, and clearcoat, where the primer can also be used as an e-coat substitute in rework processes. From another perspective, conductive polymers have been widely studied as anticorrosive coating materials, and the polyaniline-melamine-zinc composite (PZn) has shown promising results regarding corrosion protection. This study therefore applied PZn in a multilayer coating used in the automotive industry, substituting the commercial primer in a common rework process, and compared the PZn results with the commercial primers currently used (FLASH and GLASURIT). For material characterization, SEM and EDS analyses of the substrate were performed and the coating thickness of the painted sets was measured. To determine visual characteristics, color, gloss, and appearance were measured. For corrosion, electrochemical measurements (OCP and Tafel) and a cyclic corrosion test were performed. Physical resistance was evaluated with humidity, flexibility, and adherence tests: cross cut, steam jet, point impact, and stone chip. The visual results were similar for all painted sets. For corrosion, the electrochemical tests agreed with the cyclic corrosion test: the CJ_PANI and CJ_FLASH sets proved equivalent, while CJ_GLASURIT obtained better results and was the most resistant set. The physical resistance tests can be divided into dry and wet. The dry tests gave similar results across the painted sets, but in the wet tests the CJ_PANI set proved susceptible to defects. In conclusion, the PZn composite showed good results regarding visual characteristics, electrochemical corrosion tests, and dry physical tests when used in multilayer paint systems of the automotive industry. However, PZn proved susceptible to defects in humid environments, losing adherence.
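As a minimal sketch of the Tafel analysis mentioned above, the code below fits straight lines to synthetic anodic and cathodic branches of E versus log10|i| and takes their intersection as (i_corr, E_corr). All values are synthetic, not measurements from the coated panels.

```python
import numpy as np

# Synthetic Tafel branches: E_corr = -0.50 V, i_corr = 1e-6 A/cm2, ba = 0.12 and bc = 0.10 V/decade.
log_i_a = np.linspace(-5.5, -4.0, 20)                 # anodic branch, log10(current density)
E_a = -0.50 + 0.12 * (log_i_a - (-6.0))
log_i_c = np.linspace(-5.5, -4.0, 20)                 # cathodic branch
E_c = -0.50 - 0.10 * (log_i_c - (-6.0))

slope_a, intercept_a = np.polyfit(log_i_a, E_a, 1)    # fit each branch as E = slope*log10(i) + intercept
slope_c, intercept_c = np.polyfit(log_i_c, E_c, 1)

log_i_corr = (intercept_c - intercept_a) / (slope_a - slope_c)   # intersection of the two Tafel lines
E_corr = slope_a * log_i_corr + intercept_a
print(f"i_corr = {10**log_i_corr:.2e} A/cm2, E_corr = {E_corr:.3f} V")
```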