1 |
Trace metal phase speciation using cross-flow filtration in the Erh-Jen estuaryYeh, Hsiao-chien 08 August 2000 (has links)
The aim of this research is to build a model of trace metal phase speciation in the Erh-Jen estuary. Colloidal species were separated from the dissolved fraction using a cross-flow filtration technique. This allowed us to study the distribution of trace metal phase speciation and the relationships between: (1) seasonal variation and trace metal phase speciation; (2) station-to-station variation and trace metal phase speciation; (3) total concentrations and speciation; (4) the production and removal of each species at different salinities; (5) flocculants and trace metal phase speciation. The results will supply abundant information about colloidal metals that should be of great benefit to future studies of metal speciation in Taiwanese waters.
The Erh-Jen estuary is seriously polluted by trace metals. Iron has the highest concentration in river water, followed by nickel, manganese, zinc and copper. Station 3, located on the San-Yen-Kung River, is the most polluted.
Zinc, copper and iron exist predominantly in the particulate phase. The particulate and truly dissolved phases are the major species of TOC, manganese and nickel. Total contents of all the metals and of TOC correlate linearly and significantly with their major species.
Zinc, copper and iron are found predominantly in the colloidal fraction (1 kDa–0.45 µm), averaging 54.9 ± 19.3 %, 75.6 ± 12.3 % and 72.5 ± 28.0 %, respectively, of the filter-passing pool. TOC, manganese and nickel reside primarily in the truly dissolved phase, averaging 63.2 ± 11.5 %, 95.1 ± 3.3 % and 72.0 ± 23.9 %, respectively, of the filter-passing pool.
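To make the colloid/dissolved partitioning concrete, the sketch below shows the standard cross-flow filtration mass balance used to split the filter-passing pool into colloidal and truly dissolved fractions. It is illustrative only: the function name, the ideal-retention assumption and the sample copper numbers are ours, not data from this study.

```python
# Minimal sketch (our illustration, not the study's code or data) of the
# cross-flow filtration mass balance used to split the filter-passing
# (<0.45 um) pool into colloidal (1 kDa - 0.45 um) and truly dissolved
# (<1 kDa) fractions.

def colloidal_percentage(c_permeate, c_retentate, conc_factor):
    """Percent of the filter-passing pool that is colloidal.

    c_permeate  : truly dissolved concentration (passes the 1 kDa membrane)
    c_retentate : concentration in the retentate at the end of the run
    conc_factor : feed volume / retentate volume
    Assumes ideal behaviour: colloids fully retained, dissolved species
    pass the membrane freely.
    """
    c_colloidal = (c_retentate - c_permeate) / conc_factor  # colloid conc. in the feed
    return 100.0 * c_colloidal / (c_colloidal + c_permeate)

# Hypothetical copper sample: 10 nM permeate, 310 nM retentate, cf = 10
print(f"colloidal Cu: {colloidal_percentage(10.0, 310.0, 10.0):.1f} %")  # 75.0 %
```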
The linear correlations between salinity and TOC, zinc, copper, manganese, nickel and iron are not significant in this study.
|
2 |
The Influence of Sleep Deprivation on the Contingent Negative VariationTERASHIMA, MASAYOSHI, YAMADA, SHIN'YA, SAKAKIBARA, HISATAKA, MIYAO, MASARU, OHGA, TAKASHI 03 1900 (has links)
No description available.
|
3 |
Effects of Illumination and Viewing Angle on the Modeling of Flicker Perception in CRT DisplaysSidebottom, Shane D. 21 March 1997 (has links)
This study evaluated the usefulness of a psychophysical model as part of a new ANSI/HFES 100 standard for CRT flicker. A graph-based flicker prediction method developed from Farrell (1987) was evaluated. The Farrell model is based on phosphor persistence, screen luminance, display size, and viewing distance. The graph-based method assumes a worst-case scenario (i.e., a white display screen shown on a display with P4 phosphor). While the Farrell model requires photometric measurements taken with special equipment, the graph-based method requires only knowledge of the display size, viewing distance, screen luminance, and refresh rate. Ten participants viewed different display sizes from different eccentricities under different levels of illumination and luminance. In each condition the display's refresh rate was manipulated using the Method of Limits to determine the critical flicker frequency (CFF). An analysis of variance was used to determine significant effects on CFF. CFF increased with increasing luminance and display size. Adequate illumination significantly increased CFF. A viewing eccentricity of 30 degrees (measured horizontally from the center of the screen) produced the highest CFF values. Under the conditions of 30 degrees eccentricity and 250 to 500 lux illumination, observed 50% CFF threshold values exceeded the 90% CFF threshold values predicted by the graph-based method. This study demonstrates that, when tested under the conditions it was developed under, the Farrell method successfully predicts flicker perception; however, when tested under conditions representative of real-world working conditions, the Farrell model fails to predict flicker perception. New parameters for the model are suggested.
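For readers who want to experiment with a graph-style screening check, the sketch below flags refresh rates that fall below an estimated CFF. It does not reproduce Farrell's fitted parameters: the classical Ferry-Porter (log-luminance) and Granit-Harper (log-area) relations stand in, with placeholder coefficients, so the numbers are illustrative only.

```python
import math

# Illustrative graph-style screening check: flag refresh rates below an
# estimated critical flicker frequency (CFF). Farrell's fitted parameters
# are not reproduced here; the classical Ferry-Porter (CFF rises with log
# luminance) and Granit-Harper (CFF rises with log area/visual angle)
# relations stand in, with placeholder coefficients a, b, c.

def estimated_cff(luminance_cd_m2, diag_cm, view_dist_cm, a=12.0, b=9.0, c=35.0):
    """Rough worst-case CFF estimate (Hz) for a bright, full-white screen."""
    # visual angle subtended by the display diagonal, in degrees
    angle_deg = 2.0 * math.degrees(math.atan(diag_cm / (2.0 * view_dist_cm)))
    return a * math.log10(luminance_cd_m2) + b * math.log10(angle_deg) + c

def flicker_free(refresh_hz, luminance_cd_m2, diag_cm, view_dist_cm):
    return refresh_hz > estimated_cff(luminance_cd_m2, diag_cm, view_dist_cm)

# Hypothetical 43 cm (17 in) display at 100 cd/m^2 viewed from 50 cm
print(round(estimated_cff(100.0, 43.0, 50.0)))  # ~74 Hz with these coefficients
print(flicker_free(60, 100.0, 43.0, 50.0))      # False: 60 Hz would be flagged
```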
/ Master of Science
|
4 |
A construção do tipo Foi Fez: uma abordagem funcionalistaSilva, Thaís Moreira January 2010 (has links)
The present work takes as its object of study the construction "Foi Fez" (henceforth FFC), which was described and analysed by Rodrigues (2006, 2009) as a new constructional pattern in Brazilian Portuguese. Although the author's works contribute substantially to the analysis of the FFC, there are still gaps and important controversial questions to be debated. Despite discussing the theoretical status of grammaticalization in her doctoral thesis, Rodrigues (2006) did not find sufficient evidence to state that the FFC, as a construction, constitutes a case of grammaticalization. This dissertation therefore aims to advance beyond the proposal of Rodrigues (2006, 2009) by corroborating the hypothesis that the FFC represents a case of grammaticalization of a construction, in which V1 acts with subjectification scope (TRAUGOTT, 1995, 2010) over V2 and over the whole propositional content. To validate this hypothesis, and considering that the FFC is found mostly in real speech situations, we chose to work with corpora covering the spoken modality, and more specifically corpora representing the dialect of Minas Gerais, namely: a) the corpus of the project "Fala Mineira", compiled by Professor Nilza Barrozo Dias at the Federal University of Juiz de Fora; b) the corpus of the project "Mineirês: a construção de um dialeto", compiled by Professor Jânia Martins Ramos at the Federal University of Minas Gerais; c) the corpus of the project "Corpus Conceição de Ibitipoca", compiled by Professor Terezinha Cristina Campos de Resende.
|
5 |
Modélisation de la source des séismes par inversion des données sismologiques et géodésiques : application aux séismes du Nord de l’Algérie / Seismic source modeling by inverting seismologic and geodetic data : application to Algerian earthquakesBeldjoudi, Hamoud 11 July 2017 (has links)
Studies of the earthquake source are based on observations of transient (seismic) and static ground motions, and depend on the quantity and quality of those measurements. In this thesis, we determine the focal mechanisms of the moderate Tadjena (Mw 5.0, 2006), Béni-Ilmane (Mw 5.5, 2010), Hammam Melouane (Mw 5.0, 2013), Bordj-Ménaïel (Mw 4.1, 2014), Algiers (Mw 5.7, 2014) and M'ziraa (Mw 5.1, 2016) earthquakes by inverting near-field and regional waveforms recorded by the broadband and strong-motion stations of the Algerian Digital Seismic Network (ADSN). In addition, we determine the spatio-temporal coseismic slip distribution of the Boumerdes-Zemmouri earthquake (Mw 6.8, 2003) by jointly inverting a comprehensive set of seismological (teleseismic, strong-motion) and geodetic (GPS, InSAR, coastal uplift) data. We then examine the relationship between the Boumerdes-Zemmouri earthquake (source fault) and the Hammam Melouane, Bordj Ménaïel and Algiers (Mw 5.7, 2014) events (receiver faults) in terms of Coulomb stress transfer (Coulomb failure function, CFF). Finally, we compute the stress field in different regions of Algeria by inverting the focal mechanisms available in each region.
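The Coulomb stress-transfer test mentioned above reduces to a one-line criterion; the sketch below shows it with illustrative numbers (the friction coefficient and stress changes are assumptions, not values from the Boumerdes-Zemmouri analysis).

```python
# Minimal sketch of the Coulomb failure stress change used to test whether a
# source earthquake promotes failure on a receiver fault:
#     dCFF = d_tau + mu_eff * d_sigma_n
# d_tau     : shear stress change resolved onto the receiver's slip direction
# d_sigma_n : normal stress change (positive = unclamping)
# mu_eff    : effective friction coefficient
# The numbers below are illustrative, not results from this thesis.

def delta_cff(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    """Coulomb stress change (MPa); > 0 brings the receiver closer to failure."""
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

change = delta_cff(0.05, 0.02)  # hypothetical +0.05 MPa shear, 0.02 MPa unclamping
print(f"dCFF = {change:+.3f} MPa ({'promotes' if change > 0 else 'inhibits'} failure)")
```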
|
6 |
Identifiering av kritiska problem vid implementering av ERP-system ur ett leverantörsperspektiv : En identifiering av bidragande faktorer till misslyckade implementeringsprocesser av affärssystem / Identification of critical problems in the implementation process of ERP-systems from a supplier's P.O.V. : An identification of contributing factors for failed implementation processes of ERP-systemsAronsson, Oscar January 2018 (has links)
In recent years, ERP systems have been a key factor in successful information management and have served as a foundation for many corporate organizations. Implementing a new business system is a complex process, which leaves organizations struggling to carry it out. Implementation of business systems involves combining previously separated systems, with a focus on providing a more complete information resource for an organization. ERP systems took off around the y2k millennium shift, and many companies at the time had difficulty implementing them with a positive end result. Eighteen years later the problem remains, even though ERP systems have evolved and businesses have become more knowledgeable in the field. This is shown by a study from Panorama Consulting (2017), which found that 26 percent of companies failed with their implementation, 74 percent exceeded budget, and 59 percent overran the intended schedule. To find a solution to the problem, several researchers have conducted studies focusing on identifying the negative contributing factors in ERP implementation. These factors are called "critical failure factors". There is currently little empirical research in which information has been gathered from ERP suppliers. The existing research is based on old theories about which factors contribute positively or negatively to an implementation process, and most researchers start from these theories in their studies of ERP system implementation. This study therefore focuses on ERP suppliers' perspectives on negative contributing factors in implementing ERP systems. Major emphasis has been placed on collecting factors from ERP suppliers so that, at a later stage of the study, they can be set against the factors mentioned in previous studies to find relationships and possible deviations. The study used interviews as its data collection method, and five interviews were conducted with experienced ERP suppliers. The collected material identifies a number of critical factors that can be linked to factors from previous studies. Interestingly, the study also identified a number of factors that deviate from previous studies. The conclusion that can be drawn from the study is that implementation processes can be improved and made considerably more efficient with the help of the new insight into the problems that the ERP suppliers raise. Awareness of these negative contributing factors when implementing an ERP system contributes knowledge in the field of implementation processes, which in turn can be used by companies and individuals to ease and prevent the problems that may arise in such a process.
|
7 |
Development of Optically Selective Plasmonic Coatings : Design of experiment (DoE) approach to develop the effect of plasmonic materials on selective surfacesKhaled, Fatima January 2024 (has links)
Absolicon is a pioneering solar technology company specializing in the manufacture and sale of advanced solar energy systems engineered to generate renewable energy for diverse uses. Comprising essential components such as reflectors (mirrors) and a solar receiver tube, these systems efficiently capture and convert solar irradiation into usable thermal energy. As part of ongoing research, this project contributes to optimizing the reflection and absorption capacity of the receiver tubes in Absolicon's solar collectors. The aim is to investigate optically selective plasmonic coatings intended as an undercoating for solar selective surfaces. The main coating material used and analysed is gold, owing to its plasmonic properties, inert nature and low toxicity. The gold is coated onto stainless steel using physical vapor deposition (PVD) and then annealed at mid-to-high temperatures to produce a plasmonic surface. The effects of Au thickness and annealing time and temperature are investigated to optimize the coating's optical properties using a systematic method called Design of Experiments (DoE). The goal for the gold coating is to increase reflectance in the infrared region while generating a plasmonic absorption peak in the visible region (whose position and width are optimized), making it a more beneficial base for a solar selective surface than the original stainless steel (SS). It was found that the size and inter-particle distance of gold nanoparticles (GNPs) depend on the annealing temperature and time for different coating thicknesses. Surface analysis from SEM images and AFM topographs showed that samples with smaller grains are more likely to exhibit significant plasmonic effects than those with larger grains. According to the surface characterization, either a thinner gold coating exposed to a high temperature for a short annealing time or a thicker gold coating with a longer annealing time provides a plasmonic absorption peak in the visible region.
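As an illustration of the DoE approach, the sketch below enumerates a full-factorial design over the three factors named in the abstract; the factor levels are assumptions chosen for the example, not Absolicon's actual process settings.

```python
from itertools import product

# Minimal sketch of a full-factorial design over the three factors named in
# the abstract. The factor levels are assumptions chosen for illustration,
# not Absolicon's actual process settings.

thickness_nm  = [5, 10, 20]       # sputtered Au layer thickness
temperature_c = [300, 450, 600]   # annealing temperature
time_min      = [10, 60]          # annealing time

runs = list(product(thickness_nm, temperature_c, time_min))
for i, (t_nm, temp, minutes) in enumerate(runs, start=1):
    print(f"run {i:2d}: Au {t_nm:2d} nm, anneal {temp} C for {minutes} min")
print(f"{len(runs)} runs in total")  # 3 x 3 x 2 = 18 experiments
```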
|
8 |
Development of High-throughput Membrane Filtration Techniques for Biological and Environmental Applications / Development of High-throughput Membrane Filtration TechniquesKazemi, Amir Sadegh 11 1900 (has links)
Membrane filtration processes are widely utilized across different industrial sectors for biological and environmental separations. Examples of the former are sterile filtration and protein fractionation via microfiltration (MF) and ultrafiltration (UF), while drinking water treatment, tertiary treatment of wastewater, water reuse and desalination via MF, UF, nanofiltration (NF) and reverse osmosis (RO) are examples of the latter. A common misconception is that the performance of a membrane separation depends solely on the membrane pore size, whereas a multitude of parameters, including solution conditions, solute concentration, presence of specific ions, hydrodynamic conditions, membrane structure and surface properties, can significantly influence the separation performance and the membrane's fouling propensity. The conventional approach for studying filtration performance is to use a single lab- or pilot-scale module and perform numerous experiments sequentially, which is time-consuming and requires large amounts of material. Alternatively, high-throughput (HT) techniques, defined as miniaturized versions of conventional unit operations that allow multiple experiments to be run in parallel and require only small amounts of sample, can be employed. There is growing interest in the use of HT techniques to speed up the testing and optimization of membrane-based separations. In this work, different HT screening approaches are developed and utilized for the evaluation and optimization of filtration performance with the flat-sheet and hollow-fiber (HF) membranes used in biological and environmental separations. The effects of various process factors on the separation of different biomolecules were evaluated by combining a HT filtration method using flat-sheet UF membranes with design-of-experiments methods. Additionally, a novel HT platform was introduced for multi-modal (constant transmembrane pressure vs. constant flux) testing of flat-sheet membranes used in bio-separations. Furthermore, the first HT modules for parallel testing of HF membranes were developed for rapid fouling tests as well as extended filtration evaluation experiments. Their usefulness was demonstrated by evaluating the filtration performance of different foulants under various operating conditions and by running surface modification experiments. The techniques described herein can be employed to rapidly determine the combination of conditions that yields the best filtration performance for a given membrane separation application, eliminating the need for numerous conventional lab-scale tests. Overall, more than 250 filtration tests and 350 hydraulic permeability measurements were performed and analyzed using the HT platforms developed in this thesis. / Thesis / Doctor of Philosophy (PhD) / Membrane filtration is widely used as a key separation process in different industries. For example, microfiltration (MF) and ultrafiltration (UF) are used for sterilization and purification of bio-products, while MF, UF and reverse osmosis (RO) are used for drinking water and wastewater treatment. A common misconception is that membrane filtration is governed solely by the pore size of the membrane, whereas numerous factors can significantly affect performance. Conventionally, a large number of lab- or full-scale experiments are performed to find the optimum operating conditions for each filtration process.
High-throughput (HT) techniques are powerful methods for accelerating process optimization: they allow multiple experiments to be run in parallel and require smaller amounts of sample. This thesis focuses on the development of different HT techniques that require a minimal amount of sample for parallel testing and optimization of membrane filtration processes, with applications in environmental and biological separations. The techniques introduced can reduce the amount of sample used in each test by a factor of 10 to 50 and accelerate process development and optimization by running parallel tests.
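As an example of the kind of measurement automated by the HT platforms, the sketch below computes hydraulic permeability as the slope of permeate flux versus transmembrane pressure; the module area, sample volumes and pressures are illustrative, not values from the thesis.

```python
# Minimal sketch of a hydraulic permeability measurement: permeability Lp is
# the slope of permeate flux versus transmembrane pressure (TMP). Module
# area, volumes and pressures below are illustrative only.

def flux_lmh(volume_ml, minutes, area_cm2):
    """Permeate flux in L m^-2 h^-1 (LMH)."""
    litres_per_hour = (volume_ml / 1000.0) / (minutes / 60.0)
    return litres_per_hour / (area_cm2 / 1e4)  # cm^2 -> m^2

def permeability(fluxes_lmh, tmps_bar):
    """Least-squares slope of flux vs. TMP, forced through the origin."""
    return sum(j * p for j, p in zip(fluxes_lmh, tmps_bar)) / sum(p * p for p in tmps_bar)

tmps = [0.2, 0.4, 0.6]                                   # bar
fluxes = [flux_lmh(v, 2.0, 10.0) for v in (8, 16, 24)]   # mL collected in 2 min, 10 cm^2
print(f"Lp = {permeability(fluxes, tmps):.0f} LMH/bar")  # 1200 LMH/bar here
```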
|