101 |
Détermination de coefficients de partage et de limites de solubilité du méthanol dans des mélanges liquides comportant azote et hydrocarbure(s) aux conditions opératoires des unités de fractionnement du gaz naturel / Determination of partition coefficients and solubility limits of methanol in liquid mixtures containing nitrogen and hydrocarbon(s) at the operating conditions of natural gas fractionation units Courtial, Xavier 12 December 2008 (has links) (PDF)
After natural gas treatment, methanol is present in trace amounts. Our objective is to determine the thermodynamic properties of "hydrocarbon(s) - methanol" mixtures at the operating conditions encountered during natural gas fractionation, at high and low temperatures. Indeed, producers can face financial penalties when the final methanol content exceeds 50 ppm molar. To this end, we need to know the phase equilibria at the specific conditions prevailing in these units. Few data exist for such low methanol contents. Moreover, the thermodynamic models usable over the whole composition range do not correctly represent equilibria at infinite dilution. An apparatus based on a "static-analytic" technique, with phase sampling and GC analysis, is used. At high temperatures, we determined the partition coefficients of methanol. The apparatus was adapted over time to improve the quantification of methanol traces (below 1,000 ppm). An original calibration procedure accounting for methanol adsorption in the analytical circuits was developed. The new measurements show that, within experimental uncertainty, the total pressure of the system and the partition coefficient of methanol are functions of temperature only. Henry's law constants and the activity coefficients of methanol at infinite dilution are calculated. At low temperatures, we aim to determine the solubility limits of methanol in nitrogen - hydrocarbon(s) liquid mixtures. An apparatus is being adapted to perform these measurements. The new data will serve as a basis for the development of fractionation process simulators, so as to estimate as accurately as possible the methanol content of the produced hydrocarbons.
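As a hedged numerical illustration of the quantities this thesis determines (partition coefficient, Henry's law constant and infinite-dilution activity coefficient), the following sketch uses hypothetical trace-level values, not measurements from the work:

```python
# Illustrative only: the mole fractions, total pressure and vapour pressure
# below are hypothetical, not data from the thesis.

def partition_coefficient(y, x):
    """K = y/x: vapour over liquid mole fraction of the trace solute."""
    return y / x

def henry_constant(y, x, pressure_bar):
    """Low-pressure approximation H = y * P / x (bar)."""
    return y * pressure_bar / x

def activity_coeff_inf_dilution(henry_bar, p_sat_bar):
    """gamma_inf = H / P_sat of pure methanol at the same temperature."""
    return henry_bar / p_sat_bar

x_meoh, y_meoh, p_total = 200e-6, 800e-6, 10.0   # 200/800 ppm molar, 10 bar
K = partition_coefficient(y_meoh, x_meoh)        # ~4: methanol favours the vapour
H = henry_constant(y_meoh, x_meoh, p_total)      # ~40 bar
gamma_inf = activity_coeff_inf_dilution(H, p_sat_bar=0.13)
```

Since the thesis finds K and total pressure to be functions of temperature only (within experimental uncertainty), a table of K(T) values of this kind is the form of data a fractionation simulator would consume.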
|
102 |
Volatility and number measurement of diesel engine exhaust particles Bernemyr, Hanna January 2007 (has links)
Today, emission legislation for engine exhaust particles is mass based. The engines of today are low-emitting with respect to particle mass, with emissions approaching the detection limit of the current measurement method. This calls for new and improved measurement methods. Both from the point of view of the engine developers and regarding human health effects, particle number seems to be the particle property of greatest interest to legislate upon. Recently, a proposal for a new particle number based measurement methodology has been put forward by the United Nations Economic Commission for Europe (UN ECE). The gas and particle mixture (the aerosol) of engine exhaust is not a stable system. The size and the number of the particles change over time as the temperature and pressure change. Particle number measurements call for dilution, which changes the gas-phase concentrations of the condensing gases. The dilution process thereby alters the conditions in the aerosol and influences the measurements. The aim of the current project was to better understand the outcome of particle number measurements and the complexities of particle sampling, dilution and conditioning prior to measurement. Two experimental set-ups have been developed within the project. The first system includes a rotating disc diluter followed by a volatility Tandem Differential Mobility Analyser (v-TDMA). The second set-up, called the EMIR-system, includes ejector diluters in series followed by a stand-alone Condensation Particle Counter (CPC). After the development of these experimental set-ups, the v-TDMA was used to study the volatility and the size-distributed number concentration of exhaust particles. The EMIR-system was used for total number concentration measurements including only the solid fraction of the aerosol. The experimental work has given practical experience that can be used to assess the benefits and disadvantages of the upcoming measurement methodology.
For engine developers to produce engines that meet future legislative limits, it is essential to know how the measurement procedure influences the aerosol. In summary, the experimental studies have shown that the number of nucleation mode particles is strongly affected by variations in dilution. No upper threshold value of the dilution has been found beyond which the dilution effect diminishes. The volatility studies have shown that it is mainly the nucleation mode particles that are affected by heat. The v-TDMA instrument has proven to be a sensitive analytical tool which, however, needs some development work before further engine exhaust particle characterization. Experimental work with the EMIR-system, which in principle is similar to the instruments proposed for a future standard, shows that these types of measurement systems are sensitive to small changes in the detector cut-off. The major outcome of the project lies in the new detailed knowledge about particle number measurements from engines. / QC 20100628
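The number-concentration bookkeeping behind such measurements can be sketched as follows, with hypothetical dilution ratios and CPC readings (not project data). Note that, per the findings above, this simple rescaling is only safe for the solid fraction of the aerosol, since nucleation mode numbers depend on the dilution process itself:

```python
# Illustrative sketch: recovering the raw-exhaust particle number
# concentration from a diluted measurement. With ejector diluters in
# series, the total dilution ratio is the product of the stage ratios.
# All numeric values are hypothetical.

def total_dilution_ratio(stage_ratios):
    ratio = 1.0
    for r in stage_ratios:
        ratio *= r
    return ratio

def raw_concentration(measured_per_cm3, stage_ratios):
    """Scale a CPC reading back to raw-exhaust conditions."""
    return measured_per_cm3 * total_dilution_ratio(stage_ratios)

# Two ejector diluters of ~10:1 each; CPC reads 5.0e4 particles/cm^3
n_raw = raw_concentration(5.0e4, [10.0, 10.0])   # 5.0e6 particles/cm^3
```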
|
103 |
Instrumental and methodological developments for isotope dilution analysis of gaseous mercury species Larsson, Tom January 2007 (has links)
This thesis deals with instrumental and methodological developments for speciation analysis of gaseous mercury (Hg(g)), based on isotope dilution analysis (IDA). The studied species include Hg0, (CH3)2Hg, CH3HgX and HgX2 (where X symbolises a negatively charged counter ion in the form of a halide or hydroxyl ion). Gas chromatography hyphenated with inductively coupled plasma mass spectrometry (GC-ICPMS) was used for separation and detection of Hg(g) species. Permeation tubes were used for the generation of gaseous isotopically enriched Hg standards (tracers). These tracers were continuously added to the sample gas stream during sampling of Hg(g). A mobile prototype apparatus incorporating both the permeation source and a sampling unit for collection of Hg(g) was developed and used for this purpose. Hg(g) species were pre-concentrated on Tenax TA and/or Carbotrap solid adsorbents. Au-Pt was used for pre-concentration of total Hg(g), either as the primary medium or as backup. Collected species were eluted from these media and introduced to the instrument by thermal desorption. Various degrees of species transformation, as well as losses of analyte during pre-concentration and elution, were found to occur for both Tenax TA and Carbotrap. The performance characteristics of these media were shown to be species specific as well as matrix dependent. The development of an on-line derivatisation procedure minimised species transformations and reduced adsorption and memory effects of ionic Hg(g) species within the analytical system. In conclusion, IDA provides an important tool for the identification, minimisation and correction of the above-mentioned analytical problems. Furthermore, it offers significant advantages with respect to quality assurance compared to conventional techniques, both for the rational development of new methodology and for the continuous validation of existing procedures.
The developed methodology for speciation analysis of Hg(g) has been tested in various applications, including the determination of Hg(g) species concentrations in ambient air (both indoors and outdoors) and in the headspace of sediment microcosms. Hg(g) species were formed in the sediments as a result of naturally occurring redox and methylation processes, after addition of an isotopically enriched aqueous Hg(II) precursor. The methodology has also been used for assessing the risk of occupational exposure to Hg(g) species during remediation of a Hg contaminated soil, and for studying Hg0(g) transport and Au-Pt pre-concentration characteristics in natural gases. Hg0 was used as the model species in the latter experiments, since it is believed to be the dominant form of Hg(g) in natural gas. The results indicate that the occurrence of H2S can cause temperature dependent adsorption and memory effects of Hg0(g) in the presence of stainless steel, thereby providing a plausible explanation for the variability of results occasionally observed for sour gases in the field. Hg0(g) showed consistently high recovery during collection on Au-Pt tubes for all gases tested in this thesis. Recent (unpublished) results, however, indicate that there are potential species specific and matrix dependent effects associated with the Au-Pt based pre-concentration of Hg(g) in natural gases.
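The isotope dilution principle underlying IDA can be illustrated with a simplified two-isotope calculation. All ratios and amounts below are hypothetical, not thesis data, and real Hg work involves several isotopes plus mass-bias correction:

```python
# Simplified single isotope dilution sketch (hypothetical values).
# R is the ratio of an enriched isotope (e.g. 201Hg) to a reference
# isotope (e.g. 202Hg); subscripts: x = sample, y = spike, b = blend.

def sample_reference_isotope_amount(n_y_ref, r_x, r_y, r_b):
    """Moles of the reference isotope contributed by the sample:
    n_x = n_y * (R_y - R_b) / (R_b - R_x).
    Derived from the blend ratio being a weighted mean of R_x and R_y."""
    return n_y_ref * (r_y - r_b) / (r_b - r_x)

# Hypothetical numbers: 1.0 nmol of the reference isotope added as spike,
# natural ratio 0.40, spike ratio 50.0, blend measures 2.0
n_x = sample_reference_isotope_amount(1.0, r_x=0.40, r_y=50.0, r_b=2.0)
# n_x = (50.0 - 2.0) / (2.0 - 0.40) * 1.0  ->  approx 30.0 nmol
```

Because the result depends only on measured ratios and the known spike amount, no external calibration curve is needed, which is the quality-assurance advantage the abstract refers to.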
|
104 |
Corporate Equity Warrant: Pricing Arbitrage-Free ed Implicazioni per la Finanza Aziendale / Corporate Equity Warrants: Arbitrage-Free Pricing and Implications for Corporate Finance BARBI, MASSIMILIANO 06 March 2009 (has links)
I corporate equity warrant rappresentano un affascinante metodo di finanziamento “ibrido” disponibile per le imprese. In prima approssimazione, un warrant è assimilabile ad una opzione call e, pertanto, il pricing è spesso effettuato applicando le formule di valutazione sviluppate per tali strumenti dalla teoria finanziaria. Tuttavia, la valutazione dei warrant presenta complicazioni ulteriori rispetto alla determinazione del prezzo di opzioni call, e la ragione risiede principalmente in alcuni elementi distintivi di maggiore complessità, tra cui l’effetto diluitivo del capitale esistente derivante dall’esercizio, ed il c.d. effetto “risk-shifting”, in base al quale si verifica un trasferimento di rischio sistematico dagli azionisti ai possessori di warrant, non appena questi strumenti vengono emessi.
L’obiettivo di questa tesi è di analizzare il tema dell’emissione dei corporate warrant dal punto di vista della finanza d’impresa e derivare un metodo di pricing innovativo per tener conto di un fenomeno (risk-shifting effect) tuttora non considerato dalla letteratura finanziaria. Dopo aver derivato formalmente tale approccio e le formule ad esso conseguenti, il lavoro propone una simulazione teorica ed un test empirico condotto su un campione di warrant quotati sul mercato italiano. Entrambe tali verifiche dimostrano come il modello presentato incorpori una maggiore bontà previsiva del prezzo di mercato rispetto agli approcci esistenti. / Corporate equity warrants are one of the most fascinating capital-raising tools available to corporate finance officers. To a first approximation, they are option-like securities and, given this similarity, their pricing is usually performed by applying standard option pricing theory. However, the theoretical and empirical analysis of warrants remains an interesting research field within the finance literature, because warrants are more complex than call options. From an asset pricing point of view, the presence of some specific features (e.g., the equity dilution) prevents the use of simple plain-vanilla formulas, while from a corporate finance standpoint, warrants carry several implications, principally because they affect the systematic risk of common stocks and are related to the choice of the firm’s capital structure.
The purpose of this thesis is to analyse corporate warrants and address some of the main open questions about their value. In particular, after reviewing the financial literature on warrant pricing and presenting some commonly accepted formulas, the relationship between warrants and the volatility of the underlying stock return is examined. Unlike classical call options, warrants affect the capital structure of the issuing firm and produce a risk-shifting effect among equity claimants. We derive an alternative approach to pricing equity warrants that embeds this risk-shifting feature, and we propose both a theoretical simulation and an empirical test, based on a sample of Italian warrants, that demonstrate its accuracy.
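For context, here is a minimal sketch of the standard dilution-adjusted valuation that such work starts from: a Black-Scholes call scaled by the dilution factor N/(N+M), with the observed stock price standing in for the unobservable firm value per share (a common simplification). It does not include the risk-shifting correction the thesis derives, and all inputs are illustrative:

```python
# Sketch of a dilution-adjusted warrant value (illustrative inputs only).
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    """Black-Scholes European call on a non-dividend-paying stock."""
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

def warrant_value(s, k, r, sigma, t, n_shares, m_warrants):
    """Each exercised warrant creates a new share, diluting the equity,
    so the call value is scaled by N / (N + M)."""
    dilution = n_shares / (n_shares + m_warrants)
    return dilution * bs_call(s, k, r, sigma, t)

w = warrant_value(s=10.0, k=9.0, r=0.03, sigma=0.30, t=2.0,
                  n_shares=1_000_000, m_warrants=100_000)
```

The dilution factor makes the warrant strictly cheaper than the otherwise identical call, which is the baseline effect the risk-shifting analysis then refines.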
|
105 |
Application of Speciated Isotope Dilution Mass Spectrometry to the Assessment of Human Health and Toxic Exposure Fahrenholz, Timothy 19 February 2012 (has links)
Previous work by our research group demonstrated that quantitative chemical analysis of analytes, such as mercury and chromium species, in environmental matrices could be successfully carried out without using calibration curves and with correction for species interconversion by using EPA Method 6800A. This method encompasses isotope dilution mass spectrometry (IDMS) and speciated isotope dilution mass spectrometry (SIDMS), both of which are described in detail in chapter 1. Research described in this dissertation expands upon our earlier work by applying the method to the speciation of mercury in biological matrices, the speciation of glutathione in red blood cells and whole blood, and the analysis of enzyme activity in mammalian tissue. / Bayer School of Natural and Environmental Sciences; / Chemistry and Biochemistry; / PhD; / Dissertation;
|
106 |
Inferred Weak Rock Mass Classification for Stope Design 2013 July 1900 (has links)
Empirical design methods are commonly used for rock mechanics evaluations, and an appropriate method of rock mass classification is required to use them. Rock mass classification methods are, however, limited when access to the ore zone is restricted.
The Cameco Corporation Eagle Point Mine in northern Saskatchewan, Canada, uses the longhole open stope mining method for the recovery of uranium ore. The Modified Dilution graph is used for the prediction of stope hanging wall dilution. The mine currently uses a rock mass classification based on an estimate of the alteration and strength of a rock mass from geological drift mapping. Since this method is highly subjective, point load testing of diamond drill hole core was completed in an attempt to correlate the alteration and strength of different rock types and remove the user subjectivity. The results of the testing indicated a general trend of decreasing rock strength with increasing alteration, albeit with considerable scatter.
A repeatable, standardized method of evaluating the stope geometry and inferred rock mass classification for reconciliation purposes was developed. The standardized stope evaluation method removes significant subjectivity currently involved in estimates of stope geometries and the magnitude of dilution. A new lithology based method for interpreting the mine specific geological alteration and strength classification system was developed based on several sources of rock mass classification observations. This resulted in a correlation linking individual rock mass property descriptions between different classification systems for an improved estimate of the Q’ classification value. This improved method of estimating the rock classification Q’ value, as well as conventional techniques for linking classification systems, was used in a stope reconciliation process to predict open stope dilution.
Twenty-seven stope reconciliation case histories were documented and used to compare predicted and measured dilution, based on three different approaches for estimating rock mass classification values. The results showed a minor improvement in dilution prediction using the approach developed in this study. The systematic stope reconciliation and rock mass classification approach did highlight areas in the weak pegmatoidal rocks where improved rock classification estimates should be investigated.
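For reference, the modified rock quality index Q' used with stability graph methods such as the Modified Dilution graph is commonly computed as Q' = (RQD/Jn) x (Jr/Ja), i.e. the Q system with the stress and water factors set to one. A sketch with hypothetical inputs, not values from the case histories:

```python
# Modified rock quality index Q' (stability graph form; hypothetical inputs).

def q_prime(rqd, jn, jr, ja):
    """RQD: rock quality designation (%), Jn: joint set number,
    Jr: joint roughness number, Ja: joint alteration number."""
    return (rqd / jn) * (jr / ja)

# Hypothetical weak, altered hanging-wall rock:
q = q_prime(rqd=40.0, jn=9.0, jr=1.5, ja=4.0)   # approx 1.67 (a "poor" rock mass)
```

Improving the estimate of inputs like Ja for altered rock is exactly where a correlation between mapped alteration classes and measured strength would feed in.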
|
107 |
Isolation and Characterization of Uncultured Freshwater Bacterioplankton from Lake Ekoln and Lake Erken through Dilution-to-Extinction Approach and Molecular Analysis Tools Zhang, Jiazhuo January 2012 (has links)
Not many of the abundant freshwater bacterial groups have a representative cultured isolate. In this master thesis project, some abundant bacterioplankton from two lakes (Lake Ekoln and Lake Erken) could be isolated by a dilution-to-extinction approach. Sterilized lake water, obtained through an ultrafiltration system, was used as a medium resembling the natural environment. Specific fragments of the 16S rRNA gene of the isolates were amplified with universal bacterial primers (27f and 1492r; 341f and 805r) for genotyping against a freshwater sequence database and the RDP training set (Version 7). A total of 33 isolates from the two lakes were taxonomically classified, revealing the isolation of typical and abundant freshwater bacteria. The original bacterial community of Lake Ekoln was also analyzed through 16S rRNA clone library construction for a diversity study. Phylogenetic trees were built with the neighbor-joining method in MEGA (Version 5) to reveal the evolutionary relationships among database entries, obtained isolates and clones.
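The arithmetic behind a dilution-to-extinction setup can be sketched as follows, with hypothetical cell counts rather than values from the project:

```python
# Illustrative dilution-to-extinction arithmetic (hypothetical values):
# dilute a natural sample so each inoculated well receives, on average,
# only a few cells, so that many wells grow up from a single cell.

def dilution_factor(cells_per_ml, target_cells, inoculum_ml):
    """Fold-dilution needed so that inoculum_ml carries ~target_cells cells."""
    return cells_per_ml * inoculum_ml / target_cells

# Lake water at ~1e6 cells/mL, 1 mL inoculum per well, aim for ~2 cells/well
factor = dilution_factor(1e6, target_cells=2.0, inoculum_ml=1.0)   # 5e5-fold
```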
|
108 |
On-line HPLC Forss, Erik January 2012 (has links)
In order to increase the analysis frequency, and thereby achieve a better understanding of the kinetics and dynamics of the chemical process without increasing the workload of the already strained analytical laboratory at Cambrex Karlskoga AB, this project's goal was to investigate whether a crude prototype for mobile on-line HPLC analysis with automatic sampling and dilution could be built based on certain flow-injection analysis techniques. It was possible to achieve dilution with good repeatability, even though saturation effects in the filter proved problematic. Separation and dilution of a binary mixture was also successful as a proof-of-concept.
|
109 |
Balansgången mellan kommersiell framgång och exklusiv image: Att lyckas med varumärkesutvidgning nedåt av lyxvarumärken / The Balancing Act between Commercial Success and an Exclusive Image: Succeeding with Downward Brand Extension of Luxury Brands Zakharkina, Polina, Jansson, Christine January 2011 (has links)
Many luxury companies within the fashion industry today choose to extend their brands downwards in order to reach new customer segments and hence increase their profitability. A brand extension strategy that leverages the core values of the luxury brand is a new possibility for luxury brands to position themselves towards a broader customer base. Meanwhile, there is a risk that the extension dilutes the image of the luxury brand and has a negative effect on the company in the long term. Thus a tradeoff exists between becoming more accessible and maintaining the exclusivity of the luxury brand. The objective of this thesis is to investigate how luxury brands that perform downward brand extensions to reach new markets can succeed with this strategy without diluting the brand image. This is achieved by studying the perceptions of the new target segment towards the extension of luxury brands. The results of the study show that the risk of brand dilution is minimized when the core values of the luxury brand are transferred to the brand extension while the extension at the same time is successfully targeted towards the specific customer segment. / Många lyxföretag inom modebranschen idag väljer att använda sig av varumärkesutvidgning nedåt för att nå ut till nya kundsegment och därigenom öka sin lönsamhet. Varumärkesutvidgningsstrategin som drar nytta av lyxvarumärkets kärnvärden är ett nytt sätt för lyxvarumärken att positionera sig gentemot en bredare kundbas. Samtidigt finns dock risken att utvidgningen kan ge upphov till urvattning av lyxvarumärkets image samt skada varumärket och därigenom påverka företaget negativt på sikt. Det finns således en balansgång mellan tillgänglighet och exklusivitet för lyxvarumärken. Syftet med uppsatsen är att undersöka hur lyxvarumärken som utvidgar sig nedåt för att nå en ny marknad kan lyckas med denna strategi utan att samtidigt urvattna sitt varumärke. Detta genom att undersöka den nya målgruppens uppfattningar kring varumärkesutvidgning av lyxvarumärken. Studiens resultat visar att risken för varumärkesurvattning minimeras då lyxvarumärkets kärnvärden överförs till utvidgningen samtidigt som den nischas mot den specifika målgruppen.
|
110 |
Wireless Location Tracking Algorithms based on GDOP in the Mobile Environment Kuo, Ting-Fu 31 August 2011
This thesis explores wireless location tracking algorithms based on geometric dilution of precision (GDOP) in the mobile environment. GDOP can be used as an indication of positioning accuracy, as it is determined by the geometric relationship between the target and the sensing units: the smaller the GDOP, the better the positioning accuracy. Using the information from the sensing units and the time difference of arrival (TDOA) positioning method, we employ an extended Kalman filter as an estimator to track and predict the state of a moving target. Previous research has shown that in a 2-D scenario with numerous sensing units, the GDOP is lowest, and the positioning accuracy therefore best, when the target lies at the center of a regular polygon formed by the sensing units. It is thus important to find the best locations for the sensing units. Simulated annealing was used for this in previous studies; however, it only finds one location at a time and consumes considerable computation and time. For these reasons, we propose a location tracking system consisting of a base transceiver station and numerous mobile sensing units. Using the information from the base transceiver station and the predicted position of the target, we obtain the best locations for all the mobile sensing units through the calculation of a rotation matrix. These locations can also serve as beacons for relocating the mobile sensing units. Since it may take many cycles to move the mobile sensing units to the best locations, path planning is required, and as the target moves, the routes must be adjusted accordingly. If the predicted waypoint of a mobile sensing unit falls inside an obstacle, its route is adjusted so that it stays outside the obstacle. We therefore also propose a path planning scheme that lets the mobile sensing units avoid obstacles. Simulations show that the proposed method decreases the tracking time effectively and finds the best locations precisely.
As the mobile sensing units move toward the best locations, they successfully avoid obstacles while approaching the positions of minimum GDOP, so that good positioning accuracy is maintained throughout.
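A hedged sketch of the GDOP computation itself (position-only, 2-D, with hypothetical coordinates): the thesis's TDOA formulation builds its geometry matrix from differences of line-of-sight unit vectors, while the simpler TOA-style form below uses the unit vectors directly for illustration.

```python
# 2-D position-only GDOP sketch: GDOP = sqrt(trace((H^T H)^-1)), where the
# rows of H are unit line-of-sight vectors from the target to each sensing
# unit. Coordinates below are hypothetical.
import math
import numpy as np

def gdop_2d(sensor_xy, target_xy):
    rows = []
    for sx, sy in sensor_xy:
        dx, dy = sx - target_xy[0], sy - target_xy[1]
        r = math.hypot(dx, dy)
        rows.append([dx / r, dy / r])   # unit line-of-sight vector
    h = np.array(rows)
    return math.sqrt(np.trace(np.linalg.inv(h.T @ h)))

# Three units at the vertices of an equilateral triangle around the target:
# the regular-polygon geometry cited above as optimal.
units = [(math.cos(a), math.sin(a)) for a in (0.0, 2*math.pi/3, 4*math.pi/3)]
g = gdop_2d(units, (0.0, 0.0))   # sqrt(4/3), approx 1.155, for this geometry
```

Evaluating this quantity at candidate unit placements is what lets an optimizer (or the rotation-matrix construction above) rank geometries by expected positioning accuracy.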
|