About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. It is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11. Approximate Data Analytics Systems

Le Quoc, Do, 22 March 2018 (PDF available)
Today, most modern online services use big data analytics systems to extract useful information from raw digital data. The data normally arrives as a continuous stream, at high speed and in huge volumes, and the cost of handling this massive data can be significant. Providing interactive latency is often impractical because the data grows exponentially, even faster than Moore's law predicts. To overcome this problem, approximate computing has recently emerged as a promising solution. Approximate computing is based on the observation that many modern applications are amenable to an approximate rather than an exact output. Unlike traditional computing, approximate computing trades some accuracy for lower latency by computing over a subset of the input data instead of all of it. Unfortunately, advances in approximate computing have been geared primarily toward batch analytics and cannot provide low-latency guarantees for stream processing, where new data continuously arrives as an unbounded stream. In this thesis, we design and implement approximate computing techniques for processing and interacting with high-speed, large-scale stream data to achieve low latency and efficient resource utilization. To achieve these goals, we designed and built the following approximate data analytics systems:

• StreamApprox: a data stream analytics system for approximate computing. It supports approximate computing for low-latency stream analytics transparently and can adapt to rapid fluctuations in the input streams. For this system, we designed an online adaptive stratified reservoir sampling algorithm that produces approximate output with bounded error.
• IncApprox: a data analytics system for incremental approximate computing. It combines approximate and incremental computing in stream processing to achieve high throughput and low latency with efficient resource utilization. For this system, we designed an online stratified sampling algorithm that uses self-adjusting computation to produce an incrementally updated approximate output with bounded error.
• PrivApprox: a data stream analytics system for privacy-preserving approximate computing. It supports high-utility, low-latency data analytics while preserving users' privacy, by combining privacy-preserving data analytics with approximate computing.
• ApproxJoin: an approximate distributed join system. It improves the performance of joins, which are critical but expensive operations in big data systems. Here we employed a sketching technique (Bloom filters) to avoid shuffling non-joinable data items across the network, and proposed a novel sampling mechanism that executes during the join to obtain an unbiased, representative sample of the join output.

Our evaluation on micro-benchmarks and real-world case studies shows that these systems achieve significant speedups over state-of-the-art systems while tolerating negligible accuracy loss in the analytics output. In addition, our systems let users systematically trade accuracy against throughput and latency, and require no or only minor modifications to existing applications.
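The stratified reservoir sampling underlying StreamApprox and IncApprox can be illustrated with a minimal sketch: one fixed-size uniform reservoir per stratum (sub-stream), with a population-weighted estimator over the samples. This is plain per-stratum reservoir sampling (Vitter's Algorithm R), not the adaptive, error-bounded variant designed in the thesis; the class and method names here are ours, for illustration only.

```python
import random
from collections import defaultdict

class StratifiedReservoir:
    """Keep a fixed-size uniform reservoir per stratum of a keyed stream.

    Illustrative sketch only: the thesis' online *adaptive* algorithm
    additionally adjusts reservoir sizes to fluctuating stratum rates.
    """

    def __init__(self, size_per_stratum, seed=None):
        self.k = size_per_stratum
        self.rng = random.Random(seed)
        self.reservoirs = defaultdict(list)   # stratum -> current sample
        self.counts = defaultdict(int)        # stratum -> items seen so far

    def add(self, stratum, item):
        self.counts[stratum] += 1
        r = self.reservoirs[stratum]
        if len(r) < self.k:
            r.append(item)
        else:
            # Algorithm R: keep the new item with probability k / n.
            j = self.rng.randrange(self.counts[stratum])
            if j < self.k:
                r[j] = item

    def estimate_sum(self):
        # Scale each stratum's sample mean by that stratum's true count.
        total = 0.0
        for stratum, r in self.reservoirs.items():
            if r:
                total += self.counts[stratum] * (sum(r) / len(r))
        return total
```

Because each stratum is sampled independently, a burst in one sub-stream cannot crowd the others out of the sample, which is what makes stratification attractive for skewed, fluctuating input streams.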
13. Optimization and performance of grinding circuits: the case of Buzwagi Gold Mine (BGM)

Wikedzi, Alphonce Wendelin, 19 April 2018 (PDF available)
Buzwagi Gold Mine (BGM), operated by Acacia Mining, is located in the Lake Victoria Goldfields of central Tanzania. The mine has been in operation since April 2009 and treats a sulphide copper-gold ore to produce gold as doré bars and a concentrate containing gold, copper and silver. The BGM comminution circuit comprises a primary crushing stage with a gyratory crusher and two grinding circuits using a semi-autogenous grinding (SAG) mill and a ball mill. The SAG mill circuit also includes a single-deck screen and a cone crusher, while the ball mill circuit uses hydrocyclones. Currently, the grinding circuits fail to achieve the targeted product fineness of xP,80 = 125 µm even at low to normal throughputs (450-600 t/h). An evaluation and optimization study of circuit performance was therefore conducted to improve product fineness through circuit surveys, laboratory experiments and simulations. In three full-scale sampling campaigns, size distributions and solids contents of the samples were determined at selected points in the circuit. Several types of breakage tests were also conducted: standard Bond tests to determine ore grindability and work indices, batch grinding tests to determine breakage and selection function parameters, and standard ball mill tests for mineral liberation characterization with an automated mineral liberation analyzer (MLA). The tests covered a size range from 0.063 to 2 mm. A mass balance of the circuit was then calculated, and models for the mills, screens and hydrocyclones were employed in MODSIM (version 3.6.24). Simulations were first conducted to optimize the existing plant; the options evaluated included reducing the SAG screen aperture, adjusting the cyclone feed solids content, and reducing the vortex finder and apex diameters. Simulations were also run for possible modifications of the existing circuit: partially splitting the cyclone underflow back to the SAG mill, introducing a second classification stage, and introducing a second ball mill.

The evaluation of the breakage tests and survey data revealed the following. The Bond work index of the current ore ranges between 17.20 and 18.70 kWh/t, compared with the 14.50-16.50 kWh/t estimated during plant design, indicating that the ore has hardened over the last seven years. Harder ore means more energy is required for efficient operation, and consequently higher costs; a periodic review of ore hardness is therefore recommended for the ongoing mining operation. This will help establish better blends and predict appropriate tonnages for the existing ore types, so that they can be treated efficiently by the available plant. The work indices of the ore blends treated during the surveys correlated strongly and linearly with their quartz content (R² = 0.95); the work index of BGM ore could therefore be predicted from the known quartz content of the material, and the model could serve as a control tool for monitoring hardness variation in the SAG mill feed. The mineral liberation studies indicated that the valuable phase (pyrite-pyrrhotite) can be liberated at relatively coarse particle sizes (200-400 µm). This implies that the gravity circuit of the BGM operation should pose no efficiency problem, since the gold contained in pyrite-pyrrhotite can be concentrated easily; flotation and cyanidation, however, will still require a finer feed. Overall, the liberation characteristics of the surveyed ore blends showed only minor differences. Bond efficiency factors of 48-61 % were obtained for the BGM grinding circuit, indicating inefficient operation and suggesting that the targets could be met by lowering the throughput.

The SAG mill circuit was further characterized by a fluctuating feed size of xF,80 = 102 to 185 mm. Control of the feed size and blending ratios was recommended for efficient operation in terms of throughput and final product size; this could be achieved by monitoring the primary crusher performance more closely and properly controlling the ratios of the SAG mill feeders drawing ore from the stockpile. Ball mill grinding efficiency was poor, as indicated by a fraction < 125 µm of only 5-9 %, or xP,80 > 400 µm, in the mill discharge. This was attributed to poor hydrocyclone performance, characterized by high feed solids content and coarse overflow (xP,80 > 200 µm) and cut sizes (xT > 200 µm). Simulation and optimization of the existing design improved the product fineness to 327 µm, achieved by modifying the operating conditions: reducing the SAG screen aperture from 12 mm to 10 mm, the vortex finder from 280 mm to 270.3 mm and the apex diameter from 150 mm to 145.6 mm, and raising the cyclone feed solids content from 66.7 to 67.1 %. From this result it was concluded that the current equipment cannot achieve the target product quality (xP,80 = 125 µm). Further simulations of flowsheet modifications showed that a second ball mill in series configuration can achieve the desired product fineness and increase throughput from 618 t/h to 780 t/h. Although the circulating load rises to approximately 500 % in this configuration, the benefits outweigh this drawback. This option is, however, cost-intensive and should be considered a long-term solution, subject to a cost-benefit analysis.

Finally, optimization of the existing design is recommended as the short-term solution for improving the BGM operation. Although the fineness achieved (xP,80 = 327 µm) is still coarse compared with the target (xP,80 = 125 µm), it brings the additional advantage of better hydrocyclone performance in terms of overflow product (xP,80 = 105 µm vs. > 240 µm), cut size (xT = 133.1 µm vs. > 220 µm) and circulating load (CL = 350 %). The finer overflow will improve the efficiency of the downstream processes.
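The reported work indices plug into Bond's third law of comminution, W = 10·Wi·(1/√xP,80 − 1/√xF,80), with sizes in µm and W in kWh/t, which makes the cost of the ore hardening from ~15 to ~18 kWh/t concrete. A quick check, using the surveyed work index; the ball-mill feed size F80 = 2000 µm is an assumption for illustration, not a value from the abstract:

```python
import math

def bond_energy(wi_kwh_t, f80_um, p80_um):
    """Specific grinding energy (kWh/t) via Bond's third law.

    wi_kwh_t: Bond work index; f80_um / p80_um: 80 %-passing
    feed and product sizes in micrometres.
    """
    return 10.0 * wi_kwh_t * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

# Surveyed work index (17.2-18.7 kWh/t, here 18) and an *assumed*
# feed F80 of 2000 um; compare the 125 um target with the 327 um achieved:
e_target = bond_energy(18.0, 2000.0, 125.0)   # ~12.1 kWh/t
e_actual = bond_energy(18.0, 2000.0, 327.0)   # ~5.9 kWh/t
```

The gap between the two figures shows why the circuit cannot reach the 125 µm target at the current throughput: roughly twice the specific energy is needed per tonne, which is exactly the room a second ball mill in series provides.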
15. Silicone-rod passive samplers for hydrophobic organic compounds: uptake kinetics, partition coefficients, modelling and field calibration (Silikonstab-Passivsammler für hydrophobe Organika)

Gunold, Roman, 23 March 2016 (PDF available)
This dissertation deals with the passive sampling of hydrophobic organic pollutants in surface waters: polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), organochlorine pesticides (including HCH and DDX) and other hydrophobic pesticides. The aim of this work was to validate the silicone rod as an alternative water-monitoring method to conventional sampling techniques such as grab and weekly composite samples of the water phase and suspended-particulate-matter analyses. Sampling with the silicone rod was carried out by exposing it in the water body for periods of between one week and several months. After retrieval, the pollutants (analytes) accumulated in the silicone rod were quantified by instrumental analysis. Sample introduction required no prior solvent extraction: the silicone rod was heated directly, desorbing the analytes from the polymer (thermodesorption). The heat-released analytes were transferred directly onto a chromatographic separation column and quantified by mass spectrometry. Once the silicone-rod analysis results are available, there are several approaches to calculating the time-weighted average analyte concentrations in the water body, which are presented and discussed in this work. These include the use of experimental data from calibration experiments, and calculations based on physicochemical properties of the analytes such as the sampler-water partition coefficient. In the course of this work, the uptake kinetics of the silicone rod were investigated in calibration experiments at different temperatures and flow velocities. The experimental data obtained were used to develop computational models intended to predict the uptake behaviour. Sampler-water partition coefficients for the silicone rod were determined, among other methods, by the cosolvency method, and were used as parameters for calculating time-weighted average analyte concentrations in the water. For validation, the silicone rod was exposed in flow-through vessels at two water-quality monitoring stations on the rivers Mulde (Dessau) and Elbe (Magdeburg), and the time-weighted average analyte concentrations were determined with various computational models. The values obtained are compared with weekly composite water samples and monthly suspended-particulate-matter samples taken in parallel, and the suitability of the silicone rod as an alternative sampling method for the environmental monitoring of surface waters is discussed.
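The back-calculation from the mass accumulated in the sampler to a time-weighted average (TWA) water concentration is commonly done with the standard first-order uptake model from the passive-sampling literature. A sketch with hypothetical numbers follows; the specific models and parameter values used in the dissertation may differ:

```python
import math

def twa_concentration(n_ng, rs_l_per_d, t_d, ksw, vs_l):
    """TWA water concentration (ng/L) from accumulated sampler mass.

    First-order uptake model: N = Cw * Ksw * Vs * (1 - exp(-Rs*t / (Ksw*Vs))).
    n_ng: analyte mass in the sampler (ng); rs_l_per_d: sampling rate (L/d);
    t_d: exposure time (d); ksw: sampler-water partition coefficient (L/L);
    vs_l: volume of the silicone sampling phase (L).
    """
    ksw_vs = ksw * vs_l
    return n_ng / (ksw_vs * (1.0 - math.exp(-rs_l_per_d * t_d / ksw_vs)))

# Hypothetical PAH-like case: 100 ng accumulated in 14 d at Rs = 0.1 L/d,
# log Ksw = 4.5 and a rod volume of 2e-5 L (all values illustrative):
cw = twa_concentration(100.0, 0.1, 14.0, 10 ** 4.5, 2e-5)
```

While the sampler is still in the integrative (linear-uptake) regime, i.e. Rs·t ≪ Ksw·Vs, the model reduces to the familiar Cw ≈ N / (Rs·t); near equilibrium it approaches Cw = N / (Ksw·Vs), which is why reliable Ksw values, such as those determined here by the cosolvency method, matter for long exposures.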
17. KARTOTRAK, integrated software solution for contaminated site characterization: presentation of 3D geomodeling software, held at IAMG 2015 in Freiberg

Wagner, Laurent, 03 November 2015 (PDF available)
Kartotrak software enables optimal waste classification and avoids unnecessary remediation. It has been designed for those involved in environmental site characterization projects - site owners, safety authorities and contractors - who need to locate contaminated soil and estimate its volume with confidence.
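The abstract does not describe Kartotrak's algorithms, but the last step of such a workflow, turning per-block concentration estimates into a contaminated soil volume, can be sketched as follows. The function and all values are hypothetical; Kartotrak itself derives the block estimates geostatistically (e.g. by kriging) together with their uncertainty:

```python
def contaminated_volume(block_concentrations, threshold, block_volume_m3):
    """Sum the volume of model blocks whose estimated concentration
    exceeds a remediation threshold.

    Illustrative sketch only: a real classification would also weigh
    the estimation uncertainty of each block, not just its estimate.
    """
    n_hot = sum(1 for c in block_concentrations if c > threshold)
    return n_hot * block_volume_m3

# Hypothetical 1 m^3 blocks with estimated contaminant levels:
blocks = [0.2, 1.5, 0.9, 3.1, 0.4, 2.2]
vol = contaminated_volume(blocks, threshold=1.0, block_volume_m3=1.0)  # -> 3.0
```

Classifying on estimates alone misclassifies blocks whose true value straddles the threshold, which is precisely why confidence in the volume estimate, as the blurb emphasizes, requires the uncertainty that geostatistical estimation provides.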
