101 |
A Study of the Effect of Three "Non-Rinsing" Compounds on the Tensile Strength of Cotton Percale. Bell, Mildred L. 06 1900 (has links)
The purpose of the present study is to determine the effect of three "non-rinse" washing compounds on the tensile strength of cotton percale, in order to provide a basis for recommendations on their use when teaching laundering to homemaking students and homemakers.
|
102 |
Oil spill: are we doing enough to avoid it? Pu, Jaan H. 19 April 2017 (has links)
Yes / This paper reviews recent studies on oil-spill identification and cleaning methods, as well as the consequences of oil spills. Future directions for oil-spill prevention research are also projected here.
|
103 |
An oral history of women cleaning workers in Hong Kong. Tang, Lynn; 鄧琳. January 2006 (has links)
published_or_final_version / abstract / Sociology / Master / Master of Philosophy
|
104 |
Modeling and Querying Uncertainty in Data Cleaning. Beskales, George. January 2012 (has links)
Data quality problems such as duplicate records, missing values, and violations of integrity constraints frequently appear in real-world applications. Such problems cost enterprises billions of dollars annually, and might have unpredictable consequences in mission-critical tasks. The process of data cleaning refers to detecting and correcting errors in data in order to improve the data quality. Numerous efforts have been made toward improving the effectiveness and efficiency of data cleaning.
A major challenge in the data cleaning process is the inherent uncertainty about the cleaning decisions that should be taken by the cleaning algorithms (e.g., deciding whether two records are duplicates or not). Existing data cleaning systems deal with the uncertainty in data cleaning decisions by selecting one alternative, based on some heuristics, while discarding (i.e., destroying) all other alternatives, which results in a false sense of certainty. Furthermore, because of the complex dependencies among cleaning decisions, it is difficult to reverse the process of destroying some alternatives (e.g., when new external information becomes available). In most cases, restarting the data cleaning from scratch is inevitable whenever we need to incorporate new evidence.
To address the uncertainty in the data cleaning process, we propose a new approach, called probabilistic data cleaning, that views data cleaning as a random process whose possible outcomes are possible clean instances (i.e., repairs). Our approach generates multiple possible clean instances to avoid the destructive aspect of current cleaning systems. In this dissertation, we apply this approach in the context of two prominent data cleaning problems: duplicate elimination, and repairing violations of functional dependencies (FDs).
First, we propose a probabilistic cleaning approach for the problem of duplicate elimination. We define a space of possible repairs that can be efficiently generated. To achieve this goal, we concentrate on a family of duplicate detection approaches that are based on parameterized hierarchical clustering algorithms. We propose a novel probabilistic data model that compactly encodes the defined space of possible repairs. We show how to efficiently answer relational queries using the set of possible repairs. We also define new types of queries that reason about the uncertainty in the duplicate elimination process.
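The idea of a repair space indexed by a clustering parameter can be sketched as follows. This is an illustrative toy, not the dissertation's actual model: the bigram similarity measure, the records, and the thresholds are all made up.

```python
from itertools import combinations

def bigrams(s):
    return {s[i:i + 2] for i in range(len(s) - 1)}

def sim(a, b):
    """Jaccard similarity over character bigrams (a toy measure)."""
    A, B = bigrams(a), bigrams(b)
    return len(A & B) / len(A | B)

def single_linkage(records, threshold):
    """One possible repair: merge records whose similarity >= threshold
    (single-linkage clustering via a simple union-find)."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in combinations(range(len(records)), 2):
        if sim(records[i], records[j]) >= threshold:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj

    clusters = {}
    for i, r in enumerate(records):
        clusters.setdefault(find(i), []).append(r)
    return sorted(sorted(c) for c in clusters.values())

records = ["john smith", "jon smith", "jane doe"]
# Each threshold yields one possible repair (a partition of the records);
# the space of repairs is indexed by the clustering parameter.
possible_repairs = {t: single_linkage(records, t) for t in (0.5, 0.8)}
```

At the looser threshold the two near-duplicate names merge into one cluster; at the tighter one every record stands alone, so both partitions are retained as alternative repairs rather than one being discarded.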
Second, in the context of repairing violations of FDs, we propose a novel data cleaning approach that allows sampling from a space of possible repairs. Initially, we contrast the existing definitions of possible repairs, and we propose a new definition of possible repairs that can be sampled efficiently. We present an algorithm that randomly samples from this space, and we present multiple optimizations to improve the performance of the sampling algorithm.
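A drastically simplified illustration of sampling one possible FD repair: for each left-hand-side group, pick one of the observed right-hand-side values at random. The dissertation's sampled repair space is more general; the relation and FD below are hypothetical.

```python
import random
from collections import defaultdict

def sample_fd_repair(tuples, lhs, rhs, rng):
    """Sample one possible repair for the FD lhs -> rhs: for each lhs
    value, pick one of the observed rhs values uniformly at random and
    rewrite every tuple in that group to use it."""
    groups = defaultdict(set)
    for t in tuples:
        groups[t[lhs]].add(t[rhs])
    choice = {k: rng.choice(sorted(v)) for k, v in groups.items()}
    return [{**t, rhs: choice[t[lhs]]} for t in tuples]

data = [
    {"zip": "53703", "city": "Madison"},
    {"zip": "53703", "city": "Madson"},   # violates zip -> city
    {"zip": "60601", "city": "Chicago"},
]
repair = sample_fd_repair(data, "zip", "city", random.Random(0))
```

Every sampled instance satisfies the FD by construction; re-running with a different seed may yield a different, equally possible repair.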
Third, we show how to apply our probabilistic data cleaning approach in scenarios where both data and FDs are unclean (e.g., due to data evolution or an inaccurate understanding of the data semantics). We propose a framework that simultaneously modifies the data and the FDs while satisfying multiple objectives, such as consistency of the resulting data with respect to the resulting FDs, (approximate) minimality of changes to data and FDs, and leveraging the trade-off between trusting the data and trusting the FDs. In the presence of uncertainty in the relative trust in data versus FDs, we show how to extend our cleaning algorithm to efficiently generate multiple possible repairs, each of which corresponds to a different level of relative trust.
|
105 |
Methode zur Analyse von Reinigungsprozessen in nicht immergierten Systemen der Lebensmittelindustrie [Method for the analysis of cleaning processes in non-immersed systems of the food industry]. Mauermann, Marc. 09 July 2012 (has links) (PDF)
The design of automatic cleaning processes in food processing is largely semi-empirical, and to guarantee the required product safety, the parameters cleaning frequency, cleaning duration, and chemical usage tend to be set too high. A better understanding of the cause-and-effect relationships in industrial cleaning processes would improve process design and lead to more efficient processes. The aim of this work is therefore to establish, with a novel investigation method, the prerequisites for analyzing cleaning processes in non-immersed systems. The work focuses on cleaning processes characterized by the direct impingement of a liquid jet on a flat surface. The first part works out both the state of knowledge and the open questions regarding the mechanisms of non-immersed cleaning processes. This is followed by a discussion of the industrial and laboratory methods described in the literature for investigating cleaning processes in non-immersed systems. Building on these findings, an investigation method based on the optical detection of fluorescence emissions was developed that enables direct, spatially and temporally resolved analysis of the cleaning progress. To verify the validity of the methodological approach, causal relationships between the operating parameters of the cleaning system and cleanability were examined.
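The core measurement idea, tracking cleaning progress from time-resolved fluorescence intensity, can be sketched as below. This is illustrative only: it assumes intensity is proportional to remaining soil mass (which in practice requires calibration), and the sample data are invented.

```python
def cleaning_curve(intensities):
    """Remaining-soil fraction over time from fluorescence intensities,
    normalized to the initial value (assumes intensity is proportional
    to soil mass; this proportionality must be verified by calibration)."""
    i0 = intensities[0]
    return [i / i0 for i in intensities]

def time_to_fraction(times, fractions, target=0.1):
    """First time at which the remaining fraction drops to the target
    (target=0.1 means 90 % of the soil has been removed)."""
    for t, f in zip(times, fractions):
        if f <= target:
            return t
    return None  # target never reached within the observation window

times = [0, 10, 20, 30, 40]                     # s, hypothetical
intensities = [200.0, 120.0, 60.0, 18.0, 5.0]   # arbitrary units
fractions = cleaning_curve(intensities)
t90 = time_to_fraction(times, fractions)        # time to 90 % cleaned
```

Comparing such curves across jet pressures, nozzle distances, or chemistries is one way the operating-parameter/cleanability relationships mentioned above could be quantified.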
|
106 |
Fluorocarbon Post-Etch Residue Removal Using Radical Anion Chemistry. Timmons, Christopher L. 14 December 2004 (has links)
During fabrication of integrated circuits, fluorocarbon plasma etching is used to pattern dielectric layers. As a byproduct of the process, a fluorocarbon residue is deposited on exposed surfaces and must be removed for subsequent processing. Conventional fluorocarbon cleaning processes typically include at least one plasma or liquid treatment that is oxidative in nature. Oxidative chemistries, however, degrade next-generation low-dielectric-constant (low-k) materials that are currently being implemented in fabrication processes. This work addresses the need for alternative fluorocarbon-residue removal chemistries that are compatible with next-generation low-k materials. Radical anion chemistries are known for their ability to defluorinate fluorocarbon materials by a reductive mechanism. Naphthalene radical anion solutions, generated using sodium metal, are used to establish cleaning effectiveness with planar model residue films. The penetration rate of the defluorination reaction into model fluorocarbon film residues is measured and modeled. Because sodium is incompatible with integrated circuit processing, naphthalene radical anions are alternatively generated using electrochemical techniques. Using electrochemically generated radical anions, residue removal from industrially patterned etch structures is used to evaluate the process cleaning efficiency. Optimization of the radical anion concentration and exposure time is important for effective residue removal. The efficiency of removal also depends on the feature spacing and the electrochemical solvent chosen. The synergistic combination of radical anion defluorination and wetting or swelling of the residue by the solvent is necessary for complete removal. In order to understand the interaction between the solvent and the residue, the surface and interfacial energies are determined using an Owens/Wendt analysis. These studies reveal chemical similarities between specific solvents and the model residue films. This approach can also be used to predict residue or film swelling by interaction with chemically similar solvents.
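The Owens/Wendt analysis mentioned above splits surface energy into dispersive and polar components; measuring contact angles of two probe liquids with known components yields a 2x2 linear system for the solid's components. A minimal sketch: the water and diiodomethane component values are common literature figures, but the contact angles here are hypothetical, not measurements from this work.

```python
import math

def owens_wendt(liquids):
    """Solve for a solid's surface-energy components (dispersive, polar)
    from contact angles of two probe liquids.
    Each liquid: (gamma_total, gamma_d, gamma_p, contact_angle_deg).
    Owens-Wendt relation, per liquid L on solid S:
        gamma_L * (1 + cos(theta)) = 2 * (sqrt(gd_S * gd_L) + sqrt(gp_S * gp_L))
    which is linear in x = sqrt(gd_S), y = sqrt(gp_S)."""
    rows = []
    for g, gd_l, gp_l, theta in liquids:
        rows.append((math.sqrt(gd_l), math.sqrt(gp_l),
                     g * (1 + math.cos(math.radians(theta))) / 2))
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1                 # Cramer's rule on the 2x2 system
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x * x, y * y                     # (dispersive, polar), mN/m

# Literature probe-liquid values (mN/m): water 72.8 (d=21.8, p=51.0),
# diiodomethane 50.8 (d=50.8, p=0.0). Contact angles below are hypothetical.
gd, gp = owens_wendt([(72.8, 21.8, 51.0, 70.0),
                      (50.8, 50.8, 0.0, 40.0)])
```

Comparing the resulting (dispersive, polar) pair of a residue film with those of candidate solvents is how chemical similarity, and hence likely wetting or swelling, can be judged.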
|
107 |
Design of regulated velocity flow assurance device for petroleum industry. Yardi, Chaitanya Narendra. 17 February 2005 (has links)
The petroleum industry faces problems in the transportation of crude petroleum because of the deposition of paraffins, hydrates, and asphaltenes on the insides of the pipeline. These are conventionally removed using either chemical inhibitors or mechanical devices, called pigs, which travel through the pipeline and mechanically scrape away the deposits. These pigs are propelled by the pipeline product itself and hence travel at the same velocity as the product. Research has indicated that cleaning would be better if the pigs traveled at a relatively constant velocity of around 70% of the product velocity.
This research utilizes the concept of regulating the bypass flow velocity in order to maintain the pig velocity. The bypass flow is regulated by the control unit based on feedback from the turbine flowmeter, which monitors the bypass flow. A motorized butterfly valve is used to control the bypass flow.
In addition to cleaning, the proposed pig carries on-board electronics, such as an accelerometer and pressure transducers, to store the data gathered during the pig run. This data can then be analyzed and the condition of the pipeline predicted.
Thus, this research addresses the problem of designing a pig that maintains a constant velocity in order to achieve better cleaning. It also helps gather elementary data that can be used to predict the internal conditions of the pipe.
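The feedback concept described above, throttling bypass flow to hold the pig near 70% of product velocity, can be caricatured with a simple proportional controller. The plant model (pig slowdown linear in valve opening) and the gain are entirely invented for illustration and bear no relation to the actual device's dynamics.

```python
def simulate_pig(product_velocity, target_ratio=0.7, kp=0.5, steps=200):
    """Toy proportional control loop: the butterfly-valve opening sets the
    bypass flow, and more bypass slows the pig relative to the product.
    The plant is a made-up linear model: pig_v = v_prod * (1 - 0.6*valve)."""
    target = target_ratio * product_velocity
    valve = 0.0                    # valve opening, 0 (closed) .. 1 (open)
    history = []
    for _ in range(steps):
        pig_v = product_velocity * (1.0 - 0.6 * valve)
        error = pig_v - target     # from the turbine-flowmeter feedback
        # Open the valve when the pig runs fast, close it when slow.
        valve = min(1.0, max(0.0, valve + kp * error / product_velocity))
        history.append(pig_v)
    return history

trace = simulate_pig(2.0)   # product at 2 m/s; target pig velocity 1.4 m/s
```

With these made-up numbers the loop settles at a valve opening of 0.5, holding the pig at 70% of the product velocity; the real design problem is, of course, dominated by the nonlinear hydraulics this sketch ignores.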
|
108 |
Nano-Particle Removal from Surface of Materials Used in EUV Mask Fabrication. Pandit, Viraj Sadanand. January 2006 (has links)
With device scaling, the current optical lithography technique is reaching its technological limit for printing small features. Extreme Ultra-Violet (EUV) lithography has shown promise to print extremely thin lines reliably and cost-effectively. Many challenges remain before introducing EUV to large-scale manufacturing. The main challenge addressed in this study is particle removal from EUV mask surfaces (CrON1, CrON2, and fused silica) and thermal oxide (SiO₂). Effective pre-clean procedures were developed for each surface. As chemical cleaning methods fail to meet SEMATECH criteria, the addition of megasonic energy to EUV mask cleaning baths is seen as a promising cleaning methodology. Because the requirement to print fine lines must be met, all materials used in EUV mask fabrication either absorb or reflect the incident EUV wavelength light. Therefore, the masks used in the industry will be reflective instead of the conventional transmissive masks. Also, for the same reason, no protective pellicle can be used, leaving all surfaces unprotected from particle contamination. To avoid the detrimental effects of particle contamination, a cleaning study for nano-particle removal was performed. A dark-field microscope was utilized to study the removal of gold nano-particles from surfaces. The cleaning procedures utilized H₂SO₄ and NH₄OH chemistries with and without megasonic irradiation. The cleaning variables were bath concentration, temperature, and megasonic power. The contamination variables were the gold nano-particle charge and size, from 40 nm to 100 nm. For 100 nm negatively charged gold nano-particles deposited on a CrON1 surface, a 1:10 H₂SO₄:DI bath at boiling temperature (101°C) without megasonics gave high particle removal efficiency (PRE) values, as did a 1:10 H₂SO₄:DI bath at 35°C with 100 W megasonics.
Comparison of removal of poly diallyl-dimethyl ammonium chloride (PDAC) coated and uncoated gold nano-particles deposited on a CrON1 surface using dilute H₂SO₄ baths indicated that the coated, positively charged nano-particles were more difficult to remove. PRE trends for different baths indicate surface dissolution (shown to be thermodynamically favorable) as the particle removal mechanism. However, experimental etch rates indicated minimal surface etching in a 10 minute bath. Increased surface roughness indicated possible local galvanic corrosion at particle sites. Low surface etching results meet SEMATECH requirements. During the fused silica surface cleaning study, particle charge (negative) and size (100 nm) of the contamination source and cleaning bath chemistry (NH₄OH) were kept constant. Low PREs were obtained at room temperature for all NH₄OH bath concentrations; however, high PREs were obtained at an elevated temperature (78°C) without megasonics and at room temperature in more dilute chemistries with megasonic power applied. Similar PRE trends were demonstrated for thermal SiO₂ surfaces. The experimental etch rates of the thermal SiO₂ agree with published values.
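The particle removal efficiency (PRE) used as the figure of merit throughout this study is simply the fraction of initially deposited particles that the cleaning step removes. A sketch with hypothetical dark-field particle counts (the numbers below are invented, not from the study):

```python
def particle_removal_efficiency(count_before, count_after):
    """PRE (%) = 100 * (particles removed) / (particles initially present),
    with counts taken over the same inspected area before and after cleaning."""
    if count_before == 0:
        raise ValueError("no particles deposited before cleaning")
    return 100.0 * (count_before - count_after) / count_before

# Hypothetical counts from dark-field images of one inspected area.
pre = particle_removal_efficiency(1250, 75)  # -> 94.0 %
```

Comparing PRE values across bath chemistry, concentration, temperature, and megasonic power is how the trends reported above are established.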
|
109 |
L'essai d'un nettoyeur de drains hydraulique [Testing of a hydraulic drain cleaner]. Laperrière, Lucie. January 1988 (has links)
No description available.
|