111 |
Extensions of the normal distribution using the odd log-logistic family: theory and applications / Extensões da distribuição normal utilizando a família odd log-logística: teoria e aplicações
Altemir da Silva Braga, 23 June 2017
The normal distribution is one of the most important in statistics. It is not, however, adequate for fitting data with asymmetry or bimodality, since it is completely characterized by its first two moments, the mean and the standard deviation. Many studies therefore aim at creating new families of distributions that can model asymmetry, kurtosis or bimodality. It is important that such new distributions have good mathematical properties and include the normal distribution as a submodel, yet few classes of distributions contain the normal as a nested model; among them are the skew-normal, beta-normal, Kumaraswamy-normal and gamma-normal distributions. In 2013 the odd log-logistic-G family was proposed as a way of generating new probability distributions. Using the normal and skew-normal distributions as baselines, this study proposes three new distributions and a fourth study with longitudinal data: (i) the odd log-logistic normal distribution, with theory and applications to data from designed experiments; (ii) the odd log-logistic Student t distribution, with theory and applications; (iii) the odd log-logistic skew-normal distribution, a new skew-bimodal distribution, with applications to data from designed experiments; and (iv) a regression model with random effects for the odd log-logistic skew-normal distribution, applied to longitudinal data.
These distributions are flexible with respect to asymmetry, kurtosis and bimodality. Several of their properties are derived, such as symmetry, the quantile function, some expansions, ordinary incomplete moments, mean deviations and the moment generating function. The flexibility of the new distributions is compared with that of the skew-normal, beta-normal, Kumaraswamy-normal and gamma-normal models. Model parameters are estimated by maximum likelihood. In the applications, regression models are fitted to data from completely randomized designs (CRD) and randomized block designs (DBC). In addition, simulation studies are carried out for the new models to verify the asymptotic properties of the parameter estimates, and quantile residuals and sensitivity analysis are proposed to detect extreme observations and assess the quality of the fits. The new models are thus supported by mathematical properties, computational simulation studies and applications to data from designed experiments, and can be used in completely randomized or randomized block experiments, particularly when the data show evidence of asymmetry, kurtosis or bimodality.
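For reference, the odd log-logistic-G construction referred to above is commonly written as follows; this is a sketch using the standard published parameterization, stated here as an assumption rather than quoted from the thesis. Given a baseline cdf G(x) with density g(x) and a shape parameter a > 0,

$$F(x) = \frac{G(x)^{a}}{G(x)^{a} + [1-G(x)]^{a}}, \qquad
f(x) = \frac{a\, g(x)\, \{G(x)[1-G(x)]\}^{a-1}}{\{G(x)^{a} + [1-G(x)]^{a}\}^{2}},$$

so that a = 1 recovers the baseline distribution. Taking G = Φ (the standard normal cdf) gives the odd log-logistic normal case, while a skew-normal baseline yields the skew-bimodal case discussed above.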
|
112 |
Robust gamma generalized linear models with applications in actuarial science
Wang, Yuxi, 09 1900
Generalized linear models (GLMs) form one of the most popular classes of models in statistics. This class contains a large variety of commonly used regression models, such as normal linear regression, logistic regression and gamma GLMs. In GLMs, the distribution of the response variable belongs to an exponential family. A drawback of these models is that they are not robust against outliers. For models like normal linear regression and gamma GLMs, the non-robustness is a consequence of the exponential tails of the densities: when the trend in the bulk of the data differs from that of the outliers, inference and prediction are biased.
To our knowledge, there is no Bayesian robust approach specifically for GLMs. The most popular method is frequentist, namely that of Cantoni and Ronchetti (2001). Their approach adapts the robust M-estimators for linear regression to the context of GLMs. However, their estimator is derived from a modification of the derivative of the log-likelihood rather than from a modification of the likelihood itself (as with robust M-estimators for linear regression). As a consequence, it is not possible to establish a clear correspondence between the modified function to optimize and a model. Having a robust model brings two advantages: it allows the modelling to be understood and interpreted, and it allows for both frequentist and Bayesian analysis. The method we propose is based on ideas from Bayesian robust linear regression. We adapt the approach of Gagnon et al. (2020), which consists of using a modified normal distribution with heavier tails for the error term. In our context, the distribution of the response variable is a modified version in which the central part of the density is kept as is, while the extremities are replaced by log-Pareto tails behaving like (1/|x|)(1/ log |x|)^λ. The focus of this thesis is on gamma GLMs. Performance is assessed both theoretically and empirically, with an analysis of hospital cost data.
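To make the tail construction above concrete, one schematic way of writing such a density is shown below; the matching point τ and the continuity factor are illustrative assumptions of ours, not the thesis's specification:

$$\tilde f(y) \propto
\begin{cases}
f(y), & |y| \le \tau,\\[4pt]
f(\tau)\,\dfrac{\tau}{|y|}\left(\dfrac{\log \tau}{\log |y|}\right)^{\lambda}, & |y| > \tau,
\end{cases}$$

where f is the original light-tailed density (normal or gamma, say), τ > 1 marks where the exponential tail is replaced, and λ > 1 guarantees integrability. The replaced tail decays like (1/|y|)(1/ log |y|)^λ, slowly enough that a single extreme observation cannot dominate the likelihood or the posterior.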
|
113 |
Agglomerationstechnologien für Reststoffe aus Midrex-Direktreduktionsanlagen / Agglomeration technologies for residues from Midrex direct reduction plants
Lohmeier, Laura, 31 May 2023
The production of direct reduced iron in the Midrex direct reduction process generates numerous iron-bearing, fine-grained metallurgical residues. To avoid landfilling these residues, two different routes for processing them by briquetting were tested in laboratory trials and evaluated with respect to the resulting properties. Variant I comprises briquetting the residues for reuse as feed material in the Midrex direct reduction process. Variant II investigates incorporating the residues into the hot briquetting of the reduced pellets into hot briquetted iron (HBI), which takes place in any case. For both variants, this work shows that briquettes with adequate mechanical, thermal and metallurgical properties can be produced with suitable mixture compositions and briquetting conditions. The underlying bonding mechanisms are assessed qualitatively by means of optical microscopy, Vickers hardness measurements and SEM/EDX analyses.
Table of contents:
1 Introduction
2 State of the art
2.1 Direct reduction, the Midrex process and the residue problem
2.2 Agglomeration of iron-bearing residues and fine ores
2.2.1 Briquetting of metallurgical residues from the Midrex process
2.2.2 Briquetting of residues from the blast furnace
2.2.3 Agglomeration of fine ores by pelletizing
2.2.4 Sintering of fine ores
2.2.5 Selection of a suitable agglomeration method
2.2.6 Requirements for briquettes used in the Midrex process
2.2.7 Requirements for HBI briquettes
2.3 Objectives
3 Application-related fundamentals
3.1 Bonding mechanisms
3.2 Press compaction
3.3 Binders
3.3.1 Preliminary remarks
3.3.2 Bentonite
3.3.3 Starch and cellulose
3.3.4 Sulphite spent liquors
3.3.5 Cement
3.3.6 Hydrated lime
4 Experimental setup and procedure
4.1 Characterization of the feed materials and binders
4.1.1 Metallurgical residues
4.1.2 Iron ore pellets
4.1.3 DRI pellets
4.1.4 Binders
4.2 Briquetting with binders
4.2.1 Statistical design of experiments
4.2.2 Mixing
4.2.3 Preheating
4.2.4 Briquetting
4.2.5 Curing and storage
4.2.6 Mechanical and metallurgical assessment of the briquettes
4.3 Hot briquetting
4.3.1 Mixing
4.3.2 Heating
4.3.3 Briquetting
4.3.4 Assessment of briquette properties
4.4 Examination of the bonding mechanisms
5 Results and discussion
5.1 Briquetting of the residue mixture with binders (Variant I a)
5.1.1 Mechanical properties of the briquettes
5.1.2 Metallurgical properties of the briquettes
5.1.3 Chemical properties of the residue briquettes with binders
5.1.4 Summary of briquetting with binders
5.2 Hot briquetting of the residue mixture (Variant I b)
5.2.1 Influence of the mixture composition
5.2.2 Influence of the pressing conditions
5.2.3 Microscopic examination
5.2.4 Summary of hot briquetting of the residue mixture
5.3 Joint briquetting of the residue mixture with DRI pellets (Variant II)
5.3.1 Influence of the residue mixture on HBI quality
5.3.2 Influence of the pressing conditions (preheating temperature, pressing pressure)
5.3.3 Microscopic examination
5.3.4 Summary of briquetting the residue mixture with DRI pellets
5.4 Assessment of the methods used to clarify the bonding mechanisms
5.5 Comparative assessment of the different utilization variants
6 Summary and outlook
7 References
List of figures
List of tables
List of abbreviations and symbols
Appendix
|
114 |
Data-Driven Diagnosis For Fuel Injectors Of Diesel Engines In Heavy-Duty Trucks
Eriksson, Felix; Björkkvist, Emely, January 2024
The diesel engine in heavy-duty trucks is a complex system with many components working together, and a malfunction in any of these components can impact engine performance and result in increased emissions. Fault detection and diagnosis have therefore become essential in modern vehicles, ensuring optimal performance and compliance with progressively stricter legal requirements. One of the most common faults in a diesel engine is faulty injectors, which can lead to fluctuations in the amount of fuel injected. Detecting these issues is crucial, prompting a growing interest in exploring additional signals beyond the currently used signal to enhance the performance and robustness of diagnosing this fault. In this work, an investigation was conducted to identify signals that correlate with faulty injectors causing over- and underfueling. It was found that the NOx, O2, and exhaust pressure signals are sensitive to this fault and could potentially serve as additional diagnostic signals. With these signals, two different diagnostic methods were evaluated to assess their effectiveness in detecting injector faults. The methods evaluated were data-driven residuals and a Random Forest classifier. The data-driven residuals, when combined with the CUSUM algorithm, demonstrated promising results in detecting faulty injectors. The O2 signal proved effective in identifying both fault instances, while NOx and exhaust pressure were more effective at detecting overfueling. The Random Forest classifier also showed good performance in detecting both over- and underfueling. However, it was observed that using a classifier requires more extensive data preprocessing. Two preprocessing methods were employed: integrating previous measurements and calculating statistical measures over a defined time span. Both methods showed promising results, with the latter proving to be the better choice. Additionally, the generalization capabilities of these methods across different operating conditions were evaluated. It was demonstrated that the data-driven residuals yielded better results compared to the classifier, which required training on new cases to perform effectively.
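As a rough sketch of the residual-plus-CUSUM idea described above (the drift and threshold values, and the assumption of a roughly zero-mean, unit-variance residual, are illustrative choices of ours, not parameters from the thesis), a two-sided CUSUM test on a residual sequence could look like this:

```python
import numpy as np

def cusum_alarm(residual, drift=0.5, threshold=5.0):
    """Two-sided CUSUM over a (roughly zero-mean, unit-variance) residual signal.

    Returns the index of the first alarm, or None if no alarm is raised.
    In practice, drift and threshold are tuned on fault-free data, trading
    detection delay against false-alarm rate.
    """
    g_pos = g_neg = 0.0
    for k, r in enumerate(residual):
        g_pos = max(0.0, g_pos + r - drift)   # accumulates positive shifts (e.g. overfueling)
        g_neg = max(0.0, g_neg - r - drift)   # accumulates negative shifts (e.g. underfueling)
        if g_pos > threshold or g_neg > threshold:
            return k
    return None

# Usage sketch: a synthetic residual whose mean shifts upward halfway through.
rng = np.random.default_rng(0)
residual = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])
print(cusum_alarm(residual))  # alarm index shortly after sample 500
```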
|
115 |
Coordinated management of urban wastewater systems by means of advanced environmental decision support systems
Murlà Tuyls, Damián, 17 May 2013
In the last decades, and owing to the implementation of the Water Framework Directive, the management of the urban wastewater cycle has become more complex. The concept of integrated urban wastewater system management has been introduced, and it becomes necessary to consider new information such as the characteristics of the sewer system or of the receiving water body. In this context, environmental decision support systems (EDSSs) are very useful and powerful tools for supporting the decision-making process. A new EDSS for integrated urban wastewater system management has been developed; it relies on a solid expert knowledge base, integrating data from several sources (bibliographic, theoretical or historical) and a real case-based virtual system able to perform simulations. The results demonstrate the benefits of using this kind of system compared with a standard approach that does not use expert knowledge, and they encourage continuing this research in order to improve the quality and efficiency of this type of EDSS.
|
116 |
Eliminación de compuestos causantes de olores mediante adsorbentes/catalizadores obtenidos a partir de lodos de depuradora / Removal of odour-causing compounds using adsorbents/catalysts obtained from sewage sludge
Ros Sans, Anna, 16 December 2006
During the last years there has been an increase in the number of wastewater treatment plants, arising from the implementation of regulatory policies focused on the sustainable development of contemporary societies. A large quantity of sewage sludge is produced and, in addition, some traditional disposal routes are coming under pressure while others are being phased out. It is therefore necessary to seek cost-effective and innovative solutions to the problem posed by sewage sludge disposal. Nowadays the tendency in Europe is to use this residue to obtain energy by thermal treatments such as incineration, pyrolysis and gasification, although these treatments generate a residue that still needs to be disposed of. Furthermore, the environmental problems prompted by odours are difficult to solve, considering the varied origins and causes of these bad smells; odour in wastewater treatment plants is produced basically by the degradation of organic matter under anaerobic conditions and is detected, at different concentration levels, in all unit operations. This thesis takes both aspects into account: its aim is the revalorization of sewage sludge to prepare adsorbents/catalysts from sludge-based precursors and to apply them to H2S and NH3 abatement at ambient temperature, two compounds that are paradigmatic in odour-related problems.
The sewage sludge samples used in this study were obtained from three Spanish WWTPs located in the Girona region (SC, SB, SL). The influent of these facilities is mainly of domestic origin, and the plants differ in their sludge treatment schemes. A detailed characterization of the sludges was carried out in order to define their main differences, including chemical characterization (elemental and proximate analysis, ash content, pH determination, XRD, FT-IR, SEM/EDX) as well as porosity characterization (physical adsorption of N2 and CO2). In the first part of the study, the dried samples were subjected to different thermal treatments (pyrolysis and gasification at different temperatures) and the adsorbents/catalysts obtained were tested for H2S removal. As a consequence of this screening, the SC sludge was discarded because it gave results very similar to those of SB, and the study then focused on the SL sludge, from which 12 samples were prepared: 6 gasified and 6 pyrolysed between 600 and 1100 °C. These samples were characterized and used as adsorbents for H2S removal. The results show that adsorbents with high removal efficiencies were obtained despite their low porosity development; the adsorption capacities (x/M) are in the same range as, or even higher than, those of commercial activated carbons and sorbents (Centaur, Sorbalit). These high x/M values are attributed to catalytically active species, such as mixed calcium–iron oxides (e.g. dicalcium ferrite), identified by XRD in the thermally treated samples.
The second part of the study focused on improving the textural properties of these sludge-based adsorbents by means of different activation processes: physical activation with CO2, chemical activation with H3PO4, and chemical activation with alkaline hydroxides (NaOH and KOH), the latter not previously reported for this type of precursor. The results show that, under the conditions tested, physical activation with CO2 and chemical activation with H3PO4 are not suitable methods for obtaining highly porous adsorbents from this raw material, whereas activation with alkaline hydroxides yields adsorbents with surface areas of up to 1600 m2 g-1. An increase in the hydroxide-to-precursor ratio enhances the adsorption capacity of the resulting adsorbents, while increasing either the ratio or the activation temperature decreases the yield and increases SBET. The resulting materials were tested as adsorbents/catalysts for H2S abatement; the results indicate that a larger surface area does not by itself imply a higher removal capacity, given the acidic nature of these activated materials. To counteract this acidity, NaOH was added to the adsorption bed, and x/M values above 450 mg g-1 were reached. Some of these samples were also tested for NH3 removal, and the x/M values obtained are of the same order as those of commercial activated carbons tested under similar experimental conditions. The adsorbents obtained by alkaline hydroxide activation are therefore very attractive for use as adsorbents/catalysts for multiple pollutants (VOCs, Hg, etc.).
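For orientation, the removal capacity x/M quoted above is typically obtained from a fixed-bed breakthrough test as the mass of H2S retained per unit mass of adsorbent; a schematic expression (our notation, assumed for illustration rather than taken from the thesis) is

$$\frac{x}{M} = \frac{Q}{m}\int_0^{t_b}\left[C_\mathrm{in} - C_\mathrm{out}(t)\right]\mathrm{d}t,$$

where Q is the gas flow rate, m the adsorbent mass, C_in and C_out the inlet and outlet H2S concentrations (mass per unit volume of gas), and t_b the breakthrough time. With Q in L min-1, C in mg L-1, t in min and m in g, x/M comes out directly in mg g-1.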
|
117 |
Drinking water treatment sludge production and dewaterability
Verrelli, D. I., January 2008
The provision of clean drinking water typically involves treatment processes to remove contaminants. The conventional process involves coagulation with hydrolysing metal salts, typically of aluminium (‘alum’) or trivalent iron (‘ferric’). Along with the product water this also produces a waste by-product, or sludge. The fact of increasing sludge production — due to higher levels of treatment and greater volume of water supply — conflicts with modern demands for environmental best practice, leading to higher financial costs. A further issue is the significant quantity of water that is held up in the sludge, and wasted. / One means of dealing with these problems is to dewater the sludge further. This reduces the volume of waste to be disposed of. The consistency is also improved (e.g. for the purpose of landfilling). And a significant amount of water can be recovered. The efficiency, and efficacy, of this process depends on the dewaterability of the sludge. In fact, good dewaterability is vital to the operation of conventional drinking water treatment plants (WTP’s). The usual process of separating the particulates, formed from a blend of contaminants and coagulated precipitate, relies on ‘clarification’ and ‘thickening’, which are essentially settling operations of solid–liquid separation. WTP operators — and researchers — do attempt to measure sludge dewaterability, but usually rely on empirical characterisation techniques that do not tell the full story and can even mislead. Understanding of the physical and chemical nature of the sludge is also surprisingly rudimentary, considering the long history of these processes. / The present work begins by reviewing the current state of knowledge on raw water and sludge composition, with special focus on solid aluminium and iron phases and on fractal aggregate structure. Next the theory of dewatering is examined, with the adopted phenomenological theory contrasted with empirical techniques and other theories. The foundation for subsequent analyses is laid by experimental work which establishes the solid phase density of WTP sludges. Additionally, alum sludges are found to contain pseudoböhmite, while 2-line ferrihydrite and goethite are identified in ferric sludges. / A key hypothesis is that dewaterability is partly determined by the treatment conditions. To investigate this, numerous WTP sludges were studied that had been generated under diverse conditions: some plant samples were obtained, and the remainder were generated in the laboratory (results were consistent). Dewaterability was characterised for each sludge in concentration ranges relevant to settling, centrifugation and filtration using models developed by Landman and White inter alia; it is expressed in terms of both equilibrium and kinetic parameters, py(φ) and R(φ) respectively. This work confirmed that dewaterability is significantly influenced by treatment conditions. The strongest correlations were observed when varying coagulation pH and coagulant dose. At high doses precipitated coagulant controls the sludge behaviour, and dewaterability is poor. Dewaterability deteriorates as pH is increased for high-dose alum sludges; other sludges are less sensitive to pH.
These findings can be linked to the faster coagulation dynamics prevailing at high coagulant and alkali dose. Alum and ferric sludges in general had comparable dewaterabilities, and the characteristics of a magnesium sludge were similar too. Small effects on dewaterability were observed in response to variations in raw water organic content and shearing. Polymer flocculation and conditioning appeared mainly to affect dewaterability at low sludge concentrations. Ageing did not produce clear changes in dewaterability. Dense, compact particles are known to dewater better than ‘fluffy’ aggregates or flocs usually encountered in drinking water treatment. This explains the superior dewaterability of a sludge containing powdered activated carbon (PAC). Even greater improvements were observed following a cycle of sludge freezing and thawing for a wide range of WTP sludges. / Further aspects considered in the present work include deviations from simplifying assumptions that are usually made. Specifically: investigation of long-time dewatering behaviour, wall effects, non-isotropic stresses, and reversibility of dewatering (or ‘elasticity’). Several other results and conclusions, of both theoretical and experimental nature, are presented on topics of subsidiary or peripheral interest that are nonetheless important for establishing a reliable basis for research in this area. / This work has proposed links between industrial drinking water coagulation conditions, sludge dewaterability from settling to filtration, and the microstructure of the aggregates making up that sludge. This information can be used when considering the operation or design of a WTP in order to optimise sludge dewaterability, within the constraints of producing drinking water of acceptable quality.
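As a sketch of how the equilibrium parameter py(φ) mentioned above is commonly characterized in this framework: equilibrium pressure filtration drives the cake to a final solids volume fraction φ∞ at which the compressive yield stress balances the applied pressure, py(φ∞) = Δp, so a py(φ) curve can be fitted to (φ∞, Δp) pairs. The power-law functional form and the data values below are illustrative assumptions, not results from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def py_model(phi, k, n, phi_g):
    """Assumed power-law compressive yield stress: py = k*((phi/phi_g)**n - 1), phi > phi_g."""
    return k * ((phi / phi_g) ** n - 1.0)

# Hypothetical equilibrium filtration data: final solidosity vs applied pressure (kPa).
phi_eq = np.array([0.04, 0.06, 0.09, 0.13, 0.18])
dp_kpa = np.array([5.0, 20.0, 80.0, 300.0, 1000.0])

# Bounds keep the gel point phi_g below the smallest measured solidosity.
popt, _ = curve_fit(py_model, phi_eq, dp_kpa, p0=(1.0, 4.0, 0.02),
                    bounds=([1e-3, 1.0, 1e-3], [1e3, 10.0, 0.03]))
k, n, phi_g = popt
print(f"fitted py(phi) = {k:.2g}*((phi/{phi_g:.3g})**{n:.2g} - 1) kPa")
```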
|
118 |
Evolution and stability of falling liquid films with thermocapillary effects / Evolution et stabilité de films liquides tombants avec effets thermocapillaires
Scheid, Benoit, 15 March 2004
This thesis deals with the dynamics of a thin liquid film falling down a heated plate. The heating yields surface tension gradients that induce thermocapillary stresses on the free surface, thus affecting the stability and the evolution of the film. Accounting for the coherence of the flow due to viscosity, two main approaches that reduce the dimensionality of the original problem are usually considered depending on the flow rate (as measured by the Reynolds number): the 'long wave' asymptotic expansion for small Reynolds numbers and the 'integral boundary layer' approximation for moderate Reynolds numbers. The former suffers from singularities and the latter from an incorrect prediction of the instability threshold for the occurrence of hydrodynamic waves. Thus, the aim of this thesis is twofold: in a first part, we define quantitatively the validity of the 'long wave' evolution equation (Benney equation) for the film thickness h, including the thermocapillary effect; and in a second part, we improve the 'integral boundary layer' approach by combining a gradient expansion with a weighted residual method.
In the first part, we further investigate the Benney equation in its validity domain in the case of periodically inhomogeneous heating in the streamwise direction. It induces steady-state deformations of the free surface with increased transfer rate in regions where the film is thinner, and also on average. The inhomogeneities of the heating also modify the nature of travelling wave solutions at moderate temperature gradients and allow wave motion to be suppressed at larger ones.
Moreover, large temperature gradients (for instance positive ones) in the streamwise direction produce large local film thickening that may in turn become unstable with respect to transverse disturbances, such that the flow may organize into rivulet-like structures. The mechanism of such instability is elucidated via an energy analysis. The main features of the rivulet pattern are described experimentally and recovered by direct numerical simulations.
In the second part, various models are obtained, which are valid for larger Reynolds numbers than the Benney equation and account for second-order viscous and inertial effects. We then elaborate a strategy to select the optimal model in terms of linear stability properties and existence of nonlinear solutions (solitary waves), for the widest possible range of parameters. This model, called the reduced model, is a system of three coupled evolution equations for the local film thickness h, the local flow rate q and the surface temperature Ts. Solutions of this model indicate that the interaction of the hydrodynamic and thermocapillary modes is non-trivial, especially in the region of large-amplitude solitary waves.
Finally, the three-dimensional evolution of the solutions of the reduced model in the presence of periodic forcing and noise compares favourably with available experimental data in isothermal conditions and with direct numerical simulations in non-isothermal conditions.
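To fix ideas, the reduced model mentioned above has the following overall structure; only the first (mass conservation) equation is exact, while the right-hand sides of the other two, whose detailed coefficients are not reproduced in this abstract, are shown schematically:

$$\partial_t h = -\,\partial_x q, \qquad \partial_t q = \mathcal{F}\big(h, q, T_s;\, \partial_x\big), \qquad \partial_t T_s = \mathcal{G}\big(h, q, T_s;\, \partial_x\big),$$

where F and G collect the weighted-residual averages of the momentum and energy balances, including inertial, viscous, capillary and thermocapillary (Marangoni) contributions.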
|
119 |
Modelos de regressão beta inflacionados / Inflated beta regression models
Ospina Martinez, Raydonal, 04 April 2008
The last years have seen new developments in the theory of beta regression models, which are useful for modelling random variables that assume values in the standard unit interval, such as proportions, rates and fractions. In many situations, the dependent variable contains zeros and/or ones, and in such cases continuous distributions are not suitable for modelling the data. In this thesis we propose mixed continuous-discrete distributions to model data observed on the intervals [0, 1], [0, 1) and (0, 1]. The proposed distributions are inflated beta distributions, in the sense that the probability mass at 0 and/or 1 exceeds what is expected for the beta distribution. Properties of the inflated beta distributions are given. Estimation based on maximum likelihood and on conditional moments is discussed and compared, and empirical applications using real data sets are provided. Further, we develop inflated beta regression models in which the underlying assumption is that the response follows an inflated beta law. Estimation is performed by maximum likelihood. We provide closed-form expressions for the score function, Fisher's information matrix and its inverse. Interval estimation for different population quantities (such as the regression parameters, the precision parameter and the mean response) is discussed, and tests of hypotheses on the regression parameters can be performed using asymptotic tests. We also derive the second-order biases of the maximum likelihood estimators and use them to define bias-adjusted estimators; the numerical results show that bias reduction can be effective in finite samples. Finally, we develop a set of diagnostic techniques that can be employed to identify departures from the postulated model and influential observations, adopting the local influence approach based on the conformal normal curvature, and we consider empirical examples to illustrate the theory developed.
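As a concrete sketch of the mixture construction described above, the zero-and-one inflated beta density for data on [0, 1] can be written as follows; the parameterization is the one commonly used for such models and is stated here as an assumption about notation, not a quotation from the thesis:

$$f(y;\alpha,\gamma,\mu,\phi)=
\begin{cases}
\alpha(1-\gamma), & y=0,\\[2pt]
\alpha\gamma, & y=1,\\[2pt]
(1-\alpha)\,\dfrac{\Gamma(\phi)}{\Gamma(\mu\phi)\,\Gamma((1-\mu)\phi)}\, y^{\mu\phi-1}(1-y)^{(1-\mu)\phi-1}, & y\in(0,1),
\end{cases}$$

where α ∈ (0, 1) is the probability of a degenerate (0 or 1) observation, γ is the conditional probability of a one given a degenerate observation, and μ and φ are the mean and precision of the continuous beta component. Dropping the y = 1 (or y = 0) branch gives the versions for [0, 1) and (0, 1], and in the regression setting μ (and possibly the other parameters) is typically related to covariates through a link function.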
|
120 |
Development of an Electromagnetic Glottal Waveform Sensor for Applications in High Acoustic Noise Environments
Pelteku, Altin E., 14 January 2004
The challenges of measuring speech signals in the presence of a strong background noise cannot be easily addressed with traditional acoustic technology. A recent solution to the problem considers combining acoustic sensor measurements with real-time, non-acoustic detection of an aspect of the speech production process. While significant advancements have been made in that area using low-power radar-based techniques, drawbacks inherent to the operation of such sensors are yet to be surmounted. Therefore, one imperative scientific objective is to devise new, non-invasive non-acoustic sensor topologies that offer improvements regarding sensitivity, robustness, and acoustic bandwidth. This project investigates a novel design that directly senses the glottal flow waveform by measuring variations in the electromagnetic properties of neck tissues during voiced segments of speech. The approach is to explore two distinct sensor configurations, namely the "six-element" and the "parallel-plate" resonator. The research focuses on the modeling aspect of the biological load and the resonator prototypes using multi-transmission line (MTL) and finite element (FE) simulation tools. Finally, bench tests performed with both prototypes on phantom loads as well as human subjects are presented.
|