291 |
Efficient algorithms for compressed sensing and matrix completion. Wei, Ke. January 2014 (has links)
Compressed sensing and matrix completion are two new data acquisition techniques whose efficiency is achieved by exploiting low-dimensional structures in high-dimensional data. Despite the combinatorial nature of compressed sensing and matrix completion, there has been significant development of computationally efficient algorithms that produce accurate solutions to these problems. In this thesis, we are concerned with the development of algorithms with low per-iteration computational complexity for compressed sensing and matrix completion. First, we derive a locally optimal stepsize selection rule for the simplest iterative hard thresholding algorithm for matrix completion, and obtain a simple yet efficient algorithm. Its average-case performance is observed to be superior in some aspects to that of other matrix completion algorithms. To balance the fast convergence rates of more sophisticated recovery algorithms with the low per-iteration computational cost of simple line-search algorithms, we introduce a family of conjugate gradient iterative hard thresholding algorithms for both compressed sensing and matrix completion. The theoretical results establish recovery guarantees for the restarted and projected variants of the algorithms, while the empirical performance comparisons establish significant computational advantages of the proposed methods over other hard thresholding algorithms. Finally, we introduce an alternating steepest descent method and a scaled variant designed specifically for the matrix completion problem, based on a simple factorization model of the low rank matrix. The computational efficiency of this method is achieved by reducing the high per-iteration cost of the second-order method and fully exploiting the numerical linear algebra structure of the algorithm. Empirical evaluations establish the effectiveness of the proposed algorithms compared with other state-of-the-art algorithms.
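As a concrete illustration of the first contribution, a minimal Python sketch of iterative hard thresholding for matrix completion with an adaptive, line-search-style stepsize is given below. The function name, the NIHT-style stepsize rule and all parameters are illustrative assumptions, not the exact algorithm derived in the thesis.

```python
import numpy as np

def iht_matrix_completion(M_obs, mask, rank, n_iter=200, tol=1e-6):
    """Minimal sketch: iterative hard thresholding for matrix completion with an
    adaptive stepsize that is exactly optimal along the gradient restricted to the
    current column space (an NIHT-style rule). Names are illustrative assumptions.
    M_obs: matrix with observed entries (zeros elsewhere); mask: boolean array."""
    X = np.zeros_like(M_obs, dtype=float)
    norm_obs = np.linalg.norm(M_obs)
    for _ in range(n_iter):
        G = np.where(mask, M_obs - X, 0.0)          # gradient of 0.5*||P_Omega(M - X)||^2
        if X.any():
            U = np.linalg.svd(X, full_matrices=False)[0][:, :rank]
            D = U @ (U.T @ G)                       # gradient projected onto the column space
        else:
            D = G
        denom = np.linalg.norm(np.where(mask, D, 0.0)) ** 2
        alpha = np.linalg.norm(D) ** 2 / denom if denom > 0 else 1.0
        # Gradient step, then hard-threshold back to the best rank-r approximation
        U2, s, Vt = np.linalg.svd(X + alpha * G, full_matrices=False)
        X = (U2[:, :rank] * s[:rank]) @ Vt[:rank, :]
        if np.linalg.norm(np.where(mask, M_obs - X, 0.0)) <= tol * norm_obs:
            break
    return X
```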
|
292 |
UF membranes operated on paper machine wastewater: fouling tendencies and characterisation. Domingo, Garth Selby. 12 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2001. / ENGLISH ABSTRACT: At the Mondi Kraft paper mill in Piet Retief, paper machine effluent is pre-treated by means
of dissolved air flotation (DAF) and a microstrainer prior to ultrafiltration (UF). Despite the
rigorous pre-treatment of the effluent, severe fouling of the UF membranes still persisted,
resulting in a sharp decrease in operational flux. In an attempt to improve the flux
performance of the UF membranes an investigation was launched into the possible causes of
membrane fouling. The study yielded the following results:
Ultraviolet-visible (UV-Vis) spectrophotometric analyses of various effluent samples
collected from different locations at the mill indicated the presence of aromatic compounds.
Lignosulphonate appeared to be the main constituent in all the samples analysed.
UV-Vis spectrophotometry was also performed on fouling extracted from membranes in
order to evaluate the different approaches attempted to reduce membrane fouling. Most of
the UV-Vis spectra obtained did not show the absorbance maxima below 210 nm that were
seen for the paper machine effluent, DAF product, lignosulphonate and microstrainer product.
This indicated that the compounds with absorbance at lower wavelengths did not foul the
membranes to the same extent as the aromatic substances with absorbance maxima between
230 and 400 nm.
The influence of pH on the absorption of the various effluent samples was also investigated.
An increase in pH resulted in (1) a "shift" in the wavelength scans from a lower to a higher
wavelength, suggesting ionisation (deprotonation) with a subsequent delocalization of
electrons and (2) an increase in the turbidity.
The increase in turbidity which accompanied the increase in pH could be explained by
complex formation between the carboxylate ions, phenolic groups and divalent metal ions
present in the effluent. Inductively coupled plasma analyses of several effluent samples with
pH values 7 and 13 indicated the presence of significant amounts of Ca2+ ions in the effluent.
There was a significant decrease in the Ca2+ levels with an increase in pH, which supported
the hypothesis that Ca2+ might contribute to complex formation. This resulted in a decrease
in solubility and an increase in turbidity. The addition of a chelating agent
(ethylenediaminetetra-acetic acid disodium salt) to an effluent solution at pH 13 redissolved
the precipitate and considerably reduced the turbidity. The subsequent addition of CaCl2 again induced precipitation and increased turbidity, confirming the role of Ca2+ in complex
formation.
Gel permeation chromatographic analyses of microstrainer product at pH 13 showed the
formation of high molecular mass organo-calcium complexes. The exact molecular mass of
the complexes present in the microstrainer product could not be determined by electrospray
mass spectrometry because of their poor ionisation ability.
Atomic force microscopy and scanning electron microscopy (SEM) showed distinct
differences in the membrane surface texture before and after fouling. Furthermore, SEM
images of the UF membranes exposed the limited ability of the 30 µm microstrainer, installed
downstream from the DAF unit, to remove residual fibres from the DAF product.
Static fouling experiments performed on all the flocculants and coagulants used in the
paper-making process at the mill showed that none of these substances fouled the UF
membranes.
Cleaning of the UF membranes with Triton X100®, a non-ionic surfactant, caused a
temporary increase in the operating flux to values higher than that of the initial flux.
Mechanical cleaning of the UF membrane surface with sponge balls proved to be one of the
most effective and successful methods to prevent flux loss caused by fouling.
Pre-coating of the UF membranes with Pluronic® F108, another non-ionic surfactant, did not
promote membrane productivity. Evaluation of various types of membranes indicated that
hydrophilic or negatively charged membranes withstood membrane fouling more effectively
than hydrophobic UF membranes under the same operating conditions. / AFRIKAANSE OPSOMMING: By Mondi Kraft se papier meule in Piet Retief word afloopwater vanaf die papiermasjiene vir
hergebruik met behulp van ultrafiltrasie (UF) behandel. Opgeloste lugflotasie (OLF) en
mikrosiwwing word as voorbehandeling vir die UF membraanproses ingespan. Ondanks die
intensiewe voorafbehandeling wat toegepas word, vind daar geweldige aanvuiling van die UF
membrane plaas wat tot die vinnige verlaging in bedryfsfluks aanleiding gee. 'n Ondersoek
na die moontlike oorsake van membraan-aanvuiling het die volgende bevindinge opgelewer:
Ultraviolet-sigbare (UV-Vis) spektroskopie van water monsters wat by die meule versamel is,
het die teenwoordigheid van aromatiese komponente aangetoon, met lignosulfonaat die
hoofkomponent in al die monsters wat ontleed is.
Ekstrakte afkomstig van aangevuilde membrane is ook met behulp van UV-Vis-spektroskopie
geanaliseer om verskeie benaderings te evalueer om 'n afname in membraan-aanvuiling te
bewerkstellig. Die oorgrootte meerderheid spektra het nie die absorpsie maksima onder
210 nm aangetoon wat teenwoordig was in monsters van die papier masjien afloopwater,
OLF uitvloeisel, lignosulfonaat en mikrosif produkwater nie. Dit het aangedui dat die
komponente wat by laer golflengte absorbeer nie die UF membrane in dieselfde mate aanvuil
as daardie komponente wat by hoër golflengtes (tussen 230 en 400 nm) absorbeer nie.
Die invloed wat pH op die absorpsie van komponente teenwoordig in die onderskeie
afloopwatermonsters het, is ook ondersoek. 'n Toename in pH het bygedra tot (1) 'n
verskuiwing in die spektra vanaf 'n lae na 'n hoër golflengte vanweë ionisasie
(deprotonering) met gevolglike delokalisasie van elektrone en (2) 'n toename in turbiditeit.
Die toename in turbiditeit wat verband hou met die toename in pH was verduidelik aan die
hand van kompleksvorming tussen die karboksilaat ione, fenoliese groepe en divalente metaal
ione in die afloopwater. Induktief gekoppelde plasma analise van verskeie water monsters by
pH 7 en 13 het die teenwoordigheid van 'n groot hoeveelheid Ca2+ aangetoon. 'n Verlaging
in die vlakke van opgeloste Ca2+ het met die toename in pH verband gehou. Dit het die
moontlike verbintenis tussen Ca2+ en kompleksvorming ondersteun wat bygedra het tot die
afname in oplosbaarheid en toename in turbiditeit. Die byvoeging van
etileendiamientetra-asynsuur-dinatriumsout, 'n kelerings reagens by afloopwater (pH 13) het
die presipitaat weer in oplossing gebring en die turbiditeit merkwaardig verlaag. Die byvoeging van CaCl2 het weer presipitasie geïnduseer, met 'n gevolglike toename in
turbiditeit. Hiermee is Ca2+ se rol in kompleksvorming bevestig.
Gelpermeasie-chromatografiese analise van die mikrosif produk (pH 13) het die vorming van
hoë molekulêre massa organo-kalsium komplekse bevestig. Dit was egter nie moontlik om
met behulp van massaspektrometrie die korrekte molekulêre massa van die komplekse te
bepaal nie vanweë hul onvermoë om te ioniseer.
Atomiese krag mikroskopie en skandeer elektron mikroskopie (SEM) het duidelik die voor en
na verskil getoon wat aanvuiling op die membraantekstuur gehad het. 'n SEM foto van die
aangevuilde UF membraan het die onvermoë van die mikrosif blootgelê om oorblywende
vesels vanuit die OLF produkwater te verwyder.
Resultate bekom gedurende passiewe aanvuilingseksperimente het aangetoon dat al die
in-proses flokkulante en koagulante wat gebruik word by die papier meule geen bydrae tot die
aanvuiling van die UF membrane maak nie.
Skoonmaak van die UF membrane met Triton X100® bring 'n verhoging in bedryfsvloed
teweeg, maar die verhoging, wat hoër as die oorspronklike vloed is, is kortstondig.
Meganiese skoonmaak van die buismembrane met behulp van sponsballe blyk die mees
effektiewe skoonmaakmetode te wees.
Voorafbehandeling van die UF membrane met Pluronic® F108 het nie die
membraanproduktiwiteit verhoog nie. Daar is ook bevind dat hidrofiliese of negatief gelaaide
membrane groter weerstand bied teen aanvuiling in vergelyking met hidrofobiese UF
membrane onder dieselfde bedryfstoestande.
|
293 |
Impact of different construction methods for concrete buildings on waste management. Loong, Cheuk-yui, Michael., 龍焯銳. January 2004 (has links)
published_or_final_version / Environmental Management / Master / Master of Science in Environmental Management
|
294 |
A review of the 'use less plastic bags' campaign. Dai, Lai-man, Raymond., 戴勵文. January 1998 (has links)
published_or_final_version / Environmental Management / Master / Master of Science in Environmental Management
|
295 |
Planning on treatments of solid domestic waste in Hong Kong. Cheng, Hoi-cheung., 鄭凱章. January 1997 (has links)
published_or_final_version / Urban Planning / Master / Master of Science in Urban Planning
|
296 |
Provable alternating minimization for non-convex learning problems. Netrapalli, Praneeth Kumar. 17 September 2014 (has links)
Alternating minimization (AltMin) is a generic term for a widely popular approach in non-convex learning: often, it is possible to partition the variables into two (or more) sets, so that the problem is convex/tractable in one set if the other is held fixed (and vice versa). This allows for alternating between optimally updating one set of variables, and then the other. AltMin methods typically do not have associated global consistency guarantees, even though they are empirically observed to perform better than methods (e.g. based on convex optimization) that do have such guarantees. In this thesis, we obtain rigorous performance guarantees for AltMin in three statistical learning settings: low rank matrix completion, phase retrieval and learning sparsely-used dictionaries. The overarching theme behind our results consists of two parts: (i) devising new initialization procedures (as opposed to doing so randomly, as is typical), and (ii) establishing exponential local convergence from this initialization. Our work shows that the pursuit of statistical guarantees can yield algorithmic improvements (initialization in our case) that perform better in practice. / text
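For concreteness, below is a minimal Python sketch of the two-part recipe described above for one of the three settings: a spectral (SVD-based) initialization followed by alternating least squares for low rank matrix completion. All names are illustrative assumptions, and practical variants additionally partition the observed samples across iterations and add a stopping rule.

```python
import numpy as np

def altmin_matrix_completion(M_obs, mask, rank, n_iter=50):
    """Minimal sketch: alternating minimization for low rank matrix completion with a
    spectral (SVD-based) initialization instead of a random one. Names are illustrative.
    M_obs: matrix with observed entries (zeros elsewhere); mask: boolean array."""
    m, n = M_obs.shape
    p = max(mask.mean(), 1e-12)                                       # fraction observed
    U = np.linalg.svd(M_obs / p, full_matrices=False)[0][:, :rank]    # spectral init
    V = np.zeros((n, rank))
    for _ in range(n_iter):
        for j in range(n):                    # fix U: small least-squares per column of M
            rows = mask[:, j]
            if rows.any():
                V[j] = np.linalg.lstsq(U[rows], M_obs[rows, j], rcond=None)[0]
        for i in range(m):                    # fix V: small least-squares per row of M
            cols = mask[i, :]
            if cols.any():
                U[i] = np.linalg.lstsq(V[cols], M_obs[i, cols], rcond=None)[0]
    return U @ V.T
```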
|
297 |
Energy and Design Cost Efficiency for Streaming Applications on Systems-on-Chip. Zhu, Jun. January 2009 (has links)
With the increasing capacity of today's integrated circuits, a number of heterogeneous system-on-chip (SoC) architectures in embedded systems have been proposed. In order to achieve energy- and design-cost-efficient streaming applications on these systems, new design space exploration frameworks and performance analysis approaches are required. This thesis considers three state-of-the-art SoC architectures, i.e., multi-processor SoCs (MPSoCs) with network-on-chip (NoC) communication, hybrid CPU/FPGA architectures, and run-time reconfigurable (RTR) FPGAs. The main topic of the author's research is to model and capture the application scheduling, architecture customization, and buffer dimensioning problems, according to the real-time requirements. Since these problems are NP-complete, heuristic algorithms and a constraint programming solver are used to compute solutions. For NoC-communication-based MPSoCs, an approach to optimize real-time streaming applications with customized processor voltage-frequency levels and memory sizes is presented. A multi-clocked synchronous model of computation (MoC) framework is proposed for heterogeneous timing analysis and energy estimation. Using heuristic search (i.e., greedy and taboo search), the experiments show an energy reduction (up to 21%) without any loss in application throughput compared with an ad-hoc approach. On hybrid CPU/FPGA architectures, the buffer minimization scheduling of real-time streaming applications is addressed. Based on event models, the problem has been formalized declaratively as constraint-based scheduling and solved by the public domain constraint solver Gecode. Compared with the traditional PAPS method, the proposed method needs significantly smaller buffers (2.4% of PAPS in the best case), while high throughput guarantees can still be achieved. Furthermore, a novel compile-time analysis approach based on iterative timing phases is proposed for run-time reconfigurations in adaptive real-time streaming applications on RTR FPGAs. Finally, the reconfiguration analysis and design trade-off analysis capabilities of the proposed framework are exemplified with experiments on both example and industrial applications. / Andres
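To make the heuristic search concrete, the toy Python sketch below runs a taboo (tabu) search over per-task voltage-frequency assignments, minimizing a simplified dynamic-energy model subject to a deadline. The energy and timing models and all names are placeholders; they do not reproduce the thesis's multi-clocked MoC analysis.

```python
def tabu_search_vf_levels(workloads, vf_levels, period, n_iter=200, tabu_len=10):
    """Toy sketch: tabu search over per-task voltage-frequency levels, minimizing a
    simplified dynamic-energy model (E ~ cycles * V^2) while every task meets the
    period constraint. Placeholder models and illustrative names only.
    workloads: cycles per task; vf_levels: list of (voltage, frequency) pairs."""
    def energy(assign):
        return sum(w * v * v for w, (v, f) in zip(workloads, assign))
    def feasible(assign):
        return all(w / f <= period for w, (v, f) in zip(workloads, assign))

    current = [max(vf_levels, key=lambda vf: vf[1])] * len(workloads)   # start at fastest level
    best, tabu = list(current), []
    for _ in range(n_iter):
        candidates = []
        for i in range(len(workloads)):
            for vf in vf_levels:
                if vf == current[i] or (i, vf) in tabu:
                    continue
                cand = list(current)
                cand[i] = vf
                if feasible(cand):
                    candidates.append(((i, vf), cand))
        if not candidates:
            break
        move, current = min(candidates, key=lambda mc: energy(mc[1]))   # best non-tabu neighbour
        tabu = (tabu + [move])[-tabu_len:]                              # short-term memory
        if energy(current) < energy(best):
            best = list(current)
    return best, energy(best)

# Example with made-up numbers: three tasks, three voltage-frequency operating points
# print(tabu_search_vf_levels([2e6, 5e6, 1e6], [(0.8, 2e8), (1.0, 4e8), (1.2, 6e8)], period=0.02))
```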
|
298 |
An assessment of carbon emissions reduction potential through zero waste activities in South African municipalities. Jagath, Rohini. January 2010 (has links)
The onset of global warming has resulted in mitigation efforts across all relevant
sectors. Waste management activities produce approximately 12% of methane
emissions in South Africa. The current scope of waste management favours
sustainable strategies targeting zero waste and waste diversion; however, landfill
disposal of municipal solid waste (MSW) is still the primary strategy employed by South
African municipalities. This study evaluated the greenhouse gas (GHG) impacts of
various waste management scenarios that included recycling, composting, anaerobic
digestion, and landfill gas recovery through case studies of the eThekwini Municipality
(Mariannhill landfill) and uMgungundlovu District Municipality (New England Road
landfill) MSW streams. Each waste management strategy was assessed on the basis
of GHG emissions, landfill space savings and economic feasibility. A waste stream
analysis (WSA) was conducted to obtain both the qualitative and quantitative data
required. The results of the WSA determined that the biogenic fraction of the MSW
stream for typical South African municipalities varies between 32% and 40%, while the
recyclable fraction ranges between 38% and 44%. The Waste Resource Optimisation Scenario Evaluation (WROSE) model was
developed for the quantification of GHG emissions and is based on the US EPA
emissions factors for landfill disposal, landfill gas recovery, recycling and composting.
An emissions factor was derived to include the GHG impacts of anaerobic digestion
using a streamlined life cycle analysis approach. The results confirmed that recycling,
anaerobic digestion and composting all produce GHG reductions, in comparison with
the baseline scenario of landfill disposal, and a combination thereof through
Mechanical Biological Treatment (MBT) produced the greatest net GHG reductions
(between -63,338 and -71,522 MTCO2e/annum for the New England Road MSW stream,
and between -71,280 and -86,123 MTCO2e/annum for the Mariannhill MSW stream). The results
indicated that the implementation of MBT scenarios would produce landfill space
savings of 94,375 to 103,302 m3 for the Mariannhill landfill, extending the landfill
lifespan by 12-14 years, while savings of 73,399 – 74,100 m3 could be realised for the
New England Road landfill, extending the landfill lifespan by 2-3 years. The study
concluded that while the focus of waste management has changed and zero
waste/waste diversion goals require alternative waste management methods to be
implemented, the capital and operational costs of some technologies are the main
barrier to implementation in developing countries, and that the environmental and
social benefits should be evaluated further to truly gauge the costs/benefits involved. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2010.
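To illustrate the kind of bookkeeping a WROSE-style evaluation performs, the Python sketch below compares a scenario's GHG emissions against a landfill-disposal baseline using per-tonne emission factors. The function name and the factors in the commented example call are made-up placeholders, not the US EPA values used in the study.

```python
def net_ghg_impact(tonnes_per_year, composition, landfill_factor, scenario_factors):
    """Sketch of WROSE-style bookkeeping: net GHG impact (MTCO2e/annum) of a waste
    management scenario relative to a landfill-disposal baseline. The per-tonne
    emission factors are user-supplied placeholders, NOT the US EPA values used in
    the study. composition maps each pathway to its fraction of the waste stream."""
    handled = sum(composition.values())
    baseline = tonnes_per_year * handled * landfill_factor          # everything landfilled
    scenario = sum(tonnes_per_year * frac * scenario_factors[pathway]
                   for pathway, frac in composition.items())
    return scenario - baseline                                      # negative => net reduction

# Illustrative call with made-up factors (tCO2e per tonne):
# net_ghg_impact(100_000,
#                {"recycling": 0.40, "composting": 0.35, "landfill": 0.25},
#                landfill_factor=0.45,
#                scenario_factors={"recycling": -2.8, "composting": -0.2, "landfill": 0.45})
```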
|
299 |
Teorie her a poker / Game theory and poker. Schmid, Martin. January 2013 (has links)
This thesis introduces the basic concepts of game theory. The necessary models and solution concepts are described, followed by a summary of the computational complexity of these concepts and the corresponding algorithms. Poker is formalized as one of the game-theoretic game models. State-of-the-art algorithms for extensive form games are explained, with application to poker. The thesis also introduces the Annual Computer Poker Competition and the participating programs. Finally, a new result about extensive form games with many actions is presented. Keywords: Game theory, Poker, Nash equilibrium, Extensive form games
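As a small, generic illustration of the Nash equilibrium concept (not an algorithm taken from the thesis), the Python sketch below runs regret matching in self-play on a zero-sum matrix game; the time-averaged strategies converge to an equilibrium, e.g. the uniform strategy for rock-paper-scissors.

```python
import numpy as np

def regret_matching_equilibrium(payoff, n_iter=20000):
    """Generic illustration: self-play regret matching on a two-player zero-sum matrix
    game. The time-averaged strategies converge to a Nash equilibrium. `payoff` is the
    row player's payoff matrix. Names and setup are illustrative, not from the thesis."""
    def mix(regret):
        pos = np.maximum(regret, 0.0)
        return pos / pos.sum() if pos.sum() > 0 else np.full(len(regret), 1.0 / len(regret))

    m, n = payoff.shape
    regret_row, regret_col = np.zeros(m), np.zeros(n)
    avg_row, avg_col = np.zeros(m), np.zeros(n)
    for _ in range(n_iter):
        p, q = mix(regret_row), mix(regret_col)
        avg_row += p
        avg_col += q
        u_row = payoff @ q                    # row player's payoff for each pure action
        u_col = -(payoff.T @ p)               # column player's payoff (zero-sum)
        regret_row += u_row - p @ u_row       # regret of each action vs. the current mix
        regret_col += u_col - q @ u_col
    return avg_row / n_iter, avg_col / n_iter

# Rock-paper-scissors: both averaged strategies approach (1/3, 1/3, 1/3).
rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
# print(regret_matching_equilibrium(rps))
```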
|
300 |
Bayesian methods for inverse problems in signal and image processing / Méthodes bayésiennes pour la résolution des problèmes inverses de grande dimension en traitement du signal et des images. Marnissi, Yosra. 25 April 2017 (has links)
Les approches bayésiennes sont largement utilisées dans le domaine du traitement du signal. Elles utilisent des informations a priori sur les paramètres inconnus à estimer ainsi que des informations sur les observations, pour construire des estimateurs. L'estimateur optimal au sens du coût quadratique est l'un des estimateurs les plus couramment employés. Toutefois, comme la loi a posteriori exacte a très souvent une forme complexe, il faut généralement recourir à des outils d'approximation bayésiens pour l'approcher. Dans ce travail, nous nous intéressons particulièrement à deux types de méthodes: les algorithmes d'échantillonnage Monte Carlo par chaînes de Markov (MCMC) et les approches basées sur des approximations bayésiennes variationnelles (VBA). La thèse est composée de deux parties. La première partie concerne les algorithmes d'échantillonnage. Dans un premier temps, une attention particulière est consacrée à l'amélioration des méthodes MCMC basées sur la discrétisation de la diffusion de Langevin. Nous proposons une nouvelle méthode pour régler la composante directionnelle de tels algorithmes en utilisant une stratégie de Majoration-Minimisation ayant des propriétés de convergence garanties. Les résultats expérimentaux obtenus lors de la restauration d'un signal parcimonieux confirment la rapidité de cette nouvelle approche par rapport à l'échantillonneur usuel de Langevin. Dans un second temps, une nouvelle méthode d'échantillonnage basée sur une stratégie d'augmentation des données est proposée pour améliorer la vitesse de convergence et les propriétés de mélange des algorithmes d'échantillonnage standards. L'application de notre méthode à différents exemples en traitement d'images montre sa capacité à surmonter les difficultés liées à la présence de corrélations hétérogènes entre les coefficients du signal. Dans la seconde partie de la thèse, nous proposons de recourir aux techniques VBA pour la restauration de signaux dégradés par un bruit non-gaussien. Afin de contourner les difficultés liées à la forme compliquée de la loi a posteriori, une stratégie de majoration est employée pour approximer la vraisemblance des données ainsi que la densité de la loi a priori. Grâce à sa flexibilité, notre méthode peut être appliquée à une large classe de modèles et permet d'estimer le signal d'intérêt conjointement au paramètre de régularisation associé à la loi a priori. L'application de cette approche sur des exemples de déconvolution d'images en présence d'un bruit mixte Poisson-gaussien confirme ses bonnes performances par rapport à des méthodes supervisées de l'état de l'art. / Bayesian approaches are widely used in signal processing applications. In order to derive plausible estimates of original parameters from their distorted observations, they rely on the posterior distribution that incorporates prior knowledge about the unknown parameters as well as information about the observations. The posterior mean estimator is one of the most commonly used inference rules. However, as the exact posterior distribution is very often intractable, one has to resort to some Bayesian approximation tools to approximate it. In this work, we are mainly interested in two particular Bayesian methods, namely Markov Chain Monte Carlo (MCMC) sampling algorithms and Variational Bayes approximations (VBA). This thesis is made of two parts. The first one is dedicated to sampling algorithms.
First, special attention is devoted to the improvement of MCMC methods based on the discretization of the Langevin diffusion. We propose a novel method for tuning the directional component of such algorithms using a Majorization-Minimization strategy with guaranteed convergence properties. Experimental results on the restoration of a sparse signal confirm the performance of this new approach compared with the standard Langevin sampler. Second, a new sampling algorithm based on a Data Augmentation strategy is proposed to improve the convergence speed and the mixing properties of standard MCMC sampling algorithms. Our methodological contributions are validated on various applications in image processing, showing the strong potential of the proposed method to handle problems with heterogeneous correlations between the signal coefficients. In the second part, we propose to resort to VBA techniques to build a fast estimation algorithm for restoring signals corrupted with non-Gaussian noise. In order to circumvent the difficulties raised by the intricate form of the true posterior distribution, a majorization technique is employed to approximate either the data fidelity term or the prior density. Thanks to its flexibility, the proposed approach can be applied to a broad range of data fidelity terms, allowing us to estimate the target signal jointly with the associated regularization parameter. Illustrations of this approach through examples of image deconvolution in the presence of mixed Poisson-Gaussian noise show the good performance of the proposed algorithm compared with state-of-the-art supervised methods.
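As a rough illustration of where the directional (scaling) component enters a Langevin-based sampler, the Python sketch below implements a plain preconditioned Metropolis-adjusted Langevin step with a fixed, user-supplied scaling matrix. The Majorization-Minimization tuning of that matrix, which is the thesis's contribution, is not reproduced here, and all names are illustrative.

```python
import numpy as np

def preconditioned_mala(grad_log_post, log_post, x0, scale, step, n_samples, rng=None):
    """Minimal sketch: Metropolis-adjusted Langevin (MALA) sampling with a fixed
    preconditioning matrix `scale` standing in for the directional component that
    the thesis tunes adaptively. All names are illustrative assumptions."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    chol = np.linalg.cholesky(scale)
    samples = []

    def mean(z):                                   # drift of the discretized Langevin diffusion
        return z + 0.5 * step * scale @ grad_log_post(z)

    def log_q(dst, src):                           # log proposal density q(dst | src), up to a constant
        d = dst - mean(src)
        return -0.5 / step * d @ np.linalg.solve(scale, d)

    for _ in range(n_samples):
        prop = mean(x) + np.sqrt(step) * chol @ rng.standard_normal(x.size)
        log_alpha = (log_post(prop) - log_post(x)
                     + log_q(x, prop) - log_q(prop, x))
        if np.log(rng.uniform()) < log_alpha:      # Metropolis-Hastings correction
            x = prop
        samples.append(x.copy())
    return np.array(samples)
```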
|