41

Systems Uncertainty in Systems Biology & Gene Function Prediction

Falin, Lee J. 06 April 2011 (has links)
The widespread use of high-throughput experimental assays designed to measure the entire complement of a cell's genes or gene products has led to vast stores of data which are extremely plentiful in terms of the number of items they can measure in a single sample, yet often sparse in the number of samples per experiment due to their high cost. This often leads to datasets where the number of treatment levels or time points sampled is limited, or where there are very small numbers of technical and/or biological replicates. If the goal is to use these data to infer network models, such sparse datasets can lead to under-determined systems. While model parameter variation and its effects on model robustness have been well studied, most of this work has accounted only for variation arising from measurement error. In contrast, little work has been done to isolate and quantify the parameter variation caused by uncertainty in the unmeasured regions of time course experiments. Here we introduce a novel algorithm to quantify the uncertainty in the unmeasured intervals between biological measurements taken across a set of quantitative treatments. The algorithm provides a probabilistic distribution of possible gene expression values within unmeasured intervals, based on a plausible biological constraint. We show how quantification of this uncertainty can be used to guide researchers in further data collection by identifying which samples would likely add the most information to the system under study. We also present an application of this method to isolate and quantify two distinct sources of model parameter variation. In the concluding chapter we discuss another source of uncertainty in systems biology, namely gene function prediction, and compare several algorithms designed for that purpose. / Ph. D.
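The core idea in this abstract — sampling plausible expression trajectories within the unmeasured intervals and using their spread to decide where to measure next — can be sketched as follows. This is an illustrative reconstruction, not Falin's actual algorithm: the data, the bounded-between-neighbours constraint, and the uniform sampling are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse time-course measurements (time in hours, expression level).
t_obs = np.array([0.0, 4.0, 8.0, 12.0])
y_obs = np.array([1.0, 2.5, 2.8, 4.0])

def sample_paths(t_obs, y_obs, n_grid=9, n_paths=2000):
    """Sample plausible expression values on a fine time grid. At measured
    times the value is fixed; in unmeasured intervals it is drawn uniformly
    between the two neighbouring measurements (a crude 'plausible biology'
    bound standing in for the constraint described in the abstract)."""
    t_grid = np.linspace(t_obs[0], t_obs[-1], n_grid)
    paths = np.empty((n_paths, len(t_grid)))
    for j, t in enumerate(t_grid):
        if np.any(np.isclose(t_obs, t)):          # measured: no uncertainty
            paths[:, j] = y_obs[np.argmin(np.abs(t_obs - t))]
        else:                                     # unmeasured: bounded draw
            i = min(np.searchsorted(t_obs, t) - 1, len(t_obs) - 2)
            lo, hi = sorted((y_obs[i], y_obs[i + 1]))
            paths[:, j] = rng.uniform(lo, hi, size=n_paths)
    return t_grid, paths

t_grid, paths = sample_paths(t_obs, y_obs)
spread = paths.std(axis=0)              # per-time-point uncertainty
best_next = t_grid[np.argmax(spread)]   # most informative time to sample next
```

The grid point with the largest spread marks the interval where a new measurement would reduce uncertainty the most, mirroring the data-collection guidance the abstract describes.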
42

Three-Dimensional Analysis of Geogrid Reinforcement used in a Pile-Supported Embankment

Halvordson, Kyle Arthur 21 January 2008 (has links)
Pile-supported geogrid-reinforced embankments are an exciting new foundation system used when a site is underlain by soft soil or clay. In this system, an embankment is supported by a bridging layer consisting of granular fill and one or more layers of geogrid reinforcement. The bridging layer transfers the load to piles that have been driven into the soft soil or clay. The load from the embankment induces large deformations in the geogrid reinforcement, causing tensile forces in the ribs of the geogrid. Many current methods for designing geogrid reinforcement in this system simplify the approach by assuming that the reinforcement takes a parabolic deformed shape. The purpose of this thesis is to thoroughly examine the behavior of the geogrid in a pile-supported embankment system, in an effort to determine the accuracy of the parabolic deformed shape and to identify the most important parameters affecting reinforcement design. The geogrid was analyzed using a three-dimensional model that included a cable net to represent the geogrid and linear springs to represent the soil underneath it. A larger pressure was applied to the geogrid regions directly above the pile caps so that arching effects could be considered, and the springs on top of the piles were made stiffer to account for the thin layer of soil between the geogrid and the pile cap. A Mathematica algorithm was used to solve this model by energy minimization. The results were compared to another model of this system that used a membrane to represent the geosynthetic reinforcement. Additionally, the maximum strain was compared to the strain obtained from a geosynthetic reinforcement design formula.
A parametric study was performed using the Mathematica algorithm by varying the pile width, embankment pressure applied to the soil, embankment pressure applied to the pile, stiffness of the soil, stiffness of the soil on top of the pile, stiffness of the geogrid, geogrid orientation, rotational stiffness of the geogrid, and the layers of geogrid reinforcement. / Master of Science
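As a rough illustration of the energy-minimization approach described above, the following sketch solves a drastically simplified one-dimensional analogue: a cable resting on linear soil springs under a uniform pressure, pinned at the pile caps. Every stiffness and load value is made up for the sketch; the thesis' model is a full three-dimensional cable net solved in Mathematica.

```python
import numpy as np
from scipy.optimize import minimize

# 1-D stand-in for the 3-D cable net: a cable of n nodes spanning two pile
# caps, resting on linear soil springs, loaded by a uniform pressure.
# All parameter values below are illustrative, not taken from the thesis.
n = 21
x = np.linspace(0.0, 2.0, n)   # clear span between pile caps (m)
dx = x[1] - x[0]
EA = 500.0                     # cable axial stiffness (kN)
k_soil = 50.0                  # subgrade spring stiffness (kN/m per m of span)
q = 10.0                       # uniform applied pressure (kN/m)

def total_energy(w_free):
    """Total potential energy of the cable-on-springs system."""
    w = np.concatenate(([0.0], w_free, [0.0]))     # pinned at the pile caps
    stretch = np.sqrt(dx**2 + np.diff(w)**2) - dx  # elongation of each segment
    cable = 0.5 * (EA / dx) * np.sum(stretch**2)   # cable strain energy
    springs = 0.5 * k_soil * dx * np.sum(w**2)     # soil spring energy
    work = q * dx * np.sum(w)                      # work done by the pressure
    return cable + springs - work

res = minimize(total_energy, np.zeros(n - 2), method="L-BFGS-B")
w_mid = res.x[(n - 2) // 2]    # midspan deflection (m)
```

Minimizing total potential energy recovers the deflected shape; in the thesis the same principle is applied to a full cable net, with larger pressures and stiffer springs over the pile caps.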
43

Legislative support for waste reduction initiatives

Liu, Wai-leung., 廖為良. January 1997 (has links)
published_or_final_version / Environmental Management / Master / Master of Science in Environmental Management
44

Application of analytical chemistry to waste minimisation in the powder coating industry.

January 2005 (has links)
A local company instituted a new chemical procedure in their spray phosphating system used in the pretreatment of large components for industrial racking systems. An inorganic conversion coating is deposited on the workpiece surface during phosphating and this prepares the surface to receive an organic top-coat. The organic coating is applied to the workpiece surface in the form of a powder and cured to form a continuous film about 80 μm thick. The solution chemistry of the phosphating system was monitored by sampling and chemical analysis and by taking direct-reading instrumental measurements on the process and rinse solutions. The process was also evaluated using the results of a waste minimisation audit. This involved gathering data on composition, flow rates and costs of inputs and outputs of the process. Two types of information were collected and used during the audit, namely chemical monitoring (concentration levels of Na, Fe, Zn, Mo, Mn and Cr and measurements of conductivity, TDS, SS and pH) and water usage data on the Phosphating Line, as well as existing data (raw materials, workpieces and utility inputs, together with domestic waste, factory waste and scrap metal outputs). The data were analysed using four established waste minimisation techniques. The Scoping Audit and the Water Economy Assessment results were determined using empirically derived models. The Mass Balance and the True Cost of Waste findings were obtained through more detailed calculations using the results of the chemical analysis. The results of the audit showed that the most important area for waste minimisation in the Phosphating Line was the wastewater stream (specifically, the dragged-out phosphating chemicals it contains). According to the scoping audit, water usage had the third highest waste minimisation potential, behind powder and steel consumption, for the entire powder coating process.
While the scoping audit and the specific water intake value showed that water consumption for the process was not excessive, they did not indicate that the pollution level in the rinse waters was high. Further, drag-out calculations showed that drag-out volumes were typical of those found in the metal finishing industry. However, the presence of high levels of metal species in the rinse waters was highlighted through the chemical monitoring of the Phosphating Line. The True Cost of Waste analysis estimated potential financial savings for the effluent stream at about R8000 over a period of 105 days. However, this does not take into consideration the cost of the liability associated with this stream when it exceeds effluent discharge limits (given in the Trade Effluent Bylaws), or of the chemical treatment necessary to render the stream suitable for discharge to sewer. Intervention using only "low-cost-no-cost" waste minimisation measures was recommended as a first step before contemplating further areas for technical or economic feasibility studies. However, a further study involving monitoring of the sludge was recommended in order to establish the potential financial savings offered by this waste stream. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2005.
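A drag-out mass balance of the kind used in such an audit can be sketched as simple arithmetic. Every figure below is hypothetical, chosen only to show the calculation; none of the values come from the actual study.

```python
# Illustrative drag-out mass balance for a phosphating rinse. All figures
# are assumptions made for the sketch, not values from the audit.
drag_out_rate = 0.15    # L of bath solution carried out per m^2 of workpiece
area_per_day = 40.0     # m^2 of workpieces processed per day
zn_conc = 1.8           # g/L of zinc in the phosphating bath
days = 105              # evaluation period, matching the study's 105 days

solution_lost = drag_out_rate * area_per_day * days  # L dragged out overall
zn_to_drain = solution_lost * zn_conc / 1000.0       # kg of Zn to effluent

chem_cost_per_litre = 12.0                           # R/L replacement cost
waste_cost = solution_lost * chem_cost_per_litre     # R lost as chemicals
```

Multiplying the dragged-out volume by bath concentrations and chemical prices is what turns the rinse-water monitoring data into the "true cost of waste" figure reported in the abstract.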
45

Critical factors in effective construction waste minimisation at the design stage: a Gauteng region case study

Wortmann, Anine Eschberger 28 April 2015 (has links)
A research report submitted to the Faculty of Engineering and the Built Environment of the University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the MSc (Building) in Construction Project Management. / Construction waste minimisation and avoidance at the design stage of a construction project is the most favourable solution in the existing waste management hierarchy triangle. However, there are currently only a limited number of exploratory and context-specific studies that identify effective construction waste minimisation factors which can be implemented during the design stage. This can be regarded as a relatively new concept and research topic, especially as no studies have been done in a South African or Gauteng-region context. This research report aims to address this local knowledge gap. The research method used an initial conceptual framework of factors (identified from surveying both global and local literature) as a launch pad to quantitatively survey design consultants in Gauteng with regard to both the significance and the ease of implementation of the identified factors. The research target population consisted of architects, architectural technologists, architectural draughtsmen, structural engineers, structural technologists, structural draughtsmen and sustainability consultants. The target population was further narrowed to include only designers who have both attempted to minimise construction waste on greenfield projects in Gauteng and received Green Building Council of South Africa (GBCSA) accreditation on the same project. This report presents a hierarchical list of twenty-six critical factors that can be implemented during the design stage in order to minimise or avoid construction waste in the context of Gauteng, South Africa. The report further indicates which of these factors will be easier to implement than others.
These factors are aimed mainly at clients of construction projects, as they are in essence the stakeholders who will contractually require designers to implement these construction waste minimisation factors in order to lower project costs. Furthermore, these factors will serve as valuable references for the Gauteng Provincial Government, as they can be utilised to drive provincial construction waste regulations and eventually national reform.
46

Compressed sensing for error correction on real-valued vectors

Tordsson, Pontus January 2019 (has links)
Compressed sensing (CS) is a relatively new branch of mathematics with very interesting applications in signal processing, statistics and computer science. This thesis presents some theory of compressed sensing, which allows us to recover (high-dimensional) sparse vectors from (low-dimensional) compressed measurements by solving the L1-minimization problem. A possible application of CS to the problem of error correction is also presented, where a sparse vector models arbitrary noise. Successful sparse recovery by L1-minimization relies on certain properties of rectangular matrices, but these matrix properties are extremely subtle and difficult to verify numerically. Therefore, to get an idea of how sparse (or dense) errors can be, numerical simulations of error correction were performed. These simulations show the performance of error correction with respect to various levels of error sparsity and matrix dimensions. It turns out that error correction degrades more slowly for low matrix dimensions than for high matrix dimensions, while for sufficiently sparse errors, high matrix dimensions offer a higher likelihood of guaranteed error correction.
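The decoding-by-L1-minimization idea the thesis simulates can be sketched as a linear program. This is the standard CS error-correction formulation (minimize ||y − Ax||₁), not the thesis' exact experimental setup; the dimensions and error sparsity below are arbitrary choices for the sketch.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Error correction by l1-minimization: observe y = A x + e with a sparse
# error e, and recover x by solving  min_x || y - A x ||_1  as an LP.
m, n, n_err = 60, 20, 5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
e = np.zeros(m)
e[rng.choice(m, n_err, replace=False)] = 10.0 * rng.standard_normal(n_err)
y = A @ x_true + e

# LP in variables [x, t]: minimize sum(t) subject to -t <= y - A x <= t.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
x_rec = res.x[:n]               # recovered signal
```

With enough redundancy (m well above n) and a sufficiently sparse error, the LP recovers x exactly; varying `n_err` and the matrix dimensions is precisely the kind of simulation the abstract describes.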
47

Mechanism Design For Covering Problems

Minooei, Hadi January 2014 (has links)
Algorithmic mechanism design deals with efficiently-computable algorithmic constructions in the presence of strategic players who hold the inputs to the problem and may misreport their input if doing so benefits them. Algorithmic mechanism design finds applications in a variety of internet settings, such as resource allocation, facility location and e-commerce (e.g. sponsored search auctions). There is an extensive amount of work in algorithmic mechanism design on packing problems such as single-item auctions, multi-unit auctions and combinatorial auctions. But, surprisingly, covering problems, also called procurement auctions, have remained almost completely unexplored, especially in the multidimensional setting. In this thesis, we systematically investigate multidimensional covering mechanism-design problems, wherein there are m items that need to be covered and n players who provide covering objects, with each player i having a private cost for the covering objects he provides. A feasible solution to the covering problem is a collection of covering objects (obtained from the various players) that together cover all items. Two widely considered objectives in mechanism design are: (i) cost minimization (CM), which aims to minimize the total cost incurred by the players and the mechanism designer; and (ii) payment minimization (PayM), which aims to minimize the payment to players. Covering mechanism design problems turn out to behave quite differently from packing mechanism design problems. In particular, various techniques utilized successfully for packing problems do not perform well for covering mechanism design problems, and this necessitates new approaches and solution concepts. In this thesis we devise various techniques for handling covering mechanism design problems, which yield a variety of results for both the CM and PayM objectives. 
In our investigation of the CM objective, we focus on two representative covering problems: uncapacitated facility location (UFL) and vertex cover. For multi-dimensional UFL, we give a black-box method to transform any Lagrangian-multiplier-preserving ρ-approximation algorithm for UFL into a truthful-in-expectation, ρ-approximation mechanism. This yields the first result for multi-dimensional UFL, namely a truthful-in-expectation 2-approximation mechanism. For multi-dimensional VCP (Multi-VCP), we develop a decomposition method that reduces the mechanism-design problem into the simpler task of constructing threshold mechanisms, which are a restricted class of truthful mechanisms, for simpler (in terms of graph structure or problem dimension) instances of Multi-VCP. By suitably designing the decomposition and the threshold mechanisms it uses as building blocks, we obtain truthful mechanisms with approximation ratios (n is the number of nodes): (1) O(r^2 log n) for r-dimensional VCP; and (2) O(r log n) for r-dimensional VCP on any proper minor-closed family of graphs (which improves to O(log n) if no two neighbors of a node belong to the same player). These are the first truthful mechanisms for Multi-VCP with non-trivial approximation guarantees. For the PayM objective, we work in the oft-used Bayesian setting, where players' types are drawn from an underlying distribution and may be correlated, and the goal is to minimize the expected total payment made by the mechanism. We consider the problem of designing incentive compatible, ex-post individually rational (IR) mechanisms for covering problems in the above model. The standard notion of incentive compatibility (IC) in such settings is Bayesian incentive compatibility (BIC), but this notion is over-reliant on having precise knowledge of the underlying distribution, which makes it a rather non-robust notion. 
We formulate a notion of IC that we call robust Bayesian IC (robust BIC) that is substantially more robust than BIC, and develop black-box reductions from robust BIC-mechanism design to algorithm design. This black-box reduction applies to single-dimensional settings even when we only have an LP-relative approximation algorithm for the algorithmic problem. We obtain near-optimal mechanisms for various covering settings including single- and multi-item procurement auctions, various single-dimensional covering problems, and multidimensional facility location problems. Finally, we study the notion of frugality, which considers the PayM objective but in a worst-case setting, where one does not have prior information about the players' types. We show that some of our mechanisms developed for the CM objective are also good with respect to certain oft-used frugality benchmarks proposed in the literature. We also introduce an alternate benchmark for frugality, which more directly reflects the goal that the mechanism's payment be close to the best possible payment, and obtain some preliminary results with respect to this benchmark.
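As a minimal illustration of truthfulness in a procurement (covering) setting, the sketch below implements the textbook reverse second-price auction: buy from the lowest bidder and pay them the second-lowest bid. This is only the single-item warm-up case, not any of the multidimensional mechanisms developed in the thesis.

```python
def procure_second_price(bids):
    """Reverse (procurement) second-price auction: the mechanism buys from
    the lowest bidder and pays them the second-lowest bid. Reporting one's
    true cost is a dominant strategy, because the winner's payment does not
    depend on the winner's own bid."""
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    winner = order[0]               # lowest-cost provider
    payment = bids[order[1]]        # second-lowest reported cost
    return winner, payment

winner, payment = procure_second_price([7.0, 3.0, 5.5])
# winner == 1 (the bidder with cost 3.0), payment == 5.5
```

Note that the payment (5.5) exceeds the winner's cost (3.0); bounding this overpayment against a benchmark is exactly the frugality question the abstract raises.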
48

A review of the use less plastic bags campaign /

Dai, Lai-man, Raymond. January 1998 (has links)
Thesis (M. Sc.)--University of Hong Kong, 1998. / Includes bibliographical references (leaf 107-110).
49

Planning on treatments of solid domestic waste in Hong Kong /

Cheng, Hoi-cheung. January 1997 (has links)
Thesis (M. Sc.)--University of Hong Kong, 1997. / Includes bibliographical references.
50

Desigualdade de Díaz-Saá e aplicações / Díaz-Saá Inequality and applications

Cunha, Lucas Gabriel Ferreira da 03 March 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / In this work we will present and prove the Díaz & Saá inequality, together with the tools used in its proof, and we will apply the results obtained to semilinear elliptic problems on bounded and unbounded domains. We will present necessary and sufficient conditions for the existence and uniqueness of solutions of problems of the type −Δ_p u = f(x, u) on a bounded domain, and we will also obtain regularity of the solution to this problem. Finally, we will present results on the first eigenvalue of a (p, q)-Laplacian system in R^N.
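For reference, the inequality at the heart of this dissertation is commonly stated as follows. This is one standard formulation from the literature; the thesis may state it under different hypotheses.

```latex
% Diaz-Saa inequality (one standard formulation; hypotheses may vary).
% Let $\Omega \subset \mathbb{R}^N$ be a bounded domain and let
% $u_1, u_2 \in L^\infty(\Omega)$ be positive a.e., with
% $u_1/u_2,\; u_2/u_1 \in L^\infty(\Omega)$. Then
\[
\int_\Omega \left( \frac{-\Delta_p u_1}{u_1^{\,p-1}}
                 - \frac{-\Delta_p u_2}{u_2^{\,p-1}} \right)
            \left( u_1^{\,p} - u_2^{\,p} \right) \, dx \;\ge\; 0 ,
\]
% with equality if and only if $u_1$ is a constant multiple of $u_2$.
% The proof rests on the convexity of the functional
% $w \mapsto \frac{1}{p}\int_\Omega |\nabla w^{1/p}|^p \, dx$,
% which is the key tool used in uniqueness arguments for
% $-\Delta_p u = f(x, u)$ of the kind described in the abstract.
```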
