51

Oxidation of pharmaceuticals and personal products by permanganate

Gibson, Sara Nichols 08 April 2010 (has links)
Pharmaceuticals and personal care products (PPCPs) are widely used, resulting in trace amounts being detected in the aquatic environment. Their presence is of human health and ecological concern, and it is necessary to determine the best methods to eliminate them from our waters. The oxidation of PPCPs by permanganate was evaluated using a spectrophotometer to monitor permanganate reduction. Thirty-nine compounds were chosen to represent numerous classifications, including beta blockers, cephalosporins, fluoroquinolones, macrolides, non-steroidal anti-inflammatory drugs, phenol structures, polypeptides, sulfonamides, tetracyclines, and triazines. The reactivity of each compound was determined by measuring the absorbance of permanganate over time as it reacted with an excess of the compound. The absorbance data were fit to a pseudo-first-order reaction model that accounted for the growth of manganese dioxide colloids. The most reactive groups, which reduced permanganate within minutes at pH 7.0, were the cephalosporins, phenol structures, and tetracyclines. The majority of the remaining pharmaceuticals and personal care products were moderately or weakly reactive (reducing permanganate within hours). Caffeine, carbadox, monensin, simetone, and tri(2-carboxyethyl)phosphine were poorly reactive (reducing permanganate over days). Metoprolol was the only selected compound determined to be potentially non-reactive (no reaction after 1 day). Polarizability and refractive index of the organic compounds showed significant positive correlations (R-squared > 0.50) with the first-order reaction rates for the non-steroidal anti-inflammatory drugs and the phenol structures group. The half-life of each PPCP was determined based on a typical dosage of permanganate used for pre-oxidation. Eleven of the thirty-nine PPCPs had a half-life of less than thirty minutes (a typical contact time), indicating that oxidation by permanganate may be a viable option. There are many opportunities for further research in this area, including investigating more PPCPs, physicochemical property correlations, and the impact of water quality conditions.
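As an illustration of the kinetic analysis described in this abstract, the sketch below fits simulated permanganate absorbance readings to a simple pseudo-first-order decay and converts the fitted rate constant into a half-life. It is a minimal example under assumed values (initial absorbance, residual baseline, rate constant); the thesis's explicit correction for the growth of manganese dioxide colloids is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-first-order decay of permanganate absorbance in the presence of excess PPCP:
#   A(t) = A_inf + (A0 - A_inf) * exp(-k_obs * t)
# A_inf is a residual baseline (e.g. absorbance from MnO2 colloids); all values are assumed.
def pseudo_first_order(t, a0, a_inf, k_obs):
    return a_inf + (a0 - a_inf) * np.exp(-k_obs * t)

# Simulated absorbance readings over 60 minutes; a real run would use spectrophotometer data.
t = np.linspace(0, 60, 31)                          # minutes
true_params = (1.20, 0.15, 0.08)                    # assumed A0, A_inf, k_obs [1/min]
rng = np.random.default_rng(0)
absorbance = pseudo_first_order(t, *true_params) + rng.normal(0, 0.01, t.size)

# Fit the model and report the observed rate constant and the corresponding half-life.
popt, _ = curve_fit(pseudo_first_order, t, absorbance, p0=(1.0, 0.1, 0.05))
a0_fit, a_inf_fit, k_fit = popt
half_life = np.log(2) / k_fit                       # minutes
print(f"k_obs = {k_fit:.3f} 1/min, half-life = {half_life:.1f} min")
```

A half-life computed this way, compared against a typical pre-oxidation contact time, is the basis for the "less than thirty minutes" screening criterion mentioned above.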
52

Three essays on contract theory and applications

Hwang, Sunjoo 04 September 2015 (has links)
This dissertation consists of three essays. The first essay examines a general theory of information based on informal contracting. The measurement problem, the disparity between true and measured performance, is at the core of many failures in incentive systems. Informal contracting can be a potential solution since, unlike formal contracting, it can utilize many qualitative and informative signals. However, informal contracting must be self-enforced. Given this trade-off between informativeness and self-enforcement, I show that a new source of statistical information is economically valuable in informal contracting if and only if it is sufficiently informative that it refines the existing pass/fail criterion. I also find that new information is more likely to be valuable when the stock of existing information is large. This information theory has implications for the measurement problem, a puzzle of relative performance evaluation, and human resources management. I also provide a methodological contribution. For tractable analysis, the first-order approach (FOA) should be employed. Existing FOA-justifying conditions (e.g., the Mirrlees-Rogerson condition) are so strong that the information ranking condition can be applied only to a small set of information structures. Instead, I find a weak FOA-justifying condition, which holds in many prominent examples (with multivariate normal or some univariate exponential family distributions). The second essay analyzes the effectiveness of managerial punishments in mitigating the moral hazard problem of government bailouts. Government bailouts of systemically important financial or industrial firms are necessary ex post but cause moral hazard ex ante. A seemingly perfect solution to this time-inconsistency problem is saving a firm while punishing its manager. I show that this idea does not necessarily work if ownership and management are separated. In that case, the shareholder(s) of the firm must motivate the manager by using incentive contracts. Managerial punishments (such as Obama's $500,000 bonus cap) could distort the incentive-contracting program. The shareholder's ability to motivate the manager could then be reduced, and moral hazard could thereby be exacerbated, depending on corporate governance structures and punishment measures, which means the likelihood of future bailouts increases. As an alternative, I discuss the effectiveness of shareholder punishments. The third essay analyzes how education affects workers' career concerns. A person's life consists of two important stages: the first stage as a student and the second stage as a worker. In order to address how a person chooses an education-career path, I examine an integrated model of education and career concerns. In the first part, I analyze the welfare effect of education. In Spence's job market signaling model, education as a sorting device improves efficiency by mitigating the lemon-market problem. In my integrated model, by contrast, education as a sorting device can be detrimental to social welfare, as it eliminates the work incentive generated by career concerns. In this regard, I suggest scholarship programs aimed at building human capital rather than sorting students. The second part provides a new perspective on education: education is a job-risk hedging device (as well as a human-capital-enhancing or sorting device). I show that highly risk-averse people acquire more education in order to hedge job risk and pursue a safe but medium-return career path. In contrast, less risk-averse people acquire less education, bear job risk, and pursue a high-risk, high-return career path. This explains why some people finish college early and launch start-ups, whereas others pursue master's or Ph.D. degrees and find safe, stable jobs.
53

RULES BASED MODELING OF DISCRETE EVENT SYSTEMS WITH FAULTS AND THEIR DIAGNOSIS

Huang, Zhongdong 01 January 2003 (has links)
Failure diagnosis in large and complex systems is a critical task. In the realm of discrete event systems, Sampath et al. proposed a language-based failure diagnosis approach. They introduced the notion of diagnosability for discrete event systems and gave a method for testing diagnosability by first constructing a diagnoser for the system. The complexity of this method of testing diagnosability is exponential in the number of states of the system and doubly exponential in the number of failure types. In this thesis, we give an algorithm for testing diagnosability that does not construct a diagnoser for the system; its complexity is of fourth order in the number of states of the system and linear in the number of failure types. In this dissertation we also study diagnosis of discrete event systems (DESs) modeled in the rule-based modeling formalism introduced in [12] to model failure-prone systems. The results have been presented in [43]. An attractive feature of the rule-based model is its compactness (its size is polynomial in the number of signals). A motivation for the work presented here is to develop failure diagnosis techniques that are able to exploit this compactness. In this regard, we develop symbolic techniques for testing diagnosability and computing a diagnoser. The diagnosability test is shown to be an instance of first-order temporal logic model checking. An on-line algorithm for diagnoser synthesis is obtained by using predicates and predicate transformers. We demonstrate our approach by applying it to the modeling and diagnosis of a part of an assembly line. When the system is found not to be diagnosable, we use sensor refinement and sensor augmentation to make the system diagnosable. In this dissertation, a controller is also extracted from the maximally permissive supervisor for the assembly line in automaton models, for the purpose of implementing the control by selecting, when possible, only one controllable event from among those allowed by the supervisor.
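To give a concrete sense of diagnoser-free diagnosability testing, the sketch below builds the standard twin-plant (verifier) product of a small automaton with itself, tracking fault occurrence in each copy, and declares the system non-diagnosable if a reachable cycle of "ambiguous" states exists (one copy has seen the fault, the other has not, yet they produce the same observations). This is a hedged illustration of the general polynomial-time verifier idea, not the specific fourth-order algorithm or the symbolic techniques developed in the thesis; the toy automaton and the simplified cycle test are my own assumptions.

```python
from collections import defaultdict

# Toy deterministic automaton: states, events, partial transition function,
# observable events, and fault events. All of these values are assumed for illustration.
states = ["s0", "s1", "s2", "s3"]
events = {"a", "b", "f", "u"}          # 'f' is a fault event, 'u' an unobservable event
observable = {"a", "b"}
faults = {"f"}
delta = {
    ("s0", "f"): "s1", ("s0", "u"): "s2",
    ("s1", "a"): "s1",                  # after the fault, 'a' can repeat forever
    ("s2", "a"): "s2",                  # without the fault, 'a' can also repeat forever
    ("s2", "b"): "s3",
}

def build_verifier(init_state):
    """Twin plant: pair two copies of the plant, each tagged with a fault flag."""
    init = (init_state, False, init_state, False)
    edges = defaultdict(set)
    seen, stack = {init}, [init]
    while stack:
        x1, f1, x2, f2 = stack.pop()
        for e in events:
            succs = []
            if e in observable:
                # both copies must agree on observable events
                if (x1, e) in delta and (x2, e) in delta:
                    succs.append((delta[(x1, e)], f1, delta[(x2, e)], f2))
            else:
                # unobservable events (including faults) move the copies independently
                if (x1, e) in delta:
                    succs.append((delta[(x1, e)], f1 or e in faults, x2, f2))
                if (x2, e) in delta:
                    succs.append((x1, f1, delta[(x2, e)], f2 or e in faults))
            for nxt in succs:
                edges[(x1, f1, x2, f2)].add(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return seen, edges

def has_ambiguous_cycle(seen, edges):
    """Simplified test: is there a cycle made only of ambiguous verifier states?"""
    ambiguous = {v for v in seen if v[1] != v[3]}
    # Kahn's algorithm on the subgraph induced by ambiguous states:
    # if not every node can be removed, the subgraph contains a cycle.
    indeg = {v: 0 for v in ambiguous}
    for v in ambiguous:
        for w in edges[v]:
            if w in ambiguous:
                indeg[w] += 1
    queue = [v for v in ambiguous if indeg[v] == 0]
    removed = 0
    while queue:
        v = queue.pop()
        removed += 1
        for w in edges[v]:
            if w in ambiguous:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
    return removed < len(ambiguous)

seen, edges = build_verifier("s0")
print("not diagnosable" if has_ambiguous_cycle(seen, edges) else "diagnosable")
```

In this toy example the observation sequence "a a a ..." can follow either the fault 'f' or the harmless event 'u', so the verifier contains an ambiguous self-loop and the system is reported as not diagnosable.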
54

Accelerating Convergence of Large-scale Optimization Algorithms

Ghadimi, Euhanna January 2015 (has links)
Several recent engineering applications in multi-agent systems, communication networks, and machine learning deal with decision problems that can be formulated as optimization problems. For many of these problems, new constraints limit the usefulness of traditional optimization algorithms. In some cases, the problem size is much larger than what can be conveniently dealt with using standard solvers. In other cases, the problems have to be solved in a distributed manner by several decision-makers with limited computational and communication resources. By exploiting problem structure, however, it is possible to design computationally efficient algorithms that satisfy the implementation requirements of these emerging applications. In this thesis, we study a variety of techniques for improving the convergence times of optimization algorithms for large-scale systems. In the first part of the thesis, we focus on multi-step first-order methods. These methods add memory to the classical gradient method and account for past iterates when computing the next one. The result is a computationally lightweight acceleration technique that can yield significant improvements over gradient descent. In particular, we focus on the Heavy-ball method introduced by Polyak. Previous studies have quantified the performance improvements over the gradient method through a local convergence analysis of twice continuously differentiable objective functions. However, the convergence properties of the method on more general convex cost functions have not been known. The first contribution of this thesis is a global convergence analysis of the Heavy-ball method for a variety of convex problems whose objective functions are strongly convex and have Lipschitz-continuous gradients. The second contribution is to tailor the Heavy-ball method to network optimization problems. In such problems, a collection of decision-makers collaborate to find the decision vector that minimizes the total system cost. We derive the optimal step-sizes for the Heavy-ball method in this scenario and show how the optimal convergence times depend on the individual cost functions and the structure of the underlying interaction graph. We present three engineering applications where our algorithm significantly outperforms tailor-made state-of-the-art algorithms. In the second part of the thesis, we consider the Alternating Direction Method of Multipliers (ADMM), an alternative powerful method for solving structured optimization problems. The method has recently attracted considerable interest from several engineering communities. Despite its popularity, its optimal parameters have been unknown. The third contribution of this thesis is to derive optimal parameters for the ADMM algorithm when applied to quadratic programming problems. Our derivations quantify how the Hessian of the cost functions and the constraint matrices affect the convergence times. By exploiting this information, we develop a preconditioning technique that accelerates performance even further. Numerical studies of model-predictive control problems illustrate significant performance benefits of a well-tuned ADMM algorithm. The fourth and final contribution of the thesis is to extend our results on optimal scaling and parameter tuning of the ADMM method to a distributed setting. We derive optimal algorithm parameters and suggest heuristic methods that can be executed by individual agents using local information. The resulting algorithm is applied to the distributed averaging problem and shown to yield substantial performance improvements over state-of-the-art algorithms.
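As a concrete illustration of the multi-step acceleration idea discussed in this abstract, the sketch below runs Polyak's Heavy-ball iteration on a strongly convex quadratic and compares it with plain gradient descent. The step-size and momentum values follow the classical tuning for quadratics in terms of the strong-convexity and smoothness constants; they are not the network-specific step-sizes derived in the thesis, and the test problem is an assumption for illustration.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T Q x - b^T x, with Q positive definite (mu*I <= Q <= L*I).
rng = np.random.default_rng(1)
n = 50
A = rng.normal(size=(n, n))
Q = A @ A.T + 0.5 * np.eye(n)            # positive definite by construction
b = rng.normal(size=n)
grad = lambda x: Q @ x - b
x_star = np.linalg.solve(Q, b)           # exact minimizer, for error reporting

eigs = np.linalg.eigvalsh(Q)
mu, L = eigs[0], eigs[-1]

# Classical Heavy-ball tuning for quadratics:
#   x_{k+1} = x_k - alpha * grad f(x_k) + beta * (x_k - x_{k-1})
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

def heavy_ball(iters=300):
    x_prev = x = np.zeros(n)
    for _ in range(iters):
        # the momentum term beta*(x - x_prev) is the "memory" of past iterates
        x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
    return np.linalg.norm(x - x_star)

def gradient_descent(iters=300):
    x = np.zeros(n)
    for _ in range(iters):
        x = x - (2.0 / (L + mu)) * grad(x)   # best fixed step for plain gradient descent
    return np.linalg.norm(x - x_star)

print(f"heavy-ball error: {heavy_ball():.2e}, gradient descent error: {gradient_descent():.2e}")
```

On ill-conditioned quadratics the momentum term typically reduces the error by several orders of magnitude for the same iteration budget, which is the kind of gain the thesis quantifies and extends to general strongly convex costs and to network settings.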
55

The applicability of batch tests to assess biomethanation potential of organic waste and assess scale up to continuous reactor systems

Qamaruz Zaman, Nastaein January 2010 (has links)
Many of the current methods of assessing the anaerobic biodegradability of solid samples require sample modification prior to testing. Steps like sample drying, grinding, re-drying and re-grinding to 2 mm or less make the test results difficult to apply to field conditions and could lead to oxygen exposure, possibly distorting the results. Furthermore, because of a small sample size of about 10-50 g w/w, the test result may not be representative of the bulk material. A new tool dubbed the 'tube' has been developed, made of 10 cm diameter PVC pipe, 43.5 cm long with a 3600 ml capacity and capped at both ends. For easy sample introduction, one end cap is fixed while the other is screw-capped. A distinctive feature is the wide neck opening of about 10 cm through which solid samples can be introduced as-is, without further sample modification. The research has shown the tube to be applicable across various types of solid organic waste and conditions, provided that a suitable organic loading rate is determined. The tube is best operated using 5-7 day pre-digested sewage sludge as seed, with minimal mixing and without the addition of nutrients or alkali solution. The test result can be obtained within 4-6 days (signifying 50-75% substrate degradation) to 20 days (95% degradation). Irreproducibility seen in some experiments may not only be a function of the seed and the substrate; the organic loading rate (OLR) at which the test is conducted is also influential, especially if the test is conducted close to its maximum OLR tolerance, where the anaerobic process is more erratic. The performance of a continuous reactor digesting a similar substrate can be estimated using this new tool. Food waste is established by the tubes to have an ultimate methane potential (B0) of 0.45 L CH4/g VS. The same substrate, when digested in a continuous reactor, will produce a yield (B) of about 0.32 L CH4/g VS. The first-order rate constants for both systems, batch and continuous, lie in the same range of 0.12 to 0.28 d-1. First-order kinetics is effective at modelling the anaerobic degradation when the process is healthy but may be less reliable under an unstable process. This research recommends the use of kinetics in combination with experimental data (e.g. HRT, OLR, yield) when planning and designing an industrial plant, to avoid overdesign and unnecessary building, maintenance and operating costs.
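To make the first-order comparison above concrete, the short sketch below evaluates the batch methane yield curve B(t) = B0(1 - exp(-kt)) using the reported B0 of 0.45 L CH4/g VS and rate constants at the two ends of the 0.12-0.28 d-1 range, and computes the time needed to reach 50%, 75% and 95% of the ultimate potential. It is a back-of-the-envelope illustration only, assuming the simple first-order model applies throughout.

```python
import math

B0 = 0.45                       # ultimate methane potential, L CH4 / g VS (from the abstract)
rate_constants = [0.12, 0.28]   # first-order rate constants, 1/d (reported range)

# Cumulative methane under first-order kinetics: B(t) = B0 * (1 - exp(-k t))
def methane_yield(t_days, k):
    return B0 * (1.0 - math.exp(-k * t_days))

# Time to reach a given fraction of B0: t = -ln(1 - fraction) / k
for k in rate_constants:
    times = {frac: -math.log(1.0 - frac) / k for frac in (0.50, 0.75, 0.95)}
    summary = ", ".join(f"{int(frac * 100)}% in {t:.1f} d" for frac, t in times.items())
    print(f"k = {k:.2f} 1/d: {summary}  (B at 20 d = {methane_yield(20, k):.2f} L/g VS)")
```

Running the two endpoints of the rate-constant range reproduces the pattern reported above: roughly 4-6 days to reach 50-75% of the potential at the faster rate, and on the order of 20 days to approach 95%.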
56

MEMBRANE IMMOBILIZED REACTIVE Fe/Pd NANOPARTICLES: MODELING AND TCE DEGRADATION RESULTS

He, Ruo 01 January 2012 (has links)
The detoxification of chlorinated organic compounds is an important and urgent issue in water remediation. Trichloroethylene (TCE), the model compound in this study, has been shown to be degraded effectively by bimetallic nanoparticles (NPs) in the solution phase. In this study, Fe/Pd bimetallic NPs were synthesized in poly(acrylic acid) (PAA) functionalized polyvinylidene fluoride (PVDF) microfiltration membranes. TCE dechlorination with these bimetallic NPs was conducted at different pH values and different metal loadings to study the role of corrosion in the reaction rates. A one-dimensional mathematical model with pseudo-first-order reaction kinetics was introduced to describe the TCE dechlorination profile in the membrane system. The reduction reaction in the pores is affected by several parameters, including NP loading and size, TCE diffusivity, void volume fraction and surface-area-based reaction rates. The model results indicated that a correction is needed to the reaction rate obtained from bulk solution in order to represent the actual efficiency of the NPs in the reduction reaction, and that TCE dechlorination occurs mainly near the NP surface. The second part of the model indicated that a reduction mechanism incorporating TCE adsorption-desorption behavior could be used to describe dechlorination at high TCE concentrations.
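The sketch below illustrates the kind of pore-scale estimate such a model produces: a pseudo-first-order, surface-area-based reaction on the nanoparticles combined with the residence time of water passing through the membrane gives the expected TCE conversion per pass. All numerical values (surface-area-based rate constant, specific surface area, membrane thickness, velocity) are assumed for illustration and are not taken from the thesis; the full one-dimensional diffusion-reaction model is not reproduced here, and a simple plug-flow approximation is used instead.

```python
import math

# Assumed illustrative parameters (not from the thesis):
k_sa = 5.0e-4        # surface-area-based rate constant, L / (m^2 * min)
a_s = 2.0e3          # NP surface area per unit pore volume, m^2 / L
thickness = 125e-6   # membrane thickness, m
velocity = 1.0e-5    # water velocity through the pores, m / s

# Observed pseudo-first-order rate constant inside the pore volume:
k_obs = k_sa * a_s                              # 1 / min
residence_time = thickness / velocity / 60.0    # minutes per pass through the membrane

# Plug-flow approximation through the pore: C_out / C_in = exp(-k_obs * tau)
conversion = 1.0 - math.exp(-k_obs * residence_time)
print(f"k_obs = {k_obs:.2f} 1/min, residence time = {residence_time:.2f} min, "
      f"TCE conversion per pass = {conversion:.1%}")
```

Comparing a conversion estimated this way against the rate constant measured in bulk solution is one way to see why the abstract notes that the bulk rate must be corrected to reflect the actual efficiency of the immobilized nanoparticles.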
57

A Methodology for the Development and Verification of Expressive Ontologies

Katsumi, Megan 12 December 2011 (has links)
This work focuses on the presentation of a methodology for the development and verification of expressive ontologies. Motivated by experiences with the development of first-order logic ontologies, we call attention to the inadequacies of existing development methodologies for expressive ontologies. We attempt to incorporate pragmatic considerations inspired by our experiences while maintaining the rigorous definition and verification of requirements necessary for the development of expressive ontologies. We leverage automated reasoning tools to enable semiautomatic verification of requirements, and to assist other aspects of development where possible. In addition, we discuss the related issue of ontology quality, and formulate a set of requirements for MACLEOD - a proposed development tool that would support our lifecycle.
59

Metamodel-Based Probabilistic Design for Dynamic Systems with Degrading Components

Seecharan, Turuna Saraswati January 2012 (has links)
The probabilistic design of dynamic systems with degrading components is difficult. Design of dynamic systems typically involves the optimization of a time-invariant performance measure, such as energy, that is estimated using a dynamic response, such as angular speed. The mechanistic models developed to approximate this performance measure are too complicated to be used with simple design calculations and lead to lengthy simulations. When degradation of the components is assumed, estimation of the failure probability over the product lifetime is required in order to determine suitable service times. Again, complex mechanistic models lead to lengthy lifetime simulations when the Monte Carlo method is used to evaluate probability. Motivated by these problems, an efficient methodology is presented for the probabilistic design of dynamic systems and for estimating the cumulative distribution function of the time to failure of a performance measure when degradation of the components is assumed. The four main steps are: 1) transforming the dynamic response into a set of static responses at discrete cycle-time steps and using Singular Value Decomposition to efficiently estimate a time-invariant performance measure that is based upon a dynamic response, 2) replacing the mechanistic model with an approximating function, known as a "metamodel", 3) searching for the best design parameters using fast integration methods such as the First-Order Reliability Method, and 4) building the cumulative distribution function from the summation of the incremental failure probabilities, estimated using the set-theory method, over the planned lifetime. The first step of the methodology uses design of experiments or sampling techniques to select a sample of training sets of the design variables. These training sets are then input to the computer-based simulation of the mechanistic model to produce a matrix of corresponding responses at discrete cycle-times. Although metamodels can be built at each time-specific column of this matrix, this method is slow, especially if the number of time steps is large. An efficient alternative uses Singular Value Decomposition to split the response matrix into two matrices containing only design-variable-specific and time-specific information. The second step of the methodology fits metamodels only for the significant columns of the matrix containing the design-variable-specific information. Using the time-specific matrix, a metamodel is quickly developed at any cycle-time step or for any time-invariant performance measure such as energy consumed over the cycle-lifetime. In the third step, design variables are treated as random variables and the First-Order Reliability Method is used to search for the best design parameters. Finally, the components most likely to degrade are modelled using either a degradation path or a marginal distribution model and, using the First-Order Reliability Method or a Monte Carlo Simulation to estimate probability, the cumulative failure probability is plotted. The speed and accuracy of the methodology using three metamodels, the Regression model, Kriging, and the Radial Basis Function, is investigated. This thesis shows that the metamodel offers a significantly faster yet accurate alternative to using mechanistic models, both for probabilistic design optimization and for estimating the cumulative distribution function.
For design using the First-Order Reliability Method to estimate probability, the Regression Model is the fastest and the Radial Basis Function is the slowest. Kriging is shown to be accurate and faster than the Radial Basis Function but its computation time is still slower than the Regression Model. When estimating the cumulative distribution function, metamodels are more than 100 times faster than the mechanistic model and the error is less than ten percent when compared with the mechanistic model. Kriging and the Radial Basis Function are more accurate than the Regression Model and computation time is faster using the Monte Carlo Simulation to estimate probability than using the First-Order Reliability Method.
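A small sketch of the SVD step described in this abstract: a matrix of dynamic responses (rows are training designs, columns are cycle-time steps) is split into design-variable-specific and time-specific factors, simple polynomial metamodels are fitted to the few significant design-variable-specific columns, and the full response at a new design point is reconstructed from them. The toy response function, the single design variable and the polynomial metamodel are assumptions made for illustration; the thesis itself uses Regression, Kriging and Radial Basis Function metamodels fitted to mechanistic simulations.

```python
import numpy as np

# Toy "mechanistic" simulation: a dynamic response over one cycle as a function of
# a single design variable x. In practice this would be an expensive simulation run.
t = np.linspace(0.0, 1.0, 200)                        # discrete cycle-time steps
def simulate(x):
    return x * np.sin(2 * np.pi * t) + 0.3 * x**2 * np.cos(4 * np.pi * t)

# Step 1: build the response matrix from a sample of training designs.
x_train = np.linspace(0.5, 2.0, 12)
Y = np.vstack([simulate(x) for x in x_train])         # shape: (designs, time steps)

# Split Y into design-variable-specific (U*S) and time-specific (Vt) information.
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
r = 2                                                 # number of significant components kept
coeffs = U[:, :r] * S[:r]                             # design-variable-specific columns

# Step 2: fit a cheap metamodel (quadratic polynomial) to each retained column.
metamodels = [np.polyfit(x_train, coeffs[:, j], deg=2) for j in range(r)]

# Reconstruct the full dynamic response at a new design point from the metamodels.
def predict(x_new):
    c = np.array([np.polyval(p, x_new) for p in metamodels])
    return c @ Vt[:r, :]

x_new = 1.3
error = np.max(np.abs(predict(x_new) - simulate(x_new)))
print(f"max reconstruction error at x = {x_new}: {error:.2e}")
```

Because only r metamodels are fitted instead of one per time step, the reconstruction of the whole response history, and hence any time-invariant performance measure derived from it, becomes cheap enough to embed inside reliability searches or Monte Carlo lifetime simulations.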
60

Variance evaluation of modal parameters obtained by experimental modal analysis, and accuracy optimization based on it (実験モード解析によるモーダルパラメータの分散評価とそれに基づく精度の最適化)

畔上, 秀幸, Azegami, Hideyuki, 沖津, 昭慶, Okitsu, Akiyoshi, 野田, 英一, Noda, Eiichi, 小林, 秀孝, Kobayashi, Hidetaka 11 1900 (has links)
No description available.
