  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Error-Tolerant Coding and the Genetic Code

Gutfraind, Alexander. January 2006.

The following thesis is a project in mathematical biology building upon the so-called "error minimization hypothesis" of the genetic code. After introducing the biological context of this hypothesis, I proceed to develop some relevant information-theoretic ideas, with the overall goal of studying the structure of the genetic code. I then apply the newfound understanding to an important question in the debate about the origin of life, namely, the question of the temperatures at which the genetic code, and life in general, underwent their early evolution.

The main advance in this thesis is a set of methods for calculating the primordial evolutionary pressures that shaped the genetic code. These pressures are due to genetic errors, and hence the statistical properties of the errors and of the genome are imprinted in the statistical properties of the code. Thus, by studying the code it is possible to reconstruct, to some extent, the primordial error rates and the composition of the primordial genome. In this way, I find evidence that the fixation of the genetic code occurred in organisms which were not thermophiles.
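The error-cost statistic at the heart of the error-minimization hypothesis can be sketched in a few lines. The example below uses a made-up two-letter miniature "genetic code" and invented amino-acid property values, not the real codon table; it only illustrates the shape of the computation: average the squared change in a physicochemical property over all single-base misreadings.

```python
# Toy illustration of a code-robustness statistic: the mean cost of a
# single-nucleotide misreading, weighted uniformly. The codon table and
# property values are hypothetical, not the real genetic code.
from itertools import product

BASES = "AU"                            # toy 2-letter alphabet
code = {"AA": "x", "AU": "x",           # toy code: codon -> "amino acid"
        "UA": "y", "UU": "z"}
prop = {"x": 1.0, "y": 2.0, "z": 5.0}   # toy physicochemical property

def neighbors(codon):
    """All codons reachable by one single-base substitution."""
    for i, b in product(range(len(codon)), BASES):
        if b != codon[i]:
            yield codon[:i] + b + codon[i + 1:]

def error_cost(code, prop):
    """Mean squared property change over all single-point mutations."""
    costs = [(prop[code[c]] - prop[code[n]]) ** 2
             for c in code for n in neighbors(c)]
    return sum(costs) / len(costs)

print(error_cost(code, prop))
```

A lower value means the code maps mutational neighbors to chemically similar amino acids; the hypothesis is that the real code scores unusually well on statistics of this kind.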
172

Robust Search Methods for Rational Drug Design Applications

Sadjad, Bashir. January 2009.

The main topic of this thesis is the development of computational search methods that are useful in drug design applications. The emphasis is on exhaustiveness of the search method, such that it can guarantee a certain level of geometric accuracy. In particular, the following two problems are addressed: (i) prediction of the binding mode of a drug molecule to a receptor and (ii) prediction of crystal structures of drug molecules.

Predicting the binding mode(s) of a drug molecule to a target receptor is pivotal in structure-based rational drug design. In contrast to most approaches to this problem, the idea in this work is to analyze the search problem from a computational perspective. By building on top of an existing docking tool, new methods are proposed and relevant computational results are proven. These methods and results are applicable to other place-and-join frameworks as well. A fast approximation scheme for the docking of rigid fragments is described that guarantees certain geometric approximation factors. It is also demonstrated that this can be translated into an energy approximation for simple scoring functions. A polynomial time algorithm is developed for the matching phase of the docked rigid fragments. It is demonstrated that the generic matching problem is NP-hard. At the same time, the optimality of the proposed algorithm is proven under certain scoring function conditions. The matching results are also applicable to some of the fragment-based de novo design methods. On the practical side, the proposed method is tested on 829 complexes from the PDB. The results show that the closest predicted pose to the native structure has an average RMS deviation of 1.06 Å.

The prediction of crystal structures of small organic molecules has significantly improved over the last two decades. Most of the new developments, since the first blind test held in 1999, have occurred in the lattice energy estimation subproblem. In this work, a new efficient systematic search method that avoids random moves is proposed. It systematically searches through the space of possible crystal structures and performs search space cuts based on statistics collected from structural databases. It is demonstrated that the fast search method for rigid molecules can be extended to flexible molecules as well. The results of some prediction experiments are also provided, showing that in most cases the systematic search generates a structure with less than 1.0 Å RMSD from the experimental crystal structure. The scoring function developed for these experiments is described briefly. It is also demonstrated that with a more accurate lattice energy estimation function, better results can be achieved with the proposed robust search method.
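The RMS deviation figures quoted above can be illustrated with a minimal sketch. The function below computes plain RMSD between two coordinate sets assumed to be already superposed and in matching atom order; it performs no optimal (Kabsch) alignment, which the thesis's evaluation protocol may well include.

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two pre-aligned coordinate sets.

    coords_a, coords_b: sequences of (x, y, z) tuples in the same atom
    order. No optimal superposition is performed here.
    """
    assert len(coords_a) == len(coords_b)
    sq = sum((xa - xb) ** 2
             for pa, pb in zip(coords_a, coords_b)
             for xa, xb in zip(pa, pb))
    return math.sqrt(sq / len(coords_a))

# Two hypothetical 2-atom fragments, the second shifted 1 Å along z.
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
print(rmsd(a, b))  # every atom displaced by 1 Å
```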
173

Parametric classification and variable selection by the minimum integrated squared error criterion

January 2012.

This thesis presents a robust solution to the classification and variable selection problem when the dimension of the data, or number of predictor variables, may greatly exceed the number of observations. When faced with the problem of classifying objects given many measured attributes, the goal is to build a model that makes the most accurate predictions using only the most meaningful subset of the available measurements. The introduction of ℓ1-regularized model fitting has inspired many approaches that perform model fitting and variable selection simultaneously. If parametric models are employed, the standard approach is some form of regularized maximum likelihood estimation. While this is an asymptotically efficient procedure under very general conditions, it is not robust: outliers can negatively impact both estimation and variable selection. Moreover, outliers can be very difficult to identify as the number of predictor variables becomes large. Minimizing the integrated squared error, or L2 error, while less efficient, has been shown to generate parametric estimators that are robust to a fair amount of contamination in several contexts. In this thesis, we present a novel robust parametric regression model for the binary classification problem based on the L2 distance, the logistic L2 estimator (L2E). To perform simultaneous model fitting and variable selection among correlated predictors in the high-dimensional setting, an elastic net penalty is introduced. A fast computational algorithm for minimizing the elastic net penalized logistic L2E loss is derived, and results on the algorithm's global convergence properties are given. Through simulations we demonstrate the utility of the penalized logistic L2E at robustly recovering sparse models from high-dimensional data in the presence of outliers and inliers. Results on real genomic data are also presented.
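The structure of the penalized objective described above can be sketched as follows. The elastic net penalty is standard; the data-fit term shown here is the ordinary logistic negative log-likelihood, which the thesis replaces with its L2E (integrated squared error) criterion, whose exact form is not given in this abstract.

```python
import math

def elastic_net(beta, lam, alpha):
    """Elastic net penalty: lam * (alpha*||b||_1 + (1-alpha)/2 * ||b||_2^2).

    alpha = 1 gives the lasso, alpha = 0 gives ridge.
    """
    l1 = sum(abs(b) for b in beta)
    l2 = sum(b * b for b in beta)
    return lam * (alpha * l1 + (1 - alpha) / 2 * l2)

def logistic_loss(beta, X, y):
    """Average logistic negative log-likelihood, y_i in {0, 1}.

    The thesis swaps this term for an L2E criterion; the penalty is the same.
    """
    total = 0.0
    for xi, yi in zip(X, y):
        z = sum(b * x for b, x in zip(beta, xi))
        total += math.log(1 + math.exp(z)) - yi * z
    return total / len(y)

def objective(beta, X, y, lam=0.1, alpha=0.5):
    """Penalized fit criterion to be minimized over beta."""
    return logistic_loss(beta, X, y) + elastic_net(beta, lam, alpha)
```

The ℓ1 part of the penalty drives small coefficients exactly to zero (variable selection), while the ridge part spreads weight across correlated predictors rather than picking one arbitrarily.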
174

Optimization of Fuel Consumption in a Hybrid Powertrain

Sivertsson, Martin. January 2010.

Increased environmental awareness, together with new legislative demands on lowered emissions and rising fuel costs, has put focus on increasing the fuel efficiency of new vehicles. Hybridization is a way to increase the efficiency of the powertrain. The Haldex electric Torque Vectoring Device is a rear axle with a built-in electric motor, designed to combine all-wheel drive with hybrid functionality. A method is developed for creating a real-time control algorithm that minimizes the fuel consumption. First, the consumption reduction potential of the system is investigated using Dynamic Programming. A real-time control algorithm is then devised that indicates a substantial consumption reduction potential compared to all-wheel drive, under the condition that the assumed and measured efficiencies are accurate. The control algorithm is created using the equivalent consumption minimization strategy and is implemented without any knowledge of the future driving mission. Two ways of adapting the control according to the battery state of charge are proposed and investigated. The controller optimizes the torque distribution for the current gear and also assists the driver by recommending the gear which would give the lowest consumption. The simulations indicate a substantial fuel consumption reduction potential even though the system is primarily an all-wheel drive concept. The results from vehicle tests show that the control system is charge sustaining, and the driveability is deemed good by the test drivers.
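The equivalent consumption minimization strategy mentioned above can be sketched in a few lines: at each instant, choose the electric-motor torque that minimizes fuel power plus an equivalence factor s times battery power. The candidate torques and the fuel/electric power models below are invented placeholders, not the thesis's vehicle data.

```python
def ecms_split(torque_req, s, candidates, fuel_power, elec_power):
    """Equivalent consumption minimization: pick the motor torque that
    minimizes instantaneous fuel power plus s-weighted battery power."""
    return min(candidates,
               key=lambda tm: fuel_power(torque_req - tm) + s * elec_power(tm))

# Toy models (hypothetical): quadratic engine fuel power, 90% efficient
# electric path (battery power exceeds shaft power when motoring).
fuel = lambda te: 0.02 * te * te + 1.5 * max(te, 0.0)
elec = lambda tm: tm / 0.9 if tm >= 0 else tm * 0.9

split = ecms_split(torque_req=100.0, s=2.5,
                   candidates=[0.0, 25.0, 50.0, 75.0, 100.0],
                   fuel_power=fuel, elec_power=elec)
```

The state-of-charge adaptation investigated in the thesis corresponds to adjusting s online: raising it when the battery is low discourages electric assist, keeping the controller charge sustaining.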
175

Optimization of Heat Sinks with Flow Bypass Using Entropy Generation Minimization

Hossain, Md Rakib. January 2006.

Forced air cooling of electronic packages is enhanced through the use of extended surfaces or heat sinks that reduce boundary resistance, allowing heat generating devices to operate at lower temperatures and thereby improving reliability. Unfortunately, the clearance zones or bypass regions surrounding the heat sink channel some of the cooling air mass away from the heat sink, making it difficult to accurately estimate thermal performance. The design of an "optimized" heat sink requires a complete knowledge of all thermal resistances between the heat source and the ambient air; it is therefore imperative that the boundary resistance is properly characterized, since it is typically the controlling resistance in the path. Existing models are difficult to incorporate into optimization routines because they do not provide a means of predicting flow bypass from information at hand, such as heat sink geometry or approach velocity.

A procedure is presented that allows the simultaneous optimization of heat sink design parameters based on a minimization of the entropy generation associated with thermal resistance and fluid pressure drop. All relevant design parameters, such as the geometric parameters of the heat sink, source and bypass configurations, heat dissipation, material properties and flow conditions, can be simultaneously optimized to characterize a heat sink that minimizes entropy generation and in turn results in a minimum operating temperature of an electronic component.

An analytical model for predicting air flow and pressure drop across the heat sink is developed by applying conservation of mass and momentum over the bypass regions and in the flow channels established between the fins of the heat sink. The model is applicable for the entire laminar flow range and any type of bypass (side, top, or both side and top) or fully shrouded configurations. During the development of the model, the flow was assumed to be steady, laminar, developing flow. The model is also correlated to a simple equation, within 8%, for easy implementation into the entropy generation minimization procedure. The influence of all the resistances to heat transfer associated with a heat sink is studied, and an order of magnitude analysis is carried out to include only the influential resistances in the thermal resistance model. Spreading and material resistances due to the geometry of the base plate, conduction and convection resistances associated with the fins, and the convection resistance of the wetted surfaces of the base plate are considered in the development of the thermal resistance model. The thermal resistance and pressure drop models are shown to be in good agreement with experimental data over a wide range of flow conditions, heat sink geometries, bypass configurations and power levels, typical of many applications found in microelectronics and related fields. Data published in the open literature are also used to show the flexibility of the models in simulating a variety of applications.

The proposed thermal resistance and pressure drop models are successfully used in the entropy generation minimization procedure to design a heat sink with bypass for optimum dimensions and performance. A sensitivity analysis is also carried out to check the influence of bypass configurations, power levels, heat sink materials and the coverage ratio on the optimum dimensions and performance, and it is found that any change in these parameters results in a change in the optimized heat sink dimensions and flow conditions for optimal heat sink performance.
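The entropy generation minimization idea described above can be sketched with a commonly used simplified objective, S_gen = Q²·R_th/T0² + ΔP·V̇/T0, swept here over fin count. The resistance and pressure-drop models below are made-up monotone placeholders standing in for the thesis's analytical correlations; the point is only the trade-off: more fins lower thermal resistance but raise pressure drop.

```python
def entropy_generation(n_fins, Q=50.0, T0=300.0):
    """Simplified entropy generation rate [W/K] for a heat sink.

    Q: heat load [W]; T0: ambient temperature [K]. The resistance and
    pressure-drop expressions are hypothetical placeholder models.
    """
    r_th = 2.0 / n_fins     # thermal resistance falls with fin count [K/W]
    dp = 5.0 * n_fins       # pressure drop rises with fin count [Pa]
    vdot = 0.01             # volumetric flow rate [m^3/s]
    return Q * Q * r_th / T0 ** 2 + dp * vdot / T0

# Coarse sweep: the minimum balances thermal and fluid-friction terms.
best_n = min(range(2, 41), key=entropy_generation)
```

In the thesis the same minimization is carried out simultaneously over all design parameters (fin geometry, bypass configuration, flow conditions), not just one discrete variable.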
178

Dynamic Control in Stochastic Processing Networks

Lin, Wuqin. 05 May 2005.

A stochastic processing network is a system that takes materials of various kinds as inputs, and uses processing resources to produce other materials as outputs. Such a network provides a powerful abstraction of a wide range of real world, complex systems, including semiconductor wafer fabrication facilities, networks of data switches, and large-scale call centers. Key performance measures of a stochastic processing network include throughput, cycle time, and holding cost. The network performance can be dramatically affected by the choice of operational policies. We propose a family of operational policies called maximum pressure policies. The maximum pressure policies are attractive in that their implementation uses minimal state information of the network. The deployment of a resource (server) is decided based on the queue lengths in its serviceable buffers and the queue lengths in their immediate downstream buffers. In particular, the decision does not use arrival rate information that is often difficult or impossible to estimate reliably. We prove that a maximum pressure policy can maximize throughput for a general class of stochastic processing networks. We also establish an asymptotic optimality of maximum pressure policies for stochastic processing networks with a unique bottleneck. The optimality is in terms of minimizing the workload process. A key step in the proof of the asymptotic optimality is to show that the network processes under maximum pressure policies exhibit a state space collapse.
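The server decision rule described above can be sketched directly from the abstract: a server ranks its serviceable buffers by service rate times (own queue length minus immediate downstream queue length), using no arrival-rate information, and idles if no buffer has positive pressure. The buffer names and rates below are hypothetical.

```python
def pressure(buf, queue, rates, downstream):
    """Pressure of serving a buffer: service rate times (own queue minus
    immediate downstream queue). Arrival rates are never used."""
    d = downstream.get(buf)             # None: material exits the network
    down_q = queue[d] if d is not None else 0
    return rates[buf] * (queue[buf] - down_q)

def max_pressure_choice(server_buffers, queue, rates, downstream):
    """Serve the buffer with maximal positive pressure; otherwise idle."""
    best = max(server_buffers,
               key=lambda b: pressure(b, queue, rates, downstream))
    return best if pressure(best, queue, rates, downstream) > 0 else None

# Hypothetical single-server example: buffer A feeds C, B exits the network.
queue = {"A": 5, "B": 3, "C": 4}
rates = {"A": 1.0, "B": 2.0}
downstream = {"A": "C", "B": None}
choice = max_pressure_choice(["A", "B"], queue, rates, downstream)
```

Note that serving A is barely worthwhile here (its downstream buffer C is nearly as long), so the policy prefers B, which drains material out of the network.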
179

Combination Of Alkaline Solubilization With Microwave Digestion As A Sludge Disintegration Method: Effect On Gas Production And Quantity And Dewaterability Of Anaerobically Digested Sludge

Dogan, Ilgin. 01 July 2008.

The significant increase in sewage sludge production in treatment plants makes anaerobic digestion more important as a stabilization process. However, hydrolysis is the rate-limiting step of anaerobic digestion because of the semirigid structure of the microbial cells. Pretreatment of waste activated sludge (WAS) leads to disruption of cell walls and release of extracellular and intracellular materials, so the biodegradability of the sludge is improved in terms of more biogas production and sludge minimization. Among the pretreatment methods, alkaline, thermal and thermochemical pretreatments are effective ones. Building on thermal pretreatment, microwave technology, in which the sample reaches elevated temperatures very rapidly, is a very new pretreatment method. However, no previous research had been conducted to test the effectiveness of microwave (MW) irradiation combined with alkaline pretreatment. Since both of these techniques appear highly effective, their combination may act synergistically and yield an even more efficient method. The main objective of this study was therefore to investigate the effect of combining a chemical method (alkaline pretreatment) and a physical method (microwave irradiation) on the anaerobic digestion of WAS. In the first part of the study, the alkaline and MW pretreatment methods were examined separately, and then their combinations were investigated for the first time in the literature in terms of COD solubilization, turbidity and CST. The highest SCOD was achieved with the combined method of MW+pH-12. In the second part, based on the results of the first part, WAS samples pretreated with alkaline pretreatment at pH 10 and pH 12, with MW pretreatment alone, and with the combined pretreatments MW+pH-10 and MW+pH-12 were anaerobically digested in small-scale batch anaerobic reactors.

In correlation with the highest protein and carbohydrate releases with MW+pH-12, the highest total gas and methane productions were achieved with the MW+pH-12 pretreatment reactor, with 16.3% and 18.9% improvements over the control reactor, respectively. Finally, the performance of MW+pH-12 pretreatment was examined with 2 L anaerobic semi-continuous reactors: 43.5% and 53.2% improvements were obtained in daily total gas and methane productions, and TS, VS and TCOD reductions were improved by 24.9%, 35.4% and 30.3%, respectively. The pretreated digested sludge had 22% better dewaterability than the non-pretreated digested sludge. Higher SCOD and NH3-N concentrations were measured in the effluent of the pretreated digested sludge; however, the PO4-P concentration did not vary much. The heavy metal concentrations of all digested sludges met the Soil Pollution Control Regulation standards. Finally, a simple cost calculation was done for MW+pH-12 pretreatment of WAS at a fictitious WWTP; the results showed that the WWTP would move into profit in 5.5 years.
180

Municipal Sludge Minimization: Evaluation Of Ultrasonic And Acidic Pretreatment Methods And Their Subsequent Effects On Anaerobic Digestion

Apul, Onur Guven. 01 February 2009.

Sludge management is one of the most difficult and expensive problems in wastewater treatment plant operation. Consequently, the 'sludge minimization' concept arose to address excess sludge production through sludge pretreatment. Sludge pretreatment converts the waste sludge into a more bioavailable substrate for anaerobic digestion and leads to enhanced degradation. The enhanced degradation results in more organic reduction and more biogas production; sludge pretreatment is therefore a means of improving sludge management in a treatment plant. Among pretreatment methods, acidic pretreatment has been the subject of a limited number of successful studies reported in the literature. On the contrary, ultrasonic pretreatment has been reported as an effective method. The main objective of this study was to investigate the effects of these two pretreatment methods and their combination, in order to achieve a synergistic effect and improve the success of both. The experimental investigation consisted of preliminary studies for deciding on the most appropriate pretreatment method, anaerobic batch tests for optimizing the parameters of the selected method, and, finally, the operation of semi-continuous anaerobic reactors to investigate the effect of pretreatment on anaerobic digestion in detail. The preliminary studies indicated that the more effective pretreatment method in terms of solubilization of organics is ultrasonic pretreatment: fifteen minutes of sonication raised an initial soluble COD concentration of 50 mg/L up to 2500 mg/L. Biochemical methane potential tests indicated that the increased soluble substrate improved anaerobic biodegradability concurrently. Finally, semi-continuous anaerobic reactors were used to investigate the efficiency of pretreatment under different operating conditions.

The results indicate that, at an SRT of 15 days and an OLR of 0.5 kg/m3·d, ultrasonic pretreatment improved the daily biogas production of the anaerobic digester by 49% and the methane percentage by 16%, and 24% more volatile solids were removed after pretreatment. Moreover, even after pushing the reactors into worse operating conditions, such as a shorter solids retention time (7.5 days) and low-strength influent, the pretreatment worked efficiently and improved the anaerobic digestion. Finally, cost calculations were performed. Considering the gains from the enhanced biogas amount, the higher methane percentage and the smaller amounts of volatile solids for disposal from a treatment plant, the installation and operation costs of the ultrasound equipment were calculated. The payback period of the installation was found to be 4.7 years.
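The payback figures reported in the last two abstracts are simple undiscounted ratios. A sketch of the arithmetic, with invented cost numbers chosen only for illustration (the theses' actual cost figures are not given in these abstracts):

```python
def payback_years(capital_cost, annual_net_saving):
    """Simple (undiscounted) payback period: years until the installation
    cost is recovered by annual net savings."""
    return capital_cost / annual_net_saving

# Hypothetical figures: a 94,000-unit installation saving 20,000 per year.
years = payback_years(94_000, 20_000)
print(years)
```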
