211 |
The Differential Regulation of Transfer RNA in Higher Eukaryotes and Their Emerging Role in Malignancy. Pinkard, Otis William, III 26 May 2023 (has links)
No description available.
|
212 |
Or Best Offer: A Privacy Policy Negotiation Protocol. Walker, Daniel David 12 July 2007 (has links) (PDF)
Users today are concerned about how their information is collected, stored and used by Internet sites. Privacy policy languages, such as the Platform for Privacy Preferences (P3P), allow websites to publish their privacy practices and policies in machine-readable form. Currently, software agents designed to protect users' privacy follow a "take it or leave it" approach when evaluating these privacy policies. This approach is inflexible and gives the server ultimate control over the privacy of web transactions. Privacy policy negotiation is one approach to leveling the playing field by allowing a client to negotiate with a server to determine how that server collects and uses the client's data. We present a privacy policy negotiation protocol, "Or Best Offer", that includes a formal model for specifying privacy preferences and reasoning about privacy policies. The protocol is guaranteed to terminate within three rounds of negotiation while producing policies that are Pareto-optimal, and thus fair to both parties, the client and the server.
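The Pareto-optimality criterion at the core of the protocol can be illustrated with a small sketch (a toy illustration only, not the Or Best Offer protocol itself; the policy names and utility scores below are assumed for the example): among candidate policies scored by client and server utility, only nondominated policies are acceptable outcomes of a fair negotiation.

```python
# Toy sketch: selecting Pareto-optimal candidate privacy policies.
# A "policy" here is just an object with a utility for each party; the
# policies and scores are hypothetical, not part of the actual protocol.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    name: str
    client_utility: float  # how well the policy matches the client's preferences
    server_utility: float  # how much data value the server retains

def pareto_optimal(policies):
    """Return the policies not dominated by any other policy."""
    def dominated(p, q):
        return (q.client_utility >= p.client_utility and
                q.server_utility >= p.server_utility and
                (q.client_utility > p.client_utility or
                 q.server_utility > p.server_utility))
    return [p for p in policies if not any(dominated(p, q) for q in policies)]

candidates = [
    Policy("no-retention", 0.9, 0.2),
    Policy("aggregate-only", 0.7, 0.6),
    Policy("full-retention", 0.1, 0.9),
    Policy("full-retention-with-ads", 0.05, 0.8),  # dominated by full-retention
]
for p in pareto_optimal(candidates):
    print(p.name)
```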
|
213 |
Optimality Conditions for Cardinality Constrained Optimization Problems. Xiao, Zhuoyu 11 August 2022 (has links)
Cardinality constrained optimization problems (CCOP) are a new class of optimization problems with many applications. In this thesis, we propose a framework called mathematical programs with disjunctive subspaces constraints (MPDSC), a special case of mathematical programs with disjunctive constraints (MPDC), to investigate CCOP. Our method differs from the relaxed complementarity-type reformulation in the literature. The first contribution of this thesis is the study of various stationarity conditions for MPDSC and their application to CCOP. In particular, we recover disjunctive-type strong (S-) stationarity and Mordukhovich (M-) stationarity for CCOP, and then reveal the relationship between them and those obtained from the relaxed complementarity-type reformulation. The second contribution of this thesis is a set of new results for MPDSC that do not hold for MPDC in general. We show that many constraint qualifications, such as the relaxed constant positive linear dependence (RCPLD) condition, coincide with their piecewise versions for MPDSC. Based on this result, we prove that RCPLD implies error bounds for MPDSC. These two results also hold for CCOP. All of these disjunctive-type constraint qualifications for CCOP derived from MPDSC are, in some sense, weaker than those from the relaxed complementarity-type reformulation. / Graduate
|
214 |
Using Pareto points for model identification in predictive toxicology. Palczewska, Anna Maria, Neagu, Daniel, Ridley, Mick J. January 2013 (has links)
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature on best practice for model generation and data integration, but the management and automated identification of relevant models from available collections is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology.
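A minimal sketch of the underlying idea (the two criteria used here, distance to a model's applicability domain and validation accuracy, are assumptions standing in for the paper's actual bi-criteria setup): score each available model on two competing criteria for the query compound and select from the Pareto-nondominated set.

```python
# Sketch of Pareto-based model identification (illustrative only; the model
# records and the 1-D descriptor space are toy assumptions).
def nondominated(models):
    """Keep models not dominated in (minimize distance, maximize accuracy)."""
    def dominates(a, b):
        return (a["distance"] <= b["distance"] and a["accuracy"] >= b["accuracy"]
                and (a["distance"] < b["distance"] or a["accuracy"] > b["accuracy"]))
    return [m for m in models if not any(dominates(o, m) for o in models)]

def identify_model(models, query_descriptor, distance_fn):
    for m in models:
        m["distance"] = distance_fn(query_descriptor, m["domain_center"])
    front = nondominated(models)
    # Tie-break on the Pareto front: prefer the model closest to the query.
    return min(front, key=lambda m: m["distance"])

# Example with toy models and a single numeric descriptor.
models = [
    {"name": "IGC50_modelA", "domain_center": 0.2, "accuracy": 0.81},
    {"name": "IGC50_modelB", "domain_center": 0.7, "accuracy": 0.74},
    {"name": "IGC50_modelC", "domain_center": 0.9, "accuracy": 0.69},
]
best = identify_model(models, query_descriptor=0.65,
                      distance_fn=lambda a, b: abs(a - b))
print(best["name"])
```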
|
215 |
Optimal investment in an oil-based economy. Theoretical and Empirical Study of a Ramsey-Type Model for Libya. Zarmouh, Omar Othman January 1998 (has links)
In a developing oil-based economy like Libya, the availability of finance is largely determined by oil revenues, which are subject to disturbances and shocks. Therefore, the decision to save and invest a certain ratio of the country's aggregate output is, to a large extent, determined by shocks in the oil markets rather than by the requirements of economic development.
In this study an attempt is made to determine the optimal rate of saving and investment, both defined as a ratio of aggregate output, according to the requirements of economic development. For this purpose, a neo-classical Ramsey-type model for Libya is constructed and applied to obtain, theoretically and empirically, the optimal saving and investment rate over the period 1965-1991. The results reveal that Libya was investing above the optimal level during the oil boom of the 1970s and below it during the oil crisis of the 1980s. In addition, an econometric investigation of the determinants of actual investment by sector (agriculture, non-oil industry, and services) is carried out in order to shed light on how far Libya can adjust actual investment towards its optimal level. It is found that, as expected, the most important factor in this respect is oil revenues or, more generally, the availability of finance. The study also reveals that investment in agriculture was associated, during the period of study, with a very low marginal productivity of capital, whereas marginal productivity was higher in both non-oil industry and services.
Finally, the study investigates future potential saving and investment rates and concludes that the economy, which has already reached its steady state, can be pushed towards further growth if it can increase the level of per-worker human capital, proxied by secondary school enrolment as a percentage of population. / Secretariat of Higher Education in Libya and Libyan Interests Section in London
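For context, the planner's problem in a standard neoclassical Ramsey-type model of the kind described above takes the following textbook form (the thesis's specification adds oil revenues and other Libya-specific features):

```latex
% Planner's problem in per-worker terms (c = consumption per worker, k = capital
% per worker, \rho = discount rate, n = population growth, \delta = depreciation):
\[
  \max_{\{c(t)\}} \int_{0}^{\infty} e^{-\rho t}\, u\bigl(c(t)\bigr)\, dt
  \quad \text{s.t.} \quad
  \dot{k}(t) = f\bigl(k(t)\bigr) - c(t) - (n+\delta)\,k(t), \qquad k(0) = k_0 .
\]
% The implied optimal saving rate is s^*(t) = 1 - c(t)/f(k(t)), and along the
% optimal path consumption obeys the Euler equation
\[
  \frac{\dot{c}}{c} = \sigma(c)\,\bigl(f'(k) - (n+\delta) - \rho\bigr),
  \qquad \sigma(c) = -\frac{u'(c)}{c\,u''(c)} .
\]
```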
|
216 |
Interpretation, Identification and Reuse of Models. Theory and algorithms with applications in predictive toxicology. Palczewska, Anna Maria January 2014 (has links)
This thesis is concerned with developing methodologies that enable existing models to be effectively reused. The results of this thesis are presented in the framework of Quantitative Structure-Activity Relationship (QSAR) models, but their application is much more general. QSAR models relate chemical structures to their biological, chemical or environmental activity. Many applications offer an environment in which to build and store predictive models. Unfortunately, they do not provide advanced functionality that allows for efficient model selection and for interpretation of model predictions for new data. This thesis aims to address these issues and proposes methodologies for dealing with three research problems: model governance (management), model identification (selection), and interpretation of model predictions. The combination of these methodologies can be employed to build more efficient systems for model reuse in QSAR modelling and other areas.
The first part of this study investigates toxicity data and model formats and reviews some of the existing toxicity systems in the context of model development and reuse. Based on the findings of this review and the principles of data governance, a novel concept of model governance is defined. Model governance comprises model representation and model governance processes. These processes are designed and presented in the context of model management. As an application, minimum information requirements and an XML representation for QSAR models are proposed.
Once a collection of validated, accepted and well annotated models is available within a model governance framework, they can be applied to new data. More than one model may be available for the same endpoint. Which one should be chosen? The second part of this thesis proposes a theoretical framework and algorithms that enable automated identification of the most reliable model for new data from a collection of existing models. The main idea is based on partitioning the search space into groups and assigning a single model to each group. Constructing this partitioning is difficult because it is a bi-criteria problem. The main contribution in this part is the application of Pareto points to the search space partition. The proposed methodology is applied to three endpoints in chemoinformatics and predictive toxicology.
After having identified a model for the new data, we would like to know how the model obtained its prediction and how trustworthy it is. Interpretation of model predictions is straightforward for linear models thanks to the availability of model parameters and their statistical significance. For nonlinear models this information can be hidden inside the model structure. This thesis proposes an approach for interpreting a random forest classification model. The approach allows the determination of the influence (called the feature contribution) of each variable on the model prediction for an individual data point. Three methods are proposed in this part that allow analysis of feature contributions. Such analysis can lead to the discovery of new patterns that represent the standard behaviour of the model and allow additional assessment of the model's reliability for new data. The application of these methods to two standard benchmark datasets from the UCI machine learning repository shows the great potential of this methodology. The algorithm for calculating feature contributions has been implemented and is available as an R package called rfFC. / BBSRC and Syngenta (International Research Centre at Jealott’s Hill, Bracknell, UK).
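The feature-contribution idea can be sketched for a single prediction as follows (an illustrative re-implementation in Python with scikit-learn, not the rfFC R package itself): the prediction is walked down each tree, every change in the node-level class probabilities is attributed to the feature split at that node, and the root prior plus the summed contributions reconstructs the forest's prediction.

```python
# Sketch of per-instance feature contributions for a random forest classifier.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def tree_contributions(estimator, x, n_features):
    """Attribute changes in class probabilities along x's decision path."""
    t = estimator.tree_
    # Normalize node values to class probabilities (robust to sklearn version).
    probs = t.value[:, 0, :] / t.value[:, 0, :].sum(axis=1, keepdims=True)
    contrib = np.zeros((n_features, probs.shape[1]))
    node = 0
    while t.children_left[node] != -1:            # -1 marks a leaf node
        feat = t.feature[node]
        child = (t.children_left[node]
                 if x[feat] <= t.threshold[node]
                 else t.children_right[node])
        contrib[feat] += probs[child] - probs[node]   # change caused by this split
        node = child
    return probs[0], contrib                       # (root prior, contributions)

def forest_contributions(forest, x):
    n_features = len(x)
    priors, contribs = zip(*(tree_contributions(est, x, n_features)
                             for est in forest.estimators_))
    return np.mean(priors, axis=0), np.mean(contribs, axis=0)

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
prior, contrib = forest_contributions(rf, X[0])
# prior + summed contributions reconstructs the forest's averaged leaf probabilities
print(prior + contrib.sum(axis=0))
print(rf.predict_proba(X[:1]))
```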
|
217 |
A Computational Analysis of the Structure of the Genetic Code. Degagne, Christopher 11 1900 (has links)
The standard genetic code (SGC) is the cipher used by nearly all organisms to transcribe information stored in DNA and translate it into its amino acid counterparts. Since the early 1960s, researchers have observed that the SGC is structured so that similar codons encode amino acids with similar physicochemical properties. This structure has been hypothesized to buffer the SGC against transcriptional or translational error, because single nucleotide mutations usually are either silent or impart minimal effect on the containing protein. We herein briefly review different theories for the origin of that structure. We also briefly review different computational experiments designed to quantify buffering capacity for the SGC.
We report on computational Monte Carlo simulations that we performed using AGCT, a computer program that we developed. In the simulations, the SGC was ranked against other, hypothetical genetic codes (HGC) for its ability to minimize physicochemical distances between amino acids encoded by codons separated by single nucleotide mutations. We analyzed unappreciated structural aspects of, and neglected properties in, the SGC. We found that the type of error measure affected the SGC ranking. We also found that altering stop codon positions had no effect on the SGC ranking, but that including stop codons in error calculations improved it. We analyzed 49 properties individually and identified conserved properties. Among these, we found that long-range non-bonded energy is more conserved than polar requirement, which previously was considered to be the most conserved property in the SGC. We also analyzed properties in combination, and we hypothesize that the SGC is organized as a compromise among multiple properties.
Finally, we used AGCT to test whether different theories on the origin of the SGC could explain more convincingly the buffering capacity of the SGC. We found that, without accounting for transition/transversion biases, the SGC ranking was modest enough under the constraints imposed by the coevolution and four-column theories that it could be explained by the constraints associated with either theory (or both); however, when transition/transversion biases were included, only the four-column theory returned an SGC ranking modest enough to be explained by the constraints associated with that theory. / Thesis / Master of Science (MSc) / The standard genetic code (SGC) is the cipher used almost universally to transcribe information stored in DNA and translate it to amino acid counterparts. Since the mid 1960s, researchers have recognized that the SGC is organized so that similar three-nucleotide RNA codons encode amino acids with similar properties; researchers consequently hypothesized that the SGC is structured to minimize effects from transcription or translation errors. This hypothesis has been tested using computer simulation. I briefly review results from those studies, complement them by analyzing unappreciated structural aspects and neglected properties, and test two theories on the origin of the SGC.
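The kind of Monte Carlo comparison described above can be sketched in a few lines (a simplified version of the classical approach, not the AGCT program; Kyte-Doolittle hydropathy is used purely as an example property, and amino-acid assignments are shuffled among synonymous codon blocks while stop codons stay fixed):

```python
# Sketch: rank the standard genetic code (SGC) against random codes by the
# mean squared property change over all single-nucleotide substitutions.
import itertools, random

BASES = "TCAG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]
# Standard code, codons enumerated in TCAG order; '*' marks stop codons.
SGC = dict(zip(CODONS,
               "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"))
# Kyte-Doolittle hydropathy, used only as an illustrative amino-acid property.
HYDROPATHY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
              "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
              "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
              "Y": -1.3, "V": 4.2}

def error_cost(code):
    """Mean squared property difference over all single-nucleotide neighbours."""
    diffs = []
    for codon in CODONS:
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                neighbour = codon[:pos] + b + codon[pos + 1:]
                a1, a2 = code[codon], code[neighbour]
                if a1 != "*" and a2 != "*":          # skip stop codons
                    diffs.append((HYDROPATHY[a1] - HYDROPATHY[a2]) ** 2)
    return sum(diffs) / len(diffs)

def random_code(rng):
    """Shuffle which amino acid occupies each synonymous block; stops stay put."""
    aas = sorted(set(SGC.values()) - {"*"})
    mapping = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: ("*" if a == "*" else mapping[a]) for c, a in SGC.items()}

rng = random.Random(0)
sgc_cost = error_cost(SGC)
better = sum(error_cost(random_code(rng)) < sgc_cost for _ in range(1000))
print(f"SGC cost {sgc_cost:.2f}; {better}/1000 random codes do better")
```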
|
218 |
Exploring mechanisms underlying recruitment of white crappie in Ohio reservoirs. Bunnell, David B., Jr. 20 December 2002 (has links)
No description available.
|
219 |
Advances in aircraft design: multiobjective optimization and a markup language. Deshpande, Shubhangi Govind 23 January 2014 (has links)
Today's modern aerospace systems exhibit strong interdisciplinary coupling and require a multidisciplinary, collaborative approach. Analysis methods that were once considered feasible only for advanced and detailed design are now available and even practical at the conceptual design stage. This changing philosophy for conducting conceptual design poses additional challenges beyond those encountered in a low fidelity design of aircraft. This thesis takes some steps towards bridging the gaps in existing technologies and advancing the state-of-the-art in aircraft design.
The first part of the thesis proposes a new Pareto front approximation method for multiobjective optimization problems. The method employs a hybrid optimization approach using two derivative free direct search techniques, and is intended for solving blackbox simulation based multiobjective optimization problems with possibly nonsmooth functions where the analytical form of the objectives is not known and/or the evaluation of the objective function(s) is very expensive (very common in multidisciplinary design optimization). A new adaptive weighting scheme is proposed to convert a multiobjective optimization problem to a single objective optimization problem. Results show that the method achieves an arbitrarily close approximation to the Pareto front with a good collection of well-distributed nondominated points.
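A generic weighted-sum scalarization of the kind such schemes build on can be sketched as follows (basic idea only: a stock optimizer stands in for the hybrid direct search, uniformly spaced weights stand in for the adaptive weighting scheme, and the two objectives are toy surrogates, not aircraft analyses):

```python
# Sketch: approximate a bi-objective Pareto front by sweeping scalarization
# weights and filtering nondominated points.
import numpy as np
from scipy.optimize import minimize

def f1(x):  # toy objective 1 (e.g. a weight surrogate)
    return x[0] ** 2 + x[1] ** 2

def f2(x):  # toy objective 2, conflicting with f1 (e.g. a drag surrogate)
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def scalarized(w):
    # Convert the multiobjective problem to a single-objective one.
    return lambda x: w * f1(x) + (1.0 - w) * f2(x)

points = []
for w in np.linspace(0.0, 1.0, 21):
    res = minimize(scalarized(w), x0=[1.0, 0.5], method="Nelder-Mead")
    points.append((f1(res.x), f2(res.x)))

def pareto_filter(pts):
    """Keep only nondominated (minimize both objectives) points."""
    return [p for p in pts
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pts)]

front = sorted(pareto_filter(points))
print(f"{len(front)} nondominated points approximate the Pareto front")
```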
The second part deals with the interdisciplinary data communication issues involved in a collaborative multidisciplinary aircraft design environment. Efficient transfer, sharing, and manipulation of design and analysis data in a collaborative environment demand a formal structured representation of data. XML, a W3C recommendation, is one such standard, with a number of powerful capabilities that alleviate interoperability issues. A compact, generic, and comprehensive XML schema for an aircraft design markup language (ADML) is proposed here to provide a common language for data communication, and to improve efficiency and productivity within a multidisciplinary, collaborative environment. An important feature of the proposed schema is the very expressive and efficient low level schemata. As a proof of concept, the schema is used to encode an entire Convair B-58. As the complexity of models and the number of disciplines increase, the reduction in effort to exchange data models and analysis results in ADML also increases. / Ph. D.
|
220 |
Some Contributions to Inferential Issues of Censored Exponential Failure Data. Han, Donghoon 06 1900 (has links)
In this thesis, we investigate several inferential issues regarding lifetime data from the exponential distribution under different censoring schemes. For reasons of time constraint and cost reduction, censored sampling is commonly employed in practice, especially in reliability engineering. Among various censoring schemes, progressive Type-I censoring provides not only the practical advantage of a known termination time but also greater flexibility to the experimenter in the design stage by allowing for the removal of test units at non-terminal time points. Hence, we first consider inference for a progressively Type-I censored life-testing experiment with k uniformly spaced intervals. For small to moderate sample sizes, a practical modification is proposed to the censoring scheme in order to guarantee a feasible life-test under progressive Type-I censoring. Under this setup, we obtain the maximum likelihood estimator (MLE) of the unknown mean parameter and derive the exact sampling distribution of the MLE through the use of the conditional moment generating function, under the condition that the existence of the MLE is ensured. Using the exact distribution of the MLE as well as its asymptotic distribution and the parametric bootstrap method, we discuss the construction of confidence intervals for the mean parameter, and their performance is then assessed through Monte Carlo simulations.
Next, we consider a special class of accelerated life tests, known as step-stress tests, in reliability testing. In a step-stress test, the stress levels increase discretely at pre-fixed time points, which allows the experimenter to obtain information on the parameters of the lifetime distributions more quickly than under normal operating conditions. Here, we consider a k-step-stress accelerated life testing experiment with an equal step duration τ. In particular, the case of progressively Type-I censored data with a single stress variable is investigated. For small to moderate sample sizes, we introduce another practical modification to the model for a feasible k-step-stress test under progressive censoring, and the optimal τ is searched for using the modified model. Next, we seek the optimal τ under the condition that the step-stress test proceeds to the k-th stress level, and the efficiency of this conditional inference is compared to that of the preceding models. In all cases, censoring is allowed at each stress change point iτ, i = 1, 2, ... , k, and the problem of selecting the optimal τ is discussed using C-optimality, D-optimality, and A-optimality criteria. Moreover, when a test unit fails, there is often more than one fatal cause for the failure, such as mechanical or electrical. Thus, we also consider simple step-stress models under Type-I and Type-II censoring situations when the lifetime distributions corresponding to the different risk factors are independently exponentially distributed. Under this setup, we derive the MLEs of the unknown mean parameters of the different causes under the assumption of a cumulative exposure model. The exact distributions of the MLEs of the parameters are then derived through the use of conditional moment generating functions. Using these exact distributions as well as the asymptotic distributions and the parametric bootstrap method, we discuss the construction of confidence intervals for the parameters and then assess their performance through Monte Carlo simulations. / Thesis / Doctor of Philosophy (PhD)
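For reference, the conventional Type-I censored exponential setup underlying these analyses is as follows (a textbook statement; the progressive and step-stress extensions in the thesis modify it):

```latex
% n units with Exp(\theta) lifetimes (mean \theta) are put on test and censored
% at time T; d units fail at times t_1,\dots,t_d and n-d survive past T.
\[
  L(\theta) = \theta^{-d}
  \exp\!\left( -\,\frac{\sum_{i=1}^{d} t_i + (n-d)\,T}{\theta} \right),
  \qquad
  \hat{\theta} = \frac{\sum_{i=1}^{d} t_i + (n-d)\,T}{d} \quad (d \ge 1).
\]
% Exact confidence intervals for \theta follow from the sampling distribution of
% \hat{\theta}, obtained e.g. via its (conditional) moment generating function
% or approximated by a parametric bootstrap.
```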
|