311 |
Regulation of B cell development by antigen receptors. Hauser, Jannek, January 2011.
The developmental processes of lymphopoiesis generate mature B lymphocytes from hematopoietic stem cells through increasingly restricted intermediates. Networks of transcription factors regulate these cell fate choices and are composed of both ubiquitously expressed and B lineage-specific factors. E-protein transcription factors are encoded by the three genes E2A, E2-2 (SEF2-1), and HEB. The E2A gene is required for B cell development and encodes the alternatively spliced proteins E12 and E47. During B lymphocyte development, the cells have to pass several checkpoints verifying the functionality of their antigen receptors. Early in development, the expression of a pre-B cell receptor (pre-BCR), in which membrane-bound immunoglobulin (Ig) heavy chain protein is associated with surrogate light chain (SLC) proteins, is a critical checkpoint that monitors for functional Ig heavy chain rearrangement. Signaling from the pre-BCR induces survival and a limited clonal expansion. Here it is shown that pre-BCR signaling rapidly down-regulates the SLCs λ5 and VpreB as well as the co-receptor CD19. Ca2+ signaling and E2A were shown to be essential for this regulation. E2A mutated in its binding site for the Ca2+ sensor protein calmodulin (CaM), and thus with CaM-resistant DNA binding, renders λ5, VpreB and CD19 expression resistant to the inhibition that follows pre-BCR stimulation. Thus, Ca2+ signaling down-regulates SLC and CD19 gene expression upon pre-BCR stimulation through inhibition of E2A by Ca2+/CaM. A general negative feedback regulation of the pre-BCR proteins, as well as of many co-receptors and proteins in signal pathways from the receptor, was also shown.
After the ordered recombination of Ig heavy chain gene segments, Ig light chain gene segments are likewise recombined to create antibody diversity. The recombinations are orchestrated by the recombination activating gene (RAG) enzymes, other enzymes that cleave, mutate and assemble DNA of the Ig loci, and the transcription factor Pax5. A key feature of the immune system is the concept that one lymphocyte has only one antigen specificity that can be selected for or against. This requires that only one of the alleles of the genes for Ig chains is made functional. The mechanism of this allelic exclusion has, however, been an enigma. Here pre-BCR signaling was shown to down-regulate several components of the recombination machinery, including RAG1 and RAG2, through CaM inhibition of E2A. Furthermore, E2A, Pax5 and the RAGs were shown to be in a complex bound to key sequences on the IgH gene before pre-BCR stimulation, and instead bound to CaM after this stimulation. Thus, the recombination complex is directly released through CaM inhibition of E2A.
Upon encountering antigens, B cells must adapt to produce a highly specific and potent antibody response. Somatic hypermutation (SH), which introduces point mutations in the variable regions of Ig genes, can increase the affinity for antigen, and antibody effector functions can be altered by class switch recombination (CSR), which changes the expressed constant region exons. Activation-induced cytidine deaminase (AID) is the mutagenic antibody diversification enzyme that is essential for both SH and CSR. The AID enzyme has to be tightly controlled, as it is a powerful mutagen. BCR signaling, which indicates that good antibody affinity has been reached, was shown to inhibit AID gene expression through CaM inhibition of E2A. SH increases the antigen binding strength by many orders of magnitude.
Each round of SH leads to one or a few mutations, followed by selection for increased affinity. Thus, BCR signaling has to enable selection for successive improvements in antibodies (Ab) over an extremely broad range of affinities. Here the BCR is shown to be subject to general negative feedback regulation of the receptor proteins, as well as of many co-receptors and proteins in signal pathways from the receptor. Thus, the BCR can down-regulate itself to enable sensitive detection of successive improvements in antigen affinity. Furthermore, the feedback inhibition of the BCR signalosome and most of its proteins, like most other gene regulation by BCR stimulation, acts through inhibition of E2A by Ca2+/CaM. Differentiation to Ab-secreting plasmablasts and plasma cells is antigen-driven. The interaction of antigen with the membrane-bound Ab of the BCR is critical in determining which clones enter the plasma cell response. Genome-wide analysis showed that differentiation of B cells to Ab-secreting cells is induced by BCR stimulation through very fast regulatory events; induction of IRF-4 and down-regulation of Pax5, Bcl-6, MITF, Ets-1, Fli-1 and Spi-B expression were identified as immediate early events. Ca2+ signaling through CaM inhibition of E2A was essential for these rapid down-regulations of immediate early genes after BCR stimulation in the initiation of plasma cell differentiation.
|
312 |
Surrogate-Assisted Evolutionary Algorithms. Loshchilov, Ilya, 08 January 2013.
Evolutionary Algorithms (EAs) have been widely studied for their ability to solve complex optimization problems using variation operators adapted to specific problems. A search driven by a population of solutions offers good robustness with respect to moderate noise and to multi-modality of the optimized function, in contrast to other classical optimization methods such as quasi-Newton methods. The main limitation of EAs, the large number of objective function evaluations required, nevertheless penalizes their use for optimizing functions that are computationally expensive. This thesis focuses on one evolutionary algorithm, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), known as a powerful algorithm for continuous black-box optimization. We present the state of the art of algorithms derived from CMA-ES for solving single- and multi-objective optimization problems in the black-box scenario. A first contribution, aimed at the optimization of expensive functions, concerns scalar approximation of the objective function. The learned meta-model respects the ranking of solutions (induced by their objective function values) and is therefore invariant under monotonic transformations of the objective function. The resulting algorithm, saACM-ES, tightly integrates the optimization performed by CMA-ES with the statistical learning of adaptive meta-models; in particular, the meta-models rely on the covariance matrix adapted by CMA-ES. saACM-ES thus preserves the two key invariance properties of CMA-ES: invariance i) under monotonic transformations of the objective function, and ii) under orthogonal transformations of the search space. The approach is extended to multi-objective optimization by proposing two types of (scalar) meta-models. The first relies on a characterization of the current Pareto front (using a mixed variant of a One-Class Support Vector Machine (SVM) for the dominated points and a Regression SVM for the non-dominated points). The second relies on learning the ranking (Pareto rank) of solutions. Both approaches are integrated into CMA-ES for multi-objective optimization (MO-CMA-ES), and we discuss some aspects of exploiting meta-models in the multi-objective setting. A second contribution concerns the design of new algorithms for single-objective, multi-objective, and multi-modal optimization, developed to understand, explore and extend the frontiers of the field of evolutionary algorithms, and of CMA-ES in particular. Specifically, the coordinate-system adaptation proposed by CMA-ES is coupled with an adaptive coordinate-by-coordinate descent method. An adaptive restart strategy for CMA-ES is proposed for multi-modal optimization. Finally, selection strategies adapted to multi-objective optimization, remedying difficulties encountered by MO-CMA-ES, are proposed.
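To make the ranking-based surrogate assistance concrete, the following Python sketch couples the `cma` package's CMA-ES with a scikit-learn SVR used to pre-rank offspring; the SVR, the objective function, the evaluation budget, and the pre-screening fraction are illustrative stand-ins, not the actual saACM-ES components.

```python
import numpy as np
import cma                                    # pip install cma
from sklearn.svm import SVR                   # stand-in for the rank-based meta-model

def expensive_f(x):                           # placeholder objective (hypothetical)
    return float(np.sum(np.asarray(x) ** 2))

es = cma.CMAEvolutionStrategy(8 * [0.5], 0.3)
X_hist, y_hist = [], []                       # archive of truly evaluated points

for _ in range(50):
    if es.stop():
        break
    X = es.ask()                              # offspring from the current search distribution
    if len(y_hist) >= 30:                     # enough data: pre-screen with the surrogate
        surr = SVR().fit(np.array(X_hist), np.array(y_hist))
        y = list(surr.predict(np.array(X)))   # surrogate values for most offspring...
        eval_idx = np.argsort(y)[: max(2, len(X) // 3)]
    else:
        y = [0.0] * len(X)
        eval_idx = range(len(X))
    for i in eval_idx:                        # ...true evaluations only for the best few
        y[i] = expensive_f(X[i])
        X_hist.append(np.asarray(X[i]))
        y_hist.append(y[i])
    es.tell(X, y)                             # ranking mixes true and surrogate values

print(es.result.xbest)
```

In the actual saACM-ES, the meta-model is rank-preserving and built in the metric induced by the covariance matrix adapted by CMA-ES, which is what preserves the invariances discussed above; the sketch omits both refinements.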
|
313 |
The constitutional and contractual implications of the application of Chapter 19 of the Children's Act 38 of 2005. Lewis, Samantha Vanessa, January 2011.
In this research, I carefully and coherently examine Chapter 19 of the Children's Act 38 of 2005 as the first legislation to afford surrogate motherhood agreements legal recognition in South Africa. I argue that the application of Chapter 19 imposes a number of unwarranted limitations on several of the constitutional rights of the parties to a surrogacy agreement. In addition, I propose that Chapter 19 is not in accordance with the principle of the best interests of the child. I examine the history of surrogate motherhood in South Africa and establish that, prior to the enactment of Chapter 19, no legislation expressly afforded surrogate motherhood agreements legal recognition. Hence, before Chapter 19 came into force, parties who entered surrogacy agreements could, first, not rely on the agreement to enforce contractual obligations; secondly, the legal positions of the parties to the agreement were uncertain; and thirdly, a child born of a surrogacy agreement was regarded as the child of the surrogate mother and not of the commissioning parents.
|
314 |
An efficient approach for high-fidelity modeling incorporating contour-based sampling and uncertainty. Crowley, Daniel R., 13 January 2014.
During the design process for an aerospace vehicle, decision-makers must have an accurate understanding of how each choice will affect the vehicle and its performance. This understanding is based on experiments and, increasingly often, computer models. In general, as a computer model captures a greater number of phenomena, its results become more accurate for a broader range of problems. This improved accuracy typically comes at the cost of significantly increased computational expense per analysis.
Although rapid analysis tools have been developed that are sufficient for many design efforts, those tools may not be accurate enough for revolutionary concepts subject to demanding flight conditions such as transonic or supersonic flight and extreme angles of attack. At such conditions, the simplifying assumptions of the rapid tools no longer hold. Accurate analysis of such concepts would require models that do not make those simplifying assumptions, with the corresponding increases in computational effort per analysis. As computational costs rise, exploration of the design space can become exceedingly expensive. If this expense cannot be reduced, decision-makers would be forced to choose between a thorough exploration of the design space using inaccurate models and the analysis of a sparse set of options using accurate models. This problem is exacerbated as the number of free parameters increases, limiting the number of trades that can be investigated in a given time. In the face of limited resources, it becomes critically important that only the most useful experiments be performed, which raises two questions: how can the most useful experiments be identified, and how can experimental results be used in the most effective manner?
This research effort focuses on identifying and applying techniques that could address these questions. The demonstration problem for this effort was the modeling of a reusable booster vehicle, which would be subject to a wide range of flight conditions while returning to its launch site after staging. Contour-based sampling, an adaptive sampling technique, seeks cases that will improve the prediction accuracy of surrogate models for particular ranges of the responses of interest. In the case of the reusable booster, contour-based sampling was used to emphasize configurations with small pitching moments; the broad design space included many configurations that produced uncontrollable aerodynamic moments for at least one flight condition. By emphasizing designs that were likely to trim over the entire trajectory, contour-based sampling improved the predictive accuracy of surrogate models for such designs while minimizing the number of analyses required.
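The following Python sketch illustrates one common form of contour-based sampling, using a scikit-learn Gaussian process and the "straddle" criterion to favor candidates that are both close to a target contour (here, zero pitching moment) and uncertain; the response function and criterion details are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(12, 2))        # initial space-filling cases
y_train = X_train[:, 0] ** 2 - X_train[:, 1]      # placeholder pitching-moment response

gp = GaussianProcessRegressor().fit(X_train, y_train)

candidates = rng.uniform(-1, 1, size=(2000, 2))   # dense pool of unevaluated designs
mu, std = gp.predict(candidates, return_std=True)
target = 0.0                                      # contour of interest: trimmed (Cm = 0)
straddle = 1.96 * std - np.abs(mu - target)       # large near the contour AND uncertain
x_next = candidates[np.argmax(straddle)]          # next expensive analysis to run
print(x_next)
```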
The simplified models mentioned above, although less accurate for extreme flight conditions, can still be useful for analyzing performance at more common flight conditions. The simplified models may also offer insight into trends in the response behavior. Data from these simplified models can be combined with more accurate results to produce useful surrogate models with better accuracy than the simplified models but at less cost than if only expensive analyses were used. Of the data fusion techniques evaluated, Ghoreyshi cokriging was found to be the most effective for the problem at hand.
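As an illustration of the data-fusion idea, the sketch below uses a simple additive-correction (bridge-function) scheme rather than Ghoreyshi cokriging itself: a surrogate of the cheap model is corrected by a second surrogate trained on the cheap-to-expensive discrepancy at the few points where both analyses were run. All functions and sample sizes here are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def cheap(x):                                     # low-fidelity analysis (hypothetical)
    return np.sin(3 * x).ravel()

def costly(x):                                    # high-fidelity analysis (hypothetical)
    return np.sin(3 * x).ravel() + 0.3 * x.ravel()

X_lo = np.linspace(0, 1, 25).reshape(-1, 1)       # plentiful cheap data
X_hi = np.linspace(0, 1, 5).reshape(-1, 1)        # scarce expensive data

gp_lo = GaussianProcessRegressor().fit(X_lo, cheap(X_lo))
delta = costly(X_hi) - gp_lo.predict(X_hi)        # discrepancy at the shared points
gp_delta = GaussianProcessRegressor().fit(X_hi, delta)

def fused(x):                                     # cheap trend plus learned correction
    return gp_lo.predict(x) + gp_delta.predict(x)

print(fused(np.array([[0.5]])))
```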
Lastly, uncertainty present in the data was found to negatively affect predictive accuracy of surrogate models. Most surrogate modeling techniques neglect uncertainty in the data and treat all cases as deterministic. This is plausible, especially for data produced by computer analyses which are assumed to be perfectly repeatable and thus truly deterministic. However, a number of sources of uncertainty, such as solver iteration or surrogate model prediction accuracy, can introduce noise to the data. If these sources of uncertainty could be captured and incorporated when surrogate models are trained, the resulting surrogate models would be less susceptible to that noise and correspondingly have better predictive accuracy. This was accomplished in the present effort by capturing the uncertainty information via nuggets added to the Kriging model.
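The nugget idea can be illustrated with scikit-learn, where the `alpha` argument of `GaussianProcessRegressor` adds per-point noise variances to the diagonal of the covariance matrix, so the kriging model regresses through noisy data instead of interpolating it exactly; the data and noise levels below are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
X = np.linspace(0, 1, 15).reshape(-1, 1)
noise_var = np.full(15, 0.01)                     # captured per-case uncertainty (hypothetical)
y = np.sin(6 * X).ravel() + rng.normal(0, np.sqrt(noise_var))

gp = GaussianProcessRegressor(alpha=noise_var).fit(X, y)   # alpha acts as the nugget
mu, std = gp.predict(np.array([[0.5]]), return_std=True)
print(mu, std)
```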
By combining these techniques, surrogate models could be created which exhibited better predictive accuracy while selecting the most informative experiments possible. This significantly reduced the computational effort expended compared to a more standard approach using space-filling samples and data from a single source. The relative contributions of each technique were identified, and observations were made pertaining to the most effective way to apply the separate and combined methods.
|
315 |
Progressive Validity Metamodel Trust Region Optimization. Thomson, Quinn Parker, 26 February 2009.
The goal of this work was to develop metamodels for the MDO framework piMDO and to provide new research in metamodeling strategies. The theory of existing metamodels is presented and implementation details are given. A new trust region scheme, metamodel trust region optimization (MTRO), was developed. This method uses a progressively increasing minimum level of validity in order to reduce the number of sample points required for the optimization process. Higher levels of validity require denser point distributions, but the shrinking size of the region during the optimization process mitigates the increase in the number of points required. New metamodeling strategies include inherited optimal Latin hypercube sampling, hybrid Latin hypercube sampling, and kriging with BFGS. MTRO performs better than traditional trust region methods for single-discipline problems and is competitive against other MDO architectures when used with a CSSO algorithm. Advanced metamodeling methods proved to be inefficient in trust region methods.
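For orientation, the sketch below shows the generic surrogate trust-region loop that MTRO belongs to: optimize a local metamodel inside the region, compare predicted and actual reduction, and grow or shrink the region accordingly. The linear metamodel and the acceptance thresholds are illustrative stand-ins; MTRO's progressive-validity rule and kriging models are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                                         # placeholder expensive objective
    return float((x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.1) ** 2)

x, r = np.zeros(2), 0.5                           # current iterate and region radius
fx = f(x)
for _ in range(15):
    # linear metamodel from finite differences (stand-in for kriging)
    g = np.array([(f(x + 1e-5 * e) - fx) / 1e-5 for e in np.eye(2)])
    model = lambda z, g=g, x=x, fx=fx: fx + g @ (z - x)
    bounds = [(xi - r, xi + r) for xi in x]       # the model is trusted only in this box
    z = minimize(model, x, bounds=bounds).x
    fz = f(z)
    predicted = fx - model(z)                     # reduction the metamodel promised
    rho = (fx - fz) / predicted if predicted > 0 else 0.0
    if rho > 0.1:                                 # enough true improvement: accept step
        x, fx = z, fz
    r = 2.0 * r if rho > 0.75 else 0.5 * r        # grow or shrink the trust region

print(x, fx)
```

In MTRO, the required minimum validity of the metamodel increases as the optimization progresses, which the abstract credits with reducing the total number of sample points; the fixed thresholds above are the non-progressive baseline.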
|
317 |
A methodology for ballistic missile defense systems analysis using nested neural networks. Weaver, Brian Lee, 10 July 2008.
The high costs and political tensions associated with Ballistic Missile Defense Systems (BMDS) have driven much of the testing and evaluation of BMDS to be performed through high-fidelity Modeling and Simulation (M&S). In response, the M&S environments have become highly complex, extremely computationally intensive, and far too slow to be of use to systems engineers and high-level decision makers.
Regression models can be used to map the system characteristics to the metrics of interest, enabling the generation of large quantities of data and real-time interaction with high-fidelity M&S environments; however, the abundance of discontinuities and non-unique solutions makes the application of regression techniques hazardous. Due to these ambiguities, the transfer function from the characteristics to the metrics appears to have multiple solutions for a given set of inputs, which, combined with multiple inputs yielding the same set of outputs, complicates the creation of a mapping. Because of the abundance of discontinuities, the existence of a neural network mapping from the system attributes to the performance metrics is not guaranteed, and even if the mapping does exist, a large amount of data is required to create a regression model, making regression techniques less suitable for BMDS analysis.
By employing Nested Neural Networks (NNNs), intermediate data can be associated with an ambiguous output, which can allow a regression model to be made. The addition of intermediate data incorporates more knowledge of the design space into the analysis. Nested neural networks divide the design space to form a piece-wise continuous function, which allows the user to incorporate system knowledge into the surrogate modeling process while reducing the size of the data set required to form the regression model.
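A minimal sketch of the nesting idea, with hypothetical data and intermediate quantities: a first network maps system attributes to intermediates, and a second maps those intermediates to the (discontinuous) performance metric, so the ambiguous end-to-end mapping is split into two better-behaved ones.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
attrs = rng.uniform(size=(500, 4))                            # system characteristics
inter = np.column_stack([attrs @ np.array([1., -1., 0., 0.]),  # hypothetical intermediates,
                         attrs @ np.array([0., 0., 1., 1.])])  # e.g. geometry, closing speed
metric = (inter[:, 0] > 0).astype(float)                      # discontinuous performance metric

inner = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000).fit(attrs, inter)
outer = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000).fit(inter, metric)

pred = outer.predict(inner.predict(attrs))                    # nested evaluation: attrs -> metric
```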
This thesis defines nested neural networks along with methods and techniques for using NNNs to mitigate the effects of discontinuities and non-unique solutions. To show the benefit of the approach, these techniques are applied to a BMDS simulation. Case studies are performed to optimize the system configurations and assess robustness, analyses that could not be performed without the regression models.
|
318 |
Speed profile variation as a surrogate measure of road safety based on GPS-equipped vehicle data. Boonsiripant, Saroch, 06 April 2009.
The identification of roadway sections with a higher than expected number of crashes is usually based on long-term crash frequency data. In situations where historical crash data are limited or unavailable, surrogate safety measures based on characteristics such as road geometry, traffic volume, and speed variation are often considered. Most existing crash prediction models relate safety to speed variation at a specific point on the roadway. However, such point-specific explanatory variables do not capture the effect of speed consistency along the roadway. This study developed several measures based on the speed profiles along road segments to estimate the crash frequency on urban streets. To collect speed profile data, second-by-second speed data were obtained from more than 460 GPS-equipped vehicles participating in the Commute Atlanta Study over the 2004 calendar year. A series of speed data filters was developed to identify likely free-flow speed data. The quantified relationships between surrogate measures and crash frequency were developed using regression tree and generalized linear modeling (GLM) approaches. The results indicate that the safety characteristics of roadways are likely a function of roadway classification. Two crash prediction models with different sets of explanatory variables were developed for higher and lower classification roadways. The findings support the potential use of the profile-based measures to evaluate the safety of road networks as GPS-equipped vehicles become more prevalent.
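A hedged sketch of the GLM step: crash counts regressed on profile-based speed measures with a Poisson family, a standard form for crash frequency models (the thesis may use a different family, such as negative binomial, and the predictors below are illustrative, not its actual variable set).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(30, 60, n),      # mean free-flow speed on the segment (mph)
    rng.uniform(0, 10, n),       # profile-based speed-variation measure
    rng.uniform(0.1, 2.0, n),    # segment length (mi)
])
crashes = rng.poisson(np.exp(-3 + 0.03 * X[:, 0] + 0.1 * X[:, 1] + 0.5 * X[:, 2]))

glm = sm.GLM(crashes, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(glm.summary())
```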
|
319 |
Enhanced classification approach with semi-supervised learning for reliability-based system design. Patel, Jiten, 02 July 2012.
Traditionally, design engineers have used the Factor of Safety method to ensure that designs do not fail in the field. Access to advanced computational tools and resources has made this process obsolete, and new methods for introducing higher levels of reliability into engineering systems are currently being investigated. However, even though substantial computational resources are available, the resources required by reliability analysis procedures leave much to be desired. Furthermore, regression-based surrogate modeling techniques fail when failure mechanisms introduce discontinuity into the design space, as when the design is required to perform under severe externalities. Hence, in this research we propose efficient Semi-Supervised Learning based surrogate modeling techniques that enable accurate estimation of a system's response, even under discontinuity. These methods combine the available labeled and unlabeled datasets and provide better models than using labeled data alone. Labeled data are expensive to obtain, since the responses have to be evaluated, whereas unlabeled data are available in plenty during reliability estimation, since the PDF information of the uncertain variables is assumed to be known. This superior performance is gained by combining the efficiency of Probabilistic Neural Networks (PNN) for classification with the Expectation-Maximization (EM) algorithm, which treats the unlabeled data as labeled data with hidden labels.
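The sketch below illustrates the flavor of this approach with a hand-rolled Parzen-window PNN and an EM-style loop that treats unlabeled points as carrying hidden labels; the data, kernel width, and update rule are simplifications, not the dissertation's algorithm.

```python
import numpy as np

def pnn_scores(X, exemplars, sigma=0.3):
    """Parzen-window (PNN) class likelihood of each row of X given class exemplars."""
    d2 = ((X[:, None, :] - exemplars[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)

rng = np.random.default_rng(0)
X_safe = rng.normal(0.0, 1.0, (20, 2))        # labeled safe designs (evaluated)
X_fail = rng.normal(3.0, 1.0, (20, 2))        # labeled failed designs (evaluated)
X_unl = rng.normal(1.5, 2.0, (200, 2))        # unlabeled samples drawn from the input PDF

w = np.full(200, 0.5)                         # P(fail | x) for each unlabeled point
for _ in range(10):
    # M-step-like augmentation: kernels include unlabeled points under current labels
    s_safe = pnn_scores(X_unl, np.vstack([X_safe, X_unl[w < 0.5]]))
    s_fail = pnn_scores(X_unl, np.vstack([X_fail, X_unl[w >= 0.5]]))
    w = s_fail / (s_safe + s_fail + 1e-12)    # E-step: updated posteriors

labels = (w >= 0.5).astype(int)               # final classification of the unlabeled set
print(labels.sum(), "of 200 classified as failed")
```

Treating the plentiful unlabeled samples as hidden-label data is what lets the classifier refine the failure boundary without additional expensive response evaluations, matching the motivation above.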
|
320 |
Děti v náhradní rodinné péči homosexuálních osob / Children in foster care of homosexual persons. FIALOVÁ, Nikola, January 2018.
Although the adoption of children by homosexual individuals is now legalized in the Czech Republic, the main focus of the chapters in this thesis is on the adoptive children of registered homosexual couples in the country. Among its other goals, the thesis approaches this topic from the point of view of ethics and morality as values of society. It also opens a discussion about the fact that this opportunity is given to individuals but not to couples who have officially validated their relationship, which suggests a reluctance to openly acknowledge their orientation. The thesis draws not only on secondary and philosophical literature, but is also supported by the ongoing public discourse in the media and online editorials, which are likewise important to consider when addressing this situation.
|