61

50-Year Catalogs of Uranus Trajectory Options with a New Python-Based Rapid Design Tool

Alec J Mudek (13129083) 22 July 2022
Ballistic and chemical trajectory options to Uranus are investigated for launch dates spanning 50 years. Trajectory solutions are found using STOUR, a patched-conic propagator with an analytical ephemeris model. STOUR is heritage software developed by JPL and Purdue, written in FORTRAN. A total of 89 distinct gravity-assist paths to Uranus are considered, most of which allow for a deep space maneuver (DSM) at some point along the path. For each launch year, the most desirable trajectory is identified and cataloged based on time of flight (up to 15 years), total $\Delta$V cost (DSM and capture maneuver), arrival $V_\infty$, and delivered payload. The Falcon Heavy (Recoverable), Vulcan VC6, Falcon Heavy (Expendable), and SLS Block 1B are considered to provide a range of low- to high-performance launch vehicle capabilities. A rough approximation of Starship's performance capabilities is also computed and applied to select years of launch dates. A flagship mission that delivers both a probe and an orbiter at Uranus is considered, approximated as a trajectory capable of delivering 2000 kg. Jupiter is unavailable as a gravity-assist body until the end of the 2020s, but alternative gravity-assist paths exist, providing feasible trajectories even in years when Jupiter is not available. A rare Saturn-Uranus alignment in the late 2020s is identified which provides some such trajectory opportunities. A probe-and-orbiter mission to Uranus is feasible for a Vulcan VC6 with approximately 13-year flight times and for a Recoverable Falcon Heavy with approximately 14.5-year flight times. An Expendable Falcon Heavy reduces the time of flight to around 12.5 years and opens up `0E0U' as a gravity-assist path, while the SLS Block 1B typically offers trajectories with 10- to 11-year flight times and opens up more direct `JU' and `U' solutions. With the SLS, flight times as low as 7.5 years are possible.

A new, rapid grid search tool called GREMLINS is also outlined. This new software is capable of solving the same problems as STOUR, but improves on it in three crucial ways: an improved user experience, more maneuver capabilities, and a more easily maintained and modified code base. GREMLINS takes a different approach to the broad search problem, forgoing $C_3$ matching in favor of using maneuvers to patch together tables of pre-computed Lambert arcs. This approach allows for vectorized computations across data frames of Lambert solutions, which can be computed much more efficiently than the for-loop style approach of past tools. Through the use of SQL tables and a two-step trajectory solving approach, this tool is able to run very quickly while still being able to handle any amount of data required for a broad search. Every line of code in GREMLINS is written in Python in an effort to make it more approachable and easier to develop for a wide community of users, as GREMLINS will be the only grid search tool available as free and open source software. Multiple example missions and trajectory searches are explored to verify the output from GREMLINS and to compare its performance against STOUR. Despite using a slower coding language, GREMLINS is capable of performing the same trajectory searches in approximately 1/5 the runtime of STOUR, a FORTRAN-coded tool, thanks to its vectorized computations.
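To make the vectorized patching approach concrete, the following is a minimal Python sketch, not GREMLINS itself: the column schema, epochs, velocity values, and the 0.5 km/s maneuver budget are all illustrative assumptions, and in practice the Lambert arcs would be precomputed and stored in SQL tables.

    import numpy as np
    import pandas as pd

    # Hypothetical schema: each row is a precomputed Lambert arc, with the
    # spacecraft velocity vector (km/s) at the shared flyby body.
    legs_in = pd.DataFrame({
        "flyby_epoch": [9500.0, 9500.0, 9600.0],  # days past J2000 (illustrative)
        "vx": [3.1, 2.8, 3.4], "vy": [-1.2, -0.9, -1.5], "vz": [0.1, 0.0, 0.2],
    })
    legs_out = pd.DataFrame({
        "flyby_epoch": [9500.0, 9600.0],
        "vx": [3.0, 3.3], "vy": [-1.1, -1.6], "vz": [0.1, 0.1],
    })

    # Patch incoming and outgoing arcs by joining on the shared flyby epoch,
    # then price the patching maneuver for every pairing at once (vectorized)
    # instead of looping over candidate trajectories one at a time.
    pairs = legs_in.merge(legs_out, on="flyby_epoch", suffixes=("_in", "_out"))
    pairs["maneuver_dv_km_s"] = np.linalg.norm(
        pairs[["vx_out", "vy_out", "vz_out"]].to_numpy()
        - pairs[["vx_in", "vy_in", "vz_in"]].to_numpy(),
        axis=1,
    )

    # Keep only patched trajectories whose maneuver fits an assumed budget.
    print(pairs[pairs["maneuver_dv_km_s"] <= 0.5])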
62

Kegelsnedes as integrerende faktor in skoolwiskunde / Conic sections as an integrating factor in school mathematics

Stols, Gert Hendrikus 30 November 2003
Text in Afrikaans / Real empowerment of school learners requires preparing them for the age of technology. This empowerment can be achieved by developing their higher-order thinking skills. This is clearly the intention of the proposed South African FET National Curriculum Statements Grades 10 to 12 (Schools). This research shows that one method of developing higher-order thinking skills is to adopt an integrated curriculum approach. The research is based on the assumption that an integrated curriculum approach will produce learners with a more integrated knowledge structure, which will help them to solve problems requiring higher-order thinking skills. These assumptions are realistic because the empirical results of several comparative research studies show that an integrated curriculum helps to improve learners' ability to use higher-order thinking skills in solving non-routine problems. The curriculum mentions four kinds of integration, namely integration across different subject areas, integration of mathematics with the real world, integration of algebraic and geometric concepts, and the integration and use of dynamic geometry software in the learning and teaching of geometry. This research shows that, from a psychological, pedagogical, mathematical and historical perspective, the theme of conic sections can be used as an integrating factor in the newly proposed FET mathematics curriculum. Conics are a powerful tool for making the proposed curriculum more integrated. Conics can be used as an integrating factor in the FET band by means of mathematical exploration, visualisation, relating learners' experiences of various parts of mathematics to one another, relating mathematics to the rest of the learners' experiences, and applying conics to solve real-life problems. / Mathematical Sciences / D.Phil. (Wiskundeonderwys)
64

Block-decomposition and accelerated gradient methods for large-scale convex optimization

Ortiz Diaz, Camilo 08 June 2015
In this thesis, we develop block-decomposition (BD) methods and variants of accelerated gradient methods for large-scale conic programming and convex optimization, respectively. The BD methods, discussed in the first two parts of this thesis, are inexact versions of proximal-point methods applied to two-block-structured inclusion problems. The adaptive accelerated methods, presented in the last part of this thesis, can be viewed as new variants of Nesterov's optimal method. In an effort to improve their practical performance, these methods incorporate important speed-up refinements motivated by theoretical iteration-complexity bounds and our observations from extensive numerical experiments. We provide several benchmarks on various important problem classes to demonstrate the efficiency of the proposed methods compared to the most competitive ones proposed earlier in the literature.

In the first part of this thesis, we consider exact BD first-order methods for solving conic semidefinite programming (SDP) problems and the more general problem of minimizing the sum of a convex differentiable function with Lipschitz continuous gradient and two other proper closed convex (possibly nonsmooth) functions. More specifically, these problems are reformulated as two-block monotone inclusion problems, and exact BD methods, namely ones that solve both proximal subproblems exactly, are used to solve them. In addition to being able to solve standard form conic SDP problems, the latter approach is also able to directly solve specially structured non-standard form conic programming problems without the need to add additional variables and/or constraints to bring them into standard form. Several ingredients are introduced to speed up the BD methods in their pure form, such as adaptive (aggressive) choices of stepsizes for performing the extragradient step, and dynamic updates of scaled inner products to balance the blocks. Finally, computational results on several classes of SDPs are presented, showing that the exact BD methods outperform the three most competitive codes for solving large-scale conic semidefinite programs.

In the second part of this thesis, we present an inexact BD first-order method for solving standard form conic SDP problems which avoids computing exact projections onto the manifold defined by the affine constraints and, as a result, is able to handle extra-large-scale SDP instances. In this BD method, while the proximal subproblem corresponding to the first block is solved exactly, the one corresponding to the second block is solved inexactly in order to avoid finding the exact solution of a linear system corresponding to the manifolds consisting of both the primal and dual affine feasibility constraints. Our implementation uses the conjugate gradient method applied to a reduced positive definite dual linear system to obtain inexact solutions of the latter augmented primal-dual linear system. In addition, the inexact BD method incorporates a new dynamic scaling scheme that uses two scaling factors to balance three inclusions comprising the optimality conditions of the conic SDP. Finally, we present computational results showing the efficiency of our method for solving various extra-large SDP instances, several of which cannot be solved by other existing methods, including some with at least two million constraints and/or fifty million non-zero coefficients in the affine constraints.
In the last part of this thesis, we consider an adaptive accelerated gradient method for a general class of convex optimization problems. More specifically, we present a new accelerated variant of Nesterov's optimal method in which certain acceleration parameters are adaptively (and aggressively) chosen so as to: preserve the theoretical iteration-complexity of the original method; and substantially improve its practical performance in comparison to the other existing variants. Computational results are presented to demonstrate that the proposed adaptive accelerated method performs quite well compared to other variants proposed earlier in the literature.
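For reference, a minimal sketch of the baseline Nesterov iteration that such adaptive variants start from, assuming a smooth convex objective with gradient Lipschitz constant L (the adaptive, aggressive parameter choices described above are not reproduced here):

    import numpy as np

    def nesterov_agd(grad, x0, lipschitz, iters=500):
        """Baseline Nesterov accelerated gradient method for smooth convex f."""
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(iters):
            x_next = y - grad(y) / lipschitz                   # gradient step at the extrapolated point
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # standard momentum sequence
            y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # extrapolation (momentum) step
            x, t = x_next, t_next
        return x

    # Example: minimize the quadratic 0.5*x'Ax - b'x, whose gradient is Ax - b.
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    b = np.array([1.0, -2.0])
    L = np.linalg.eigvalsh(A).max()  # Lipschitz constant of the gradient
    x = nesterov_agd(lambda x: A @ x - b, np.zeros(2), L)
    print(x, np.linalg.solve(A, b))  # the two should agree closely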
65

Risk optimization with p-order conic constraints

Soberanis, Policarpio Antonio 01 December 2009
My dissertation considers the solution of linear programming problems with p-order conic constraints, which arise in a class of stochastic optimization models with risk objectives or constraints involving higher moments of loss distributions. The proposed general approach is based on constructing polyhedral approximations of p-order cones, thereby approximating the nonlinear convex p-order conic programming problems with linear programming models. It is shown that the resulting LP problems possess a special structure that makes them amenable to efficient decomposition techniques. The developed algorithms are tested on a portfolio optimization problem with higher-moment coherent risk measures, which reduces to a p-order conic programming problem. The conducted case studies on real financial data demonstrate that the proposed computational techniques compare favorably against a number of benchmark methods, including second-order conic programming methods.
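As a sketch of the polyhedral approximation idea (not the thesis's decomposition algorithm), Hoelder's inequality yields valid linear cuts for a p-order cone: any u with ||u||_q = 1, where 1/p + 1/q = 1, gives u.x <= 1 for all ||x||_p <= 1. A minimal planar example with assumed data:

    import numpy as np
    from scipy.optimize import linprog

    # Cuts for the set {x : ||x||_p <= 1}: each direction, normalized to
    # dual norm ||u||_q = 1, gives the valid inequality u.x <= 1.
    p = 3.0
    q = p / (p - 1.0)
    angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])
    cuts = dirs / ((np.abs(dirs) ** q).sum(axis=1, keepdims=True)) ** (1.0 / q)

    # Minimize c.x over the outer polyhedral approximation of the p-ball;
    # the optimum lies slightly outside the true ball (||x||_p a bit above 1).
    c = np.array([-1.0, -0.5])
    res = linprog(c, A_ub=cuts, b_ub=np.ones(len(cuts)), bounds=[(None, None)] * 2)
    print(res.x, np.linalg.norm(res.x, ord=p))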
66

A Computational Approach To Nonparametric Regression: Bootstrapping Cmars Method

Yazici, Ceyda 01 September 2011
Bootstrapping is a resampling technique which treats the original data set as a population and draws samples from it with replacement. This technique is widely used, especially in mathematically intractable problems. In this study, it is used to obtain the empirical distributions of the parameters, in order to determine whether they are statistically significant, in a special case of nonparametric regression: Conic Multivariate Adaptive Regression Splines (CMARS). Here, the CMARS method, which uses conic quadratic optimization, is a modified version of a well-known nonparametric regression model, Multivariate Adaptive Regression Splines (MARS). Although it performs better with respect to several criteria, the CMARS model is more complex than that of MARS. To overcome this problem, and to improve the CMARS performance further, three different bootstrapping regression methods, namely Random-X, Fixed-X and Wild Bootstrap, are applied to four data sets of different sizes and scales. The performances of the models are then compared using various criteria, including accuracy, precision, complexity, stability, robustness and efficiency. Random-X yields more precise, accurate and less complex models, particularly for medium-size, medium-scale data, even though it is the least efficient method.
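A minimal sketch of the Random-X (pairs) bootstrap used to obtain empirical parameter distributions, with ordinary least squares standing in for the CMARS fit and entirely synthetic data:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data standing in for one of the study's data sets.
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([2.0, 0.5]) + rng.normal(size=n)

    def fit(X, y):
        # Ordinary least squares as a stand-in for the CMARS fit.
        return np.linalg.lstsq(X, y, rcond=None)[0]

    # Random-X bootstrap: resample (x_i, y_i) pairs with replacement and
    # refit, giving an empirical distribution for each parameter.
    B = 1000
    boot = np.empty((B, X.shape[1]))
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        boot[b] = fit(X[idx], y[idx])

    # A parameter is judged significant when its 95% percentile interval
    # excludes zero.
    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
    print(np.column_stack([lo, hi]), (lo > 0) | (hi < 0))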
67

Novel Algorithms for Protein Structure Determination from Sparse NMR Data

Tripathy, Chittaranjan January 2012
Nuclear magnetic resonance (NMR) spectroscopy is an established technique for macromolecular structure determination at atomic resolution. However, the majority of current structure determination approaches require a large set of experiments and use a large amount of data to elucidate the three-dimensional protein structure. While current structure determination protocols may perform well in data-rich settings, protein structure determination remains a difficult task in a sparse-data setting. Sparse data arise in high-throughput settings, for larger proteins, membrane proteins, and symmetric protein complexes, thereby requiring novel algorithms that can compute structures with provable guarantees on solution quality and running time.

In this dissertation project we made an effort to address the key computational bottlenecks in NMR structural biology. Specifically, we improved and extended the recently-developed techniques from our laboratory, and developed novel algorithms and computational tools that enable protein structure determination from sparse NMR data. An underlying goal of our project was to minimize the number of NMR experiments, hence the amount of time and cost to perform them, and still be able to determine protein structures accurately from a limited set of experimental data. The algorithms developed in this dissertation use the global orientational restraints from residual dipolar coupling (RDC) and residual chemical shift anisotropy (RCSA) data from solution NMR, in addition to a sparse set of distance restraints from nuclear Overhauser effect (NOE) and paramagnetic relaxation enhancement (PRE) measurements. We have used tools from algebraic geometry to derive analytic expressions for the bond vector and peptide plane orientations, by exploiting the mathematical interplay between RDC- or RCSA-derived sphero-conics and protein kinematics; in addition to improving our understanding of the geometry of the restraints from these experimental data, these expressions are used by our algorithms to compute protein structures provably accurately. Our algorithms, which determine the protein backbone global fold from sparse NMR data, were used in the high-resolution structure determination protocol developed in our laboratory to solve the solution NMR structure of the FF Domain 2 of human transcription elongation factor CA150 (RNA polymerase II C-terminal domain interacting protein), which has been deposited in the Protein Data Bank. We have also developed a novel sparse-data, RDC-based algorithm to compute ensembles of protein loop conformations in the presence of a moderate level of dynamics in the loop regions. All the algorithms developed in this dissertation have been tested on experimental NMR data. The promising results obtained by our algorithms suggest that they can be successfully applied to determine high-quality protein backbone structures from a limited amount of experimental NMR data, and hence will be useful in automated NOE assignment and high-resolution protein backbone structure determination from sparse NMR data. The algorithms and software tools developed during this project are made available as free open source to the scientific community. / Dissertation
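For context, the orientational restraint at the heart of these algorithms is the standard RDC equation, whose level curves on the unit sphere are the sphero-conics mentioned above. A minimal sketch, with assumed alignment-tensor parameters Da and R and hypothetical bond vectors:

    import numpy as np

    def rdc(bond_vectors, Da, R):
        """Residual dipolar coupling for unit bond vectors in the principal
        order frame: D = Da*(3cos^2(theta) - 1 + 1.5*R*sin^2(theta)*cos(2phi)).
        The level sets of D on the unit sphere are sphero-conic curves."""
        v = bond_vectors / np.linalg.norm(bond_vectors, axis=1, keepdims=True)
        cos_t = v[:, 2]                     # polar angle w.r.t. the principal z axis
        sin2_t = 1.0 - cos_t ** 2
        phi = np.arctan2(v[:, 1], v[:, 0])  # azimuth in the principal frame
        return Da * (3.0 * cos_t ** 2 - 1.0 + 1.5 * R * sin2_t * np.cos(2.0 * phi))

    # Example: RDCs (Hz) for hypothetical N-H bond orientations and an assumed
    # alignment tensor with magnitude Da = 10 Hz and rhombicity R = 0.3.
    bonds = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
    print(rdc(bonds, Da=10.0, R=0.3))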
68

Návrh nástroje pro tváření kuželových prolisů v austenitickém nerezovém plechu / A design of the tool for forming conic presses in austenitic stainless steel sheet

Šlosr, Michal January 2012
The thesis deals with the design of a tool that creates a conic press in austenitic stainless steel 17 240 (X5CrNi18-10). The first part describes the possible production technologies, with a focus on non-rigid tools. The Guerin process is chosen for the realization: the rigid tool is replaced by a non-rigid one (rubber/polyurethane). The final part evaluates the influence of the non-rigid tool, lubricants and venting.
69

Electro-optical Characterization of Bistable Smectic A Liquid Crystal Displays

Buyuktanir, Ebru Aylin 11 April 2008
No description available.
70

Detection and identification of elliptical structure arrangements in images: theory and algorithms

Patraucean, Viorica 19 January 2012
This thesis deals with different aspects of the detection, fitting, and identification of elliptical features in digital images. We place geometric feature detection in the a contrario statistical framework to obtain a combined, parameter-free line-segment and circular/elliptical-arc detector that controls the number of false detections. To improve the accuracy of the detected features, especially in cases of occluded circles/ellipses, a simple closed-form technique for conic fitting is introduced, which efficiently merges the algebraic distance with the gradient orientation. Identifying a configuration of coplanar circles in images through a discriminant signature usually requires the Euclidean reconstruction of the plane containing the circles. We propose an efficient signature computation method that bypasses the Euclidean reconstruction; it relies exclusively on invariant properties of the projective plane, and is thus itself invariant under perspective.
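A minimal sketch of the a contrario acceptance test underlying such detectors: a candidate is kept when its number of false alarms (the number of tests times a binomial tail probability under the background model) falls below a threshold. All numbers below are illustrative:

    import math

    def nfa(n_tests, k, n, p):
        """Number of False Alarms of an a contrario test: the number of tested
        candidates times the binomial tail probability that at least k of n
        points agree with the candidate by chance (probability p each)."""
        tail = sum(math.comb(n, j) * p ** j * (1.0 - p) ** (n - j)
                   for j in range(k, n + 1))
        return n_tests * tail

    # Example: an arc candidate supported by 40 of 50 pixels whose gradient
    # orientation agrees within a tolerance covering p = 1/8 of all angles;
    # the candidate is accepted when NFA <= epsilon (commonly epsilon = 1).
    print(nfa(n_tests=1e6, k=40, n=50, p=0.125))  # far below 1: meaningful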
