931

Efficient Low-Speed Flight in a Wind Field

Feldman, Michael A. 24 July 1996 (has links)
A new software tool was needed for flight planning of a high-altitude, low-speed unmanned aerial vehicle which would be flying in winds close to the actual airspeed of the vehicle. An energy-modeled NLP formulation was used to obtain results for a variety of missions and wind profiles. The energy constraint derived included terms due to the wind field, and the performance index was a weighted combination of the amount of fuel used and the final time. With no emphasis on time and with no winds, the vehicle was found to fly at the maximum lift-to-drag velocity, V<sub>md</sub>. When flying in tail winds the velocity was less than V<sub>md</sub>, while flying in head winds the velocity was higher than V<sub>md</sub>. A family of solutions was found with varying times of flight and varying amounts of fuel consumed, which will aid the operator in choosing a flight plan depending on a desired landing time. At certain parts of the flight, the turning terms in the energy constraint equation were found to be significant. An analysis of a simpler vertical-plane cruise optimal control problem was used to explain some of the characteristics of the vertical-plane NLP results. / Master of Science
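
To illustrate the head-wind/tail-wind result described above, here is a minimal sketch that grid-searches the best-range airspeed for a notional propeller-driven UAV whose fuel flow is assumed proportional to required power, so that with no wind the optimum falls at the maximum lift-to-drag speed V<sub>md</sub>. The density, wing area, weight, drag-polar constants, and wind speeds are hypothetical and are not taken from the thesis.

```python
import numpy as np

# Hypothetical UAV and atmosphere parameters (illustrative only).
rho, S, W = 0.5, 30.0, 9000.0        # air density [kg/m^3], wing area [m^2], weight [N]
CD0, k = 0.02, 0.04                  # parabolic drag polar: CD = CD0 + k*CL^2

def drag(V):
    q = 0.5 * rho * V**2
    CL = W / (q * S)
    return q * S * (CD0 + k * CL**2)

def fuel_per_ground_distance(V, wind):
    # Fuel flow assumed proportional to required power D*V (propeller model);
    # ground speed is airspeed plus the tailwind component `wind`.
    return drag(V) * V / (V + wind)

V = np.linspace(20.0, 80.0, 2000)                 # candidate airspeeds [m/s]
V_md = V[np.argmin(drag(V))]                      # maximum lift-to-drag airspeed
for wind in (-15.0, 0.0, 15.0):                   # headwind, calm, tailwind [m/s]
    V_opt = V[np.argmin(fuel_per_ground_distance(V, wind))]
    print(f"wind {wind:+5.1f} m/s: best-range airspeed {V_opt:5.1f} m/s (V_md = {V_md:.1f} m/s)")
```
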
932

An Interpolation-Based Approach to Optimal H<sub>∞</sub> Model Reduction

Flagg, Garret Michael 01 June 2009 (has links)
A model reduction technique that is optimal in the H<sub>∞</sub>-norm has long been pursued due to its theoretical and practical importance. We consider the optimal H<sub>∞</sub> model reduction problem broadly from an interpolation-based approach, and give a method for finding the approximation to a state-space symmetric dynamical system which is optimal over a family of interpolants to the full order system. This family of interpolants has a simple parameterization that simplifies a direct search for the optimal interpolant. Several numerical examples show that the interpolation points satisfying the Meier-Luenberger conditions for H₂-optimal approximations are a good starting point for minimizing the H<sub>∞</sub>-norm of the approximation error. Interpolation points satisfying the Meier-Luenberger conditions can be computed iteratively using the IRKA algorithm [12]. We consider the special case of state-space symmetric systems and show that simple sufficient conditions can be derived for minimizing the approximation error when starting from the interpolation points found by the IRKA algorithm. We then explore the relationship between potential theory in the complex plane and the optimal H<sub>∞</sub>-norm interpolation points through several numerical experiments. The results of these experiments suggest that the optimal H<sub>∞</sub> approximation of order r yields an error system for which significant pole-zero cancellation occurs, effectively reducing an order n+r error system to an order 2r+1 system. These observations lead to a heuristic method for choosing interpolation points that involves solving a rational Zolotarev problem over a discrete set of points in the complex plane. / Master of Science
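
The IRKA algorithm [12] referenced above computes interpolation points satisfying the Meier-Luenberger conditions by a fixed-point iteration on the mirrored poles of the reduced model. The sketch below assumes a state-space symmetric SISO system (A = A^T, c = b^T), so shifts and reduced matrices stay real; it is an illustration of the general iteration, not the author's implementation, and the test system is randomly generated.

```python
import numpy as np

def irka_symmetric_siso(A, b, r, tol=1e-8, max_iter=200):
    """Minimal IRKA-style sketch for a state-space symmetric SISO system
    (A = A.T, c = b.T): iterate the interpolation points until they equal the
    mirrored poles of the reduced model (the Meier-Luenberger conditions)."""
    n = A.shape[0]
    sigma = np.logspace(-1, 1, r)                  # hypothetical initial shifts
    for _ in range(max_iter):
        # Rational Krylov basis of (sigma_i I - A)^{-1} b, orthonormalized.
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in sigma])
        V, _ = np.linalg.qr(V)
        Ar = V.T @ A @ V                           # Galerkin projection (W = V by symmetry)
        new_sigma = np.sort(-np.linalg.eigvalsh(Ar))  # reflect reduced poles across the imaginary axis
        if np.max(np.abs(new_sigma - np.sort(sigma))) < tol:
            sigma = new_sigma
            break
        sigma = new_sigma
    br = V.T @ b
    return Ar, br, sigma                           # reduced model (Ar, br, br.T) and final shifts

# Toy usage with a random stable symmetric system (illustrative only).
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = -(M @ M.T) - 0.1 * np.eye(50)                  # symmetric negative definite, hence stable
b = rng.standard_normal(50)
Ar, br, sigma = irka_symmetric_siso(A, b, r=6)
print("converged interpolation points:", sigma)
```
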
933

Mission-driven Sensor Network Design for Space Domain Awareness

Harris, Cameron Douglas 09 December 2024 (has links)
This research presents a novel framework for optimizing sensor networks to enhance Space Domain Awareness in the face of a burgeoning resident space object population. By employing advanced metaheuristic optimization techniques and high-fidelity modeling and simulation, this research investigates the intricate interplay between sensor characteristics, network topology, and state estimation performance. The research aims to develop actionable recommendations for optimizing sensor network design, considering factors such as viewing geometry, sensor phenomenology, and background noise. Through rigorous simulations and analysis, this work seeks to contribute significantly to the advancement of Space Domain Awareness. A key product of this research is the development of a novel lattice-based genetic algorithm tailored for constrained metaheuristic optimization that converges in 15% fewer generations than traditional methods. This algorithm demonstrates its effectiveness in producing practical sensor network designs that can enhance space object tracking and surveillance capabilities. The results show network designs that fill current coverage gaps over the Atlantic and Pacific oceans, remain consistent with geographical and geopolitical boundaries, and exploit regions with favorable environmental conditions. The outcome is a set of actionable solutions that triple observation capacity and reduce catalog observation gap times by up to 50%. / Doctor of Philosophy / Space Domain Awareness is critical for ensuring the safety and security of space operations. As the number of objects in space continues to grow, strategic sensor network design is essential for effective tracking and surveillance. This research presents a novel approach to designing sensor networks that maximizes their effectiveness towards specified mission outcomes. Leveraging advanced computer modeling and optimization techniques, a method is developed that considers factors like sensor location, capabilities, and the environment. The research has led to improved sensor network designs, enhanced coverage of space, and reduced gaps in observations of space objects. Overall, this research provides valuable insights and practical solutions for improving SDA capabilities. These results can help ensure the safety and security of space operations for the future space environment.
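
As a rough illustration of how a genetic algorithm can drive sensor-site selection (the dissertation's lattice-based, constrained algorithm is considerably more elaborate), the sketch below evolves a choice of K sensor sites from a candidate lattice to maximize the fraction of target points within a fixed sensing radius. All sites, radii, and population parameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical problem data: choose K sensor sites from a candidate lattice
# so that as many target points as possible fall within the sensing radius.
candidates = rng.uniform(0.0, 100.0, size=(200, 2))   # candidate site lattice (x, y)
targets    = rng.uniform(0.0, 100.0, size=(500, 2))   # points we would like to observe
RADIUS, K, POP, GENS = 15.0, 10, 60, 150              # made-up sensing radius and GA settings

def coverage(sites_idx):
    sites = candidates[sites_idx]
    d = np.linalg.norm(targets[:, None, :] - sites[None, :, :], axis=2)
    return np.mean(d.min(axis=1) <= RADIUS)           # fraction of targets seen by >= 1 site

def evolve():
    pop = [rng.choice(len(candidates), K, replace=False) for _ in range(POP)]
    for _ in range(GENS):
        fitness = np.array([coverage(ind) for ind in pop])
        keep = np.argsort(-fitness)[: POP // 2]        # truncation selection
        pop = [pop[i] for i in keep]
        children = []
        while len(pop) + len(children) < POP:
            i, j = rng.choice(len(pop), 2, replace=False)
            genes = np.unique(np.concatenate([pop[i], pop[j]]))
            child = rng.choice(genes, K, replace=False)  # crossover: sample from parents' union
            if rng.random() < 0.3:                       # mutation: swap in a new, unused site
                new_gene = rng.integers(len(candidates))
                if new_gene not in child:
                    child[rng.integers(K)] = new_gene
            children.append(child)
        pop += children
    best = max(pop, key=coverage)
    return best, coverage(best)

best_sites, frac = evolve()
print(f"best design covers {frac:.1%} of the targets")
```
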
934

Discrete and Continuous Nonconvex Optimization: Decision Trees, Valid Inequalities, and Reduced Basis Techniques

Dalkiran, Evrim 26 April 2011 (has links)
This dissertation addresses the modeling and analysis of a strategic risk management problem via a novel decision tree optimization approach, as well as development of enhanced Reformulation-Linearization Technique (RLT)-based linear programming (LP) relaxations for solving nonconvex polynomial programming problems, through the generation of valid inequalities and reduced representations, along with the design and implementation of efficient algorithms. We first conduct a quantitative analysis for a strategic risk management problem that involves allocating certain available failure-mitigating and consequence-alleviating resources to reduce the failure probabilities of system safety components and subsequent losses, respectively, together with selecting optimal strategic decision alternatives, in order to minimize the risk or expected loss in the event of a hazardous occurrence. Using a novel decision tree optimization approach to represent the cascading sequences of probabilistic events as controlled by key decisions and investment alternatives, the problem is modeled as a nonconvex mixed-integer 0-1 factorable program. We develop a specialized branch-and-bound algorithm in which lower bounds are computed via tight linear relaxations of the original problem that are constructed by utilizing a polyhedral outer-approximation mechanism in concert with two alternative linearization schemes having different levels of tightness and complexity. We also suggest three alternative branching schemes, each of which is proven to guarantee convergence to a global optimum for the underlying problem. Extensive computational results and sensitivity analyses are presented to provide insights and to demonstrate the efficacy of the proposed algorithm. In particular, our methodology outperformed the commercial software BARON (Version 8.1.5), yielding a more robust performance along with an 89.9% savings in effort on average. Next, we enhance RLT-based LP relaxations for polynomial programming problems by developing two classes of valid inequalities: v-semidefinite cuts and bound-grid-factor constraints. The first of these uses concepts derived from semidefinite programming. Given an RLT relaxation, we impose positive semidefiniteness on suitable dyadic variable-product matrices, and correspondingly derive implied semidefinite cuts. In the case of polynomial programs, there are several possible variants for selecting such dyadic variable-product matrices for imposing positive semidefiniteness restrictions in order to derive implied valid inequalities, which leads to a new class of cutting planes that we call v-semidefinite cuts. We explore various strategies for generating such cuts within the context of an RLT-based branch-and-cut scheme, and exhibit their relative effectiveness towards tightening the RLT relaxations and solving the underlying polynomial programming problems, using a test-bed of randomly generated instances as well as standard problems from the literature. Our results demonstrate that these cutting planes achieve a significant tightening of the lower bound in contrast with using RLT as a stand-alone approach, thereby enabling an appreciable reduction in the overall computational effort, even in comparison with the commercial software BARON. Empirically, our proposed cut-enhanced algorithm reduced the computational effort required by the latter two approaches by 44% and 77%, respectively, over a test-bed of 60 polynomial programming problems. 
As a second cutting plane strategy, we introduce a new class of bound-grid-factor constraints that can be judiciously used to augment the basic RLT relaxations in order to improve the quality of lower bounds and enhance the performance of global branch-and-bound algorithms. Certain theoretical properties are established that shed light on the effect of these valid inequalities in driving the discrepancies between RLT variables and their associated nonlinear products to zero. To preserve computational expediency while promoting efficiency, we propose certain concurrent and sequential cut generation routines and various grid-factor selection rules. The results indicate a significant tightening of lower bounds, which yields an overall reduction in computational effort of 21% for solving a test-bed of 15 challenging polynomial programming problems to global optimality in comparison with the basic RLT procedure, and over a 100-fold speed-up in comparison with the commercial software BARON. Finally, we explore equivalent, reduced size RLT-based formulations for polynomial programming problems. Utilizing a basis partitioning scheme for an embedded linear equality subsystem, we show that a strict subset of RLT defining equalities imply the remaining ones. Applying this result, we derive significantly reduced RLT representations and develop certain coherent associated branching rules that assure convergence to a global optimum, along with static as well as dynamic basis selection strategies to implement the proposed procedure. In addition, we enhance the RLT relaxations with v-semidefinite cuts, which are empirically shown to further improve the relative performance of the reduced RLT method over the usual RLT approach. Computational results presented using a test-bed of 10 challenging polynomial programs to evaluate the different reduction strategies demonstrate that our superlative proposed approach achieved more than a four-fold improvement in computational effort in comparison with both the commercial software BARON and a recently developed open-source code, Couenne, for solving nonconvex mixed-integer nonlinear programming problems. Moreover, our approach robustly solved all the test cases to global optimality, whereas BARON and Couenne were jointly able to solve only a single instance to optimality within the set computational time limit, having an unresolved average optimality gap of 260% and 437%, respectively, for the other nine instances. This dissertation makes several broader contributions to the field of nonconvex optimization, including factorable, nonlinear mixed-integer programming problems. The proposed decision tree optimization framework can serve as a versatile management tool in the arenas of homeland security and health-care. Furthermore, we have advanced the frontier for tackling formidable nonconvex polynomial programming problems that arise in emerging fields such as signal processing, biomedical engineering, materials science, and risk management. An open-source software using the proposed reduced RLT representations, semidefinite cuts, bound-grid-factor constraints, and range reduction strategies, is currently under preparation. In addition, the different classes of challenging polynomial programming test problems that are utilized in the computational studies conducted in this dissertation have been made available for other researchers via the Web-page http://filebox.vt.edu/users/dalkiran/website/. 
It is our hope and belief that the modeling and methodological contributions made in this dissertation will serve society in a broader context through the myriad of widespread applications they support. / Ph. D.
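
As background for the RLT machinery used throughout the abstract, the basic level-1 bound-factor construction for a single bilinear term is shown below. The dissertation's bound-grid-factor constraints and v-semidefinite cuts build on products of this kind, but the worked example here is standard material rather than a statement of those new classes of cuts.

```latex
% Level-1 RLT (bound-factor) relaxation of a single bilinear term w = xy with
% x in [l_x, u_x] and y in [l_y, u_y]: multiply pairs of nonnegative bound
% factors and linearize by substituting the new variable w for every product xy.
\begin{align*}
(x - l_x)(y - l_y) \ge 0 &\;\Longrightarrow\; w \ge l_y x + l_x y - l_x l_y,\\
(u_x - x)(u_y - y) \ge 0 &\;\Longrightarrow\; w \ge u_y x + u_x y - u_x u_y,\\
(x - l_x)(u_y - y) \ge 0 &\;\Longrightarrow\; w \le u_y x + l_x y - l_x u_y,\\
(u_x - x)(y - l_y) \ge 0 &\;\Longrightarrow\; w \le l_y x + u_x y - u_x l_y.
\end{align*}
```
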
935

Size Optimization of Utility-Scale Solar PV System Considering Reliability Evaluation

Chen, Xiao 19 July 2016 (has links)
In this work, a size optimization approach for utility-scale solar photovoltaic (PV) systems is proposed. The purpose of the method is to determine the optimal solar energy generation capacity and optimal location by minimizing the total system cost subject to the system reliability requirements. Due to the stochastic characteristics of solar irradiation, the reliability performance of a power system with PV generation is quite different from one with only conventional generation. Generation adequacy of power systems containing solar energy is evaluated by reliability assessment, and the most widely used reliability index is the loss of load probability (LOLP). The value of LOLP depends on various factors such as the power output of the PV system, the outage rates of generating facilities, and the system load profile. To obtain the LOLP, the Monte Carlo method is applied to simulate the reliability performance of the solar-penetrated power system. The total system cost model consists of the system installation cost, the mitigation cost, and savings in fuel and operation costs. The mitigation cost is obtained from an N-1 contingency analysis. The cost function minimization is implemented in a Genetic Algorithm toolbox, which can search for the global optimum with relative computational simplicity. / Master of Science
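
A minimal sketch of the Monte Carlo LOLP estimation described above follows; the unit capacities, forced outage rates, load blocks, and the crude irradiance model are hypothetical placeholders rather than data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical system data (illustrative only).
unit_cap = np.array([100.0, 100.0, 80.0, 60.0])   # conventional unit capacities [MW]
unit_for = np.array([0.04, 0.04, 0.06, 0.08])     # forced outage rates
pv_cap   = 50.0                                   # installed PV capacity [MW]
load     = np.array([180.0, 220.0, 260.0, 230.0]) # simple four-block daily load profile [MW]
N        = 200_000                                # Monte Carlo samples

block  = rng.integers(len(load), size=N)          # random load block for each sample
demand = load[block]

# Conventional generation: each unit is independently available with probability 1 - FOR.
available = (rng.random((N, len(unit_cap))) > unit_for).astype(float)
conventional = available @ unit_cap

# PV output: crude stochastic irradiance, nonzero only in the assumed daytime block.
irradiance = np.clip(rng.normal(0.5, 0.25, N), 0.0, 1.0)
pv = pv_cap * irradiance * (block == 2)

lolp = np.mean(conventional + pv < demand)        # loss-of-load probability estimate
print(f"estimated LOLP = {lolp:.4f}")
```
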
936

Optimizing analysis pipelines for improved variant discovery

Highnam, Gareth Wei An 17 April 2014 (has links)
In modern genomics, all experiments begin data collection with sequencing and downstream alignment or assembly processing. As such, the development of reliable sequencing pipelines is hugely important as a foundation for any future analysis on that data. While much existing work has been done on enhancing the throughput and computational performance of such pipelines, the question of accuracy remains. The rift in knowledge between speed and accuracy can be attributed to the more conceptually complex nature of what constitutes a measurement of accuracy. Unlike simply parsing logs of memory usage and CPU hours, accuracy requires experimental validation. Subsets of accuracy are also created when assessing alignment or variation around particular genomic features such as indels, Copy Number Variants (CNVs), or microsatellite repeats. This work develops accuracy measurements for read alignment and variation calls, allowing the optimization of sequencing pipelines at all stages. The underlying hypothesis, then, is that different sequencing platforms and analysis software can be distinguished from each other in accuracy by both the sample and the genomic variation of interest. As the term accuracy suggests, measurements of alignment and variation recall require comparison against a truth set, for which read library simulations and high-quality data from the Genome in a Bottle Consortium and the Illumina Omni array have served as references. In exploring the hypothesis, the measurements are built into a community resource to crowdsource the creation of a benchmarking repository for pipeline comparison. Results from pipelines promoted by this computational model are then validated in the wet lab, supporting a hierarchy of pipeline performance. In particular, the construction of an accurate pipeline for genotyping microsatellite repeats is investigated, which is then used to create a database of human microsatellites. Progress in this area is vital for the growth of sequencing in both clinical and research settings. For genomics research to fully translate to the bedside, the boom of new technology must be controlled by rational metrics and industry standardization. This project addresses both of these issues, as well as contributing to the understanding of human microsatellite variation. / Ph. D.
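
As a simplified picture of how variant calls are scored against a truth set, the sketch below computes precision, recall, and F1 for records keyed by (chrom, pos, ref, alt). A real comparison of the kind described above would also normalize variant representation (left-aligning indels, splitting multi-allelic sites), which is omitted here; the records shown are invented.

```python
def variant_accuracy(calls, truth):
    """Compare a call set to a truth set, both given as (chrom, pos, ref, alt)
    tuples, and return precision, recall, and F1.  Simplified: no representation
    normalization, genotype matching, or confidence-region filtering."""
    calls, truth = set(calls), set(truth)
    tp = len(calls & truth)
    fp = len(calls - truth)
    fn = len(truth - calls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy usage with invented records.
truth = [("chr1", 101, "A", "G"), ("chr1", 250, "AT", "A"), ("chr2", 77, "C", "T")]
calls = [("chr1", 101, "A", "G"), ("chr2", 77, "C", "T"), ("chr2", 900, "G", "A")]
print(variant_accuracy(calls, truth))   # roughly (0.667, 0.667, 0.667)
```
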
937

Design Optimization of a High Aspect Ratio Rigid/Inflatable Wing

Butt, Lauren Marie 06 June 2011 (has links)
High aspect-ratio, long-endurance aircraft require different design modeling from aircraft with traditional moderate aspect ratios. High aspect-ratio, long-endurance aircraft are generally more flexible structures than traditional wings; therefore, they require modeling methods capable of handling a flexible structure even at the preliminary design stage. This work describes a design optimization method for combining rigid and inflatable wing design. The design takes advantage of the benefits of inflatable wing configurations for minimizing weight, while saving on design pressure requirements and allowing portability by using a rigid section at the root in which the inflatable section can be stowed. The multidisciplinary design optimization determines the minimum structural weight based on stress, divergence, and lift-to-drag ratio constraints. Because the goal of this design is to create an inflatable wing extension that can be packed into the rigid section, packing constraints are also applied to the design. / Master of Science
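
The overall structure of such a design problem, minimizing a weight objective subject to stress and packing margins over a rigid/inflatable split, can be sketched as a small constrained optimization. The surrogate models and constants below are illustrative placeholders only and do not reflect the thesis's structural or aeroelastic analyses; the divergence and lift-to-drag constraints are omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

# Toy surrogate models (placeholders, not the thesis's analyses).
def weight(x):
    span_frac, t = x                          # inflatable span fraction, rigid spar thickness [m]
    return 4.0e4 * (1.0 - span_frac) * t + 8.0 * span_frac

def stress_margin(x):                         # allowable minus applied; must stay >= 0
    span_frac, t = x
    applied = 5.0e3 * (0.5 + span_frac) / (0.01 * t)
    return 2.5e8 * t - applied

def packing_margin(x):                        # stowage volume in the rigid bay minus packed volume
    span_frac, _ = x
    return 0.03 * (1.0 - span_frac) - 0.02 * span_frac

x0 = np.array([0.3, 0.045])                   # feasible starting point for the toy models
res = minimize(weight, x0,
               bounds=[(0.1, 0.9), (0.002, 0.05)],
               constraints=[{"type": "ineq", "fun": stress_margin},
                            {"type": "ineq", "fun": packing_margin}],
               method="SLSQP")
print("inflatable span fraction, spar thickness [m]:", res.x)
print("surrogate structural weight [kg]:", res.fun)
```
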
938

Multi-Objective Optimization: Riccati Iteration and the Lotfi Manufacturing Problem

Mull, Benjamin Conaway 09 October 2002 (has links)
In current economic research, there are many problems that are difficult to solve without powerful computers, unique software, or novel approaches. I wrote this thesis because I believe that a powerful solution technique known as the Riccati iteration is such a novel approach, and can be applied to complex problems that would otherwise be infeasible to solve. This thesis demonstrates the power of the Riccati iteration by employing it with spreadsheet software to solve a difficult dynamic optimization problem: a capital replacement problem posed by Lotfi where multiple objectives have been identified. The Riccati iteration is shown to be the most practicable method for solving this problem, especially when compared to the Lagrange and least-squares solution methods. It is hoped that the demonstration in this thesis is so compelling that others may consider using the Riccati approach in their own research. / Master of Arts
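
For readers unfamiliar with the technique, a minimal numerical sketch of the backward discrete-time Riccati recursion for a finite-horizon linear-quadratic problem is given below, using a made-up two-state system; the Lotfi capital-replacement data and the multi-objective weighting used in the thesis are not reproduced here.

```python
import numpy as np

def riccati_iteration(A, B, Q, R, Qf, T):
    """Backward discrete-time Riccati recursion for the finite-horizon LQ problem
    min  sum_t (x_t' Q x_t + u_t' R u_t) + x_T' Qf x_T   s.t.  x_{t+1} = A x_t + B u_t.
    Returns the time-varying feedback gains K_t (u_t = -K_t x_t) and the initial cost matrix."""
    P = Qf.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return list(reversed(gains)), P

# Toy usage: a two-state, one-control system with made-up matrices.
A  = np.array([[1.0, 0.1], [0.0, 1.0]])
B  = np.array([[0.0], [0.1]])
Q  = np.diag([1.0, 0.1])
R  = np.array([[0.5]])
Qf = np.eye(2)
gains, P0 = riccati_iteration(A, B, Q, R, Qf, T=50)
print("first-period feedback gain K_0 =", gains[0])
```
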
939

Compiler-Directed Error Resilience for Reliable Computing

Liu, Qingrui 08 August 2018 (has links)
Error resilience has become as important as power and performance in modern computing architecture. There are various sources of errors that can paralyze real-world computing systems. Of particular interest to this dissertation are single-event errors. They can be the result of an energetic particle strike or an abrupt power outage that corrupts the program state, leading to system failures. Specifically, energetic particle strikes are the major cause of soft errors, while an abrupt power outage can result in memory inconsistency in nonvolatile memory systems. Unfortunately, existing techniques to handle these single-event errors are either resource-consuming (e.g., hardware approaches) or heavyweight (e.g., software approaches). To address this problem, this dissertation identifies idempotent processing as an alternative recovery technique to handle system failures in an efficient and low-cost manner. This dissertation first proposes to design and develop a compiler-directed, lightweight methodology which leverages idempotent processing and state-of-the-art sensor-based detection to achieve soft error resilience at low cost. This dissertation also introduces a lightweight soft-error-tolerant hardware design that redefines idempotent processing so that idempotent regions can be created, verified, and recovered from the processor's point of view. Furthermore, this dissertation proposes a series of compiler optimizations that significantly reduce the hardware and runtime overhead of idempotent processing. Lastly, this dissertation proposes a failure-atomic system integrated with idempotent processing to resolve another type of single-event error, i.e., failure-induced memory inconsistency in nonvolatile memory systems. / Ph. D. / Our computing systems are vulnerable to different kinds of errors. All these errors can potentially crash real-world computing systems. This dissertation specifically addresses the challenges of single-event errors. Single-event errors can be caused by energetic particle strikes or an abrupt power outage that can corrupt the program state, leading to system failures. Unfortunately, existing techniques to handle these single-event errors are expensive in terms of hardware/software. To address this problem, this dissertation leverages an interesting property called idempotence in the program. A region of code is idempotent if and only if it always generates the same output whenever the program jumps back to the region entry from any execution point within the region. Thus, we can leverage the idempotent property as a low-cost recovery technique to recover from system failures by jumping back to the beginning of the region where the errors occur. This dissertation proposes solutions to incorporate the idempotent property for resilience against these single-event errors. Furthermore, this dissertation introduces a series of optimization techniques with compiler and hardware support to improve the efficiency and reduce the overheads of error resilience. We believe that the techniques proposed in this dissertation can inspire researchers in future error resilience research.
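
The compiler and hardware mechanisms in the dissertation do not reduce to a few lines of high-level code, but the core recovery idea can be sketched: re-execute a region from its entry whenever an error is detected, which is sound only if the region never overwrites its own live-in values. The fault injection and function names below are purely illustrative.

```python
import random

def run_with_reexecution(region, live_in, fault_rate=0.3, rng=random.Random(3)):
    """Recovery-by-re-execution sketch: if an error is detected while a region runs,
    jump back to the region entry and run it again.  This is only sound when the
    region is idempotent, i.e. it never overwrites its own live-in values."""
    while True:
        result = region(**live_in)            # the region must not mutate `live_in`
        if rng.random() < fault_rate:         # pretend a detector flagged a soft error
            continue                          # recovery: re-execute from the region entry
        return result

# An idempotent region: it only reads its inputs and produces a fresh output.
def scaled_sum(a, b, scale):
    return scale * (sum(a) + sum(b))

print(run_with_reexecution(scaled_sum, {"a": [1, 2], "b": [3], "scale": 2.0}))   # always 12.0
```
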
940

Optimization of Disaggregated Space Systems Using the Disaggregated Integral Systems Concept Optimization Technology Methodology

Wagner, Katherine Mott 10 July 2020 (has links)
This research describes the development and application of the Disaggregated Integral Systems Concept Optimization Technology (DISCO-Tech) methodology. DISCO-Tech is a modular space system design tool that focuses on the optimization of disaggregated and non-traditional space systems. It uses a variable-length genetic algorithm to simultaneously optimize orbital parameters, payload parameters, and payload distribution for space systems. The solutions produced by the genetic algorithm are evaluated using cost estimation, coverage analysis, and spacecraft sizing modules. A set of validation cases is presented. DISCO-Tech is then applied to three representative space mission design problems. The first problem is the design of a resilient rideshare-manifested fire detection system. This analysis uses a novel framework for evaluating constellation resilience to threats using mixed integer linear programming. A solution is identified where revisit times of under four hours are achievable for $10.5 million, one quarter of the cost of a system manifested using dedicated launches. The second problem applies the same resilience techniques to the design of an expanded GPS monitor station network. Nine additional monitor stations are identified that allow the network to continuously monitor the GPS satellites even when five of the monitor stations are inoperable. The third problem is the design of a formation of satellites for performing sea surface height detection using interferometric synthetic aperture radar techniques. A solution is chosen that meets the performance requirements of an upcoming monolithic system at 70% of the cost of the monolithic system. / Doctor of Philosophy / Civilians, businesses, and the government all rely on space-based resources for their daily operations. For example, the signal provided by GPS satellites is used by drivers, commercial pilots, soldiers, and more. Communications satellites provide phone and internet to users in remote areas. Weather satellites provide short-term forecasting and measure climate change. Because of the importance of these and other space systems, it is necessary that they are designed in an efficient, reliable, and cost-effective manner. The Disaggregated Integral Systems Concept Optimization Technology (DISCO-Tech) is introduced as a means of designing these space systems. DISCO-Tech optimizes various aspects of the space mission, including the number of satellites needed to complete the mission, the location of the satellites, and the sensors that each satellite needs to accomplish its mission. This dissertation describes how DISCO-Tech works, then applies DISCO-Tech to several example missions. The first mission uses satellites to monitor forest fires in California. In order to reduce the cost of this mission, the satellites share launch vehicles with satellites from other, unrelated missions. Next, DISCO-Tech is used to choose the placement of new ground stations for GPS satellites. Because GPS is an important asset, this study also assesses the performance of the network of ground stations when some of the stations are inoperable. Finally, DISCO-Tech is used to design a group of satellites that measure sea level, since sea level is important for climatology research. A design is presented for a group of satellites that perform these measurements at a lower cost than a planned mission that uses a single satellite.
