271

Prise en compte des incertitudes des problèmes en vibro-acoustiques (ou interaction fluide-structure) / Taking into account the uncertainties of vibro-acoustic problems (or fluid-structure interaction)

Dammak, Khalil 27 November 2018 (has links)
This PhD thesis deals with the robust analysis and reliability-based design optimization of vibro-acoustic problems (or fluid-structure interaction problems), taking into account uncertainties in the input parameters. In the design and sizing phase, it is worthwhile to model vibro-acoustic systems together with their variability, which is essentially related to imperfections in the geometry and in the material properties. It is therefore important, if not essential, to account for the dispersion of these uncertain parameters in order to ensure a robust design. The purpose is thus to determine the capabilities and limitations, in terms of accuracy and computational cost, of methods based on polynomial chaos expansions in comparison with the reference Monte Carlo technique for studying the mechanical behavior of vibro-acoustic problems with uncertain parameters. Studying the propagation of these uncertainties allows them to be integrated into the design phase. The goal of Reliability-Based Design Optimization (RBDO) is to find a compromise between minimum cost and a target reliability. Several methods, such as the hybrid method (HM) and the Optimum Safety Factor (OSF) method, have been developed to achieve this goal. To cope with the complexity of vibro-acoustic systems with uncertain parameters, methodologies specific to this problem were developed using meta-modeling techniques, allowing a vibro-acoustic surrogate model to be built that satisfies both efficiency and accuracy requirements. The objective of this thesis is to determine the best methodology to follow for the reliability-based design optimization of vibro-acoustic systems with uncertain parameters.
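The comparison between polynomial chaos expansions and Monte Carlo simulation described above can be pictured on a toy problem. The sketch below is an illustrative assumption, not code from the thesis: it propagates one uncertain stiffness through a single-degree-of-freedom natural frequency using plain Monte Carlo and a low-order Hermite polynomial-chaos surrogate fitted by regression; the model, distribution, and expansion order are made up for the example.

```python
# Hedged sketch: Monte Carlo vs. a simple polynomial-chaos surrogate for
# propagating one uncertain input through a toy vibro-acoustic-like response.
import numpy as np

def natural_frequency(k, m=1.0):
    """Toy response: natural frequency of a 1-DOF oscillator (rad/s)."""
    return np.sqrt(k / m)

rng = np.random.default_rng(0)
k_mean, k_std = 1.0e4, 1.0e3                 # assumed uncertain stiffness (Gaussian)

# --- Reference: Monte Carlo propagation (many "expensive" model runs) ---
xi_mc = rng.standard_normal(100_000)
f_mc = natural_frequency(k_mean + k_std * xi_mc)

# --- Surrogate: degree-3 Hermite polynomial chaos fitted from a few runs ---
xi_fit = rng.standard_normal(200)
f_fit = natural_frequency(k_mean + k_std * xi_fit)
coeffs = np.polynomial.hermite_e.hermefit(xi_fit, f_fit, deg=3)
f_pce = np.polynomial.hermite_e.hermeval(xi_mc, coeffs)   # cheap surrogate evaluations

print("MC  mean/std:", f_mc.mean(), f_mc.std())
print("PCE mean/std:", f_pce.mean(), f_pce.std())
```

The point of the surrogate is that, once fitted from a few hundred model runs, its statistics can be evaluated at negligible cost compared with the hundred thousand direct model evaluations required by the reference Monte Carlo approach.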
272

Otimização multidisciplinar em projeto de asas flexíveis / Multidisciplinary design optimization of flexible wings

Caixeta Júnior, Paulo Roberto 23 November 2006 (has links)
The aeronautical industry has been promoting technological advances at increasing speed in order to survive in extremely competitive markets. In this scenario, the use of design tools that accelerate the development of new aircraft becomes essential. Current computational resources have allowed a large increase in the number of tools that assist the work of designers and engineers. The design of an aircraft is, in essence, a multidisciplinary task, which has stimulated the development of computational tools that work with several disciplines at the same time. Among them, multidisciplinary design optimization (MDO) stands out: it combines optimization methods with mathematical models of distinct areas of a design to find compromise solutions. The present work introduces MDO and discusses some possible applications of this methodology. An MDO system for the design of flexible wings was implemented, considering dynamic aeroelasticity constraints and structural mass. The goal is to find ideal bending and torsional stiffness distributions of the wing structure that maximize the critical flutter speed and minimize the structural mass. To do so, a structural dynamics model based on the finite element method, an unsteady aerodynamic model based on strip theory and Theodorsen's two-dimensional solutions, a flutter prediction model based on the K method and, finally, an optimizer based on genetic algorithms (GAs) were employed. The details of each model, the constraints applied, and the way the models interact throughout the optimization are presented. An analysis for choosing the GA optimization parameters is carried out, followed by the evaluation of two test cases to verify the functionality of the implemented system. The results demonstrate an efficient methodology, capable of finding optimal solutions to the proposed problems, which with appropriate adjustments can be of great value in accelerating the development of new aircraft.
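As a rough illustration of the kind of optimizer described above, the following sketch runs a minimal genetic algorithm over a spanwise distribution of stiffness scale factors, trading a toy structural mass against a toy flutter-speed surrogate. The objective functions, bounds, and GA settings are assumptions for illustration only, not the finite element, strip-theory, or K-method models used in the thesis.

```python
# Hedged sketch: minimal GA over stiffness scale factors (toy objectives).
import numpy as np

rng = np.random.default_rng(1)
N_SECTIONS = 8                                  # spanwise stations, one stiffness factor each

def structural_mass(x):
    return x.sum()                              # toy: mass grows with stiffness

def flutter_speed(x):
    return 50.0 + 10.0 * np.sqrt(x.mean())      # toy surrogate: stiffer wing -> later flutter onset

def fitness(x):
    # compromise objective: maximize flutter speed while penalizing structural mass
    return flutter_speed(x) - 0.5 * structural_mass(x)

def genetic_algorithm(pop_size=40, generations=100):
    pop = rng.uniform(0.5, 2.0, size=(pop_size, N_SECTIONS))
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # tournament selection of two parents
            a, b, c, d = pop[rng.integers(pop_size, size=4)]
            p1 = a if fitness(a) > fitness(b) else b
            p2 = c if fitness(c) > fitness(d) else d
            # one-point crossover followed by Gaussian mutation
            cut = rng.integers(1, N_SECTIONS)
            child = np.concatenate([p1[:cut], p2[cut:]]) + rng.normal(0.0, 0.05, N_SECTIONS)
            new_pop.append(np.clip(child, 0.5, 2.0))
        pop = np.array(new_pop)
    return max(pop, key=fitness)

best = genetic_algorithm()
print("best stiffness factors:", np.round(best, 3))
print("flutter speed:", round(flutter_speed(best), 2), " mass:", round(structural_mass(best), 2))
```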
273

Development of Safety Standards for CubeSat Propulsion Systems

Cheney, Liam Jon 28 February 2014 (has links)
The CubeSat community has begun to develop and implement propulsion systems. This movement represents a new capability which may satisfy mission needs such as orbital and constellation maintenance, formation flight, de-orbit, and even interplanetary travel. With the freedom and capability granted by propulsion systems, CubeSat providers must accept new responsibilities in proportion to the potential hazards that propulsion systems may present. The Cal Poly CubeSat program publishes and maintains the CubeSat Design Specification (CDS), and wishes to help the CubeSat community safely and responsibly expand its capabilities to include propulsive designs. For this reason, the author embarked on the task of developing a draft of safety standards for CubeSat propulsion systems. Wherever possible, the standards are based on existing documents. The author provides an overview of certain concepts in systems safety with respect to the classification of hazards, the determination of required fault tolerances, and the use of inhibits to satisfy fault tolerance requirements. The author discusses hazards that could exist during ground operations and through launch with respect to hazardous materials and pressure systems. Most of the standards related to Range Safety are drawn from AFSPCMAN 91-710. Having reviewed a range of hypothetical propulsion system architectures with an engineer from Range Safety at Vandenberg Air Force Base, the author compiled a case study. The author discusses many aspects of orbital safety, including the risk of collision with the host vehicle and with third-party satellites, along with the trackability of CubeSats using propulsion systems. Some recommendations are given for working with the Joint Functional Component Command for Space (JFCC SPACE), thanks to the input of two engineers who work with the Joint Space Operations Center (JSpOC). Command security is discussed as an important aspect of a mission which implements a propulsion system. The author also discusses End-of-Life procedures such as safing and de-orbit operations. The orbital safety standards are intended to promote “good citizenship.” The author steps through each proposed standard and offers justification. The author is confident that these standards will set the stage for a dialogue in the CubeSat community which will lead to the formulation of a reasonable and comprehensive set of standards. The author hopes that the discussions given throughout this document will help CubeSat developers to visualize the path to flight readiness so that they can get started on the right foot.
274

Confidence-based model validation for reliability assessment and its integration with reliability-based design optimization

Moon, Min-Yeong 01 August 2017 (has links)
Conventional reliability analysis methods assume that a simulation model is able to represent the real physics accurately. However, this assumption may not always hold, as the simulation model could be biased due to simplifications and idealizations. Simulation models are approximate mathematical representations of real-world systems and thus cannot exactly imitate the real-world systems. The accuracy of a simulation model is especially critical when it is used for the reliability calculation. Therefore, a simulation model should be validated using prototype testing results for reliability analysis. However, in practical engineering situations, experimental output data for the purpose of model validation are limited due to the significant cost of a large number of physical tests. Thus, model validation needs to be carried out to account for the uncertainty induced by insufficient experimental output data as well as the inherent variability existing in the physical system and hence in the experimental test results. Therefore, in this study, a confidence-based model validation method that captures the variability and the uncertainty, and that corrects model bias at a user-specified target confidence level, has been developed. Reliability assessment using the confidence-based model validation can provide a conservative estimate of the reliability of a system, with confidence, when only insufficient experimental output data are available. Without confidence-based model validation, the design obtained from the conventional reliability-based design optimization (RBDO) process could either fail to satisfy the target reliability or be overly conservative. Therefore, simulation model validation is necessary to obtain a reliable optimum product using the RBDO process. In this study, the developed confidence-based model validation is integrated into the RBDO process to provide a truly confident RBDO optimum design. The developed confidence-based model validation provides a conservative RBDO optimum design at the target confidence level. However, it is challenging to obtain steady convergence in the RBDO process with confidence-based model validation because the feasible domain changes as the design moves (i.e., a moving-target problem). To resolve this issue, a practical optimization procedure, which terminates the RBDO process once the target reliability is satisfied, is proposed. In addition, efficiency is achieved by carrying out deterministic design optimization (DDO) and RBDO without model validation, followed by RBDO with the confidence-based model validation. Numerical examples are presented to demonstrate that the proposed RBDO approach obtains a conservative and practical optimum design that satisfies the target reliability of the designed product given a limited number of experimental output data. Thus far, while the simulation model might be biased, it has been assumed that the distribution models for input variables and parameters are correct. However, in practical applications, only limited test data are available (parameter uncertainty) for modeling the input distributions of material properties, manufacturing tolerances, operational loads, etc. Also, as before, only a limited number of output test data are used. Therefore, the reliability needs to be estimated by considering parameter uncertainty as well as the biased simulation model. Computational methods and a process are developed to obtain a confidence-based reliability assessment.
The insufficient input and output test data induce uncertainties in the input distribution models and output distributions, respectively. These uncertainties, which arise from lack of knowledge (the insufficient test data), are different from the inherent input distributions and corresponding output variabilities, which reflect the natural randomness of the physical system.
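One simple way to picture a confidence-based (conservative) reliability estimate from scarce experimental output data is to bootstrap the small data set and take a lower percentile of the resulting reliability distribution. The sketch below is only an illustration under that assumption; the data values, limit-state, and confidence level are hypothetical, and the thesis develops its own, more elaborate formulation.

```python
# Hedged sketch: conservative reliability at a target confidence level via bootstrap.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

# Hypothetical small experimental data set of a normalized performance output,
# where failure is defined as the output exceeding a limit value of 1.0.
test_data = np.array([0.82, 0.91, 0.78, 0.88, 0.95, 0.84, 0.80, 0.89])
LIMIT = 1.0

def reliability(sample, limit=LIMIT):
    """P(output < limit) under a normal model fitted to one (re)sample."""
    mu = sample.mean()
    sigma = max(sample.std(ddof=1), 1e-12)          # guard against degenerate resamples
    return 0.5 * (1.0 + erf((limit - mu) / (sigma * sqrt(2.0))))

# Bootstrap over the limited data to capture the epistemic (lack-of-knowledge) uncertainty.
boot = np.array([
    reliability(rng.choice(test_data, size=test_data.size, replace=True))
    for _ in range(5000)
])

TARGET_CONFIDENCE = 0.90    # user-specified target confidence level (assumed)
conservative = np.percentile(boot, 100.0 * (1.0 - TARGET_CONFIDENCE))
print(f"Reliability estimate at {TARGET_CONFIDENCE:.0%} confidence: {conservative:.4f}")
```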
275

Surface Modified Capillaries in Capillary Electrophoresis Coupled to Mass Spectrometry : Method Development and Exploration of the Potential of Capillary Electrophoresis as a Proteomic Tool

Zuberovic, Aida January 2009 (has links)
Increased knowledge about the complexity of physiological processes raises the demands on the analytical techniques employed to explore them. A comprehensive analysis of the entire sample content is today the most common approach to investigating the molecular interplay behind a physiological deviation. For this purpose, a method is required that offers a number of important properties, such as speed and simplicity, high resolution and sensitivity, minimal sample volume requirements, cost efficiency and robustness, possibility of automation, high throughput and a wide application range. Capillary electrophoresis (CE) coupled to mass spectrometry (MS) has great potential and fulfils many of these criteria. However, further developments and improvements of these techniques and their combination are required to meet the challenges of complex biological samples. Protein analysis using CE is a challenging task due to protein adsorption to the negatively charged fused-silica capillary wall. This is especially pronounced with increasing basicity and size of proteins and peptides. In this thesis, the adsorption problem was addressed by using an in-house developed, physically adsorbed polyamine coating, named PolyE-323. The coating procedure is fast and simple and generates a coating that is stable over a wide pH range (pH 2-11). By coupling PolyE-323-modified capillaries to MS, using either electrospray ionisation (ESI) or matrix-assisted laser desorption/ionisation (MALDI), successful analyses of peptides, proteins and complex samples, such as protein digests and crude human body fluids, were obtained. The possibilities of using CE-MALDI-MS/MS as a proteomic tool, combined with proper sample preparation, are further demonstrated by applying high-abundant protein depletion in combination with a peptide derivatisation step or isoelectric focusing (IEF). These approaches were applied in profiling the proteomes of human cerebrospinal fluid (CSF) and human follicular fluid (hFF), respectively. Finally, a multiplexed quantitative proteomic analysis was performed on a set of ventricular cerebrospinal fluid (vCSF) samples from a patient with traumatic brain injury (TBI) to follow relative changes in protein patterns during the recovery process. The results presented in this thesis confirm the potential of CE, in combination with MS, as a valuable choice for the analysis of complex biological samples and clinical applications.
276

Optimal allocation of thermodynamic irreversibility for the integrated design of propulsion and thermal management systems

Maser, Adam Charles 13 November 2012 (has links)
More electric aircraft systems, high power avionics, and a reduction in heat sink capacity have placed a larger emphasis on correctly satisfying aircraft thermal management requirements during conceptual design. Thermal management systems must be capable of dealing with these rising heat loads, while simultaneously meeting mission performance. Since all subsystem power and cooling requirements are ultimately traced back to the engine, the growing interactions between the propulsion and thermal management systems are becoming more significant. As a result, it is necessary to consider their integrated performance during the conceptual design of the aircraft gas turbine engine cycle to ensure that thermal requirements are met. This can be accomplished by using thermodynamic modeling and simulation to investigate the subsystem interactions while conducting the necessary design trades to establish the engine cycle. As the foundation for this research, a parsimonious, transparent thermodynamic model of propulsion and thermal management systems performance was created with a focus on capturing the physics that have the largest impact on propulsion design choices. A key aspect of this approach is the incorporation of physics-based formulations involving the concurrent usage of the first and second laws of thermodynamics to achieve a clearer view of the component-level losses. This is facilitated by the direct prediction of the exergy destruction distribution throughout the integrated system and the resulting quantification of available work losses over the time history of the mission. The characterization of the thermodynamic irreversibility distribution helps give the designer an absolute and consistent view of the tradeoffs associated with the design of the system. Consequently, this leads directly to the question of the optimal allocation of irreversibility across each of the components. An irreversibility allocation approach based on the economic concept of resource allocation is demonstrated for a canonical propulsion and thermal management systems architecture. By posing the problem in economic terms, exergy destruction is treated as a true common currency to barter for improved efficiency, cost, and performance. This then enables the propulsion systems designer to better fulfill system-level requirements and to create a system more robust to future requirements.
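The exergy-destruction bookkeeping described above rests on the second law. As a minimal illustration (an assumption for this listing, not the thesis model), the Gouy-Stodola relation X_dest = T0 * S_gen can be used to price the irreversibility of a simple heat-rejection step:

```python
# Hedged sketch: component-level exergy destruction via the Gouy-Stodola theorem.
T0 = 288.15          # dead-state (ambient) temperature, K (assumed)

def exergy_destruction(q_watts, t_hot, t_cold, t0=T0):
    """Exergy destroyed (W) by transferring heat q from t_hot to t_cold (K)."""
    entropy_generated = q_watts * (1.0 / t_cold - 1.0 / t_hot)   # W/K
    return t0 * entropy_generated                                 # X_dest = T0 * S_gen

# Example: 5 kW of avionics waste heat rejected from a 360 K plate to a 320 K coolant loop.
print(f"Exergy destroyed: {exergy_destruction(5000.0, 360.0, 320.0):.1f} W")
```

Summed over components and over the mission time history, terms like this give the irreversibility distribution that the allocation approach treats as a common currency.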
277

Enabling methods for the design and optimization of detection architectures

Payan, Alexia Paule Marie-Renee 08 April 2013 (has links)
The surveillance of geographic borders and critical infrastructures using limited sensor capability has always been a challenging task in many homeland security applications. While geographic borders may be very long and may go through isolated areas, critical assets may be large and numerous and may be located in highly populated areas. As a result, it is virtually impossible to secure each and every mile of border around the country, and each and every critical infrastructure inside the country. Most often, a compromise must be made between the percentage of border or critical asset covered by surveillance systems and the induced cost. Although threats to homeland security can be conceived to take place in many forms, those regarding illegal penetration of the air, land, and maritime domains under the cover of day-to-day activities have been identified to be of particular interest. For instance, the proliferation of drug smuggling, illegal immigration, international organized crime, resource exploitation, and, more recently, modern piracy requires the strengthening of land border and maritime awareness in increasingly complex and challenging national security environments. The complexity and challenges associated with the above mission and with the protection of the homeland may explain why a methodology enabling the design and optimization of distributed detection system architectures, able to provide accurate scanning of the air, land, and maritime domains in a specific geographic and climatic environment, is a central concern for the defense and protection community. This thesis proposes a methodology aimed at addressing the aforementioned gaps and challenges. The methodology reformulates the problem in clear terms so as to facilitate the subsequent modeling and simulation of potential operational scenarios. The needs and challenges involved in the proposed study are investigated, and a detailed description of a multidisciplinary strategy for the design and optimization of detection architectures in terms of detection performance and cost is provided. This implies the creation of a framework for the modeling and simulation of notional scenarios, as well as the development of improved methods for accurate optimization of detection architectures. More precisely, the present thesis describes a new approach to determining detection architectures able to provide effective coverage of a given geographical environment at a minimum cost, by optimizing the appropriate number, types, and locations of surveillance and detection systems. The objective of the optimization is twofold. First, given the topography of the terrain under study, several promising locations are determined for each sensor system based on the percentage of terrain it covers. Second, architectures of sensor systems able to effectively cover large percentages of the terrain at minimal cost are determined by optimizing the number, types, and locations of each detection system in the architecture. To do so, a modified Genetic Algorithm and a modified Particle Swarm Optimization algorithm are investigated and their ability to provide consistent results is compared. Ultimately, the modified Particle Swarm Optimization algorithm is used to obtain a Pareto frontier of detection architectures able to satisfy varying customer preferences on coverage performance and related cost.
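A stripped-down version of the particle-swarm idea can be sketched as follows. The grid, detection ranges, cost weighting, and PSO coefficients are illustrative assumptions, not the thesis's modified algorithm or its terrain models; the sketch simply places a fixed number of notional sensors so as to maximize covered area minus a cost penalty.

```python
# Hedged sketch: basic PSO placing notional sensors on a 2-D grid for coverage vs. cost.
import numpy as np

rng = np.random.default_rng(3)
GRID = 50                      # terrain discretized as GRID x GRID cells
N_SENSORS = 4
RANGE = 10.0                   # detection radius (cells), assumed identical sensors
COST_PER_SENSOR = 0.02         # coverage-fraction-equivalent cost per sensor

yy, xx = np.mgrid[0:GRID, 0:GRID]

def coverage(positions):
    """Fraction of terrain cells within detection range of at least one sensor."""
    covered = np.zeros((GRID, GRID), dtype=bool)
    for x, y in positions.reshape(-1, 2):
        covered |= (xx - x) ** 2 + (yy - y) ** 2 <= RANGE ** 2
    return covered.mean()

def objective(flat_positions):
    return coverage(flat_positions) - COST_PER_SENSOR * N_SENSORS

# Standard PSO over the flattened (x1, y1, ..., xN, yN) sensor coordinates.
n_particles, dims = 30, 2 * N_SENSORS
pos = rng.uniform(0, GRID, (n_particles, dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, GRID)
    vals = np.array([objective(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("best coverage fraction:", round(coverage(gbest), 3))
```

Repeating such a search for different cost weights (or with a true multi-objective formulation) is one way to trace out the kind of coverage-versus-cost Pareto frontier the abstract mentions.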
278

Parametric Yield of VLSI Systems under Variability: Analysis and Design Solutions

Haghdad, Kian 29 April 2011 (has links)
Variability has become one of the vital challenges that designers of integrated circuits encounter. Imperfect manufacturing processes manifest themselves as variations in the design parameters. These variations, together with those in the operating environment of VLSI circuits, result in unexpected changes in the timing, power, and reliability of the circuits. With scaling transistor dimensions, process and environmental variations become significantly more important in modern VLSI design. A smaller feature size means that the physical characteristics of a device are more prone to these unaccounted-for changes. To achieve a robust design, the random and systematic fluctuations in the manufacturing process and the variations in the environmental parameters should be analyzed and their impact on the parametric yield should be addressed. This thesis studies the challenges and proposes solutions for designing robust VLSI systems in the presence of variations. Initially, to gain insight into system design under variability, the parametric yield is examined for a small circuit. Understanding the impact of variations on the yield at the circuit level is vital to accurately estimating and optimizing the yield at the system granularity. Motivated by the observations and results found at the circuit level, statistical analyses are performed, and solutions are proposed, at the system level of abstraction to reduce the impact of the variations and increase the parametric yield. At the circuit level, the impact of supply and threshold voltage variations on the parametric yield is discussed. Here, a design centering methodology is proposed to maximize the parametric yield and optimize the power-performance trade-off under variations. In addition, the scaling trend in the yield loss is studied, and some considerations for design centering in current and future CMOS technologies are explored. The investigation at the circuit level suggests that the operating temperature significantly affects the parametric yield, and that the yield is very sensitive to the magnitude of the variations in supply and threshold voltage. Therefore, the spatial nature of process and environmental variations makes it necessary to analyze the yield at a higher granularity. Here, temperature and voltage variations are mapped across the chip to accurately estimate the yield loss at the system level. At the system level, the impact of process-induced temperature variations on the power grid design is analyzed first, and an efficient verification method is provided that ensures the robustness of the power grid in the presence of variations. Then, a statistical analysis of the timing yield is conducted, taking into account both the process and environmental variations. By considering the statistical profile of the temperature and supply voltage, the process variations are mapped to the delay variations across a die, which ensures an accurate estimation of the timing yield. In addition, a method is proposed to accurately estimate the power yield considering process-induced temperature and supply voltage variations. This helps check the robustness of the circuits early in the design process. Lastly, design solutions are presented to reduce the power consumption and increase the timing yield under the variations. In the first solution, a guideline for floorplanning optimization in the presence of temperature variations is offered.
Non-uniformity in the thermal profile of an integrated circuit is an issue that impacts the parametric yield and threatens chip reliability. Therefore, the correlation between the total power consumption and the temperature variations across a chip is examined, and floorplanning guidelines are proposed that use this correlation to efficiently optimize the chip's total power while taking thermal uniformity into account. The second design solution provides an optimization methodology for assigning the power supply pads across the chip to maximize the timing yield. A mixed-integer nonlinear programming (MINLP) optimization problem, subject to voltage drop and current constraints, is efficiently solved to find the optimum number and locations of the pads.
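As a small illustration of parametric timing-yield estimation under supply- and threshold-voltage variations, the sketch below uses Monte Carlo sampling with an alpha-power-law delay model standing in for a characterized critical path. All numbers (sigmas, timing spec, alpha) are assumptions made up for the example, not values from the thesis.

```python
# Hedged sketch: Monte Carlo timing-yield estimation under Vdd/Vth variation.
import numpy as np

rng = np.random.default_rng(4)

VDD_NOM, VTH_NOM, ALPHA = 1.0, 0.3, 1.3     # assumed nominal values / velocity-saturation index
T_SPEC = 1.15                               # normalized timing spec for the critical path

def path_delay(vdd, vth):
    """Alpha-power-law path delay, normalized so the nominal corner has delay 1.0."""
    nominal = VDD_NOM / (VDD_NOM - VTH_NOM) ** ALPHA
    return (vdd / (vdd - vth) ** ALPHA) / nominal

n = 100_000
vdd = rng.normal(VDD_NOM, 0.03, n)          # supply variation (assumed 3% sigma)
vth = rng.normal(VTH_NOM, 0.02, n)          # threshold variation (assumed 20 mV sigma)

delays = path_delay(vdd, vth)
timing_yield = np.mean(delays <= T_SPEC)
print(f"Estimated timing yield: {timing_yield:.2%}")
```

In the flow the abstract describes, the per-die delay samples would additionally be conditioned on the spatial temperature and voltage maps rather than drawn independently as here.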
279

Conceptual Design and Technical Risk Analysis of Quiet Commercial Aircraft Using Physics-Based Noise Analysis Methods

Olson, Erik Davin 19 May 2006 (has links)
An approach was developed which allows for design studies of commercial aircraft using physics-based noise analysis methods while retaining the ability to perform the rapid tradeoff and risk analysis studies needed at the conceptual design stage. A prototype integrated analysis process was created for computing the total aircraft EPNL at the Federal Aviation Regulations Part 36 certification measurement locations using physics-based methods for fan rotor-stator interaction tones and jet mixing noise. The analysis process was then used in combination with design of experiments to create response surface equations (RSEs) for the engine and aircraft performance metrics, geometric constraints and takeoff and landing noise levels. In addition, Monte Carlo analysis was used to assess the expected variability of the metrics under the influence of uncertainty, and to determine how the variability is affected by the choice of engine cycle. Finally, the RSEs were used to conduct a series of proof-of-concept conceptual-level design studies demonstrating the utility of the approach. The study found that a key advantage to using physics-based analysis during conceptual design lies in the ability to assess the benefits of new technologies as a function of the design to which they are applied. The greatest difficulty in implementing the physics-based analysis proved to be the generation of design geometry at a sufficient level of detail for high-fidelity analysis.
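The response-surface-equation workflow described above (design of experiments, RSE fit, then Monte Carlo on the cheap surrogate) can be sketched as follows. The placeholder epnl_analysis function, design variables, ranges, and distributions are assumptions made up for illustration; they do not reproduce the physics-based noise methods of the thesis.

```python
# Hedged sketch: quadratic response surface equation (RSE) fit + Monte Carlo variability study.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(5)

def epnl_analysis(fan_pressure_ratio, bypass_ratio):
    """Placeholder for an expensive physics-based noise analysis (toy surface)."""
    return 95.0 - 1.2 * bypass_ratio + 3.0 * (fan_pressure_ratio - 1.6) ** 2

def quadratic_features(X):
    """Design matrix with intercept, linear, and full second-order terms."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# Design of experiments: space-filling samples over assumed design ranges.
X_doe = np.column_stack([rng.uniform(1.4, 1.8, 30),     # fan pressure ratio
                         rng.uniform(5.0, 12.0, 30)])   # bypass ratio
y_doe = epnl_analysis(X_doe[:, 0], X_doe[:, 1])

beta, *_ = np.linalg.lstsq(quadratic_features(X_doe), y_doe, rcond=None)   # fit the RSE

# Monte Carlo on the RSE: propagate uncertainty in the design variables cheaply.
X_mc = np.column_stack([rng.normal(1.6, 0.02, 50_000), rng.normal(8.0, 0.3, 50_000)])
epnl_mc = quadratic_features(X_mc) @ beta
print(f"EPNL mean = {epnl_mc.mean():.2f} dB, std = {epnl_mc.std():.2f} dB")
```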
280

A Systematic Process for Adaptive Concept Exploration

Nixon, Janel Nicole 29 November 2006 (has links)
This thesis presents a method for streamlining the process of obtaining and interpreting quantitative data for the purpose of creating a low-fidelity modeling and simulation environment. By providing a more efficient means of obtaining such information, quantitative analyses become much more practical for decision-making in the very early stages of design, where traditionally quantitative methods are viewed as too expensive and cumbersome for concept evaluation. The method developed to address this need is the Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion: as data are acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data are used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information. The results show that a tailored data set and an informed model structure work together to provide a meaningful quantitative representation of the system while relying on only a small amount of resources to generate that information. In comparison to more traditional modeling and simulation approaches, the SPACE method provides a more accurate representation of the system using fewer resources to generate that representation. For this reason, the SPACE method acts as an enabler for decision-making in the very early design stages, where the desire is to base design decisions on quantitative information while not wasting valuable resources obtaining unnecessary high-fidelity information about all the candidate solutions. Thus, the approach enables concept selection to be based on parametric, quantitative data so that informed, unbiased decisions can be made.
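A toy version of sequential, adaptive design-space sampling in the spirit of the above might look like the sketch below: each new sample is placed where the current cheap surrogate appears least trustworthy, judged by leave-one-out error near randomly drawn candidate points. The one-dimensional test function, surrogate choice, and scoring rule are assumptions for illustration only, not the SPACE implementation.

```python
# Hedged sketch: sequential adaptive sampling of an expensive 1-D analysis.
import numpy as np

rng = np.random.default_rng(6)

def expensive_analysis(x):
    """Stand-in for a costly design analysis (made up for the example)."""
    return np.sin(3.0 * x) + 0.3 * x**2

def fit_surrogate(x, y, deg=3):
    return np.polynomial.polynomial.Polynomial.fit(x, y, deg)

x = np.linspace(0.0, 3.0, 6)                 # small initial design of experiments
y = expensive_analysis(x)

for _ in range(10):                          # sequential refinement loop
    # Leave-one-out residuals: where does the current surrogate disagree with the data?
    loo_err = np.array([
        abs(y[i] - fit_surrogate(np.delete(x, i), np.delete(y, i))(x[i]))
        for i in range(len(x))
    ])
    # Score random candidates by nearby LOO error and by distance from existing samples,
    # so new runs land in poorly modeled, sparsely sampled regions.
    candidates = rng.uniform(0.0, 3.0, 200)
    dists = np.abs(candidates[:, None] - x[None, :])
    score = loo_err[np.argmin(dists, axis=1)] * dists.min(axis=1)
    x_new = candidates[np.argmax(score)]
    x = np.append(x, x_new)
    y = np.append(y, expensive_analysis(x_new))

grid = np.linspace(0.0, 3.0, 301)
surrogate = fit_surrogate(x, y)
print(f"{len(x)} samples placed; max surrogate error on grid:",
      round(float(np.max(np.abs(surrogate(grid) - expensive_analysis(grid)))), 4))
```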
