311 |
Rough surface scattering under Gaussian beam illumination and the Kirchhoff approximation / Tyeryar, Keith Allen, 07 April 2009 (has links)
In this thesis, an analysis of the scattering of a rough perfect electric conductor (PEC) surface under illumination by a Gaussian beam using the Kirchhoff approximation is presented. The analysis assumes a source distribution which yields a Gaussian beam solution as a radiated field. This field is used to excite a current density on the surface using the Kirchhoff approximation. A vector potential approach utilizes this current to calculate the fields scattered by the surface. The analysis is carried out for the backscatter case and near-normal incidence in order to reduce the final numerical evaluation to a two-dimensional integration. The normalized radar cross-section (NRCS) is calculated and compared with the result for plane wave illumination.
The analysis explores the effects of varying the source aperture size, rough-surface correlation length, and rms height on the NRCS. An asymptotic evaluation of the mean-squared field is presented, as well as the mathematical form of the fourth moment of the scattered field. As a further study, the NRCS of a rough surface under Gaussian-tapered plane-wave illumination is presented. The interplay of the beam spot and correlation length for such illuminated surfaces is discussed. / Master of Science
|
312 |
An introductory survey of probability density function control / Ren, M., Zhang, Qichun, Zhang, J., 03 October 2019 (has links)
Yes / Probability density function (PDF) control investigates controller design approaches in which the random variables of a stochastic process are adjusted to follow desirable distributions. In other words, the shape of the system PDF can be regulated by controller design. Unlike existing stochastic optimization and control methods, the central problem of PDF control is to establish the evolution of the PDF expressions of the system variables. Once the relationship between the control input and the output PDF is formulated, the control objective can be described as obtaining the control input signals that adjust the system output PDFs to follow pre-specified target PDFs. Motivated by the development of data-driven control and state-of-the-art PDF-based applications, this paper summarizes recent research results on PDF control, with the controller design approaches categorized into three groups: (1) system-model-based direct evolution PDF control; (2) model-based distribution-transformation PDF control; and (3) data-based PDF control. In addition, minimum entropy control, PDF-based filter design, fault diagnosis and probabilistic decoupling design are briefly introduced as theoretical extensions. / De Montfort University - DMU HEIF’18 project, Natural Science Foundation of Shanxi Province [grant number 201701D221112], National Natural Science Foundation of China [grant numbers 61503271 and 61603136]
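A toy illustration of the idea (entirely ours, not from the survey): for a linear system driven by Gaussian noise the stationary output PDF is Gaussian, so "shaping the PDF" reduces to placing the closed-loop pole so the stationary variance matches the target. The system and all numbers below are invented.

```python
import numpy as np

# Hypothetical scalar system: x[k+1] = a*x[k] + b*u[k] + w[k], w ~ N(0, q).
# With state feedback u = -K*x the closed loop is x[k+1] = (a - b*K)*x[k] + w[k],
# whose stationary PDF is N(0, q / (1 - (a - b*K)^2)).  Regulating the output
# PDF then amounts to choosing K to hit a target stationary variance.
a, b, q = 0.9, 1.0, 0.04
target_var = 0.1                      # variance of the desired stationary PDF

a_cl = np.sqrt(1.0 - q / target_var)  # closed-loop pole giving the target variance
K = (a - a_cl) / b                    # required feedback gain

rng = np.random.default_rng(0)
x = 0.0
samples = []
for k in range(200_000):
    x = (a - b * K) * x + rng.normal(0.0, np.sqrt(q))
    if k > 1_000:                     # discard the transient
        samples.append(x)

print(K, np.var(samples))             # empirical variance should be close to 0.1
```

For non-Gaussian noise or nonlinear dynamics the output PDF is no longer summarized by two moments, which is exactly where the evolution equations surveyed in the paper become necessary.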
|
313 |
A Novel Data-based Stochastic Distribution Control for Non-Gaussian Stochastic Systems / Zhang, Qichun, Wang, H., 06 April 2021 (has links)
Yes / This note presents a novel data-based approach to the non-Gaussian stochastic distribution control problem. As motivation, the drawbacks of the existing methods are summarised, for example the need to train neural network weights for unknown stochastic distributions. To overcome these disadvantages, a new transformation for the dynamic probability density function is given by kernel density estimation using interpolation. Based upon this transformation, a representative model is developed and the stochastic distribution control problem is transformed into an optimisation problem. Then, data-based direct optimisation and identification-based indirect optimisation are proposed. In addition, the convergence of the presented algorithms is analysed and their effectiveness is evaluated by numerical examples. In summary, the contributions of this note are as follows: 1) a new data-based probability density function transformation is given; 2) optimisation algorithms are given based on the presented model; and 3) a new research framework is demonstrated as a potential extension of the existing stochastic distribution control framework.
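A minimal sketch of the kernel density estimation step such a transformation builds on (a generic Gaussian-kernel KDE on a fixed grid; the note's specific interpolation scheme is not reproduced here):

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Gaussian kernel density estimate of a PDF evaluated on a fixed grid."""
    # One Gaussian bump per sample, averaged; rows index samples, columns grid points.
    z = (grid[None, :] - samples[:, None]) / bandwidth
    return np.exp(-0.5 * z**2).mean(axis=0) / (bandwidth * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=5_000)   # stand-in for measured outputs
grid = np.linspace(-4.0, 4.0, 201)
pdf_hat = gaussian_kde(samples, grid, bandwidth=0.3)

# The estimate should integrate to ~1 and peak near the true mode at 0.
print(pdf_hat.sum() * (grid[1] - grid[0]), grid[np.argmax(pdf_hat)])
```

The grid values of `pdf_hat` are the kind of finite-dimensional PDF representation that an optimisation-based controller can then steer toward a target shape.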
|
314 |
Nonreciprocal and Non-Spreading Transmission of Acoustic Beams through Periodic Dissipative Structures / Zubov, Yurii, 05 1900 (has links)
Propagation of a Gaussian beam in a layered periodic structure is studied analytically, numerically, and experimentally. It is demonstrated that for a special set of parameters the acoustic beam propagates without diffraction spreading. This propagation is also accompanied by negative refraction of the phase velocity of the Bloch wave. In the study of two-dimensional viscous phononic crystals with asymmetrical solid inclusions, it was discovered that acoustic transmission is nonreciprocal. The nonreciprocity in a static viscous environment is due to broken PT symmetry of the system as a whole, and the difference in transmission arises from two effects: asymmetrical transmission and asymmetrical dissipation. Asymmetrical transmission is caused solely by broken mirror symmetry and can appear even in a lossless system. Asymmetrical dissipation of sound is a time-irreversible phenomenon that arises only if both energy dissipation and broken parity symmetry are present in the system. The numerical results for both types of phononic crystals were verified experimentally. The proposed structures could be exploited as acoustic collimation, rectification, and isolation devices.
|
315 |
Robust and Data-Driven Uncertainty Quantification Methods as Real-Time Decision Support in Data-Driven Models / Algikar, Pooja Basavaraj, 05 February 2025 (has links)
The growing complexity of, and volume of data in, modern engineering and physical systems require robust frameworks for real-time decision-making. Data-driven models trained on observational data enable faster predictions but face key challenges (data corruption, bias, limited interpretability, and uncertainty misrepresentation) that can compromise their reliability. Propagating uncertainties from sources such as model parameters and input features is crucial in data-driven models to ensure trustworthy predictions and informed decisions. Uncertainty quantification (UQ) methods are broadly categorized into surrogate models, which approximate simulators for speed and efficiency, and probabilistic approaches, such as Bayesian models and Gaussian processes, which inherently incorporate uncertainty into predictions. For real-time UQ, leveraging recent data instead of historical records enables more accurate and efficient uncertainty characterization, making it inherently data-driven. In dynamical analysis, the Koopman operator represents a nonlinear system as a linear one by lifting the state into a space of observable functions, enabling data-driven estimation of the operator from measurements. By analyzing its spectral properties (eigenvalues, eigenfunctions, and modes), the Koopman operator reveals key insights into system dynamics and simplifies control design. However, inherent measurement uncertainty poses challenges for efficient estimation with the dynamic mode decomposition and extended dynamic mode decomposition algorithms. This dissertation develops a statistical framework to propagate measurement uncertainties into the elements of the Koopman operator. It also develops robust estimation of model parameters from observational data, which is often corrupted by outliers, in a Gaussian process setting. The proposed approaches adapt to evolving data and are process-agnostic, avoiding reliance on predefined source distributions.
/ Doctor of Philosophy / Modern engineering and scientific systems are increasingly complex and interconnected, operating in environments with significant uncertainties and dynamic changes. Traditional mathematical models and simulations often fall short in capturing the complexity of large-scale, ever-evolving real-world systems, struggling to adapt to dynamic changes and to fully utilize today's data-rich environments. This is especially critical in fields like renewable-integrated power systems and robotics, where real-time decisions must account for uncertainties in the environment, measurements, and operations. The growing availability of observational data, enabled by advanced sensors and computational tools, has driven a shift toward data-driven approaches. Unlike traditional simulators, these models are faster and learn directly from data. However, their reliability depends on robust methods to quantify and manage uncertainties, as corrupted data, biases, and measurement noise challenge their accuracy. This dissertation focuses on characterizing uncertainties at the source using recent data, instead of relying on assumed distributions or historical data as is common in the literature. Given that observational data is often corrupted by outliers, it also develops robust parameter estimation in the Gaussian process setting. A central focus is Koopman operator theory, a transformative framework that converts complex, nonlinear systems into simpler, linear representations. This research integrates measurement uncertainty quantification into Koopman-based models, providing a metric to assess the reliability of the Koopman operator under measurement noise.
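The data-driven Koopman estimation referred to above can be sketched with a plain dynamic mode decomposition least-squares fit (a generic illustration with a made-up noiseless linear system; the dissertation's uncertainty-propagation machinery is not reproduced):

```python
import numpy as np

# Snapshots of a known linear system x[k+1] = A x[k]; DMD should recover A,
# whose eigenvalues coincide with the Koopman eigenvalues when a linear system
# is observed through the identity dictionary of observables.
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])
rng = np.random.default_rng(2)
X = rng.normal(size=(2, 200))          # states at time k (one column per snapshot)
Y = A_true @ X                         # states at time k+1

# Least-squares DMD estimate of the (finite-dimensional) Koopman matrix: K = Y X^+
K_hat = Y @ np.linalg.pinv(X)
eigvals = np.sort(np.linalg.eigvals(K_hat).real)

print(K_hat)
print(eigvals)                         # close to [0.8, 0.9]
```

With measurement noise added to `X` and `Y`, `K_hat` becomes a random matrix, which is the situation the dissertation's statistical framework is built to characterize.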
|
316 |
Multi-layer designs and composite Gaussian process models with engineering applications / Ba, Shan, 21 May 2012 (has links)
This thesis consists of three chapters, covering topics in both the design and modeling aspects of computer experiments as well as their engineering applications. The first chapter systematically develops a new class of space-filling designs for computer experiments by splitting two-level factorial designs into multiple layers. The new design is easy to generate, and our numerical study shows that it can have better space-filling properties than the optimal Latin hypercube design. The second chapter proposes a novel modeling approach for approximating computationally expensive functions that are not second-order stationary. The new model is a composite of two Gaussian processes, where the first one captures the smooth global trend and the second one models local details. The new predictor also incorporates a flexible variance model, which makes it more capable of approximating surfaces with varying volatility. The third chapter is devoted to a two-stage sequential strategy which integrates analytical models with finite element simulations for a micromachining process.
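The composite idea in the second chapter can be sketched as follows (our simplified reading: a long-length-scale GP for the global trend plus a short-length-scale GP on the residual; kernels, data, and hyperparameters are invented, and the chapter's flexible variance model is not reproduced):

```python
import numpy as np

def rbf(x1, x2, length, var):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, length, var, noise):
    """Posterior mean of a zero-mean GP (the building block the composite stacks)."""
    K = rbf(x_train, x_train, length, var) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train, length, var)
    return Ks @ np.linalg.solve(K, y_train)

# Toy response: smooth trend plus fine local detail.
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + 0.1 * np.sin(20 * np.pi * x)
xt = np.linspace(0.0, 1.0, 101)

# First GP (long length scale) captures the global trend; a second GP
# (short length scale) models the residual; predictions are summed.
global_part = gp_predict(x, y, x, length=0.3, var=1.0, noise=1e-4)
resid = y - global_part
pred = (gp_predict(x, y, xt, length=0.3, var=1.0, noise=1e-4)
        + gp_predict(x, resid, xt, length=0.03, var=0.1, noise=1e-6))
```

The split lets each length scale do what it is good at, which is the intuition behind using a composite of two Gaussian processes rather than a single stationary one.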
|
317 |
A Pareto frontier intersection-based approach for efficient multiobjective optimization of competing concept alternatives / Rousis, Damon, 01 July 2011 (has links)
The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Coupled with the large uncertainty in the early stages of design, optimal designs are sought while avoiding the computational burden of excessive function calls when a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts.
This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near the Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the region of the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, selection decisions are most sensitive to the preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto-dominated solutions across all concepts.
Algebraic test problems and a truss design problem are presented as canonical examples for the proposed approach. The methodology is applied to the design of ultra-high bypass ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable geometry bypass nozzle concepts are explored as enablers for increased bypass ratio and potential alternatives over traditional configurations. The method is shown to improve sampling efficiency and provide clusters of feasible designs that motivate a shift towards revolutionary technologies that reduce fuel burn, emissions, and noise on future aircraft.
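The Pareto machinery underlying the approach starts from a non-dominated filter like the following sketch (generic dominance check for minimization; the concept names and objective values are invented):

```python
import numpy as np

def pareto_front(points):
    """Return indices of the non-dominated points (minimization in all objectives)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= in every objective
        # and strictly < in at least one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Two hypothetical concepts with (fuel burn, noise) objectives; the optimality
# shift the thesis focuses on occurs where the two concept fronts cross.
concept_a = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
concept_b = [(2.5, 3.5), (3.0, 2.0), (5.0, 1.5)]
print(pareto_front(concept_a + concept_b))
```

Filtering the pooled designs of all concepts this way reveals which concept owns which portion of the combined frontier, which is where the adaptive sampling is then concentrated.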
|
318 |
A computational model of engineering decision making / Heller, Collin M., 13 January 2014 (has links)
The research objective of this thesis is to formulate and demonstrate a computational framework for modeling the design decisions of engineers. This framework is intended to be descriptive in nature as opposed to prescriptive or normative; the output of the model represents a plausible result of a designer's decision-making process. The framework decomposes the decision into three elements: the problem statement, the designer's beliefs about the alternatives, and the designer's preferences. Multi-attribute utility theory is used to capture designer preferences for multiple objectives under uncertainty. Machine-learning techniques are used to store the designer's knowledge and to make Bayesian inferences regarding the attributes of alternatives. These models are integrated into the framework of a Markov decision process to simulate multiple sequential decisions. The overall framework enables the designer's decision problem to be transformed into an optimization problem statement; the simulated designer selects the alternative with the maximum expected utility. Although utility theory is typically viewed as a normative decision framework, the perspective in this research is that the approach can be used in a descriptive context for modeling rational, non-time-critical decisions by engineering designers. This approach is intended to enable the formalisms of utility theory to be used to design human-subjects experiments involving engineers in design organizations, based on pairwise lotteries and other methods for preference elicitation. The results of these experiments would substantiate the selection of parameters in the model, enabling it to be used to diagnose potential problems in engineering design projects.
The purpose of the decision-making framework is to enable the development of a design process simulation of an organization involved in the development of a large-scale complex engineered system such as an aircraft or spacecraft. The decision model will allow researchers to determine the broader effects of individual engineering decisions on the aggregate dynamics of the design process and the resulting performance of the designed artifact itself. To illustrate the model's applicability in this context, the framework is demonstrated on three example problems: a one-dimensional decision problem, a multidimensional turbojet design problem, and a variable fidelity analysis problem. Individual utility functions are developed for designers in a requirements-driven design problem and then combined into a multi-attribute utility function. Gaussian process models are used to represent the designer's beliefs about the alternatives, and a custom covariance function is formulated to more accurately represent a designer's uncertainty in beliefs about the design attributes.
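The expected-utility selection step can be sketched as follows (the attributes, utility shapes, weights, and predictive beliefs are all illustrative assumptions, not the thesis's turbojet model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical single-attribute utilities for a turbojet-like problem:
# thrust is better high, specific fuel consumption (sfc) better low.
def u_thrust(t):
    return np.clip((t - 20.0) / 10.0, 0.0, 1.0)

def u_sfc(s):
    return np.clip((1.2 - s) / 0.4, 0.0, 1.0)

def multi_attribute_utility(t, s, w_thrust=0.6):
    # Additive multi-attribute form; the weight is an illustrative assumption.
    return w_thrust * u_thrust(t) + (1.0 - w_thrust) * u_sfc(s)

# Beliefs about each alternative's attributes as Gaussian predictive
# distributions (mean, std); expected utility is estimated by Monte Carlo.
alternatives = {
    "design_a": {"thrust": (27.0, 1.0), "sfc": (1.00, 0.05)},
    "design_b": {"thrust": (24.0, 0.5), "sfc": (0.90, 0.02)},
}

expected_utility = {}
for name, belief in alternatives.items():
    t = rng.normal(*belief["thrust"], size=20_000)
    s = rng.normal(*belief["sfc"], size=20_000)
    expected_utility[name] = multi_attribute_utility(t, s).mean()

best = max(expected_utility, key=expected_utility.get)
print(expected_utility, best)
```

In the thesis's framework the Gaussian predictive beliefs would come from the designer's learned models rather than being fixed by hand, but the selection rule, maximize expected multi-attribute utility, is the same.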
|
319 |
A Model Fusion Based Framework For Imbalanced Classification Problem with Noisy Dataset / January 2014 (has links)
abstract: Data imbalance and data noise often coexist in real-world datasets. Data imbalance affects a learning classifier by degrading its recognition power on the minority class, while data noise affects it by providing inaccurate information and thus misleading the classifier. Because of these differences, data imbalance and data noise have been treated separately in the data-mining field. Yet such an approach ignores their mutual effects and may lead to new problems. A desirable solution is to tackle the two issues jointly. Noting the complementary nature of generative and discriminative models, this research proposes a unified model-fusion-based framework to handle imbalanced classification with noisy datasets.
The phase I study focuses on the imbalanced classification problem. A generative classifier, the Gaussian mixture model (GMM), is studied, which can learn the distribution of the imbalanced data to improve the discrimination power on imbalanced classes. By fusing this knowledge into a cost-sensitive SVM (cSVM), the CSG method is proposed. Experimental results show the effectiveness of CSG in dealing with imbalanced classification problems.
The phase II study expands the research scope to include noisy datasets in the imbalanced classification problem. A model-fusion-based framework, K Nearest Gaussian (KNG), is proposed. KNG employs a generative modeling method, the GMM, to model the training data as Gaussian mixtures and form adjustable confidence regions that are less sensitive to data imbalance and noise. Motivated by the K-nearest-neighbor algorithm, the neighboring Gaussians are used to classify testing instances. Experimental results show that KNG greatly outperforms traditional classification methods in dealing with imbalanced classification problems with noisy datasets.
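A stripped-down relative of the generative idea behind KNG (one Gaussian per class instead of a mixture, and likelihoods compared with equal priors so the minority class is not swamped; the 1-D data and numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Imbalanced 1-D toy data: 500 majority samples vs 25 minority samples.
X0 = rng.normal(0.0, 1.0, size=500)    # majority class
X1 = rng.normal(4.0, 1.0, size=25)     # minority class

def fit_gaussian(x):
    """Fit a single Gaussian to one class (a one-component stand-in for a GMM)."""
    return x.mean(), x.std()

def log_likelihood(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

m0, s0 = fit_gaussian(X0)
m1, s1 = fit_gaussian(X1)

def predict(x):
    # Compare class-conditional likelihoods only, deliberately ignoring the
    # 20:1 class priors so minority points are not automatically outvoted.
    return (log_likelihood(x, m1, s1) > log_likelihood(x, m0, s0)).astype(int)

x_test = np.array([-1.0, 0.5, 3.5, 4.5])
print(predict(x_test))                 # expect [0, 0, 1, 1] for these points
```

KNG goes further by using several neighboring mixture components as local confidence regions, which also dampens the influence of noisy labels; the sketch above only conveys the generative-classification core.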
The phase III study addresses feature selection and parameter tuning for the KNG algorithm. To further improve its performance, a particle swarm optimization based method (PSO-KNG) is proposed. PSO-KNG formulates model parameters and data features into the same particle vector and can thus search for the best feature and parameter combination jointly. The experimental results show that PSO can greatly improve the performance of KNG, with better accuracy at much lower computational cost. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2014
|
320 |
Desenvolvimento e aplicação de métodos quânticos compostos baseados na teoria G3 para o estudo de propriedades atômicas, moleculares e mecanismo reacional de nitração do fenol / Development and application of composite quantum methods based on G3 theory for the study of atomic, molecular properties and phenol nitration mechanism / Rocha, Carlos Murilo Romero, 1988-, 07 October 2013 (has links)
Advisor: Rogério Custodio / Master's dissertation - Universidade Estadual de Campinas, Instituto de Química / Made available in DSpace on 2018-08-23T02:21:16Z (GMT).
Previous issue date: 2013 / Resumo (translated): In the present work the CEP pseudopotential was implemented in the G3(MP2)B3 theory, yielding the adaptation named G3CEP(MP2)B3. The method was applied to the study of 247 standard enthalpies of formation, 104 ionization energies, 63 electron affinities, 10 proton affinities and 22 atomization energies for a set of molecules containing representative elements of the 2nd, 3rd and 4th periods of the periodic table, totalling 446 thermochemical data. The mean absolute deviations with respect to experiment were 1.60 kcal mol⁻¹ and 1.41 kcal mol⁻¹ for the G3CEP(MP2)B3 and G3(MP2)B3 theories, respectively, with reductions of 10-40% in CPU time from the implementation of the CEP pseudopotential. In addition, the evaluation of other properties such as atomic charges, dipole moments and HOMO orbital energies resulted in mean absolute deviations, with respect to the original G3(MP2)B3 method, of 0.203 e, 0.044 D and 0.002 Eh, respectively. Another objective of the present work was the application of the G3CEP(MP2)B3 method to the study of the gas-phase nitration mechanism of phenol promoted by the NO2⁺ electrophile. This mechanistic evaluation showed the occurrence of electron transfer from the aromatic π system to the nitronium ion in steps preceding formation of the σ complex, results consistent with the SET (Single Electron Transfer) mechanism hypothesis. Besides the electrophilic aromatic substitution mechanism, the present study showed the occurrence, in the gas phase, of alternative reaction paths through which transfer of the O species to the aromatic π system of phenol would be observed.
The excellent agreement between the G3CEP(MP2)B3 theory and the more accurate Gn methods (such as G3(MP2)B3, G3CEP and G3) in predicting activation barriers revealed promising prospects for the applicability of G3CEP(MP2)B3 in the mechanistic study of organic reactions, as well as in the accurate prediction of internal rotational barriers, at reduced computational cost. / Abstract: In this work, the CEP (Compact Effective Potential) pseudopotential was adapted in the G3(MP2)B3 theory, providing a theoretical alternative referred to as G3CEP(MP2)B3 for calculations involving second-, third-, and fourth-row representative elements. The G3CEP(MP2)B3 theory was applied in the study of 247 standard enthalpies of formation, 104 ionization energies, 63 electron affinities, 10 proton affinities and 22 atomization energies of a test set comprising 446 experimental energies. The total mean absolute deviation was 1.60 kcal mol⁻¹ for G3CEP(MP2)B3 theory against 1.41 kcal mol⁻¹ from all-electron G3(MP2)B3 calculations, with reductions of 10-40% in CPU time for the implemented theory. Furthermore, the assessment of other properties such as atomic charges, dipole moments and highest occupied molecular orbital (HOMO) energies resulted in mean absolute deviations, compared with those predicted by the original G3(MP2)B3 theory, of 0.203 e, 0.044 D and 0.002 Eh, respectively. In addition to the adaptation and assessment of G3CEP(MP2)B3 theory, the purpose of this work was also the application of the implemented theory in the study of the phenol nitration mechanism, in the gaseous phase, promoted by the NO2⁺ electrophile. The mechanistic evaluation at the G3CEP(MP2)B3 level showed the occurrence of a single-electron-transfer step from the aromatic π-system to the nitronium ion prior to the σ-complex formation, in agreement with the SET (Single Electron Transfer) mechanism.
Besides the electrophilic aromatic substitution reaction, the present work provided insights into alternative reaction mechanisms through which O species are transferred to the phenol aromatic π-system. Excellent agreement between G3CEP(MP2)B3 theory and other more accurate Gn theories (for instance G3(MP2)B3, G3 and G3CEP) in predicting activation barriers showed that the implemented theory would be a useful tool in the study of reaction mechanisms and also for predicting internal rotational barriers with a significantly reduced computational cost / Master's / Physical Chemistry / Master in Chemistry
|