261 |
Parametric Design and Optimization of an Upright of a Formula SAE Car. Kaisare, Shubhankar Sudesh, 06 June 2024.
The success of any racing car hinges on three key factors: its speed, handling, and reliability. In a highly competitive environment where lap times are extremely tight, even slight variations in components can significantly affect performance and, consequently, lap times. At the heart of a race car's performance lies the upright—a critical component of its suspension system. The upright serves to link the suspension arms to the wheels, effectively transmitting steering and braking forces to the suspension setup. Achieving optimal performance requires finding the right balance between lightweight design and ample stiffness, crucial for maintaining precise steering geometry and overall vehicle dynamics, especially under intense loads.
Furthermore, there is a need to explore methods of structural optimization and to integrate Finite Element (FE) models seamlessly into the mathematical optimization process. This thesis explores a technique for parametric structural optimization that uses finite element analysis and response surfaces to minimize the weight of the upright. Constraints on frequency, stress, displacement, and fatigue are taken into consideration during the optimization.
A parametric finite element model of the upright was designed, along with the mathematical formulation of the optimization problem as a nonlinear programming problem, based on the design objectives and suspension geometry. Through parameter sensitivity analysis, three design variables were chosen from a pool of five, and response surfaces were constructed to represent the constraints and the objective function used to solve the optimization problem via Sequential Quadratic Programming (SQP).
To streamline the process of parameter sensitivity analysis and response surface development, a Python scripting procedure was employed to automate the finite element job analysis and results extraction. The optimized upright design resulted in an overall weight reduction of 25.3% from the maximum-weight design of the parameterized upright. / Master of Science / The success of any racing car depends on three key factors: its speed, handling, and reliability. In a highly competitive environment where lap times are extremely tight, even slight variations in components can significantly affect performance and, consequently, lap times. At the heart of a race car's performance lies the upright—a critical component of its suspension system. The upright serves to link the suspension arms to the wheels, effectively transmitting steering and braking forces to the suspension setup. To achieve the best performance, the upright must be as light as possible while remaining strong enough to ensure that the car is predictable when turning into a corner or while braking.
Additionally, there is a need to explore methods of structural optimization and integrate finite element analysis seamlessly into the optimization process. Finite element analysis (FEA) is the use of part models, simulations, and calculations to predict and understand how an object might behave under certain physical conditions. This thesis examines a technique for optimizing the upright by designing it with numerous adjustable features for testing and then utilizing response surfaces to minimize its weight. Throughout this process, factors such as vibration, stress, deformation, and fatigue are carefully considered.
A detailed parametric finite element model of the upright was developed, alongside the formulation of the optimization problem as a nonlinear programming problem, based on the objectives of the design and the geometry of the suspension. Through rigorous testing of the parameters for optimization potential, design variables were selected for optimization. Response surfaces were then constructed to represent the constraints and objective function needed to solve the optimization problem using Sequential Quadratic Programming (SQP).
To enhance the efficiency of this process, a Python script was created to handle specific tasks within the finite element solver. This automation streamlined the analysis of the finite element model and the extraction of results. Ultimately, the optimized design of the upright yielded a 25.3% reduction in weight compared to its maximum weight configuration.
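The response-surface-plus-SQP step this abstract describes can be sketched in a few lines. The quadratic "response surfaces" below are invented stand-ins for the thesis's fitted surfaces, and SciPy's SLSQP routine plays the role of the SQP solver:

```python
# Hypothetical sketch: minimize a response-surface mass model under a stress
# constraint with SQP. Both surfaces are invented placeholders, not the
# thesis's fitted models.
import numpy as np
from scipy.optimize import minimize

def mass(x):
    # placeholder response surface for upright mass (kg) in 3 design variables
    return 1.2 - 0.1 * x.sum() + 0.05 * (x ** 2).sum()

def max_stress(x):
    # placeholder response surface for peak von Mises stress (MPa)
    return 180.0 + 40.0 * x[0] - 25.0 * x[1] + 10.0 * x[2]

res = minimize(
    mass,
    x0=np.zeros(3),
    method="SLSQP",  # SciPy's sequential quadratic programming implementation
    bounds=[(-1.0, 1.0)] * 3,  # normalized design-variable ranges
    constraints=[{"type": "ineq", "fun": lambda x: 250.0 - max_stress(x)}],
)
print(res.success, res.x, res.fun)
```

In the actual workflow, the surfaces would be fit to automated FE results rather than written by hand, but the optimizer call has the same shape.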
|
262 |
Development and Applications of Finite Elements in Time Domain. Park, Sungho, 04 December 1996.
A bilinear formulation is used to develop the time finite element method (TFM) for obtaining transient responses of linear and nonlinear, damped and undamped systems. The formulation, used in the h-, p-, and hp-versions, is extended and found to be readily amenable to multi-degree-of-freedom systems. The resulting linear and nonlinear algebraic equations for the transient response are differentiated to obtain the sensitivity of the response with respect to various design parameters. The present developments were tested on a series of linear and nonlinear examples and were found to yield, when compared with other methods, excellent results for both the transient response and its sensitivity to system parameters. Most results were obtained using Legendre polynomials as basis functions, though in some cases other orthogonal polynomials, namely Hermite, Chebyshev, and integrated Legendre polynomials, were also employed (to no great advantage). A key advantage of the TFM, often overlooked in its past applications, is the ease with which the sensitivity of the transient response with respect to various design parameters can be obtained; this sensitivity is a key requirement for gradient-based parameter identification schemes, which typically spend considerable effort computing it. An identification procedure based on the TFM is therefore developed and tested on a number of nonlinear single- and two-degree-of-freedom system problems. The method is simple, since the sensitivity of the response to system parameters is obtained by differentiating the algebraic equations rather than the original differential equations.
These sensitivities are used in a Levenberg-Marquardt iterative direct method to identify parameters for nonlinear single- and two-degree-of-freedom systems. The measured response was simulated by integrating the example nonlinear systems using the given values of the system parameters. To study the influence of measurement noise on parameter identification, random noise is added to the simulated response. The accuracy and efficiency of the present method are compared to a previously available approach that employs a multistep method to integrate the nonlinear differential equations. It is seen that, for the same accuracy, the present approach requires fewer data points. Finally, a TFM for optimal control problems based on a Hamiltonian weak formulation is proposed, adopting the p- and hp-versions as the finite element discretization process. The p-version can improve the accuracy of the solution by adding more unknowns to each element without refining the mesh, and the use of hierarchical shape functions can lead to significant savings in computational effort for a given accuracy. A set of Legendre polynomials is chosen as the higher-order shape functions and applied to two simple minimization problems in optimal control. The proposed formulation provides very accurate results for these problems. / Ph. D.
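The identification loop sketched in this abstract, simulating a response, adding noise, and recovering the parameters with a Levenberg-Marquardt least-squares fit, can be illustrated on a toy system. The oscillator, parameter values, and noise level below are invented for illustration; SciPy's "lm" solver stands in for the thesis's implementation:

```python
# Hypothetical sketch: identify damping and stiffness of a linear damped
# oscillator from a simulated noisy response via Levenberg-Marquardt.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t = np.linspace(0.0, 5.0, 200)

def response(params):
    c, k = params  # damping and stiffness to identify
    rhs = lambda tt, y: [y[1], -c * y[1] - k * y[0]]
    sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], t_eval=t, rtol=1e-8)
    return sol.y[0]

rng = np.random.default_rng(0)
measured = response([0.4, 9.0]) + 0.01 * rng.standard_normal(t.size)

# Levenberg-Marquardt fit, starting from a deliberately wrong guess
fit = least_squares(lambda p: response(p) - measured, x0=[0.5, 8.0], method="lm")
print(fit.x)
```

Here the residual Jacobian is obtained by finite differences on the ODE solve; the TFM's point is precisely that differentiating the algebraic equations gives these sensitivities more cheaply.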
|
263 |
Asymptotic Results for Model Robust Regression. Starnes, Brett Alden, 31 December 1999.
Since the mid-1980s, many statisticians have studied methods for combining parametric and nonparametric estimates to improve the quality of fits in a regression problem. Notably, in 1987 Einsporn and Birch proposed the Model Robust Regression estimate (MRR1), in which estimates of the parametric function, ƒ, and the nonparametric function, 𝑔, were combined in a straightforward fashion via a mixing parameter, λ. This technique was studied extensively for small samples and was shown to be quite effective at modeling various unusual functions. In 1995, Mays and Birch developed the MRR2 estimate as an alternative to MRR1. This model involves first forming the parametric fit to the data and then adding in an estimate of 𝑔 according to the lack of fit demonstrated by the error terms. Using small samples, they illustrated the superiority of MRR2 over MRR1 in most situations. In this dissertation we develop asymptotic convergence rates for both MRR1 and MRR2 in OLS and GLS (maximum likelihood) settings. In many of these settings, it is demonstrated that the user of MRR1 or MRR2 achieves the best convergence rates available regardless of whether or not the model is properly specified. This is the "Golden Result of Model Robust Regression". It turns out that the selection of the mixing parameter is paramount in determining whether or not this result is attained. / Ph. D.
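The MRR1 idea of blending ƒ and 𝑔 via λ can be shown in a toy form. The data, kernel smoother, bandwidth, and fixed λ below are illustrative choices, not the dissertation's (which, in particular, selects λ from the data):

```python
# Hypothetical MRR1-style blend: parametric OLS line plus nonparametric
# kernel smoother, mixed with a fixed lambda. All choices are illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.3 * np.sin(6 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

# parametric component f-hat: ordinary least squares line
X = np.column_stack([np.ones_like(x), x])
f_hat = X @ np.linalg.lstsq(X, y, rcond=None)[0]

# nonparametric component g-hat: Nadaraya-Watson kernel smoother
h = 0.05
W = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * h ** 2))
g_hat = (W @ y) / W.sum(axis=1)

lam = 0.5  # mixing parameter (MRR1 chooses this from the data)
y_mrr = (1 - lam) * f_hat + lam * g_hat
print(float(np.mean((y_mrr - y) ** 2)))
```

Even this crude half-and-half blend tracks the wiggly truth better than the misspecified line alone, which is the intuition behind the "Golden Result".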
|
264 |
Coherent Mitigation of Radio Frequency Interference in 10-100 MHz. Lee, Kyehun, 07 October 2008.
This dissertation describes methods of mitigating radio frequency interference (RFI) in the frequency range 10-100 MHz, developing and evaluating coherent methods in which the RFI is subtracted from the afflicted data, nominally resulting in no distortion of the underlying signals. This approach is of interest in weak-signal applications such as radio astronomy, where the signal of interest may have an interference-to-noise ratio much less than one and so can be easily distorted by other methods. Environmental noise in this band is strong and non-white, so a realistic noise model is developed, with which we characterize the performance of signal parameter estimation, a key component of the proposed algorithms. Two classes of methods are considered: "generic" parameter estimation/subtraction (PE/S) and a modulation-specific form known as demodulation-remodulation ("demod-remod") PE/S. It is demonstrated for RFI in the form of narrowband FM and broadcast FM that generic PE/S severely distorts the underlying signals of interest, whereas demod-remod PE/S is less prone to this problem. Demod-remod PE/S is also applied and evaluated for RFI in the form of digital TV signals. In both cases, we compare the performance of demod-remod PE/S with that of a traditional adaptive canceling method employing a reference antenna, and propose a hybrid method to further improve performance. A new metric for "toxicity" is defined and employed to determine the degree to which RFI mitigation damages the underlying signal of interest. / Ph. D.
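A bare-bones version of the generic PE/S idea, estimate the interferer's parameters coherently, then subtract it, can be shown for a single narrowband tone. The frequencies, amplitudes, and known-frequency assumption below are invented for illustration:

```python
# Hypothetical PE/S sketch: a strong narrowband interferer at a known
# frequency is fit by least-squares projection and coherently subtracted,
# leaving the much weaker signal of interest. Values are illustrative.
import numpy as np

fs = 1000.0
t = np.arange(2048) / fs
weak = 0.01 * np.sin(2 * np.pi * 37.0 * t)        # weak signal of interest
rfi = 1.0 * np.cos(2 * np.pi * 60.0 * t + 0.7)    # narrowband interferer
x = weak + rfi

# estimate the interferer's complex amplitude by projection onto its basis
basis = np.exp(2j * np.pi * 60.0 * t)
amp = 2.0 * np.mean(x * np.conj(basis))           # amplitude/phase estimate
cleaned = x - np.real(amp * basis)                # coherent subtraction

print(np.std(x - weak), np.std(cleaned - weak))   # residual RFI before/after
```

Real PE/S must also estimate the frequency and handle modulated waveforms, which is where the demod-remod variant and the "toxicity" metric come in.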
|
265 |
A Polynomial Chaos Approach to Control Design. Templeton, Brian Andrew, 11 September 2009.
A method utilizing H2 control concepts and the numerical method of Polynomial Chaos was developed in order to create a novel robust probabilistically optimal control approach. This method was created for the practical reason that uncertainty in parameters tends to be inherent in system models. As such, the development of new methods utilizing probability density functions (PDFs) was desired.
From a more theoretical viewpoint, the use of Polynomial Chaos for studying and designing control systems has not been thoroughly investigated. The current work looks at expanding the H2 and related Linear Quadratic Regulator (LQR) control problems for systems with parametric uncertainty. This allows solving deterministic linear equations that represent probabilistic linear differential equations. The application of common LTI (Linear Time Invariant) tools to these expanded systems is theoretically justified and investigated. Examples demonstrating the optimization process for minimizing the H2 norm, and parallels to LQR design, are presented.
The dissertation begins with a thorough background section that reviews the necessary probability theory and explains the connection between Polynomial Chaos and dynamic systems. Next, an overview of related control methods is given, along with an in-depth review of the current Polynomial Chaos literature. Formal analysis related to the use of Polynomial Chaos is then provided, laying the groundwork for the general method of control design using Polynomial Chaos and H2. Finally, an experimental section demonstrates controller synthesis for a constructed probabilistic system; the experimental results lend support to the method. / Ph. D.
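The non-intrusive flavor of the Polynomial Chaos machinery can be sketched by computing the zeroth expansion coefficient, the mean response, of a toy first-order system whose decay rate is uniformly uncertain, using Gauss-Legendre quadrature over the uncertain parameter and checking against Monte Carlo. The system and distribution are illustrative, not the dissertation's:

```python
# Hypothetical sketch: mean of x(t) = exp(-k t) with k ~ Uniform(0.5, 1.5),
# computed via Legendre (Gauss-Legendre) quadrature vs. Monte Carlo.
import numpy as np

t = 1.0
nodes, weights = np.polynomial.legendre.leggauss(8)  # xi in [-1, 1]
k = 1.0 + 0.5 * nodes                                # map xi to k in [0.5, 1.5]
pc_mean = 0.5 * np.sum(weights * np.exp(-k * t))     # E[x(t)] by quadrature

rng = np.random.default_rng(3)
mc_mean = np.exp(-rng.uniform(0.5, 1.5, 200_000) * t).mean()
print(float(pc_mean), float(mc_mean))
```

Eight quadrature nodes already match the exact value, e^{-0.5} - e^{-1.5} ≈ 0.3834, far more cheaply than the 200,000 Monte Carlo samples, which is the efficiency argument for PC-based design.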
|
266 |
Time-Variant Components to Improve Bandwidth and Noise Performance of Antennas. Loghmannia, Pedram, 18 January 2021.
Without noise, a wireless system would be able to transmit and receive signals over an arbitrarily long distance. However, practical wireless systems are not noise-free, leading to a limited communication range. Thus, the design of low-noise devices (such as antennas, amplifiers, and filters) is essential to increase the communication range. It is also well known that the noise performance of a receiving radio is primarily determined by the front end, including the antenna, filter, and low-noise amplifier. In our first design, we reduce the noise level of the receiving system by integrating a parametric amplifier into a slot antenna. The parametric amplifier utilizes nonlinear and/or time-variant properties of reactive elements (capacitors and/or inductors) to amplify radio frequency signals, and it offers superior noise performance due to its reactive nature. In our second design, we utilize the parametric amplifier to build a low-noise active matching circuit for electrically small antennas. Using Chu's limit and the Bode-Fano bound, we show a trade-off between the noise and bandwidth of electrically small antennas. In particular, to make the small antenna wideband, one needs to introduce a mismatch between the antenna and the amplifier. Due to the mismatch, the effect of the low-noise amplifier becomes even more critical, which is why we choose the parametric amplifier as a natural candidate. As a realized design, a loop antenna is configured as a receiver and connected to the up-converter parametric amplifier, leading to a low-noise, wideband active matching circuit. The structure is simulated using a hybrid simulation technique, and its noise performance is compared to a transistor counterpart. Our simulation and measurement results show more than 20 times bandwidth improvement at the expense of a 2 dB increase in the noise figure compared to the passive antenna counterpart.
/ Doctor of Philosophy / Nowadays, there is high demand for compact, high-speed electronic devices such as cellphones, tablets, and laptops, so it is essential to design miniaturized wideband antennas. Unfortunately, a trade-off exists between the bandwidth and gain of small antennas. The trade-off follows from fundamental limits and extends to all small, passive antennas, regardless of their shape or structure. By using an active component such as an amplifier, the gain-bandwidth trade-off can be improved. However, we show that the active component adds noise to the receiving system, leading to a new trade-off between noise and bandwidth in receiving structures. In other words, utilizing an active component does not solve the problem; it merely replaces the gain-bandwidth trade-off with a noise-bandwidth trade-off. To improve the noise-bandwidth trade-off, we propose a new receiving structure that uses a parametric amplifier instead of a commercially available transistor amplifier. The noise performance of the parametric amplifier is far better than that of the transistor amplifier, leading to lower noise over the specified bandwidth. In particular, we improved the noise performance of the receiving system by 3 dB, which doubles the communication distance.
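The claim that the front end dominates receiver noise can be illustrated with the standard Friis cascade formula; the gain and noise-factor values below are invented round numbers, not measurements from this work:

```python
# Friis formula for cascaded noise factor: F = F1 + (F2 - 1)/G1 + ...
# Illustrative values only: a low-noise stage (F = 1.5, G = 100) and a
# noisy stage (F = 10, G = 10), in both orders.
def friis_noise_factor(stages):
    """stages: list of (gain_linear, noise_factor_linear), first stage first."""
    excess, gain = 0.0, 1.0
    for g, f in stages:
        excess += (f - 1.0) / gain  # each stage's noise divided by preceding gain
        gain *= g
    return excess + 1.0

lna_first = friis_noise_factor([(100.0, 1.5), (10.0, 10.0)])  # LNA up front
lna_last = friis_noise_factor([(10.0, 10.0), (100.0, 1.5)])   # noisy stage up front
print(lna_first, lna_last)
```

With the low-noise stage first, the cascade noise factor is 1.59; with the noisy stage first, it balloons to 10.05, hence the emphasis on a low-noise amplifier (here, a parametric one) right at the antenna.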
|
267 |
Efficient 𝐻₂-Based Parametric Model Reduction via Greedy Search. Cooper, Jon Carl, 19 January 2021.
Dynamical systems are mathematical models of physical phenomena widely used throughout the world today. When a dynamical system is too large to effectively use, we turn to model reduction to obtain a smaller dynamical system that preserves the behavior of the original. In many cases these models depend on one or more parameters other than time, which leads to the field of parametric model reduction.
Constructing a parametric reduced-order model (ROM) is not an easy task, and for very large parametric systems it can be difficult to know how well a ROM models the original system, since assessing this usually involves many computations with the full-order system, which is precisely what we want to avoid. Building on efficient 𝐻-infinity approximations, we develop a greedy algorithm for efficiently modeling large-scale parametric dynamical systems in an 𝐻₂ sense.
We demonstrate the effectiveness of this greedy search on a fluid problem, a mechanics problem, and a thermal problem. We also investigate Bayesian optimization for solving the optimization subproblem, and end with extending this algorithm to work with MIMO systems. / Master of Science / In the past century, mathematical modeling and simulation has become the third pillar of scientific discovery and understanding, alongside theory and experimentation. Mathematical models are used every day, and are essential to modern engineering problems. Some of these mathematical models depend on quantities other than just time, parameters such as the viscosity of a fluid or the strength of a spring. These models can sometimes become so large and complicated that it can take a very long time to run simulations with the models. In such a case, we use parametric model reduction to come up with a much smaller and faster model that behaves like the original model. But when these large models vary highly with the parameters, it can also become very expensive to reduce these models accurately.
Algorithms already exist for quickly computing reduced-order models (ROMs) with respect to one measure of how "good" the ROM is. In this thesis we develop an algorithm for quickly computing the ROM with respect to a different measure - one that is more closely tied to how the models are simulated.
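The greedy search described above can be caricatured in a few lines: repeatedly add the candidate parameter at which a surrogate built from the already-selected parameters errs most. The toy "full model" and nearest-neighbor surrogate below are invented stand-ins for the thesis's ROMs and error indicator:

```python
# Hypothetical greedy parameter-sampling loop. The "full model" output and
# nearest-neighbor surrogate are toys; a real method would use ROMs and a
# cheap error estimator instead of exact full-model evaluations.
import numpy as np

params = np.linspace(0.0, 1.0, 101)  # candidate parameter grid
full = lambda p: np.array([np.sin(3 * p), np.cos(5 * p), p ** 2])  # toy output

selected = [0.5]  # start from a single training parameter
for _ in range(3):
    def surrogate(p):
        # crudest possible surrogate: reuse the nearest selected parameter
        nearest = min(selected, key=lambda s: abs(s - p))
        return full(nearest)
    errors = np.array([np.linalg.norm(full(p) - surrogate(p)) for p in params])
    selected.append(float(params[np.argmax(errors)]))  # greedy step

print(sorted(selected))
```

The greedy step naturally pushes new samples toward the edges and gaps of parameter space, where the current surrogate is worst, which is the behavior the thesis exploits at scale.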
|
268 |
Parametric Model for Assessing Factors that Influence Highway Bridge Service Life. Liu, Jianqiu, 13 March 2009.
Infrastructure management must move from a perspective that may singularly emphasize facility condition assessment to a broader view that involves nonphysical factors, which may substantially impact facility performance and shorten its service life. Socioeconomic, technological, regulatory, and user value changes can substantially increase the service expectations of existing facilities. Based on a theoretical framework drawn from prior work, this research develops a new approach to model infrastructure performance and assess factors that influence the remaining service life of highway bridges. Key parameters that impact the serviceability of highway bridges are identified and incorporated into a system dynamics model. This platform supports parametric scenario analysis and is applied in several cases to test how various factors influence bridge service life and performance. This decision support system provides a new approach for modeling serviceability over time and gives decision-makers an indication of: (a) the gap between society's service expectations and the service level provided and (b) the remaining service life of a highway bridge. / Ph. D.
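The stock-and-flow logic described above, physical condition decaying while societal expectations rise, with service life ending at the crossover, can be caricatured as a toy system-dynamics loop. All rates and initial values below are invented placeholders, not calibrated model inputs:

```python
# Hypothetical system-dynamics caricature: service life ends when the
# service level provided falls below society's rising expectations.
condition, expectation = 100.0, 60.0     # illustrative initial levels
decay_rate, expectation_growth = 1.5, 0.8  # illustrative flows per year

year, service_life = 0, None
while year < 100:
    year += 1
    condition -= decay_rate              # physical deterioration flow
    expectation += expectation_growth    # socioeconomic/regulatory expectation flow
    if condition < expectation:
        service_life = year              # expectation gap closes: end of service
        break

print(service_life)
```

The decision-support value is exactly this kind of scenario run: changing either flow rate changes the crossover year, i.e., the remaining service life.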
|
269 |
Rates and dates: Evaluating rhythmicity and cyclicity in sedimentary and biomineral records. Dexter, Troy Anthony, 05 June 2011.
It is important to evaluate periodic fluctuations in environment or climate recorded through time to better understand the nature of Earth's history as well as to develop ideas about what the future may hold. There exist numerous proxies by which these environmental patterns can be demonstrated and analyzed through various time scales; from sequence stratigraphic bundles of transgressive-regressive cycles that demonstrate eustatic changes in global sea level, to the geochemical composition of a skeleton that records fluctuations in ocean temperature through the life of the biomineralizing organism. This study examines some of the methods by which we can analyze environmental fluctuations recorded at different time scales. The first project examines the methods by which extrabasinal orbital forcing (i.e. Milankovitch cycles) can be tested in the rock record. In order to distinguish these patterns, computer generated carbonate rock records were simulated with the resulting outcrops tested using common methods. These simulations were built upon eustatic sea level fluctuations with periods similar to what has been demonstrated in the rock record, as well as maintaining the many factors that affect the resultant rock composition such as tectonics, subsidence, and erosion. The result demonstrated that substantially large sea level fluctuations, such as those that occur when the planet is in an icehouse condition, are necessary to produce recognizable and preservable patterns that are otherwise overwhelmed by other depositional factors. The second project examines the temporal distribution of the bivalve Semele casali from Ubatuba Bay, Brazil by using amino acid racemization (AAR) calibrated with ¹⁴C radiometric dates. This data set is one of the largest ever compiled and demonstrates that surficial shell assemblages in the area have very long residence times extending back in time 10,000 years. 
The area has had very little change in sea level, and the AAR ratios, which are highly temperature dependent, could be calibrated across sites varying from 10 to 53 meters in water depth. Long time scales of dated shells provide an opportunity to study climate fluctuations such as the El Niño-Southern Oscillation. The third project describes a newly developed method for estimating growth rates in organisms using closely related species from similar environments, statistically analyzed for error using a jackknife-corrected parametric bootstrap. As geochemical analyses become more precise while using less material, data can be collected through the skeleton of a biomineralizing organism, revealing information about environmental shifts at scales shorter than a year. For such studies, the growth rate of an organism has substantial effects on the interpretation of results, and such rates are difficult to ascertain, particularly in fossilized specimens. This method removes the need for direct measures of growth rates, and even conservative growth-rate estimates are useful in constraining the age ranges of geochemical intra-skeletal studies, thus elucidating the likely time period under analysis. This study assesses the methods by which periodic environmental fluctuations at greatly varying time scales can be used to evaluate our understanding of Earth processes using rigorous quantitative strategies. / Ph. D.
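The jackknife-corrected parametric bootstrap mentioned for the third project can be sketched on made-up data: a parametric bootstrap gives an estimate from related-species growth rates, and a leave-one-out jackknife corrects its bias. The rates, sample sizes, and normality assumption below are all illustrative:

```python
# Hypothetical jackknife-corrected parametric bootstrap for a growth-rate
# estimate. Data values and the normal model are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
rates = np.array([2.1, 2.4, 1.9, 2.8, 2.2, 2.5])  # growth rates (mm/yr), invented

def boot_estimate(sample, n_boot=2000):
    # parametric bootstrap: resample from a normal fit to the sample
    mu, sigma = sample.mean(), sample.std(ddof=1)
    draws = rng.normal(mu, sigma, size=(n_boot, sample.size))
    return draws.mean(axis=1).mean()

theta = boot_estimate(rates)
# jackknife over leave-one-out samples to correct the bootstrap bias
loo = np.array([boot_estimate(np.delete(rates, i)) for i in range(rates.size)])
n = rates.size
theta_jack = n * theta - (n - 1) * loo.mean()
print(float(theta_jack))
```

For a simple mean the correction is nearly a no-op; its value shows up for biased statistics, such as growth-rate estimators, where the leave-one-out replicates expose and remove first-order bias.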
|
270 |
Fusing Modeling and Testing to Enhance Environmental Testing Approaches. Devine, Timothy Andrew, 09 July 2019.
A proper understanding of the dynamics of a mechanical system is crucial to ensure the highest levels of performance. The understanding is frequently developed through modeling and testing of components. Modeling provides a cost-effective method for rapidly developing knowledge of the system; however, a model is incapable of accounting for fluctuations that occur in physical spaces. Testing, when performed properly, provides a near-exact understanding of how a part or assembly functions; however, it can be expensive both fiscally and temporally.
Often, practitioners of the two disciplines work in parallel, never intersecting with the other group. Further advances in fusing modeling and testing can produce a more comprehensive understanding of dynamic systems while remaining inexpensive in terms of computation, financial cost, and time. The goal of the presented work is therefore to develop ways to merge the two branches so that test data can be included in models of operational systems. This is done through a series of analytical and experimental tasks examining the boundary conditions of various systems.
The first avenue explored was an attempt to model unknown boundary conditions from an operational environment by modeling the same system in known configurations in a controlled environment, such as a laboratory test. An analytical beam was studied under applied environmental loading, with grounding stiffnesses added to simulate an operational condition, and an attempt was made to match its response with a free-boundary beam using a reduced number of excitation points. Due to the properties of the inverse-problem approach taken, the responses of the two systems matched at control locations; at non-control locations, however, the responses showed a large degree of variation. From the mismatch in mechanical impedance, it is apparent that improperly represented boundary conditions can have drastic effects on the accuracy of models and recreated tests.
With the focus now directed toward modeling and testing of boundary conditions, methods were explored to combine the two approaches. The second portion of this work focuses on modeling an unknown boundary connection by using a collection of similar, testable boundary conditions to interpolate parametrically to the unknown configuration. This was done by using data-driven models of the known systems as the interpolating functions, with the system boundary stiffness as the varied parameter. This approach yielded parametric-model responses nearly identical to the original system responses for analytical systems, and showed early signs of promise for an experimental beam.
After the two studies, the potential for extending the parametric data-driven model approach to other systems is discussed, along with improvements to the approach and the benefits it brings. / Master of Science / A proper understanding of the dynamics of a mechanical system in a severe environment is crucial to ensure the highest levels of performance. The understanding is frequently developed through modeling and testing of components. Modeling provides a cost-effective method for rapidly developing knowledge of the system; however, a model is incapable of accounting for fluctuations that occur in physical spaces. Testing, when performed properly, provides a near-exact understanding of how a part or assembly functions; however, it can be expensive both fiscally and temporally. Often, practitioners of the two disciplines work in parallel, never intersecting with the other group and favoring one approach over the other for various reasons. Further advances in fusing modeling and testing can produce a more comprehensive understanding of dynamic systems subject to environmental excitation while remaining inexpensive in terms of computation, financial cost, and time.
Due to this, the presented work aims to develop ways to merge the two branches to include test data in models for operational systems. This is done through a series of analytical and experimental tasks examining the boundary conditions of various systems and attempting to replicate the system response using inverse approaches at first. This is then proceeded by modeling boundary stiffnesses using data-driven modeling and parametric modeling approaches. The validity and impact these methods may have are also discussed.
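The parametric interpolation idea, using data-driven models at testable boundary stiffnesses to predict an untested configuration, can be shown on the simplest possible system. The mass, stiffness grid, and use of the natural frequency as the "data-driven model" below are invented for illustration:

```python
# Hypothetical sketch: "measure" the natural frequency of a spring-grounded
# oscillator at known boundary stiffnesses, then interpolate to an untested
# stiffness. Values are illustrative placeholders.
import numpy as np

m = 2.0                                        # mass (kg)
freq = lambda k: np.sqrt(k / m) / (2 * np.pi)  # natural frequency (Hz)

k_known = np.array([100.0, 400.0, 900.0])      # testable boundary stiffnesses (N/m)
f_known = freq(k_known)                        # "measured" models at known configs

k_new = 625.0                                  # untested operational stiffness
f_interp = float(np.interp(k_new, k_known, f_known))  # parametric interpolation
print(f_interp, float(freq(k_new)))
```

Even linear interpolation in the stiffness parameter lands close to the true frequency here; the thesis's data-driven models play the role of `f_known`, interpolated over the boundary-stiffness parameter in the same spirit.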
|