231
QUANTUM ACTIVATION FUNCTIONS FOR NEURAL NETWORK REGULARIZATION. Christopher Alfred Hickey (16379193), 18 June 2023 (has links)
<p> The Bias-Variance Trade-off, where restricting the size of a hypothesis class can limit the generalization error of a model, is a canonical problem in Machine Learning, and a particular issue for high-variance models like Neural Networks that do not have enough parameters to enter the interpolating regime. Regularization techniques add bias to a model to lower testing error at the cost of increased training error. This paper applies quantum circuits as activation functions in order to regularize a Feed-Forward Neural Network. The network using Quantum Activation Functions is compared against a network of the same dimensions that uses Rectified Linear Unit (ReLU) activation functions, which can approximate arbitrary functions. The Quantum Activation Function network is shown to have training performance comparable to ReLU networks, both with and without regularization, for the tasks of binary classification, polynomial regression, and regression on a multicollinear dataset, i.e., a dataset whose design matrix is rank-deficient. The Quantum Activation Function network achieves regularization comparable to networks with L2-Regularization, the most commonly used method for neural network regularization today, with regularization parameters in the range λ ∈ [0.1, 0.5], while still allowing the model to retain enough variance to achieve low training error. While there are limitations to current physical implementations of quantum computers, there is potential for future architectural, or hardware-based, regularization methods that leverage the aspects of quantum circuits that provide lower generalization error. </p>
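The L2-Regularization baseline that the abstract compares against can be sketched in a few lines. A hedged illustration: the data, dimensions, and closed-form ridge solution below are illustrative choices (not from the thesis), with λ drawn from the range the abstract reports.

```python
import numpy as np

# Illustrative multicollinear (rank-deficient) design matrix: column 2 = 2 * column 1
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
X = np.hstack([x, 2 * x])
y = x[:, 0] + 0.1 * rng.normal(size=100)

def ridge_fit(X, y, lam):
    """Closed-form L2-regularized least squares: (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Without the lam*I term this system would be singular; the penalty both
# stabilizes the solve and shrinks the coefficients (adding bias, cutting variance).
for lam in (0.1, 0.5):   # the lambda range reported in the abstract
    w = ridge_fit(X, y, lam)
    print(lam, np.round(w, 3))
```

Larger λ shrinks the coefficient norm further, which is the bias-variance trade-off in its simplest form.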
232
Optimal Power Control of a Wind Turbine Power Generation System. Xue, Jie. 27 September 2012 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This thesis focuses on the optimization of wind power tracking control systems in order to capture maximum wind power for the generation system. In this work, a mathematical simulation model is developed for a variable-speed wind turbine power generation system. The system consists of a wind turbine with the necessary transmission system, a permanent magnet synchronous generator, and its vector control system. A new fuzzy-based hill-climbing method for power tracking control is proposed and implemented to optimize the wind power for the system under various conditions. Two existing power tracking control methods, the tip speed ratio (TSR) control method and the speed-sensorless control method, are also implemented with the wind power system. Computer simulations with a 5 kW wind power generation system are performed, and the results from the proposed control method are compared with those obtained using the two existing methods. The proposed method generally outperforms the two existing methods, especially when the operating point is far away from the maximum point, while exhibiting stability comparable to the existing methods when the operating point is close to the peak. The proposed fuzzy control method is computationally efficient and can be easily implemented in real-time.
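For context on the baseline that the fuzzy variant refines, a minimal perturb-and-observe (hill-climbing) power tracker can be sketched. The quadratic power curve, step size, and peak location below are illustrative assumptions, not the thesis's turbine model.

```python
def power_curve(omega):
    """Toy wind-turbine power vs. rotor speed, peaking at omega = 10 (illustrative)."""
    return -0.5 * (omega - 10.0) ** 2 + 50.0

def hill_climb(omega0, step=0.5, iters=100):
    """Perturb the speed setpoint; reverse direction whenever power drops."""
    omega, p_prev, direction = omega0, power_curve(omega0), 1.0
    for _ in range(iters):
        omega += direction * step
        p = power_curve(omega)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return omega

print(round(hill_climb(4.0), 1))  # settles near the maximum, oscillating by one step
```

The fixed step size is exactly the weakness a fuzzy controller addresses: large steps far from the peak, small steps near it.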
233
Computer Methods for Studying Nonlinear Dynamic Systems (master's thesis). Satov, A. V. January 2021 (has links)
The work describes the construction of a confidence band for stochastic chaos and the implementation of algorithms for studying n-dimensional models. The thesis considers a discrete model, presented as a nonlinear dynamic system of difference equations, which describes the dynamics of consumer interaction. Two tasks were set and completed in this work to expand the software toolkit for studying dynamic systems of this kind. For the two-dimensional case, a stochastic sensitivity analysis of chaos is carried out through the construction of a confidence region using critical lines. In addition, an algorithm for constructing the outer boundary of chaos is described and implemented.
A transition is then made to the n-dimensional version of the model (interaction of n consumers), for which four algorithms are provided: (1) construction of phase trajectories, (2) construction of bifurcation diagrams, (3) construction of mode maps, and (4) computation of Lyapunov exponents. The implementation of these algorithms emphasizes parallel computation. The algorithms are implemented in the C# programming language (.NET platform) as a console application for running parallel computations on the computing cluster of the Ural Federal University.
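The Lyapunov-exponent computation listed above can be illustrated on a one-dimensional map. A hedged sketch, using the logistic map as a stand-in for the consumer-interaction model (which is not reproduced here): the exponent is estimated by averaging log|f'(x)| along a trajectory.

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100_000, burn=1_000):
    """Largest Lyapunov exponent of x -> r*x*(1-x), via the mean of log|f'(x)|."""
    x = x0
    for _ in range(burn):                            # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))    # f'(x) = r*(1 - 2x)
        x = r * x * (1.0 - x)
    return acc / n

print(lyapunov_logistic(4.0))   # chaotic regime; theory gives ln 2 ≈ 0.693
print(lyapunov_logistic(3.2))   # period-2 regime; the exponent is negative
```

A positive exponent signals chaos; the thesis's mode maps classify parameter regions by exactly this kind of indicator.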
234
Statistical mechanics-based reduced-order modeling of turbulence in reactor systems. Mary Catherine Ross (17879888). 01 February 2024 (has links)
<p dir="ltr">New system-level codes are being developed for advanced reactors for safety analysis and licensing purposes. Thermal-hydraulics of advanced reactors is a challenging problem due to complex flow scenarios, involving free jets and stratified flows, that lead to turbulent mixing. For these reasons, the 0D or 1D models used for reactor plena in traditional safety analysis codes like RELAP cannot capture the physics accurately and introduce a large degree of modeling uncertainty. System-level codes based on the advection-diffusion equation neglect turbulent fluctuations. These fluctuations are extremely important, as they introduce the higher-order moments responsible for vortex stretching and the passage of energy to smaller scales. Alternatively, direct numerical simulation (DNS) of the Navier-Stokes equations captures turbulence effects accurately because it resolves the flow down to the smallest length and time scales important to the flow (the Kolmogorov scales); this makes DNS computationally expensive even for simple geometries and impossible at the system level.</p><p dir="ltr">The flow field can instead be described through a reduced-order model using the principles of statistical mechanics. Statistical mechanics-based methods provide a way to extract statistics from data and to model those statistics with easily represented differential equations. The Kramers-Moyal (KM) expansion can be used as a subgrid-scale (SGS) closure for solving the momentum equation: the stochastic Burgers equation is solved using DNS, the DNS solutions are used to calculate the KM coefficients, and the coefficients are then implemented as an SGS closure model. The KM method outperforms traditional methods in capturing the multi-scale behavior of Burgers turbulence. 
The functional dependencies of the KM coefficients are also uniform across several boundary conditions, meaning the closure model can be extended to multiple flow scenarios. </p><p dir="ltr">For the Navier-Stokes equations, each particle trajectory tends to follow a scaling law. Kolmogorov hypothesized that the flow velocity field follows a -5/3 scaling in the inertial region, where Markovian characteristics can be invoked to model the interaction between eddies of adjacent sizes. For scalar turbulence, the scaling laws are affected by thermal diffusion. If a fluid has a Prandtl number close to one, the thermal behavior is dominated by momentum, so the spectra for velocity and temperature are similar. For small Prandtl number fluids, such as liquid metals, thermal diffusion dominates the lower scales and the slope of the spectrum shifts from -5/3 to -3, the Batchelor region. System-level thermal-hydraulics codes need to capture these behaviors for a range of Prandtl number fluids. The KM-based model can also be used as a surrogate for velocity or temperature fluctuations in scalar turbulence. Using DNS solutions for turbulent channel flow, the KM model provides a surrogate for temperature and velocity signals at different wall locations in the channel for Pr = 0.004, Pr = 0.025, and Pr = 0.71. The KM surrogate matches well at all wall locations, but does not capture the viscous dissipation in the velocity signal or the thermal dissipation in the low Prandtl number cases; the dissipation can be captured by applying a Gaussian filter.</p><p dir="ltr">Statistical mechanics-based methods are not limited to modeling turbulence in a reactor. 
Renewable power generation, such as wind, can be modeled using the Ornstein-Uhlenbeck (OU) method, which allows the long-term trends and short-term fluctuations of wind power to be decoupled. This allows for large fluctuations in wind power to be scaled down to a level that a reactor can accommodate safely. </p><p dir="ltr">Since statistical mechanics methods are based in physics, the calculated coefficients provide some information about the inputted signal. In a high-temperature gas-cooled reactor, strong heating can cause flow that is expected to be turbulent to show laminar characteristics. This laminarization results in reduced heat removal. The KM coefficients can be used to classify the laminarization from probed velocity signals more effectively than traditional statistical analyses.</p>
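The Kramers-Moyal coefficient extraction at the heart of this approach can be sketched on a process whose coefficients are known in closed form. A hedged illustration on a simulated Ornstein-Uhlenbeck process (the parameters, bin location, and series length are illustrative, not the thesis's data): drift and diffusion are estimated from conditional moments of the increments.

```python
import numpy as np

# Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW, whose first two
# Kramers-Moyal coefficients are known: D1(x) = -theta*x, D2(x) = sigma**2 / 2.
rng = np.random.default_rng(1)
theta, sigma, dt, n = 1.0, 0.5, 1e-3, 1_000_000

x = np.empty(n)
x[0] = 0.0
noise = rng.normal(scale=np.sqrt(dt), size=n - 1)
for i in range(n - 1):                       # Euler-Maruyama integration
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]

# Conditional moments of the increments, in a small bin around x = 0.3
dx = np.diff(x)
mask = np.abs(x[:-1] - 0.3) < 0.05
D1 = dx[mask].mean() / dt                    # theory: -theta*0.3 = -0.3
D2 = (dx[mask] ** 2).mean() / (2 * dt)       # theory: sigma**2/2 = 0.125
print(round(D1, 2), round(D2, 3))
```

Repeating the moment estimates over a grid of bins recovers the full functional form of D1 and D2, which is the step that turns probed signal data into a usable stochastic surrogate.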
235
Quantitative Methods of Statistical Arbitrage. Boming Ning (18414465). 22 April 2024 (has links)
<p dir="ltr">Statistical arbitrage is a prevalent trading strategy that takes advantage of the mean-reverting property of spreads constructed from pairs or portfolios of assets. Utilizing statistical models and algorithms, statistical arbitrage exploits and capitalizes on pricing inefficiencies between securities or within asset portfolios. </p><p dir="ltr">In Chapter 2, we propose a framework for constructing diversified portfolios with multiple pairs trading strategies. In our approach, several pairs of co-moving assets are traded simultaneously, and capital is dynamically allocated among the pairs based on the statistical characteristics of the historical spreads. This allows us to further consider various portfolio designs and rebalancing strategies. Working with empirical data, our experiments suggest significant benefits of diversification within the proposed framework.</p><p dir="ltr">In Chapter 3, we explore an optimal timing strategy for trading price spreads exhibiting mean-reverting characteristics. A sequential optimal stopping framework is formulated to analyze the optimal timings for both entering and subsequently liquidating positions, while accounting for transaction costs. We then leverage a refined signature optimal stopping method to solve this sequential optimal stopping problem, unveiling the precise entry and exit timings that maximize gains. Our framework operates without any predefined assumptions on the dynamics of the underlying mean-reverting spreads, offering adaptability to diverse scenarios. Numerical results demonstrate its superior performance compared with conventional mean reversion trading rules.</p><p dir="ltr">In Chapter 4, we introduce an innovative model-free, reinforcement-learning-based framework for statistical arbitrage. 
For the construction of mean reversion spreads, we establish an empirical reversion time metric and optimize asset coefficients by minimizing this empirical mean reversion time. In the trading phase, we employ a reinforcement learning framework to identify the optimal mean reversion strategy. Diverging from traditional mean reversion strategies that primarily focus on price deviations from a long-term mean, our methodology creatively constructs the state space to encapsulate the recent trends in price movements. Additionally, the reward function is carefully tailored to reflect the unique characteristics of mean reversion trading.</p>
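The conventional baseline these chapters compare against is a z-score entry/exit rule on a mean-reverting spread. A minimal hedged sketch on a simulated AR(1) spread (thresholds, spread dynamics, and seed are illustrative; this is not any of the thesis's own algorithms):

```python
import numpy as np

def zscore_signals(spread, entry=1.5, exit=0.5):
    """Enter when |z| > entry (short a rich spread, long a cheap one); exit on reversion."""
    z = (spread - spread.mean()) / spread.std()
    position, positions = 0, []
    for zi in z:
        if position == 0 and abs(zi) > entry:
            position = -np.sign(zi)      # trade against the deviation
        elif position != 0 and abs(zi) < exit:
            position = 0                 # spread has reverted: close the trade
        positions.append(position)
    return np.array(positions)

# Simulated mean-reverting AR(1) spread (illustrative stand-in for a real pair)
rng = np.random.default_rng(2)
s = np.empty(1000)
s[0] = 0.0
for t in range(999):
    s[t + 1] = 0.95 * s[t] + rng.normal(scale=0.1)

pos = zscore_signals(s)
print(int((np.diff(pos) != 0).sum()), "position changes")
```

Note the rule looks only at the deviation from a long-term mean, which is precisely the limitation the reinforcement-learning state space in Chapter 4 is designed to move beyond.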
236
Lévy-Type Processes under Uncertainty and Related Nonlocal Equations. Hollender, Julian. 17 October 2016 (has links) (PDF)
The theoretical study of nonlinear expectations is the focus of attention for applications in a variety of different fields — often with the objective to model systems under incomplete information. Especially in mathematical finance, advances in the theory of sublinear expectations (also referred to as coherent risk measures) lay the theoretical foundation for modern approaches to evaluations under the presence of Knightian uncertainty. In this book, we introduce and study a large class of jump-type processes for sublinear expectations, which can be interpreted as Lévy-type processes under uncertainty in their characteristics. Moreover, we establish an existence and uniqueness theory for related nonlinear, nonlocal Hamilton-Jacobi-Bellman equations with non-dominated jump terms.
237
n-TARP: A Random Projection based Method for Supervised and Unsupervised Machine Learning in High-dimensions with Application to Educational Data Analysis. Yellamraju Tarun (6630578). 11 June 2019 (has links)
Analyzing the structure of a dataset is a challenging problem in high-dimensions as the volume of the space increases at an exponential rate and typically, data becomes sparse in this high-dimensional space. This poses a significant challenge to machine learning methods which rely on exploiting structures underlying data to make meaningful inferences. This dissertation proposes the <i>n</i>-TARP method as a building block for high-dimensional data analysis, in both supervised and unsupervised scenarios.<div><br></div><div>The basic element, <i>n</i>-TARP, consists of a random projection framework to transform high-dimensional data to one-dimensional data in a manner that yields point separations in the projected space. The point separation can be tuned to reflect classes in supervised scenarios and clusters in unsupervised scenarios. The <i>n</i>-TARP method finds linear separations in high-dimensional data. This basic unit can be used repeatedly to find a variety of structures. It can be arranged in a hierarchical structure like a tree, which increases the model complexity, flexibility and discriminating power. Feature space extensions combined with <i>n</i>-TARP can also be used to investigate non-linear separations in high-dimensional data.<br></div><div><br></div><div>The application of <i>n</i>-TARP to both supervised and unsupervised problems is investigated in this dissertation. In the supervised scenario, a sequence of <i>n</i>-TARP based classifiers with increasing complexity is considered. The point separations are measured by classification metrics like accuracy, Gini impurity or entropy. The performance of these classifiers on image classification tasks is studied. This study provides an interesting insight into the working of classification methods. The sequence of <i>n</i>-TARP classifiers yields benchmark curves that put in context the accuracy and complexity of other classification methods for a given dataset. 
The benchmark curves are parameterized by classification error and computational cost to define a benchmarking plane. This framework splits this plane into regions of "positive-gain" and "negative-gain" which provide context for the performance and effectiveness of other classification methods. The asymptotes of benchmark curves are shown to be optimal (i.e. at Bayes Error) in some cases (Theorem 2.5.2).<br></div><div><br></div><div>In the unsupervised scenario, the <i>n</i>-TARP method highlights the existence of many different clustering structures in a dataset. However, not all structures present are statistically meaningful. This issue is amplified when the dataset is small, as random events may yield sample sets that exhibit separations that are not present in the distribution of the data. Thus, statistical validation is an important step in data analysis, especially in high-dimensions. However, in order to statistically validate results, often an exponentially increasing number of data samples are required as the dimensions increase. The proposed <i>n</i>-TARP method circumvents this challenge by evaluating statistical significance in the one-dimensional space of data projections. The <i>n</i>-TARP framework also results in several different statistically valid instances of point separation into clusters, as opposed to a unique "best" separation, which leads to a distribution of clusters induced by the random projection process.<br></div><div><br></div><div>The distributions of clusters resulting from <i>n</i>-TARP are studied. This dissertation focuses on small sample high-dimensional problems. A large number of distinct clusters are found, which are statistically validated. 
The distribution of clusters is studied as the dimensionality of the problem evolves through the extension of the feature space using monomial terms of increasing degree in the original features, which corresponds to investigating non-linear point separations in the projection space.<br></div><div><br></div><div>A statistical framework is introduced to detect patterns of dependence between the clusters formed with the features (predictors) and a chosen outcome (response) in the data that is not used by the clustering method. This framework is designed to detect the existence of a relationship between the predictors and response. This framework can also serve as an alternative cluster validation tool.<br></div><div><br></div><div>The concepts and methods developed in this dissertation are applied to a real world data analysis problem in Engineering Education. Specifically, engineering students' Habits of Mind are analyzed. The data at hand is qualitative, in the form of text, equations and figures. To use the <i>n</i>-TARP based analysis method, the source data must be transformed into quantitative data (vectors). This is done by modeling it as a random process based on the theoretical framework defined by a rubric. Since the number of students is small, this problem falls into the small sample high-dimensions scenario. The <i>n</i>-TARP clustering method is used to find groups within this data in a statistically valid manner. The resulting clusters are analyzed in the context of education to determine what is represented by the identified clusters. The dependence of student performance indicators like the course grade on the clusters formed with <i>n</i>-TARP are studied in the pattern dependence framework, and the observed effect is statistically validated. The data obtained suggests the presence of a large variety of different patterns of Habits of Mind among students, many of which are associated with significant grade differences. 
In particular, the course grade is found to be dependent on at least two Habits of Mind: "computation and estimation" and "values and attitudes."<br></div>
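The basic unit described above (random one-dimensional projections screened for point separation) can be sketched as follows. The separation score, data, and trial count here are illustrative assumptions, not the dissertation's exact criterion or validation procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated Gaussian clusters in 50 dimensions (illustrative data)
d, n = 50, 200
A = rng.normal(0.0, 1.0, size=(n, d))
B = rng.normal(0.0, 1.0, size=(n, d))
B[:, 0] += 40.0                      # the clusters differ along the first axis
X = np.vstack([A, B])

def gap_score(p):
    """Largest interior gap in the sorted 1-D projections, in units of std."""
    q = np.sort(p)
    k = len(q) // 10                 # ignore the sparse distribution tails
    return np.diff(q)[k:-k].max() / p.std()

def best_projection(X, trials=200):
    """Screen random unit directions; keep the one with the clearest point separation."""
    best_w, best_score = None, -np.inf
    for _ in range(trials):
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        score = gap_score(X @ w)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score

w, score = best_projection(X)
print(round(score, 2), round(abs(w[0]), 2))
```

Because the winning direction leans toward the axis that actually separates the clusters, repeated runs yield a distribution of statistically screenable separations rather than a single "best" clustering, in the spirit of the dissertation's framework.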
238
A Stochastic Analysis Framework for Real-Time Systems under Preemptive Priority-Driven Scheduling. Azhar, Muhammad. January 2011 (has links)
This thesis describes how to apply the stochastic analysis framework presented in [1] for general priority-driven periodic real-time systems. The framework can compute the response time distribution, the worst-case response time, and the deadline miss probability of the task under analysis in a fixed-priority scheduling system. Specifically, we model task execution times using the beta distribution. We have evaluated the existing stochastic framework on a wide range of periodic systems with the help of defined evaluation parameters. In addition, we have refined the notation used in the system model and developed new mathematics to make the concepts easier to follow. We have also introduced new concepts to obtain and validate the exact probabilistic task response time distribution. Another contribution of this thesis is an extension of the existing system model to handle stochastic job release times, together with a new algorithm, developed and validated within our extended framework, for the stochastic dependencies that arise from stochastic release time patterns. / This is the second version of the report, submitted after minor modifications requested by Thomas Nolte (thesis examiner). / START - Stochastic Real-Time Analysis of Embedded Software Systems
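The core operation in such frameworks is convolving discrete execution-time distributions to obtain a response-time distribution, from which a deadline miss probability follows. A hedged toy sketch (the PMFs, the single-preemption scenario, and the deadline are illustrative, not the thesis's model):

```python
def convolve_pmf(pmf_a, pmf_b):
    """Convolve two PMFs given as {value: probability} dicts."""
    out = {}
    for va, pa in pmf_a.items():
        for vb, pb in pmf_b.items():
            out[va + vb] = out.get(va + vb, 0.0) + pa * pb
    return out

hi_task = {1: 0.7, 3: 0.3}           # higher-priority task's execution time
lo_task = {2: 0.5, 4: 0.5}           # task under analysis
resp = convolve_pmf(hi_task, lo_task)  # response time if preempted exactly once

deadline = 5
miss = sum(p for t, p in resp.items() if t > deadline)
print("P(miss) =", round(miss, 2))   # → P(miss) = 0.15
```

A full analysis iterates such convolutions over every preempting job in the busy period; the principle, though, is just this product-and-sum over value pairs.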
239
Methods for Forward and Inverse Problems in Nonlinear and Stochastic Structural Dynamics. Saha, Nilanjan. 11 1900 (has links)
A main thrust of this thesis is to develop and explore linearization-based numeric-analytic integration techniques in the context of stochastically driven nonlinear oscillators of relevance in structural dynamics. Unfortunately, unlike the case of deterministic oscillators, available numerical or numeric-analytic integration schemes for stochastically driven oscillators, often modelled through stochastic differential equations (SDEs), have significantly poorer numerical accuracy. These schemes are generally derived through stochastic Taylor expansions, and the limited accuracy results from difficulties in evaluating the multiple stochastic integrals. We propose a few higher-order methods based on the stochastic version of transversal linearization and another method of linearizing the nonlinear drift field based on a Girsanov change of measures. When these schemes are implemented within a Monte Carlo framework for computing the response statistics, one typically needs repeated simulations over a large ensemble. The statistical error due to the finiteness of the ensemble (of size N, say) is of order 1/√N, which implies a rather slow convergence as N→∞. Given the prohibitively large computational cost as N increases, a variance reduction strategy that enables computing accurate response statistics for small N is considered useful. This leads us to propose a weak variance reduction strategy. Finally, we use the explicit derivative-free linearization techniques for state and parameter estimation for structural systems using the extended Kalman filter (EKF). A two-stage version of the EKF (2-EKF) is also proposed so as to account for errors due to linearization and unmodelled dynamics.
In Chapter 2, we develop higher order locally transversal linearization (LTL) techniques for strong and weak solutions of stochastically driven nonlinear oscillators. For developing the higher-order methods, we expand the non-linear drift and multiplicative diffusion fields based on backward Euler and Newmark expansions while simultaneously satisfying the original vector field at the forward time instant where we intend to find the discretized solution. Since the non-linear vector fields are conditioned on the solution we wish to determine, the methods are implicit. We also report explicit versions of such linearization schemes via simple modifications. Local error estimates are provided for weak solutions.
Weak linearized solutions enable faster computation vis-à-vis their strong counterparts. In Chapter 3, we propose another weak linearization method for non-linear oscillators under stochastic excitations based on the Girsanov transformation of measures. Here, the non-linear drift vector is appropriately linearized such that the resulting SDE is analytically solvable. To account for the error in replacing the non-linear drift terms, the linearized solutions are multiplied by a scalar weighting function. The weighting function is the solution of a scalar SDE (i.e., the Radon-Nikodym derivative). Apart from numerically illustrating the method through applications to non-linear oscillators, we also use the Girsanov transformation of measures to correct the truncation errors in lower-order discretizations.
In order to achieve efficiency in the computation of response statistics via Monte Carlo simulation, we propose in Chapter 4 a weak variance reduction strategy such that the ensemble size is significantly reduced without seriously affecting the accuracy of the predicted expectations of any smooth function of the response vector. The basis of the variance reduction strategy is to appropriately augment the governing system equations and then weakly replace the associated stochastic forcing functions with variance-reduced functions. In the process, the additional computational cost of system augmentation is generally far smaller than the savings accrued from a drastically reduced ensemble size. The variance reduction scheme is illustrated through applications to several non-linear oscillators, including a 3-DOF system.
Finally, in Chapter 5, we exploit the explicit forms of the LTL techniques for state and parameter estimation of non-linear oscillators of engineering interest using a novel derivative-free EKF and a 2-EKF. In the derivative-free EKF, we use one-term, Euler and Newmark replacements for linearizations of the non-linear drift terms. In the 2-EKF, we use bias terms to account for errors due to lower-order linearization and unmodelled dynamics in the mathematical model. Numerical studies establish the relative advantages of EKF-DLL as well as 2-EKF over the conventional forms of EKF.
The thesis is concluded in Chapter 6 with an overall summary of the contributions made and suggestions for future research.
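The Monte Carlo setting that motivates the variance-reduction strategy can be sketched with a plain Euler-Maruyama ensemble for a stochastically driven linear oscillator. The parameters and the observable below are illustrative, and no variance reduction is applied; the point is only the O(1/√N) statistical error the chapter addresses.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_ensemble(n_paths, T=1.0, dt=1e-3, zeta=0.1, omega=2.0, s=1.0):
    """x'' + 2*zeta*omega*x' + omega^2*x = s*(white noise), started from rest."""
    steps = int(T / dt)
    x = np.zeros(n_paths)
    v = np.zeros(n_paths)
    for _ in range(steps):
        dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
        # Explicit Euler-Maruyama update of displacement and velocity
        x, v = x + v * dt, v + (-2*zeta*omega*v - omega**2*x) * dt + s * dW
    return x

# Estimate E[X(T)^2] with two ensemble sizes: the scatter of the smaller
# ensemble's estimate illustrates the 1/sqrt(N) statistical error.
for n in (100, 10_000):
    xs = simulate_ensemble(n)
    print(n, round(np.mean(xs**2), 4))
```

The higher-order linearization schemes of Chapters 2-3 would replace the crude Euler-Maruyama step here; the ensemble-size problem they leave behind is what Chapter 4's weak variance reduction targets.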
240
Robust Design of Periodic Structures with Functional Nonlinearities. Chikhaoui, Khaoula. 27 January 2017 (has links)
Dynamic analysis of large-scale structures including several uncertain parameters and localized or distributed nonlinearities may be computationally unaffordable. To overcome this issue, approximation models can be developed to reproduce the structural response accurately at a low computational cost. The purpose of the first part of this thesis is to develop numerical models which are robust against structural modifications (localized nonlinearities, parametric uncertainties or perturbations) and reduce the size of the initial problem. These models are created, according to direct condensation and component mode synthesis, by enriching truncated modal reduction bases and Craig-Bampton transformations, respectively, with static residual vectors accounting for the structural modifications. To propagate uncertainties through these first-level and second-level reduced-order models, we focus in particular on the generalized polynomial chaos method. Combining these methods yields first-level and second-level metamodels, respectively. The two proposed metamodels are compared to other metamodels based on the generalized polynomial chaos and Latin Hypercube methods applied to reduced and full models. The proposed metamodels approximate the structural behavior at a low computational cost without significant loss of accuracy.
The second part of this thesis is devoted to the dynamic analysis of nonlinear periodic structures in the presence of imperfections: parametric perturbations or uncertainties. Deterministic and stochastic analyses, respectively, are therefore carried out. For both configurations, a generic discrete analytical model is proposed. It consists of applying the multiple scales method and perturbation theory to solve the equation of motion, then projecting the resulting solution on standing wave modes. The proposed model leads to a set of coupled complex algebraic equations, depending on the number and positions of imperfections in the structure. Uncertainty propagation through the proposed model is finally carried out using the Latin Hypercube method and the generalized polynomial chaos expansion. The robustness of the collective dynamics against imperfections is studied through statistical analysis of the dispersion of the frequency responses and basins of attraction in the multistability domain. Numerical results show that the presence of imperfections in a periodic structure strengthens its nonlinearity, expands its multistability domain and generates a multiplicity of multimodal branches.