  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

A regression-based approach for simulating feedforward active noise control, with application to fluid-structure interaction problems

Ruckman, Christopher E. 06 June 2008 (has links)
This dissertation presents a set of general numerical tools for simulating feedforward active noise control in the frequency domain. Feedforward control is numerically similar to linear least squares regression, and can take advantage of various numerical techniques developed in the statistics literature for use with regression. Therefore, an important theme of this work is to look at the control problem from a statistical point of view, and explore the analogies between feedforward control and basic statistical principles of regression. Motivating the numerical approach is the need to simulate active noise control for systems whose dynamics must be modeled numerically because analytical solutions do not exist, e.g., fluid-structure interaction problems. Plant dynamics for examples in the present work are modeled using a finite-element / boundary-element computer program, and the associated numerical methods are general enough for use with many types of problems. The derivation is presented in the context of active structural-acoustic control (ASAC), in which sound radiating from a vibrating structure is controlled by applying time-harmonic vibrational inputs directly on the structure. First, a feedforward control simulation is developed for a submerged spherical shell using both analytical and numerical techniques; the numerical formulation is found by discretizing the integrations used in the analytical approach. ASAC is shown to be effective for controlling radiation from the spherical shell. For a point-force disturbance at low frequencies, a single control input can reduce the radiated power by up to 20 dB (ignoring the possibility of measurement noise). A more general numerical methodology is then developed based on weighted least-squares regression in the complex domain. 
It is shown that basic regression diagnostics, which are used in the statistics literature to describe the quality and reliability of a regression, can be used to model the effects of error sensor measurement noise to produce a more realistic simulation. Numerical results are presented for a finite-length, fluid-loaded cylindrical shell with clamped, rigid end closures. It is shown that when the controller reduces the radiated power by less than 2 dB, the control simulation is usually invalid for statistical reasons. Also developed are confidence intervals for the individual control input magnitudes, and prediction intervals which help evaluate the sensitivity to measurement noise for the regression as a whole. Collinearity, a type of numerical ill-conditioning that can corrupt regression results, is demonstrated to occur in an example feedforward control simulation. The effects of collinearity are discussed, and a basic diagnostic is developed to detect and analyze collinearity. Subset selection, a numerical procedure for improving regressions, is shown to correspond to optimizing actuator locations for best control system performance. Exhaustive-search subset selection is used to optimize actuator locations for a sample structure. Finally, a convenient method is given for investigating alternate controller formulations, and examples of several alternate controllers are given including a wavenumber-domain controller. Numerical results for a cylindrical shell give insight into the mechanisms used by the control system, and a new visualization technique is used to relate farfield pressure distributions to surface velocity distributions using wavenumber analysis. / Ph. D.
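The control-as-regression analogy at the heart of this abstract can be sketched in a few lines: given a complex transfer matrix Z from control actuators to error sensors and a disturbance vector d at those sensors, the optimal time-harmonic control inputs minimize ||d + Zc||², an ordinary least-squares problem in the complex domain. All values below are illustrative stand-ins, not data from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex transfer matrix from 2 control actuators to 6 error sensors,
# and the disturbance pressure at those sensors (illustrative values).
Z = rng.normal(size=(6, 2)) + 1j * rng.normal(size=(6, 2))
d = rng.normal(size=6) + 1j * rng.normal(size=6)

# Optimal feedforward control inputs: minimize ||d + Z c||^2 over complex c.
# This is ordinary linear least squares in the complex domain.
c, *_ = np.linalg.lstsq(Z, -d, rcond=None)

residual = d + Z @ c  # field remaining at the error sensors after control
attenuation_db = 10 * np.log10(np.sum(np.abs(d) ** 2)
                               / np.sum(np.abs(residual) ** 2))
```

Because this is least squares, the residual field is orthogonal to the column space of Z — the same normal-equation structure that makes the regression diagnostics discussed above applicable.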
472

Analysis of Static and Dynamic Deformations of Laminated Composite Structures by the Least-Squares Method

Burns, Devin James 27 October 2021 (has links)
Composite structures, such as laminated beams, plates and shells, are widely used in the automotive, aerospace and marine industries due to their superior specific strength and tailor-able mechanical properties. Because of their use in a wide range of applications, and their commonplace in the engineering design community, the need to accurately predict their behavior to external stimuli is crucial. We consider in this thesis the application of the least-squares finite element method (LSFEM) to problems of static deformations of laminated and sandwich plates and transient plane stress deformations of sandwich beams. Models are derived to express the governing equations of linear elasticity in terms of layer-wise continuous variables for composite plates and beams, which allow inter-laminar continuity conditions at layer interfaces to be satisfied. When Legendre-Gauss-Lobatto (LGL) basis functions with the LGL nodes taken as integration points are used to approximate the unknown field variables, the methodology yields a system of discrete equations with a symmetric positive definite coefficient matrix. The main goal of this research is to determine the efficacy of the LSFEM in accurately predicting stresses in laminated composites when subjected to both quasi-static and transient surface tractions. Convergence of the numerical algorithms with respect to the LGL basis functions in space and time (when applicable) is also considered and explored. In the transient analysis of sandwich beams, we study the sensitivity of the first failure load to the beam's aspect ratio (AR), facesheet-core thickness ratio (FCTR) and facesheet-core stiffness ratio (FCSR). We then explore how failure of sandwich beams is affected by considering facesheet and core materials with different in-plane and transverse stiffness ratios. 
Computed results are compared to available analytical solutions, published results and those found by using the commercial FE software ABAQUS where appropriate. / Master of Science / Composite materials are formed by combining two or more materials on a macroscopic scale such that they have better engineering properties than either material individually. They are usually in the form of a laminate composed of numerous plies with each ply having unidirectional fibers. Laminates are used in all sorts of engineering applications, ranging from boat hulls to racing car bodies and storage tanks. Unlike their homogeneous material counterparts, such as metals, laminated composites present structural designers and analysts a number of computational challenges. Chief among these challenges is the satisfaction of the so-called continuity conditions, which require certain quantities to be continuous at the interfaces of the composite's layers. In this thesis, we use a mathematical model, called a state-space model, that allows us to simultaneously solve for these quantities in the composite structure's domain and satisfy the continuity conditions at layer interfaces. To solve the governing equations that are derived from this model, we use a numerical technique called the least-squares method which seeks to minimize the squares of the governing equations and the associated side condition residuals over the computational domain. With this mathematical model and numerical method, we investigate static and dynamic deformations of laminated composite structures. The goal of this thesis is to determine the efficacy of the proposed methodology in predicting stresses in laminated composite structures when subjected to static and transient mechanical loading.
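The least-squares idea described above — recast the governing equations as a first-order system and minimize the squared residuals over a polynomial basis — can be illustrated on a one-dimensional model problem. The sketch below uses a Chebyshev basis and the scalar problem -u'' = f with homogeneous boundary conditions, not the thesis's layer-wise plate model or LGL quadrature; it is only meant to show the structure of the discrete least-squares system.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Least-squares solution of the first-order system u' = v, v' = -f on [-1, 1]
# with u(-1) = u(1) = 0, equivalent to -u'' = f.  Exact: u = sin(pi (x+1)/2).
deg = 12
x = np.cos(np.linspace(0, np.pi, 40))             # Chebyshev-type points
f = (np.pi / 2) ** 2 * np.sin(np.pi * (x + 1) / 2)

# Basis values and derivative values for each Chebyshev basis polynomial.
V = C.chebvander(x, deg)                          # phi_k(x_j)
dV = np.column_stack([C.chebval(x, C.chebder(np.eye(deg + 1)[k]))
                      for k in range(deg + 1)])   # phi_k'(x_j)
Z = np.zeros_like(V)

# Unknowns: [a; b] with u = sum a_k phi_k, v = sum b_k phi_k.
# Residual rows: u' - v = 0 and v' + f = 0, plus the two boundary conditions.
A = np.block([[dV, -V], [Z, dV]])
rhs = np.concatenate([np.zeros_like(x), -f])
Bc = np.block([[C.chebvander(np.array([-1.0, 1.0]), deg),
                np.zeros((2, deg + 1))]])
A = np.vstack([A, 10.0 * Bc])                     # weighted boundary rows
rhs = np.concatenate([rhs, np.zeros(2)])

coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
u = V @ coef[:deg + 1]
err = np.max(np.abs(u - np.sin(np.pi * (x + 1) / 2)))
```

As in the thesis, the normal equations of such a system are symmetric positive definite, which is one of the practical attractions of the least-squares formulation.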
473

Determination of Phytoplankton Communities by Visible Spectroscopy and Its Relation to Epifluorescence Microscopy Counts

MARTÍNEZ GUIJARRO, MARÍA REMEDIOS 11 February 2010 (has links)
Phytoplankton is one of the organic components of natural waters, and its diagnosis is important for assessing the ecological status of aquatic ecosystems, among them coastal and transitional waters. Anthropogenic nutrient enrichment and alterations in the food chain, including the reduction of phytoplankton consumers, produce a dramatic increase in phytoplankton stocks. This has caused significant changes in the nutrient cycles of coastal areas, in water quality, in biodiversity and in the overall state of the ecosystem. Characterizing phytoplankton communities in aquatic ecosystems by means of epifluorescence microscopy counts is a task that is costly in time, materials and highly qualified human resources. The objective of this work, without intending to replace microscope counts but rather to complement them, is to develop a spectrophotometric technique that reduces these costs by measuring absorption spectra of the samples in the visible range. To carry out this work, samples were taken in five zones of the Mediterranean coast of Spain. These zones correspond to aquatic ecosystems influenced by both continental waters and the Mediterranean Sea, that is, coastal zones influenced by continental waters (continental plumes) and continental zones influenced by marine waters (estuaries). The samples taken present a salinity gradient, depending on the degree of continental influence and also on the lower-salinity surface layer that lies over the denser saline waters. These samples with different salinities also show qualitative and quantitative differences in phytoplankton composition. / Martínez Guijarro, MR. (2010). 
DETERMINACIÓN DE COMUNIDADES FITOPLACTÓNICAS MEDIANTE ESPECTROSCOPÍA VISIBLE Y SU RELACIÓN CON LOS RECUENTOS POR MICROSCOPIA DE EPIFLUORESCENCIA [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7106
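The spectrophotometric idea in this record — relating a measured visible absorption spectrum to community composition — reduces, in its simplest form, to least-squares unmixing against reference spectra. The "pigment" spectra below are invented Gaussian shapes for illustration only; the thesis's actual calibration against epifluorescence counts is considerably more involved.

```python
import numpy as np

# Illustrative spectral unmixing: express a measured visible absorption
# spectrum as a least-squares combination of reference spectra for two
# hypothetical phytoplankton groups (made-up Gaussian absorption bands).
wl = np.linspace(400, 700, 301)                   # wavelength grid, nm
group_a = np.exp(-((wl - 440) / 25) ** 2) + 0.6 * np.exp(-((wl - 675) / 15) ** 2)
group_b = np.exp(-((wl - 490) / 30) ** 2) + 0.3 * np.exp(-((wl - 640) / 20) ** 2)
A = np.column_stack([group_a, group_b])

true_abundance = np.array([0.7, 1.4])
rng = np.random.default_rng(1)
measured = A @ true_abundance + rng.normal(scale=0.005, size=wl.size)

# Ordinary least squares recovers the abundance of each group in the mixture.
est, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

With well-separated absorption bands the fit is well conditioned; strongly overlapping reference spectra would make the abundances much harder to resolve, which is one reason microscopy counts remain the reference method.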
474

Parameter Estimation Methods for Ordinary Differential Equation Models with Applications to Microbiology

Krueger, Justin Michael 04 August 2017 (has links)
The compositions of in-host microbial communities (microbiota) play a significant role in host health, and a better understanding of the microbiota's role in a host's transition from health to disease or vice versa could lead to novel medical treatments. One of the first steps toward this understanding is modeling interaction dynamics of the microbiota, which can be exceedingly challenging given the complexity of the dynamics and difficulties in collecting sufficient data. Methods such as principal differential analysis, dynamic flux estimation, and others have been developed to overcome these challenges for ordinary differential equation models. Despite their advantages, these methods are still vastly underutilized in mathematical biology, and one potential reason for this is their sophisticated implementation. While this work focuses on applying principal differential analysis to microbiota data, we also provide comprehensive details regarding the derivation and numerics of this method. For further validation of the method, we demonstrate the feasibility of principal differential analysis using simulation studies and then apply the method to intestinal and vaginal microbiota data. In working with these data, we capture experimentally confirmed dynamics while also revealing potential new insights into those dynamics. We also explore how we find the forward solution of the model differential equation in the context of principal differential analysis, which amounts to a least-squares finite element method. We provide alternative ideas for how to use the least-squares finite element method to find the forward solution and share the insights we gain from highlighting this piece of the larger parameter estimation problem. / Ph. D. / In this age of “big data,” scientists increasingly rely on mathematical models for analyzing the data and drawing insights from them. 
One particular area where this is especially true is in medicine where researchers have found that the naturally occurring bacteria within individuals play a significant role in their health and well-being. Understanding the bacteria’s role requires that we understand their interactions with each other and with their hosts so that we can predict how changes in bacterial populations will affect individuals’ health. Given the number and complexity of these interactions, creating good models for them is a difficult task that traditional methods often fail to complete. The goal of this work is to promote awareness of alternatives to the traditional modeling methods and to present a particular alternative in a way that is accessible to readers, so as to encourage the use of this and other methods. With this goal and the medical application as the focus of our work, we discuss some of the traditional methods for constructing such models, discuss some of the more modern alternatives, and describe in detail the method we use. We explain the derivation of the method and apply it to both an example problem and two separate experimental studies to demonstrate its usefulness, and in the case of the experimental studies, we gain interesting insights into the bacterial interactions captured by the data. We then focus on some of the method’s numerical details to further highlight its strengths and to identify how we can improve its application.
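The flavor of parameter estimation described above — fit the trajectory data, estimate derivatives, then solve a linear least-squares problem for the ODE parameters (sometimes called gradient matching) — can be sketched for a logistic growth model. The logistic equation is a stand-in chosen for its closed-form solution; the dissertation's microbiota models are larger systems of interacting populations.

```python
import numpy as np

# Gradient-matching sketch in the spirit of principal differential analysis:
# recover the parameters of x' = r x (1 - x/K) from (noise-free) trajectory
# data by regressing an estimated derivative on the basis terms [x, x^2].
r_true, K_true, x0 = 1.0, 10.0, 1.0
t = np.linspace(0, 8, 801)
x = K_true / (1 + (K_true - x0) / x0 * np.exp(-r_true * t))  # exact logistic

dxdt = np.gradient(x, t)        # finite-difference stand-in for a smoothed derivative

# x' = a x + b x^2 with a = r and b = -r/K: a linear least-squares problem.
B = np.column_stack([x, x ** 2])
(a, b), *_ = np.linalg.lstsq(B, dxdt, rcond=None)
r_est, K_est = a, -a / b
```

The key point is that once a derivative estimate is available, the unknown parameters enter linearly, so no repeated forward solves of the ODE are needed — the main computational advantage over traditional trajectory-fitting approaches.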
475

Convergence Analysis of Mean Shift Type Algorithms / 平均値シフト型アルゴリズムの収束解析

Yamasaki, Ryoya 25 March 2024 (has links)
Kyoto University / New-system doctoral program / Doctor of Informatics / Degree No. 甲第25440号 / 情博第878号 / 新制||情||147 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / (Examiners) Prof. Toshiyuki Tanaka (chair), Prof. Hidetoshi Shimodaira, Prof. Nobuo Yamashita / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
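Since this record carries no abstract, a minimal sketch of the algorithm named in the title may help: the Gaussian-kernel mean shift iteration moves a query point to the kernel-weighted average of the sample, and the convergence of this iteration to modes of the kernel density estimate is the object of the dissertation's analysis. The data and bandwidth below are illustrative.

```python
import numpy as np

# Minimal Gaussian-kernel mean shift iteration in one dimension.  Each step
# replaces the query point by the kernel-weighted mean of the sample, which
# converges to a local mode of the kernel density estimate.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0.0, 0.5, 500), rng.normal(5.0, 0.5, 500)])

def mean_shift(y, data, bandwidth=0.5, iters=100):
    for _ in range(iters):
        w = np.exp(-0.5 * ((data - y) / bandwidth) ** 2)  # Gaussian weights
        y_next = np.dot(w, data) / np.sum(w)              # weighted mean
        if abs(y_next - y) < 1e-12:                       # converged
            break
        y = y_next
    return y

mode_left = mean_shift(-1.0, data)    # drawn toward the cluster near 0
mode_right = mean_shift(6.0, data)    # drawn toward the cluster near 5
```

Different starting points converge to different modes, which is what makes mean shift usable for clustering as well as mode seeking.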
476

An Empirical Investigation of Critical Success Factors for Continuous Improvement Projects in Hospitals

Gonzalez Aleu Gonzalez, Fernando 17 August 2016 (has links)
A continuous improvement project (CIP) is a structured improvement project using a team of people (typically representing different departments or units in the organization) working to improve a process or work area over a relatively short period of time, such as a few days or up to several months. A CIP may use different improvement methodologies and tools, and may thus be defined according to the improvement approach. For instance, an organization adopting Lean as an improvement approach is likely to have CIPs implementing Lean tools, such as 5S or value stream mapping. These projects may be referred to as Lean projects in general, although they may also represent accelerated improvement projects such as Kaizen events, Kaizen blitzes, or rapid improvement projects. Alternatively, an organization utilizing Six Sigma as an improvement approach may have Six Sigma projects that use the Define-Measure-Analyze-Improve-Control (DMAIC) process and statistical tools. Some organizations adopt an integrated improvement approach, such as Lean Six Sigma, and therefore may have CIPs with an even broader set of tools from which to choose. Lastly, many organizations may have an improvement approach not characterized by any single set of improvement processes and tools, and thus may be thought of generally as process improvement, or quality improvement, projects using a traditional methodology such as plan-do-study/check-act (PDSA or PDCA). In this dissertation, all of these types of improvement projects are referred to as CIPs. Since the 1980s, hospitals have been using CIPs to address problems such as quality in healthcare delivery, internal process efficiency, communication and coordination, and the cost of services. Some hospitals have achieved significant improvements, such as reducing the turnaround time for clinical laboratory results by 60 percent and reducing instrument decontamination and sterilization cycle time by 70 percent. 
However, as with many other companies, hospitals often experience difficulty achieving their desired level of improvements with CIPs. Therefore, the purpose of this dissertation is to identify the critical success factors (CSFs) related to CIP success. In order to achieve this goal, five objectives were pursued: creating a methodology to assess the maturity or evolution of a research field (manuscript #1), identifying a comprehensive list of CSFs for CIPs (manuscript #2), assessing the maturity of the published literature on CIPs in hospitals (manuscript #3), identifying the most important factors related to CIPs in hospitals (manuscript #4), and conducting an empirical investigation to define the CSFs for CIPs in hospital settings (manuscripts #5 and #6). This investigation was conducted in three phases: research framing, variable reduction, and model development and testing. During these phases, the researcher used the following methodologies and data collection tools: systematic literature review, maturity framework (developed as part of this dissertation), expert study, retrospective survey questionnaire, exploratory factor analysis, partial-least squares structural equation modeling, and regression modeling. A maturity framework with nine dimensions was created (manuscript #1) and applied in order to identify a list of 53 factors related to CIPs in general, involving any organization (manuscript #2). Additionally, the maturity framework was used to assess the literature available on CIPs in hospitals, considering only the authorship characteristic dimension (manuscript #3). Considering the frequency of new authors per year, the relatively new integration of research groups, and the limited set of predominant authors, the research field, or area, of CIPs in hospitals is one with opportunities for improving maturity. 
Using the systematic literature review from manuscript #3, the list of 53 factors, and the list of predominant authors, a review of the literature was conducted, along with an expert study to more fully characterize the importance of various factors (manuscript #4). A conclusion from this particular work was that it is not possible to reduce the list of 53 factors based on these results; thus, a field study using the complete comprehensive list of factors was determined to have stronger practical implications. A field study was conducted to identify factors most related to CIP perceived success (manuscript #5) and CIP goal achievement (manuscript #6). The final results and practical implications of this dissertation consist of the identification of the following CSFs for CIP success in hospitals: Goal Characteristics, Organizational Processes, Improvement Processes, and Team Operation. These CSFs include several specific factors that, to the researcher's knowledge, have not been previously studied in empirical investigations: goal development process, organizational policies and procedures, CIP progress reporting, and CIP technical documentation. Practitioners involved with CIPs, such as CIP leaders, facilitators, stakeholders/customers, and continuous improvement managers/leaders, can utilize these results to increase the likelihood of success by considering these factors in planning and conducting CIPs. / Ph. D.
477

New Methods for Synchrophasor Measurement

Zhang, Yingchen 09 February 2011 (has links)
Recent developments in smart grid technology have spawned interest in the use of phasor measurement units to help create a reliable power system transmission and distribution infrastructure. Wide-area monitoring systems (WAMSs) utilizing synchrophasor measurements can help with understanding, forecasting, or even controlling the status of power grid stability in real-time. A power system Frequency Monitoring Network (FNET) was first proposed in 2001 and was established in 2004. As a pioneering WAMS, it serves the entire North American power grid through advanced situational awareness techniques, such as real-time event alerts, accurate event location estimation, animated event visualization, and post event analysis. Traditionally, Phasor Measurement Units (PMUs) have utilized signals obtained from current transformers (CTs) to compute current phasors. Unfortunately, this requires that CTs be directly connected with buses, transformers or power lines. Chapters 2 and 3 introduce an innovative phasor measurement instrument, the Non-contact Frequency Disturbance Recorder (NFDR), which uses the magnetic and electric fields generated by power transmission lines to obtain current phasor measurements. The NFDR is developed on the same hardware platform as the Frequency Disturbance Recorder (FDR), which is actually a single phase PMU. Prototype testing of the NFDR was performed in both laboratory and field environments. Testing results show that measurement accuracy of the NFDR satisfies the requirements for power system dynamics observation. Researchers have been developing various techniques in power system phasor measurement and frequency estimation, due to their importance in reflecting system health. Each method has its own pros and cons regarding accuracy and speed. 
The DFT (Discrete Fourier Transform) based algorithm that is adopted by the FDR device is particularly suitable for tracking system dynamic changes and is immune to harmonic distortions, but it has not proven to be very robust when the input signal is polluted by random noise. Chapter 4 will discuss the Least Mean Squares-based methods for power system frequency tracking, compared with a DFT-based algorithm. Wide-area monitoring systems based on real time PMU measurements can provide great visibility to the angle instability conditions. Chapter 5 focuses on developing an early warning algorithm on the FNET platform. / Ph. D.
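The least-mean-squares idea compared against the DFT in Chapter 4 can be sketched as an adaptive in-phase/quadrature estimator of a nominal-frequency phasor: two weights track the cosine and sine components of the sampled waveform, from which amplitude and phase follow. The sample rate, step size, and signal parameters below are illustrative, not the FDR's.

```python
import numpy as np

# LMS sketch for tracking a 60 Hz phasor: adapt in-phase/quadrature weights
# so that w . ref matches the sampled waveform (illustrative parameters).
fs, f0 = 1440.0, 60.0            # sample rate and nominal frequency, Hz
n = np.arange(2880)              # two seconds of samples
amp, phase = 1.5, 0.6            # "unknown" signal parameters to recover
x = amp * np.cos(2 * np.pi * f0 * n / fs + phase)

w = np.zeros(2)                  # adaptive weights [in-phase, quadrature]
mu = 0.05                        # LMS step size
for k in n:
    theta = 2 * np.pi * f0 * k / fs
    ref = np.array([np.cos(theta), -np.sin(theta)])
    err = x[k] - w @ ref         # instantaneous estimation error
    w = w + 2 * mu * err * ref   # LMS weight update

amp_est = np.hypot(w[0], w[1])   # recovered phasor magnitude
phase_est = np.arctan2(w[1], w[0])  # recovered phasor angle
```

Unlike a fixed-window DFT, the weights adapt sample by sample, which is what gives LMS-type trackers their robustness to noise at the cost of a convergence transient.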
478

Non-invasive Estimation of Skin Chromophores Using Hyperspectral Imaging

Karambor Chakravarty, Sriya 07 March 2024 (has links)
Melanomas account for more than 1.7% of global cancer diagnoses and about 1% of all skin cancer diagnoses in the United States. This type of cancer occurs in the melanin-producing cells in the epidermis and exhibits distinctive variations in melanin and blood concentration values in the form of skin lesions. The current approach for evaluating skin cancer lesions involves visual inspection with a dermatoscope, typically followed by biopsy and histopathological analysis. However, decreasing the risk of misdiagnosis in this process requires invasive biopsies, contributing to the emotional and financial distress of patients. The implementation of a non-invasive imaging technique to aid the analysis of skin lesions in the early stages can potentially mitigate these consequences. Hyperspectral imaging (HSI) has shown promise as a non-invasive technique to analyze skin lesions. Images taken of human skin using a hyperspectral camera are a result of numerous elements in the skin. Being a turbid, inhomogeneous material, the skin has chromophores and scattering agents, which interact with light and produce characteristic back-scattered energy that can be harnessed and examined with an HSI camera. To achieve this, this study uses a mathematical model of the skin to extract meaningful information from the hyperspectral data in the form of parameters such as melanin concentration, blood volume fraction and blood oxygen saturation in the skin. The human skin is modelled as a bi-layer planar system, whose surface reflectance is theoretically calculated using the Kubelka-Munk theory and the absorption laws of Beer and Lambert. The model is evaluated for its sensitivity to the parameters and then fitted to measured hyperspectral data of four volunteer subjects in different conditions. Mean values of melanin, blood volume fraction and oxygen saturation obtained for each of the subjects are reported and compared with theoretical values from literature. 
Sensitivity analysis revealed that the wavelengths and wavelength groups producing the maximum change in the model's computed percentage reflectance were 450 and 660 nm for melanin, 500–520 nm and 590–625 nm for blood volume fraction, and 606, 646 and 750 nm for blood oxygen saturation. / Master of Science / Melanoma, the most serious type of skin cancer, develops in the melanin-producing cells in the epidermis. A characteristic marker of skin lesions is the abrupt variations in melanin and blood concentration in areas of the lesion. The present technique to inspect skin cancer lesions involves dermatoscopy, which is a qualitative visual analysis of the lesion’s features using a few standardized techniques such as the 7-point checklist and the ABCDE rule. Typically, dermatoscopy is followed by a biopsy and then a histopathological analysis of the biopsy. To reduce the possibility of misdiagnosing actual melanomas, a considerable number of dermoscopically unclear lesions are biopsied, increasing emotional, financial, and medical consequences. A non-invasive imaging technique to analyze skin lesions during the dermoscopic stage can help alleviate some of these consequences. Hyperspectral imaging (HSI) is a promising methodology to non-invasively analyze skin lesions. Images taken of human skin using a hyperspectral camera are a result of numerous elements in the skin. Being a turbid, inhomogeneous material, the skin has chromophores and scattering agents, which interact with light and produce characteristic back-scattered energy that can be harnessed and analyzed with an HSI camera. In this study, a mathematical model of the skin is used to extract meaningful information from the hyperspectral data in the form of melanin concentration, blood volume fraction and blood oxygen saturation. The mean and standard deviation of these estimates are reported and compared with theoretical values from the literature. 
The model is also evaluated for its sensitivity with respect to these parameters to identify the most relevant wavelengths.
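The Beer-Lambert ingredient of such a model reduces, for a fixed optical path length, to a linear least-squares problem for the chromophore concentrations, from which oxygen saturation follows as a ratio. The extinction curves below are invented placeholders, not tabulated hemoglobin spectra, and the sketch ignores the Kubelka-Munk scattering layer entirely.

```python
import numpy as np

# Hedged sketch of the Beer-Lambert step: with illustrative (not tabulated)
# extinction spectra for oxy- and deoxy-hemoglobin, recover concentrations,
# and hence oxygen saturation, from attenuation data by least squares.
wl = np.linspace(500, 600, 51)                     # wavelength grid, nm
eps_oxy = 1.0 + 0.8 * np.sin((wl - 500) / 18)      # made-up extinction curves
eps_deoxy = 1.2 + 0.5 * np.cos((wl - 500) / 25)
E = np.column_stack([eps_oxy, eps_deoxy])

c_true = np.array([0.8, 0.2])                      # oxy, deoxy concentrations
path = 1.0                                         # optical path length
attenuation = path * (E @ c_true)                  # Beer-Lambert: A = L * E c

c_est, *_ = np.linalg.lstsq(E * path, attenuation, rcond=None)
so2 = c_est[0] / c_est.sum()                       # oxygen saturation
```

The sensitivity analysis in the abstract is essentially asking where the columns of E differ most, since those wavelengths carry the most information about each parameter.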
479

Investigative Application of the Intrinsic Extended Finite Element Method for the Computational Characterization of Composite Materials

Fave, Sebastian Philipp 05 September 2014 (has links)
Computational micromechanics analysis of carbon nanotube-epoxy nanocomposites, containing aligned nanotubes, is performed using the mesh independent intrinsic extended finite element method (IXFEM). The IXFEM employs a localized intrinsic enrichment strategy to treat arbitrary discontinuities defined through the level-set method separate from the problem domain discretization, i.e. the finite element (FE) mesh. A global domain decomposition identifies local subdomains for building distinct partitions of unity that appropriately suit the approximation. Specialized inherently enriched shape functions, constructed using the moving least squares method, enhance the approximation space in the vicinity of discontinuity interfaces, maintaining accuracy of the solution, while standard FE shape functions are used elsewhere. Comparison of the IXFEM in solving validation problems with strong and weak discontinuities against a standard finite element method (FEM) and analytic solutions validates the enriched intrinsic bases, and shows anticipated trends in the error convergence rates. Applying the IXFEM to model composite materials, through a representative volume element (RVE), the filler agents are defined as individual weak bimaterial interfaces. Through a series of RVE studies, calculating the effective elastic material properties of carbon nanotube-epoxy nanocomposite systems, the benefits in substituting the conventional mesh dependent FEM with the mesh independent IXFEM when completing micromechanics analysis, investigating effects of high filler count or an evolving microstructure, are demonstrated. / Master of Science
480

Parametric Dynamical Systems: Transient Analysis and Data Driven Modeling

Grimm, Alexander Rudolf 02 July 2018 (has links)
Dynamical systems are a commonly used and studied tool for simulation, optimization and design. In many applications such as inverse problems, optimal control, shape optimization and uncertainty quantification, those systems typically depend on a parameter. The need for high fidelity in the modeling stage leads to large-scale parametric dynamical systems. Since these models need to be simulated for a variety of parameter values, the computational burden they incur becomes increasingly difficult to manage. To address these issues, parametric reduced models have gained increased popularity in recent years. We are interested in constructing parametric reduced models that represent the full-order system accurately over a range of parameters. First, we define a global joint error measure in the frequency and parameter domain to assess the accuracy of the reduced model. Then, by assuming a rational form for the reduced model with poles both in the frequency and parameter domain, we derive necessary conditions for an optimal parametric reduced model in this joint error measure. Similar to the nonparametric case, Hermite interpolation conditions at the reflected images of the poles characterize the optimal parametric approximant. This result extends the well-known interpolatory H2 optimality conditions by Meier and Luenberger to the parametric case. We also develop a numerical algorithm to construct locally optimal reduced models. The theory and algorithm are data-driven, in the sense that only function evaluations of the parametric transfer function are required, not access to the internal dynamics of the full model. While this first framework operates on the continuous function level, assuming repeated transfer function evaluations are available, in some cases merely frequency samples might be given without an option to re-evaluate the transfer function at desired points; in other words, the function samples in parameter and frequency are fixed. 
In this case, we construct a parametric reduced model that minimizes a discretized least-squares error in the finite set of measurements. Towards this goal, we extend Vector Fitting (VF) to the parametric case, solving a global least-squares problem in both frequency and parameter. The output of this approach might lead to a moderate-size reduced model. In this case, we perform a post-processing step to reduce the output of the parametric VF approach using H2 optimal model reduction for a special parametrization. The final model inherits the parametric dependence of the intermediate model, but is of smaller order. A special case of a parameter in a dynamical system is a delay in the model equation, e.g., arising from a feedback loop, reaction time, delayed response and various other physical phenomena. Modeling such a delay comes with several challenges for the mathematical formulation, analysis, and solution. We address the issue of transient behavior for scalar delay equations. Besides the choice of an appropriate measure, we analyze the impact of the coefficients of the delay equation on the finite time growth, which can be arbitrarily large purely by the influence of the delay. / Ph. D. / Mathematical models play an increasingly important role in the sciences for experimental design, optimization and control. These high fidelity models are often computationally expensive and may require large resources, especially for repeated evaluation. Parametric model reduction offers a remedy by constructing models that are accurate over a range of parameters, and yet are much cheaper to evaluate. An appropriate choice of quality measure and form of the reduced model enables us to characterize these high quality reduced models. Our first contribution is a characterization of optimal parametric reduced models and an efficient implementation to construct them. 
While this first framework assumes we have access to repeated evaluations of the full model, in some cases merely measurement data might be available. In this case, we construct a parametric model that fits the measurements in a least squares sense. The output of this approach might lead to a moderate-size reduced model, which we address with a post-processing step that reduces the model size while maintaining important properties. A special case of a parameter is a delay in the model equation, e.g., arising from a feedback loop, reaction time, delayed response and various other physical phenomena. While asymptotically stable solutions eventually vanish, they might grow large before asymptotic behavior takes over; this leads to the notion of transient behavior, which is our main focus for a simple class of delay equations. Besides the choice of an appropriate measure, we analyze the impact of the structure of the delay equation on the transient growth, which can be arbitrarily large purely by the influence of the delay.
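The linear half of a vector-fitting-style step — identifying residues and a constant term once the poles are held fixed — is itself a least-squares problem in the frequency samples, which this sketch illustrates with made-up data. Actual VF iterates a pole-relocation step around this linear solve, and the parametric extension described above couples such problems across parameter values.

```python
import numpy as np

# Residue identification with fixed poles: fit H(s) ~ d + sum_k r_k / (s - p_k)
# to frequency samples by linear least squares (illustrative data, not VF's
# full pole-relocation iteration).
poles = np.array([-1.0 + 5j, -1.0 - 5j, -3.0 + 0j])
res_true = np.array([2.0 - 1j, 2.0 + 1j, 4.0 + 0j])
d_true = 0.5

s = 1j * np.linspace(0.1, 20, 200)                 # samples on the imaginary axis
H = d_true + sum(r / (s - p) for r, p in zip(res_true, poles))

# One column per partial fraction at a fixed pole, plus the constant term.
A = np.column_stack([1.0 / (s - p) for p in poles] + [np.ones_like(s)])
coef, *_ = np.linalg.lstsq(A, H, rcond=None)
res_est, d_est = coef[:-1], coef[-1].real
```

Because the unknowns enter linearly for fixed poles, this step is cheap and well understood; the difficulty in VF, and especially in its parametric extension, lies in choosing and updating the poles.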
