31

Effect Of Shear Walls On The Behavior Of Reinforced Concrete Buildings Under Earthquake Loading

Comlekoglu, Hakki Gurhan, 01 December 2009
An analytical study was performed to evaluate the effect of shear wall ratio on the dynamic behavior of mid-rise reinforced concrete structures. The primary aim of this study is to examine the influence of the shear wall area to floor area ratio on the dynamic performance of a building. The effects of shear wall configuration and of the area of existing columns on the seismic performance of the buildings were also investigated. For this purpose, twenty-four mid-rise building models with five and eight stories and shear wall ratios ranging between 0.51 and 2.17 percent in both directions were generated. These building models were examined by carrying out nonlinear time-history analyses using PERFORM 3D. The analytical model used in this study was verified by comparing the analytical results with the experimental results of a full-scale seven-story reinforced concrete shear wall building tested for the U.S.-Japan Cooperative Research Program in 1981. In the analyses, seven different ground motion time histories were used, and the resulting data were averaged and used to evaluate seismic performance. The main parameters affecting the overall performance were taken as the roof and interstory drifts, their distribution throughout the structure, and the base shear characteristics. The analytical results indicated that a shear wall ratio of at least 1.0 percent should be provided in the design of mid-rise buildings in order to control drift. In addition, when the shear wall ratio was increased beyond 1.5 percent, the improvement in seismic performance was not as significant.
32

Vertical Ground Motion Influence On Seismically Isolated & Unisolated Bridges

Reyhanogullari, Naim Eser, 01 April 2010
In this study, the influence of vertical ground motion on seismically isolated bridges was investigated for seven different earthquake records. One assessment of the bearing effect involves the calculation of the vertical earthquake load on seismically isolated bridges. This work investigates the influence of vertical earthquake excitation on the response of steel girder composite bridges (SCBs) with and without seismic isolation, using specifically selected earthquakes. The bridge is composed of three 30 m long spans, with concrete double piers at each axis supported by mat foundations on pile systems. Concrete abutments at both ends of the spans support the superstructure of the bridge. SCBs seismically isolated with nine different, commonly preferred lead-rubber bearings (LRBs) under each steel girder were analyzed, and comparisons were made with an SCB without seismic isolation. Initially, a preliminary design was made and reasonable sections for the bridge were obtained. The steel girder bridge sections were then checked against AASHTO provisions, and the analytical model was updated accordingly. Earthquake records were taken as the main loading sources; hence, both cases were exposed to tri-axial earthquake loads in order to understand their effects under such circumstances. Seven near-fault earthquake records were selected by considering directivity. Several runs using the chosen earthquakes were performed in order to derive satisfactory comparisons between the different types of isolators. Analytical calculations were conducted using the well-known structural analysis software SAP2000. Nonlinear time history analysis was performed using the analytical model of the bridge with and without seismic isolation. The response data were used to determine the vertical load on the piers and the midspan moment of the middle span of the steel girders due to the vertical and horizontal components of excitation. Comparisons of the effects of horizontal-only and horizontal-plus-vertical earthquake loads were presented.
33

Flood forecasting using time series data mining

Damle, Chaitanya, 01 June 2005
Earthquakes, floods, and rainfall represent a class of nonlinear systems termed chaotic, in which the relationships between the variables in a system are dynamic and disproportionate, yet completely deterministic. Classical linear time series models have proved inadequate for the analysis and prediction of complex geophysical phenomena. Nonlinear approaches such as Artificial Neural Networks, Hidden Markov Models, and Nonlinear Prediction are useful for forecasting daily discharge values in a river. The focus of these methods is on forecasting the magnitudes of future discharge values, not on the prediction of floods. Chaos theory provides a structured explanation for irregular behavior and anomalies in systems that are not inherently stochastic. The Time Series Data Mining methodology combines chaos theory and data mining to characterize and predict complex, nonperiodic, and chaotic time series; its focus is on the prediction of events.
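Event-focused chaos methods of this kind typically begin with phase-space reconstruction by time-delay embedding; a minimal illustrative sketch (the delay tau and dimension m are assumed chosen elsewhere, e.g. by mutual information and false-nearest-neighbor criteria, and are not the thesis's specific choices):

```python
import numpy as np

def delay_embed(x, m, tau):
    """Reconstruct a phase space from a scalar series x using
    time-delay embedding: vectors (x[t], x[t+tau], ..., x[t+(m-1)*tau])."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Toy usage: embed a noisy sine series in 3 dimensions.
x = np.sin(np.linspace(0, 50, 2000)) + 0.05 * np.random.randn(2000)
vectors = delay_embed(x, m=3, tau=10)
print(vectors.shape)  # (1980, 3)
```

Events (e.g. floods) can then be characterized as regions of the reconstructed space that precede threshold exceedances, rather than by forecasting raw magnitudes.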
34

Fuzzy Statistical Analysis for Change Periods Detection in Nonlinear Time Series

陳美惠, Unknown Date
Many papers have been published on the detection of change points. Nonetheless, we would like to point out that, in dealing with time series with switching regimes, we should also take the characteristics of change periods into account. Because many patterns of change structure in time series exhibit a certain duration, those phenomena should not be treated as a mere sudden turn at a certain time. In this paper, we propose procedures for detecting change periods in nonlinear time series. One of the detection methods is an application of fuzzy classification and a generalization of Inclan and Tiao's result. Moreover, we develop a genetic-based search procedure built on the concepts of the leading genetic model. Simulation results show that these procedures are efficient and successful. Finally, two empirical applications of change period detection, to Taiwan's monthly visitor arrivals and to exchange rates, are demonstrated.
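For reference, the Inclan and Tiao result being generalized is their centered cumulative sum of squares statistic; a sketch of its standard form (notation assumed, not taken from this paper):

```latex
% Centered cumulative sum of squares statistic (Inclan & Tiao, 1994),
% for a series a_1, ..., a_T with C_k = \sum_{t=1}^{k} a_t^2:
\[
  D_k = \frac{C_k}{C_T} - \frac{k}{T}, \qquad k = 1, \dots, T .
\]
% A variance change is flagged where \sqrt{T/2}\,\max_k |D_k| exceeds the
% critical value derived from the supremum of a Brownian bridge.
```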
35

Optimized Distribution of Strength in Buckling-Restrained Brace Frames in Tall Buildings

Oxborrow, Graham Thomas, 02 July 2009
Nonlinear time history analysis is increasingly being used in the design of tall steel structures, but member sizes must still be determined by a designer before an analysis can be performed. Often the distribution of story strength is still based on an assumed first-mode response as determined from the Equivalent Lateral Force (ELF) procedure. For tall buckling-restrained braced frames (BRBFs), two questions remain unanswered: what brace distribution minimizes total brace area while satisfying story drift and ductility limits, and is the ELF procedure an effective approximation of that distribution? In order to investigate these issues, an optimization algorithm was incorporated into the OpenSees dynamic analysis platform. The resulting program uses a genetic algorithm to determine optimum designs that satisfy prescribed drift and ductility limits during nonlinear time history analyses. The computer program was used to investigate the optimized distribution of brace strength in BRBFs of different heights. The results of the study provide insight into the efficient design of tall buildings in high seismic areas and evaluate the effectiveness of the ELF procedure.
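A minimal sketch of the genetic-algorithm loop described, with a hypothetical stand-in for the nonlinear time-history evaluation (the real program couples the search to OpenSees; `evaluate` and its dummy feasibility check are illustrative assumptions, not the thesis's code):

```python
import random

def evaluate(brace_areas):
    """Hypothetical stand-in for one nonlinear time-history analysis
    (the real program runs OpenSees). Returns (total_area, feasible),
    where feasible means all story drift and ductility limits passed."""
    total_area = sum(brace_areas)
    feasible = max(brace_areas) / min(brace_areas) < 4.0  # dummy check
    return total_area, feasible

def score(ind):
    area, ok = evaluate(ind)
    return area if ok else area + 1e6  # heavy penalty if limits violated

def genetic_search(n_stories, pop_size=40, generations=100):
    pop = [[random.uniform(1.0, 10.0) for _ in range(n_stories)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score)
        parents = pop[: pop_size // 2]             # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_stories)
            child = a[:cut] + b[cut:]              # one-point crossover
            i = random.randrange(n_stories)
            child[i] *= random.uniform(0.8, 1.25)  # mutate one story
            children.append(child)
        pop = parents + children
    return min(pop, key=score)

best = genetic_search(n_stories=12)
print(best, score(best))
```

The penalty term steers the search toward designs that pass the drift/ductility checks while minimizing total brace area, mirroring the two-sided objective the abstract describes.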
36

Addressing nonlinear systems with information-theoretical techniques

Castelluzzo, Michele, 07 July 2023
The study of experimental recordings of dynamical systems often consists in the analysis of signals produced by those systems. Time series analysis comprises a wide range of methodologies ultimately aimed at characterizing the signals and, eventually, gaining insight into the underlying processes that govern the evolution of the system. A standard way to tackle this issue is spectrum analysis, which uses Fourier or Laplace transforms to convert time-domain data into a more useful frequency space. These analytical methods highlight periodic patterns in the signal and reveal essential characteristics of linear systems. Most experimental signals, however, exhibit strange and apparently unpredictable behavior, which requires more sophisticated analytical tools to gain insight into the nature of the underlying processes generating those signals. This is the case when nonlinearity enters the dynamics of a system. Nonlinearity gives rise to unexpected and fascinating behavior, including the emergence of deterministic chaos. In recent decades, chaos theory has become a thriving field of research for its potential to explain complex and seemingly inexplicable natural phenomena. The peculiarity of chaotic systems is that, despite being governed by deterministic principles, their evolution shows unpredictable behavior and a lack of regularity. These characteristics make standard techniques, like spectrum analysis, ineffective for studying such systems. Furthermore, the irregular behavior gives the appearance that these signals are governed by stochastic processes, even more so when dealing with experimental signals that are inevitably affected by noise.
Nonlinear time series analysis comprises a set of methods which aim at overcoming the strange and irregular evolution of these systems by measuring characteristic invariant quantities that describe the nature of the underlying dynamics. Among those quantities, the most notable are possibly the Lyapunov exponents, which quantify the unpredictability of the system, and measures of dimension, like the correlation dimension, which unravel the peculiar geometry of a chaotic system's state space. These are ultimately analytical techniques, which can often be estimated exactly for simulated systems, where the differential equations governing the system's evolution are known, but which can prove difficult or even impossible to compute on experimental recordings.
A different approach to signal analysis is provided by information theory. Despite being initially developed in the context of communication theory, through the seminal work of Claude Shannon in 1948, information theory has since become a multidisciplinary field, finding applications in biology and neuroscience as well as in the social sciences and economics. From the physical point of view, the most phenomenal contribution of Shannon's work was the discovery that entropy is a measure of information, and that computing the entropy of a sequence, or a signal, answers the question of how much information is contained in the sequence. Alternatively, considering the source, i.e. the system, that generates the sequence, entropy gives an estimate of how much information the source is able to produce. Information theory thus comprises a set of techniques which can be applied to the study of, among other things, dynamical systems, offering a framework complementary to standard signal analysis techniques.
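As a concrete reference for the entropy-as-information idea above, a minimal plug-in estimator sketch (the equal-width binning and bin count are illustrative assumptions, not the thesis's choices):

```python
import numpy as np

def shannon_entropy(x, n_bins=16):
    """Plug-in estimate of Shannon entropy H = -sum p_i log2 p_i,
    after discretizing the signal x into n_bins equal-width bins."""
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
print(shannon_entropy(rng.uniform(size=10_000)))  # near log2(16) = 4 bits
print(shannon_entropy(np.full(10_000, 0.5)))      # degenerate source: 0 bits
```

The uniform source approaches the 4-bit maximum for 16 bins, while the constant source carries no information; the finite-sample uncertainty of such estimates is exactly the kind of fundamental issue the thesis addresses.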
The concept of entropy, however, was not new to physics: it had first been defined in the deeply physical context of heat exchange in thermodynamics in the 19th century. Half a century later, in the context of statistical mechanics, Boltzmann revealed the probabilistic nature of entropy, expressing it in terms of statistical properties of the particles' motion in a thermodynamic system. A first link between entropy and the dynamical evolution of a system was thus made. In the years following Shannon's work, the concept of entropy was developed further, in the works of, to cite only a few, Von Neumann and Kolmogorov, into a tool for computer science and complexity theory. In Kolmogorov's work in particular, information theory and entropy are revisited from an algorithmic perspective: given an input sequence and a universal Turing machine, Kolmogorov found that the length of the shortest set of instructions, i.e. the program, that enables the machine to reproduce the input sequence is related to the sequence's entropy. This definition of the complexity of a sequence already hints at the difference between random and deterministic signals: a truly random sequence requires as many instructions for the machine as the size of the input sequence itself, since there is no option other than programming the machine to copy the sequence point by point. A sequence generated by a deterministic system, on the other hand, simply requires knowing the rules governing its evolution, for example the equations of motion in the case of a dynamical system. It is through the work of Kolmogorov, and independently of Sinai, that entropy came to be applied directly to the study of dynamical systems and, in particular, deterministic chaos. The so-called Kolmogorov-Sinai entropy is, in fact, a well-established measure of how complex and unpredictable a dynamical system can be, based on the analysis of trajectories in its state space.
In recent decades, the use of information theory in signal analysis has contributed to the elaboration of many entropy-based measures, such as sample entropy, transfer entropy, mutual information, and permutation entropy, among others. These quantities make it possible to characterize not only single dynamical systems but also the correlations between systems, and even more complex interactions like synchronization and chaos transfer. The wide spectrum of applications of these methods, as well as the need for theoretical studies to provide them with a sound mathematical background, keeps information theory a thriving topic of research.
In this thesis, I approach the use of information theory on dynamical systems starting from fundamental issues, such as estimating the uncertainty of Shannon entropy measures on a sequence of data in the case of an underlying memoryless stochastic process. This result, besides giving insight into subtle and still-unsolved aspects of entropy-based measures, provides a relation between the maximum uncertainty of Shannon entropy estimates and the size of the available sequences, thus serving as a practical rule for experiment design. Furthermore, I investigate the relation between entropy and certain characteristic quantities of nonlinear time series analysis, namely the Lyapunov exponents. Examples of this analysis on recordings of a nonlinear chaotic system are also provided.
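The entropy-Lyapunov relation investigated here is commonly anchored to Pesin's identity; as a hedged textbook reference (the standard statement, not the thesis's own derivation):

```latex
% Pesin's identity: for suitable (e.g. SRB) invariant measures, the
% Kolmogorov-Sinai entropy equals the sum of positive Lyapunov exponents:
\[
  h_{\mathrm{KS}} \;=\; \sum_{i \,:\, \lambda_i > 0} \lambda_i .
\]
% More generally, Ruelle's inequality guarantees that h_{KS} is at most
% the sum of the positive Lyapunov exponents.
```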
Finally, I discuss other entropy-based measures, among them mutual information, and how they compare to analytical techniques aimed at characterizing nonlinear correlations between experimental recordings. In particular, the complementarity between information-theoretic and analytical tools is shown on experimental data from the field of neuroscience, namely magnetoencephalography and electroencephalography recordings, as well as on meteorological data.
37

Electrochemical studies of external forcing of periodic oscillating systems and fabrication of coupled microelectrode array sensors

Clark, David, 01 May 2020
This dissertation describes the electrochemical behavior of nickel and iron, studied in different acid solutions via linear sweep voltammetry, cyclic voltammetry, and potentiostatic measurements over a range of temperatures at specific potential ranges. The presented work describes novel experiments in which a nickel electrode was heated locally with an inductive heating system, and a platinum (Pt) electrode was used to change the proton concentration at iron and nickel electrode surfaces, in order to control the frequency and amplitude of the periodic oscillations produced and to gain a greater understanding of the systems' kinetics, oscillatory processes, and corrosion processes. Temperature pulse voltammetry, linear sweep voltammetry, and cyclic voltammetry were used for temperature calibration at different heating conditions. Several other metal systems (bismuth, lead, zinc, and silver) also produce periodic oscillations as corrosion occurs; however, creating these with pure metal electrodes is very expensive. In this work, metal systems were instead created via electrodeposition, using inexpensive, efficient, coupled microelectrode array sensors (CMASs) as a substrate. CMASs are integrated devices with multiple electrodes connected externally in a circuit, in which all of the electrodes have the same potential applied or current passing through them. CMASs have been used for many years to study different forms of corrosion (crevice, pitting, intergranular, and galvanic corrosion), and they are beneficial because they can simulate single electrodes of the same size. The presented work also demonstrates how to construct CMASs and shows that the unique phenomenon of periodic oscillations can be created and studied using coated and bare copper CMASs. Furthermore, these systems can be controlled by implementing external forcing with a Pt electrode at the CMAS surface. The data from the single Ni electrode experiments and the CMAS experiments were analyzed using a nonlinear time-series analysis approach.
38

Next generation seismic fragility curves for California bridges incorporating the evolution in seismic design philosophy

Ramanathan, Karthik Narayan, 02 July 2012
Quantitative and qualitative assessment of the seismic risk to highway bridges is crucial for pre-earthquake planning and post-earthquake response of transportation systems. Such assessments provide valuable knowledge about a number of principal effects of earthquakes, such as traffic disruption of the overall highway system, impact on the region's economy, and post-earthquake response and recovery, and more recently they serve as measures of resilience. Unlike previous work, this study captures unique bridge design attributes specific to California bridge classes, along with their evolution over three significant design eras separated by the historic 1971 San Fernando and 1989 Loma Prieta earthquakes (events that prompted changes in bridge seismic design philosophy). This research developed next-generation fragility curves for four multispan concrete bridge classes by synthesizing new knowledge and emerging modeling capabilities, and by closely coordinating new and ongoing national research initiatives with expertise from bridge designers. A multi-phase framework was developed for generating fragility curves, which provides decision makers with essential tools for emergency response, design, planning, policy support, and maximizing investments in bridge retrofit. This framework encompasses generational changes in bridge design and construction details. Parameterized high-fidelity three-dimensional nonlinear analytical models were developed for the portfolios of bridge classes within the different design eras. These models incorporate a wide range of geometric and material uncertainties, and their responses were characterized under seismic loadings. Fragility curves were then developed considering the vulnerability of multiple components, thereby helping to quantify the performance of highway bridge networks and to study the impact of seismic design principles on performance within a bridge class. This not only leads to fragility relations that are unique and better suited to bridges in California, but also to bridge classes and sub-bins with more consistent performance characteristics than those currently provided by the National Bridge Inventory. Another important feature of this research is the development of damage state definitions and the grouping of bridge components such that they have similar consequences in terms of repair and traffic implications following a seismic event. These definitions are aligned with the California Department of Transportation's design and operational experience, thereby enabling better performance assessment, emergency response, and management in the aftermath of a seismic event. The fragility curves developed as part of this research will be employed in ShakeCast, a web-based post-earthquake situational awareness application that automatically retrieves earthquake shaking data and generates potential damage assessment notifications for emergency managers and responders. (Errata added at request of advisor and approved by the Graduate Office, March 15, 2016.)
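Fragility curves of this kind are conventionally written as lognormal functions of a ground motion intensity measure; a generic sketch of that standard form (symbols are assumptions, not taken from the thesis):

```latex
% Standard lognormal fragility model: probability that a component or
% bridge system reaches or exceeds damage state ds, given an intensity
% measure IM (e.g. peak ground acceleration):
\[
  P(\mathrm{DS} \ge ds \mid \mathrm{IM}) \;=\;
  \Phi\!\left( \frac{\ln(\mathrm{IM}/\theta_{ds})}{\beta_{ds}} \right),
\]
% where \Phi is the standard normal CDF, \theta_{ds} is the median
% capacity, and \beta_{ds} is the lognormal dispersion.
```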
39

Employing nonlinear time series analysis tools with stable clustering algorithms for detecting concept drift on data streams

Costa, Fausto Guzzo da, 17 August 2017
Several industrial, scientific, and commercial processes produce open-ended sequences of observations referred to as data streams. We can understand the phenomena responsible for such streams by analyzing the data in terms of their inherent recurrences and behavior changes. Recurrences support the inference of more stable models, which are, however, invalidated by behavior changes. External influences, such as new investments and market policies impacting stocks, or human intervention in the climate, are regarded as the main agents acting on the underlying phenomena to produce such modifications over time. In the context of Machine Learning, a vast research branch investigates the detection of such behavior changes, also referred to as concept drifts. By detecting drifts, one can indicate the best moments to update models, thereby improving prediction results and the understanding, and eventually the control, of other influences governing the data stream. There are two main concept drift detection paradigms: the first based on supervised, and the second on unsupervised, learning algorithms. The former faces great difficulties due to the infeasibility of labeling when streams are produced at high frequencies and in large volumes. The latter lacks the theoretical foundations to provide detection guarantees. In addition, both paradigms fail to adequately represent temporal dependencies among data observations. In this context, we introduce a novel approach to detect concept drifts by tackling two deficiencies of both paradigms: i) the instability involved in data modeling, and ii) the lack of time dependency representation. Our unsupervised approach is motivated by Carlsson and Mémoli's theoretical framework, which ensures a stability property for hierarchical clustering algorithms with respect to data permutation. To take full advantage of this framework, we employed Takens' embedding theorem to make data statistically independent after being mapped to phase spaces. Independent data were then grouped using the Permutation-Invariant Single-Linkage Clustering Algorithm (PISL), an adapted version of the agglomerative Single-Linkage algorithm that respects the stability property proposed by Carlsson and Mémoli. Our algorithm outputs dendrograms (seen as data models), which are proven to be equivalent to ultrametric spaces, so that concept drifts can be detected by comparing consecutive ultrametric spaces using the Gromov-Hausdorff (GH) distance. As a result, model divergences are indeed associated with data changes. We performed two main experiments to compare our approach to others from the literature, one considering abrupt changes and the other gradual changes. The results confirm that our approach is capable of detecting concept drifts, both abrupt and gradual, and that it is best suited to complicated scenarios. The main contributions of this thesis are: i) the use of Takens' embedding theorem as a tool to provide statistical independence to data streams; ii) the implementation of PISL in conjunction with the GH distance (called PISLGH); iii) a comparison of detection algorithms in different scenarios; and, finally, iv) an R package (called streamChaos) that provides tools for processing nonlinear data streams as well as algorithms to detect concept drifts.
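A minimal sketch of the detection pipeline under stated simplifications: scipy's standard single-linkage stands in for the thesis's PISL variant, and the GH distance between two same-size models is replaced by half the sup-norm difference of their ultrametric matrices, which upper-bounds the GH distance via the identity correspondence:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import squareform

def takens_embed(x, m=3, tau=1):
    """Time-delay embedding of a scalar window into m-dimensional vectors."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def ultrametric(points):
    """Single-linkage dendrogram as an ultrametric distance matrix:
    u(i, j) = merge height at which i and j first join one cluster."""
    Z = linkage(points, method="single")
    return squareform(cophenet(Z))

def drift_score(window_a, window_b, m=3, tau=1):
    """Half the sup-norm difference between consecutive ultrametrics,
    a crude upper-bound proxy for the Gromov-Hausdorff distance."""
    ua = ultrametric(takens_embed(window_a, m, tau))
    ub = ultrametric(takens_embed(window_b, m, tau))
    return 0.5 * np.abs(ua - ub).max()

# Toy stream: a regime switch halfway through.
rng = np.random.default_rng(1)
stream = np.concatenate([np.sin(np.arange(300) * 0.2),
                         rng.normal(size=300)])
windows = [stream[i : i + 100] for i in range(0, 500, 100)]
scores = [drift_score(windows[k], windows[k + 1])
          for k in range(len(windows) - 1)]
print(scores)  # the score peaks where the regime switches
```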
40

HIGH FREQUENCY DATA AND PRICE-MAKING PROCESS ANALYSIS: THE EXPONENTIAL MULTIVARIATE AUTOREGRESSIVE CONDITIONAL MODEL - EMACM

GUSTAVO SANTOS RAPOSO, 04 July 2006
The availability of high-frequency financial transaction data (price, spread, volume, and duration) has contributed to the growing number of scientific articles on this topic. The first proposals were limited to pure duration models. Later, the impact of duration on instantaneous volatility was analyzed. More recently, Manganelli (2002) included volume in a vector model. In this document, we extend his work by including the bid-ask spread in the analysis through a vector autoregressive model. The conditional means of spread, volume, and duration, along with the volatility of returns, evolve through transaction events based on an exponential formulation we call the Exponential Multivariate Autoregressive Conditional Model (EMACM). In our proposal, there are no constraints on the parameters of the VAR model. This facilitates maximum likelihood estimation of the model and allows the use of simple likelihood ratio hypothesis tests to specify the model and obtain clues about the interdependency structure of the variables. In parallel, the problem of stock price forecasting is addressed through an integrated approach in which, besides the modeling of high-frequency financial data, a contemporaneous ordered probit model is used. Here, EMACM captures the dynamics that the high-frequency variables present, and its forecasting function is taken as a proxy for the contemporaneous information needed by the pricing model.
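A hedged sketch of the kind of exponential formulation meant here, which keeps conditional means positive without parameter constraints (notation is assumed, not taken from the thesis):

```latex
% Let x_i collect the observed spread, volume, duration and squared return
% at transaction i, and let \psi_i be the vector of their conditional means.
% An exponential multivariate ACD-type recursion takes the form
\[
  \ln \psi_i \;=\; \omega + \sum_{j=1}^{p} A_j\, x_{i-j}
                 + \sum_{j=1}^{q} B_j \ln \psi_{i-j},
\]
% where \omega is a vector and A_j, B_j are unrestricted coefficient
% matrices; positivity of \psi_i follows from the log link, which is why
% no parameter constraints are needed for maximum likelihood estimation.
```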
