61 |
Automated construction of generalized additive neural networks for predictive data mining / Jan Valentine du Toit. Du Toit, Jan Valentine, January 2006 (has links)
In this thesis Generalized Additive Neural Networks (GANNs) are studied in the context of predictive Data Mining. A GANN is a novel neural network implementation of a Generalized Additive Model. Originally GANNs were constructed interactively by considering partial residual plots.
This methodology involves subjective human judgment, is time-consuming, and can yield suboptimal models. The newly developed automated construction algorithm addresses these difficulties by
performing model selection based on an objective model selection criterion. Partial residual plots
are only utilized after the best model is found to gain insight into the relationships between inputs
and the target. Models are organized in a search tree with a greedy search procedure that identifies
good models in a relatively short time. The automated construction algorithm, implemented
in the powerful SAS® language, is nontrivial, effective, and comparable to other model selection
methodologies found in the literature. This implementation, which is called AutoGANN, has a
simple, intuitive, and user-friendly interface. The AutoGANN system is further extended with an
approximation to Bayesian Model Averaging. This technique accounts for uncertainty about the
variables that must be included in the model and uncertainty about the model structure. Model
averaging utilizes in-sample model selection criteria and creates a combined model with better predictive
ability than any single model. In the field of Credit Scoring, the standard theory of scorecard building is left unchanged, but a pre-processing step is introduced to arrive at a more
accurate scorecard that discriminates better between good and bad applicants. The pre-processing
step exploits GANN models to achieve significant reductions in marginal and cumulative bad rates.
The time it takes to develop a scorecard may be reduced by utilizing the automated construction
algorithm. / Thesis (Ph.D. (Computer Science))--North-West University, Potchefstroom Campus, 2006.
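The greedy, criterion-driven search described in this abstract can be illustrated with a short sketch. The Python below is a hypothetical rendering, not the AutoGANN implementation (which is written in SAS): a candidate GANN is reduced to a dict giving the number of hidden nodes per input, fit_and_score() is a placeholder assumed to fit that GANN and return an objective model selection criterion such as SBC (smaller is better), and the complexity cap of three hidden nodes per input is an arbitrary assumption.

```python
# Hypothetical sketch of a greedy, criterion-driven search over GANN architectures.

def neighbours(model, max_nodes=3):
    """All candidate models one complexity step away from `model`."""
    for name, h in model.items():
        for new_h in (h - 1, h + 1):              # remove or add one hidden node
            if 0 <= new_h <= max_nodes:           # assumed complexity cap
                candidate = dict(model)
                candidate[name] = new_h
                yield candidate

def greedy_gann_search(inputs, fit_and_score, max_steps=50):
    current = {name: 0 for name in inputs}        # start from the empty model
    best_score = fit_and_score(current)
    for _ in range(max_steps):
        scored = [(fit_and_score(m), m) for m in neighbours(current)]
        score, model = min(scored, key=lambda pair: pair[0])
        if score >= best_score:                   # no neighbour improves: stop
            break
        current, best_score = model, score
    return current, best_score

# toy criterion that prefers two hidden nodes on x1 and penalizes complexity
toy_score = lambda m: (m.get("x1", 0) - 2) ** 2 + 0.1 * sum(m.values())
print(greedy_gann_search(["x1", "x2", "x3"], toy_score))
```

In the actual system the visited models form a search tree, and partial residual plots are only produced for the winning model.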
|
62 |
An analysis of the works of G.C. Oosthuizen on the Shembe Church. Zwane, Protas Linda, 02 1900 (has links)
Text in English / The membership of the African Independent Churches is growing day by day. Research into the growth of this phenomenon is being conducted by various scholars. G.C. Oosthuizen studied the African Independent Churches in general and the Shembe Church in particular. This study examines Oosthuizen's research of the African Independent Churches by analysing the three books that he devoted specifically to the Shembe Church. A set of five criteria is developed to evaluate Oosthuizen as a researcher. The study finds that his background and formation affected the research he conducted and contributed to the type of picture he portrayed of the Shembe Church. Oosthuizen, as a scholar of religion, sometimes allowed his theological interests to influence his research. As an empirical researcher Oosthuizen attempted to let the AICs "speak for themselves", but his theological interests caused him to make value judgements which influenced his research findings. / Christian Spirituality, Church History and Missiology / M. Th. (Missiology)
|
63 |
Mensuração da biomassa e construção de modelos para construção de equações de biomassa / Biomass measurement and models selection for biomass equations. Edgar de Souza Vismara, 07 May 2009 (has links)
O interesse pela quantificação da biomassa florestal vem crescendo muito nos últimos anos, sendo este crescimento relacionado diretamente ao potencial que as florestas tem em acumular carbono atmosférico na sua biomassa. A biomassa florestal pode ser acessada diretamente, por meio de inventário, ou através de modelos empíricos de predição. A construção de modelos de predição de biomassa envolve a mensuração das variáveis e o ajuste e seleção de modelos estatísticos. A partir de uma amostra destrutiva de 200 indivíduos de dez essências florestais distintas advindos da região de Linhares, ES, foram construídos modelos de predição empíricos de biomassa aérea visando futuro uso em projetos de reflorestamento. O processo de construção dos modelos consistiu de uma análise das técnicas de obtenção dos dados e de ajuste dos modelos, bem como de uma análise dos processos de seleção destes a partir do critério de Informação de Akaike (AIC). No processo de obtenção dos dados foram testadas a técnica volumétrica e a técnica gravimétrica, a partir da coleta de cinco discos de madeira por árvore, em posições distintas no lenho. Na técnica gravimétrica, estudou-se diferentes técnicas de composição do teor de umidade dos discos para determinação da biomassa, concluindo-se como a melhor a que utiliza a média aritmética dos discos da base, meio e topo. Na técnica volumétrica, estudou-se diferentes técnicas de composição da densidade do tronco com base nas densidades básicas dos discos, concluindo-se que em termos de densidade do tronco, a média aritmética das densidades básicas dos cinco discos se mostrou como melhor técnica. Entretanto, quando se multiplica a densidade do tronco pelo volume deste para obtenção da biomassa, a utilização da densidade básica do disco do meio se mostrou superior a todas as técnicas. A utilização de uma densidade básica média da espécie para determinação da biomassa, via técnica volumétrica, se apresentou como uma abordagem inferior a qualquer técnica que utiliza informação da densidade do tronco das árvores individualmente. Por fim, sete modelos de predição de biomassa aérea de árvores considerando seus diferentes compartimentos foram ajustados, a partir das funções de Spurr e Schumacher-Hall, com e sem a inclusão da altura como variável preditora. Destes modelos, quatro eram gaussianos e três eram lognormais. Estes mesmos sete modelos foram ajustados incluindo a medida de penetração como variável preditora, totalizando quatorze modelos testados. O modelo de Schumacher-Hall se mostrou, de maneira geral, superior ao modelo de Spurr. A altura só se mostrou efetiva na explicação da biomassa das árvores quando em conjunto com a medida de penetração. Os modelos selecionados foram do grupo que incluíram a medida de penetração no lenho como variável preditora e, exceto o modelo de predição da biomassa de folhas, todos se mostraram adequados para aplicação na predição da biomassa aérea em áreas de reflorestamento. / Forest biomass measurement implies a destructive procedure, thus forest inventories and biomass surveys apply indirect procedures for the determination of biomass of the different components of the forest (wood, branches, leaves, roots, etc.). The usual approach consists of taking a destructive sample for the measurement of tree attributes, and an empirical relationship is established between the biomass and other attributes that can be directly measured on standing trees, e.g., stem diameter and tree height.
The biomass determination of felled trees can be achieved by two techniques: the gravimetric technique, which weighs the components in the field and takes a sample for the determination of water content in the laboratory; and the volumetric technique, which determines the volume of the component in the field and takes a sample for the determination of the wood specific gravity (wood basic density) in the laboratory. The gravimetric technique applies to all components of the trees, while the volumetric technique is usually restricted to the stem and large branches. In this study, these two techniques are studied in a sample of 200 trees of 10 different species from the region of Linhares, ES. In each tree, 5 cross-sections of the stem were taken to investigate the best procedure for the determination of water content in the gravimetric technique and for the determination of the wood specific gravity in the volumetric technique. Also, the Akaike Information Criterion (AIC) was used to compare different statistical models for the prediction of tree biomass. For the stem water content determination, the best procedure was the arithmetic mean of the water content from the cross-sections in the base, middle and top of the stem. In the determination of wood specific gravity, the best procedure was the arithmetic mean of all five cross-section discs of the stem; however, for the determination of the biomass, i.e., the product of stem volume and wood specific gravity, the best procedure was the use of the middle stem cross-section disc wood specific gravity. The use of an average wood specific gravity by species showed worse results than any procedure that used information on wood specific gravity at the individual tree level. Seven models, as variations of the Spurr and Schumacher-Hall volume equation models, were tested for the different tree components: wood (stem and large branches), small branches, leaves and total biomass. In general, Schumacher-Hall models were better than Spurr-based models, and models that included only diameter (DBH) information performed better than models with diameter and height measurements. When a measure of penetration in the wood, as a surrogate of wood density, was added to the models, the models with the three variables (diameter, height and penetration) became the best models.
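As a rough illustration of the model-comparison step described above, the sketch below fits log-linear Schumacher-Hall and Spurr biomass equations and ranks them by AIC. It is not the thesis code: the column names, the handful of made-up observations, and the use of ordinary least squares on the log scale are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# made-up sample: biomass (kg), DBH (cm) and height (m) for five trees
df = pd.DataFrame({
    "biomass": [12.4, 35.1, 8.7, 60.2, 22.9],
    "dbh":     [10.1, 18.3, 8.2, 25.0, 14.7],
    "height":  [8.5, 14.2, 7.1, 18.9, 11.3],
})
df["log_b"]   = np.log(df["biomass"])
df["log_d"]   = np.log(df["dbh"])
df["log_h"]   = np.log(df["height"])
df["log_d2h"] = np.log(df["dbh"] ** 2 * df["height"])

# Schumacher-Hall: log B = b0 + b1 log D + b2 log H; Spurr: log B = b0 + b1 log(D^2 H)
schumacher_hall = smf.ols("log_b ~ log_d + log_h", data=df).fit()
spurr           = smf.ols("log_b ~ log_d2h", data=df).fit()

for name, fit in [("Schumacher-Hall", schumacher_hall), ("Spurr", spurr)]:
    print(f"{name:16s} AIC = {fit.aic:.2f}")
```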
|
64 |
Caractérisation de vortex intraventriculaires par échographie Doppler ultrarapide. Faurie, Julia, 07 1900 (has links)
Les maladies cardiaques sont une cause majeure de mortalité dans le monde (la première
cause en Amérique du nord [192]), et la prise en charge de ses maladies entraîne des coûts
élevés pour la société. La prévalence de l’insuffisance cardiaque augmente fortement avec
l’âge, et, avec une population vieillissante, elle va demeurer une préoccupation croissante dans
le futur, non seulement pour les pays industrialisés mais aussi pour ceux en développement.
Ainsi il est important d’avoir une bonne compréhension de son mécanisme pour obtenir
des diagnostics précoces et un meilleur prognostic pour les patients. Parmi les différentes
formes d’insuffisance cardiaque, on trouve la dysfonction diastolique qui se traduit par une
déficience du remplissage du ventricule. Pour une meilleure compréhension de ce mécanisme,
de nombreuses études se sont intéressées au mouvement du sang dans le ventricule. On sait
notamment qu’au début de la diastole le flux entrant prend la forme d’un anneau vortical (ou
vortex ring). La formation d’un vortex ring par le flux sanguin après le passage d’une valve a
été décrite pour la première fois en 1513 par Léonard de Vinci (Fig. 0.1). En effet après avoir
moulé l’aorte dans du verre et ajouter des graines pour observer le flux se déplaçant dans son
fantôme, il a décrit l’apparition du vortex au passage de la valve aortique. Ces travaux ont pu
être confirmés 500 ans plus tard avec l’apparition de l’IRM [66]. Dans le ventricule, le même
phénomène se produit après la valve mitrale, c’est ce qu’on appelle le vortex diastolique. Or,
le mouvement d’un fluide (ici le sang) est directement relié a son environnement : la forme
du ventricule, la forme de la valve, la rigidité des parois... L’intérêt est donc grandissant
pour étudier de manière plus approfondie ce vortex diastolique qui pourrait apporter de
précieuses informations sur la fonction diastolique. Les modalités d’imagerie permettant de
le visualiser sont l’IRM et l’échographie. Cette thèse présente l’ensemble des travaux effectués
pour permettre une meilleure caractérisation du vortex diastolique dans le ventricule gauche
par imagerie ultrasonore Doppler. Pour suivre la dynamique de ce vortex dans le temps, il
est important d’obtenir une bonne résolution temporelle. En effet, la diastole ventriculaire
dure en moyenne 0.5 s pour un coeur humain au repos, une cadence élevée est donc essentielle
pour suivre les différentes étapes de la diastole. La qualité des signaux Doppler est également
primordiale pour obtenir une bonne estimation des vitesses du flux sanguin dans le ventricule.
Pour étudier ce vortex, nous nous sommes intéressés à la mesure de sa vorticité en son centre
et à l’évolution de cette dernière dans le temps. Le travail se divise ainsi en trois parties,
pour chaque un article a été rédigé :
1. Développement d’une séquence Doppler ultrarapide : La séquence se base sur l’utilisation
d’ondes divergentes qui permettent d’atteindre une cadence d’image élevée.
Associée à la vortographie, une méthode pour localiser le centre du vortex diastolique
et en déduire sa vorticité, nous avons pu suivre la dynamique de la vorticité
dans le temps. Cette séquence a permis d’établir une preuve de concept grâce à des
acquisitions in vitro et in vivo sur des sujets humains volontaires.
2. Développement d’une séquence triplex : En se basant sur la séquence ultrarapide Doppler,
on cherche ici à ajouter des informations supplémentaires, notamment sur le
mouvement des parois. La séquence triplex permet non seulement de récupérer le
mouvement sanguin avec une haute cadence d’images mais aussi le Doppler tissulaire.
Au final, nous avons pu déduire les Doppler couleur, tissulaire, et spectral, en plus
d’un Bmode de qualité grâce à la compensation de mouvement. On peut alors observer
l’interdépendance entre la dynamique du vortex et celle des parois, en récupérant
tous les indices nécessaires sur le même cycle cardiaque avec une acquisition unique.
3. Développement d’un filtre automatique : La quantification de la vorticité dépend
directement des vitesses estimées par le Doppler. Or, en raison de leur faible
amplitude, les signaux sanguins doivent être filtrés. En effet lors de l’acquisition les
signaux sont en fait une addition des signaux sanguins et tissulaires. Le filtrage est
une étape essentielle pour une estimation précise et non biaisée de la vitesse. La
dernière partie de ce doctorat s’est donc concentrée sur la mise au point d’un filtre
performant qui se base sur les dimensions spatiales et temporelles des acquisitions.
On effectue ainsi un filtrage du tissu mais aussi du bruit. Une attention particulière
a été portée à l’automatisation de ce filtre avec l’utilisation de critères d’information
qui se basent sur la théorie de l’information. / Heart disease is one of the leading causes of death in the world (first cause in North America
[192]), and causes high health care costs for society. The prevalence of heart failure increases
dramatically with age and, due to the ageing of the population, will remain a major concern in
the future, not only for developed countries, but also for developing countries. It is therefore
crucial to have a good understanding of its mechanism to obtain an early diagnosis and a
better prognosis for patients. Diastolic dysfunction is one of the forms of heart failure
and leads to insufficient filling of the ventricle. To better understand the dysfunction, several
studies have examined the blood motion in the ventricle. It is known that at the beginning of
diastole, the filling flow creates a vortex pattern known as a vortex ring. This development of
the ring by blood flow after passage through a valve was first described in 1513 by Leonardo
Da Vinci (Fig. 0.1). After molding a glass phantom of the aorta and adding seeds to visually
observe the flow through the phantom, he could describe the vortex ring development of
the blood coming out of the aortic valve. His work was confirmed 500 years later with the
emergence of MRI [66]. The same pattern can be observed in the left ventricle when the flow
emerges from the mitral valve, referred to as the diastolic vortex. The motion of the fluid (in our case, blood) is directly related to its environment: the shape of the ventricle, the shape of the valve, the stiffness of the walls, and so on. There is therefore a growing interest in further studies on this
diastolic vortex that could lead to valuable information on diastolic function. The imaging
modalities which can be used to visualize the vortex are MRI and ultrasound. This thesis
presents the work carried out to allow a better characterization of the diastolic vortex in the
left ventricle by Doppler ultrasound imaging. For temporal monitoring of vortex dynamics, a
high temporal resolution is required, since the ventricular diastole is about 0.5 s on average
for a resting human heart. The quality of Doppler signals is also of utmost importance to
get an accurate estimate of the blood flow velocity in the ventricle. To study this vortex, we focused on estimating the vorticity at the vortex core and especially on its evolution over time. The work is divided into three parts, and for each of them an article has been written:
1. Ultrafast Doppler sequence: The sequence is based on diverging waves, which resulted
in a high frame rate. In combination with vortography, a method to locate the vortex
core and derive its vorticity, the vortex dynamics could be tracked over time. This
sequence could establish a proof of concept based on in vitro and in vivo acquisitions
on healthy human volunteers.
2. Triplex sequence: Based on the ultrafast sequence, we were interested in adding information
on the wall motion. The triplex sequence is able to recover not only the
blood motion at a high frame rate but also tissue Doppler. In the end, we could derive color, tissue, and spectral Doppler, along with a high-quality B-mode by using
motion compensation. The interdependence between vortex and wall dynamics could
be highlighted by acquiring all the required parameters over a single cardiac cycle.
3. Automatic clutter filter: Vorticity quantification depends directly on the estimation
of Doppler velocity. However, due to their low amplitude, blood signals must be filtered.
Indeed, the acquired signals are a sum of tissue and blood signals.
Filtering is a critical step for an unbiased and accurate velocity estimation. The last
part of this doctoral thesis has focused on the design of an efficient filter that takes
advantage of the temporal and spatial dimensions of the acquisitions, so that both tissue clutter and noise are removed. Particular care was taken to automate the filter by applying information criteria based on information theory.
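A minimal sketch of the kind of spatiotemporal clutter filtering discussed in part 3 is given below: the IQ data are rearranged into a space x time Casorati matrix, the dominant singular components (slowly moving tissue) are removed, and the tissue rank is selected automatically. The Wax-Kailath MDL rule used here and the simulated data are stand-in assumptions, not the information criterion actually developed in the thesis.

```python
import numpy as np

def mdl_rank(eigvals, n_samples):
    """Wax-Kailath MDL estimate of the number of dominant (tissue) components."""
    p = len(eigvals)
    scores = []
    for k in range(p - 1):
        tail = eigvals[k:]
        geo = np.exp(np.mean(np.log(tail)))     # geometric mean of remaining eigenvalues
        ari = np.mean(tail)                     # arithmetic mean of remaining eigenvalues
        scores.append(-n_samples * (p - k) * np.log(geo / ari)
                      + 0.5 * k * (2 * p - k) * np.log(n_samples))
    return int(np.argmin(scores))

def svd_clutter_filter(iq):
    nz, nx, nt = iq.shape
    casorati = iq.reshape(nz * nx, nt)          # space x slow time
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    tissue_rank = mdl_rank(s ** 2 / (nz * nx), nz * nx)
    s_filtered = s.copy()
    s_filtered[:tissue_rank] = 0.0              # suppress the tissue subspace
    blood = (u * s_filtered) @ vh
    return blood.reshape(nz, nx, nt), tissue_rank

# toy data: one strong, slowly varying (tissue-like) component plus white noise
rng = np.random.default_rng(0)
nz, nx, nt = 20, 20, 64
slow = np.exp(1j * 2 * np.pi * 0.01 * np.arange(nt))
tissue = 50.0 * rng.standard_normal(nz * nx)[:, None] * slow
noise = rng.standard_normal((nz * nx, nt)) + 1j * rng.standard_normal((nz * nx, nt))
iq = (tissue + noise).reshape(nz, nx, nt)

filtered, rank = svd_clutter_filter(iq)
print("estimated tissue rank:", rank)
```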
|
65 |
Comparing Resource Abundance And Intake At The Reda And Wisla River Estuaries. Zahid, Saman, January 2021 (has links)
Migratory birds stop at different stopover sites during migration. The resources present at these stopover sites are essential for the birds to regain energy. This thesis aims to compare resource abundance and intake at two stopover sites: the Reda and Wisla river estuaries. How a bird's mass changes during its stay at an estuary is taken as a proxy for the resource abundance of a site. The comparison is made on different subsets, including those with incomplete data, i.e. where the next capture is not exactly one day after the previous one. Multiple linear regression, generalized additive models and linear mixed-effects models are used for the analysis. Expectation maximization and an iterative predictive process are implemented to deal with incomplete data. We found that the Reda river estuary has higher resource abundance and intake than the Wisla river estuary.
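As an illustration of the modelling approach mentioned above, the sketch below fits a linear mixed-effects model of bird mass with a random intercept per ringed individual, where the day-by-site interaction plays the role of the per-site daily mass gain (the proxy for resource intake). The data, column names and effect sizes are simulated assumptions, not the thesis data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# simulate repeated captures: 20 birds, 4 days each, site-specific daily gain
rng = np.random.default_rng(1)
rows = []
for i in range(20):
    site = "Reda" if i % 2 == 0 else "Wisla"
    gain = 0.45 if site == "Reda" else 0.25          # assumed daily mass gain (g)
    base = 17.0 + rng.normal(0, 0.5)                 # bird-specific starting mass
    for day in range(4):
        rows.append({"ring": f"bird{i}", "site": site, "day": day,
                     "mass": base + gain * day + rng.normal(0, 0.2)})
data = pd.DataFrame(rows)

# random intercept per individual; day:site compares mass gain between estuaries
model = smf.mixedlm("mass ~ day * site", data=data, groups=data["ring"])
fit = model.fit()
print(fit.summary())
```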
|
66 |
Dynamic prediction of repair costs in heavy-duty trucks. Saigiridharan, Lakshidaa, January 2020 (has links)
Pricing of repair and maintenance (R&M) contracts is one of the most important processes carried out at Scania. Repair costs at Scania are currently predicted with experience-based methods that compute average repair costs for contracts terminated in the recent past, without involving statistical modelling. This method is difficult to apply to a reference population of rigid Scania trucks. Hence, the purpose of this study is to perform suitable statistical modelling to predict repair costs of four variants of rigid Scania trucks. The study gathers repair data from multiple sources and performs feature selection using the Akaike Information Criterion (AIC) to extract the most significant features that influence repair costs for each truck variant. The study shows that the inclusion of operational features as factors could further influence the pricing of contracts. The hurdle Gamma model, which is widely used to handle zero inflation in Generalized Linear Models (GLMs), is fitted to the data, which consist of numerous zero and non-zero values. Due to the inherent hierarchical structure within the data, expressed by individual chassis, a hierarchical hurdle Gamma model is also implemented. These two statistical models are found to perform much better than the experience-based prediction method. This evaluation is done using the mean absolute error (MAE) and root mean square error (RMSE) statistics. A final model comparison is conducted using the AIC to draw conclusions based on the goodness of fit and predictive performance of the two statistical models. On assessing the models using these statistics, the hierarchical hurdle Gamma model was found to give the best predictions.
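The hurdle Gamma idea can be sketched as a two-part model: a logistic part for whether any repair cost occurs, and a Gamma GLM with log link for the positive costs, with the expected cost obtained as their product. The sketch below is an illustrative, non-hierarchical version on simulated data; the feature names and coefficients are assumptions, not Scania variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# simulate trucks with assumed usage features and hurdle-Gamma repair costs
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "mileage": rng.uniform(20, 200, n),          # thousand km per year (assumed)
    "age":     rng.integers(1, 8, n),            # years in operation (assumed)
})
p_repair = 1 / (1 + np.exp(-(-2.0 + 0.01 * df.mileage + 0.3 * df.age)))
has_cost = rng.random(n) < p_repair
mu = np.exp(5.0 + 0.004 * df.mileage + 0.15 * df.age)
df["cost"] = np.where(has_cost, rng.gamma(shape=2.0, scale=mu / 2.0), 0.0)

# part 1: probability of a non-zero repair cost
df["nonzero"] = (df.cost > 0).astype(int)
logit_part = smf.logit("nonzero ~ mileage + age", data=df).fit(disp=False)

# part 2: Gamma GLM with log link on the positive costs only
pos = df[df.cost > 0]
gamma_part = smf.glm("cost ~ mileage + age", data=pos,
                     family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# expected cost = P(cost > 0) * E[cost | cost > 0]
expected = logit_part.predict(df) * gamma_part.predict(df)
print("mean predicted repair cost:", round(expected.mean(), 2))
```

A hierarchical version would additionally place random effects at the chassis level in both parts.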
|
67 |
[pt] CUMULATIVIDADE E SINERGIA: UMA ABORDAGEM ESPACIAL INTEGRADA PARA AVALIAÇÃO DE IMPACTOS SOCIOAMBIENTAIS NA BAÍA DE SEPETIBA, RJ / [en] CUMULATIVITY AND SYNERGISM: AN INTEGRATE SPACIAL APPROACH TO SOCIAL AND ENVIRONMENT IMPACT ASSESSMENT IN SEPETIBA BAY, RJ. ANDRESSA DE OLIVEIRA SPATA, 25 October 2021 (has links)
[pt] A Avaliação de Impactos Cumulativos (AIC) ainda é um instrumento pouco
difundido no Brasil, apesar de a identificação de processos de cumulatividade e
sinergia ser uma exigência do Conselho Nacional de Meio Ambiente (CONAMA)
no escopo dos Estudos de Impacto para fins de licenciamento ambiental. Em partes,
isso se deve a dificuldades técnicas e metodológicas e, em partes, à
indisponibilidade de informações públicas que permitam tal análise. No Brasil,
verifica-se a ausência de metodologias consolidadas que possibilitem uma análise
efetiva dos impactos socioambientais cumulativos e sinérgicos, diferentemente de
outros países como Estados Unidos, Canadá e os pertencentes à União Europeia,
considerados referência no tema. A partir da análise de tais metodologias, observa-se
que não somente a AIC pode servir ao processo de licenciamento, como também
pode ser adotada de forma complementar e integrada a outros instrumentos para
fins de planejamento, ordenamento e gestão territorial, como o Zoneamento
Ecológico-Econômico (ZEE), desde que lançada luz sobre as dificuldades, entraves
e as responsabilidades para a efetivação desses dois instrumentos, em separado, e
de forma conjunta. A adoção dessa abordagem mostra-se como uma alternativa à
gestão e ao planejamento socioambiental da Baía de Sepetiba, que desde a década
de 1970 passa por um processo de degradação socioambiental severo, agravado a
partir de 2000, pela ampliação do Polo Industrial de Sepetiba, destacando-se a
instalação e a operação de três empreendimentos emblemáticos: a Companhia
Siderúrgica do Atlântico (CSA), Porto Sudeste e o Complexo Naval de Itaguaí. / [en] Cumulative Impact Assessment (CIA) is still not widely disseminated in Brazil, although the National Environment Council (CONAMA) requires cumulative and synergistic processes to be identified within the scope of Environmental Impact Assessments carried out for environmental permitting. This is partly due to methodological and technical difficulties, and partly due to the unavailability of public information that would allow such an analysis. In Brazil, there are no consolidated methodologies for identifying and analyzing social and environmental impacts caused by cumulative and synergistic processes, unlike the United States, Canada and the European Union, which are considered benchmarks on the subject. The analysis of these methodologies shows that CIA may not only serve the permitting process, but may also be adopted in a complementary and integrated way with other territorial planning, ordering and management instruments, such as Ecological and Economic Zoning (ZEE), provided that the difficulties, obstacles and responsibilities involved in implementing these two instruments, separately and jointly, are brought to light. This approach presents itself as an alternative for the social and environmental management and planning of Sepetiba Bay, which since the 1970s has been undergoing a severe process of social and environmental degradation, aggravated from 2000 onwards by the expansion of the Sepetiba Industrial Center, notably through the installation and operation of three emblematic enterprises: Companhia Siderúrgica do Atlântico (CSA), Porto Sudeste and the Complexo Naval de Itaguaí.
|
68 |
CURE RATE AND DESTRUCTIVE CURE RATE MODELS UNDER PROPORTIONAL ODDS LIFETIME DISTRIBUTIONS. FENG, TIAN, January 2019 (has links)
Cure rate models, introduced by Boag (1949), are commonly used for modelling lifetime data involving long-term survivors. Applications of cure rate models can be seen
in biomedical science, industrial reliability, finance, manufacturing, demography and criminology. In this thesis, cure rate models are discussed under a competing cause scenario,
with the assumption of proportional odds (PO) lifetime distributions for the susceptibles,
and statistical inferential methods are then developed based on right-censored data.
In Chapter 2, a flexible cure rate model is discussed by assuming the number of competing
causes for the event of interest to follow the Conway-Maxwell (COM) Poisson distribution, with the lifetimes of the non-cured or susceptible individuals described by a PO model. This provides a natural extension of the work of Gu et al. (2011)
who had considered a geometric number of competing causes. Under right censoring, maximum likelihood estimators (MLEs) are obtained by the use of expectation-maximization
(EM) algorithm. An extensive Monte Carlo simulation study is carried out for various scenarios,
and model discrimination between some well-known cure models like geometric,
Poisson and Bernoulli is also examined. The goodness-of-fit and model diagnostics of the
model are also discussed. A cutaneous melanoma dataset example is used to illustrate the
models as well as the inferential methods.
Next, in Chapter 3, the destructive cure rate models, introduced by Rodrigues et al. (2011), are discussed under the PO assumption. Here, the initial number of competing
causes is modelled by a weighted Poisson distribution with special focus on exponentially
weighted Poisson, length-biased Poisson and negative binomial distributions. Then, a damage
distribution is introduced for the number of initial causes which do not get destroyed.
An EM-type algorithm for computing the MLEs is developed. An extensive simulation
study is carried out for various scenarios, and model discrimination between the three
weighted Poisson distributions is also examined. All the models and methods of estimation
are evaluated through a simulation study. A cutaneous melanoma dataset example is used
to illustrate the models as well as the inferential methods.
In Chapter 4, frailty cure rate models are discussed under a gamma frailty wherein the
initial number of competing causes is described by a Conway-Maxwell (COM) Poisson
distribution, while the lifetimes of non-cured individuals are described by a PO model.
The detailed steps of the EM algorithm are then developed for this model and an extensive
simulation study is carried out to evaluate the performance of the proposed model and the
estimation method. A cutaneous melanoma dataset as well as a simulated data are used for
illustrative purposes.
Finally, Chapter 5 outlines the work carried out in the thesis and also suggests some
problems of further research interest. / Thesis / Doctor of Philosophy (PhD)
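To make the model structure concrete, the sketch below evaluates the population survival function S_pop(t) = G_M(S(t)) implied by a cure rate model in which the number of competing causes M follows a COM-Poisson distribution (G_M being its probability generating function) and the susceptibles follow a PO model. The log-logistic baseline, the covariate effect x_beta, all parameter values and the series truncation at jmax are illustrative assumptions, not the thesis specification.

```python
import numpy as np
from scipy.special import gammaln

def com_poisson_pgf(s, lam, nu, jmax=200):
    """Probability generating function E[s^M] of a COM-Poisson count M."""
    j = np.arange(jmax)
    w = np.exp(j * np.log(lam) - nu * gammaln(j + 1))   # un-normalized weights
    return np.polyval(w[::-1], s) / w.sum()

def po_survival(t, x_beta, alpha=1.5, gamma=2.0):
    """Proportional odds survival of susceptibles with a log-logistic baseline."""
    f0 = (t / gamma) ** alpha / (1.0 + (t / gamma) ** alpha)   # baseline CDF
    return 1.0 / (1.0 + np.exp(x_beta) * f0 / (1.0 - f0))

# population survival S_pop(t) = G_M(S(t)); the cure fraction is G_M(0) = P(M = 0)
t = np.linspace(0.5, 10.0, 5)
s_pop = com_poisson_pgf(po_survival(t, x_beta=0.5), lam=1.2, nu=0.8)
print("population survival:", np.round(s_pop, 3))
print("cure fraction P(M = 0):", round(com_poisson_pgf(0.0, lam=1.2, nu=0.8), 3))
```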
|
69 |
Settling-Time Improvements in Positioning Machines Subject to Nonlinear Friction Using Adaptive Impulse Control. Hakala, Tim, 31 January 2006 (has links) (PDF)
A new method of adaptive impulse control is developed to precisely and quickly control the position of machine components subject to friction. Friction dominates the forces affecting fine positioning dynamics. Friction can depend on payload, velocity, step size, path, initial position, temperature, and other variables. Control problems such as steady-state error and limit cycles often arise when applying conventional control techniques to the position control problem. Studies in the last few decades have shown that impulsive control can produce repeatable displacements as small as ten nanometers without limit cycles or steady-state error in machines subject to dry sliding friction. These displacements are achieved through the application of short duration, high intensity pulses. The relationship between pulse duration and displacement is seldom a simple function. The most dependable practical methods for control are self-tuning; they learn from online experience by adapting an internal control parameter until precise position control is achieved. To date, the best known adaptive pulse control methods adapt a single control parameter. While effective, the single parameter methods suffer from sub-optimal settling times and poor parameter convergence. To improve performance while maintaining the capacity for ultimate precision, a new control method referred to as Adaptive Impulse Control (AIC) has been developed. To better fit the nonlinear relationship between pulses and displacements, AIC adaptively tunes a set of parameters. Each parameter affects a different range of displacements. Online updates depend on the residual control error following each pulse, an estimate of pulse sensitivity, and a learning gain. After an update is calculated, it is distributed among the parameters that were used to calculate the most recent pulse. As the stored relationship converges to the actual relationship of the machine, pulses become more accurate and fewer pulses are needed to reach each desired destination. When fewer pulses are needed, settling time improves and efficiency increases. AIC is experimentally compared to conventional PID control and other adaptive pulse control methods on a rotary system with a position measurement resolution of 16000 encoder counts per revolution of the load wheel. The friction in the test system is nonlinear and irregular with a position dependent break-away torque that varies by a factor of more than 1.8 to 1. AIC is shown to improve settling times by as much as a factor of two when compared to other adaptive pulse control methods while maintaining precise control tolerances.
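A simplified, hypothetical sketch of the adaptation idea is given below: one pulse-duration parameter per displacement range is tuned online from the residual error after each pulse, scaled by a crude sensitivity estimate and a learning gain. The plant model, bin edges, gain and update law are assumptions for illustration and are much simpler than the Adaptive Impulse Control scheme evaluated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def plant(duration_us):
    """Stand-in friction axis: no motion below a break-away pulse width, then a
    noisy displacement (encoder counts) roughly proportional to the excess width."""
    if duration_us <= 20.0:
        return 0.0
    return 0.8 * (duration_us - 20.0) * rng.uniform(0.9, 1.1)

class AdaptiveImpulseController:
    """One pulse-duration parameter per displacement range, tuned online."""
    def __init__(self, range_edges_counts, gain=0.5):
        self.edges = np.asarray(range_edges_counts)
        self.durations = np.full(len(range_edges_counts), 40.0)   # us, initial guess
        self.gain = gain

    def _bin(self, step):
        return min(int(np.searchsorted(self.edges, step, side="right")) - 1,
                   len(self.edges) - 1)

    def pulse(self, step):
        i = self._bin(step)
        return i, self.durations[i]

    def adapt(self, i, requested_step, achieved_step):
        # sensitivity estimate (counts per microsecond), then correct this range's duration
        sens = max(achieved_step / max(self.durations[i], 1.0), 0.05)
        self.durations[i] += self.gain * (requested_step - achieved_step) / sens

# move to a target 800 counts away, one pulse at a time
ctrl = AdaptiveImpulseController(range_edges_counts=[0, 50, 200, 1000])
position, target, tol, pulses = 0.0, 800.0, 4.0, 0
while abs(target - position) > tol and pulses < 60:
    error = target - position
    i, duration = ctrl.pulse(abs(error))
    moved = plant(duration)
    position += np.sign(error) * moved
    ctrl.adapt(i, abs(error), moved)
    pulses += 1
print(f"position {position:.1f} counts after {pulses} pulses (target {target:.0f})")
```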
|