481
Development and Implementation of an Online Kraft Black Liquor Viscosity Soft Sensor
Alabi, Sunday Boladale, January 2010 (has links)
The recovery and recycling of the spent chemicals from the kraft pulping process are economically and environmentally essential in an integrated kraft pulp and paper mill. The recovery process can be optimised by firing high-solids black liquor in the recovery boiler. Unfortunately, because of the corresponding increase in liquor viscosity, black liquor is fired at reduced solids concentration in many mills to avoid possible rheological problems. Online measurement, monitoring and control of the liquor viscosity are deemed essential for recovery boiler optimisation. However, in most mills, including those in New Zealand, black liquor viscosity is not routinely measured.
Four batches of black liquor with solids concentrations ranging between 47 % and 70 % and different residual alkali (RA) contents were obtained from Carter Holt Harvey Pulp and Paper (CHHP&P), Kinleith mill, New Zealand. Weak black liquor samples were obtained by diluting the concentrated samples with deionised water. The viscosities of the samples at solids concentrations ranging from 0 to 70 % were measured using open-cup rotational viscometers at temperatures ranging from 0 to 115 °C and shear rates between 10 and 2000 s⁻¹. The effect of a post-pulping process, liquor heat treatment (LHT), on the liquors' viscosities was investigated in an autoclave at a temperature ≥ 180 °C for at least 15 minutes.
The samples exhibit both Newtonian and non-Newtonian behaviour depending on temperature and solids concentration; the onsets of these behaviours are liquor-dependent. In agreement with literature data, at high solids concentrations (> 50 %) and low temperatures the liquors exhibit shear-thinning behaviour with or without thixotropy, but the shear-thinning/thixotropic characteristics disappear at high temperatures (≥ 80 °C). Generally, when the apparent viscosities of the liquors are ≤ ~1000 cP, the liquors show Newtonian or near-Newtonian behaviour. These findings demonstrate that New Zealand black liquors can be safely treated as Newtonian fluids under industrial conditions. Further observations show that at low solids concentrations (< 50 %) viscosity is fairly independent of the RA content; however, at solids concentrations > 50 %, viscosity decreases with increasing RA content of the liquor. This shows that the RA content of black liquor can be manipulated to control the viscosity of high-solids black liquors. The LHT process had a negligible effect on the low-solids liquor viscosity but led to a significant and permanent reduction of the high-solids liquor viscosity by a factor of at least 6. Therefore, the incorporation of an LHT process into an existing kraft recovery process can help obtain the benefits of high-solids liquor firing without concern for the attendant rheological problems.
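As a generic illustration of how shear-thinning versus Newtonian behaviour can be diagnosed from viscometer data, the sketch below fits the Ostwald-de Waele power-law model in Python; it is a plausible analysis step under assumed clean data, not the specific procedure used in the thesis.

```python
import numpy as np

def flow_behaviour_index(shear_rate, shear_stress):
    """Fit the power-law model tau = K * gamma_dot**n in log space.
    n close to 1 indicates Newtonian behaviour; n < 1 indicates shear thinning.
    The apparent viscosity then follows as K * gamma_dot**(n - 1)."""
    n, log_k = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
    return n, np.exp(log_k)
```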
A variety of existing and newly proposed viscosity models were obtained under different constraints, using both traditional regression modelling tools and an artificial neural network (ANN) paradigm. Hitherto, the existing models have relied on traditional regression tools and have mostly been applicable to limited ranges of process conditions.
On the one hand, composition-dependent models were obtained as a direct function of solids concentration and temperature, or of solids concentration, temperature and shear rate; the relationships between these variables and the liquor viscosity are straightforward. The ANN-based models developed in this work were found to be superior to the traditional models in terms of accuracy, generalisation capability and applicability to a wide range of process conditions. If the parameters of the resulting ANN models could be successfully correlated with the liquor composition, the models would be suitable for online application. Unfortunately, black liquor viscosity depends on its composition in a complex manner, and the direct correlation of the model parameters with the liquor composition is not yet straightforward.
On the other hand, for the first time in Australasia, the limitations of the composition-dependent models were addressed using centrifugal pump performance parameters, which are easy to measure online. A variety of centrifugal pump-based models were developed from estimated data obtained via the Hydraulic Institute viscosity correction method. This contrasts with traditional approaches, which depend largely on actual experimental data that can be difficult and expensive to obtain. The resulting age-independent centrifugal pump-based model was implemented online as a black liquor viscosity soft sensor on the No. 5 recovery boiler at the CHHP&P Kinleith mill, New Zealand, where its performance was evaluated. The results confirm its ability to effectively account for variations in the liquor composition. Furthermore, it gave robust viscosity estimates in the presence of changes in the pump's operating point. It is therefore concluded that this study opens a new and effective way to develop kraft black liquor viscosity sensors.
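The pump-based soft sensor relies on relationships derived from the Hydraulic Institute viscosity-correction method; the sketch below only illustrates the general inversion idea of mapping a measured (flow, head) operating point back to a viscosity estimate via a pre-computed table, with all names, the table layout and the interpolation scheme being assumptions rather than the author's implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def estimate_viscosity(flow_meas, head_meas, flows, viscs, head_table):
    """head_table[i, j] holds the pump head expected at flow flows[i] when the
    liquor viscosity is viscs[j] (e.g. pre-computed offline from the water
    performance curve and a viscosity-correction procedure). The measured
    (flow, head) pair is mapped back to a viscosity by 2-D interpolation."""
    pts = np.array([(f, head_table[i, j])
                    for i, f in enumerate(flows) for j in range(len(viscs))])
    vals = np.array([v for _ in flows for v in viscs])
    return float(griddata(pts, vals, (flow_meas, head_meas), method='linear'))
```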
482
The Development of Neural Network Based System Identification and Adaptive Flight Control for an Autonomous Helicopter System
Shamsudin, Syariful Syafiq, January 2013 (has links)
This thesis presents the development of a self-adaptive flight controller for an unmanned helicopter system in hovering manoeuvres. A neural network (NN) based model predictive control (MPC) approach is used because of its ability to handle system constraints and the time-varying nature of the helicopter dynamics. The non-linear NN-based MPC controller is known to converge slowly because of the high computational demand of the optimisation process. To solve this problem, the automatic flight control system is designed using the NN-based approximate predictive control (NNAPC) approach, which relies on extracting linear models from the non-linear NN model at each time step. The control input sequence is generated using the prediction from the linearised model and the MPC optimisation routine, subject to the imposed hard constraints. In this project, the optimisation of the MPC objective criterion is implemented using the simple and fast Hildreth's quadratic programming (QP) procedure.
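For orientation, the following is a minimal NumPy sketch of Hildreth's dual coordinate-ascent procedure for a quadratic programme of the form min ½xᵀEx + xᵀF subject to Mx ≤ γ, the kind of problem solved at every MPC step; the iteration limit, tolerance and variable names are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

def hildreth_qp(E, F, M, gamma, max_iter=200, tol=1e-9):
    """Solve min 0.5*x'Ex + x'F subject to M x <= gamma via Hildreth's procedure:
    coordinate-wise ascent on the dual variables, each clipped at zero."""
    E_inv = np.linalg.inv(E)
    H = M @ E_inv @ M.T
    K = gamma + M @ E_inv @ F
    lam = np.zeros(len(gamma))
    for _ in range(max_iter):
        lam_prev = lam.copy()
        for i in range(len(lam)):
            w = -(K[i] + H[i] @ lam - H[i, i] * lam[i]) / H[i, i]
            lam[i] = max(0.0, w)          # dual variables stay non-negative
        if np.linalg.norm(lam - lam_prev) < tol:
            break
    return -E_inv @ (F + M.T @ lam)       # constrained optimum
```

If no constraint is active, all dual variables remain zero and the routine returns the unconstrained minimiser -E⁻¹F.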
The system identification of the helicopter dynamics is typically performed using a time regression network (NNARX), in which time-lagged input and output variables are fed into a static feed-forward network such as the multi-layered perceptron (MLP). NN-based modelling that uses the NNARX structure to represent a dynamical system usually requires a priori knowledge of the model order of the system. Assuming too low a model order generally degrades model prediction accuracy. Furthermore, the large number of weights in the standard NNARX model can increase NN training time and limit the use of the NNARX model in real-time applications. In this thesis, three types of NN architecture are considered to represent the time regression network: the multi-layered perceptron (MLP), the hybrid multi-layered perceptron (HMLP) and the modified Elman network. The latter two architectures are introduced to improve the training time and the convergence rate of the NN model. The model structures for the proposed architectures are selected using the proposed Lipschitz coefficient and k-fold cross-validation methods to determine the best network configuration that guarantees good generalisation performance for model prediction.
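The sketch below shows one plausible way of assembling NNARX regressor vectors from logged input/output data before training a static feed-forward network; the lag orders na and nb and the single-output layout are assumptions for illustration.

```python
import numpy as np

def nnarx_regressors(y, u, na, nb):
    """Build regressors phi(t) = [y(t-1..t-na), u(t-1..t-nb)] and one-step-ahead
    targets y(t); a static MLP is then trained to map phi(t) to y(t)."""
    start = max(na, nb)
    X, T = [], []
    for t in range(start, len(y)):
        phi = np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
        X.append(phi)
        T.append(y[t])
    return np.array(X), np.array(T)
```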
Most NN-based modelling techniques attempt to model the time-varying dynamics of a helicopter system using an off-line modelling approach, which cannot represent all operating points of the flight envelope very well. Previous studies have attempted to update the NN model during flight using mini-batch Levenberg-Marquardt (LM) training. However, owing to the limited processing power available in a real-time processor, such approaches can only be applied to relatively small networks and are limited to modelling uncoupled helicopter dynamics. In order to accommodate the time-varying properties of the helicopter dynamics, which change frequently during flight, a recursive Gauss-Newton (rGN) algorithm is developed to track the dynamics of the system under consideration.
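As a rough illustration of the online tracking idea, the following is one common form of a recursive Gauss-Newton (prediction-error) update with a forgetting factor; the exact recursion, initialisation and forgetting factor used in the thesis may differ.

```python
import numpy as np

def rgn_step(theta, P, phi, y, predict, jacobian, lam=0.99):
    """One recursive Gauss-Newton update of the NN weight vector theta.
    predict(phi, theta) returns the scalar prediction and jacobian(phi, theta)
    its gradient with respect to theta; lam is the forgetting factor."""
    y_hat = predict(phi, theta)
    psi = jacobian(phi, theta)                  # prediction sensitivity
    K = (P @ psi) / (lam + psi @ P @ psi)       # Gauss-Newton gain
    theta = theta + K * (y - y_hat)             # correct weights with the new error
    P = (P - np.outer(K, psi @ P)) / lam        # update covariance-like matrix
    return theta, P
```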
It is found that the predicted response from the off-line trained neural network model is suitable for modelling the UAS helicopter dynamics correctly. The model structure of the MLP network can be identified correctly using the proposed validation methods. Further comparison with model structure selections from previous studies shows that the model structure identified using the proposed validation methods offers improvements in terms of generalisation error. Moreover, the minimum number of neurons to be included in the model can easily be determined using the proposed cross-validation method. The HMLP and modified Elman networks are proposed in this work to reduce the total number of weights used in the standard MLP network. The reduction in the total number of weights in the network structure contributes significantly to a reduction in the computation time needed to train the NN model. Based on the validation test results, the model structures of the HMLP and modified Elman networks are found to be much smaller than that of the standard MLP network. Although the total number of weights in both the HMLP and modified Elman networks is lower than in the MLP network, the prediction performance of both NN models is on par with the prediction quality of the MLP network.
The identification results further indicate that the rGN algorithm is more adaptive to changes in dynamic properties, although the generalisation error of repeated rGN is slightly higher than that of the off-line LM method. The rGN method is found to be capable of producing satisfactory prediction accuracy even when the model structure is not accurately defined. The recursive method presented in this work is suitable for modelling the UAS helicopter in real time within the control sampling-time and computational resource constraints. Moreover, the implementation of the proposed network architectures, such as the HMLP and modified Elman networks, is found to improve the learning rate of NN prediction. These positive findings motivate the implementation of real-time recursive learning of NN models for the proposed MPC controller. The proposed system identification and hovering control of the unmanned helicopter system are validated in a six-degree-of-freedom (DOF) safety test rig. The experimental results confirm the effectiveness and the robustness of the proposed controller under disturbances and parameter changes of the dynamic system.
483
Modelling and optimising of crude oil desalting process
Al-Otaibi, Musleh B., January 2004 (has links)
The history of the crude oil desalting/dehydration plant (DDP) has been marked by progressive phases: the simple gravity settling phase, the chemical treatment phase, the electrical enhancement phase and the dilution water phase. In recent times, the proper label would be the control-optimisation phase, marked by terms such as "DDP process control", "desalter optimisation control" or "DDP automating technology". Another less perceptible but nonetheless important aspect has been both a punch listing of traditional plant boundaries and a grouping of the factors that play essential roles in a DDP. Nowadays, modelling and optimising of DDP performance have become more prominent in petroleum and chemical engineering, which has traditionally been concerned with production and refinery processing industries. Today's desalting/dehydration technology is an important factor in areas as diverse as petroleum engineering, environmental concerns and advanced technology materials. The movement into these areas has created a need not only for sources useful to professionals but also for gathering relevant information essential to improving product quality and its impact on health, safety and environmental (HSE) aspects. All of the foregoing clearly establishes the need for a comprehensive knowledge of DDP and emulsion theories, process modelling and optimisation techniques. The main objective of this work is to model and qualitatively optimise a desalting/dehydration plant. Accordingly, this thesis covers in depth the basic areas of emulsion treatment fundamentals, the modelling of desalting/dehydration processes and the optimisation of desalting plant performance. In addition, emphasis is placed on more advanced topics such as optimisation technology and process modifications. At the results and recommendations stage, the theme of this work, optimising the desalting/dehydration plant, is presented as a practically applicable scheme. Finally, a significant compendium of figures and experimental data is presented. This thesis therefore presents the research and important principles of desalting/dehydration systems. It also gives the oil industry a wide breadth of important information presented in a concise and focused manner. In pursuit of data quality and on-line product improvement, this combination will be a powerful tool for operators and professionals in a decision support environment.
484
Evolution of grasping behaviour in anthropomorphic robotic arms with embodied neural controllers
Massera, Gianluca, January 2012 (has links)
The work reported in this thesis focuses on synthesising, through an automatic design process based on artificial evolution, neural controllers for anthropomorphic robots that are able to manipulate objects. The use of Evolutionary Robotics makes it possible to reduce the characteristics and parameters specified by the designer to a minimum, with the robot's skills evolving as it interacts with the environment. The primary objective of these experiments is to investigate whether neural controllers that regulate the state of the motors on the basis of current and previously experienced sensor readings (i.e. without relying on an inverse model) can enable the robots to solve such complex tasks. Another objective is to investigate whether the Evolutionary Robotics approach can be successfully applied to scenarios that are significantly more complex than those to which it is typically applied (in terms of the complexity of the robot's morphology, the size of the neural controller, and the complexity of the task). The results obtained indicate that skills such as reaching, grasping, and discriminating among objects can be accomplished without the need to learn precise inverse internal models of the arm/hand structure. This also supports the hypothesis that the human central nervous system (CNS) does not necessarily need internal models of the limbs (without excluding the possibility that it possesses such models for other purposes), but can act by shifting the equilibrium points/cycles of the underlying musculoskeletal system. Consequently, the resulting controllers for such fundamental skills would be less complex, and the learning of more complex behaviours will be easier to design because the underlying controller of the arm/hand structure is less complex. Moreover, the results obtained also show how evolved robots exploit sensory-motor coordination in order to accomplish their tasks.
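For readers unfamiliar with the approach, the sketch below shows a bare-bones evolutionary loop over the weight vector of a fixed-topology neural controller; the population size, elitist selection and Gaussian mutation are generic choices, not the specific evolutionary algorithm, encoding or fitness functions used in this thesis.

```python
import numpy as np

def evolve_controller(evaluate, n_weights, pop_size=100, generations=200,
                      elite_frac=0.2, sigma=0.1, seed=0):
    """Evolve a controller weight vector; evaluate(weights) must run the robot
    (typically in simulation) and return a fitness score to maximise."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 1.0, size=(pop_size, n_weights))
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        fitness = np.array([evaluate(w) for w in pop])
        elite = pop[np.argsort(fitness)[-n_elite:]]                     # best genotypes survive
        children = elite[rng.integers(0, n_elite, pop_size - n_elite)]  # copy surviving parents
        children = children + rng.normal(0.0, sigma, children.shape)    # Gaussian mutation
        pop = np.vstack([elite, children])
    return pop[np.argmax([evaluate(w) for w in pop])]
```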
485
Bio-inspired adaptive sensing
Gonos, Theophile, January 2012 (has links)
Sensor array calibration is a major problem in engineering, to which a biological approach may provide alternative solutions. For animals, perception is relative. The aim of this thesis is to show that the relativity of perception in the animal kingdom can also be applied to robotics with promising results. The thesis explores, through various behaviours and environments, the properties of homeostatic mechanisms in sensory cells. It shows not only that the phenomenon can compensate for the partial failure of sensors but also that it can be used by robots to adapt to their (changing) environment. Moreover, the system shows emergent properties as well as adaptation to the robot's body and behaviour. The homeostatic mechanisms in biological neurons maintain firing activity within predefined ranges. Our model is designed to correct out-of-range neuron activity over a relatively long period of time (seconds or minutes). The system is implemented in a robot's sensory neurons and is the only form of adaptability used in the central network. The robot was first tested extensively with the mechanism implemented for obstacle avoidance and wall following behaviours. The robot was not only able to deal with sensor manufacturing defects but also to adapt to changing environments (e.g. adapting to a narrow environment when it was originally in an open world). The emergence of non-implemented behaviours has also been observed. For example, during wall following the robot at some point appeared bored and reversed the direction in which it was following the wall, and during obstacle avoidance an exploratory behaviour emerged. The model has also been tested on more complex behaviours such as skototaxis, an escape response, and phonotaxis. Again, especially with skototaxis, emergent behaviours appeared, such as unpredictability in where and when the robot would hide. It appears that the adaptation is driven not only by the environment but also by the behaviour of the robot; it is through the complex feedback between these two that non-implemented behaviours emerge. We showed that homeostasis can be used to improve sensory signal processing in robotics, and we also found evidence that the phenomenon can be a necessary step towards better behavioural adaptation to the environment.
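The homeostatic idea can be sketched, under assumed rates and ranges, as a slow adjustment of each sensory neuron's input gain whenever its firing rate drifts outside a predefined band; the actual model in the thesis may differ in form.

```python
def homeostatic_gain_update(gain, rate, target_low=5.0, target_high=20.0, eta=0.001):
    """Slowly nudge a sensory neuron's input gain so that its firing rate (Hz)
    returns to the target range; rates and learning rate are illustrative."""
    if rate < target_low:            # too quiet: increase sensitivity
        gain += eta * (target_low - rate)
    elif rate > target_high:         # too active: decrease sensitivity
        gain -= eta * (rate - target_high)
    return max(gain, 0.0)            # keep the gain non-negative
```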
486
Online optimisation of information transmission in stochastic spiking neural systems
Kourkoulas-Chondrorizos, Alexandros, January 2012 (has links)
An information-theoretic approach is used to study the effect of noise on various spiking neural systems. Detailed statistical analyses of neural behaviour under the influence of stochasticity are carried out, and their results are related to other work and to biological neural networks. The neurocomputational capabilities of the neural systems under study are put on an absolute scale. This approach was also used to develop an optimisation framework. A proof-of-concept algorithm is designed, based on information theory and the coding fraction, which optimises noise by maximising information throughput. The algorithm is applied with success to a single neuron and then generalised to an entire neural population with various structural characteristics (feedforward, lateral and recurrent connections). It is shown that there are certain positive and persistent phenomena due to noise in spiking neural networks and that these phenomena can be observed even under simplified conditions and therefore exploited. The transition is made from detailed and computationally expensive tools to efficient approximations, and the phenomena are shown to be persistent and exploitable under a variety of circumstances. The results of this work provide evidence that noise can be optimised online in both single neurons and neural populations of varying structure.
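The coding fraction mentioned above is a standard reconstruction-based measure of information transmission; a minimal sketch of its computation is given below, where the reconstruction would typically come from an optimal linear decoder of the spike train (the decoder itself is not shown and the variable names are assumptions).

```python
import numpy as np

def coding_fraction(stimulus, reconstruction):
    """Coding fraction = 1 - rms(reconstruction error) / std(stimulus):
    1 means perfect reconstruction of the stimulus, 0 means chance level."""
    rms_err = np.sqrt(np.mean((stimulus - reconstruction) ** 2))
    return 1.0 - rms_err / np.std(stimulus)
```

An online noise-optimisation loop of the kind described could then, for example, sweep the injected noise amplitude and retain the level that maximises this quantity.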
487
Neuronal Network Analyses in vitro of Acute Individual and Combined Responses to Fluoxetine and Ethanol
Xia, Yun, 08 1900 (has links)
Embryonic murine neuronal networks cultured on microelectrode arrays were used to quantify the acute electrophysiological effects of fluoxetine and ethanol. Spontaneously active frontal cortex cultures showed highly repeatable, dose-dependent sensitivities to both compounds. Cultures began to respond to fluoxetine at 3 µM and were shut off at 10-16 µM. Mean ± S.D. EC50s for spike and burst rates were 4.1 ± 1.5 µM and 4.5 ± 1.1 µM (n = 14). The fluoxetine inhibition was reversible and without effect on action potential wave shapes. Ethanol showed initial inhibition at 20 mM, with spike and burst rate EC50s of 52.0 ± 17.4 mM and 56.0 ± 17.0 mM (n = 15). Ethanol concentrations above 100-140 mM led to cessation of activity. Although ethanol did not change the shape and amplitude of action potentials, unit-specific effects were found. The combined application of ethanol and fluoxetine was additive. Ethanol did not potentiate the effect of fluoxetine.
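For context, EC50 values of this kind are usually obtained by fitting a descending Hill (logistic) curve to the dose-response data; the sketch below, with hypothetical numbers, shows one way this could be done and is not the fitting procedure or data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, ec50, n, top):
    """Descending Hill curve: activity falls from `top` toward 0 as the
    concentration c rises past the EC50; n is the Hill slope."""
    return top / (1.0 + (c / ec50) ** n)

# Hypothetical spike rates (% of reference activity) at fluoxetine concentrations in uM.
conc = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0])
rate = np.array([98.0, 92.0, 75.0, 48.0, 22.0, 8.0, 2.0])
(ec50, n, top), _ = curve_fit(hill, conc, rate, p0=[4.0, 2.0, 100.0])
print(f"EC50 ~ {ec50:.1f} uM, Hill slope ~ {n:.1f}")
```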
488
Artificial Neural Networks for Image Improvement
Lind, Benjamin, January 2017 (has links)
After a digital photo has been taken by a camera, it can be manipulated to be more appealing. Two ways of doing this are to reduce noise and to increase the saturation. With time and skill in an image manipulation program, this is usually done by hand. In this thesis, automatic image improvement based on artificial neural networks is explored and evaluated qualitatively and quantitatively. A new approach, which builds on an existing method for colorizing grayscale images, is presented and its performance is compared both to simpler methods and to the state of the art in image denoising. Saturation is lowered and noise added to the original images, which the methods receive as inputs to improve upon. The new method is shown to improve the images in some cases but not all, depending on the image and how it was modified before being given to the method.
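A simple way to generate such degraded training inputs, assuming RGB images as floats in [0, 1], is sketched below; the blend-toward-grayscale desaturation and the Gaussian noise level are assumptions and not necessarily how the data were prepared in the thesis.

```python
import numpy as np

def degrade(img, saturation=0.5, noise_sigma=0.05, seed=0):
    """Desaturate an RGB image and add Gaussian noise; the untouched original
    then serves as the training target for the improvement network."""
    rng = np.random.default_rng(seed)
    gray = img.mean(axis=-1, keepdims=True)                 # crude luminance proxy
    desat = saturation * img + (1.0 - saturation) * gray    # blend toward grayscale
    noisy = desat + rng.normal(0.0, noise_sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)
```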
489
Neural Networks: Building a Better Index Fund
Sacks, Maxwell, 01 January 2017 (has links)
Big data has become a rapidly growing field among firms in the financial sector, and many companies and researchers have begun implementing machine learning methods to sift through large volumes of data. From this data, investment management firms have attempted to automate investment strategies, some successfully and some unsuccessfully. This paper investigates an investment strategy that uses a deep neural network to see whether the stocks picked by the network outperform or underperform the Russell 2000.
490
Learning, self-organisation and homeostasis in spiking neuron networks using spike-timing dependent plasticity
Humble, James, January 2013 (has links)
Spike-timing dependent plasticity is a learning mechanism used extensively in neural modelling. The learning rule has been shown to allow a neuron to find the onset of a spatio-temporal pattern repeated among its afferents. In this thesis, the first question addressed is ‘what does this neuron learn?’ With a spiking neuron model and linear prediction, evidence is adduced that the neuron learns two components: (1) the level of average background activity and (2) specific spike times of a pattern. Taking advantage of these findings, a network is developed that can train recognisers for longer spatio-temporal input signals using spike-timing dependent plasticity. Using a number of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form in which each neuron is selective to a different segment of a repeating input pattern, and the neurons are feedforwardly connected in such a way that both the correct stimulus and the firing of the previous neurons are required to activate the next neuron in the chain. This is akin to a simple class of finite state automata. Following this, a novel resource-based STDP learning rule is introduced. The learning rule has several advantages over typical implementations of STDP and results in synaptic statistics that compare favourably with those observed experimentally. For example, synaptic weight distributions and the presence of silent synapses match experimental data.
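For reference, a standard pair-based STDP window (not the resource-based rule introduced in this thesis) can be sketched as follows; the amplitudes and time constants are illustrative values.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair, dt_ms = t_post - t_pre.
    Pre-before-post (dt >= 0) potentiates; post-before-pre depresses."""
    if dt_ms >= 0:
        return a_plus * np.exp(-dt_ms / tau_plus)     # LTP branch
    return -a_minus * np.exp(dt_ms / tau_minus)       # LTD branch
```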