71

Development and Implementation of an Online Kraft Black Liquor Viscosity Soft Sensor

Alabi, Sunday Boladale January 2010 (has links)
The recovery and recycling of spent chemicals from the kraft pulping process are economically and environmentally essential in an integrated kraft pulp and paper mill. The recovery process can be optimised by firing high-solids black liquor in the recovery boiler. Unfortunately, owing to the corresponding increase in liquor viscosity, many mills fire black liquor at reduced solids concentration to avoid possible rheological problems. Online measurement, monitoring and control of liquor viscosity are therefore deemed essential for recovery boiler optimisation. However, in most mills, including those in New Zealand, black liquor viscosity is not routinely measured. Four batches of black liquor with solids concentrations between 47% and 70% and different residual alkali (RA) contents were obtained from Carter Holt Harvey Pulp and Paper (CHHP&P), Kinleith mill, New Zealand. Weak black liquor samples were obtained by diluting the concentrated samples with deionised water. The viscosities of samples at solids concentrations from 0 to 70% were measured using open-cup rotational viscometers at temperatures from 0 to 115 °C and shear rates between 10 and 2000 s⁻¹. The effect of a post-pulping liquor heat treatment (LHT) process on the liquors' viscosities was investigated in an autoclave at temperatures ≥ 180 °C for at least 15 min. The samples exhibit both Newtonian and non-Newtonian behaviour depending on temperature and solids concentration; the onsets of these behaviours are liquor-dependent. In agreement with literature data, at high solids concentrations (> 50%) and low temperatures the liquors exhibit shear-thinning behaviour, with or without thixotropy, but these shear-thinning/thixotropic characteristics disappear at high temperatures (≥ 80 °C). Generally, when the apparent viscosity is ≤ ~1000 cP, the liquors show Newtonian or near-Newtonian behaviour. These findings demonstrate that New Zealand black liquors can be safely treated as Newtonian fluids under industrial conditions. Further observations show that at low solids concentrations (< 50%) viscosity is fairly independent of RA content, whereas at solids concentrations > 50% viscosity decreases with increasing RA content. The RA content of black liquor can therefore be manipulated to control the viscosity of high-solids liquors. The LHT process had a negligible effect on low-solids liquor viscosity but produced a significant and permanent reduction of high-solids liquor viscosity, by a factor of at least 6. Incorporating an LHT process into an existing kraft recovery process can therefore deliver the benefits of high-solids liquor firing without the attendant rheological problems. A variety of existing and proposed viscosity models were fitted using traditional regression tools and an artificial neural network (ANN) paradigm under different constraints. Hitherto, the existing models have relied on traditional regression tools and are mostly applicable to limited ranges of process conditions. On the one hand, composition-dependent models were obtained as a direct function of solids concentration and temperature, or of solids concentration, temperature and shear rate; the relationships between these variables and liquor viscosity are straightforward.
The ANN-based models developed in this work were found to be superior to the traditional models in accuracy, generalisation capability and applicability to a wide range of process conditions. If the parameters of the resulting ANN models could be successfully correlated with liquor composition, the models would be suitable for online application. Unfortunately, black liquor viscosity depends on composition in a complex manner, and directly correlating the model parameters with liquor composition is not yet straightforward. On the other hand, for the first time in Australasia, the limitations of the composition-dependent models were addressed using centrifugal pump performance parameters, which are easy to measure online. A variety of centrifugal pump-based models were developed from estimated data obtained via the Hydraulic Institute viscosity correction method, as opposed to traditional approaches, which depend largely on actual experimental data that can be difficult and expensive to obtain. The resulting age-independent centrifugal pump-based model was implemented online as a black liquor viscosity soft sensor at the number 5 recovery boiler at the CHHP&P Kinleith mill, New Zealand, where its performance was evaluated. The results confirm its ability to account effectively for variations in liquor composition. Furthermore, it gave robust viscosity estimates in the presence of a changing pump operating point. This study therefore opens a new and effective route for kraft black liquor viscosity sensor development.
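As a rough illustration of the composition-dependent models mentioned above, the sketch below fits a simple correlation for viscosity as a function of solids concentration and temperature by least squares. The functional form ln μ = a + b·S + c/T and all data values are illustrative assumptions, not the models or measurements from the thesis.

```python
import numpy as np

# Hypothetical calibration data: solids mass fraction S, absolute temperature T (K),
# and measured viscosity mu (cP). Values are invented, not from the thesis.
S  = np.array([0.50, 0.55, 0.60, 0.65, 0.70, 0.50, 0.60, 0.70])
T  = np.array([353.0, 353.0, 353.0, 353.0, 353.0, 388.0, 388.0, 388.0])
mu = np.array([120.0, 260.0, 600.0, 1500.0, 4000.0, 40.0, 150.0, 800.0])

# Assumed functional form: ln(mu) = a + b*S + c/T (Arrhenius-like temperature term).
X = np.column_stack([np.ones_like(S), S, 1.0 / T])
coeffs, *_ = np.linalg.lstsq(X, np.log(mu), rcond=None)

def predict_viscosity(solids, temp_K):
    """Predict viscosity (cP) from solids fraction and temperature (K)."""
    a, b, c = coeffs
    return np.exp(a + b * solids + c / temp_K)

print(predict_viscosity(0.65, 370.0))
```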
72

Artificial Neural Networks for Image Improvement

Lind, Benjamin January 2017 (has links)
After a digital photo has been taken by a camera, it can be manipulated to be more appealing. Two ways of doing that are to reduce noise and to increase saturation. With time and skill in an image manipulation program, this is usually done by hand. In this thesis, automatic image improvement based on artificial neural networks is explored and evaluated qualitatively and quantitatively. A new approach, which builds on an existing method for colorizing grayscale images, is presented and its performance compared both to simpler methods and to the state of the art in image denoising. Saturation is lowered and noise is added to original images, which the methods receive as inputs to improve upon. The new method is shown to improve some cases but not all, depending on the image and on how it was modified before being given to the method.
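A minimal sketch of the degradation step described in the abstract, lowering saturation and adding noise to create training pairs. The blend factor, noise level, and luminance weights are assumed choices, not the thesis's exact pipeline.

```python
import numpy as np

def degrade(image, sat_factor=0.5, noise_sigma=0.05, seed=0):
    """Create a degraded training input from a clean RGB image in [0, 1]:
    blend toward grayscale to lower saturation, then add Gaussian noise."""
    rng = np.random.default_rng(seed)
    # Luminance (ITU-R BT.601 weights), broadcast back over three channels.
    gray = image @ np.array([0.299, 0.587, 0.114])
    desaturated = sat_factor * image + (1.0 - sat_factor) * gray[..., None]
    noisy = desaturated + rng.normal(0.0, noise_sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = np.random.rand(64, 64, 3)   # stand-in for a real photo
model_input = degrade(clean)        # the network learns: model_input -> clean
```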
73

Towards efficient vehicle dynamics development : From subjective assessments to objective metrics, from physical to virtual testing

Gil Gómez, Gaspar January 2017 (has links)
Vehicle dynamics development is strongly based on subjective assessments (SA) of vehicle prototypes, which is expensive and time consuming. Consequently, in the age of computer-aided engineering (CAE), there is a drive towards reducing this dependency on physical testing. However, computers are known for their remarkable processing capacity, not for their feelings. Therefore, before SA can be computed, it is necessary to properly understand the correlation between SA and objective metrics (OM), which can be calculated by simulations, and to understand how this knowledge can enable a more efficient and effective development process. The approach of this research was firstly to identify key OM and SA in vehicle dynamics, based on the multicollinearity of OM and of SA and on interviews with expert drivers. Secondly, linear regressions and artificial neural networks (ANN) were used to identify the ranges of preferred OM that lead to good SA-ratings. This result is the basis for objective requirements, a must in effective vehicle dynamics development and verification. The main result of this doctoral thesis is a method capable of predicting SA from combinations of key OM. Firstly, the method generates a classification map of vehicles solely based on their OM, which allows a qualitative prediction of the steering feel of a new vehicle based on its position, and that of its neighbours, in the map. This prediction is enhanced with descriptive word-clouds, which summarize in a few words the comments of expert test drivers on each vehicle in the map. A second, superimposed ANN then displays the evolution of SA-ratings in the map and therefore allows the SA-rating for the new vehicle to be forecast. Moreover, this method has been used to analyse the effect of the tolerances of OM requirements, as well as to verify the previously identified preferred ranges of OM. This thesis focused on OM-SA correlations in summer conditions, but it also aimed to increase the effectiveness of vehicle dynamics development in general. For winter conditions, where objective testing is not yet mature, this research initiates the definition and identification of robust objective manoeuvres and OM. Experimental data were used together with CAE optimisations and ANOVA analysis to optimise the manoeuvres, which were verified in a second experiment. To improve the quality and efficiency of SA, Volvo's Moving Base Driving Simulator (MBDS) was validated for vehicle dynamics SA-ratings. Furthermore, a tablet app to aid vehicle dynamics SA was developed and validated. Combined, this research encompasses a comprehensive method for a more effective and objective vehicle dynamics development process. This has been achieved by increasing the understanding of OM, SA and their relations, which enables more effective SA (key SA, MBDS, SA-app), facilitates objective requirements and therefore CAE development, identifies key OM and their preferred ranges, and allows SA to be predicted solely from OM.
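As a sketch of the OM-to-SA prediction idea, the snippet below trains a small neural network to map a vector of objective metrics to a subjective rating. The metrics, ratings, and network size are invented for illustration and do not reproduce the thesis's models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training data: rows are vehicles, columns are objective metrics
# (e.g. steering torque gradient, yaw-rate response time); values are synthetic.
rng = np.random.default_rng(42)
OM = rng.normal(size=(40, 5))                        # 40 vehicles, 5 metrics
SA = 7 + OM @ np.array([0.5, -0.3, 0.2, 0.1, -0.4])  # synthetic SA-ratings (1-10 scale)
SA += rng.normal(scale=0.3, size=40)

# Small MLP mapping OM -> SA rating, with feature scaling.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(OM, SA)
print(model.predict(OM[:3]))  # forecast SA-ratings from a new vehicle's metrics
```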
74

IMPROVED CAPABILITY OF A COMPUTATIONAL FOOT/ANKLE MODEL USING ARTIFICIAL NEURAL NETWORKS

Chande, Ruchi D 01 January 2016 (has links)
Computational joint models provide insight into the biomechanical function of human joints. Through both deformable and rigid body modeling, the structure-function relationship governing joint behavior is better understood, and subsequently, knowledge regarding normal, diseased, and/or injured function is garnered. Given the utility of these computational models, it is imperative to supply them with appropriate inputs such that model function is representative of true joint function. In these models, Magnetic Resonance Imaging (MRI) or Computerized Tomography (CT) scans and literature inform the bony anatomy and mechanical properties of muscle and ligamentous tissues, respectively. In the case of the latter, literature reports a wide range of values, or average values with large standard deviations, due to the inability to measure the mechanical properties of soft tissues in vivo. This makes it difficult to determine which values within the published literature to assign to computational models, especially patient-specific models. Therefore, while the use of published literature serves as a reasonable first approach to set up a computational model, a means of improving the supplied input data was sought. This work details the application of artificial neural networks (ANNs), specifically feedforward and radial basis function networks, to the optimization of ligament stiffnesses for the improved performance of pre- and post-operative, patient-specific foot/ankle computational models. ANNs are mathematical models that utilize learning rules to determine relationships between known sets of inputs and outputs. Using knowledge gained from these training data, the ANN may then predict outputs for similar, never-before-seen inputs. Here, an optimal network of each ANN type was found, based on mean square error and correlation data, and then both networks were used to predict optimal ligament stiffnesses corresponding to a single patient's radiographic measurements. Both sets of predictions were ultimately supplied to the patient-specific computational models, and the resulting kinematics illustrated an improvement over the existing models that utilized literature-assigned stiffnesses. This research demonstrated that neural networks are a viable means to home in on ligament stiffnesses for the overall objective of improving the predictive ability of a patient-specific computational model.
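A minimal numpy sketch of a radial basis function network of the kind named above, mapping measurements to stiffness values; the Gaussian width, the ridge term, and all data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))   # hypothetical radiographic measurements
Y = rng.normal(size=(30, 2))   # hypothetical optimal ligament stiffnesses

def rbf_design(X, centers, width=1.0):
    """Gaussian RBF layer: one basis function per center."""
    # Pairwise squared distances between inputs and centers.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

centers = X.copy()             # one center per training sample
Phi = rbf_design(X, centers)
# Linear output weights by regularized least squares (small ridge term for stability).
W = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(len(centers)), Phi.T @ Y)

def predict(X_new):
    return rbf_design(X_new, centers) @ W

print(predict(X[:2]))          # should closely reproduce Y[:2]
```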
75

Groundwater Management Using Remotely Sensed Data in High Plains Aquifer

Ghasemian, Davood January 2016 (has links)
Groundwater monitoring at regional scales using conventional methods is challenging since it requires a dense network of monitoring wells and regular measurements. Satellite measurement of time-variable gravity by the Gravity Recovery and Climate Experiment (GRACE) mission since 2002 has provided an exceptional opportunity to observe variations in Terrestrial Water Storage (TWS) from space. This study is divided into three parts. First, different satellite and hydrological model data were used to validate the TWS measurements derived from GRACE over the High Plains Aquifer (HPA). TWS derived from GRACE was compared to TWS derived from a water budget whose inputs were determined from independent datasets. The results were similar in both magnitude and timing, with a correlation coefficient of 0.55. Seasonal groundwater storage changes were also estimated using GRACE and auxiliary data for the period 2004 to 2009, and the results were compared to local in situ measurements to test the capability of GRACE to detect groundwater changes in this region. The comparison of seasonal groundwater changes from GRACE and in situ measurements indicated good agreement in both magnitude and seasonality, with a correlation coefficient of 0.71. This finding demonstrates the value of GRACE satellite data for detecting groundwater level anomalies and the benefits of using these data in regional hydrological modelling. In the second part of the study, the feasibility of using GRACE TWS to predict groundwater level changes was investigated at different locations in the High Plains Aquifer. Artificial Neural Networks (ANNs) were used to predict monthly groundwater level changes. The input data employed in the ANNs include monthly gridded GRACE TWS based on Release-05 of GRACE Level-3; precipitation and minimum and maximum temperature estimated from the Parameter-elevation Regressions on Independent Slopes Model (PRISM); and soil moisture estimates derived from the Noah Land Surface Model, for the period January 2004 to December 2009. Values for all these datasets were extracted at the locations of 21 selected wells for the study period. The input data were divided into three parts: 60% dedicated to training, 20% to validation, and 20% to testing. The output of the developed ANNs is the groundwater level change, which was compared to well data from the US Geological Survey's National Water Information System. Statistical downscaling of GRACE data led to a significant improvement in predicting groundwater level changes, and the trained ensemble multi-layer perceptron shows "good" to "very good" performance based on the obtained Nash-Sutcliffe Efficiency, which demonstrates the capability of these data for downscaling. In the third part of this study, soil moisture from four different land surface models (the NOAH, VIC, MOSAIC, and CLM land surface models), accessible through the NASA Global Land Data Assimilation System (GLDAS), was included in developing the ANNs, and the results were compared to each other to quantify the effect of soil moisture in the downscaling of GRACE.
The relative importance of each predictor was estimated using the connection weight technique, and GRACE TWS was found to be a significant parameter in the performance of the ANN ensembles. Based on the Root Mean Squared Error (RMSE) and the correlation coefficients of the models that use soil moisture from the Noah and CLM land surface models, using these datasets in the downscaling of GRACE delivers simulated values that correlate more closely with the observed values.
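The sketch below illustrates the general workflow described above: train a small multilayer perceptron on monthly predictors and score it with the Nash-Sutcliffe Efficiency. All data are synthetic, and a simple 80/20 train/test split stands in for the 60/20/20 split used in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 = perfect, <= 0 = no better than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Hypothetical monthly predictors at one well: GRACE TWS anomaly, precipitation,
# min/max temperature, soil moisture. Values are synthetic stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(72, 5))   # 72 months of predictors
y = X @ np.array([0.8, 0.3, -0.1, 0.1, 0.4]) + rng.normal(scale=0.2, size=72)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1)
model.fit(X_train, y_train)
print("NSE:", nash_sutcliffe(y_test, model.predict(X_test)))
```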
76

Are we there yet? : Predicting bus arrival times with an artificial neural network

Rideg, Johan, Markensten, Max January 2019 (has links)
Public transport authority UL (Upplands Lokaltrafik) aims to reduce emissions, air pollution, and traffic congestion by providing bus journeys as an alternative to using a car. In order to incentivise bus travel, accurate arrival time predictions are critical to potential passengers: they enable passengers to spend less time waiting for the bus and to revise their plans for connections when their bus runs late. According to the literature, Artificial Neural Networks (ANNs) have the ability to capture nonlinear relationships between the time of day and position of the bus and its arrival time at upcoming bus stops. Using arrival times of buses on one line from July 2018 to February 2019, a dataset for supervised learning was curated and used to train an ANN. The ANN was applied to data from the city buses and compared to one of the models currently in use. Analysis showed that the ANN was better able to handle the fluctuations in travel time during the day, being outperformed only at night. Before the ANN can be deployed, real-time data processing must be added. To cement its practicality, it should also be explored whether its robustness can be improved, as the current model is highly dependent on static bus routes.
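One detail such a model must handle is that clock time wraps around at midnight. A common remedy, shown in the hypothetical feature sketch below, is to encode time of day as sine/cosine pairs; the route-progress feature and label are invented examples, not the thesis's feature set.

```python
import numpy as np

def time_of_day_features(seconds_since_midnight):
    """Encode time of day cyclically so 23:59 and 00:01 are close in feature space."""
    angle = 2.0 * np.pi * seconds_since_midnight / 86400.0
    return np.sin(angle), np.cos(angle)

# Hypothetical training row: time of day plus the bus's progress along the route,
# with the target being observed travel time to the next stop (seconds).
sin_t, cos_t = time_of_day_features(8 * 3600 + 15 * 60)  # 08:15, morning rush
features = np.array([sin_t, cos_t, 0.42])                 # 0.42 = fraction of route done
target_travel_time = 95.0                                 # invented label
print(features, target_travel_time)
```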
77

Classifying Material Defects with Convolutional Neural Networks and Image Processing

Heidari, Jawid January 2019 (has links)
Fantastic progress has been made within the field of machine learning and deep neural networks in the last decade. Deep convolutional neural networks (CNNs) have been hugely successful in image classification and object detection. These networks can automate many processes in industry and increase efficiency. One of these processes is image classification implementing various CNN models. This thesis addressed two different approaches for solving the same problem. The first approach implemented two CNN models to classify images. The large pre-trained VGG model was retrained using so-called transfer learning, training only the top layers of the network. The other model was a smaller one with customized layers. The trained models are an end-to-end solution: the input is an image, and the output is a class score. The second strategy implemented several classical image processing algorithms to detect the individual defects present in the images. This method works as a rule-based object detection algorithm. The Canny edge detection algorithm, combined with two mathematical morphology operations, forms the backbone of this strategy. Sandvik Coromant operators gathered the approximately 1000 microscopy images used in this thesis. Sandvik Coromant is a leading producer of high-quality metal cutting tools. During the manufacturing process, some unwanted defects occur in the products. These defects are analyzed by taking images with a conventional microscope with 100× and 1000× magnification. The three essential defect types investigated in this thesis are defined as Por, Macro and Slits. Experiments conducted during this thesis show that CNN models are a good approach to classifying impurities and defects in the metal industry; the potential is high. The validation accuracy reached circa 90 percent, and the final evaluation accuracy was around 95 percent, which is an acceptable result. The pre-trained VGG model reached a much higher accuracy than the customized model. The Canny edge detection algorithm combined with dilation, erosion and contour detection produced good results, detecting the majority of the defects present in the images.
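A minimal sketch of the rule-based strategy described above, assuming OpenCV 4 (cv2): Canny edges, dilation and erosion to close gaps, then contour extraction. The thresholds, kernel size, and area filter are assumed values, not those tuned in the thesis.

```python
import cv2
import numpy as np

def detect_defects(gray, low=50, high=150, min_area=25):
    """Rule-based defect detection: Canny edges, then morphological closing
    (dilation followed by erosion) to join edge fragments, then contours."""
    edges = cv2.Canny(gray, low, high)
    kernel = np.ones((3, 3), np.uint8)
    closed = cv2.erode(cv2.dilate(edges, kernel, iterations=2), kernel, iterations=2)
    # OpenCV 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only regions large enough to plausibly be defects rather than noise.
    return [c for c in contours if cv2.contourArea(c) >= min_area]

image = np.zeros((256, 256), np.uint8)
cv2.circle(image, (128, 128), 20, 255, -1)   # synthetic "pore"
print(len(detect_defects(image)), "candidate defect(s) found")
```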
78

Towards Machine Learning Inference in the Data Plane

Langlet, Jonatan January 2019 (has links)
Recently, machine learning has been considered an important tool for various networking-related use cases such as intrusion detection, flow classification, etc. Traditionally, machine-learning-based classification algorithms run on dedicated machines that are outside of the fast path, e.g. on Deep Packet Inspection boxes, etc. This imposes additional latency in order to detect threats or classify the flows. With the recent advance of programmable data planes, implementing advanced functionality directly in the fast path is now a possibility. In this thesis, we propose to implement Artificial Neural Network inference together with flow metadata extraction directly in the data plane of P4-programmable switches, routers, or Network Interface Cards (NICs). We design a P4 pipeline and optimize the memory and computational operations for our data plane target, a programmable NIC with Micro-C external support. The results show that neural networks of a reasonable size (i.e. 3 hidden layers with 30 neurons each) can process flows totaling over a million packets per second, while the packet latency impact from extracting a total of 46 features is 1.85 μs.
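Data-plane targets generally lack floating-point arithmetic, so inference is typically done in fixed point. The Python sketch below is a language-neutral stand-in for the P4/Micro-C implementation, showing integer-only inference for the network size quoted above; the Q-format, weights, and output width are assumptions.

```python
import numpy as np

# Weights are quantized to integers and activations kept in fixed point;
# shift right by FRAC_BITS after each multiply to rescale.
FRAC_BITS = 8                  # assumed Q8 fixed-point format
rng = np.random.default_rng(7)

def quantize(w):
    return np.round(w * (1 << FRAC_BITS)).astype(np.int32)

# Hypothetical 46-feature input and 3 hidden layers of 30 neurons, as in the abstract.
layers = [quantize(rng.normal(scale=0.2, size=s))
          for s in [(46, 30), (30, 30), (30, 30), (30, 2)]]

def infer(features_fx):
    """Integer-only forward pass with ReLU; input is already in fixed point."""
    x = features_fx
    for W in layers[:-1]:
        x = np.maximum((x @ W) >> FRAC_BITS, 0)   # matmul, rescale, ReLU
    return (x @ layers[-1]) >> FRAC_BITS          # output logits (fixed point)

flow_features = quantize(rng.random(46))          # stand-in for extracted metadata
print(infer(flow_features))
```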
79

Determination of the resonant frequency of triangular and rectangular microstrip antennas using artificial neural networks

Brinhole, Everaldo Ribeiro. January 2005 (has links)
Advisor: Naasson Pereira de Alcântara Junior / Committee: José Carlos Sartoti / Committee: José Alfredo Covolan Ulson / Abstract: This work presents the development of a methodology using artificial neural networks to assist in determining the resonant frequency in the design of microstrip antennas for mobile equipment, for both rectangular and triangular antennas. Deterministic models and empirical models based on Artificial Neural Networks (ANNs) from the literature are compared with the models presented in this work. The presented models are empirical, based on Multilayer Perceptron (MLP) networks, and can be embedded in CAD (Computer-Aided Design) systems in order to design microstrip antennas for mobile communications. / Master's
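The deterministic models referred to above are typically closed-form design equations. As one well-known example (an illustration, not necessarily the exact model compared in this work), the sketch below computes the approximate resonant frequency of a rectangular patch with the standard Hammerstad corrections.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def rectangular_patch_resonant_frequency(L, W, h, eps_r):
    """Approximate dominant-mode resonant frequency (Hz) of a rectangular
    microstrip patch via the standard Hammerstad design equations.
    L, W, h in metres; eps_r is the substrate's relative permittivity."""
    # Effective permittivity accounts for fields fringing into the air.
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    # Fringing fields make the patch look electrically longer by delta_L per edge.
    delta_L = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
              ((eps_eff - 0.258) * (W / h + 0.8))
    return C / (2 * (L + 2 * delta_L) * math.sqrt(eps_eff))

# Example: 38 mm x 49 mm patch on 1.6 mm FR-4 (eps_r ~ 4.4) -> roughly 1.9 GHz.
print(rectangular_patch_resonant_frequency(0.038, 0.049, 0.0016, 4.4) / 1e9, "GHz")
```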
80

Analysing multifactor investing & artificial neural network for modern stock market prediction

Roy, Samuel, Jönsson, Jakob January 2019 (has links)
In this research we investigate the relationship between multifactor investing and Artificial Neural Networks (ANN) and contribute to modern stock market prediction. We present the components of multifactor investing, i.e. value, quality, size, low volatility & momentum, as well as a methodology for ANNs which provides the theory for the results. The returns of the multifactor funds tested in this research fell below the benchmark used. However, the factors do have a dynamic relationship when testing for correlation, and the multifactor regression analysis showed high explanatory power (R²) for the funds. Based on the methodology of an ANN, we establish that it is possible to use knowledge from multifactor investing to train the technology. When summarizing peer-reviewed journals, we find that momentum has already been used recurrently in previous stock market prediction systems based on ANNs, but the remaining factors have not. We conclude that there is an opportunity to use several factors to train an ANN due to their dynamic relationship and unique characteristics.
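A minimal sketch of the multifactor regression and R² computation mentioned above, on synthetic data; the factor loadings and return series are invented, not the funds analysed in the research.

```python
import numpy as np

# Hypothetical monthly excess returns: five factor series (value, quality, size,
# low volatility, momentum) and one fund. All numbers are synthetic stand-ins.
rng = np.random.default_rng(3)
factors = rng.normal(scale=0.02, size=(120, 5))   # 10 years of months
fund = factors @ np.array([0.4, 0.2, 0.1, 0.15, 0.3]) + rng.normal(scale=0.005, size=120)

# Multifactor regression: fund return on factor returns plus an intercept (alpha).
X = np.column_stack([np.ones(len(fund)), factors])
beta, *_ = np.linalg.lstsq(X, fund, rcond=None)
fitted = X @ beta

# Explanatory power R^2: the share of return variance the factor exposures explain.
r_squared = 1 - np.sum((fund - fitted) ** 2) / np.sum((fund - fund.mean()) ** 2)
print("alpha:", beta[0], "betas:", beta[1:], "R^2:", r_squared)
```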
