601

Sensory input encoding and readout methods for in vitro living neuronal networks

Ortman, Robert L. 06 July 2012 (has links)
Establishing and maintaining successful communication is a critical prerequisite for inducing and studying advanced computation in small-scale living neuronal networks. The following work establishes a novel and effective method for communicating arbitrary "sensory" input information to cultures of living neurons, living neuronal networks (LNNs), consisting of approximately 20 000 rat cortical neurons plated on microelectrode arrays (MEAs) containing 60 electrodes. The sensory coding algorithm determines a set of effective codes (symbols), composed of different spatio-temporal patterns of electrical stimulation, to each of which the LNN consistently produces a unique response. The algorithm evaluates random sequences of candidate electrical stimulation patterns for evoked-response separability and reliability via a support vector machine (SVM)-based method and, employing the separability results as a fitness metric, a genetic algorithm subsequently constructs subsets of highly separable symbols (input patterns). Sustainable input/output (I/O) bit rates of 16-20 bits per second with a 10% symbol error rate were achieved over time periods ranging from approximately ten minutes to over ten hours. To further evaluate the resulting code sets' performance, I used the system to encode approximately ten hours of sinusoidal input into stimulation patterns selected by the algorithm and was able to recover the original signal with a normalized root-mean-square error of 20-30% using only the recorded LNN responses and trained SVM classifiers. Response variations over the course of several hours observed in the sine-wave I/O experiment suggest that the LNNs may retain some short-term memory of the previous input sample and undergo neuroplastic changes in the context of repeated stimulation with the sensory coding patterns identified by the algorithm.
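
The search procedure described in this abstract lends itself to a compact illustration: score each candidate symbol subset by how separably an SVM can classify the evoked responses, and use that score as the fitness of a genetic algorithm. The sketch below is a rough approximation under assumed placeholder response features, symbol counts, and GA operators; it is not the thesis implementation.

```python
# Illustrative sketch (not the thesis code): score candidate stimulation symbols
# by how separably the network's evoked responses can be classified, then let a
# tiny genetic algorithm assemble a highly separable symbol subset.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def separability(symbol_ids, responses, labels):
    """Mean cross-validated SVM accuracy over trials of the chosen symbols."""
    mask = np.isin(labels, symbol_ids)
    X, y = responses[mask], labels[mask]
    return cross_val_score(SVC(kernel="rbf"), X, y, cv=3).mean()

def genetic_search(responses, labels, n_symbols, subset_size,
                   pop_size=20, generations=30):
    pop = [rng.choice(n_symbols, subset_size, replace=False)
           for _ in range(pop_size)]
    for _ in range(generations):
        fitness = np.array([separability(ind, responses, labels) for ind in pop])
        order = np.argsort(fitness)[::-1]
        survivors = [pop[i] for i in order[:pop_size // 2]]
        children = []
        for parent in survivors:
            child = parent.copy()
            # Mutation: swap one symbol for a random unused one.
            new_sym = rng.integers(n_symbols)
            if new_sym not in child:
                child[rng.integers(subset_size)] = new_sym
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: separability(ind, responses, labels))

# Toy data: 40-dimensional response features for 200 trials of 16 candidate symbols.
responses = rng.normal(size=(200, 40))
labels = rng.integers(0, 16, size=200)
print(genetic_search(responses, labels, n_symbols=16, subset_size=4))
```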
602

Geotechnical Site Characterization And Liquefaction Evaluation Using Intelligent Models

Samui, Pijush 02 1900 (has links)
Site characterization is an important task in geotechnical engineering. In-situ tests based on the standard penetration test (SPT), cone penetration test (CPT) and shear wave velocity survey are popular among geotechnical engineers. Characterizing a site from any of these properties on the basis of a finite number of in-situ test data is the central task in probabilistic site characterization. These methods have been used to design future soil sampling programs for the site and to specify the soil stratification. It is never possible to know the geotechnical properties at every location beneath an actual site because, in order to do so, one would need to sample and/or test the entire subsurface profile. Therefore, the main objective of site characterization models is to predict the subsurface soil properties with a minimum of in-situ test data. The prediction of soil properties is a difficult task due to the uncertainties involved: spatial variability, measurement 'noise', measurement and model bias, and statistical error due to limited measurements. Liquefaction of soil is another major problem in geotechnical earthquake engineering. It is defined as the transformation of a granular material from a solid to a liquefied state as a consequence of increased pore-water pressure and reduced effective stress. The generation of excess pore pressure under undrained loading conditions is a hallmark of all liquefaction phenomena. The phenomenon was brought to the attention of engineers after the Niigata (1964) and Alaska (1964) earthquakes. Liquefaction can cause building settlement or tipping, sand boils, ground cracks, landslides, dam instability, highway embankment failures, and other hazards. Such damage is generally of great concern to public safety and is of economic significance. Site-specific evaluation of the liquefaction susceptibility of sandy and silty soils is the first step in liquefaction hazard assessment. Many methods (intelligent models as well as simple methods such as that of Seed and Idriss, 1971) have been suggested to evaluate liquefaction susceptibility based on large databases from sites where soil has or has not liquefied. The rapid advance in information processing systems in recent decades has directed engineering research towards the development of intelligent models that can model natural phenomena automatically. In an intelligent model, a process of training is used to build up a model of the particular system, from which it is hoped to deduce the system's responses to situations that have yet to be observed. Intelligent models learn the input-output relationship from the data themselves; the quantity and quality of the data govern the performance of an intelligent model. The objective of this study is to develop intelligent models [geostatistical, artificial neural network (ANN) and support vector machine (SVM)] to estimate the corrected standard penetration test (SPT) value, Nc, in the three-dimensional (3D) subsurface of Bangalore. The database consists of 766 boreholes spread over a 220 sq km area, with several SPT N values (uncorrected blow counts) in each of them; there are a total of 3015 N values in the 3D subsurface of Bangalore. To obtain the corrected blow counts, Nc, corrections for overburden stress, borehole size, sampler type, hammer energy and connecting-rod length have been applied to the raw N values.
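
As context for how such corrections are commonly combined in practice, the sketch below applies the standard multiplicative form Nc = N · CN · CE · CB · CR · CS, where the factors account for overburden stress, hammer energy, borehole diameter, rod length and sampler type. The specific factor values used in the thesis are not given in the abstract, so the numbers below are generic placeholders.

```python
# Illustrative sketch of a standard multiplicative SPT correction.
# The factor values are generic placeholders, not the thesis's values.
import math

def corrected_spt(N, sigma_v_eff_kpa, CE=1.0, CB=1.0, CR=0.85, CS=1.0):
    """Corrected blow count Nc from raw N and effective overburden stress (kPa)."""
    CN = min(math.sqrt(100.0 / sigma_v_eff_kpa), 1.7)  # overburden correction, capped
    return N * CN * CE * CB * CR * CS

print(corrected_spt(N=18, sigma_v_eff_kpa=80.0))  # ~17.1 with these placeholder factors
```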
Using a large database of Nc values in the 3D subsurface of Bangalore, three geostatistical models (simple kriging, ordinary kriging and disjunctive kriging) have been developed. Simple and ordinary kriging produce linear estimators, whereas disjunctive kriging produces a nonlinear estimator. Knowledge of the semivariogram of the Nc data is used in kriging theory to estimate values at points in the subsurface of Bangalore where field measurements are not available. The capability of disjunctive kriging to act as a nonlinear estimator and as an estimator of the conditional probability is explored. A cross-validation (Q1 and Q2) analysis is also performed for the developed simple, ordinary and disjunctive kriging models. The results indicate that the performance of the disjunctive kriging model is better than that of the simple and ordinary kriging models. This study also describes two ANN modelling techniques applied to predict Nc at any point in the 3D subsurface of Bangalore. The first technique uses a four-layered feed-forward backpropagation (BP) model to approximate the function Nc = f(x, y, z), where x, y, z are the coordinates in the 3D subsurface of Bangalore. The second technique uses a generalized regression neural network (GRNN), trained with suitable spread(s), to approximate the same function. In the BP model, the transfer functions used in the first and second hidden layers are tansig and logsig, respectively, and the logsig transfer function is used in the output layer. The maximum number of epochs has been set to 30000, and the Levenberg-Marquardt algorithm has been used for training. The performance of the models obtained using both techniques is assessed in terms of prediction accuracy; the BP ANN model outperforms the GRNN model and all kriging models. An SVM model, which is firmly grounded in statistical learning theory and uses a regression technique with an ε-insensitive loss function, has also been adopted to predict Nc at any point in the 3D subsurface of Bangalore. The SVM implements the structural risk minimization principle (SRMP), which has been shown to be superior to the more traditional empirical risk minimization principle (ERMP) employed by many other modelling techniques. The present study also highlights the capability of the SVM over the developed geostatistical models (simple kriging, ordinary kriging and disjunctive kriging) and ANN models. Further in this thesis, liquefaction susceptibility is evaluated from SPT, CPT and Vs data using BP-ANN and SVM. Intelligent models (based on ANN and SVM) are developed for the prediction of liquefaction susceptibility using SPT data from the 1999 Chi-Chi earthquake, Taiwan. Two models (MODEL I and MODEL II) are developed, using the SPT data from the work of Hwang and Yang (2001). In MODEL I, the cyclic stress ratio (CSR) and corrected SPT values (N1)60 are used to predict liquefaction susceptibility; in MODEL II, only the peak ground acceleration (PGA) and (N1)60 are used. Further, the generalization capability of MODEL II has been examined using different case histories available globally (global SPT data) from the work of Goh (1994). This study also examines the capabilities of ANN and SVM to predict the liquefaction susceptibility of soils from CPT data obtained from the 1999 Chi-Chi earthquake, Taiwan. For the determination of liquefaction susceptibility, both the ANN and the SVM use a classification technique.
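
To make the SVM regression component concrete, the following is a minimal sketch under assumed synthetic data and placeholder hyperparameters (not the thesis model): the ε-insensitive loss corresponds to the epsilon parameter of a standard support vector regressor fitted to Nc = f(x, y, z).

```python
# Rough sketch of SVM regression for Nc = f(x, y, z) with an epsilon-insensitive
# loss, as in standard SVR. Synthetic data only; hyperparameters are placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
xyz = rng.uniform(low=[0, 0, -30], high=[15, 15, 0], size=(500, 3))  # easting, northing, depth
Nc = 25 + 0.8 * (-xyz[:, 2]) + rng.normal(scale=3, size=500)         # synthetic corrected blow counts

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(xyz, Nc)
print(model.predict([[7.5, 7.5, -10.0]]))  # predicted Nc at an unsampled location
```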
The CPT data have been taken from the work of Ku et al. (2004). In MODEL I, the cone tip resistance (qc) and CSR values are used to predict liquefaction susceptibility (using both ANN and SVM); in MODEL II, only PGA and qc are used. Further, the developed MODEL II has also been applied to different case histories available globally (global CPT data) from the work of Goh (1996). Intelligent models (ANN and SVM) have also been adopted for liquefaction susceptibility prediction based on shear wave velocity (Vs). The Vs data have been collected from the work of Andrus and Stokoe (1997), and the same procedures as for SPT and CPT have been applied. The SVM outperforms the ANN for all three models based on SPT, CPT and Vs data, and the CPT-based method gives better results than the SPT- and Vs-based methods for both the ANN and SVM models. For CPT and SPT, two input parameters {PGA and qc or (N1)60} are sufficient to determine liquefaction susceptibility using the SVM model. In this study, an attempt has also been made to evaluate geotechnical site characterization by carrying out in-situ tests using different techniques, namely CPT, SPT and multichannel analysis of surface waves (MASW). For this purpose, a typical site was selected containing both a man-made homogeneous embankment and natural ground. At this site, in-situ tests (SPT, CPT and MASW) have been carried out in different ground conditions and the test results compared: three continuous CPT profiles, fifty-four SPT tests and nine MASW profiles with depth, covering both the homogeneous embankment and the natural ground. Relationships have been developed between the Vs, (N1)60 and qc values for this specific site, and from the limited test results a good correlation was found between qc and Vs. Liquefaction susceptibility is evaluated from the in-situ (N1)60, qc and Vs data using the ANN and SVM models and has been shown to compare well with the Idriss and Boulanger (2004) approach based on SPT data. An SVM model has also been adopted to determine the overconsolidation ratio (OCR) from piezocone data. A sensitivity analysis has been performed to investigate the relative importance of each input parameter. The SVM model outperforms all available methods for OCR prediction.
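
The two-input liquefaction classification described above can likewise be sketched with a generic binary SVM on {PGA, (N1)60}. The data below are synthetic placeholders, not the Chi-Chi or global case histories, and the labelling rule exists only to give the toy classifier something to learn.

```python
# Minimal sketch of the liquefaction-susceptibility classification idea:
# a binary SVM with only two inputs, e.g. PGA and (N1)60. Synthetic data only.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
pga = rng.uniform(0.1, 0.6, 300)              # peak ground acceleration (g)
n160 = rng.uniform(2, 40, 300)                # corrected SPT blow count
liquefied = (pga * 60 - n160 + rng.normal(scale=4, size=300) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(np.column_stack([pga, n160]), liquefied)
print(clf.predict([[0.35, 12.0]]))            # 1 = susceptible, 0 = not (toy labels)
```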
603

Melizmų sintezė dirbtinių neuronų tinklais / Melisma Synthesis Using Artificial Neural Networks

Leonavičius, Romas 12 January 2007 (has links)
Modern methods of speech synthesis are not suitable for the restoration of song signals, due to a lack of vitality and intonation in the resulting sounds. The aim of the presented work is to synthesize the melismas found in Lithuanian folk songs by applying Artificial Neural Networks. An analytical survey of a rather extensive literature is presented. A first classification and a comprehensive discussion of melismas are given. The theory of dynamic systems, which forms the basis for studying melismas, is presented, and the relationship for modeling a melisma with nonlinear and dynamic systems is outlined. The most widely used Linear Prediction Coding method is investigated, along with possibilities for its improvement. A modification of the original Linear Prediction method, based on dynamic LPC frame positioning, is proposed, and on this basis a new melisma synthesis technique is presented. A flexible generalized melisma model is developed, based on two Artificial Neural Networks – a Multilayer Perceptron and an Adaline – as well as on two network training algorithms – Levenberg-Marquardt and Least Squares error minimization. Moreover, original mathematical models of the Fortis, Gruppett, Mordent and Trill are created, fit for synthesizing melismas, and their minimal sizes are proposed. The last chapter concerns an experimental investigation, using over 500 melisma records, and corroborates the application of the new mathematical models to the melisma synthesis of one performer.
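
As a rough illustration of the Linear Prediction component only (the dynamic frame positioning and the neural synthesis stages are not reproduced), the sketch below solves the LPC normal equations for one analysis frame by the autocorrelation method; the frame content and model order are arbitrary placeholders.

```python
# Illustrative sketch of the autocorrelation solution of the Linear Prediction
# equations for a single analysis frame. Not the thesis's modified LPC method.
import numpy as np

def lpc_coefficients(frame, order):
    """Solve the LPC normal equations R a = r for one analysis frame."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # autocorrelation, lags 0..N-1
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# Toy frame: a decaying sinusoid standing in for one pitch period of a melisma.
t = np.arange(256)
frame = np.exp(-t / 200.0) * np.sin(2 * np.pi * 0.07 * t)
a = lpc_coefficients(frame, order=10)
print(a)  # prediction coefficients a_1..a_10 for x[n] ~ sum_k a_k * x[n-k]
```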
605

Teleconnection, Modeling, Climate Anomalies Impact and Forecasting of Rainfall and Streamflow of the Upper Blue Nile River Basin

Elsanabary, Mohamed Helmy Mahmoud Moustafa Unknown Date
No description available.
606

運用於高頻交易策略規劃之分散式類神經網路框架 / Distributed Framework of Artificial Neural Network for Planning High-Frequency Trading Strategies

何善豪, Ho, Shan Hao Unknown Date (has links)
In this research, we introduce a distributed artificial neural network (ANN) framework as a subproject within the research on a high-frequency trading (HFT) system. In the system, ANNs are used in the data mining process for identifying patterns in financial time series. We implement a framework for training ANNs on a distributed computing platform. We adopt Apache Spark to build the underlying computing cluster because it provides high-performance in-memory computing. We investigate a number of distributed backpropagation algorithms and techniques, especially ones for time series prediction, adapt them, and incorporate them into our framework. With various options for the details, we provide the user with considerable flexibility in neural network modeling.
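
One common pattern such a framework can build on is synchronous, data-parallel backpropagation: each Spark partition computes a gradient on its shard of the time series, and the driver averages the gradients before updating the weights. The sketch below illustrates only this pattern; the single linear neuron, window size and learning rate are placeholder assumptions, and this is not the framework described in the thesis.

```python
# Minimal sketch of synchronous, data-parallel gradient averaging on Spark.
# Each partition computes the gradient of a tiny linear predictor on its shard
# of a toy time series; the driver averages the gradients and takes a step.
import numpy as np
from pyspark import SparkContext

sc = SparkContext("local[*]", "distributed-bp-sketch")

window, lr, steps = 8, 0.05, 50
series = np.sin(np.arange(2000) * 0.05)                      # toy price-like series
samples = [(series[i:i + window], series[i + window])         # (inputs, next value)
           for i in range(len(series) - window)]
rdd = sc.parallelize(samples, numSlices=4).cache()

w = np.zeros(window)                                          # weights of a linear neuron

def partition_gradient(part, w):
    g, n = np.zeros_like(w), 0
    for x, y in part:
        err = x @ w - y                                       # forward pass + error
        g += err * x                                          # gradient of squared error
        n += 1
    yield (g, n)

for _ in range(steps):
    w_b = sc.broadcast(w)
    g, n = rdd.mapPartitions(lambda p: partition_gradient(p, w_b.value)) \
              .reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]))
    w -= lr * g / n                                           # averaged gradient step

print(w[:4])
sc.stop()
```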
607

Comparing generalized additive neural networks with multilayer perceptrons / Johannes Christiaan Goosen

Goosen, Johannes Christiaan January 2011 (has links)
In this dissertation, generalized additive neural networks (GANNs) and multilayer perceptrons (MLPs) are studied and compared as prediction techniques. MLPs are the most widely used type of artificial neural network (ANN), but are considered black boxes with regard to interpretability. There is currently no simple a priori method to determine the number of hidden neurons in each of the hidden layers of ANNs. Guidelines exist that are either heuristic or based on simulations that are derived from limited experiments. A modified version of the neural network construction with cross-validation samples (N2C2S) algorithm is therefore implemented and utilized to construct good MLP models. This algorithm enables the comparison with GANN models. GANNs are a relatively new type of ANN, based on the generalized additive model. The architecture of a GANN is less complex compared to MLPs and results can be interpreted with a graphical method, called the partial residual plot. A GANN consists of an input layer where each of the input nodes has its own MLP with one hidden layer. Originally, GANNs were constructed by interpreting partial residual plots. This method is time-consuming and subjective, which may lead to the creation of suboptimal models. Consequently, an automated construction algorithm for GANNs was created and implemented in the SAS® statistical language. This system was called AutoGANN and is used to create good GANN models. A number of experiments are conducted on five publicly available data sets to gain insight into the similarities and differences between GANN and MLP models. The data sets include regression and classification tasks. In-sample model selection with the SBC model selection criterion and out-of-sample model selection with the average validation error as model selection criterion are performed. The models created are compared in terms of predictive accuracy, model complexity, comprehensibility, ease of construction and utility. The results show that the choice of model is highly dependent on the problem, as no single model always outperforms the other in terms of predictive accuracy. GANNs may be suggested for problems where interpretability of the results is important. The time taken to construct good MLP models by the modified N2C2S algorithm may be shorter than the time to build good GANN models by the automated construction algorithm. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
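
The GANN architecture described here, in which each input node feeds its own one-hidden-layer network and the univariate outputs are summed additively, can be sketched as follows. The layer sizes and the identity link are illustrative assumptions, no training is shown, and this is not the AutoGANN implementation.

```python
# Rough sketch of the GANN forward pass: each input variable is passed through
# its own small one-hidden-layer network, and the univariate outputs are summed
# (an additive model), here with an identity link for regression.
import numpy as np

rng = np.random.default_rng(3)

class UnivariateNet:
    """One-hidden-layer network applied to a single input variable."""
    def __init__(self, hidden=3):
        self.w1 = rng.normal(size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(size=(hidden, 1))

    def __call__(self, x):                      # x: (n_samples,)
        h = np.tanh(x[:, None] @ self.w1 + self.b1)
        return (h @ self.w2).ravel()            # partial (additive) contribution

class GANN:
    def __init__(self, n_inputs, hidden=3):
        self.nets = [UnivariateNet(hidden) for _ in range(n_inputs)]
        self.bias = 0.0

    def predict(self, X):                       # X: (n_samples, n_inputs)
        return self.bias + sum(net(X[:, j]) for j, net in enumerate(self.nets))

X = rng.normal(size=(100, 4))
print(GANN(n_inputs=4).predict(X)[:5])          # untrained additive outputs, for shape only
```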
609

CPFR銷售預測模式之探討 / A Study of CPFR Sales Forecasting Models

曾永勝 Unknown Date (has links)
Collaborative Planning, Forecasting and Replenishment (CPFR) is currently an important topic in supply chain management. Competition has intensified for Taiwanese firms in recent years owing to WTO entry and the offshoring of manufacturing, and demand for global logistics has grown, making cooperation between firms ever closer; at the same time, the gradual maturing of enterprise information environments and infrastructure supports the development of collaborative commerce. In the CPFR process and a collaborative supply chain environment, a well-performing sales forecast produced jointly by supplier and buyer is of key importance, serving as a major basis for management decisions and collaboration. Most enterprises, however, lack a structured and systematic forecasting process and method for producing multi-point forecasts with different techniques; such sales forecasts have less stable quality and are harder to justify to managers with reasonable figures. The CPFR process emphasizes that, through the exchange of complete and timely information, buyer and seller produce a short-term, single shared sales forecast as the basis for subsequent order forecasting and replenishment decisions. This research uses algorithms (artificial neural networks and evolution strategies) to identify explanatory variables better suited to a hybrid forecasting framework, applies an evolution strategy, which is well suited to real-valued solutions, to modify the three-stage forecasting model of Huang (2004), and finally verifies the model's performance experimentally.
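
A minimal sketch of the evolution-strategy idea, assuming a toy linear forecaster over a few candidate explanatory variables and a simple (μ + λ) selection scheme; it is not Huang's three-stage model or the hybrid architecture used in the thesis.

```python
# Minimal (mu + lambda) evolution-strategy sketch for real-valued weighting of
# candidate explanatory variables in a toy sales-forecasting setting.
import numpy as np

rng = np.random.default_rng(4)

# Synthetic weekly history: 3 candidate drivers (e.g. promotion, price index, season).
X = rng.normal(size=(104, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.3, size=104)

def forecast_error(w):
    return np.mean((X[80:] @ w - y[80:]) ** 2)    # error on the held-out weeks

mu, lam, sigma = 5, 20, 0.5
parents = [rng.normal(size=3) for _ in range(mu)]
for gen in range(60):
    picks = [rng.choice(len(parents)) for _ in range(lam)]
    offspring = [parents[i] + sigma * rng.normal(size=3) for i in picks]
    pool = parents + offspring
    pool.sort(key=forecast_error)                 # keep the mu best solutions
    parents = pool[:mu]
    sigma *= 0.97                                 # simple step-size decay

print(parents[0])                                 # best real-valued weight vector
```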
610

EEG Data acquisition and automatic seizure detection using wavelet transforms in the newborn EEG.

Zarjam, Pega January 2003 (has links)
This thesis deals with the problem of newborn seizure detection from electroencephalogram (EEG) signals. The ultimate goal is to design an automated seizure detection system to assist medical personnel in timely seizure detection. Seizure detection is vital, as neurological diseases or dysfunctions in newborn infants are often first manifested by seizure, and prolonged seizures can result in impaired neuro-development or even fatality. The EEG has proved superior to clinical examination of newborns in early detection and prognostication of brain dysfunctions. However, long-term newborn EEG acquisition is considerably more difficult than that of adults and children. This is because the number of electrodes attached to the skin is limited by the size of the head, the newborn's EEG varies from day to day, and newborns tolerate the recording situation poorly. Also, movement of the newborn can create artifacts in the recording and as a result strongly affect electrical seizure recognition. Most of the existing methods for neonates are either time- or frequency-based and, therefore, do not consider the non-stationary nature of the EEG signal. Thus, notwithstanding the plethora of existing methods, this thesis applies the discrete wavelet transform (DWT) to account for the non-stationarity of the EEG signals. First, two methods for seizure detection in neonates are proposed. The detection schemes are based on observing the changing behaviour of a number of statistical quantities of the wavelet coefficients (WC) of the EEG signal at different scales. In the first method, the variance and mean of the WC are considered as a feature set to classify the EEG data into seizure and non-seizure. The test results give an average seizure detection rate (SDR) of 97.4%. In the second method, the number of zero-crossings and the average distance between adjacent extrema of the WC at certain scales are extracted to form a feature set. The test obtains an average SDR of 95.2%. The proposed feature sets are both simple to implement and have a high detection rate and low false alarm rate. Then, in order to reduce the complexity of the proposed schemes, two optimising methods are used to reduce the number of selected features. First, the mutual information feature selection (MIFS) algorithm is applied to select the optimum feature subset. The results show that an optimal subset of 9 features provides an SDR of 94%. Compared to the full feature set, it is clear that the optimal feature set can significantly reduce the system complexity. The drawback of the MIFS algorithm is that it ignores the interaction between features. To overcome this drawback, an alternative algorithm, the mutual information evaluation function (MIEF), is then used. The MIEF evaluates a set of candidate features extracted from the WC to select an informative feature subset. This function is based on the measurement of information gain and takes into consideration the interaction between features. The performance of the proposed features is evaluated and compared to that of the features obtained using the MIFS algorithm. The MIEF algorithm selected the optimal 10 features, resulting in an average SDR of 96.3%. It is also shown that an average SDR of 93.5% can be obtained with only 4 features when the MIEF algorithm is used.
In comparison with the results of the first two methods, it is shown that the optimal feature subsets improve the system performance and significantly reduce the system complexity for implementation purposes.
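
A small sketch of the wavelet feature-extraction step described in this abstract: decompose a segment with a discrete wavelet transform and compute, per scale, the kinds of statistics used as classifier inputs (mean, variance, zero-crossings). The wavelet family, decomposition level and signal are placeholder assumptions, not the thesis's settings.

```python
# Illustrative sketch of per-scale wavelet-coefficient statistics as features
# for seizure / non-seizure classification. Placeholders throughout.
import numpy as np
import pywt

def wavelet_features(signal, wavelet="db4", level=5):
    feats = []
    for coeffs in pywt.wavedec(signal, wavelet, level=level):   # one array per scale
        zero_crossings = np.count_nonzero(np.diff(np.sign(coeffs)))
        feats.extend([coeffs.mean(), coeffs.var(), zero_crossings])
    return np.array(feats)

# Toy one-second "EEG" segment at 256 Hz.
rng = np.random.default_rng(5)
segment = rng.normal(size=256) + 0.5 * np.sin(2 * np.pi * 3 * np.arange(256) / 256)
print(wavelet_features(segment))   # feature vector used to classify the segment
```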
