  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
991

A new approach to Decimation in High Order Boltzmann Machines

Farguell Matesanz, Enric 20 January 2011 (has links)
The Boltzmann Machine (BM) is a stochastic neural network with the ability to both learn and extrapolate probability distributions. However, it has never been as widely used as other neural networks such as the perceptron, due to the complexity of both the learning and recalling algorithms and to the high computational cost of the learning process: the quantities needed at the learning stage are usually estimated by Monte Carlo (MC) methods through the Simulated Annealing (SA) algorithm. This has led to a situation where the BM is rather considered as an evolution of the Hopfield Neural Network or as a parallel implementation of the Simulated Annealing algorithm. Despite this relative lack of success, the neural network community has continued to progress in the analysis of the dynamics of the model. One remarkable extension is the High Order Boltzmann Machine (HOBM), where weights can connect more than two neurons at a time. Although the learning capabilities of this model have already been discussed by other authors, a formal equivalence between the weights in a standard BM and the high order weights in a HOBM has not yet been established. We analyze this equivalence between a second order BM and a HOBM by proposing an extension of the method known as decimation. Decimation is a common tool in statistical physics that can be applied to certain kinds of BMs to obtain analytical expressions for the n-unit correlations required in the learning process. In this way, decimation avoids using the time-consuming Simulated Annealing algorithm. However, as it was first conceived, it could only deal with sparsely connected neural networks. The extension defined in this thesis allows computing the same quantities irrespective of the topology of the network. This method is based on adding enough high order weights to a standard BM to guarantee that the system can be solved. Next, we establish a direct equivalence between the weights of a HOBM, the probability distribution to be learnt, and Hadamard matrices. The properties of these matrices can be used to easily calculate the values of the weights of the system. Finally, we define a standard BM with a very specific topology that helps us better understand the exact equivalence between hidden units in a BM and high order weights in a HOBM.
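The decimation equations derived in the thesis are not reproduced here, but the quantities they target are the standard model correlations of BM learning. As a point of reference only, the sketch below computes those correlations exactly by brute-force enumeration for a toy second-order machine with ±1 units; the energy convention and the random toy weights are assumptions for the example, and enumeration is precisely what decimation or Simulated Annealing is meant to avoid at realistic sizes.

```python
import itertools
import numpy as np

def bm_correlations(w, b):
    """Exact <s_i s_j> correlations of a small second-order Boltzmann Machine.

    w : symmetric (n, n) weight matrix with zero diagonal
    b : (n,) bias vector
    Units take values in {-1, +1}; E(s) = -0.5 s^T w s - b^T s (assumed convention).
    Brute force over all 2^n states -- only feasible for toy sizes, which is
    exactly why decimation or Simulated Annealing is needed for larger machines.
    """
    n = len(b)
    states = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)
    energies = -0.5 * np.einsum('ki,ij,kj->k', states, w, states) - states @ b
    p = np.exp(-energies)
    p /= p.sum()                       # Boltzmann distribution at T = 1
    corr = np.einsum('k,ki,kj->ij', p, states, states)
    return corr

# toy example: 3 fully connected units with random weights and biases
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3)); w = (w + w.T) / 2; np.fill_diagonal(w, 0.0)
b = rng.normal(size=3)
print(bm_correlations(w, b))
```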
992

A Real-Time Classification approach of a Human Brain-Computer Interface based on Movement Related Electroencephalogram

Mileros, Martin D. January 2004 (has links)
A Real-Time Brain-Computer Interface is a technical system that classifies, in real time, increased or decreased brain activity associated with different body movements or actions performed by a person. The focus of this thesis is on testing algorithms and settings, finding the initial time interval, and determining how increased activity in the brain can be distinguished and satisfactorily classified. The objective is to let the system give an output within 250 ms of the thought of an action, which is faster than a person's reaction time. The preprocessing algorithms were Blind Signal Separation and the Fast Fourier Transform. With different frequency and time-interval settings, the algorithms were tested on an offline electroencephalographic data file based on the "Ten Twenty" Electrode Application System and classified using an Artificial Neural Network. A satisfactory time interval was found between 125 and 250 ms, but more research is needed to investigate that specific interval. A reduction in frequency resulted in a lack of samples in the sample window, preventing the algorithms from working properly. A high frequency is therefore proposed to help keep the sample window small in the time domain. Blind Signal Separation together with the Fast Fourier Transform had problems finding appropriate correlations when using the Ten-Twenty Electrode Application System. Electrodes should be placed more selectively at the parietal lobe when motor responses are required.
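As an illustration of the window-size trade-off discussed above, the following sketch extracts FFT magnitude features from one 250 ms window of multichannel EEG. The 256 Hz sampling rate, the 19-channel layout and the random stand-in data are assumptions for the example, not settings taken from the thesis; the Blind Signal Separation step is omitted.

```python
import numpy as np

FS = 256          # assumed sampling rate in Hz (not from the thesis)
WINDOW_MS = 250   # classification deadline discussed in the abstract
N_CHANNELS = 19   # assumed 10-20 system electrode count

def fft_features(window):
    """Magnitude spectrum per channel for one analysis window.

    window : (n_channels, n_samples) EEG segment
    Returns a flat feature vector suitable for a neural-network classifier.
    A short window means few samples, hence coarse frequency resolution --
    the trade-off the thesis points out when the frequency is lowered.
    """
    spectrum = np.abs(np.fft.rfft(window, axis=1))
    return spectrum.ravel()

n_samples = int(FS * WINDOW_MS / 1000)                # 64 samples at 256 Hz
eeg_window = np.random.randn(N_CHANNELS, n_samples)   # stand-in for real EEG
features = fft_features(eeg_window)
print(features.shape, "frequency resolution:", FS / n_samples, "Hz")
```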
993

Hydrodynamic characteristics of gas/liquid/fiber three-phase flows based on objective and minimally-intrusive pressure fluctuation measurements

Xie, Tao 27 September 2004 (has links)
Flow regime identification in industrial systems that rely on complex multi-phase flows is crucial for their safety, control, diagnostics, and operation. The objective of this investigation was to develop and demonstrate objective and minimally-intrusive flow regime classification methods for gas/water/paper pulp three-phase slurries, based on artificial neural network-assisted recognition of patterns in the statistical characteristics of pressure fluctuations. Experiments were performed in an instrumented three-phase bubble column featuring vertical, upward flow. The hydrodynamics of low consistency (LC) gas-liquid-fiber mixtures, over a wide range of superficial phase velocities, were investigated. Flow regimes were identified, gas holdup (void fraction) was measured, and near-wall pressure fluctuations were recorded using high-sensitivity pressure sensors. Artificial neural networks of various configurations were designed, trained and tested for the classification of flow regimes based on the recorded pressure fluctuation statistics. The feasibility of flow regime identification based on statistical properties of signals recorded by a single sensor was thereby demonstrated. The transportability of the developed method, whereby an artificial neural network trained and tested with one data set is manipulated and used for the characterization of an unseen and different but plausibly similar data set, was also examined. An artificial neural network-based method was developed that used the power spectral characteristics of the normal pressure fluctuations as input, and its transportability between separate but in principle similar sensors was successfully demonstrated. An artificial neural network-based method was furthermore developed that enhances the transportability of the aforementioned artificial neural networks trained for flow pattern recognition. While a redundant system with multiple sensors is an obvious target application, such robustness of the algorithms, which provides transportability, will also contribute to performance with a single sensor by shielding it from the effects of calibration changes or sensor replacements.
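The thesis's actual feature set and network configurations are not reproduced here; the sketch below only illustrates the general scheme described above: reduce a pressure-fluctuation record to a few statistical and spectral descriptors and feed them to a small neural-network classifier. The specific features, the three synthetic regime classes and the use of scikit-learn are assumptions for the example.

```python
import numpy as np
from scipy import stats, signal
from sklearn.neural_network import MLPClassifier

def pressure_features(p, fs=1000.0):
    """Statistical descriptors of one pressure-fluctuation record (assumed set)."""
    f, psd = signal.welch(p - p.mean(), fs=fs, nperseg=256)
    return np.array([
        p.std(),                      # fluctuation intensity
        stats.skew(p),                # asymmetry of the fluctuations
        stats.kurtosis(p),            # intermittency / peakedness
        f[np.argmax(psd)],            # dominant frequency from the PSD
    ])

# synthetic stand-in data: three "flow regimes" with different noise character
rng = np.random.default_rng(1)
X, y = [], []
for label, scale in enumerate([0.5, 1.0, 2.0]):
    for _ in range(50):
        t = np.arange(4096)
        p = scale * rng.normal(size=4096) + np.sin(2 * np.pi * (label + 1) * t / 1000)
        X.append(pressure_features(p))
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```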
994

Application Of ANN Techniques For Identification Of Fault Location In Distribution Networks

Ashageetha, H 10 1900 (has links)
Electric power distribution networks are an important part of electrical power systems, delivering electricity to consumers. Electric power utilities worldwide are increasingly adopting computer-aided monitoring, control and management of electric power distribution systems to provide better services to electrical consumers, and research and development activities worldwide are being carried out to automate the electric power distribution system. The power distribution system consists of a three-phase source supplying power through single-, two-, or three-phase distribution lines, switches, and transformers to a set of buses with a given load demand. In addition, unlike transmission systems, single-, two-, and three-phase sections exist in the network, as do single-, two-, and three-phase loads. Further, most distribution systems are overhead systems, which are susceptible to faults caused by a variety of situations such as adverse weather conditions, equipment failure and traffic accidents. When a fault occurs on a distribution line, it is very important for the utility to identify the fault location as quickly as possible to improve service reliability. Hence, one of the crucial blocks in the operation of a distribution system is fault detection and location, and its achievement depends on the success of the distribution automation system. The distribution automation system should act quickly and accurately in order to isolate the affected branches from the healthy parts and to take alternative measures to restore normal power supply. Fault location in the distribution system is a difficult task due to its high complexity and the unique characteristics of the distribution system; these characteristics are discussed in the present work. In recent years, some techniques have been proposed for the location of faults, particularly in radial distribution systems. These methods use various algorithmic approaches in which the fault location is iteratively calculated by updating the fault current. Heuristic and expert-system approaches for locating faults in distribution systems have also been proposed, but they require more measurements: measurements are assumed to be available at the sending end of the faulty line segment, which is not true in reality, as measurements are only available at the substation and at a limited number of nodes of the distribution network through remote terminal units. The emerging techniques of Artificial Intelligence (AI) can be a solution to this problem. Among the various AI-based techniques, such as expert systems, fuzzy sets and ANN systems, the ANN approach to fault location is found to be encouraging. In this thesis, ANN approaches with limited measurements are used to locate faults in long distribution networks with laterals. Initially, the distribution system modeling (using the actual a-b-c phase representation) for three-, two-, and single-phase laterals and three-, two-, and single-phase loads is described, along with an efficient three-phase load flow and short-circuit analysis with loads, which is used to simulate all types of fault conditions on distribution systems. In this work, function approximation (FA) is the main technique used, and classification techniques take a major supportive role in the FA problem.
Fault location in distribution systems is formulated as a FA problem, which is difficult to solve due to the various practical constraints particular to distribution systems; incorporating classification techniques reduces this FA problem to simpler ones. The function that is approximated is the relation between the three-phase voltage and current measurements at the substation and at a selected number of buses (inputs), and the line impedance of the fault points from the substation (outputs). This function is approximated by a feed-forward neural network (FFNN). For the classification problems, such as fault type classification and source short-circuit level classification, a Radial Basis Probabilistic Neural Network (RBPNN) has been employed. The work presented in this thesis is the combined use of FFNN and RBPNN for estimating the fault location. The Levenberg-Marquardt learning method, which is robust and fast, is used for training the FFNN. A typical unbalanced 11-node test system, the IEEE 34-node test system and a practical 69-bus long distribution system with different configurations are considered for the study. The results show that the proposed fault location approaches give accurate results in terms of the estimated fault location. Practical situations in distribution systems, such as unbalanced loading; three-, two-, and single-phase laterals; limited available measurements; all types of faults; a wide range of source short-circuit levels; varying loading conditions; long feeders with multiple laterals; and different network configurations are considered in the study. The results show the feasibility of applying the proposed method to practical distribution system fault diagnosis.
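A minimal sketch of the function-approximation half of this scheme is given below: a feed-forward network mapping substation and bus measurements to fault distance. The synthetic data, the 12-input layout and the scikit-learn regressor are assumptions for the example; in particular, scikit-learn does not offer the Levenberg-Marquardt training used in the thesis, so L-BFGS stands in for it, and the RBPNN classification stage is omitted.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Inputs: three-phase voltage and current magnitudes measured at the substation
# and at a few selected buses; output: line impedance (distance) to the fault.
# Synthetic placeholder data -- a real study would generate these with
# three-phase load-flow and short-circuit simulations, as the thesis does.
rng = np.random.default_rng(42)
n_samples, n_meas = 500, 12          # 12 measured quantities (assumed)
X = rng.uniform(0.0, 1.0, size=(n_samples, n_meas))
fault_distance = X @ rng.uniform(0.5, 2.0, size=n_meas) + 0.05 * rng.normal(size=n_samples)

model = make_pipeline(
    StandardScaler(),
    # scikit-learn trains with L-BFGS/Adam rather than Levenberg-Marquardt,
    # which is what the thesis actually uses -- this is only a stand-in.
    MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=5000, random_state=0),
)
model.fit(X, fault_distance)
print("R^2 on training data:", model.score(X, fault_distance))
```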
995

Factors Affecting The Static And Dynamic Response Of Jointed Rock Masses

Garaga, Arunakumari 01 September 2008 (has links)
Infrastructure is developing at an extremely fast pace, including the construction of metros, underground storage facilities, railway bridges, caverns and tunnels. Very often these structures are founded in or on rock masses. Rock masses are seldom found in nature without joints or discontinuities. Jointed rocks are characterized by the presence of inherent discontinuities of varied sizes, orientations and intensities, which can have a significant effect on their mechanical response. Constructions involving jointed rocks often become challenging jobs for civil engineers, as the instability of slopes or excavations in these jointed rocks poses serious concerns, sometimes leading to the failure of structures built on them. Experimental investigations on jointed rock masses are not always feasible and pose formidable problems to engineers: apart from the technical difficulties of extracting undisturbed rock samples, it is very expensive and time consuming to conduct experiments on jointed rock masses of large dimensions. The most popular methods of evaluating rock mass behaviour are numerical methods. In this thesis, numerical modelling of jointed rock masses is carried out using the computer program FLAC (Fast Lagrangian Analysis of Continua). The objective of the present study is to examine the effect of various joint parameters on the response of jointed rock masses under static as well as seismic shaking conditions. This is achieved through a systematic series of numerical simulations of jointed rocks in triaxial compression, in underground openings and in large rock slopes. This thesis is an attempt to study the individual effect of different joint parameters on rock mass behaviour and to integrate the results to provide useful insight into the behaviour of jointed rock masses under various joint conditions. In practice, it is almost impossible to explore all of the joint systems or to investigate all their mechanical characteristics and implement them explicitly in a model. In such cases, the use of an equivalent continuum model to simulate the behaviour of jointed rock masses can be valuable; hence this approach is mainly used in this thesis. Some numerical simulations with explicitly modelled joints are also presented for comparison with the continuum modelling. The applicability of Artificial Neural Networks for the prediction of the stress-strain response of jointed rocks is also explored. Static, pseudo-static and dynamic analyses of a large rock slope in the Himalayas are carried out, and a parametric seismic analysis of the rock slope is performed with varying input shaking, material damping and shear strength parameters. Results from the numerical studies showed that joint inclination is the most influential parameter for jointed rock mass behaviour. Rock masses exhibit their lowest strength at a critical angle of joint inclination, and the deformations around excavations are highest when the joints are inclined at an angle close to this critical angle. However, at very high confining pressures, the influence of joint inclination is subdued. Under seismic base shaking, the deformations of rock masses depend largely on the acceleration time history, frequency content and duration rather than on the peak amplitude or magnitude of the earthquake. All these aspects are discussed in the light of the results from the numerical studies presented in this thesis.
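Where the thesis explores Artificial Neural Networks for predicting the stress-strain response of jointed rocks, the sketch below shows only that framing: joint parameters and strain in, deviatoric stress out. The input ranges, the placeholder response function and the network size are assumptions for illustration; the thesis derives its training data from FLAC simulations and experimental results, not from a formula like this.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Illustrative framing only: map (joint inclination, joint frequency,
# confining pressure, axial strain) to deviatoric stress.
rng = np.random.default_rng(7)
n = 2000
inclination = rng.uniform(0, 90, n)        # degrees
frequency = rng.uniform(1, 10, n)          # joints per metre (assumed range)
confinement = rng.uniform(0.1, 5.0, n)     # MPa (assumed range)
strain = rng.uniform(0.0, 0.02, n)

# crude placeholder response with a strength minimum near a "critical" angle
stress = (confinement + 1.0) * strain * 1e3 * (
    1.0 - 0.5 * np.exp(-((inclination - 40) / 15) ** 2)) / frequency

X = np.column_stack([inclination, frequency, confinement, strain])
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs", max_iter=3000, random_state=0),
)
net.fit(X, stress)
print("fit quality (R^2):", net.score(X, stress))
```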
996

Computational intelligence in economics and game theory (Υπολογιστική νοημοσύνη στην οικονομία και τη θεωρία παιγνίων)

Παυλίδης, Νίκος 09 October 2008 (has links)
The thesis investigates Computational Intelligence methods in Economics and Finance. The first part of the thesis is devoted to computational intelligence and unsupervised clustering methods for modeling and forecasting daily exchange rate time series. A methodology is proposed that relies on local approximation, using artificial neural networks, for subregions of the input space that are identified through unsupervised clustering algorithms. Furthermore, we employ genetic programming to construct novel trading rules directly from the data. The performance of the novel rules is compared to that of generalised moving average rules. In the second part of the thesis we employ evolutionary algorithms to compute and to estimate the number of equilibria in finite strategic games and new economic geography models. In particular, we investigate the capability of evolutionary and swarm intelligence algorithms to compute Nash equilibria and propose an approach for computing more than one equilibrium. Finally, we employ criteria from the theory of fixed-point computation and from topological degree theory to investigate the existence and the computational complexity of computing short-run equilibria in new economic geography models.
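The genetic-programming rules evolved in the thesis are benchmarked against generalised moving-average rules; the sketch below implements only that simple baseline on a synthetic daily exchange-rate series. The window lengths, the signal convention and the return bookkeeping are assumptions for the example.

```python
import numpy as np

def moving_average(x, n):
    """Simple trailing moving average of length n."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

def ma_crossover_signal(prices, short=5, long=20):
    """+1 (long position) when the short MA is above the long MA, -1 otherwise.

    Window lengths are arbitrary here; the thesis compares genetic-programming
    rules against a generalised family of such moving-average rules.
    """
    ma_s = moving_average(prices, short)
    ma_l = moving_average(prices, long)
    ma_s = ma_s[-len(ma_l):]           # align the two series on their last points
    return np.where(ma_s > ma_l, 1, -1)

# synthetic daily exchange-rate series as a stand-in for real data
rng = np.random.default_rng(3)
prices = 1.0 + np.cumsum(0.001 * rng.normal(size=500))
positions = ma_crossover_signal(prices)
# position held on day t earns the log-return from day t to day t+1
daily_returns = np.diff(np.log(prices))[-len(positions) + 1:] * positions[:-1]
print("cumulative log-return of the rule:", daily_returns.sum())
```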
997

Understanding deep architectures and the effect of unsupervised pre-training

Erhan, Dumitru 10 1900 (has links)
This thesis studies a class of algorithms called deep architectures. We argue that models based on a shallow composition of local features are not appropriate for the set of real-world functions and datasets that are of interest to us, namely data with many factors of variation. Modelling such functions and datasets is important if we are hoping to create an intelligent agent that can learn from complicated data. Deep architectures are hypothesized to be a step in the right direction, as they are compositions of nonlinearities and can learn compact distributed representations of data with many factors of variation. Training fully-connected artificial neural networks---the most common form of a deep architecture---was not possible before Hinton (2006) showed that one can use stacks of unsupervised Restricted Boltzmann Machines to initialize or pre-train a supervised multi-layer network. This breakthrough has been influential, as the basic idea of using unsupervised learning to improve generalization in deep networks has been reproduced in a multitude of other settings and models.
In this thesis, we cast the deep learning ideas and techniques as defining a special kind of inductive bias. This bias is defined not only by the kind of functions that are eventually represented by such deep models, but also by the learning process that is commonly used for them. This work is a study of the reasons why this class of functions generalizes well, the situations where they should work well, and the qualitative statements that one could make about such functions. This thesis is thus an attempt to understand why deep architectures work. In the first of the articles presented we study how well our intuitions about the need for deep models correspond to functions that they can actually model well. In the second article we perform an in-depth study of why unsupervised pre-training helps deep learning and explore a variety of hypotheses that give us an intuition for the dynamics of learning in such architectures. Finally, in the third article, we want to better understand what a deep architecture models, qualitatively speaking. Our visualization approach enables us to understand the representations and invariances modelled and learned by deeper layers.
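A compact sketch of the basic recipe the thesis analyses (Hinton, 2006) is given below: train a Restricted Boltzmann Machine with one step of contrastive divergence (CD-1), then reuse its weights to initialise the first layer of a supervised network. The binary toy data, layer sizes and learning rate are placeholders; a real experiment stacks several RBMs and fine-tunes the whole network with backpropagation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=20, lr=0.05, rng=None):
    """One-layer RBM trained with CD-1 on binary data of shape (n_samples, n_visible)."""
    rng = rng or np.random.default_rng(0)
    n_visible = data.shape[1]
    W = 0.01 * rng.normal(size=(n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        # positive phase: hidden activations driven by the data
        h_prob = sigmoid(data @ W + b_h)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # negative phase: one step of Gibbs sampling (CD-1 reconstruction)
        v_recon = sigmoid(h_sample @ W.T + b_v)
        h_recon = sigmoid(v_recon @ W + b_h)
        # approximate gradient of the log-likelihood
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
        b_v += lr * (data - v_recon).mean(axis=0)
        b_h += lr * (h_prob - h_recon).mean(axis=0)
    return W, b_h

# toy binary dataset standing in for real inputs (e.g. binarised images)
rng = np.random.default_rng(1)
X = (rng.random((500, 64)) < 0.3).astype(float)

W, b_h = train_rbm(X, n_hidden=32, rng=rng)
# The pre-trained weights initialise the first layer of a supervised network;
# further layers would be pre-trained the same way on these hidden activations,
# then the whole network fine-tuned with backpropagation on labelled data.
first_layer_activations = sigmoid(X @ W + b_h)
print(first_layer_activations.shape)
```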
998

Species Distribution Modeling: Implications of Modeling Approaches, Biotic Effects, Sample Size, and Detection Limit

Wang, Lifei 14 January 2014 (has links)
When we develop and use species distribution models to predict species' current or potential distributions, we are faced with the trade-offs between model generality, precision, and realism. It is important to know how to improve and validate model generality while maintaining good model precision and realism. However, it is difficult for ecologists to evaluate species distribution models using field-sampled data alone because the true species response function to environmental or ecological factors is unknown. Species distribution models should be able to approximate the true characteristics and distributions of species if ecologists want to use them as reliable tools. Simulated data provide the advantage of being able to know the true species-environment relationships and control the causal factors of interest to obtain insights into the effects of these factors on model performance. I used a case study on Bythotrephes longimanus distributions from several hundred Ontario lakes and a simulation study to explore the effects on model performance caused by several factors: the choice of predictor variables, the model evaluation methods, the quantity and quality of the data used for developing models, and the strengths and weaknesses of different species distribution models. Linear discriminant analysis, multiple logistic regression, random forests, and artificial neural networks were compared in both studies. Results based on field data sampled from lakes indicated that the predictive performance of the four models was more variable when developed on abiotic (physical and chemical) conditions alone, whereas the generality of these models improved when including biotic (relevant species) information. When using simulated data, although the overall performance of random forests and artificial neural networks was better than linear discriminant analysis and multiple logistic regression, linear discriminant analysis and multiple logistic regression had relatively good and stable model sensitivity at different sample size and detection limit levels, which may be useful for predicting species presences when data are limited. Random forests performed consistently well at different sample size levels, but was more sensitive to high detection limit. The performance of artificial neural networks was affected by both sample size and detection limit, and it was more sensitive to small sample size.
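A sketch of the comparison scaffold described above is given below, using scikit-learn implementations of the four model families on simulated presence/absence data. The simulated response function, the sample size and the AUC-based cross-validation are assumptions for the example and do not reproduce the thesis's simulation design.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Simulated species-environment relationship (illustrative only): presence
# probability depends on two "environmental" gradients via a logistic response.
rng = np.random.default_rng(0)
n = 300                                   # sample size -- one of the factors the thesis varies
env = rng.normal(size=(n, 2))
logit = 1.5 * env[:, 0] - 2.0 * env[:, 1] ** 2 + 0.5
presence = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, env, presence, cv=5, scoring="roc_auc")
    print(f"{name:20s} AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```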
1000

Laser-induced plasma on polymeric materials and applications for the discrimination and identification of plastics

Boueri, Myriam 18 October 2010 (has links) (PDF)
Laser-Induced Breakdown Spectroscopy (LIBS) is an analytical technique with the potential to detect all the elements of the periodic table. Its limit of detection can reach below a few ppm, regardless of the physical phase of the analyzed sample (solid, liquid or gas). Its simplicity of use, speed in producing results and versatility make it an attractive technique, and it is currently being developed for applications in a large number of domains such as online control, space exploration and the environment. However, the weakness of LIBS compared with more conventional techniques is still its difficulty in providing reliable quantitative results, especially for inhomogeneous and complex matrices such as organic or biological materials. The work presented in this thesis includes a study of the properties of plasmas induced from different organic materials. First, the plasma induced on the surface of a Nylon sample at short time delays (~ns) was studied using the time-resolved shadowgraph technique for different experimental parameters (laser energy, pulse duration, wavelength). Then, a complete diagnostics of the plasma was performed using plasma emission spectroscopy. A detailed analysis of the emission spectra at different detection delays allowed us to determine the evolution of the temperatures of the different species in the plasma (atoms, ions and molecules). The homogeneity and local thermodynamic equilibrium of the plasma were then experimentally checked and validated. We demonstrated that optimisation of the signal-to-noise ratio and a quantitative procedure such as calibration-free LIBS can be put in place within a properly chosen detection window. In our experiments, this optimised detection configuration was further employed to record LIBS spectra from different families of polymers in order to identify and classify them. For this purpose, the chemometric procedure of artificial neural networks (ANN) was used to process the recorded LIBS spectroscopic data. The promising results obtained in this thesis make LIBS stand out as a potentially useful tool for real-time identification of plastic materials. Finally, this work can also be considered as a basis for further LIBS studies of more complex materials such as biological tissues.
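As an illustration of the classification step described above, the sketch below trains a small neural network to assign synthetic LIBS-like spectra to polymer families. The emission-line positions, the three fictitious families and the Gaussian line shapes are assumptions for the example; real spectra, detection windows and class definitions come from the experiments in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

wavelengths = np.linspace(200, 900, 700)   # nm, arbitrary spectral window

def synthetic_spectrum(line_positions, rng):
    """Toy LIBS spectrum: Gaussian emission lines plus noise.

    Real spectra would come from the spectrometer; the line positions below
    are placeholders, not the atomic or molecular lines used in the thesis.
    """
    s = np.zeros_like(wavelengths)
    for pos in line_positions:
        s += rng.uniform(0.5, 1.0) * np.exp(-((wavelengths - pos) / 1.5) ** 2)
    return s + 0.02 * rng.random(len(wavelengths))

# three fictitious "polymer families", each with its own set of emission lines
families = {0: [247, 388, 656], 1: [247, 500, 656, 777], 2: [388, 589, 777]}
rng = np.random.default_rng(2)
X = np.array([synthetic_spectrum(lines, rng)
              for label, lines in families.items() for _ in range(60)])
y = np.repeat(list(families.keys()), 60)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```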
