
The automatic recognition of the calcareous nannofossils of the Cenozoic

Barbarin, Nicolas 14 March 2014 (has links)
SYRACO (SYstème de Reconnaissance Automatisée des COccolithes) is an automated coccolith recognition system, originally developed from 1995 onward by Luc Beaufort and Denis Dollfus, and more recently with Yves Gally. Its main purpose is to save specialists considerable time in data acquisition and processing. In this work, the system was technically improved and its recognition extended to the calcareous nannofossils of the Cenozoic. It separates nannofossils from non-nannofossils with estimated reliabilities of 75% and 90%, respectively. It relies on a new reference image database of species ranging from the Upper Eocene to living species, covering hundreds of species with high morphological variability. By combining artificial neural networks with statistical models, it classifies specimens into 39 morphogroups. Results are produced as automated counts, morphometric data (size, mass, ...), and image mosaics, and can be used in biostratigraphic and paleoceanographic analyses.
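The classification stage described above, artificial neural networks combined with statistical models sorting specimens into 39 morphogroups, can be sketched with a small multilayer perceptron. The synthetic feature vectors, three stand-in classes, and layer size below are illustrative assumptions, not SYRACO's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-in for morphometric feature vectors (size, mass, shape descriptors):
# three well-separated synthetic "morphogroups" instead of the real 39.
n_per_class, n_features = 50, 4
centers = np.array([[0, 0, 0, 0], [5, 5, 5, 5], [-5, 5, -5, 5]], dtype=float)
X = np.vstack([c + rng.normal(0, 0.5, (n_per_class, n_features)) for c in centers])
y = np.repeat([0, 1, 2], n_per_class)

# A small feedforward network, in the spirit of the ANN classifier described above.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

In a real pipeline the feature vectors would come from segmented microscope images, and the per-morphogroup counts would feed the biostratigraphic analyses mentioned above.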
1062

Design of a bio-inspired computing substrate: hardware plasticity properties for self-adaptive computing architectures

Rodriguez, Laurent 01 December 2015 (has links)
The increasing parallelism on chips of ever-growing integration density raises a number of challenges, such as routing information through the data bottleneck, or simply exploiting massive and growing parallelism with modern computing paradigms that mostly derive from a sequential history. To relieve the designer of this complexity, we take a bio-inspired approach and define a new type of architecture based on self-adaptation. Mimicking brain plasticity, this architecture can adapt to its internal and external environment in a homeostatic way. It belongs to the family of embodied computing: the computing substrate is no longer a black box programmed for a given task, but is shaped by its environment and by the applications it supports. In our work, we propose a self-organizing neural map model, DMADSOM (Distributed Multiplicative Activity Dependent SOM), based on the principle of dynamic neural fields (DNFs), to bring the concept of plasticity to the architecture. The model's originality is that it adapts to the data of each stimulus without requiring a continuum across consecutive stimuli, which generalizes the application cases of this type of network; activity is still computed according to dynamic neural field theory. DNF networks are not directly portable to today's hardware technologies because of their high connectivity, and we propose several solutions to this problem. The first minimizes connectivity and approximates the network's behavior by learning on the remaining lateral connections; this behaves well in some application cases. To move beyond these limitations, and starting from the observation that when a signal propagates step by step over a grid topology the propagation time encodes the distance traveled, we also propose two methods that emulate the full, wide connectivity of neural fields efficiently and close to hardware technologies. The first substrate computes the potentials transmitted over the network by successive iterations, letting the data propagate in all directions; with a particular weighting of the iterations, it can compute all the lateral potentials of the map in a minimum number of iterations. The second uses a spike representation of the potentials, which travel over the grid without cycles, and reconstructs the full set of lateral potentials over the propagation iterations; it is highly customizable and has very low complexity while still computing the lateral potentials. The network supported by these substrates can characterize the statistical densities of the data processed by the architecture and control, in a distributed manner, the allocation of computing cells.
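The idea of emulating wide lateral connectivity with purely local, iterative propagation can be illustrated with a small NumPy sketch. The max-based update rule, exponential decay kernel, and 8-neighbour grid below are illustrative assumptions, not the thesis's actual substrate.

```python
import numpy as np

def lateral_potentials(source, gamma, n_iter):
    """Emulate a wide lateral-connectivity kernel by local propagation only.

    Each iteration, every cell keeps the larger of its own potential and a
    decayed copy of each 8-neighbour's.  After enough iterations the result
    equals gamma ** chebyshev_distance to the source, i.e. the full lateral
    kernel, computed with purely local (hardware-friendly) traffic.
    """
    p = source.astype(float)
    n, m = p.shape
    for _ in range(n_iter):
        padded = np.pad(p, 1)          # zero border, no wrap-around
        best = p.copy()
        for dx in (0, 1, 2):
            for dy in (0, 1, 2):
                if dx == 1 and dy == 1:
                    continue           # skip the cell itself
                best = np.maximum(best, gamma * padded[dx:dx + n, dy:dy + m])
        p = best
    return p

src = np.zeros((9, 9))
src[4, 4] = 1.0                        # a single active unit in the middle
p = lateral_potentials(src, gamma=0.5, n_iter=8)
print(p[4])                            # row through the source: 0.5 ** |j - 4|
```

The number of iterations needed equals the grid radius, which mirrors the observation above that propagation time encodes distance on a grid topology.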
1063

Improvement of the performance of virtual sensors in dead-time processes

Dementyev, Alexander 12 December 2014 (has links) (PDF)
Model-based virtual sensors allow the measurement of quality-critical process parameters (or auxiliary controlled variables) where a direct measurement is too expensive or not possible at all. For adaptive virtual sensors whose internal process model is built with data-driven methods (e.g., using artificial neural networks (ANNs)), estimating the prediction stability is a problem. Current approaches solve it only for a few ANN types and require enormous design and computational effort. This dissertation presents an alternative method that applies to a broad class of ANNs and requires no such effort. The new method was tested on real application examples and delivered very good results. For non-adaptive virtual sensors, a reduced-effort adaptation based on the Smith scheme is proposed. This technique allows dead-time and time-variant processes to be controlled with virtual sensors in a closed loop. Compared with other control strategies, it achieved comparable control quality with a considerably lower design effort.
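The Smith scheme mentioned above relies on the classic Smith predictor: a delay-free model of the process lets the controller act as if the dead time were absent. A minimal discrete-time sketch follows; the first-order plant, delay length, and PI gains are toy values chosen for illustration, not taken from the dissertation.

```python
import numpy as np

# Illustrative first-order plant with dead time: y[k+1] = a*y[k] + b*u[k-d].
a, b, d = 0.9, 0.1, 10
kp, ki = 0.5, 0.1
n_steps, setpoint = 400, 1.0

y = np.zeros(n_steps)        # measured process output (delayed)
x = 0.0                      # delay-free internal model state
u_hist = [0.0] * d           # buffer implementing the dead time
integ = 0.0

for k in range(n_steps - 1):
    # Smith predictor feedback is y[k] + (x - x_delayed).  With a perfect,
    # noise-free model, x_delayed == y[k], so the controller effectively
    # regulates the delay-free model state x.
    err = setpoint - x
    integ += err
    u = kp * err + ki * integ                  # PI controller
    u_hist.append(u)
    x = a * x + b * u                          # delay-free model update
    y[k + 1] = a * y[k] + b * u_hist[-d - 1]   # real plant sees u[k-d]

print(f"output after {n_steps} steps: {y[-1]:.3f} (setpoint {setpoint})")
```

Because the controller never sees the delay inside its loop, the gains can be tuned as for the delay-free plant, which is the source of the reduced design effort claimed above.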
1064

Application of machine learning to predicting the returns of carry trade

吳佳真 Unknown Date (has links)
This research develops an artificial neural network (ANN) mechanism for timely and effective prediction of carry trade returns. To achieve timeliness, the mechanism is implemented on TensorFlow with graphics processing units (GPUs). The mechanism must also cope with time-series data subject to concept drift and outliers. An experiment is designed to verify the timeliness and effectiveness of the proposed mechanism. During the experiment, we found that different parameter settings in the algorithm affect the network's performance, and this research discusses the results obtained under different parameters. The experimental results show that the proposed ANN mechanism can predict the direction of carry trade returns. We hope this research contributes to both the machine learning and finance fields.
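The concept-drift problem mentioned above is commonly handled by retraining on a sliding window, so the model tracks the most recent regime. A minimal sketch on synthetic data follows; the drifting AR(1) series, window length, and one-lag linear predictor stand in for the thesis's actual TensorFlow network and are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy return series whose generating coefficient flips halfway through
# (an artificial concept drift).
n = 400
phi = np.where(np.arange(n) < n // 2, 0.8, -0.5)
r = np.zeros(n)
for t in range(1, n):
    r[t] = phi[t] * r[t - 1] + rng.normal(0, 0.1)

window = 60
preds, actual = [], []
for t in range(window, n - 1):
    # Refit a one-lag least-squares predictor on the most recent window only,
    # so old-regime samples age out of the training set.
    x, y = r[t - window:t], r[t - window + 1:t + 1]
    coef = np.dot(x, y) / np.dot(x, x)
    preds.append(coef * r[t])
    actual.append(r[t + 1])

preds, actual = np.array(preds), np.array(actual)
hit_rate = np.mean(np.sign(preds) == np.sign(actual))
print(f"directional hit rate: {hit_rate:.2f}")
```

The same rolling-refit pattern applies unchanged when the linear predictor is replaced by a neural network; only the refit cost grows.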
1065

The effects of public opinion on exchange rate movements

林子翔, Lin, Tzu Hsiang Unknown Date (has links)
This study explores the hypothesis that relevant information in the news, posts in forums, and discussions on social media can actually affect the daily movement of exchange rates. We set up an experiment in which text mining is first applied to the news, forum, and social media sources to generate numerical representations of the textual information relevant to the exchange rate. Machine learning is then applied to learn the relationship between these representations and exchange rate movements, and the hypothesis is assessed by examining the effectiveness of the learned relationship. We propose a hybrid, two-stage neural network to learn and forecast the daily movements of the USD/TWD exchange rate. Unlike other studies, which focus on news or social media alone, we integrate both and add forum discussions as input data. Different combinations of the three data sources yield different views and may affect forecasting accuracy in different ways. Preliminary experimental results show that the method outperforms a random-walk model.
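The fusion of textual sources with market data described above can be sketched as feature concatenation: bag-of-words vectors from the text are joined with lagged market features and fed to a single classifier. Everything below (the toy headlines, the labels, and a logistic model in place of the two-stage neural network) is an illustrative assumption.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus: one "document" per day (news + forum + social media concatenated).
docs = [
    "central bank signals rate hike exporters worried",
    "strong export orders optimism in tech sector",
    "political uncertainty weighs on markets",
    "trade surplus beats expectations currency demand up",
    "intervention rumors traders cautious",
    "foreign inflows rise equities rally",
]
updown = np.array([0, 1, 0, 1, 0, 1])   # 1 = currency appreciated next day (toy labels)
lagged = np.array([[-0.2], [0.1], [-0.3], [0.4], [-0.1], [0.2]])  # toy lagged returns

vec = CountVectorizer()
X_text = vec.fit_transform(docs).toarray()
X = np.hstack([X_text, lagged])          # text view and market view in one matrix

clf = LogisticRegression(max_iter=1000).fit(X, updown)
acc = clf.score(X, updown)
print("in-sample accuracy:", acc)
```

In the study's setting, each data-source combination would simply select which column blocks enter `X`, which is how the different "views" compared above arise.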
1066

Artificial development of neural-symbolic networks

Townsend, Joseph Paul January 2014 (has links)
Artificial neural networks (ANNs) and logic programs have both been suggested as means of modelling human cognition. While ANNs are adaptable and relatively noise resistant, the information they represent is distributed across various neurons and is therefore difficult to interpret. By contrast, symbolic systems such as logic programs are interpretable but less adaptable. Human cognition is performed in a network of biological neurons and yet is capable of representing symbols, and therefore an ideal model would combine the strengths of the two approaches. This is the goal of Neural-Symbolic Integration [4, 16, 21, 40], in which ANNs are used to produce interpretable, adaptable representations of logic programs and other symbolic models. One neural-symbolic model of reasoning is SHRUTI [89, 95], argued to exhibit biological plausibility in that it captures some aspects of real biological processes. SHRUTI's original developers also suggest that further biological plausibility can be ascribed to the fact that SHRUTI networks can be represented by a model of genetic development [96, 120]. The aims of this thesis are to support the claims of SHRUTI's developers by producing the first such genetic representation for SHRUTI networks, and to explore biological plausibility further by investigating the evolvability of the proposed SHRUTI genome. The SHRUTI genome is developed and evolved using principles from Generative and Developmental Systems and Artificial Development [13, 105], in which genomes use indirect encoding to provide a set of instructions for the gradual development of the phenotype, just as DNA does for biological organisms. This thesis presents genomes that develop SHRUTI representations of logical relations and episodic facts so that they are able to correctly answer questions on the knowledge they represent. The evolvability of the SHRUTI genomes is limited: an evolutionary search was able to discover genomes for simple relational structures without conjunction, but could not discover structures that enabled conjunctive relations or episodic facts to be learned. Experiments performed to understand the SHRUTI fitness landscape demonstrated that it is unsuitable for navigation by an evolutionary search: complex SHRUTI structures require that the necessary substructures be discovered in unison, not individually, in order to yield a positive change in objective fitness that informs the search of their discovery. The requirement for multiple substructures to be in place before fitness can improve is probably due to the localist representation of concepts and relations in SHRUTI. The thesis therefore concludes by making a case for switching to more distributed representations as a possible means of improving evolvability in the future.
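The evolvability argument above, that fitness only improves when several substructures appear together, can be illustrated with a toy genetic algorithm. The "royal road"-style fitness below (credit only for complete blocks of genes, nothing for partial ones) is an assumption chosen to mimic that plateau-ridden landscape; it is not SHRUTI's actual objective function.

```python
import random

random.seed(0)
GENOME_LEN, BLOCK, POP, GENS = 24, 4, 40, 60

def fitness(genome):
    # Reward only complete blocks: partial substructures score nothing,
    # mimicking the need to discover substructures in unison.
    blocks = [genome[i:i + BLOCK] for i in range(0, GENOME_LEN, BLOCK)]
    return sum(all(b) for b in blocks)

def mutate(genome, rate=0.05):
    # Flip each bit independently with the given probability.
    return [g ^ (random.random() < rate) for g in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
best0 = max(fitness(g) for g in pop)
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                  # elitist truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
best = max(fitness(g) for g in pop)
print(f"best fitness: {best0} -> {best} (max {GENOME_LEN // BLOCK})")
```

Because a block contributes nothing until all of its genes are set, single-bit improvements are invisible to selection, which is the mechanism the thesis identifies as limiting the evolvability of localist SHRUTI structures.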
1067

Computer analysis of lumbar intervertebral disks in magnetic resonance imaging

Marcelo da Silva Barreiro 16 November 2016 (has links)
The intervertebral disc is a structure whose function is to receive, absorb, and distribute the impact of loads imposed on the spine. Increasing age and the posture adopted by the individual can lead to degeneration of the intervertebral disc. Currently, Magnetic Resonance Imaging (MRI) is considered the best and most sensitive noninvasive imaging method for evaluating the intervertebral disc. In this work, quantitative computerized methods were developed to aid the diagnosis of intervertebral disc degeneration in T2-weighted MR images of the lumbar spine, according to the Pfirrmann scale, a semi-quantitative scale with five degrees of degeneration. The algorithms were tested on a dataset of images of 300 discs, obtained from 102 subjects with varying degrees of degeneration. Manually segmented binary masks of the discs were used to compute their centroids, providing a reference point for feature extraction. Texture analysis was performed using the approach proposed by Haralick. For shape characterization, the invariant moments defined by Hu and the central moments were also computed for each disc. Classification of the degree of degeneration was performed using an artificial neural network and the set of features extracted from each disc. An average correct-classification rate of 87%, with a standard error of 6.59%, and an average area under the ROC (Receiver Operating Characteristic) curve of 0.92 indicate the potential of the developed algorithms as a support tool for diagnosing intervertebral disc degeneration.
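The shape features mentioned above, Hu's invariant moments, can be computed directly from a binary disc mask. A minimal NumPy sketch of the first two invariants follows (the elliptical toy mask is an illustrative stand-in for a segmented disc; the Haralick texture features would be extracted separately from grey-level co-occurrence matrices).

```python
import numpy as np

def hu_first_two(mask):
    """First two of Hu's moment invariants for a 2-D binary mask."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                       # zeroth moment = area in pixels
    x, y = xs - xs.mean(), ys - ys.mean()
    # Normalised central moments: eta_pq = mu_pq / m00 ** (1 + (p+q)/2)
    def eta(p, q):
        return np.sum(x ** p * y ** q) / m00 ** (1 + (p + q) / 2)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    phi1 = e20 + e02
    phi2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    return phi1, phi2

# An elliptical "disc" mask and a translated copy: the invariants should match.
yy, xx = np.mgrid[0:64, 0:64]
disc = ((xx - 30) / 12.0) ** 2 + ((yy - 30) / 6.0) ** 2 <= 1
shifted = np.roll(np.roll(disc, 5, axis=0), -7, axis=1)
print(hu_first_two(disc), hu_first_two(shifted))
```

Translation and scale invariance is what makes these moments usable as features regardless of where in the image the disc was segmented.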
1068

Mining of musical structures and automatic composition using complex networks

Andrés Eduardo Coca Salazar 26 November 2014 (has links)
The theory of complex networks has increasingly become a powerful computational tool capable of representing, characterizing, and examining systems with non-trivial structure, revealing local and global intrinsic features that aid the understanding of the behavior and dynamics of such systems. This thesis explores the advantages of complex networks in solving problems related to musical tasks; specifically, three approaches are studied: pattern recognition, data mining, and music synthesis. The first approach is addressed by developing a method for extracting the rhythmic patterns of a piece of popular music. In such pieces, different kinds of rhythmic patterns coexist, forming a hierarchy determined by functional aspects within the rhythmic base. The main rhythmic patterns are characterized by their higher incidence within the musical discourse, a property reflected in the formation of communities within the network built from the piece. Community detection techniques are applied to extract the rhythmic patterns, and a measure to distinguish the main patterns from the secondary ones is proposed. The results show that the quality of the extraction is sensitive to the detection algorithm, to how the rhythm is represented, and to how the percussion lines are treated when generating the network. Data mining is then performed using topological measures on the network obtained after removing the secondary patterns. Supervised and unsupervised learning techniques are applied to discriminate musical genre according to the attributes computed in the mining phase; the results show the efficiency of the proposed methodology, which was confirmed by a test of statistical significance. The last approach concerns melody composition, developed from two perspectives: the first uses a criteria-controlled walk over predefined complex networks; the second uses recurrent neural networks and chaotic dynamical systems. In the latter, the model is trained to compose a melody with a preset value of a subjective tonal characteristic, through a proportional control strategy that modifies the complexity of a chaotic melody acting as the network's inspiration input.
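The community-detection step above can be sketched on a toy "rhythm network": nodes are rhythmic events or patterns, edges encode their co-occurrence, and modularity-based communities recover the dominant patterns. The graph below is an illustrative stand-in for a network extracted from a real percussion track, not data from the thesis.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy network: two rhythmic patterns (dense cliques) sharing one transition
# edge, standing in for a network built from a percussion score.
G = nx.Graph()
pattern_a = ["kick_1", "snare_1", "hat_1", "kick_2"]
pattern_b = ["tom_1", "tom_2", "ride_1", "crash_1"]
for group in (pattern_a, pattern_b):
    for i, u in enumerate(group):
        for v in group[i + 1:]:
            G.add_edge(u, v)            # dense within-pattern co-occurrence
G.add_edge("kick_2", "tom_1")           # sparse between-pattern transition

communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])
```

On real data, the relative size or internal edge weight of each community would supply the kind of measure the thesis proposes for separating main patterns from secondary ones.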
1069

Ecological models at fish community and species level to support effective river restoration

Olaya Marín, Esther Julia 15 May 2013 (has links)
Native fish are indicators of the health of aquatic ecosystems and have become a key quality element for assessing the ecological status of rivers. Understanding the factors that affect native fish species is important for the management and conservation of aquatic ecosystems. The general objective of this thesis is to analyze the relationships between biological and habitat variables (including connectivity) across a range of spatial scales in Mediterranean rivers, developing modelling tools to support decision-making in river restoration. The thesis consists of four articles. The first aims to model the relationship between a set of environmental variables and native fish species richness (NFSR), and to evaluate the effectiveness of potential restoration actions to improve NFSR in the Júcar river basin. An artificial neural network (ANN) modelling approach was applied, using the Levenberg-Marquardt algorithm in the training phase, and the partial derivatives method was used to determine the relative importance of the environmental variables. According to the results, the ANN model combined variables describing riparian quality, water quality, and physical habitat, and helped identify the main factors conditioning the distribution pattern of NFSR in Mediterranean rivers. In the second part of the study, the model was used to evaluate the effectiveness of two restoration actions in the Júcar river: the removal of two abandoned weirs, with the consequent increase in the proportion of riffles. The simulations indicate that richness increases with the length of river free of artificial barriers and with the proportion of riffle mesohabitat, and they demonstrated the usefulness of ANNs as a powerful tool to support decision-making in the management and ecological restoration of Mediterranean rivers. The second article aims to determine the relative importance of the two main groups of factors controlling the reduction of NFSR, namely habitat variables (including river connectivity) and biological variables (including invasive species), in the Júcar, Cabriel and Turia rivers. To this end, three ANN models were analyzed: the first built only with biological variables, the second only with habitat variables, and the third combining both groups. The results show that habitat variables are the most important drivers of the distribution of NFSR and demonstrate the ecological relevance of the developed models; they highlight the need for mitigation measures related to habitat improvement (including flow variability) to conserve and restore Mediterranean rivers. The third article compares the reliability and ecological relevance of two predictive models of NFSR, based on artificial neural networks (ANN) and random forests (RF). The relevance of the variables selected by each model was evaluated from ecological knowledge and supported by other research. Both models were developed using k-fold cross-validation, and their performance was evaluated through three indices: the coefficient of determination (R2), the mean squared error (MSE), and the adjusted coefficient of determination (R2adj). RF performed best in training, but the cross-validation procedure revealed that both techniques generated similar results (R2 = 68% for RF and R2 = 66% for ANN). Comparing different machine learning methods is very useful for the critical analysis of model results. The fourth article evaluates the ability of ANNs to identify the factors affecting the density and the presence/absence of Luciobarbus guiraonis in the Júcar river basin district. A multilayer feedforward artificial neural network was used to represent nonlinear relationships between descriptors of L. guiraonis and biological and habitat variables. The predictive power of the models was evaluated using the Kappa index (k), the proportion of correctly classified instances (CCI), and the area under the receiver operating characteristic (ROC) curve (AUC). The presence/absence of L. guiraonis was well predicted by the ANN model (CCI = 87%, AUC = 0.85, k = 0.66); density prediction was moderate (CCI = 62%, AUC = 0.71, k = 0.43). The most important variables describing presence/absence were solar radiation, drainage area, and the proportion of exotic fish species, with relative weights of 27.8%, 24.53%, and 13.60%, respectively. In the density model, the most important variables were the coefficient of variation of mean annual flows (relative importance 50.5%) and the proportion of exotic fish species (24.4%). The models provide important information about the relationship of L. guiraonis with biotic and habitat variables; this new knowledge could support future studies and contribute to decision-making for the conservation and management of species in the Júcar, Cabriel and Turia rivers. / Olaya Marín, EJ. (2013). Ecological models at fish community and species level to support effective river restoration [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/28853
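The ANN-versus-random-forest comparison under k-fold cross-validation described above follows a standard pattern that can be sketched with scikit-learn. The synthetic regression data and the hyperparameters below are placeholders for the thesis's environmental variables and tuned models.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for (environmental variables -> native species richness).
X, y = make_regression(n_samples=200, n_features=8, noise=10.0, random_state=0)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                                      random_state=0)),
}
for name, model in models.items():
    # R2 averaged over held-out folds, as in the thesis's model comparison.
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: R2 = {scores.mean():.2f} +/- {scores.std():.2f}")
```

As the thesis observes, training-set performance can favor one model while cross-validated scores show the two techniques behaving similarly, which is why the held-out R2 is the figure to compare.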

Machine learning-based software framework for the automation of photoacoustic measurement data processing

Jordović Pavlović Miroslava 30 October 2020 (has links)
<p>The main task of the research presented in this dissertation is the development of a model, based on machine learning algorithms, that describes the complex influence of the measuring system on the useful experimental signal, with the aim of eliminating this influence. The case study is the widespread photoacoustic transmission measurement method with a minimum-volume cell configuration. The multidisciplinarity and complexity of the problem determined the following steps in the solution methodology: 1) development of software for generating simulated experimental data; 2) development of a regression model, based on a three-layer neural network, for precise and reliable characterization of detectors, performed in real time; 3) development of a classification model, based on a neural network of simple structure, for precise and reliable prediction of the type of detector in use, performed in real time; 4) coupling of the regression and classification models, with additional software for adjusting the model to a real experiment. In this way the software framework is completed: it performs the complex task of extracting the "true" signal from the distorted experimental signal without the involvement of researchers, i.e. it performs the autocorrection. Testing was carried out on several different detectors and several different materials in a photoacoustic experiment. With the application of the developed software framework, the competitiveness of the experimental technique has increased significantly: accuracy and reliability have been improved, the measurement range has been expanded and the processing time of measurement results has been reduced.</p>
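The regression model described above is a three-layer feedforward network evaluated in real time. A hedged sketch of such a forward pass (not the dissertation's code; the weights and inputs below are hypothetical placeholders) is:

```python
# Forward pass through a minimal three-layer feedforward network
# (input -> tanh hidden layer -> linear output) of the kind used
# for a real-time regression model. All numbers are illustrative.
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # hidden layer with tanh activation
    h = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
         for w, b in zip(w_hidden, b_hidden)]
    # linear output neuron (regression, so no output activation)
    return sum(wo * hi for wo, hi in zip(w_out, h)) + b_out

y = forward([0.5, -1.0],
            w_hidden=[[0.3, -0.2], [0.1, 0.4]], b_hidden=[0.0, 0.1],
            w_out=[0.7, -0.5], b_out=0.2)
```

Because inference is just a handful of multiply-adds and activations, such a trained network can be evaluated per measurement point fast enough for the real-time correction the framework performs.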
