  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

Die vegetative Innervation der Pferdelunge [The autonomic innervation of the equine lung]

Hirschfeld, Anna 15 November 2019 (has links)
Recurrent airway obstruction (RAO), known in German as "Dämpfigkeit" (heaves), is a globally recognized and widespread airway disease of the horse, characterized by hypersensitivity-mediated airway inflammation with accompanying neutrophilia. Triggered by unfavourable environmental conditions, the classic phenotype of this disease includes coughing, nasal discharge, dyspnoea, and loss of performance. The pathophysiological processes manifest as bronchial obstruction, mucus hypersecretion, airway hyperreactivity, and airway remodelling. To date, the literature contains no detailed data on the sympathetic and parasympathetic innervation of the equine lung. The present work provides the first comprehensive immunohistochemical analysis of the nerve branches in the equine lung. Sympathetic and parasympathetic fibres were detected by immunofluorescence labelling of ChAT and TH. The highly purified antibodies used proved to be suitable markers for cholinergic and catecholaminergic cell structures, respectively. There was no indication that the immunoreactivity changes along the course of the fibres or weakens from cranial to caudal. Striking was the strong ChAT immunoreactivity in the examined tissue sections of a horse suffering from RAO, which points to an upregulation of the parasympathetic system in the course of this lung disease. The additional detection of further neuronal markers such as MAP2 and NF-L, as well as of microglia and astrocytes, allowed further changes in the course of the disease to be demonstrated. The validated co-expression of catecholaminergic and cholinergic marker enzymes suggests an autonomic mode of regulation with the potential for a variable response to environmental influences.
The immunofluorescence double labelling of cholinergic and catecholaminergic cell structures established in the present work provides a solid basis for further investigations of equine tissues under physiological and pathological conditions.
312

Etablierung eines isoliert druckkonstant perfundierten Ex-vivo-Modells des equinen Larynx [Establishment of an isolated, constant-pressure perfused ex vivo model of the equine larynx]

Otto, Sven 14 May 2018 (has links)
The use of ex vivo models allows the experimental investigation of isolated organs unaffected by the whole organism. Such models have already been described for many species and organs. If the organ is perfused, constant-flow and constant-pressure perfusion are the two available options. For the equine larynx, only a single constant-flow perfused model has been described in the literature so far. In equine medicine, such models can help develop new treatment options for so-called recurrent laryngeal neuropathy (RLN), in which the musculus cricoarytaenoideus dorsalis (CAD) atrophies. The aim of this study was to establish a constant-pressure perfused ex vivo model of the equine larynx, with a particular focus on the functionality of the CAD. In two preliminary phases, various parameters were tested for their suitability as markers of the vitality of the excised larynx. The parameters thus identified were then used in the main experiments to assess the vitality of the perfused larynx at the beginning and at the end of perfusion. Larynges from 16 horses were used. They were harvested immediately after euthanasia and connected to a perfusion circuit in the laboratory, in which a modified Tyrode's solution was perfused bilaterally via the arteria thyroidea cranialis. The solution was gassed with carbogen throughout the experiment, maintaining a pH of 7.35 to 7.45, and a constant perfusion pressure of 9.81 kPa was applied. After the adaptation phase, the vitality parameters were tested to verify the integrity and functionality of the arterial blood supply and of the CAD.
In addition to myogenic autoregulation, the vascular response to noradrenaline (NA) as a vasoconstrictor and to nitroprusside (NO) and papaverine (Papa) as vasodilators was determined. Furthermore, the contractility and functionality of the CAD were assessed by measuring the intramuscular pressure after electrical stimulation. In addition, perfusate samples for measuring lactate concentration and lactate dehydrogenase (LDH) activity were taken at three time points. For the statistical analysis, a one-way repeated-measures ANOVA was used, followed by the Holm-Sidak post-hoc test in the case of significant differences; a probability of p < 0.05 was considered statistically significant. In the main experiment, perfusions of n = 5 larynges were carried out over a period of 352 ± 18.59 minutes. Myogenic autoregulation was visible in three larynges at the beginning and in four larynges at the end of perfusion. Four larynges responded to NA with vasoconstriction at the beginning of perfusion, and all five at the end. NO produced vasodilation in four larynges at the beginning of the experiment; at the end, both NO and Papa produced vasodilation in all cases. The contractility of the CAD after electrical stimulation could be measured in all cases at the beginning and at the end of the experiment, whereas testing the functionality of the CAD yielded heterogeneous results overall. During perfusion, both the lactate concentration and the LDH activity increased statistically significantly, but both remained within the reference ranges reported for the horse in the literature. The present work describes for the first time the establishment of a constant-pressure perfused ex vivo model of the equine larynx.
The integrity and functionality of the perfused larynx were verified by various vitality tests. Myogenic autoregulation proved to be a useful but failure-prone test. The application of vasoactive substances to verify the functionality of the arterial vessels proved very reliable. The test of CAD contractility turned out to be a vitality parameter of low informative value, whereas the test of CAD functionality was more informative about the vitality of the excised larynx but also showed inter-individual variability. The ex vivo model described in the present work provides a solid basis for further research in the field of laryngeal paralysis.
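The statistical workflow used above (one-way repeated-measures ANOVA, followed by Holm-Sidak-adjusted pairwise comparisons at p < 0.05) can be sketched as follows. The measurement values below are simulated stand-ins, not data from the study:

```python
import numpy as np

# Simulated lactate-like measurements: n=5 subjects (larynges) x k=3 time points,
# with a clear increase over time plus small measurement noise
rng = np.random.default_rng(0)
base = rng.uniform(0.5, 1.0, 5)
data = np.column_stack([base, base + 0.4, base + 0.9]) + rng.normal(0, 0.05, (5, 3))

n, k = data.shape
grand = data.mean()
# One-way repeated-measures ANOVA: partition total variability into
# time effect, subject effect, and residual error
ss_time = n * ((data.mean(axis=0) - grand) ** 2).sum()
ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
ss_total = ((data - grand) ** 2).sum()
ss_err = ss_total - ss_time - ss_subj
df_time, df_err = k - 1, (k - 1) * (n - 1)
F = (ss_time / df_time) / (ss_err / df_err)

def holm_sidak_thresholds(m, alpha=0.05):
    """Holm-Sidak step-down thresholds: the j-th smallest of m pairwise
    p-values is compared against 1 - (1 - alpha)**(1 / (m - j))."""
    return [1 - (1 - alpha) ** (1 / (m - j)) for j in range(m)]

thresholds = holm_sidak_thresholds(3)   # 3 pairwise time-point comparisons
```

A large F relative to the F(df_time, df_err) distribution indicates a significant time effect, after which each pairwise p-value (smallest first) is checked against the step-down thresholds.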
313

Recurrent brief depressive disorder reinvestigated : a community sample of adolescents and young adults

Pezawas, Lukas, Wittchen, Hans-Ulrich, Pfister, Hildegard, Angst, Jules, Lieb, Roselind, Kasper, Siegfried January 2003 (has links)
Background: This article presents prospective lower bound estimations of findings on prevalence, incidence, clinical correlates, severity markers, co-morbidity and course stability of threshold and subthreshold recurrent brief depressive disorder (RBD) and other mood disorders in a community sample of 3021 adolescents. Method: Data were collected at baseline (age 14–17) and at two follow-up interviews within an observation period of 42 months. Diagnostic assessment was based on the Munich Composite International Diagnostic Interview (M-CIDI). Results: Our data suggest that RBD is a prevalent (2.6%) clinical condition among depressive disorders (21.3%) being at least as prevalent as dysthymia (2.3%) in young adults over lifetime. Furthermore, RBD is associated with significant clinical impairment sharing many features with major depressive disorder (MDD). Suicide attempts were reported in 7.8% of RBD patients, which was similar to MDD (11.9%). However, other features, like gender distribution or co-morbidity patterns, differ essentially from MDD. Furthermore, the lifetime co-occurrence of MDD and RBD or combined depression represents a severe psychiatric condition. Conclusions: This study provides further independent support for RBD as a clinically significant syndrome that could not be significantly explained as a prodrome or residual of major affective disorders.
314

Structure, Dynamics and Self-Organization in Recurrent Neural Networks: From Machine Learning to Theoretical Neuroscience

Vilimelis Aceituno, Pau 03 July 2020 (has links)
At first glance, artificial neural networks, with engineered learning algorithms and carefully chosen nonlinearities, are nothing like the complicated self-organized spiking neural networks studied by theoretical neuroscientists. Yet both adapt to their inputs, keep information from the past in their state space, and are capable of learning, implying that some information processing principles should be common to both. In this thesis we study those principles by incorporating notions of systems theory, statistical physics, and graph theory into artificial neural networks and theoretical neuroscience models. The starting point for this thesis is reservoir computing (RC), a learning paradigm used both in machine learning (Jaeger 2004) and in theoretical neuroscience (Maass 2002). A neural network in RC consists of two parts: a reservoir, a directed and weighted network of neurons that projects the input time series onto a high-dimensional space, and a readout, which is trained to read the state of the neurons in the reservoir and combine them linearly to give the desired output. In classical RC, the reservoir is randomly initialized and left untrained, which alleviates the training costs in comparison to other recurrent neural networks. However, this lack of training implies that reservoirs are not adapted to specific tasks, and thus their performance is often lower than that of other neural networks. Our contribution has been to show how knowledge about a task can be integrated into the reservoir architecture, so that reservoirs can be tailored to specific problems without training. We do this design by identifying two features that are useful for machine learning: the memory of the reservoir and its power spectrum.
First, we show that the correlations between neurons limit the capacity of the reservoir to retain traces of previous inputs, and demonstrate that those correlations are controlled by the moduli of the eigenvalues of the adjacency matrix of the reservoir. Second, we prove that when the reservoir resonates at the frequencies present in the desired output signal, the performance of the readout increases. Knowing which features of the reservoir dynamics we need, the next question is how to impose them. The simplest way to design a network that resonates at a certain frequency is to add cycles, which act as feedback loops, but this also induces correlations and hence modifies the memory. To disentangle the frequency design from the memory design, we studied how the addition of cycles modifies the eigenvalues of the adjacency matrix of the network. Surprisingly, the shape of the eigenvalue distribution is quite beautiful (Aceituno et al. 2019) and can be characterized using tools from random matrix theory. Combining this knowledge with our result relating eigenvalues and correlations, we designed a heuristic that tailors reservoirs to specific tasks and showed that it improves upon state-of-the-art RC in three different machine learning tasks. Although this idea works in the machine learning version of RC, there is one fundamental problem when we try to translate it to the world of theoretical neuroscience: the proposed frequency adaptation requires prior knowledge of the task, which might not be plausible in a biological neural network. The following questions are therefore whether those resonances can emerge through unsupervised learning, and what kind of learning rules would be required. Remarkably, these resonances can be induced by the well-known spike-timing-dependent plasticity (STDP) combined with homeostatic mechanisms.
We show this by deriving two self-consistent equations: one where the activity of every neuron can be calculated from its synaptic weights and its external inputs, and a second one where the synaptic weights can be obtained from the neural activity. By considering spatio-temporal symmetries in our inputs, we obtained two families of solutions to those equations in which a periodic input is enhanced by the neural network after STDP. This approach shows that periodic and quasiperiodic inputs can induce resonances that agree with the aforementioned RC theory. Those results, although rigorous, are expressed in the language of statistical physics and cannot easily be tested or verified in real, scarce data. To make them more accessible to the neuroscience community, we showed that latency reduction, a well-known effect of STDP (Song et al. 2000) which has been experimentally observed (Mehta et al. 2000), generates neural codes that agree with the self-consistency equations and their solutions. In particular, this analysis shows that metabolic efficiency, synchronization, and prediction can emerge from that same phenomenon of latency reduction, thus closing the loop with our original machine learning problem. To summarize, this thesis exposes principles of learning in recurrent neural networks that are consistent with adaptation in the nervous system and also improve current machine learning methods. This is done by leveraging features of the dynamics of recurrent neural networks, such as resonances and correlations, in machine learning problems, then imposing the required dynamics onto reservoir computing through control theory notions such as feedback loops and spectral analysis. Then we assessed the plausibility of such adaptation in biological networks, deriving solutions from self-organizing processes that are biologically plausible and align with the machine learning prescriptions.
Finally, we relate those processes to learning rules in biological neurons, showing how small local adaptations of the spike times can lead to neural codes that are efficient and can be interpreted in machine learning terms.
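The reservoir-plus-readout split described above can be sketched as a minimal echo state network: a fixed random recurrent reservoir driven by the input, and a linear (ridge-regression) readout as the only trained part. All sizes, the spectral-radius scaling, and the regularizer below are illustrative choices, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Reservoir: random, untrained recurrent weights (classical RC) ---
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))        # input weights
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with the input series u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict u(t+1) from u(t) for a sine wave
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)
washout = 100                                       # discard initial transient
X_tr, y_tr = X[washout:], y[washout:]

# --- Readout: the only trained part, a linear ridge regression ---
reg = 1e-6
W_out = np.linalg.solve(X_tr.T @ X_tr + reg * np.eye(n_res), X_tr.T @ y_tr)

pred = X_tr @ W_out
mse = np.mean((pred - y_tr) ** 2)
```

Because only `W_out` is fitted, training reduces to a single linear solve, which is the cost advantage of classical RC that the thesis builds on.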
315

Semantic Segmentation of Urban Scene Images Using Recurrent Neural Networks

Daliparthi, Venkata Satya Sai Ajay January 2020 (has links)
Background: In Autonomous Driving Vehicles, the vehicle receives pixel-wise sensor data from RGB cameras, point-wise depth information, and other sensor data as input. The computer inside the Autonomous Driving vehicle processes the input data and provides the desired output, such as steering angle, torque, and brake. To make accurate decisions, the computer inside the vehicle should be completely aware of its surroundings and understand each pixel in the driving scene. Semantic Segmentation is the task of assigning a class label (such as Car, Road, Pedestrian, or Sky) to each pixel in a given image, so a better-performing Semantic Segmentation algorithm will contribute to the advancement of the Autonomous Driving field. Research Gap: Traditional methods, such as handcrafted features and feature extraction methods, were mainly used to solve Semantic Segmentation. Since the rise of deep learning, most works use deep learning to deal with Semantic Segmentation, and the most commonly used neural network architecture has been the Convolutional Neural Network (CNN). Even though some works have made use of Recurrent Neural Networks (RNNs), the effect of RNNs on Semantic Segmentation has not yet been thoroughly studied. Our study addresses this research gap. Idea: After going through the existing literature, we came up with the idea of using RNNs as an add-on module to augment the skip-connections in Semantic Segmentation networks through residual connections. Objectives and Method: The main objective of our work is to improve the performance of Semantic Segmentation networks by using RNNs. The experiment was chosen as the methodology for our study. We propose three novel architectures, called UR-Net, UAR-Net, and DLR-Net, by applying our idea to the existing networks U-Net, Attention U-Net, and DeepLabV3+, respectively.
Results and Findings: We show empirically that our proposed architectures improve the segmentation of edges and boundaries. Through our study, we found that there is a trade-off between using RNNs and the inference time of the model: if we use RNNs to improve the performance of Semantic Segmentation networks, we must trade off some extra seconds during inference. Conclusion: Our findings will not benefit the Autonomous Driving field, where better performance is needed in real time, but they will contribute to the advancement of biomedical image segmentation, where doctors can trade those extra seconds of inference time for better performance.
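The add-on idea can be sketched as a small recurrent unit scanned over a skip-connection feature map, with its output added back residually. This is a minimal illustration assuming a GRU scanned over the rows of a channels-last feature map; the weights and shapes are invented for the sketch, and the thesis architectures (UR-Net, UAR-Net, DLR-Net) are not reproduced here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GRUSkipRefiner:
    """Refine a skip-connection feature map with a GRU scanned over its rows,
    then add the result back residually (illustrative random weights)."""
    def __init__(self, channels, seed=0):
        rng = np.random.default_rng(seed)
        c = channels
        # each gate maps the concatenation [h, x] (2c) back to c channels
        self.Wz = rng.normal(0, 0.1, (c, 2 * c))
        self.Wr = rng.normal(0, 0.1, (c, 2 * c))
        self.Wh = rng.normal(0, 0.1, (c, 2 * c))

    def __call__(self, feat):
        """feat: (height, width, channels); returns the same shape."""
        h = np.zeros(feat.shape[1:])                # (width, channels) state
        out = np.empty_like(feat)
        for i, x in enumerate(feat):                # scan over rows
            hx = np.concatenate([h, x], axis=-1)
            z = sigmoid(hx @ self.Wz.T)             # update gate
            r = sigmoid(hx @ self.Wr.T)             # reset gate
            cand = np.tanh(np.concatenate([r * h, x], axis=-1) @ self.Wh.T)
            h = (1 - z) * h + z * cand
            out[i] = x + h                          # residual connection
        return out

skip = np.random.default_rng(1).normal(size=(8, 8, 4))
refined = GRUSkipRefiner(4)(skip)
```

The residual form (`x + h`) means the module can only refine the skip features, never replace them, which matches the "add-on" framing above; the recurrence is also where the extra inference time comes from, since rows are processed sequentially.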
316

Self-Regulating Neurons. A model for synaptic plasticity in artificial recurrent neural networks

Ghazi-Zahedi, Keyan Mahmoud 04 February 2009 (has links)
Robustness and adaptivity are important behavioural properties observed in biological systems which are still widely absent in artificial intelligence applications; such static or non-plastic artificial systems are limited to their very specific problem domain. This work introduces a general model for synaptic plasticity in embedded artificial recurrent neural networks, which is related to short-term plasticity by synaptic scaling in biological systems. The model is general in the sense that it does not require trigger mechanisms or artificial limitations, and it operates on recurrent neural networks of arbitrary structure. A Self-Regulating Neuron is defined as a homeostatic unit which regulates its activity against external disturbances towards a target value by modulating its incoming and outgoing synapses. Embedded and situated in the sensori-motor loop, a network of these neurons is permanently driven by external stimuli and will generally not settle at its asymptotically stable state; the system's behaviour is determined by the local interactions of the Self-Regulating Neurons. The neuron model is analysed as a dynamical system with respect to its attractor landscape and its transient dynamics. The latter analysis is conducted on different control structures for obstacle avoidance, with increasing structural complexity, derived from the literature; the result is a controller that shows first traces of adaptivity. Next, two controllers for different tasks are evolved and their transient dynamics are fully analysed. The results of this work not only show that the proposed neuron model enhances the behavioural properties, but also point out the limitations of short-term plasticity, which does not account for learning and memory.
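The homeostatic regulation described above can be caricatured at the level of a single incoming synapse per neuron: each unit multiplicatively rescales its weight until its absolute activity reaches a target value. This is a toy sketch of synaptic scaling under invented parameters, not the thesis's full model (which modulates both incoming and outgoing synapses inside a recurrent network in the sensori-motor loop):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
w = rng.uniform(0.5, 3.0, n)   # one incoming synaptic weight per neuron
x = rng.uniform(0.5, 2.0, n)   # constant presynaptic inputs (the "disturbance")
target, eta = 0.5, 0.05        # target absolute activity, scaling rate

for _ in range(2000):
    a = np.tanh(w * x)
    # homeostatic synaptic scaling: each neuron multiplicatively nudges its
    # incoming weight so that its absolute activity approaches the target
    w *= 1.0 + eta * (target - np.abs(a))
```

Each neuron settles where tanh(w*x) equals the target, regardless of its individual input strength, which is the self-regulating behaviour the model builds on.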
317

Multivariate analysis of the parameters in a handwritten digit recognition LSTM system

Zervakis, Georgios January 2019 (has links)
Throughout this project, we perform a multivariate analysis of the parameters of a long short-term memory (LSTM) system for handwritten digit recognition in order to understand the model's behaviour. In particular, we are interested in explaining how this behaviour precipitates from the model's parameters, and in what in the network is responsible for the model arriving at a certain decision. This problem is often referred to as the interpretability problem and falls under the scope of Explainable AI (XAI). The motivation is to make AI systems more transparent, so that humans can establish trust in them. For this purpose, we make use of the MNIST dataset, which has been used successfully in the past for tackling the digit recognition problem; moreover, the balance and simplicity of the data make it an appropriate dataset for carrying out this research. We start by investigating the linear output layer of the LSTM, which is directly associated with the model's predictions. The analysis includes several experiments in which we apply various methods from linear algebra, such as principal component analysis (PCA) and singular value decomposition (SVD), to interpret the parameters of the network. For example, we experiment with different low-rank approximations of the output weight matrix in order to see the importance of each singular vector for each digit class. We found that cutting off the fifth left and right singular vectors makes the model practically lose its ability to predict eights. Finally, we present a framework for analysing the parameters of the hidden layer, along with our implementation of an LSTM-based variational autoencoder that serves this purpose.
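The low-rank experiments above can be sketched with plain SVD arithmetic. The weight matrix below is a random stand-in for the trained LSTM output layer, and the per-row reconstruction error shows which class logits depend most on a removed singular direction:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_hidden = 10, 32
W_out = rng.normal(size=(n_classes, n_hidden))  # stand-in for a trained output matrix

U, s, Vt = np.linalg.svd(W_out, full_matrices=False)

def low_rank(U, s, Vt, keep):
    """Rebuild the matrix from a boolean mask of singular triplets."""
    return (U[:, keep] * s[keep]) @ Vt[keep]

# Drop the 5th singular component (index 4), keep the rest
keep = np.ones(len(s), dtype=bool)
keep[4] = False
W_approx = low_rank(U, s, Vt, keep)

# The per-class (per-row) error reveals which class's logits rely most
# on the removed singular direction: here it equals s[4] * |U[:, 4]|
row_err = np.linalg.norm(W_out - W_approx, axis=1)
most_affected = int(np.argmax(row_err))
```

On a trained network, the analogue of `most_affected` is what identifies a digit class (such as the eights in the study) whose predictions collapse when a particular singular component is cut off.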
318

Universality and Individuality in Recurrent Networks extended to Biologically inspired networks

Joshi, Nishant January 2020 (has links)
Activities in the motor cortex are found to be dynamical in nature. Modeling these activities and comparing the models with neural recordings helps in understanding the underlying mechanisms by which these activities are generated. For this purpose, Recurrent Neural Networks, or RNNs, have emerged as an appropriate tool, yet a clear understanding of how the design choices associated with these networks affect the learned dynamics and internal representations remains elusive. A previous work exploring the dynamical properties of discrete-time RNN architectures (LSTM, UGRNN, GRU, and Vanilla) showed that the fixed-point topology and the linearized dynamics remain invariant across architectures when trained on a 3-bit flip-flop task; in contrast, the networks were shown to have unique representational geometries. The goal of this work is to understand whether these observations also hold for networks that are more biologically realistic in terms of neural activity. We therefore chose to analyze rate networks, which have continuous dynamics and biologically realistic connectivity constraints, and spiking neural networks, in which neurons communicate via discrete spikes as observed in the brain. We reproduce the aforementioned study for the discrete architectures and then show that the fixed-point topology and linearized dynamics remain invariant for the rate networks, but that the methods are insufficient for finding the fixed points of spiking networks. The representational geometries of the rate networks and spiking networks are found to differ from those of the discrete architectures but to be very similar to each other, although a small subset of the discrete architectures (LSTM) is observed to be close in representation to the rate networks. We show that although these different network architectures with varying degrees of biological realism have individual internal representations, the underlying dynamics while performing the task are universal.
We also observe that some discrete networks have close representational similarities with rate networks along with the dynamics. Hence, these discrete networks can be good candidates for reproducing and examining the dynamics of rate networks.
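Fixed-point analyses of the kind described above are commonly done by numerically minimizing the speed q(h) = ½‖F(h) − h‖² of the autonomous dynamics. Below is a minimal sketch for a vanilla rate-style update with illustrative random weights (scaled contractive so the fixed point is unique), not the trained networks from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(size=(n, n))
W *= 0.8 / np.linalg.norm(W, 2)      # contractive: operator norm below 1
b = 0.1 * rng.normal(size=n)

def F(h):
    """Autonomous RNN update (zero external input)."""
    return np.tanh(W @ h + b)

def find_fixed_point(h, lr=0.2, steps=5000):
    """Gradient descent on q(h) = 0.5 * ||F(h) - h||^2."""
    for _ in range(steps):
        r = F(h) - h
        J = (1.0 - F(h) ** 2)[:, None] * W   # Jacobian of F at h
        h = h - lr * (J - np.eye(n)).T @ r   # grad q = (J - I)^T r
    return h

h_star = find_fixed_point(rng.normal(size=n))
residual = np.linalg.norm(F(h_star) - h_star)
```

Since ‖J‖ < 1 everywhere here, (J − I) is never singular, so every stationary point of q is a genuine fixed point; on trained networks the same descent can also land on "slow points" with small but nonzero residual, which is why the residual is always checked.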
319

New Insights into the Spinal Recurrent Inhibitory Pathway Normally and After Motoneuron Regeneration

Obeidat, Ahmed Zayed 29 May 2013 (has links)
No description available.
320

Gene Network Inference and Expression Prediction Using Recurrent Neural Networks and Evolutionary Algorithms

Chan, Heather Y. 10 December 2010 (has links) (PDF)
We demonstrate the success of recurrent neural networks in gene network inference and expression prediction using a hybrid of particle swarm optimization and differential evolution to overcome the classic obstacle of local minima in training recurrent neural networks. We also provide an improved validation framework for the evaluation of genetic network modeling systems that will result in better generalization and long-term prediction capability. Success in the modeling of gene regulation and prediction of gene expression will lead to more rapid discovery and development of therapeutic medicine, earlier diagnosis and treatment of adverse conditions, and vast advancements in life science research.
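The training scheme above hybridizes particle swarm optimization with differential evolution; as a simplified illustration, plain differential evolution alone can fit a tiny two-gene recurrent model to synthetic expression data. All names, sizes, and data below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression data: 2 genes, generated by a known recurrent rule
T = 20
data = np.zeros((T, 2))
data[0] = [0.2, 0.8]
for t in range(1, T):
    x = data[t - 1]
    data[t] = 1.0 / (1.0 + np.exp(-np.array([2.0 * x[1] - 1.0, -2.0 * x[0] + 1.0])))

def simulate(w, x0, steps):
    """Discrete-time recurrent gene network: x' = sigmoid(W x + b)."""
    W, b = w[:4].reshape(2, 2), w[4:6]
    xs, x = [x0], x0
    for _ in range(steps - 1):
        x = 1.0 / (1.0 + np.exp(-(W @ x + b)))
        xs.append(x)
    return np.array(xs)

def fitness(w):
    """Mean squared error between simulated and observed expression."""
    return np.mean((simulate(w, data[0], T) - data) ** 2)

# Plain differential evolution (rand/1/bin) over the 6 network parameters;
# population-based search sidesteps local minima that trap gradient methods
npop, dim, Fc, CR = 30, 6, 0.7, 0.9
pop = rng.uniform(-3, 3, (npop, dim))
cost = np.array([fitness(p) for p in pop])
for gen in range(300):
    for i in range(npop):
        a, b_, c = pop[rng.choice(npop, 3, replace=False)]
        trial = np.where(rng.random(dim) < CR, a + Fc * (b_ - c), pop[i])
        f = fitness(trial)
        if f < cost[i]:
            pop[i], cost[i] = trial, f

best = pop[np.argmin(cost)]
```

The recovered weight matrix `best[:4]` can then be read as an inferred regulatory network (signs and magnitudes of gene-gene influences), while `simulate(best, ...)` gives the expression prediction.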
