411 |
Event Detection and Extraction from News Articles. Wang, Wei, 21 February 2018 (has links)
Event extraction is a type of information extraction (IE) that extracts specific knowledge about particular incidents from text. Nowadays the amount of available information (such as news, blogs, and social media) grows exponentially, so it becomes imperative to develop algorithms that automatically extract machine-readable information from large volumes of text data. In this dissertation, we focus on three problems in obtaining event-related information from news articles. (1) The first effort is to comprehensively analyze the performance and challenges of current large-scale event encoding systems. (2) The second problem involves event detection and the extraction of critical information from news articles. (3) Third, the effort concentrates on event encoding, which aims to extract event extents and arguments from text.
We start by investigating two large-scale event extraction systems (ICEWS and GDELT) in the political science domain. We design a set of experiments to evaluate the quality of the events extracted by the two target systems in terms of reliability and correctness. The results show that there are significant discrepancies between the outputs of the automated systems and a hand-coded system, and that the accuracy of both systems is far from satisfactory. These findings provide preliminary background and set the foundation for using advanced machine learning algorithms for event-related information extraction.
Inspired by the successful application of deep learning in Natural Language Processing (NLP), we propose a Multi-Instance Convolutional Neural Network (MI-CNN) model for event detection and critical-sentence extraction without sentence-level labels. To evaluate the model, we run a set of experiments on a real-world protest event dataset. The results show that our model outperforms strong baseline models and extracts meaningful key sentences without domain knowledge or manually designed features.
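The multi-instance idea behind MI-CNN can be illustrated with a short, hypothetical sketch: a small text CNN scores every sentence, and the document (bag) score is the maximum over sentence scores, so only document-level labels are needed during training. This is not the dissertation's code; the framework (PyTorch), vocabulary size and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel, padding=1)
        self.out = nn.Linear(n_filters, 1)

    def forward(self, tokens):                 # tokens: (n_sentences, seq_len)
        x = self.emb(tokens).transpose(1, 2)   # (n_sentences, emb_dim, seq_len)
        h = torch.relu(self.conv(x)).max(dim=2).values  # max over token positions
        return self.out(h).squeeze(-1)         # one logit per sentence

class MICNN(nn.Module):
    """Multi-instance aggregation: document logit = max over sentence logits."""
    def __init__(self):
        super().__init__()
        self.sentence_model = SentenceCNN()

    def forward(self, doc_tokens):
        sent_logits = self.sentence_model(doc_tokens)
        return sent_logits.max(), sent_logits  # document logit + per-sentence scores

model = MICNN()
doc = torch.randint(0, 10000, (12, 40))        # a document of 12 sentences, 40 tokens each
doc_logit, sent_scores = model(doc)
loss = nn.functional.binary_cross_entropy_with_logits(
    doc_logit, torch.tensor(1.0))              # trained with the document-level label only
loss.backward()
```

After training, ranking the per-sentence scores gives candidate key sentences, mirroring the key-sentence extraction step mentioned in the abstract.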
We also extend the MI-CNN model and propose an MIMTRNN model for event extraction with distant supervision, to overcome the lack of fine-grained labels and the small size of the training data. The proposed MIMTRNN model systematically integrates an RNN, Multi-Instance Learning, and Multi-Task Learning into a unified framework. The RNN module encodes into the representations of entity mentions the sequential information as well as the dependencies between event arguments, which are very useful in the event extraction task. The Multi-Instance Learning paradigm means the system does not require precise labels at the entity-mention level, which makes it well suited to working with distant supervision for event extraction. The Multi-Task Learning module in our approach is designed to alleviate the potential overfitting caused by the relatively small training set (a sketch of this combination follows the abstract). The results of experiments on two real-world datasets (Cyber-Attack and Civil Unrest) show that our model benefits from each component and significantly outperforms other baseline methods. / Ph. D. / Nowadays the amount of available information (such as news, blogs, and social media) grows exponentially, and the demand to make use of this massive online information in decision making is increasingly intense. Therefore, it is imperative to develop algorithms that automatically extract structured information from large volumes of unstructured text data. In this dissertation, we focus on three problems in obtaining event-related information from news articles. (1) The first effort is to comprehensively analyze the performance and challenges of current large-scale event encoding systems. (2) The second problem involves detecting the event and extracting key information about the event in the article. (3) Third, the effort concentrates on extracting the arguments of the event from the text. We found that there are significant discrepancies between the outputs of automated systems and a hand-coded system, and that the accuracy of current event extraction systems is far from satisfactory. These findings provide preliminary background and set the foundation for using advanced machine learning algorithms for event-related information extraction. Our experiments on two real-world event extraction tasks (Cyber-Attack and Civil Unrest) show the effectiveness of our deep learning approaches for detecting and extracting event information from unstructured text data.
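The MIMTRNN combination described in the technical abstract above can be sketched, hypothetically, as a shared RNN encoder over entity-mention contexts, a max over mentions for the bag-level prediction (multi-instance learning under distant supervision), and an auxiliary head sharing the encoder standing in for the multi-task component. The sizes, the single auxiliary task and the loss weighting are assumptions, not the dissertation's configuration.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab=10000, emb=100, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True)

    def forward(self, mentions):                # mentions: (n_mentions, seq_len)
        _, (h, _) = self.rnn(self.emb(mentions))
        return h[-1]                            # (n_mentions, hidden)

encoder = SharedEncoder()
main_head = nn.Linear(64, 1)                    # main task, e.g. event-argument detection
aux_head = nn.Linear(64, 1)                     # auxiliary task sharing the encoder

mentions = torch.randint(0, 10000, (8, 30))     # one bag of 8 mentions of the same entity
reps = encoder(mentions)
bag_logit = main_head(reps).squeeze(-1).max()   # multi-instance aggregation over mentions
aux_logit = aux_head(reps).squeeze(-1).max()

bce = nn.functional.binary_cross_entropy_with_logits
loss = bce(bag_logit, torch.tensor(1.0)) + 0.5 * bce(aux_logit, torch.tensor(0.0))
loss.backward()                                 # gradients flow back into the shared RNN
```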
|
412 |
Multimodal Deep Learning for Multi-Label Classification and Ranking Problems. Dubey, Abhishek, January 2015 (has links) (PDF)
In recent years, deep neural network models have been shown to outperform many state-of-the-art algorithms. The reason is that unsupervised pretraining of multi-layered deep neural networks has been shown to learn better features, which further improve many supervised tasks. These models not only automate the feature extraction process but also provide robust features for various machine learning tasks. However, unsupervised pretraining and feature extraction using multi-layered networks are restricted to the input features and do not extend to the output. The performance of many supervised learning algorithms (or models) depends on how well the output dependencies are handled by these algorithms [Dembczyński et al., 2012]. Adapting standard neural networks to handle these output dependencies for any specific type of problem has been an active area of research [Zhang and Zhou, 2006, Ribeiro et al., 2012].
On the other hand, inference on multimodal data is considered a difficult problem in machine learning, and recently 'deep multimodal neural networks' have shown significant results [Ngiam et al., 2011, Srivastava and Salakhutdinov, 2012]. Several problems, such as classification with complete or missing modality data and generating a missing modality, have been shown to work very well with these models. In this work, we consider three nontrivial supervised learning tasks: (i) multi-class classification (MCC), (ii) multi-label classification (MLC), and (iii) label ranking (LR), listed in order of increasing complexity of the output. While multi-class classification deals with predicting one class for every instance, multi-label classification deals with predicting more than one class for every instance, and label ranking deals with assigning a rank to each label for every instance. Most work in this field revolves around formulating new error functions that force the network to capture the output dependencies.
The aim of our work is to adapt neural networks to handle feature extraction (dependencies) for the output implicitly within the network structure, removing the need for hand-crafted error functions. We show that multimodal deep architectures can be adapted to these types of problems (or data) by considering the labels as one of the modalities. This also brings unsupervised pretraining to the output along with the input. We show that these models not only outperform standard deep neural networks, but also outperform standard adaptations of neural networks for the individual domains, under various metrics and over the several data sets we consider. We observe that the advantage of our models over other models grows as the complexity of the output (problem) increases.
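A minimal, hypothetical sketch of the 'labels as one of the modalities' idea: during training both the input features and the label vector are encoded into a shared representation, and at test time the labels are reconstructed from the features alone. The architecture, sizes and the simple joint-training scheme are illustrative assumptions, not the thesis's exact multimodal model or its unsupervised pretraining procedure.

```python
import torch
import torch.nn as nn

class LabelsAsModality(nn.Module):
    def __init__(self, n_features=50, n_labels=10, shared=32):
        super().__init__()
        self.enc_x = nn.Sequential(nn.Linear(n_features, shared), nn.ReLU())
        self.enc_y = nn.Sequential(nn.Linear(n_labels, shared), nn.ReLU())
        self.dec_y = nn.Linear(shared, n_labels)    # reconstruct the label modality

    def forward(self, x, y=None):
        h = self.enc_x(x) if y is None else self.enc_x(x) + self.enc_y(y)
        return self.dec_y(h)                        # label logits

model = LabelsAsModality()
x = torch.randn(16, 50)                             # input-feature modality
y = torch.randint(0, 2, (16, 10)).float()           # multi-label target modality
loss = nn.functional.binary_cross_entropy_with_logits(model(x, y), y)
loss.backward()                                     # training uses both modalities
pred = torch.sigmoid(model(x)) > 0.5                # inference uses features only
```

For label ranking, the same reconstructed label scores could be sorted instead of thresholded.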
|
413 |
Investigations of calorimeter clustering in ATLAS using machine learning. Niedermayer, Graeme, 11 January 2018 (has links)
The Large Hadron Collider (LHC) at CERN is designed to search for new physics by colliding protons with a center-of-mass energy of 13 TeV. The ATLAS detector is a multipurpose particle detector built to record these proton-proton collisions. In order to improve sensitivity to new physics at the LHC, luminosity increases are planned for 2018 and beyond. With this greater luminosity comes an increase in the number of simultaneous proton-proton collisions per bunch crossing (pile-up). This extra pile-up has adverse effects on the algorithms used to cluster the ATLAS detector's calorimeter cells. These adverse effects stem from overlapping energy deposits originating from distinct particles and could lead to difficulties in accurately reconstructing events. Machine learning algorithms provide a new tool with the potential to improve clustering performance. Recent developments in computer science have given rise to a new set of machine learning algorithms that, in many circumstances, outperform more conventional algorithms. One of these algorithms, the convolutional neural network, has shown impressive performance when identifying objects in 2D or 3D arrays. This thesis develops a convolutional neural network model for calorimeter cell clustering and compares it to the standard ATLAS clustering algorithm. / Graduate
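A hypothetical sketch of the kind of convolutional model the thesis describes, applied to a grid of calorimeter cell energies: every cell receives a score that can be thresholded into cluster candidates. The grid size, network depth and channel counts are illustrative assumptions, not the ATLAS configuration.

```python
import torch
import torch.nn as nn

cell_cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),            # one cluster-membership logit per cell
)

energies = torch.rand(1, 1, 64, 64)             # toy eta-phi grid of cell energies
cell_logits = cell_cnn(energies)
cluster_mask = torch.sigmoid(cell_logits) > 0.5 # candidate cluster cells
```

The predicted mask could then be compared against the cells selected by the standard ATLAS clustering algorithm, which is the kind of comparison the thesis performs.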
|
414 |
Využití hlubokého učení pro rozpoznání textu v obrazu grafického uživatelského rozhraní / Deep Learning for OCR in GUI. Hamerník, Pavel, January 2019 (has links)
Optical character recognition (OCR) has been a topic of interest for many years. It is defined as the process of digitizing a document image into a sequence of characters. Despite decades of intense research, building OCR systems with capabilities comparable to those of humans remains an open challenge. This work presents the design and implementation of such a system, which is capable of detecting texts in graphical user interfaces.
|
415 |
Využití neuronových sítí pro predikaci síťového provozu / Neural network utilization for network traffic predictions. Pavela, Radek, January 2009 (has links)
This master's thesis discusses statistical properties of a network traffic trace and addresses the possibility of prediction, with a focus on neural networks, specifically recurrent neural networks. The training data were downloaded from a freely accessible link on the internet; they are captured packets of LAN traffic from 2001. They are not the most up to date, but they can be used to achieve the objectives of this work. The input data needed to be processed into an acceptable form. A program was created in Visual Studio 2005 to aggregate the intensities of these data; aggregation over 100 ms intervals turned out to be the best. This produced the input vector, which was divided into training and testing parts according to the needs of network training. The various types of networks operate on the same input data, which makes the results more objective. In practical terms, two principles had to be verified: the principle of training and the principle of generalization. The first requires running the training and verifying it by means of the gradient and the mean squared error. The second applies unknown inputs to the neural network, and the response of the network to these inputs is monitored. The layer recurrent network (LRN) appeared to be the best model, so the solution was developed in this direction, followed by a search for a suitable variant of the recurrent network and its optimal configuration. The topology found is 10-10-1. Matlab 7.6 with the Neural Network Toolbox 6 extension was used. The results are presented in the form of graphs and a final evaluation. All successful models and network topologies are on the enclosed CD. However, the Neural Network Toolbox reported some problems when importing networks; the import functions were therefore practically not used in this work, since networks can be imported but most of them appear to be untrained. Unsuccessful network models are not presented in this thesis, because they would reduce its clarity.
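A rough sketch of the preprocessing and prediction pipeline described above, under stated assumptions: packet sizes with timestamps are aggregated into 100 ms intensity bins, split into training and testing parts, and a small network with ten hidden neurons predicts the next bin from a window of ten previous bins. The thesis used a Matlab layer recurrent network (LRN) with a 10-10-1 topology; the feedforward window model below is only a simplified Python stand-in.

```python
import numpy as np
import torch
import torch.nn as nn

def aggregate(timestamps, sizes, bin_s=0.1):
    """Sum packet sizes into fixed 100 ms bins (traffic intensity)."""
    edges = np.arange(0, timestamps.max() + bin_s, bin_s)
    intensity, _ = np.histogram(timestamps, bins=edges, weights=sizes)
    return intensity.astype(np.float32)

# toy packet trace: random timestamps (seconds) and packet sizes (bytes)
ts = np.sort(np.random.rand(5000) * 60.0)
sz = np.random.randint(40, 1500, size=5000)
series = aggregate(ts, sz)

window = 10
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
split = int(0.8 * len(X))                      # training part / testing part

model = nn.Sequential(nn.Linear(window, 10), nn.Tanh(), nn.Linear(10, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    pred = model(torch.from_numpy(X[:split])).squeeze(-1)
    loss = nn.functional.mse_loss(pred, torch.from_numpy(y[:split]))
    opt.zero_grad()
    loss.backward()
    opt.step()

test_mse = nn.functional.mse_loss(model(torch.from_numpy(X[split:])).squeeze(-1),
                                  torch.from_numpy(y[split:]))
```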
|
416 |
Síťový prvek s pokročilým řízením / Network Element with Advanced Control. Zedníček, Petr, January 2010 (has links)
The diploma thesis deals with finding and testing neural networks whose characteristics and parameters are suitable for the active management of a network element. It solves the optimization task of priority switching of data units from inputs to outputs. The work focuses largely on the use of Hopfield and Kohonen networks and their optimization. The result of this work is two models. The first, theoretical, model is implemented in Matlab, where the theoretical results of the individual neural networks are compared. The second is a realistic model of the active element designed in Simulink.
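The Hopfield-style formulation of priority switching can be illustrated with a hypothetical toy sketch: binary units x[i, j] mean "input i is switched to output j", an energy function penalizes rows and columns with more than one active unit and rewards high-priority connections, and asynchronous updates descend that energy. The penalty weights, the priority matrix and the 4x4 size are illustrative assumptions, not the thesis's Matlab or Simulink models.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                   # a 4x4 switching element
priority = rng.random((N, N))           # priorities of data units waiting at each input/output pair
A, B = 2.0, 1.0                         # constraint vs. priority weighting

def energy(x):
    """Hopfield-style energy: constraint penalties minus rewarded priorities."""
    row_pen = ((x.sum(axis=1) - 1) ** 2).sum()
    col_pen = ((x.sum(axis=0) - 1) ** 2).sum()
    return A * (row_pen + col_pen) / 2 - B * (priority * x).sum()

x = rng.integers(0, 2, size=(N, N)).astype(float)
for _ in range(50):                     # asynchronous energy descent
    for i, j in rng.permutation([(i, j) for i in range(N) for j in range(N)]):
        for v in (0.0, 1.0):            # keep whichever unit state lowers the energy
            trial = x.copy()
            trial[i, j] = v
            if energy(trial) < energy(x):
                x = trial

print(x)   # close to one connection per input and output, favouring high priorities
```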
|
417 |
UAV geolocalization in Swedish fields and forests using Deep Learning / Geolokalisering av UAVs över svenska fält och skogar med hjälp av djupinlärning. Rohlén, Andreas, January 2021 (has links)
The ability of unmanned autonomous aerial vehicles (UAVs) to localize themselves in an environment is fundamental for them to be able to function, even if they do not have access to a global positioning system. Recently, with the success of deep learning in vision-based tasks, several methods have been proposed for absolute geolocalization using vision-based deep learning with satellite and UAV images. Most of these are only tested in urban environments, which raises the question: how well do they work in non-urban areas like forests and fields? One drawback of deep learning is that models are often regarded as black boxes, as it is hard to know why the models make the predictions they do, i.e. what information is important and is used for the prediction. To address this, several neural network interpretation methods have been developed; these methods provide explanations so that we may understand the models better. This thesis investigates the localization accuracy of one geolocalization method in both urban and non-urban environments and applies neural network interpretation in order to see whether it can explain the potential difference in the method's localization accuracy in these environments. The results show that the method performs best in urban environments, with a mean absolute horizontal error of 38.30 m and a mean absolute vertical error of 16.77 m, while it performed significantly worse in non-urban environments, with a mean absolute horizontal error of 68.11 m and a mean absolute vertical error of 22.83 m. Further, the results show that if the satellite images and the images from the unmanned aerial vehicle are collected during different seasons of the year, the localization accuracy is even worse, resulting in a mean absolute horizontal error of 86.91 m and a mean absolute vertical error of 23.05 m. The neural network interpretation did not help explain why the method performs worse in non-urban environments and is not suitable for this kind of problem. / The ability of unmanned autonomous aerial vehicles (UAVs) to localize themselves is fundamental for them to function, even if they do not have access to global positioning systems. With the recent success of deep learning applied to visual problems, methods have emerged for absolute geolocalization with visual deep learning using satellite and UAV images. Most of these methods have only been tested in urban environments, which leads to the question: how well do these methods work in non-urban areas such as fields and forests? One of the drawbacks of deep learning is that these models are often seen as black boxes, since it is hard to know why the models make the predictions they do, i.e. which information is important and used for the prediction. To solve this, several methods for interpreting neural networks have been developed. These methods provide explanations so that we can understand these models better. This thesis investigates the localization precision of a geolocalization method in both urban and non-urban environments and also applies an interpretation method for neural networks to see whether it can explain the potential difference in the precision of the method in these different environments.
The results show that the method works best in urban environments, where it obtains a mean absolute horizontal localization error of 38.30 m and a mean absolute vertical error of 16.77 m, while it performed significantly worse in non-urban environments, where it obtained a mean absolute horizontal localization error of 68.11 m and a mean absolute vertical error of 22.83 m. Furthermore, the results show that if the satellite images and the UAV images are taken in different seasons, the localization precision becomes even worse, with the method obtaining a mean absolute horizontal localization error of 86.91 m and a mean absolute vertical error of 23.05 m. The interpretation method did not help in explaining why the method performs worse in non-urban environments and is not suitable for this kind of problem.
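For reference, a small sketch (not code from the thesis) of how the reported metrics can be computed, under the assumption that the mean absolute horizontal error is the mean planar (east-north) distance between predicted and true UAV positions and that the vertical error is the mean absolute altitude difference.

```python
import numpy as np

def mean_abs_errors(pred_xyz, true_xyz):
    """Mean absolute horizontal (planar) and vertical errors in metres."""
    pred_xyz, true_xyz = np.asarray(pred_xyz), np.asarray(true_xyz)
    horizontal = np.linalg.norm(pred_xyz[:, :2] - true_xyz[:, :2], axis=1)
    vertical = np.abs(pred_xyz[:, 2] - true_xyz[:, 2])
    return horizontal.mean(), vertical.mean()

# toy example: predicted vs. true positions as (east, north, up) in metres
pred = [[10.0, 5.0, 100.0], [60.0, -20.0, 95.0]]
true = [[12.0, 7.0, 110.0], [55.0, -18.0, 90.0]]
print(mean_abs_errors(pred, true))
```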
|
418 |
Investigation of hierarchical deep neural network structure for facial expression recognition. Motembe, Dodi, 01 1900 (has links)
Facial expression recognition (FER) is still a challenging problem, and machines struggle to comprehend effectively the dynamic shifts in the facial expressions of human emotions. The existing systems that have proven effective consist of deeper network structures that need powerful and expensive hardware; the deeper the network, the longer the training and the testing, and many systems use expensive GPUs to speed up the process. To remedy these challenges while maintaining the main goal of improving recognition accuracy, we create a generic hierarchical structure with variable settings. This generic structure has a hierarchy of three convolutional blocks, two dropout blocks and one fully connected block (a sketch of this hierarchy follows the abstract). From this generic structure we derived four different network structures to be investigated according to their performance, and from each of these cases we again derived six network structures by varying the parameters under analysis: the filter sizes of the convolutional maps and of the max-pooling, as well as the number of convolutional maps. In total, we have 24 network structures to investigate, six per case. After many repeated experiments, case 1a emerged as the top performer of its group, and case 2a, case 3c and case 4c outperformed the others in their respective groups. A comparison of the four group winners indicates that case 2a is the optimal structure with optimal parameters; the case 2a network structure outperformed the other group winners. The criteria for choosing the best network structure were the minimum, average and maximum accuracy after 15 repeated training runs and analyses of the results. All 24 proposed network structures were tested using two of the most widely used FER datasets, CK+ and JAFFE. After repeated simulations, the results demonstrate that our inexpensive optimal network architecture achieved 98.11 % accuracy on the CK+ dataset; on the JAFFE dataset it achieved 84.38 %, using just a standard CPU and simpler procedures. We also compared the four group winners with the performance of existing FER models recorded recently in two studies that used the same two datasets. Three of our four group winners (case 1a, case 2a and case 4c) recorded only 1.22 % less than the accuracy of the top-performing model on the CK+ dataset, and two of our network structures, case 2a and case 3c, came in third, beating other models on the JAFFE dataset. / Electrical and Mining Engineering
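The generic hierarchy described above (three convolutional blocks, two dropout blocks and one fully connected block, with the number of convolutional maps and the filter and max-pooling sizes as the variable parameters) can be sketched roughly as follows. The concrete values, the 48 x 48 input resolution, the seven output classes and the use of PyTorch are illustrative assumptions, not the settings of cases 1a to 4c.

```python
import torch
import torch.nn as nn

def fer_network(maps=(32, 64, 128), conv_k=3, pool_k=2, n_classes=7, in_size=48):
    """Generic hierarchy: 3 conv blocks, 2 dropout blocks, 1 fully connected block."""
    feat = in_size // pool_k ** 3   # spatial size after three poolings (odd conv_k keeps size)
    return nn.Sequential(
        # convolutional block 1
        nn.Conv2d(1, maps[0], conv_k, padding=conv_k // 2), nn.ReLU(), nn.MaxPool2d(pool_k),
        # convolutional block 2 + dropout block 1
        nn.Conv2d(maps[0], maps[1], conv_k, padding=conv_k // 2), nn.ReLU(), nn.MaxPool2d(pool_k),
        nn.Dropout(0.25),
        # convolutional block 3 + dropout block 2
        nn.Conv2d(maps[1], maps[2], conv_k, padding=conv_k // 2), nn.ReLU(), nn.MaxPool2d(pool_k),
        nn.Dropout(0.25),
        # fully connected block
        nn.Flatten(), nn.Linear(maps[2] * feat * feat, n_classes),
    )

model = fer_network()                             # one of the 24 variants would fix maps, conv_k and pool_k
logits = model(torch.randn(8, 1, 48, 48))         # batch of 8 grayscale face images
```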
|
419 |
Refinement of Raman spectra from extreme background and noise interferences: Cancer diagnostics using Raman spectroscopy. Gebrekidan, Medhanie Tesfay, 01 March 2022 (has links)
Raman spectroscopy is an optical measurement technique capable of providing spectroscopic information that is molecule-specific and unique to the properties of the species under investigation. It is an indispensable analytical tool that finds application in various fields, such as medicine or the in situ monitoring of chemical processes. Owing to its properties, such as its high specificity and the possibility of tracer-free measurement, Raman spectroscopy has strongly influenced tumor diagnostics. Because Raman spectra are extremely strongly affected by background signals, isolating and interpreting Raman spectra is a major challenge.
Within this work, various spectra-processing approaches were developed that are needed to extract Raman spectra from noisy raw spectra heavily affected by background signals. In particular, these approaches comprise a noise-reduction method based on vector casting and methods based on deep neural networks for removing noise and background signals. Several neural networks were trained on simulated spectra and evaluated on experimentally measured spectra. The approaches proposed in this work were compared with alternative state-of-the-art methods using different signal-to-noise ratios, standard deviations, and the structural similarity index. The approaches developed here show good results and are superior to previously known methods, above all for Raman spectra with a low signal-to-noise ratio and an extremely strong fluorescence background. In addition, the methods based on deep neural networks require no human intervention whatsoever.
The motivation behind this work is to develop Raman spectroscopy, above all shifted-excitation Raman difference spectroscopy (SERDS), into an even better instrument for process analytics and tumor diagnostics. Integrating the above-mentioned spectra-processing approaches into SERDS, in combination with machine learning methods, makes it possible to distinguish physiological mucosa, non-malignant lesions, and oral squamous cell carcinomas with an accuracy superior to previous methods.
The specific features in the processed Raman spectra can be assigned to different chemical compositions of the respective tissues. The transferability to a similar approach for the detection of breast tumors was examined.
The purified Raman spectra of normal breast tissue, fibroadenoma, and invasive breast carcinoma could be distinguished with the help of the spectral features of proteins, lipids, and nucleic acids. These findings reveal the potential of SERDS in combination with machine learning approaches as a universal tool for tumor diagnosis.
Raman spectroscopy is an optical measurement technique able to provide spectroscopic information that is molecule-specific and unique to the nature of the specimen under investigation. It is an invaluable analytical tool that finds application in several fields, such as medicine and the in situ monitoring of chemical processes. Due to its high specificity and label-free operation, Raman spectroscopy has greatly impacted cancer diagnostics. However, retrieving and interpreting the Raman spectrum that contains the molecular information is challenging because of extreme background interference.
I have developed various spectra-processing approaches required to purify Raman spectra from noisy and heavily background-interfered raw spectra. In detail, these are a new noise-reduction method based on vector casting and new deep neural networks for the efficient removal of noise and background. Several neural network models were trained on simulated spectra and then tested on experimental spectra. The approaches proposed here were compared with state-of-the-art techniques using different signal-to-noise ratios, the standard deviation, and the structural similarity index metric. The methods presented here perform well and are superior to what has been reported before, especially at small signal-to-noise ratios and for raw Raman spectra with extreme fluorescence interference. Furthermore, the deep neural network-based methods do not rely on any human intervention.
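A hypothetical sketch of the training idea described above: a small one-dimensional convolutional network is trained on simulated spectra (narrow Raman-like peaks plus a broad fluorescence-like background and noise) to return the pure Raman part. The simulation, the architecture and the hyperparameters are illustrative assumptions, not those used in the thesis.

```python
import torch
import torch.nn as nn

def simulate(n, length=500):
    """Simulated raw spectra: Gaussian peaks + broad background + noise, with clean targets."""
    x = torch.linspace(0, 1, length)
    clean = torch.zeros(n, length)
    for i in range(n):                          # a few Gaussian Raman-like peaks
        for c, w in zip(torch.rand(3), 0.005 + 0.01 * torch.rand(3)):
            clean[i] += torch.exp(-(x - c) ** 2 / (2 * w ** 2))
    background = 2.0 * torch.exp(-x) * torch.rand(n, 1)     # broad fluorescence-like baseline
    noisy = clean + background + 0.05 * torch.randn(n, length)
    return noisy.unsqueeze(1), clean.unsqueeze(1)

denoiser = nn.Sequential(
    nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 16, 9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 1, 9, padding=4),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for _ in range(200):
    noisy, clean = simulate(32)
    loss = nn.functional.mse_loss(denoiser(noisy), clean)   # learn to return the pure Raman part
    opt.zero_grad()
    loss.backward()
    opt.step()
```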
The motivation behind this study is to make Raman spectroscopy, especially shifted-excitation Raman difference spectroscopy (SERDS), an even better tool for process analytics and cancer diagnostics. The integration of the above-mentioned spectra-processing approaches into SERDS, in combination with machine learning tools, enabled the differentiation between physiological mucosa, non-malignant lesions, and oral squamous cell carcinomas with high accuracy, above the state of the art. The distinguishable features obtained in the purified Raman spectra are assignable to different chemical compositions of the respective tissues. The feasibility of a similar approach for breast tumors was also investigated. The purified Raman spectra of normal breast tissue, fibroadenoma, and invasive carcinoma were discriminable with respect to the spectral features of proteins, lipids, and nucleic acids. These findings suggest the potential of SERDS combined with machine learning techniques as a universal tool for cancer diagnostics.
Versicherung
Abstract
Zusammenfassung der Ergebnisse der Dissertation
Table of Contents
Abbreviations and symbols
1 Introduction
2 State of the art of the purification of Raman spectra
2.1 Experimental methods for the enhancement of the signal-to-background ratio and the signal-to-noise ratio
2.2 Mathematical methods for the extraction of pure Raman spectra from raw spectra
2.3 Raman based cancer diagnostics
2.4 Neural networks for the evaluation of Raman spectra
2.5 Objective
3 Application relevant fundaments
3.1 Basics of Raman spectroscopy
3.2 Simulation of raw Raman spectra
3.3 Shifted-excitation Raman difference Spectroscopy
3.4 Raman experimental setup
3.5 Mathematical method for Raman spectra refinement
3.6 Deep neural networks
4 Summary of the published results
4.1 A shifted-excitation Raman difference spectroscopy evaluation strategy for the efficient isolation of Raman spectra from extreme fluorescence interference
4.2 Vector casting for noise reduction
4.3 Refinement of spectra using a deep neural network; fully automated removal of noise and background
4.4 Breast Tumor Analysis using Shifted Excitation Raman difference Spectroscopy
4.5 Optical diagnosis of clinically apparent lesions of oral cavity by label free Raman spectroscopy
Conclusion
|
420 |
Attractor Neural Network modelling of the Lifespan Retrieval Curve. Pereira, Patrícia, January 2020 (has links)
Human capability to recall episodic memories depends on how much time has passed since the memory was encoded. This dependency is described by a memory retrieval curve that reflects an interesting phenomenon referred to as the reminiscence bump: a tendency for older people to recall more memories formed during their young adulthood than in other periods of life. This phenomenon can be modelled with an attractor neural network, for example the firing-rate Bayesian Confidence Propagation Neural Network (BCPNN) with incremental learning. In this work, the mechanisms underlying the reminiscence bump in the neural network model are systematically studied. The effects of synaptic plasticity, network architecture and other relevant parameters on the characteristics of the reminiscence bump are investigated. The most influential factors turn out to be the magnitude of dopamine-linked plasticity at birth and the time constant of the exponential plasticity decay with age, which together set the position of the bump. The other parameters mainly influence the general amplitude of the lifespan retrieval curve. Furthermore, the recency phenomenon, i.e. the tendency to remember the most recent memories, can also be parameterized by adding a constant to the exponentially decaying plasticity function, representing the decrease in the level of dopamine neurotransmitters. / The human ability to recall episodic memories depends on how much time has passed since the memories were encoded. This dependence is described by a so-called forgetting curve, which exhibits an interesting phenomenon called the reminiscence bump: a tendency of older people to recall more memories from their youth and early adult years than from other periods of life. This phenomenon can be modelled with an attractor neural network, for example a non-spiking Bayesian Confidence Propagation Neural Network (BCPNN) with incremental learning. In this work, the mechanisms behind the reminiscence bump are studied systematically with the help of this neural network model. For example, the importance of synaptic plasticity, network architecture and other relevant parameters for the emergence and character of this phenomenon is examined. The most influential factors for the position of the bump were found to be the initial dopamine-dependent plasticity at birth and the time constant of the decay of plasticity with age. The other parameters mainly affected the general amplitude of the lifespan retrieval curve. In addition, the so-called recency effect, i.e. the tendency to best remember things that happened recently, can also be parameterized by a constant added to the otherwise exponentially decaying plasticity, which may represent the density of dopamine receptors.
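A minimal sketch (not the thesis model) of the plasticity parameterization discussed above: dopamine-linked plasticity is largest at birth, decays exponentially with age with some time constant, and a small constant term is added to account for the recency effect. The numerical values below are illustrative assumptions only.

```python
import numpy as np

def plasticity(age, g0=1.0, tau=15.0, c=0.05):
    """Learning rate used when encoding a memory at a given age: g0 * exp(-age/tau) + c."""
    return g0 * np.exp(-age / tau) + c

ages = np.arange(0, 81)
kappa = plasticity(ages)         # would modulate the incremental BCPNN weight updates at each age
print(kappa[[0, 20, 50, 80]])    # high in youth, levelling off at the constant term c
```

In the model, the magnitude at birth and the decay time constant set where the bump appears, while the added constant captures the recency effect.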
|