341

Automatic Detection of Brain Functional Disorder Using Imaging Data

Dey, Soumyabrata 01 January 2014 (has links)
Attention Deficit Hyperactivity Disorder (ADHD) has recently been receiving a lot of attention, mainly for two reasons. First, it is one of the most common childhood behavioral disorders: around 5-10% of children worldwide are diagnosed with ADHD. Second, the root cause of the disorder is still unknown, and therefore no biological measure exists to diagnose it. Instead, doctors must diagnose it based on clinical symptoms such as inattention, impulsivity, and hyperactivity, all of which are subjective. Functional Magnetic Resonance Imaging (fMRI) has become a popular tool for understanding brain function, for example identifying the brain regions responsible for different cognitive tasks or analyzing statistical differences in brain functioning between diseased and control subjects, and ADHD is also being studied using fMRI data. In this dissertation we aim to solve the problem of automatically diagnosing ADHD subjects from their resting-state fMRI (rs-fMRI) data. As a core step of our approach, we model brain function as a connectivity network, which is expected to capture how synchronous different brain regions are in terms of their functional activities. The network is constructed by representing brain regions as nodes, where any two nodes are connected by an edge if the correlation of their activity patterns exceeds some threshold. The brain regions represented as nodes can be selected at different granularities, e.g. single voxels or clusters of functionally homogeneous voxels. The topological differences between the networks of the ADHD and control groups are then exploited for classification.

We first developed a simple method employing the Bag-of-Words (BoW) framework for classifying ADHD subjects. We represent each node in the network by a 4-D feature vector: the node degree and the 3-D location. The 4-D vectors of all network nodes in the training data are then grouped into a number of clusters using K-means, where each such cluster is termed a word. Finally, each subject is represented by a histogram (bag) of such words, and a Support Vector Machine (SVM) classifier is used to detect ADHD subjects from this histogram representation. The method achieves 64% classification accuracy.

This simple approach has several shortcomings. First, spatial information is lost while constructing the histogram, because it only counts the occurrences of words and ignores their spatial positions. Second, features from the whole brain are used for classification, but some brain regions may not contain any useful information and may only increase the feature dimension and the noise of the system. Third, we used only one network feature, the node degree, which measures the connectivity of a node, while other, more complex network features may be useful for the problem. To address these shortcomings, we hypothesize that only a subset of the network nodes carries important information for ADHD classification, and we developed a novel algorithm to identify those nodes. The algorithm repeatedly generates random subsets of nodes, each time extracting the features of a subset to compute the feature vector and perform classification. The subsets are then ranked by classification accuracy, and the occurrences of each node in the top-ranked subsets are counted; the algorithm selects the most frequently occurring nodes for the final classification. Furthermore, along with the node degree, we employ three more node features: network cycles, the varying-distance degree, and the edge weight sum. We concatenate the features of the selected nodes in a fixed order to preserve the relative spatial information. Experimental validation suggests that using features from the nodes selected by our algorithm indeed helps to improve the classification accuracy. Our findings are also in concordance with the existing literature, as the brain regions identified by our algorithm have been independently reported by many other ADHD studies. We achieved a classification accuracy of 69.59% using this approach. However, this method represents each voxel as a node of the network, which makes the number of nodes several thousand, so the network construction step becomes computationally very expensive. Another limitation is that the network features, computed for each node individually, capture only local structure while ignoring the global structure of the network.

Next, to capture the global structure of the networks, we use the Multi-Dimensional Scaling (MDS) technique to project all subjects from an unknown network space to a low-dimensional space based on their inter-network distance measures. To compute the distance between two networks, we represent each node by a set of attributes such as the node degree, the average power, the physical location, the neighbor node degrees, and the average powers of the neighbor nodes. The nodes of the two networks are then mapped so that, over all pairs of matched nodes, the sum of the attribute distances, which defines the inter-network distance, is minimized. To reduce the network computation cost, we enforce that the maximum relevant information is preserved with minimum redundancy: the network nodes are constructed from clusters of highly active voxels, where the activity level of a voxel is measured by the average power of its fMRI time series. The method shows promise, achieving an impressive classification accuracy of 73.55% on the ADHD-200 data set. Our results also reveal that detection rates are higher when classification is performed separately on the male and female groups of subjects.

So far, we had used only fMRI data for the ADHD diagnosis problem. Finally, we investigated the following questions. Do structural brain images contain useful information for the ADHD diagnosis problem? Can the classification accuracy of the automatic diagnosis system be improved by combining structural and functional brain data? To that end, we developed a new method that combines the information of structural and functional brain images in a late-fusion framework. For the structural data, we feed the gray matter (GM) brain images to a Convolutional Neural Network (CNN); the CNN outputs one feature vector per subject, which is used to train an SVM classifier. For the functional data, we compute the average power of each voxel from its fMRI time series; the average power measures the activity level of the voxel. We found significant differences in the voxel power distribution patterns of the ADHD and control groups, and the Local Binary Pattern (LBP) texture feature is applied to the voxel power map to capture these differences. We achieved 74.23% accuracy using GM features, 77.30% using LBP features, and 79.14% using the combined information.

In summary, this dissertation demonstrates that structural and functional brain imaging data are useful for the automatic detection of ADHD subjects, as we achieve impressive classification accuracies on the ADHD-200 data set. Our study also helps identify the brain regions that are useful for ADHD classification; these findings can aid understanding of the pathophysiology of the disorder. Finally, we expect that our approaches will contribute towards the development of a biological measure for diagnosing ADHD.
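To make the Bag-of-Words step concrete, the following is a minimal sketch of that pipeline, assuming node degrees and 3-D node coordinates have already been extracted from each subject's connectivity network; the word count, SVM settings, and data layout are illustrative assumptions rather than the dissertation's actual configuration.

```python
# Hypothetical illustration of the BoW pipeline: 4-D node features
# (degree + 3-D location) -> K-means "words" -> per-subject histograms -> SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def node_features(network):
    """Stack each node's degree and (x, y, z) location into a 4-D vector."""
    return np.column_stack([network["degree"], network["xyz"]])  # (n_nodes, 4)

def fit_codebook(train_networks, n_words=50):
    """Cluster all training nodes' 4-D vectors; each cluster is a 'word'."""
    all_nodes = np.vstack([node_features(net) for net in train_networks])
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_nodes)

def bow_histogram(network, codebook):
    """Represent one subject as a normalized histogram (bag) of words."""
    words = codebook.predict(node_features(network))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

def train_adhd_classifier(train_networks, labels, n_words=50):
    codebook = fit_codebook(train_networks, n_words)
    X = np.array([bow_histogram(net, codebook) for net in train_networks])
    clf = SVC(kernel="rbf", C=1.0).fit(X, labels)  # labels: 1 = ADHD, 0 = control
    return codebook, clf
```

A held-out subject would then be classified by building its histogram with the fitted codebook and calling clf.predict on it.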
342

OBJECT DETECTION USING VISION TRANSFORMED EFFICIENTDET

Shreyanil Kar (16285265) 30 August 2023 (has links)
This research presents a novel approach for object detection by integrating Vision Transformers (ViT) into the EfficientDet architecture. Computer vision, a branch of artificial intelligence, focuses on the interpretation and analysis of visual data. Recent advancements in deep learning, particularly convolutional neural networks (CNNs), have significantly improved the accuracy and efficiency of computer vision systems. Object detection, a widely studied application within computer vision, involves the identification and localization of objects in images.

The ViT backbone, renowned for its success in image classification and natural language processing tasks, employs self-attention mechanisms to capture global dependencies in input images. However, ViT’s capability to capture fine-grained details and context information is limited. To address this limitation, the integration of ViT into the EfficientDet architecture is proposed. EfficientDet is recognized for its efficiency and accuracy in object detection. By combining the strengths of ViT and EfficientDet, the proposed integration enhances the network’s ability to capture fine-grained details and context information. It leverages ViT’s global dependency modeling alongside EfficientDet’s efficient object detection framework, resulting in highly accurate and efficient performance. Noteworthy object detection frameworks used in industry, such as RetinaNet, EfficientNet, and EfficientDet, primarily employ convolution.

Experimental evaluations were conducted using the PASCAL VOC 2007 and 2012 datasets, widely acknowledged benchmarks for object detection. The integrated ViT-EfficientDet model achieved an impressive mean Average Precision (mAP) score of 86.27% when tested on the PASCAL VOC 2007 dataset, demonstrating its superior accuracy. These results underscore the potential of the proposed integration for real-world applications.

In conclusion, the research introduces a novel integration of Vision Transformers into the EfficientDet architecture, yielding significant improvements in object detection performance. By combining ViT’s ability to capture global dependencies with EfficientDet’s efficiency and accuracy, the proposed approach offers enhanced object detection capabilities. Future research directions may explore additional datasets and evaluate the performance of the proposed framework across various computer vision tasks.
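As a rough illustration of the integration idea, the sketch below shows how ViT-style patch tokens produced by a transformer encoder can be reshaped back into a spatial feature map that a detection neck such as EfficientDet's BiFPN could consume. The patch size, embedding dimension, and depth are assumed for illustration and do not reflect the thesis's actual architecture.

```python
# Hypothetical sketch: a ViT-style backbone whose patch tokens are reshaped
# into a 2-D feature map suitable for a detection neck (e.g. a BiFPN).
import torch
import torch.nn as nn

class ViTBackboneForDetection(nn.Module):
    def __init__(self, img_size=512, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.grid = img_size // patch                        # tokens per side
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos_embed = nn.Parameter(torch.zeros(1, self.grid * self.grid, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                                    # x: (B, 3, H, W)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        tokens = self.encoder(tokens + self.pos_embed)            # global self-attention
        b, n, d = tokens.shape
        # Reshape tokens back into a spatial map for the detection neck.
        return tokens.transpose(1, 2).reshape(b, d, self.grid, self.grid)

feat = ViTBackboneForDetection()(torch.randn(1, 3, 512, 512))
print(feat.shape)  # torch.Size([1, 256, 32, 32]) -> fed to BiFPN / box heads
```

EfficientDet-style necks expect multi-scale feature maps, so a full integration would tap several such maps at different resolutions; this sketch only shows the single-scale token-to-map step.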
343

Mesurer la masse de trous noirs supermassifs à l’aide de l’apprentissage automatique

Chemaly, David 07 1900 (has links)
Des percées récentes ont été faites dans l’étude des trous noirs supermassifs (SMBH), grâce en grande partie à l’équipe du télescope de l’horizon des évènements (EHT). Cependant, déterminer la masse de ces entités colossales à des décalages vers le rouge élevés reste un défi de taille pour les astronomes. Il existe diverses méthodes directes et indirectes pour mesurer la masse de SMBHs. La méthode directe la plus précise consiste à résoudre la cinématique du gaz moléculaire, un traceur froid, dans la sphère d’influence (SOI) du SMBH. La SOI est définie comme la région où le potentiel gravitationnel du SMBH domine sur celui de la galaxie hôte. Par contre, puisque la masse d’un SMBH est négligeable face à la masse d’une galaxie, la SOI est, d’un point de vue astronomique, très petite, typiquement de quelques dizaines de parsecs. Par conséquent, il faut une très haute résolution spatiale pour étudier la SOI d’un SMBH et pouvoir adéquatement mesurer sa masse. C’est cette nécessité d’une haute résolution spatiale qui limite la mesure de masse de SMBHs à de plus grandes distances. Pour briser cette barrière, il nous faut donc trouver une manière d’améliorer la résolution spatiale d’objets observés à un plus haut décalage vers le rouge. Le phénomène des lentilles gravitationnelles fortes survient lorsqu’une source lumineuse en arrière-plan se trouve alignée avec un objet massif en avant-plan, le long de la ligne de visée d’un observateur. Cette disposition a pour conséquence de distordre l’image observée de la source en arrière-plan. Puisque cette distorsion est inconnue et non-linéaire, l’analyse de la source devient nettement plus complexe. Cependant, ce phénomène a également pour effet d’étirer, d’agrandir et d’amplifier l’image de la source, permettant ainsi de reconstituer la source avec une résolution spatiale considérablement améliorée, compte tenu de sa distance initiale par rapport à l’observateur. L’objectif de ce projet consiste à développer une chaîne de simulations visant à étudier la faisabilité de la mesure de la masse d’un trou noir supermassif (SMBH) par cinématique du gaz moléculaire à un décalage vers le rouge plus élevé, en utilisant l’apprentissage automatique pour tirer parti du grossissement généré par la distorsion d’une forte lentille gravitationnelle. Pour ce faire, nous générons de manière réaliste des observations du gaz moléculaire obtenues par le Grand Réseau d’Antennes Millimétrique/Submillimétrique de l’Atacama (ALMA). Ces données sont produites à partir de la suite de simulations hydrodynamiques Rétroaction dans des Environnements Réalistes (FIRE). Dans chaque simulation, l’effet cinématique du SMBH est intégré, en supposant le gaz moléculaire virialisé. Ensuite, le flux d’émission du gaz moléculaire est calculé en fonction de sa vitesse, température, densité, fraction de H2, décalage vers le rouge et taille dans le ciel. Le cube ALMA est généré en tenant compte de la résolution spatiale et spectrale, qui dépendent du nombre d’antennes, de leur configuration et du temps d’exposition. Finalement, l’effet de la forte lentille gravitationnelle est introduit par la rétro-propagation du faisceau lumineux en fonction du profil de masse de l’ellipsoïde isotherme singulière (SIE). L’exploitation de ces données ALMA simulées est testée dans le cadre d’un problème de régression directe. Nous entraînons un réseau de neurones à convolution (CNN) à apprendre à prédire la masse d’un SMBH à partir des données simulées, sans prendre en compte l’effet de la lentille. 
Le réseau prédit la masse du SMBH ainsi que son incertitude, en supposant une distribution a posteriori gaussienne. Les résultats sont convaincants : plus la masse du SMBH est grande, plus la prédiction du réseau est précise et exacte. Tout comme avec les méthodes conventionnelles, le réseau est uniquement capable de prédire la masse du SMBH tant que la résolution spatiale des données permet de résoudre la SOI. De plus, les cartes de saillance du réseau confirment que celui-ci utilise l’information contenue dans la SOI pour prédire la masse du SMBH. Dans les travaux à venir, l’effet des lentilles gravitationnelles fortes sera introduit dans les données pour évaluer s’il devient possible de mesurer la masse de ces mêmes SMBHs, mais à un décalage vers le rouge plus élevé. / Recent breakthroughs have been made in the study of supermassive black holes (SMBHs), thanks largely to the Event Horizon Telescope (EHT) team. However, determining the mass of these colossal entities at high redshifts remains a major challenge for astronomers. There are various direct and indirect methods for measuring the mass of SMBHs. The most accurate direct method involves resolving the kinematics of the molecular gas, a cold tracer, in the SMBH’s sphere of influence (SOI). The SOI is defined as the region where the gravitational potential of the SMBH dominates that of the host galaxy. However, since the mass of a SMBH is negligible compared to the mass of a galaxy, the SOI is, from an astronomical point of view, very small, typically a few tens of parsecs. As a result, very high spatial resolution is required to study the SOI of a SMBH and adequately measure its mass. It is this need for high spatial resolution that limits mass measurements of SMBHs at larger distances. To break this barrier, we need to find a way to improve the spatial resolution of objects observed at higher redshifts. The phenomenon of strong gravitational lensing occurs when a light source in the background is aligned with a massive object in the foreground, along an observer’s line of sight. This arrangement distorts the observed image of the background source. Since this distortion is unknown and non-linear, analysis of the source becomes considerably more complex. However, this phenomenon also has the effect of stretching, enlarging and amplifying the image of the source, enabling the source to be reconstructed with considerably improved spatial resolution, given its initial distance from the observer. The aim of this project is to develop a chain of simulations to study the feasibility of measuring the mass of a supermassive black hole (SMBH) by kinematics of molecular gas at higher redshift, using machine learning to take advantage of the magnification generated by the distortion of a strong gravitational lens. To this end, we realistically generate observations of molecular gas obtained by the Atacama Large Millimeter/Submillimeter Antenna Array (ALMA). These data are generated from the Feedback in Realistic Environments (FIRE) suite of hydrodynamic simulations. In each simulation, the kinematic effect of the SMBH is integrated, assuming virialized molecular gas. Next, the emission flux of the molecular gas is calculated as a function of its velocity, temperature, density, H2 fraction, redshift and sky size. The ALMA cube is generated taking into account spatial and spectral resolution, which depend on the number of antennas, their configuration and exposure time. 
Finally, the effect of strong gravitational lensing is introduced by back-propagating the light beam according to the mass profile of the singular isothermal ellipsoid (SIE). The exploitation of these simulated ALMA data is tested in a direct regression problem. We train a convolutional neural network (CNN) to predict the mass of an SMBH from the simulated data, without taking into account the effect of the lens. The network predicts the mass of the SMBH as well as its uncertainty, assuming a Gaussian posterior distribution. The results are convincing: the greater the mass of the SMBH, the more precise and accurate the network’s prediction. As with conventional methods, the network is only able to predict the mass of the SMBH as long as the spatial resolution of the data allows the SOI to be resolved. Furthermore, the network’s saliency maps confirm that it uses the information contained in the SOI to predict the mass of the SMBH. In future work, the effect of strong gravitational lensing will be introduced into the data to assess whether it becomes possible to measure the mass of these same SMBHs, but at a higher redshift.
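A minimal sketch of the kind of regression head described above is shown below: a small CNN outputs both a predicted mass and a log-variance and is trained with a Gaussian negative log-likelihood, so the network also reports its uncertainty. The layer sizes, input shape, and target scaling are assumptions for illustration and not the thesis's actual network.

```python
# Hypothetical sketch: CNN regression of log(SMBH mass) with an uncertainty
# estimate, assuming a Gaussian posterior (predict mean and log-variance).
import torch
import torch.nn as nn

class MassEstimator(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)          # -> (predicted mean, log-variance)

    def forward(self, x):
        mean, log_var = self.head(self.features(x)).unbind(dim=-1)
        return mean, log_var

def gaussian_nll(mean, log_var, target):
    """Negative log-likelihood of a Gaussian with predicted mean and variance."""
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()

model = MassEstimator()
x = torch.randn(4, 1, 64, 64)                 # e.g. moment maps from an ALMA cube
y = torch.rand(4) * 3 + 7                     # illustrative log10(M_BH / M_sun) targets
mean, log_var = model(x)
loss = gaussian_nll(mean, log_var, y)
loss.backward()
```

Minimizing this loss encourages the network to report larger variances where its mass estimates are less reliable.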
344

Assessment of Non-Invasive Blood Pressure Prediction from PPG and rPPG Signals Using Deep Learning

Schrumpf, Fabian, Frenzel, Patrick, Aust, Christoph, Osterhoff, Georg, Fuchs, Mirco 08 May 2023 (has links)
Exploiting photoplethysmography signals (PPG) for non-invasive blood pressure (BP) measurement is interesting for various reasons. First, PPG can easily be measured using finger clip sensors. Second, camera-based approaches make it possible to derive remote PPG (rPPG) signals similar to PPG and therefore provide the opportunity for non-invasive measurement of BP. Various methods relying on machine learning techniques have recently been published. Performance is often reported as the mean absolute error (MAE) on the data, which is problematic. This work aims to analyze the PPG- and rPPG-based BP prediction error with respect to the underlying data distribution. First, we train established neural network (NN) architectures and derive an appropriate parameterization of input segments drawn from continuous PPG signals. Second, we use this parameterization to train NNs on a larger PPG dataset and carry out a systematic evaluation of the predicted blood pressure. The analysis revealed a strong systematic increase of the prediction error towards less frequent BP values across NN architectures. Moreover, we tested different train/test set split configurations, which underpin the importance of a careful subject-aware dataset assignment to prevent overly optimistic results. Third, we use transfer learning to train the NNs for rPPG-based BP prediction. The resulting performance is similar to the PPG-only case. Finally, we apply different personalization techniques and retrain our NNs with subject-specific data for both the PPG-only and rPPG cases. Whilst the particular technique is less important, personalization reduces the prediction errors significantly.
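As an illustration of the subject-aware dataset assignment the abstract stresses, the sketch below uses scikit-learn's group-based splitting so that no subject's PPG segments end up in both the training and the test set; the array names, shapes, and number of subjects are assumptions made for the example.

```python
# Hypothetical illustration: splitting PPG segments by subject ID so the same
# subject never appears in both the training and the test set.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
segments = rng.normal(size=(1000, 875))        # e.g. 7 s PPG windows at 125 Hz
bp_targets = rng.uniform(60, 180, size=1000)   # systolic BP per segment (mmHg)
subject_ids = rng.integers(0, 50, size=1000)   # which subject each segment came from

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(segments, bp_targets, groups=subject_ids))

# No subject overlap between the two sets, preventing overly optimistic results.
assert set(subject_ids[train_idx]).isdisjoint(subject_ids[test_idx])
```

Splitting by subject rather than by segment prevents windows from the same recording from appearing on both sides of the split.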
345

Image-classification for Brain Tumor using Pre-trained Convolutional Neural Network / Bildklassificering för hjärntumör med hjälp av förtränat konvolutionellt neuralt nätverk

Alsabbagh, Bushra January 2023 (has links)
Brain tumor is a disease characterized by uncontrolled growth of abnormal cells in the brain. The brain is responsible for regulating the functions of all other organs; hence, any atypical growth of cells in the brain can have severe implications for its functions. The number of deaths worldwide caused by brain cancer in 2020 was estimated at 251,329. Early detection of brain cancer is therefore critical for prompt treatment and for improving patients’ quality of life and survival rates. Manual medical image classification in diagnosing diseases has been shown to be extremely time-consuming and labor-intensive. Convolutional Neural Networks (CNNs) have proven to be a leading approach to image classification, even outperforming humans. This paper compares five CNN architectures, namely VGG-16, VGG-19, AlexNet, EfficientNetB7, and ResNet-50, in terms of performance and accuracy using transfer learning. In addition, the paper discusses the economic impact of CNN, as an AI approach, on the healthcare sector. The models’ performance is demonstrated using loss and accuracy curves as well as the confusion matrix. The conducted experiment resulted in VGG-19 achieving the best performance with 97% accuracy, while EfficientNetB7 achieved the worst performance with 93% accuracy. / Hjärntumör är en sjukdom som kännetecknas av okontrollerad tillväxt av onormala celler i hjärnan. Hjärnan är ansvarig för att styra funktionerna hos alla andra organ, därför kan all onormala tillväxt av celler i hjärnan ha allvarliga konsekvenser för dess funktioner. Antalet globala dödligheten ledda av hjärncancer har uppskattats till 251329 under 2020. Tidig upptäckt av hjärncancer är dock avgörande för snabb behandling och för att förbättra patienternas livskvalitet och överlevnadssannolikhet. Manuell medicinsk bildklassificering vid diagnostisering av sjukdomar har visat sig vara extremt tidskrävande och arbetskrävande. Convolutional Neural Network (CNN) är en ledande algoritm för bildklassificering som har överträffat människor. Denna studie jämför fem CNN-arkitekturer, nämligen VGG-16, VGG-19, AlexNet, EfficientNetB7, och ResNet-50 i form av prestanda och noggrannhet. Dessutom diskuterar författarna i studien CNN:s ekonomiska inverkan på sjukvårdssektorn. Modellens prestanda demonstrerades med hjälp av funktioner om förlust och noggrannhets värden samt med hjälp av en Confusion matris. Resultatet av det utförda experimentet har visat att VGG-19 har uppnått bästa prestanda med 97% noggrannhet, medan EfficientNetB7 har uppnått värsta prestanda med 93% noggrannhet.
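A minimal sketch of the transfer-learning setup compared in the thesis is shown below: an ImageNet-pretrained VGG-19 is frozen and only a new classification head is trained on the tumor classes. The number of classes, the class names, and the training details are assumptions for illustration, and loading the pretrained weights requires a recent torchvision with download access.

```python
# Hypothetical sketch: transfer learning with a pretrained VGG-19 backbone.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g. glioma, meningioma, pituitary, no-tumor (assumed classes)

model = models.vgg19(weights="IMAGENET1K_V1")   # downloads ImageNet weights
for param in model.features.parameters():       # freeze convolutional layers
    param.requires_grad = False

# Replace the final classifier layer with a head for the tumor classes.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)            # a dummy MRI batch
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Freezing the convolutional features and training only the new head is the simplest form of transfer learning; fine-tuning deeper layers is a common variation.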
346

On-Loom Fabric Defect Inspection Using Contact Image Sensors and Activation Layer Embedded Convolutional Neural Network

Ouyang, Wenbin 12 1900 (has links)
Malfunctions on loom machines are the main causes of faulty fabric production. An on-loom fabric inspection system is a real-time monitoring device that enables immediate defect detection for human intervention. This dissertation presents a solution for on-loom fabric defect inspection, including a new hardware design, the configurable contact image sensor (CIS) module, for on-loom fabric scanning, together with the defect detection algorithms. The main contributions of this work include (1) creating a configurable CIS module adaptable to a loom width, which brings the unique features of CIS, such as sub-millimeter resolution, compact size, short working distance and low cost, to the fabric defect inspection system, (2) designing a two-level hardware architecture that can be efficiently deployed in a weaving factory with hundreds of looms, (3) developing a two-level inspection scheme, in which the initial defect screening is performed on the Raspberry Pi and the intensive defect verification is processed on the cloud server, (4) introducing a novel pairwise-potential activation layer to a convolutional neural network that leads to high accuracies of defect segmentation on fabrics with fine and imbalanced structures, (5) achieving real-time defect detection that allows a possible defect to be examined multiple times, and (6) implementing a new color segmentation technique suitable for processing multi-color fabric defects. The novel CIS-based on-loom scanning system provides real-time, high-resolution fabric images able to deliver information at the level of a single thread of a fabric. The algorithm evaluation on the fabric defect datasets showed a non-miss-detection rate on defect-free fabrics. The average precision on images containing defects reached above 90% at the pixel level, while the integrity of the detected defect pixels, measured by recall, scored around 70%. Overestimated possible-defect regions in the ground-truth images and fine-defect morphologies resembling the regular fabric pattern were the two major causes of imperfect defect-pixel localization. The experiments showed that defect areas on multi-color fabrics could be precisely located using the proposed color segmentation algorithm.
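The pixel-level figures quoted above (average precision above 90%, recall around 70%) can be computed as in the following minimal sketch, which assumes binary ground-truth and predicted defect masks of the same shape; the example masks are synthetic.

```python
# Hypothetical illustration: pixel-level precision and recall for a predicted
# defect mask against a ground-truth mask (both binary arrays of equal shape).
import numpy as np

def pixel_precision_recall(pred_mask, gt_mask):
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # correctly detected defect pixels
    fp = np.logical_and(pred, ~gt).sum()       # false alarms
    fn = np.logical_and(~pred, gt).sum()       # missed defect pixels
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

gt = np.zeros((64, 64), dtype=bool); gt[10:20, 10:40] = True   # a thin defect
pred = np.zeros_like(gt); pred[8:22, 12:38] = True             # slightly off
print(pixel_precision_recall(pred, gt))
```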
347

Deep Learning-Driven EEG Classification in Human-Robot Collaboration

Wo, Yuan January 2023 (has links)
Human-robot collaboration (HRC) occurs when people and robots work together in a shared environment. Current robots often use rigid programs unsuitable for HRC. Multimodal robot programming offers an easier way to control robots using inputs like voice and gestures. In this scenario, human commands from different sensors trigger the robot’s actions. However, this data-driven approach has challenges: accurately understanding power dynamics, integrating inputs, and precisely controlling the robot. To address this, we introduce EEG signals to improve robot control, which requires reliable signal processing, feature extraction, and accurate classification using machine learning and deep learning. Existing deep learning models struggle to balance accuracy and efficiency. This thesis focuses on whether dilated convolutional neural networks can improve accuracy and reduce training and reaction times compared to the baseline. After using the Morlet wavelet for EEG feature extraction, the thesis employs an existing convolutional neural network as a benchmark and compares it against a variant that uses dilated convolutions. Accuracy, precision, recall, and time are used to assess the compared algorithms’ performance. The conclusion is that the dilated convolutional neural network performs better than the baseline in terms of accuracy and time. / Samarbete mellan människa och robot (HRC) inträffar när människor och robotar arbetar tillsammans i en delad miljö. Nuvarande robotar använder ofta rigida program som inte är lämpliga för HRC. Multimodal robotprogrammering erbjuder ett enklare sätt att styra robotar med hjälp av röst och gester. I detta scenario utlöser mänskliga kommandon från olika sensorer robotens handlingar. Dock har denna datadrivna ansats utmaningar: att noggrant förstå kraftdynamik, integrera inmatning och exakt styra roboten. För att hantera detta introducerar vi EEG-signaler för att förbättra robotstyrningen, vilket kräver pålitlig signalbehandling, funktionsextraktion och noggrann klassificering med maskininlärning och djupinlärning. Nuvarande djupinlärningsmodeller har svårt att balansera noggrannhet och effektivitet. Den här artikeln fokuserar på om dilaterade konvolutionella neurala nätverk kan förbättra noggrannheten och minska träningstider och reaktionstider jämfört med baslinjen. Efter att ha använt Morlet-våg för EEG-funktionsutvinning använder artikeln en befintlig konvolutionell neural modell som referens och jämför med dilaterad konvolution för att bedöma prestandan. Noggrannhet, precision, recall och tidsparametrar bedömer jämförelsealgoritmens prestanda. Slutsatsen är att det dilaterade konvolutionella neurala nätverket presterar bättre än baslinjen vad gäller noggrannhet och tidsparametrar.
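A minimal sketch of the comparison set up in the thesis is given below: a small 1-D convolutional classifier over extracted EEG features in which the convolutions can be switched to increasing dilation rates, enlarging the receptive field without adding parameters. The channel counts, dilation rates, window length, and number of classes are illustrative assumptions, not the thesis's configuration.

```python
# Hypothetical sketch: a 1-D CNN over EEG feature sequences where the
# convolutions use increasing dilation rates to widen the receptive field.
import torch
import torch.nn as nn

class DilatedEEGNet(nn.Module):
    def __init__(self, in_channels=22, n_classes=3, dilated=True):
        super().__init__()
        rates = (1, 2, 4) if dilated else (1, 1, 1)   # baseline uses dilation 1
        layers, ch = [], in_channels
        for out_ch, d in zip((32, 64, 64), rates):
            layers += [nn.Conv1d(ch, out_ch, kernel_size=3, dilation=d, padding=d),
                       nn.BatchNorm1d(out_ch), nn.ReLU()]
            ch = out_ch
        self.body = nn.Sequential(*layers, nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(ch, n_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        return self.head(self.body(x))

model = DilatedEEGNet()
logits = model(torch.randn(8, 22, 500))      # e.g. 2 s of 250 Hz EEG features
print(logits.shape)                          # torch.Size([8, 3])
```

Setting dilated=False yields an otherwise identical non-dilated network, giving a rough analogue of the baseline-versus-dilated comparison.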
348

Particle Filter Bridge Interpolation in GANs / Brygginterpolation med partikelfilter i GANs

Käll, Viktor, Piscator, Erik January 2021 (has links)
Generative adversarial networks (GANs), a type of generative modeling framework, have received much attention in the past few years owing to their capacity to recover complex high-dimensional data distributions. They provide a compressed representation of the data in which all but the essential features of a sample are stripped away, subsequently inducing a similarity measure on the space of data. This similarity measure gives rise to the possibility of interpolating in the data, which has been done successfully in the past. Herein we propose a new stochastic interpolation method for GANs where the interpolation is forced to adhere to the data distribution by implementing a sequential Monte Carlo algorithm for data sampling. The results show that the new method outperforms previously known interpolation methods on the LINES data set; compared to the results of other interpolation methods there was a significant improvement, measured through quantitative and qualitative evaluations. The developed interpolation method has met its expectations and shown promise; however, it needs to be tested on a more complex data set in order to verify that it also scales well. / Generative adversarial networks (GANs) är ett slags generativ modell som har fått mycket uppmärksamhet de senaste åren sedan de upptäcktes för sin potential att återskapa komplexa högdimensionella datafördelningar. Dessa förser en komprimerad representation av datan där enbart de karaktäriserande egenskaperna är bevarade, vilket följdaktligen inducerar ett avståndsmått på datarummet. Detta avståndsmått möjliggör interpolering inom datan vilket har åstadkommits med framgång tidigare. Häri föreslår vi en ny stokastisk interpoleringsmetod för GANs där interpolationen tvingas följa datafördelningen genom att implementera en sekventiell Monte Carlo algoritm för dragning av datapunkter. Resultaten för studien visar att metoden ger bättre interpolationer för datamängden LINES som användes; jämfört med resultaten av tidigare kända interpolationsmetoder syntes en märkbar förbättring genom kvalitativa och kvantitativa utvärderingar. Den framtagna interpolationsmetoden har alltså mött förväntningarna och är lovande, emellertid fordras att den testas på en mer komplex datamängd för att bekräfta att den fungerar väl även under mer generella förhållanden.
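The propose-weight-resample idea behind such a method can be illustrated with the heavily simplified sketch below. Here generator and discriminator are hypothetical callables standing in for a trained GAN, mapping a batch of latent codes to samples and samples to realism scores respectively; the drift rule, noise scale, and particle count are arbitrary choices, and the thesis's actual bridge construction and weighting are not reproduced.

```python
# Hypothetical illustration of SMC-guided latent interpolation: particles drift
# from z_start toward z_end, are weighted by how realistic the discriminator
# finds the generated samples, and are then resampled.
import numpy as np

def smc_bridge(z_start, z_end, generator, discriminator,
               n_steps=10, n_particles=64, noise=0.1, rng=None):
    """Return an interpolation path of latent codes from z_start to z_end.

    generator:     maps an array of latent codes (n, d) to samples.
    discriminator: maps those samples to one realism score per sample, shape (n,).
    """
    rng = rng or np.random.default_rng(0)
    particles = np.tile(z_start, (n_particles, 1))
    path = [z_start]
    for t in range(1, n_steps + 1):
        drift = (z_end - particles) / (n_steps - t + 1)          # pull toward z_end
        proposals = particles + drift + noise * rng.standard_normal(particles.shape)
        scores = discriminator(generator(proposals))             # realism per particle
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        path.append(proposals[np.argmax(weights)])               # most realistic point
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = proposals[idx]                               # resampling step
    return np.stack(path)
```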
349

Evaluating deep learning models for electricity spot price forecasting

Zdybek, Mia January 2021 (has links)
Electricity spot prices are difficult to predict since they depend on various unstable and erratic parameters, and also because electricity is a commodity that cannot be stored efficiently. This results in volatile, highly fluctuating price behavior with many peaks. Machine learning algorithms have outperformed traditional methods in various areas due to their ability to learn complex patterns. In the last decade, deep learning approaches have been introduced to electricity spot price prediction problems, often outperforming their predecessors. In this thesis, several deep learning models were built and evaluated for their ability to predict the spot prices 10 days ahead. Several conclusions were drawn. First, rather simple neural network architectures can predict prices with high accuracy, except for the most extreme sudden peaks. Second, all the deep networks outperformed the benchmark statistical model. Lastly, the proposed LSTM and CNN provided forecasts that were statistically significantly superior and had the lowest errors, suggesting they are the most suitable for the prediction task. / Elspotspriser är svåra att förutsäga eftersom de beror på olika instabila och oregelbundna faktorer, och också på grund av att elektricitet är en vara som inte kan lagras effektivt. Detta leder till ett volatilt, fluktuerande beteende hos priserna, med många plötsliga toppar. Maskininlärningsalgoritmer har överträffat traditionella metoder inom olika områden på grund av deras förmåga att lära sig komplexa mönster. Under det senaste decenniet har djupinlärningsmetoder introducerats till problem inom elprisprognostisering och ofta visat sig överlägsna sina föregångare. I denna avhandling konstruerades och utvärderades flera djupinlärningsmodeller på deras förmåga att förutsäga spotpriserna 10 dagar framåt. Den första slutsatsen är att relativt simpla nätverksarkitekturer kan förutsäga priser med hög noggrannhet, förutom för fallen med de mest extrema, plötsliga topparna. Vidare, så överträffade alla djupa neurala nätverken den statistiska modellen som användes som riktmärke. Slutligen, så gav de föreslagna LSTM- och CNN-modellerna prognoser som var statistiskt, signifikant överlägsna de andra och hade de lägsta felen, vilket tyder på att de är bäst lämpade för prognostiseringsuppgiften.
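As an illustration of the kind of recurrent forecaster evaluated in the thesis, the sketch below shows an LSTM that maps a window of past daily prices to a 10-day-ahead forecast in a single forward pass; the window length, hidden size, and training data are assumptions for the example, not the thesis's setup.

```python
# Hypothetical sketch: an LSTM that maps a window of past prices/features
# to a 10-day-ahead forecast in a single forward pass.
import torch
import torch.nn as nn

class SpotPriceLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=64, horizon=10):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, horizon)    # one output per forecast day

    def forward(self, x):                         # x: (batch, window, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])              # use the last hidden state

model = SpotPriceLSTM(n_features=1)
history = torch.randn(16, 90, 1)                  # 90 past days of prices
forecast = model(history)                         # (16, 10): 10-day-ahead prices
loss = nn.MSELoss()(forecast, torch.randn(16, 10))
loss.backward()
```

An analogous CNN forecaster would replace the LSTM with 1-D convolutions over the same input window.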
350

Locality Optimizations for Regular and Irregular Applications

Rajbhandari, Samyam 28 December 2016 (has links)
No description available.
