191 |
Semantic Segmentation of RGB images for feature extraction in Real Time
Elavarthi, Pradyumna January 2019 (has links)
No description available.
|
192 |
Modeling Stoppage Time as a Convolution of Negative Binomials
Talani, Råvan January 2023 (has links)
This thesis develops and evaluates a probabilistic model that estimates stoppage time in football. Stoppage time is the additional minutes of play awarded after the regular match time is over. It is crucial in determining the course of events during the remainder of a match, thereby affecting the odds in live sports betting. The proposed approach uses the negative binomial distribution to model events in football, and stoppage time is viewed as a convolution of these distributions. The parameters of the negative binomials are estimated using machine learning methods in Python, with TensorFlow as the underlying framework. The data used for the analysis consists of event data for thousands of football matches with corresponding stoppage times, as well as the durations of pauses that occurred in these games. The negative binomial distribution is shown to be a good fit and can be adapted to the data using scaling and resolution techniques. The model allows us to see how different events contribute to stoppage time, and the results indicate that injuries, VAR checks, and red cards have the most significant impact. The model has potential for use in live sports betting and can enhance the accuracy of odds calculation. This work was conducted in collaboration with xAlgo, a department of Kambi, a business-to-business provider of sports betting services.
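The modeling idea — total stoppage time as a sum (convolution) of independent negative binomial delays, one per event — can be sanity-checked with a short Monte Carlo sketch in NumPy. The event types and (r, p) parameters below are invented for illustration; they are not the values fitted in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (r, p) parameters per event type -- illustrative only,
# not the thesis's fitted values.
EVENT_PARAMS = {"injury": (3, 0.5), "var_check": (2, 0.6), "red_card": (1, 0.7)}

def sample_stoppage_time(events, n=100_000):
    """Monte Carlo draw of total stoppage time: the sum (convolution)
    of one independent negative binomial delay per observed event."""
    total = np.zeros(n)
    for name in events:
        r, p = EVENT_PARAMS[name]
        total += rng.negative_binomial(r, p, size=n)
    return total

samples = sample_stoppage_time(["injury", "var_check"])
# Mean of the convolution = sum of the component means r * (1 - p) / p.
expected_mean = 3 * 0.5 / 0.5 + 2 * 0.4 / 0.6
```

Because the mean of a sum of independent variables is the sum of their means, the empirical average converges to `expected_mean`, which gives a cheap check on any fitted implementation.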
|
193 |
Comparing performance of convolutional neural network models on a novel car classification task / Jämförelse av djupa neurala nätverksmodeller med faltning på en ny bilklassificeringsuppgift
Hansen Vedal, Amund January 2017 (has links)
Recent neural network advances have led to models that can be used for a variety of image classification tasks, useful for many of today's media technology applications. In this paper, I train hallmark neural network architectures on a newly collected vehicle image dataset to do both coarse- and fine-grained classification of vehicle type. The results show that the neural networks can learn to distinguish both between many very different classes and between a few very similar ones, reaching 50.8% accuracy on 28 classes and 61.5% on the 5 most challenging, despite noisy images and labeling in the dataset.
|
194 |
Establishing Effective Techniques for Increasing Deep Neural Networks Inference Speed / Etablering av effektiva tekniker för att öka inferenshastigheten i djupa neurala nätverk
Sunesson, Albin January 2017 (has links)
A recent trend in deep learning research is to build ever deeper networks (i.e. to increase the number of layers) to solve real-world classification/optimization problems. This introduces challenges for applications with a latency dependence, because of the amount of computation that must be performed for each evaluation. This is addressed by reducing inference time. In this study we analyze two different methods for speeding up the evaluation of deep neural networks. The first method reduces the number of weights in a convolutional layer by decomposing its convolutional kernel. The second method lets samples exit the network through early exit branches when their classification is already certain. Both methods were evaluated on several network architectures with consistent results. Convolutional kernel decomposition shows a 20-70% speed-up with no more than a 1% loss in classification accuracy in the evaluated setups. Early exit branches show up to a 300% speed-up with no loss in classification accuracy when evaluated on CPUs.
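The early-exit idea can be sketched in a few lines: a sample leaves through the first classifier head whose top softmax probability clears a confidence threshold, skipping the remaining, more expensive layers. The branch structure and threshold below are assumptions for illustration, not the thesis's exact networks.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_predict(x, branches, threshold=0.9):
    """Return (predicted class, index of the branch that fired).

    `branches` is an ordered list of classifier heads, cheapest first;
    a sample exits through the first head whose top softmax probability
    reaches `threshold`, skipping the remaining computation."""
    probs = None
    for i, branch in enumerate(branches):
        probs = softmax(branch(x))
        if probs.max() >= threshold:
            return int(probs.argmax()), i   # confident: exit early
    return int(probs.argmax()), len(branches) - 1  # fall through to final head

# A confident first head lets the sample leave immediately:
branches = [lambda x: np.array([5.0, 0.0]), lambda x: np.array([0.0, 5.0])]
pred, exit_at = early_exit_predict(np.zeros(2), branches)
```

The speed-up comes from easy samples paying only the cost of the early layers; hard samples still traverse the full network.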
|
195 |
Exotic Properties of Metal Organic Systems: Single Molecule Studies
Sarkar, Sanjoy 10 September 2021 (has links)
No description available.
|
196 |
Improving Object Detection using Enhanced EfficientNet Architecture
Michael Youssef Kamel Ibrahim (16302596) 30 August 2023 (has links)
<p>EfficientNet is designed to achieve top accuracy while using fewer parameters and less computational resources than previous models. </p>
<p><br></p>
<p>In this paper, we present a compound scaling method that re-weights the network's width (w), depth (d), and resolution (r), which leads to better performance than traditional methods that scale only one or two of these dimensions by adjusting the model's hyperparameters. Additionally, we present an enhanced EfficientNet backbone architecture. </p>
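Compound scaling can be sketched numerically: a single coefficient phi grows depth, width, and resolution together. The alpha/beta/gamma values below are the ones reported for the original EfficientNet-B0 grid search, used here purely for illustration.

```python
# Compound scaling: one coefficient phi scales all three dimensions,
# keeping FLOPs roughly proportional to 2**phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # EfficientNet-B0 grid-search values

def compound_scale(phi):
    depth_mult = ALPHA ** phi        # d: layer-count multiplier
    width_mult = BETA ** phi         # w: channel multiplier
    resolution_mult = GAMMA ** phi   # r: input-size multiplier
    return depth_mult, width_mult, resolution_mult

d, w, r = compound_scale(2)
# FLOPs grow roughly as d * w**2 * r**2 = (ALPHA * BETA**2 * GAMMA**2)**phi,
# and ALPHA * BETA**2 * GAMMA**2 is chosen to be approximately 2.
flops_factor = d * w ** 2 * r ** 2
```

Scaling all three dimensions jointly is what distinguishes this from width-only or depth-only scaling.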
<p><br></p>
<p>We show that EfficientNet achieves top accuracy on the ImageNet dataset while being up to 8.4x smaller and up to 6.1x faster than previous top-performing models. EfficientNet's effectiveness is also demonstrated on transfer learning and object detection tasks, where it achieves higher accuracy with fewer parameters and less computation. The proposed enhanced architecture is then discussed in detail and compared to the original architecture.</p>
<p><br></p>
<p>Our approach provides a scalable and efficient solution for both academic research and practical applications, where resource constraints are often a limiting factor.</p>
<p><br></p>
|
197 |
Generalised analytic queueing network models. The need, creation, development and validation of mathematical and computational tools for the construction of analytic queueing network models capturing more critical system behaviour.
Almond, John January 1988 (has links)
Modelling is an important technique in the comprehension and management of complex systems. Queueing network models capture most relevant information from computer system and network behaviour. The construction and resolution of these models are constrained by many factors. Approximations capture detail that would be lost for exact solution and/or provide results at lower cost than simulation.

Information at the resource and interactive command level is gathered with monitors under ULTRIX. Validation studies indicate that central processor service times are highly variable on the system. More pessimistic predictions assuming this variability are in part verified by observation.
The utility of the Generalised Exponential (GE) as a distribution parameterised by mean and variance is explored. Small networks of GE service centres can be solved exactly using methods proposed for Generalised Stochastic Petri Nets. For two-centre systems of GE type, a new technique simplifying the balance equations is developed. A very efficient "building block" is presented for exactly solving two-centre systems with service or transfer blocking, Bernoulli feedback and load-dependent rate, and multiple GE servers. In the tandem finite buffer algorithm the building block illustrates problems encountered when modelling high variability in blocking networks.
A parametric validation study is made of approximations for single-class closed networks of First-Come-First-Served (FCFS) centres with general service times. The multiserver extension using the building block is validated. Finally, the Maximum Entropy approximation is extended to FCFS centres with multiple chains and implemented with computationally efficient convolution.
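For closed product-form networks, the efficient convolution referred to here is classically Buzen's algorithm, which computes the normalizing constant G(n) in O(KN) time. The sketch below is a textbook illustration for single-server fixed-rate centres, not the thesis's Maximum Entropy extension.

```python
def buzen_normalizing_constants(demands, N):
    """Buzen's convolution algorithm for a closed product-form network.

    demands[k] = visit ratio * mean service time at centre k (single-server,
    fixed-rate). Returns g with g[n] = G(n), the normalizing constant for
    population n; throughput with n customers is X(n) = G(n-1) / G(n)."""
    g = [1.0] + [0.0] * N          # G(0) = 1 for the empty network
    for d in demands:              # fold in one centre at a time
        for n in range(1, N + 1):  # increasing n reuses the previous column
            g[n] += d * g[n - 1]
    return g

# Two centres with service demands 0.4 and 0.6, population 3:
g = buzen_normalizing_constants([0.4, 0.6], 3)
throughput = g[2] / g[3]           # X(3) = G(2) / G(3)
```

Performance measures such as throughput and utilization fall out of ratios of successive G values, which is why the convolution formulation is so economical.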
|
198 |
A Bridge between Graph Neural Networks and Transformers: Positional Encodings as Node Embeddings
Manu, Bright Kwaku 01 December 2023 (has links) (PDF)
Graph Neural Networks and Transformers are powerful frameworks for machine learning tasks. While they evolved separately in diverse fields, current research has revealed similarities and links between them. This work focuses on bridging the gap between GNNs and Transformers by offering a uniform framework that highlights their similarities and distinctions. We construct positional encodings and identify the key properties that make them suitable as node embeddings; expressiveness, efficiency, and interpretability are achieved in the process. We show that positional encodings can be used as node embeddings for machine learning tasks such as node classification, graph classification, and link prediction. We discuss some challenges and provide future directions.
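One standard construction of positional encodings for graph nodes — shown here as an illustrative sketch, not necessarily the construction used in this thesis — takes the low-frequency eigenvectors of the normalized graph Laplacian as Transformer-style positions:

```python
import numpy as np

def laplacian_positional_encoding(adj, k):
    """k-dimensional node embeddings from the symmetric normalized
    Laplacian L = I - D^(-1/2) A D^(-1/2).

    The k eigenvectors after the trivial constant one (smallest
    eigenvalue) serve as per-node positional encodings."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    _, vecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    return vecs[:, 1:k + 1]

# A 4-cycle graph: every node receives a 2-dimensional position.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
pe = laplacian_positional_encoding(adj, 2)
```

These vectors play the role that sinusoidal positions play in sequence Transformers: they break the permutation symmetry between otherwise indistinguishable nodes.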
|
199 |
Identifying and Minimizing Underspecification in Breast Cancer Subtyping
Tang, Jonathan Cheuk-Kiu 01 December 2022 (links) (PDF)
In the realm of biomedical technology, both accuracy and consistency are crucial to the development and deployment of tools. While accuracy is easy to measure, consistency metrics are not so simple, especially in biomedicine, where prediction consistency can be difficult to achieve. Typically, biomedical datasets contain a significantly larger number of features than samples, which goes against ordinary data mining practice. As a result, predictive models may fail to find valid pathways for prediction during training on such datasets. This phenomenon is known as underspecification.
Underspecification has become more widely accepted as a concept in recent years, with a handful of recent works exploring it in different applications and a handful of earlier works encountering it before the term was coined. However, underspecification is still under-addressed, to the point where some academics might even claim that it is not a significant problem.
With this in mind, this thesis aims to identify and minimize underspecification in deep learning cancer subtype predictors. To address these goals, this work details the development of the Predicting Underspecification Monitoring Pipeline (PUMP), a software tool providing methodology for data analysis, stress testing, and model evaluation. The hope is that PUMP can be applied to deep learning training so that any user can ensure their models generalize to new data as well as possible.
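The feature-rich, sample-poor regime described above can be made concrete with a toy linear example (an illustrative sketch, independent of PUMP): when features outnumber samples, many predictors fit the training data perfectly yet disagree on new data, which is exactly the underspecification being probed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "wide" dataset: many more features than samples, as is typical
# of biomedical data. All numbers here are illustrative.
n_samples, n_features = 5, 50
X = rng.normal(size=(n_samples, n_features))
y = rng.normal(size=n_samples)

# Two linear predictors that BOTH fit the training data exactly...
w_min = np.linalg.pinv(X) @ y                    # minimum-norm interpolator
null_dir = rng.normal(size=n_features)
null_dir -= np.linalg.pinv(X) @ (X @ null_dir)   # keep only the null-space part
w_alt = w_min + null_dir                         # equally valid on training data

# ...yet disagree on a new sample: the training data underspecify the model.
x_new = rng.normal(size=n_features)
disagreement = abs(float(x_new @ w_min) - float(x_new @ w_alt))
```

Stress testing on held-out or perturbed samples, as PUMP's evaluation stage is described as doing, is precisely what exposes this kind of hidden disagreement.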
|
200 |
Enhancement-based Small Target Detection for Infrared Images
Hanqi, Yang January 2023 (links)
Infrared small target detection is widely used in fields such as military and security. U-Net, a classical semantic segmentation method proposed in 2015, has shown excellent performance and robustness. However, U-Net suffers from the problem of losing small targets in deep layers after multiple down-sampling operations. Dilated convolution, a special convolution that can increase the receptive field without increasing the number of parameters, is considered able to mitigate the problems caused by down-sampling. The Dense Nested Attention Network (DNANet) was chosen as the baseline due to its superior performance, but it still suffers from target loss. This study proposes three optimization directions: deep down-sampling replaced by cascaded dilated convolution, dilated spatial attention, and dilated residual blocks. Along these directions, this study proposes four methods: DNANet-DS-1, DNANet-DS-2, DNANet-Att, and DNANet-RB. Two open-source infrared small target datasets, NUDT-SIRST and NUAA-SIRST, were used; the four proposed methods were trained and tested on both. Among them, DNANet-RB significantly outperforms the other methods on the NUAA-SIRST dataset, so further experiments were conducted to observe the influence of network depth on DNANet-RB. The experimental results indicate that once the network depth exceeds a certain threshold, the network achieves only marginal improvements while the number of parameters increases significantly.
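The receptive-field effect of dilation is easiest to see in a minimal 1-D sketch (illustrative only; DNANet uses 2-D convolutions): with dilation d, a kernel of size k covers (k - 1) * d + 1 inputs while keeping exactly k parameters.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D dilated convolution (cross-correlation form):
    taps are `dilation` positions apart, enlarging the receptive field
    to (len(kernel) - 1) * dilation + 1 without extra parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1       # receptive field of one output
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(10, dtype=float)
y = dilated_conv1d(x, np.array([1.0, 1.0, 1.0]), dilation=2)
# Each output sums taps two apart, so the 3-tap kernel spans 5 inputs:
# y[0] = x[0] + x[2] + x[4]
```

Stacking such layers with growing dilation rates (a cascade) multiplies the receptive field without the resolution loss that down-sampling causes — the motivation for the DNANet-DS variants above.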
|