21 |
Estimation of partial discharge inception voltage of magnet wires under inverter surge voltage by volume-time theory
Okubo, Hitoshi; Shimizu, Fuminobu; Hayakawa, Naoki. 04 1900
No description available.
|
22 |
Study of electrical strength and lifetimes of polymeric insulation for DC applications
Iddrissu, Ibrahim. January 2016
Polymeric insulating materials are being re-evaluated in the context of the re-emergence of HVDC and its advantages in bulk power transfer over long distances. This re-emergence has brought new sets of requirements, such as the use of polymeric insulation, the compaction of HV equipment (e.g. HV cables), and innovations in converter technology. This equipment requires a high power rating and hence will be exposed to high electric stresses. One property of polymeric DC insulation is its ability to retain injected charges at high DC fields, leading to local field modification and subsequent breakdown of the insulation through electrical treeing. Electrical treeing is one of the important failure mechanisms of solid polymeric insulation resulting from high voltage stresses and a precursor to failure of electrical equipment. Hence, the performance and reliability of polymeric insulation designs will be affected by electrical treeing. The literature shows that electrical trees initiate easily under switching voltages such as impulses, voltage surges and reversals of power flow direction. Innovations in converter technology employ fast switching devices such as insulated-gate bipolar transistors (IGBTs), which generate a substantial amount of harmonics and may also affect the reliability of insulation systems. This research investigates the reliability of epoxy resin (LH/HY 5052) for suitability in HVDC applications, owing to its excellent properties as a jointing compound in medium- and high-voltage cable systems. The development of test facilities for short-term breakdown strength, space charge measurement and electrical treeing experiments has allowed short-term breakdown strength on homogeneous layers of thin epoxy-epoxy samples, and long-term breakdown through electrical treeing under DC, AC and AC superimposed on DC, to be investigated so that the link between space charge, material strength and lifetime can be clarified. The short-term breakdown results showed that the layered samples have a 6% reduction in strength compared with un-layered samples. In the long-term treeing tests, 100% of the samples stressed with negative DC did not fail, while 67% of the samples stressed with positive DC failed, with an average lifetime of 250 minutes. Samples stressed under AC showed forward and reverse directions of tree growth, with an average lifetime of 143 minutes across the 70% of samples that failed. For AC superimposed on ±DC, all samples failed, with average lifetimes of 54 and 78 minutes for the positive- and negative-bias tests, respectively. It is concluded that the differences in lifetime obtained between the positive and negative pure DC tests, and between the positive and negative DC bias tests, are associated with space charge causing field relief under negative DC and negative bias. The large reduction in lifetimes under AC superimposed on DC as ripple highlights the potential threat of power quality issues to the reliability of DC systems. Electrical tree growth from the grounded planar electrode (reverse trees) observed under the AC test was associated with the relatively low voltage used in the AC test compared with the other tests (see Table 8-1 for the test voltages employed).
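The lifetimes above are reported as plain averages; breakdown and treeing lifetimes of this kind are conventionally summarized with two-parameter Weibull statistics. The abstract does not state which analysis the thesis used, so the sketch below is a generic illustration with made-up failure times, not the thesis data.

```python
# Hypothetical sketch: two-parameter Weibull fit to electrical-treeing
# failure times (minutes). Values are illustrative, not thesis data;
# censored samples (e.g. the negative-DC samples that never failed) would
# need a dedicated censored-data fit rather than this simple approach.
import numpy as np
from scipy import stats

failure_times = np.array([120.0, 180.0, 210.0, 260.0, 310.0, 420.0])

# Fix the location parameter at zero so only shape and scale are estimated.
shape, _, scale = stats.weibull_min.fit(failure_times, floc=0)
mean_life = stats.weibull_min.mean(shape, scale=scale)
print(f"Weibull shape = {shape:.2f}, scale (63.2% life) = {scale:.0f} min, mean = {mean_life:.0f} min")
```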
|
23 |
Real-time face recognition using one-shot learning : A deep learning and machine learning project
Darborg, Alex. January 2020
Face recognition is often described as the process of identifying and verifying people in a photograph by their face. Researchers have recently given this field increased attention, continuously improving the underlying models. The objective of this study is to implement a real-time face recognition system using one-shot learning, where "one shot" means learning from one or a few training samples. This paper evaluates different methods to solve this problem. Convolutional neural networks are known to require large datasets to reach an acceptable accuracy. This project proposes a method that reduces the number of training instances to one while still achieving an accuracy close to 100%, utilizing the concept of transfer learning.
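The abstract does not name the underlying network, but one-shot recognition of this kind is commonly realized by enrolling a single reference embedding per person and matching new faces by nearest neighbour in embedding space. A minimal sketch, with `embed` as a hypothetical placeholder for the pretrained model:

```python
# Minimal sketch of one-shot recognition via nearest-neighbour matching of
# face embeddings. `embed` stands in for any pretrained CNN feature
# extractor (the thesis does not specify the model assumed here).
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a pretrained embedding network; returns an
    L2-normalized feature vector."""
    v = image.astype(np.float32).ravel()[:128]
    return v / (np.linalg.norm(v) + 1e-9)

def recognize(query: np.ndarray, gallery: dict, threshold: float = 0.6) -> str:
    """Match a query face against one stored embedding per identity."""
    q = embed(query)
    name, best = "unknown", -1.0
    for person, ref in gallery.items():
        sim = float(q @ ref)  # cosine similarity (vectors are unit-norm)
        if sim > best:
            name, best = person, sim
    return name if best >= threshold else "unknown"

# One enrollment image per person suffices ("one shot"):
gallery = {"alice": embed(np.random.rand(160, 160, 3))}
print(recognize(np.random.rand(160, 160, 3), gallery))
```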
|
24 |
Use of Deep Learning in Detection of Skin Cancer and Prevention of Melanoma
Papanastasiou, Maria. January 2017
Melanoma is a life-threatening type of skin cancer with numerous fatal incidences all over the world. The 5-year survival rate is very high for cases that are diagnosed at an early stage, so early detection of melanoma is of vital importance. In addition to the various techniques clinicians apply to improve the reliability of melanoma detection, many automated algorithms and mobile applications have been developed for the same purpose. In this paper, a deep learning model designed from scratch, as well as the pretrained models Inception v3 and VGG-16, are used with the aim of developing a reliable tool for melanoma detection by clinicians and individual users. Dermatologists who use dermoscopes can take advantage of the algorithms trained on dermoscopic images and acquire confirmation of their diagnosis. On the other hand, the models trained on clinical images can be used in mobile applications, since a cell phone camera takes similar images. On dermoscopic images, the Inception v3 model achieved 91.4% accuracy, 87.8% sensitivity and 92.3% specificity. On clinical images, the VGG-16 model achieved 86.3% accuracy, 84.5% sensitivity and 88.8% specificity. The results are compared with those of clinicians, showing that the algorithms can be used reliably for the detection of melanoma.
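For clarity, the three reported figures derive from the binary confusion matrix in the usual way. The counts below are illustrative stand-ins (the thesis' test-set sizes and class balance are not given in the abstract, so not all three reported figures can be reproduced at once):

```python
# How accuracy, sensitivity and specificity relate to a binary confusion
# matrix (melanoma = positive class). Counts are illustrative only.
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),  # recall on melanoma cases
        "specificity": tn / (tn + fp),  # recall on benign cases
    }

# Chosen to match the reported dermoscopic sensitivity/specificity:
print(metrics(tp=878, fn=122, tn=923, fp=77))
```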
|
25 |
An Evaluation of Approaches for Generative Adversarial Network Overfitting Detection
Tung Tien Vu (12091421). 20 November 2023
<p dir="ltr">Generating images from training samples solves the challenge of imbalanced data. It provides the necessary data to run machine learning algorithms for image classification, anomaly detection, and pattern recognition tasks. In medical settings, having imbalanced data results in higher false negatives due to a lack of positive samples. Generative Adversarial Networks (GANs) have been widely adopted for image generation. GANs allow models to train without computing intractable probability while producing high-quality images. However, evaluating GANs has been challenging for the researchers due to a need for an objective function. Most studies assess the quality of generated images and the variety of classes those images cover. Overfitting of training images, however, has received less attention from researchers. When the generated images are mere copies of the training data, GAN models will overfit and will not generalize well. This study examines the ability to detect overfitting of popular metrics: Maximum Mean Discrepancy (MMD) and Fréchet Inception Distance (FID). We investigate the metrics on two types of data: handwritten digits and chest x-ray images using Analysis of Variance (ANOVA) models.</p>
|
26 |
Generation of synthetic plant images using deep learning architecture
Kola, Ramya Sree. January 2019
Background: Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are the current state-of-the-art machine learning systems for data generation. The initial architecture proposal consists of two neural networks, a generator and a discriminator, which compete in a zero-sum game to generate data with realistic properties nearly inseparable from those of the original dataset. GANs have interesting applications in various domains, such as image synthesis, 3D object generation in the gaming industry, fake music generation (Dong et al.), text-to-image synthesis and many more. Despite this wide range of application domains, GANs are most popular for image data synthesis. Various architectures have been developed for image synthesis, evolving from fuzzy images of digits to photorealistic images.
Objectives: In this research work, we survey the literature on different GAN architectures to understand the significant work done to improve them. The primary objective of this research is the synthesis of plant images using StyleGAN (Karras, Laine and Aila, 2018), a GAN variant based on style transfer. The research also focuses on identifying machine learning performance evaluation metrics that can be used to assess the StyleGAN model on the generated image datasets.
Methods: A mixed-method approach is used in this research. We review the literature on GANs and describe in detail how each GAN network is designed and how it evolved from the base architecture. We then study the StyleGAN (Karras, Laine and Aila, 2018a) design in detail, along with related work on evaluating GAN model performance and measuring the quality of generated image datasets. We conduct an experiment implementing the style-based GAN on a leaf dataset (Kumar et al., 2012) to generate leaf images similar to the ground truth, describing the steps of the experiment: data collection, preprocessing, training and configuration. We also evaluate the performance of the StyleGAN training model on the leaf dataset.
Results: We present the results of the literature review and of the experiment to address the research questions. We review various GAN architectures and their key contributions, along with numerous qualitative and quantitative evaluation metrics for measuring the performance of a GAN architecture. We then present synthetic data samples generated by the style-based GAN model at various stages of training, and the final synthetic samples after training for around ~8 GPU-days on the Leafsnap dataset (Kumar et al., 2012). For most of the tested samples, the results have decent enough quality to expand the dataset. We visualize the model performance with TensorBoard graphs and an overall computational graph of the learning model. We calculate the Fréchet Inception Distance (FID) score for our leaf StyleGAN and observe it to be 26.4268 (lower is better).
Conclusion: We conclude the research work with an overall review of the sections of the paper. The generated fake samples are very similar to the input ground truth and appear convincingly realistic to human visual judgement. However, the FID score measuring the performance of the leaf StyleGAN is large compared with that of the original StyleGAN on the celebrity HD face dataset, and we analyze the reasons for this large score.
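The FID value of 26.4268 quoted above compares Gaussian fits of Inception features for real and generated images. A minimal sketch of the computation, with feature extraction from Inception v3 omitted and random arrays standing in for the pooled activations:

```python
# Sketch of FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1 C2)^{1/2}),
# computed on stand-in feature arrays instead of real Inception activations.
import numpy as np
from scipy import linalg

def fid(feats1: np.ndarray, feats2: np.ndarray) -> float:
    mu1, mu2 = feats1.mean(axis=0), feats2.mean(axis=0)
    c1 = np.cov(feats1, rowvar=False)
    c2 = np.cov(feats2, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2).real  # matrix square root of C1*C2
    return float(((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2.0 * covmean))

real_feats = np.random.randn(500, 64)  # stand-in for features of ground-truth leaves
fake_feats = np.random.randn(500, 64)  # stand-in for features of generated leaves
print(f"FID = {fid(real_feats, fake_feats):.4f}")  # lower is better
```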
|
27 |
Využití aproximovaných aritmetických obvodů v neuronových sítí / Exploiting Approximate Arithmetic Circuits in Neural Networks Inference
Matula, Tomáš. January 2019
This thesis deals with the use of approximate circuits in neural networks with the aim of achieving energy savings. Studies on this topic already exist, but most of them were either too application-specific or were demonstrated only at a small scale. To explore the possibilities further, we therefore created, through non-trivial modifications of the open-source framework TensorFlow, a platform that allows the use of approximate circuits to be simulated on popular and robust neural networks such as Inception or MobileNet. The focus was on replacing the most computationally demanding parts of convolutional neural networks, namely the multiplication operations in the convolutional layers. We experimentally demonstrated and compared various variants, and even though we proceeded without retraining the network, we managed to obtain interesting results. For example, with the Inception v4 architecture we obtained almost 8% savings with no drop in accuracy whatsoever. Such savings can certainly find use in mobile devices or in large neural networks with enormous computational demands.
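As a loose illustration of the idea (not the thesis' actual circuits or its TensorFlow integration), the sketch below simulates one simple approximate multiplier, truncation of low-order product bits, inside a 1-D convolution on quantized 8-bit values:

```python
# Illustrative simulation of an approximate multiplier in a convolution:
# 8-bit operands are multiplied exactly, then the low-order bits of the
# product are discarded, a crude stand-in for an energy-saving circuit.
import numpy as np

def approx_mul(a: np.ndarray, b: np.ndarray, drop_bits: int = 4) -> np.ndarray:
    """Elementwise 8-bit multiply that zeroes `drop_bits` low product bits."""
    prod = a.astype(np.int32) * b.astype(np.int32)
    return (prod >> drop_bits) << drop_bits  # truncation introduces the error

def conv1d_approx(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    n, k = len(x), len(w)
    return np.array([approx_mul(x[i:i + k], w).sum() for i in range(n - k + 1)])

x = np.random.randint(-128, 128, size=32)  # quantized activations
w = np.random.randint(-128, 128, size=3)   # quantized kernel
print(conv1d_approx(x, w))
```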
|
28 |
Experimental and Theoretical Study of the Characteristics of Submerged Horizontal Gas Jets and Vertical Plunging Water Jets in Water Ambient
Harby Mohamed Abd Alaal, Khaled. 07 December 2012
In this study, two different experimental facilities were built to investigate, first, horizontal gas jets and, second, vertical water jets plunging onto free fluid surfaces. An integral numerical model was also developed to predict the trajectories of these jets and their most important parameters, validated against the experimental results obtained.
In the first part of this work, experiments were carried out to investigate the behavior of horizontal gas jets penetrating into water. The experimental results indicated that the penetration length of the gas jets is strongly influenced by the nozzle diameter and the Froude number, as well as by the inlet mass flow rate and its momentum. Increasing the Froude number and the injector diameter increases the instability of the jet. Furthermore, the maximum location before jet pinch-off is shown to follow a logarithmic relationship with the Froude number for all jet diameters. Empirical correlations were developed to predict these parameters. A model based on integrating the conservation equations was developed, intended to be useful in the design of applications involving horizontal jets as well as to support the experimental investigation. The predictions of the integral model are compared with the experimental data obtained, with very good agreement.
In the second part of this work, a series of experiments was carried out with water jets injected vertically downward through circular nozzles, impinging on a water surface. The results showed that the bubble penetration depth decreases with the jet length but, beyond certain conditions, remains almost constant. It also increases with the nozzle diameter and the jet velocity. The entrainment velocity / Harby Mohamed Abd Alaal, K. (2012). Experimental and Theoretical Study of the Characteristics of Submerged Horizontal Gas Jets and Vertical Plunging Water Jets in Water Ambient [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/18065
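The correlations mentioned above are built around the jet Froude number. The sketch below shows one common densimetric definition for submerged gas jets and the generic logarithmic correlation form, with placeholder coefficients `a` and `b` rather than the thesis' fitted values:

```python
# Dimensionless groups behind the reported correlations; the densimetric
# Froude definition is one common convention for gas jets in liquid, and
# the coefficients a, b are hypothetical placeholders.
import math

def froude(u_jet: float, d_nozzle: float, rho_gas: float = 1.2,
           rho_liq: float = 998.0, g: float = 9.81) -> float:
    """Densimetric Froude number Fr = u / sqrt(g * d * (rho_l - rho_g) / rho_g)."""
    return u_jet / math.sqrt(g * d_nozzle * (rho_liq - rho_gas) / rho_gas)

def pinch_off_location(d_nozzle: float, fr: float, a: float = 1.0, b: float = 0.0) -> float:
    """Logarithmic correlation of the form x/d = a*ln(Fr) + b."""
    return d_nozzle * (a * math.log(fr) + b)

fr = froude(u_jet=50.0, d_nozzle=0.004)  # 4 mm nozzle, 50 m/s air jet
print(f"Fr = {fr:.1f}, x_pinch = {pinch_off_location(0.004, fr):.4f} m")
```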
|
29 |
Analyzing the Negative Log-Likelihood Loss in Generative Modeling / Analys av log-likelihood-optimering inom generativa modeller
Espuña I Fontcuberta, Aleix. January 2022
Maximum-Likelihood Estimation (MLE) is a classic model-fitting method from probability theory. However, it has been argued repeatedly that MLE is inappropriate for synthesis applications, since its priorities are at odds with important principles of human perception, and that, e.g. Generative Adversarial Networks (GANs) are a more appropriate choice. In this thesis, we put these ideas to the test, and explore the effect of MLE in deep generative modelling, using image generation as our example application. Unlike previous studies, we apply a new methodology that allows us to isolate the effects of the training paradigm from several common confounding factors of variation, such as the model architecture and the properties of the true data distribution. The thesis addresses two main questions. First, we ask if models trained via Non-Saturating Generative Adversarial Networks (NSGANs) are capable of producing more realistic images than the exact same architecture trained by directly minimizing the Negative Log-Likelihood (NLL) loss function instead (which is equivalent to MLE). We compare the two training paradigms using the MNIST dataset and a normalizing-flow architecture known as Real NVP, which can explicitly represent a very broad family of density functions. We use the Fréchet Inception Distance (FID) as an algorithmic estimate of subjective image quality. Second, we also analyze how the NLL loss behaves in the presence of model misspecification, which is when the model architecture is not capable of representing the true data distribution, and compare the resulting training curves and performance to those produced by models without misspecification. In order to control for and study different degrees of model misspecification, we create a realistic-looking – but actually synthetic – toy version of the classic MNIST dataset. By this we mean that we create a machine-learning problem where the examples in the dataset look like MNIST, but in fact they have been generated by a Real NVP architecture with known weights, and therefore the true distribution that generated the image data is known. We are not aware of this type of large-scale, realistic-looking toy problem having been used in prior work. Our results show that, first, models trained via NLL perform unexpectedly well in terms of FID, and that a Real NVP trained via an NSGAN approach is unstable during training – even at the Nash equilibrium, which is the global optimum onto which the NSGAN training updates are supposed to converge. Second, the experiments on synthetic data show that models with different degrees of misspecification reach different NLL losses on the training set, but all of them exhibit qualitatively similar convergence behavior. However, looking at the validation NLL loss reveals an important overfitting effect due to the finite size of the synthetic dataset: the models that in theory are able to perfectly describe the true data distribution achieve worse validation NLL losses in practice than some misspecified models, whose reduced complexity acts as a regularizer that helps them generalize better. At the same time, we observe that overfitting has a much stronger negative effect on the validation NLL loss than on the image quality as measured by the FID score.
We also conclude that models with too many parameters and degrees of freedom (overparameterized models) should be avoided, as they are not only slow and frequently unstable to train, even using the NLL loss, but also overfit heavily and produce poorer images. Throughout the thesis, our results highlight the complex and non-intuitive relationship between the NLL loss and the perceptual image quality as measured by the FID score.
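For reference, the NLL objective minimized in place of the NSGAN loss follows from the change-of-variables formula that normalizing flows such as Real NVP make tractable:

```latex
% Log-density of a data point x under a flow f with latent base density p_Z,
% and the corresponding NLL training loss over N samples:
\[
  \log p_X(x) = \log p_Z\bigl(f(x)\bigr)
              + \log \left| \det \frac{\partial f(x)}{\partial x} \right|,
  \qquad
  \mathcal{L}_{\mathrm{NLL}} = -\frac{1}{N} \sum_{i=1}^{N} \log p_X(x_i).
\]
```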
|
30 |
Cartographie et participation : vers une pluralisation des sources de connaissance : application à la Trame Verte et Bleue dans le bocage bressuirais / Cartography and participation : Towards pluralizing knowledge sources : Application to the "Green and Blue network" in the Bressuire hedgerow
Bousquet, Aurélie. 11 April 2016
In the wake of 2007's "Grenelle de l'environnement" conference, the French authorities decided to institute the Green and Blue Ecological Framework (Trame Verte et Bleue - TVB). This new type of environmental policy is designed as a network and aims at reducing "biodiversity erosion" (law nr 2009-967, Aug 2009). Deploying such frameworks across the French national territory requires identifying ecological continuities within urban planning documents. Integrating such new operational concepts into urbanism documents raises many questions. Specifically, it calls into question the unlikely pairing of cartography and participation. While these terms may seem incompatible at first sight, we argue that the two practices can be articulated, following a set methodology. By being rooted in a participative approach, our methodological proposal allows knowledge sources to be pluralized. It is characterized by a scientific posture that is qualitative, exploratory and inductive, with a methodological setting based on grounded theory. Through implementing our methodology, workshop participants were progressively led to produce and work on the basis of photographs and maps, both decontextualized and conventional. Changing media implied a shift from a "tangential" point of view to a "zenithal" perspective. We noted a qualitative difference between "group work" on the one hand and "collective work" on the other. In order to facilitate the shift from individual- to collective-grade work, we allowed the participants to conceive and test their argumentative narratives prior to entering the public arena. Our field of study was the region of Poitou-Charentes, where we observed the deployment of the Regional Ecological Coherence Framework (SRCE in French - Schéma Régional de Cohérence Écologique). Our observations led us to conceive an innovative, participative approach merging a plurality of knowledge sources to identify the ecological continuities in the Bressuire hedgerow. Hinging on participative conception, the various workshops organized resulted not in a single synthetic map but in a series of maps that expand the scope of spatial representations of ecological continuities.
|