
High Speed (MHz) Switch Mode Power Supplies (SMPS) using Coreless PCB Transformer Technology

Kotte, Hari Babu January 2011
The Power Supply Unit (PSU) is an essential part of every electronic device. Power supply designers aim to reduce the size, cost and weight of the converter and to increase its power density, while also lowering circuit losses to improve energy efficiency. Operating converter circuits at higher switching frequencies reduces the size of passive components such as transformers, inductors and capacitors, which yields a more compact, lighter converter with increased power density. At present, the switching frequency of converter circuits is limited by the increased switching losses of existing semiconductor devices and, on the magnetic side, by the increased hysteresis and eddy current losses of core-based transformers. Thanks to continuous improvements in semiconductor materials such as GaN and SiC, together with recently developed high-frequency multi-layered coreless PCB step-down power transformers, it is now feasible to design ultra-low-profile, high-power-density isolated DC/DC and AC/DC power converters. This thesis is focussed on the design, analysis and evaluation of converters operating in the MHz frequency region using the latest semiconductor devices and multi-layered coreless PCB step-down power and signal transformers. An isolated flyback DC-DC converter operating in the MHz region with a multi-layered coreless PCB step-down 2:1 power transformer has been designed and evaluated. Soft-switching techniques were incorporated in order to reduce the switching loss of the circuit. The flyback converter was successfully tested up to a power level of 10W, in the switching frequency range of 2.7-4 MHz. The energy efficiency of the quasi-resonant flyback converter was found to be in the range of 72-84% under zero voltage switching (ZVS) conditions.
The output voltage of the converter was regulated by implementing a constant off-time frequency modulation technique. Because of the theoretical limitations of Si MOSFETs, new materials such as GaN and SiC are being introduced to the market, and these show promising results in the converter circuits described in this thesis. Comparative parameters of the semiconductor materials, such as the energy band gap, field strength and figure of merit, have been discussed. In this case, an existing Si MOSFET was compared with a GaN MOSFET using a multi-layered coreless PCB step-down power transformer for the given input/output specifications of the flyback converter circuit. The energy efficiency of the 45-to-15V regulated converter using the GaN MOSFET was found to be 8-10% higher than that of the converter using the Si MOSFET, owing to lower gate drive power consumption, lower conduction losses and improved rise/fall times of the switch. For some AC/DC and DC/DC applications, such as laptop adapters, set-top boxes and telecom equipment, the high-voltage power MOSFETs used in converter circuits possess higher gate charges than low-voltage MOSFETs. Operating them at higher switching frequencies therefore increases the gate drive power consumption, which is a function of frequency, and the increased capacitance also reduces switching speeds. In order to minimize this gate drive power consumption and to increase the frequency of the converter, a cascode flyback converter was built using a multi-layered coreless PCB transformer and then evaluated. Both simulation and experimental results showed that the cascode arrangement increased the switching speeds of the converter and significantly improved its energy efficiency compared to the single-switch flyback converter.
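The gate drive loss discussed above grows linearly with switching frequency, P_gate = Q_g · V_GS · f_sw, which is why low-gate-charge GaN devices pay off in the MHz range. A quick sketch of this relation with illustrative datasheet-style values (not figures from the thesis):

```python
def gate_drive_power(q_g_nc, v_gs, f_sw_hz):
    """Gate drive power in watts: P = Qg * Vgs * f_sw.

    q_g_nc: total gate charge in nC, v_gs: gate drive voltage in V,
    f_sw_hz: switching frequency in Hz.
    """
    return q_g_nc * 1e-9 * v_gs * f_sw_hz

# Illustrative datasheet-style values, NOT figures from the thesis:
p_si = gate_drive_power(q_g_nc=20.0, v_gs=10.0, f_sw_hz=3e6)   # ~0.6 W
p_gan = gate_drive_power(q_g_nc=3.0, v_gs=5.0, f_sw_hz=3e6)    # ~0.045 W
```

At 3 MHz the hypothetical Si device already spends around 0.6 W just charging its gate, an order of magnitude more than the GaN device, which is consistent with the efficiency gap reported above.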
In order to further maximize the utilization of the transformer, reduce the voltage stress on the MOSFETs and obtain the maximum power density from the power converter, double-ended topologies were chosen. For this purpose, a gate drive circuit utilising the multi-layered coreless PCB gate drive transformer was designed and evaluated in both a half-bridge and a series resonant converter. The gate drive power consumption using this transformer was found to be less than 0.8W over the frequency range of 1.5-3.5 MHz. In addition, using this gate drive circuit, the maximum energy efficiency of the series resonant converter was found to be 86.5% at an output power of 36.5W.

Multilayered Coreless Printed Circuit Board (PCB) Step-down Transformers for High Frequency Switch Mode Power Supplies (SMPS)

Ambatipudi, Radhika January 2011
The Power Supply Unit (PSU) plays a vital role in almost all electronic equipment. Continuous improvements in semiconductor devices such as MOSFETs, diodes, controllers and MOSFET drivers have led to increased switching speeds of power supplies. Increasing the switching frequency of the converter reduces the size of passive elements such as inductors, transformers and capacitors. The high-frequency transformer has thus become the backbone of isolated AC/DC and DC/DC converters; its main functions are to provide isolation for safety purposes, to supply multiple outputs as in telecom applications, and to enable step-down/step-up conversion. Core-based transformers operated at higher frequencies suffer from core losses that are proportional to the operating frequency. Although core materials rated for the low-MHz region exist, commercially available core-based transformers have been limited to a few hundred kHz up to 1 MHz because of copper losses in their windings. Skin and proximity effects caused by induced eddy currents are the major drawbacks when operating these transformers at higher frequencies. It is therefore necessary to mitigate the core losses and the skin and proximity effects when operating transformers at very high frequencies. This can be achieved by eliminating the magnetic core and introducing a proper winding structure. A new multi-layered coreless printed circuit board (PCB) step-down transformer for power transfer applications has been designed; with the assistance of a resonant technique, it retains the advantages offered by existing core-based transformers, such as high voltage gain, high coupling coefficient, sufficient input impedance and high energy efficiency.
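The skin effect mentioned above is what limits wound copper at high frequency: the skin depth δ = √(ρ / (π f μ0 μr)) shrinks with frequency, so only a thin shell of each conductor actually carries current. A small sketch of this standard formula (textbook physics, not code from the thesis):

```python
import math

def skin_depth(freq_hz, resistivity=1.68e-8, mu_r=1.0):
    """Skin depth in metres: delta = sqrt(rho / (pi * f * mu0 * mu_r)).

    Defaults assume copper (rho ~ 1.68e-8 ohm-metre) and a
    non-magnetic conductor (mu_r = 1).
    """
    mu0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
    return math.sqrt(resistivity / (math.pi * freq_hz * mu0 * mu_r))

# Copper at 3 MHz: current crowds into a shell roughly 38 um thick,
# so thicker wire or thicker PCB copper stops helping at MHz frequencies.
delta_3mhz = skin_depth(3e6)
```

This is why winding geometry, rather than conductor bulk, drives the design of MHz-region planar transformers.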
In addition, different winding structures have been studied and analysed for higher step-down ratios in order to reduce copper losses in the windings and to achieve a higher coupling coefficient. The advantage of increasing the number of layers for a given power transfer application, in terms of coupling coefficient, resistance and energy efficiency, has been reported. The maximum energy efficiency of the designed three-layered transformers was found to be within the range of 90-97% for power transfer applications operated in the low-MHz frequency region. The multi-layered coreless PCB transformers designed for power applications of 8, 15 and 30W show that a volume reduction of approximately 40-90% is possible compared to their existing core-based counterparts. Estimates of EMI emissions from the designed transformers show that a three-layered transformer radiates less than a two-layered one because of its smaller radius for the same inductance. Multi-layered coreless PCB gate drive transformers were designed for signal transfer applications and have successfully driven double-ended topologies such as the half-bridge, the two-switch flyback converter and resonant converters, with a low gate drive power consumption of about half a watt. The performance of these transformers was also evaluated with a high-frequency NiZn magnetic material in the 2-4 MHz frequency region. Together with the latest semiconductor switching devices such as SiC and GaN MOSFETs and SiC Schottky diodes, these multi-layered coreless PCB power and signal transformers are an excellent choice for the next generation of compact SMPS.

Realization and Characterization of the Measurement Devices Associated with the Determination of the von Klitzing Constant from a Thompson-Lampard Calculable Capacitor Standard

Sindjui, Ralph 01 July 2016
This thesis is part of a new project to determine the von Klitzing constant, begun several years ago at LNE with completion planned for 2018. To date, the most accurate measurement of this constant traceable to the International System of Units (SI) is obtained by linking the ohm, realized through the quantum Hall effect, to the farad, realized with a Thompson-Lampard calculable capacitor. To improve on its previous determination, delivered in 2000 with a relative uncertainty of 5×10⁻⁸, LNE decided to build a new Thompson-Lampard calculable standard (already under development) and to improve the accuracy of the associated measurement devices, with the goal of reducing the overall uncertainty of the determination to a value close to 1×10⁻⁸. The thesis work covers the realization, characterization and/or automation of the measurement chain associated with this determination; exploratory studies were also conducted on the possible partial or full automation of elements of the measurement chain.
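The calculable capacitor mentioned above rests on the Thompson-Lampard theorem, which fixes the cross-capacitance per unit length of a symmetric cross-capacitor at C/L = ε0 ln 2 / π, independent of the electrodes' cross-sectional shape. A quick check of the nominal value (standard physics, not taken from this thesis):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def lampard_capacitance_per_metre():
    """Thompson-Lampard theorem: for a symmetric cross-capacitor the
    cross-capacitance per unit length is eps0 * ln(2) / pi, independent
    of the shape of the electrodes' cross-section."""
    return EPS0 * math.log(2) / math.pi

# Roughly 1.95 pF per metre -- a tiny capacitance, which is part of why
# the associated measurement chain must be so exact.
c_per_m = lampard_capacitance_per_metre()
```

Because only a length measurement enters the capacitance, the standard is "calculable" from geometry alone, which is what makes the farad-to-ohm traceability chain possible.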

Annotating Job Titles in Job Ads using Swedish Language Models

Ridhagen, Markus January 2023
This thesis investigates automated annotation approaches to assist Swedish public authorities in allocating resources efficiently and gaining insights that help maintain a high-quality welfare system. The study uses pre-trained Swedish language models for the named entity recognition (NER) task of finding job titles in job advertisements from the Swedish Public Employment Service, Arbetsförmedlingen. Specifically, it evaluates the Swedish Bidirectional Encoder Representations from Transformers model developed by the National Library of Sweden (KB), referred to as KB-BERT. The thesis explores the impact of training data size on the model's performance and examines whether active learning can enhance efficiency and accuracy compared to random sampling. The findings reveal that even with a small training dataset of 220 job advertisements, KB-BERT achieves a commendable F1-score of 0.770 in predicting job titles. The model's performance improves further when the training data is augmented with an additional 500 annotated job advertisements, yielding an F1-score of 0.834. Notably, the highest F1-score of 0.856 is achieved by applying the active learning strategy of uncertainty sampling with mean entropy as the uncertainty measure. The test data provided by Arbetsförmedlingen was re-annotated to gauge the complexity of the task; the human annotator achieved an F1-score of 0.883. Based on these findings, it can be inferred that KB-BERT performs satisfactorily in classifying job titles from job ads.
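The uncertainty-sampling strategy with mean entropy described above can be sketched as follows. The data layout (a per-token probability distribution over the tag set for each document) is an assumption for illustration, not the thesis's actual implementation:

```python
import math

def mean_entropy(token_probs):
    """Mean per-token predictive entropy of one document; higher means
    the model is less certain about its NER tags for that document."""
    def entropy(dist):
        return -sum(p * math.log(p) for p in dist if p > 0)
    return sum(entropy(dist) for dist in token_probs) / len(token_probs)

def select_most_uncertain(pool, k):
    """pool: list of (doc_id, token_probs) pairs, where token_probs is a
    list of per-token probability distributions over the tag set.
    Returns the k documents the model is least certain about."""
    ranked = sorted(pool, key=lambda doc: mean_entropy(doc[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy pool: "b" has a maximally uncertain token, so it is picked first.
picked = select_most_uncertain(
    [("a", [[0.9, 0.1]]), ("b", [[0.5, 0.5]])], k=1)  # -> ["b"]
```

The selected documents are the ones sent to a human annotator next, which is how active learning squeezes more accuracy out of a fixed annotation budget than random sampling.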

A Transformer-Based Scoring Approach for Startup Success Prediction : Utilizing Deep Learning Architectures and Multivariate Time Series Classification to Predict Successful Companies

Halvardsson, Gustaf January 2023
The Transformer, an attention-based deep learning architecture, has shown promising capabilities in both Natural Language Processing and Computer Vision. Recently, it has also been applied to time series classification, which has traditionally used statistical methods or the Gated Recurrent Unit (GRU). The aim of this project was to apply multivariate time series classification to evaluate Transformer-based models, in comparison with the traditional GRUs. The evaluation was done within the problem of startup success prediction at a venture and private equity firm called EQT. Four different Machine Learning (ML) models – the Univariate GRU, Multivariate GRU, Transformer Encoder, and an already existing implementation, the Time Series Transformer (TST) – were benchmarked using two public datasets and the EQT dataset which utilized an investor-centric data split. The results suggest that the TST is the best-performing model on EQT’s dataset within the scope of this project, with a 47% increase in performance – measured by the Area Under the Curve (AUC) metric – compared to the Univariate GRU, and a 12% increase compared to the Multivariate GRU. It was also the best, and third-best, performing model on the two public datasets. Additionally, the model also demonstrated the highest training stability out of all four models, and 15 times shorter training times than the Univariate GRU. The TST also presented several potential qualitative advantages such as utilizing its embeddings for downstream tasks, an unsupervised learning technique, higher explainability, and improved multi-modal compatibility. The project results, therefore, suggest that the TST is a viable alternative to the GRU architecture for multivariate time series classification within the investment domain. With its performance, stability, and added benefits, the TST is certainly worth considering for time series modeling tasks. 
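The model comparisons above are reported with the Area Under the Curve (AUC) metric, which can be computed directly from classifier scores via the Mann-Whitney interpretation of ROC AUC. A minimal illustrative sketch (not EQT's or the thesis's evaluation code):

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count as one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect ranking of two positives above two negatives gives AUC = 1.0.
example = roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.7, 0.1])  # -> 1.0
```

AUC is a ranking metric, which suits the investment setting: what matters is whether successful startups are scored above unsuccessful ones, not the absolute score values.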

Students' Acceptance and Use of ChatGPT in Academic Settings

Hasselqvist Haglund, Jakob January 2023
The swift progression of technology has radically reshaped our lives, becoming a big part of our daily routines and paving the way for advancements in communication, automation, and information processing. OpenAI, a company at the forefront of artificial intelligence since 2015, has made remarkable strides towards making AI accessible and beneficial for all (OpenAI, n.d.). A notable accomplishment in their journey has been the development of the Chat Generative Pre-trained Transformer (ChatGPT). This study aims to identify and explore the factors influencing students' acceptance and use of ChatGPT in academic settings. Despite the rising prominence of ChatGPT across various disciplines, understanding of its acceptance and utilization, particularly within higher education, remains limited. ChatGPT holds immense potential as a valuable asset for both students and educators. Utilizing the Unified Theory of Acceptance and Use of Technology (UTAUT) and a quantitative research approach, this study investigates these factors. The results suggest that students' acceptance and use is driven by Behavioral Intention, which in turn is influenced by both Effort Expectancy and Performance Expectancy.

Learning Embeddings for Fashion Images

Hermansson, Simon January 2023
Today the process of sorting second-hand clothes and textiles is mostly manual. In this master’s thesis, methods for automating this process as well as improving the manual sorting process have been investigated. The methods explored include the automatic prediction of price and intended usage for second-hand clothes, as well as different types of image retrieval to aid manual sorting. Two models were examined: CLIP, a multi-modal model, and MAE, a self-supervised model. Quantitatively, the results favored CLIP, which outperformed MAE in both image retrieval and prediction. However, MAE may still be useful for some applications in terms of image retrieval as it returns items that look similar, even if they do not necessarily have the same attributes. In contrast, CLIP is better at accurately retrieving garments with as many matching attributes as possible. For price prediction, the best model was CLIP. When fine-tuned on the dataset used, CLIP achieved an F1-Score of 38.08 using three different price categories in the dataset. For predicting the intended usage (either reusing the garment or exporting it to another country) the best model managed to achieve an F1-Score of 59.04.
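Image retrieval with embedding models like CLIP or MAE, as compared above, typically ranks catalogue items by cosine similarity to the query's embedding. A minimal sketch (the toy 2-d embeddings and the `retrieve` helper are illustrative assumptions, not the thesis's pipeline):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_emb, catalog, k=3):
    """catalog: list of (item_id, embedding). Returns the ids of the k
    items whose embeddings are most similar to the query."""
    ranked = sorted(catalog, key=lambda item: cosine(query_emb, item[1]),
                    reverse=True)
    return [item_id for item_id, _ in ranked[:k]]

# Toy 2-d embeddings; real CLIP/MAE embeddings are high-dimensional.
top = retrieve([1.0, 0.0],
               [("shirt", [1.0, 0.0]), ("sock", [0.0, 1.0]),
                ("jacket", [0.9, 0.1])], k=2)  # -> ["shirt", "jacket"]
```

The difference the thesis observes between the two models lives entirely in the embeddings: MAE embeddings cluster by visual appearance, while CLIP embeddings cluster by matching attributes.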

Exploration of Knowledge Distillation Methods on Transformer Language Models for Sentiment Analysis

Liu, Haonan January 2022
Despite the outstanding performance of large Transformer-based language models, compressing them for deployment in industrial environments remains a challenge. This degree project explores model compression methods known as knowledge distillation for the sentiment classification task on Transformer models. Transformers are neural models with stacks of identical layers. In knowledge distillation for Transformers, a student model with fewer layers learns to mimic intermediate layer vectors from a teacher model with more layers by designing and minimizing a loss function. We implement a framework to compare three knowledge distillation methods: MiniLM, TinyBERT, and Patient-KD. Student models produced by the three methods are evaluated by accuracy on the SST-2 and SemEval sentiment classification datasets. The student models' attention matrices are also compared with the teacher model's to find the student model best at capturing dependencies in the input sentences. The comparison shows that the distillation method focusing on the attention mechanism produces student models with better performance and less variance. We also identify an over-fitting issue in knowledge distillation and propose a two-step knowledge distillation with Transformer-layer and prediction-layer distillation to alleviate the problem. The experimental results show that our method can produce robust, effective, and compact student models without introducing extra data. In the future, we would like to extend our framework to support more distillation methods on Transformer models and to compare performance in tasks other than sentiment classification.
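The distillation objective described above — a student mimicking a teacher's intermediate vectors while matching its softened predictions — can be sketched as a weighted two-part loss. The temperature, weighting, and flat vector layout here are illustrative assumptions, not the exact losses of MiniLM, TinyBERT, or Patient-KD:

```python
import math

def softmax(logits, temp=1.0):
    """Temperature-scaled softmax; a higher temp gives softer targets."""
    exps = [math.exp(z / temp) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,
                      temp=2.0, alpha=0.5):
    """alpha * soft-target cross-entropy (prediction layer)
    + (1 - alpha) * mean squared error (intermediate layer vectors)."""
    t = softmax(teacher_logits, temp)
    s = softmax(student_logits, temp)
    soft_ce = -sum(ti * math.log(si) for ti, si in zip(t, s))
    mse = sum((a - b) ** 2
              for a, b in zip(student_hidden, teacher_hidden)) / len(student_hidden)
    return alpha * soft_ce + (1 - alpha) * mse

# A student whose hidden states drift from the teacher's is penalised more.
loss_aligned = distillation_loss([1.0, 2.0], [1.0, 2.0], [0.5, 0.5], [0.5, 0.5])
loss_drifted = distillation_loss([1.0, 2.0], [1.0, 2.0], [0.5, 0.5], [1.5, 1.5])
```

The two-step scheme proposed in the thesis corresponds to optimizing the layer-matching term first and the prediction-layer term second, rather than the single weighted sum shown here.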

Transformer Based Object Detection and Semantic Segmentation for Autonomous Driving

Hardebro, Mikaela, Jirskog, Elin January 2022
The development of autonomous driving systems has been one of the most popular research areas of the 21st century. A key component of such systems is the ability to perceive and comprehend the physical world; two techniques that address this are object detection and semantic segmentation. During the last decade, CNN-based models have dominated these tasks. In 2021, however, Transformer-based networks were able to outperform the existing CNN approach, indicating a paradigm shift in the domain. This thesis explores the use of a vision transformer, particularly a Swin Transformer, in an object detection and semantic segmentation framework, and compares it to a classical CNN on road scenes. In addition, since real-time execution is crucial for autonomous driving systems, the possibility of reducing the parameter count of the Transformer-based network is investigated. The results appear to be advantageous for the Swin Transformer compared to the convolutional network, for both object detection and semantic segmentation. Furthermore, the analysis indicates that it is possible to reduce the computational complexity while retaining the performance.

Computer Vision for Document Image Analysis and Text Extraction

Benchekroun, Omar January 2022
Automatic document processing has been a subject of interest in industry for the past few years, especially with recent technological advances in Machine Learning and Computer Vision. This project investigates in depth a major component of Document Image Processing known as Optical Character Recognition (OCR). First, an improvement upon an existing shallow CNN+LSTM model is proposed, using domain-specific data synthesis. We demonstrate that this model can achieve an accuracy of up to 97% on non-handwritten text, with an accuracy improvement of 24% when using synthetic data. Furthermore, we deal with handwritten text, which presents additional challenges including variance of writing style, slanting, and character ambiguity. A CNN+Transformer architecture is validated to recognize handwriting extracted from real-world insurance statement data; this model achieves a maximal accuracy of 92% on real-world data. Moreover, we demonstrate how a data pipeline relying on synthetic data can be a scalable and affordable solution for modern OCR needs.
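OCR quality of the kind reported above is commonly measured with edit-distance-based metrics such as the character error rate (CER). The metric choice here is illustrative — the abstract does not specify its exact accuracy definition:

```python
def levenshtein(a, b):
    """Edit distance between strings a and b (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    """Character error rate: edit distance over reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

# One substitution in four characters -> CER of 0.25.
example = cer("abcd", "abed")  # -> 0.25
```

Edit-distance metrics reward partially correct transcriptions, which matters for handwriting where single ambiguous characters are the dominant failure mode.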
