  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
231

Typové zkoušky blokových trafostanic dle ČSN EN 62271-202 a jejich vliv na konstrukci trafostanice / High-voltage/low-voltage prefabricated substations

Loveček, Michal January 2011
The diploma thesis is dedicated to modern kiosk-type transformer substations PET, which are used when modernizing the distribution network. Since such substations are placed in publicly accessible locations, they must be safe both for service staff and for the public. Besides the highest characteristic values, emphasis is placed on safety. Correct construction, functionality and safety are verified by the type tests specified in ČSN EN 62271-202. The last part of the thesis describes the methods and procedures of type testing.
232

Výstavba datových center / Data Center Development

Dóša, Vladimír January 2011
This thesis presents and describes current global trends in the construction and operation of data centers. It further contains practical applications illustrated by particular examples, and the theory is supplemented with new findings from the field.
233

Diagnostické metody sledování plynů rozpuštěných v transformátorovém oleji / Diagnostics Methods of Dissolved Gas in Transformer Oil Observation

Hindra, Matěj January 2012
This thesis is devoted to the analysis of diagnostic methods used in practice. It is divided into two parts: theoretical and practical. The theoretical part concerns the general description of transformers, and further provides information about systems for sampling oil from transformers with an oil–paper insulation system. Another important part is the description of gas chromatography and the TRANSPORT X analyser. A description of the most appropriate evaluation methods for assessing the state of the transformer is included as well.
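The evaluation methods mentioned above typically work on ratios of the dissolved gas concentrations. A minimal sketch (not the thesis's code) of the three classic Rogers/IEC-style gas ratios; the ppm values are hypothetical sample data, and real diagnostic thresholds vary by standard:

```python
# Illustrative DGA ratio computation. The sample concentrations below are
# hypothetical, not measurements from the thesis.

def dga_ratios(h2, ch4, c2h6, c2h4, c2h2):
    """Return the classic DGA ratios (C2H2/C2H4, CH4/H2, C2H4/C2H6)."""
    def ratio(num, den):
        return num / den if den else float("inf")
    return {
        "C2H2/C2H4": ratio(c2h2, c2h4),
        "CH4/H2": ratio(ch4, h2),
        "C2H4/C2H6": ratio(c2h4, c2h6),
    }

# Hypothetical oil sample, gas concentrations in ppm:
sample = dga_ratios(h2=100.0, ch4=120.0, c2h6=65.0, c2h4=50.0, c2h2=1.0)
print(sample)
```

The resulting ratios are then looked up in a diagnosis table (thermal fault, partial discharge, arcing) defined by the chosen standard.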
234

Constructiveness-Based Product Review Classification

Loobuyck, Ugo January 2020
Promoting constructiveness in online comment sections is an essential step to make the internet a more productive place. On online marketplaces, customers often have the opportunity to voice their opinion and relate their experience with a given product. In this thesis, we investigate the possibility of modeling constructiveness in product reviews in order to promote the most informative and argumentative customer feedback. We develop a new 4-class constructiveness scale taxonomy based on heuristics and specific categorical criteria. We use this taxonomy to annotate 4000 Amazon customer reviews as our training set, referred to as the Corpus for Review Constructiveness (CRC). In addition to the 4-class constructiveness tag, we include a binary tag to compare modeling performance with previous work. We train and test several computational models such as Bidirectional Encoder Representations from Transformers (BERT), a Stacked Bidirectional LSTM and a Gradient Boosting Machine. We demonstrate our annotation scheme's reliability with a set of inter-annotator agreement experiments, and show that good levels of performance can be reached in both the multiclass setting (0.69 F1 and 57% error reduction over the baseline) and the binary setting (0.85 F1 and 71% error reduction). Different features are evaluated individually and in combination. Moreover, we compare the advantages, downsides and performance of both feature-based and neural network models. Finally, these models trained on CRC are tested on out-of-domain data (news article comments) and shown to be nearly as proficient as on in-domain data. This work extends constructiveness modeling to a new type of data and provides a new non-binary taxonomy for data labeling.
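Inter-annotator agreement experiments of the kind mentioned above are typically reported with a chance-corrected statistic such as Cohen's kappa. A self-contained sketch (not the thesis's code; the label sequences are hypothetical):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    assert len(a) == len(b) and a
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Expected agreement by chance, from each annotator's label distribution:
    expected = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators labeling eight reviews on a 4-class scale:
ann1 = [0, 1, 2, 3, 2, 1, 0, 3]
ann2 = [0, 1, 2, 3, 1, 1, 0, 2]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.667
```

Kappa corrects raw agreement (6/8 here) for the agreement expected by chance, which matters when the class distribution is skewed.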
235

AI Drummer - Using Learning to Enhance Artificial Drummer Creativity

Thörn, Oscar January 2020
This project explores the usability of Transformers for learning a model that can play the drums and accompany a human pianist. Building upon previous work using fuzzy logic systems, three experiments are devised to test the usability of Transformers. The report also includes a brief survey of algorithmic music generation. The result of the project is that, in their current form, Transformers cannot easily learn collaborative music generation. The key insight is that a new way to encode sequences is needed for collaboration between human and robot in the music domain. This encoding should be able to handle the varied demands and lengths of different musical instruments.
236

FINE-TUNE A LANGUAGE MODEL FOR TEXT SUMMARIZATION (BERTSUM) ON EDGAR-CORPUS

Niu, Yijie January 2022
Financial reports include a lot of useful information for investors, but extracting this information is time-consuming. We consider text summarization a feasible method. In this thesis, we implement BERTSUM, a state-of-the-art language model for text summarization, and evaluate the results with ROUGE metrics. The experiment was carried out on a novel and large-scale financial dataset called EDGAR-CORPUS. BERTSUM with a transformer achieves the best performance, with a ROUGE-L F1 score of 9.26%. We also hand-picked some model-generated summaries that contained common errors and investigated the causes. The results were then compared to previous research. The ROUGE-L F1 value in the previous study was much higher than ours; we attribute this to the length of the financial reports.
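ROUGE-L, the metric reported above, scores a candidate summary by the longest common subsequence (LCS) of tokens it shares with a reference. A self-contained sketch (not the thesis's implementation; the two texts are hypothetical):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

print(round(rouge_l_f1("net income rose in 2021",
                       "net income rose sharply in fiscal 2021"), 3))  # → 0.833
```

Because LCS allows gaps but preserves word order, ROUGE-L rewards summaries that keep the reference's phrasing sequence without requiring contiguous n-gram matches.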
237

Argument Mining: Claim Annotation, Identification, Verification

Karamolegkou, Antonia January 2021
Researchers writing scientific articles summarize their work in abstracts that mention the final outcome of their study. Argumentation mining can be used to extract the claim of the researchers as well as the evidence that could support that claim. The rapid growth of scientific articles demands automated tools that could help in the detection and evaluation of the veracity of scientific claims. However, there are neither many studies focusing on claim identification and verification nor many annotated corpora available to effectively train deep learning models. For this reason, we annotated two argument mining corpora and performed several experiments with state-of-the-art BERT-based models aiming to identify and verify scientific claims. We find that using SciBERT provides optimal results regardless of the dataset. Furthermore, increasing the amount of training data can improve the performance of every model we used. These findings highlight the need for large-scale argument mining corpora, as well as domain-specific pre-trained models.
238

Last Mile Asset Monitoring: Low Cost Rapid Deployment Asset Monitoring

Zumr, Zdenek 05 September 2014
Installation and utilization of residential distribution transformers has not changed substantially over a long period of time. Utilities typically size their transformers using a formula that broadly takes into account what types and how many dwellings will be connected. Most new residential dwellings feature 200 Amp service per household with an anticipated energy demand of under 20,000 kWh per year. Average electrical energy consumption varies from state to state but averages 11,280 kWh per year. Energy demand is expected to follow a typical residential load curve: increased demand early in the morning, decreasing demand during the day, and another peak in the early to late evening. Distribution transformers are sized at the limit of the combined evening peak with the assumption that the transformer has enough thermal mass to absorb short overloads that may occur when concurrent loading situations among multiple dwellings arise. The assumption that concurrent loading is of short duration and that the transformer can cool off during the night has been validated over the years and has become standard practice. This has worked well when dwelling loads follow an averaging scheme with a low level of coincidence. With the arrival of electric vehicles (EVs), this assumption has to be reevaluated. The acquisition of an electric vehicle in a household can drive up energy demand by over 4000 kWh per year. Potentially problematic is the increased capacity of battery packs and the resulting proliferation of Level 2 chargers. The additional load of a single Level 2 charger coinciding with the combined evening peak load will push even conservatively sized distribution transformers over their nameplate rating for a substantial amount of time.
Additionally, unlike common household appliances with similar power requirements such as ovens or water heaters, a Level 2 battery charger will run at peak power consumption for several hours, and the current drawn by EVs has very high levels of harmonic distortion. The excessive loading and harmonic profile can potentially result in damaging heat build-up and asset degradation. In this thesis I present a device and method that monitors pole-mounted distribution transformers for overheating, collects and wirelessly uploads data, and initiates commands to chargers to change output from Level 2 to Level 1 or to shut down EV charging altogether until the transformer returns to its safe operational range.
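The overload scenario described above can be illustrated with a back-of-the-envelope loading check. All numbers in this sketch are assumed for illustration (transformer rating, per-dwelling peak, charger powers), not values from the thesis:

```python
# Hypothetical sketch of the transformer loading check described above.
NAMEPLATE_KVA = 25.0    # assumed pole-mounted transformer rating
DWELLING_PEAK_KW = 4.0  # assumed coincident evening peak per dwelling
LEVEL2_KW = 7.2         # a common Level 2 (240 V) charger power draw
LEVEL1_KW = 1.4         # a common Level 1 (120 V) charger power draw

def loading_pct(dwellings, charger_kw, power_factor=0.95):
    """Apparent-power loading as a percentage of nameplate rating."""
    kva = (dwellings * DWELLING_PEAK_KW + charger_kw) / power_factor
    return 100.0 * kva / NAMEPLATE_KVA

# Five dwellings at evening peak, one EV charging:
print(round(loading_pct(5, LEVEL2_KW), 1))  # Level 2: above nameplate
print(round(loading_pct(5, LEVEL1_KW), 1))  # curtailed to Level 1: below it
```

In this toy scenario, curtailing the charger from Level 2 to Level 1, as the proposed device commands, is enough to bring the transformer back under its nameplate rating during the evening peak.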
239

Modality Bridging and Unified Multimodal Understanding

Akbari, Hassan January 2022
Multimodal understanding is a vast realm of research that covers multiple disciplines. Hence, it requires a correct understanding of the goal in a generic multimodal understanding research study. The definition of the modalities of interest is important, since each modality requires its own considerations. On the other hand, it is important to understand whether these modalities should be complementary to each other or have significant overlap in terms of the information they carry. For example, most of the modalities in biological signals do not have significant overlap with each other, yet they can be used together to improve the range and accuracy of diagnoses. An extreme example of two modalities that have significant overlap is an instructional video and its corresponding instructions in detailed text. In this study, we focus on multimedia, which includes image, video, audio, and text about real-world everyday events, mostly focused on human activities. We narrow our study to the important direction of common space learning, since we want to bridge between different modalities using the overlap that a given pair of modalities have. There are multiple applications which require a strong common space to perform desirably. We choose image-text grounding, video-audio autoencoding, video-conditioned text generation, and video-audio-text common space learning for semantic encoding. We examine multiple ideas in each direction and reach important conclusions. In image-text grounding, we learn that different levels of semantic representations are helpful to achieve a thorough common space that is representative of the two modalities. In video-audio autoencoding, we observe that reconstruction objectives can help with a representative common space. Moreover, there is an inherent problem when dealing with multiple modalities at the same time, and that is different levels of granularity.
For example, the sampling rate and granularity of video are much higher and more complicated compared to audio. Hence, it might be more helpful to find a more semantically abstracted common space which does not carry redundant details, especially considering the temporal aspect of the video and audio modalities. In video-conditioned text generation, we examine the possibility of encoding a video sequence using a Transformer (and later decoding the captions using a Transformer decoder). We further explore the possibility of learning latent states for storing real-world concepts without supervision. Using the observations from these three directions, we propose a unified pipeline based on the Transformer architecture to examine whether it is possible to train a (true) unified pipeline on raw multimodal data without supervision in an end-to-end fashion. This pipeline eliminates ad-hoc feature extraction methods and is independent of any previously trained network, making it simpler and easier to use. Furthermore, it utilizes only one architecture, which enables us to move towards even more simplicity. Hence, we take an ambitious step forward and further unify this pipeline by sharing only one backbone among four major modalities: image, video, audio, and text. We show that it is not only possible to achieve this goal, but we also show the inherent benefits of such a pipeline. We propose a new research direction under multimodal understanding: Unified Multimodal Understanding. This study is the first to examine this idea, and it further pushes its limit by scaling up to multiple tasks, modalities, and datasets. In a nutshell, we examine different possibilities for bridging between a pair of modalities in different applications, observe several limitations, and propose solutions for them. Using these observations, we provide a unified and strong pipeline for learning a common space which can be used for many applications.
We show that our approaches perform desirably and significantly outperform the state-of-the-art in different downstream tasks. We set a new baseline with competitive performance for our proposed research direction, Unified Multimodal Understanding.
240

Shortcut Transformers and the Learnability of Automata

Martens, Willeke January 2023
Transformers have emerged as a powerful architecture for various tasks in natural language processing, computer vision, and multi-modal domains. Despite their success, understanding the computational capabilities and limitations of transformers remains a challenge. This work relates transformers to deterministic finite automata (DFAs) and empirically investigates the architecture's ability to simulate DFAs of varying complexities, specifically focusing on the solvable A4-DFA and the non-solvable A5-DFA. We conduct experiments to evaluate the in-distribution and out-of-distribution accuracy of sub-linear-depth transformers, with positive results on both counts. Additionally, we examine the impact of widening the transformer to find even shallower transformers for the A4-DFA. While no significant improvements are observed compared to the sub-linear-depth transformers, further exploration of hyperparameters is needed for more reliable results.
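The task the transformers learn above is DFA simulation: reading an input string and tracking the automaton's state at every step. A minimal sketch using a toy two-state parity DFA as a stand-in (the A4/A5 automata in the thesis arise from group-word problems and are larger, but the simulation loop is the same):

```python
def run_dfa(transitions, start, string):
    """Simulate a DFA, returning the full sequence of states visited."""
    state, states = start, [start]
    for symbol in string:
        state = transitions[(state, symbol)]
        states.append(state)
    return states

# Toy DFA over {a, b} tracking the parity of 'a's seen (even = state 0):
parity = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}
print(run_dfa(parity, 0, "abba"))  # → [0, 1, 1, 1, 0]
```

A transformer trained to simulate the DFA must output this state sequence (or its final state) from the raw input string; out-of-distribution evaluation then tests strings longer than those seen in training.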