1

Frequency of in-season strength and power training for rugby league

Masters, Haydn, res.cand@acu.edu.au January 2001 (has links)
The purpose of this study was to determine the contribution of different in-season strength and power training frequencies to strength and power performance over the course of a 22-week rugby league competition period. Twenty-eight male participants (n=28), with both high and low strength pre-training status, were divided into three groups following a 15-week pre-season strength and power training programme. A four-week periodised in-season strength and power training programme, with intensities ranging from 75-100% 1RM, was cycled for the 22-week competition season. Strength and power training was conducted one day per week by the first high pre-training status group (HTF1, n=11), and two days per week by the second high pre-training status group (HTF2, n=9). The low pre-training status group (LTF1, n=8) performed the same strength and power training frequency and programme as HTF1. Training intensity (% 1RM) and volume (sets x repetitions) of in-season strength and power training sessions were standardised for both high-status groups during each training week. Strength, power, and speed data were collected pre-season, and four times during the in-season period. No differences were found between HTF1 and HTF2 in performance variables throughout the 22-week in-season period. Both HTF1 and HTF2 displayed similar significant detraining effects in strength, power, and speed, regardless of in-season training frequency (p<0.05). LTF1 showed no change from pre-season strength and power performance following the 22-week competition period (p>0.05). It was concluded that in-season strength and power training frequency may have a limited role in determining the success of an in-season strength and power training programme in highly trained footballers. The results of the present study suggest that a number of factors other than in-season strength and power training frequency may affect in-season strength and power performance and detraining in athletes with high strength pre-training status.
The effect the start of a competition period has on dynamic athletic performance needs further investigation.
2

Turn of Phrase: Contrastive Pre-Training for Discourse-Aware Conversation Models

Laboulaye, Roland 16 August 2021 (has links)
Understanding long conversations requires recognizing a discourse flow unique to conversation. Recent advances in unsupervised representation learning of text have been attained primarily through language modeling, which models discourse only implicitly and within a small window. These representations are in turn evaluated chiefly on sentence-pair or paragraph-question-pair benchmarks, which measure only local discourse coherence. In order to improve performance on discourse-reliant, long conversation tasks, we propose Turn-of-Phrase pre-training, an objective designed to encode long conversation discourse flow. We leverage tree-structured Reddit conversations in English, selecting paths with varying degrees of relatedness to a chosen conversation path through the tree. The final utterance of the chosen path is appended to the related paths, and the model learns to identify the most coherent conversation path. We demonstrate that our pre-training objective encodes conversational discourse awareness by improving performance on a dialogue act classification task. We then demonstrate the value of transferring discourse awareness with a comprehensive array of conversation-level classification tasks evaluating persuasion, conflict, and deception.
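The path-selection objective described above amounts to a contrastive loss: the model scores each candidate conversation path and is trained to assign the highest score to the coherent one. A minimal sketch, assuming path scores are already produced by some encoder (the scoring model itself is omitted):

```python
import math

def contrastive_path_loss(scores, positive_index):
    """Cross-entropy over candidate conversation paths.

    scores: raw coherence scores, one per candidate path
    positive_index: which candidate is the truly coherent path
    """
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return -math.log(exps[positive_index] / total)

# Hypothetical example: the model scores the coherent path (index 0)
# well above the distractors, so the loss is near zero.
loss = contrastive_path_loss([5.0, 1.0, 0.5], positive_index=0)
```

If the model instead ranked a distractor highest, the same loss would grow large, pushing the encoder toward coherent-path discrimination.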
3

APPLYING CLIP FOR LAND COVER CLASSIFICATION USING AERIAL AND SATELLITE IMAGERY

Kexin Meng (17541795) 04 December 2023 (has links)
<p dir="ltr">Land cover classification has always been a crucial topic in the remote sensing domain. Utilizing data collected by unmanned aerial vehicles and satellites, researchers can detect land degradation, monitor environmental changes, and provide insights for urban planning. Recent advancements in large multi-modal models have enabled open-vocabulary classification, which is particularly beneficial in this field. Because of the pre-training method, these models can perform zero-shot inference on unseen data, significantly reducing the costs associated with data collection and model training. This open-vocabulary feature of large-scale vision-language pre-training aligns well with the requirements of land cover classification, where benchmark datasets in the remote sensing domain comprise various categories, and transferring results from one dataset to another through supervised learning methods is challenging.</p><p dir="ltr">In this thesis, the author explored the performance of zero-shot CLIP and linear probe CLIP to assess the feasibility of using the CLIP model for land cover classification tasks. Further, the author fine-tuned CLIP by creating hierarchical label sets for the datasets, leading to better zero-shot classification results and improving overall accuracy by 2.5%. Regarding data engineering, the author examined the performance of zero-shot CLIP and linear probe CLIP across different categories and proposed a categorization method for land cover datasets. In summary, this work evaluated CLIP's overall performance on land cover datasets of varying spatial resolutions and proposed a hierarchical classification method to enhance its zero-shot performance. The thesis also offers a practical approach for modifying current dataset categorizations to better align with the model.</p>
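Zero-shot CLIP classification, as used above, reduces to cosine similarity between an image embedding and text embeddings of class prompts (e.g. "an aerial photo of forest"). A minimal sketch with stand-in orthogonal embeddings in place of real CLIP outputs; the labels are hypothetical land-cover classes:

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels):
    """Pick the label whose prompt embedding is most similar to the image.

    image_emb: (d,) image embedding
    text_embs: (n_labels, d) prompt embeddings, one per class
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarity per class
    return labels[int(np.argmax(sims))]

labels = ["forest", "urban", "water"]        # hypothetical classes
text_embs = np.eye(3, 8)                     # stand-in prompt embeddings
rng = np.random.default_rng(0)
image_emb = text_embs[2] + 0.05 * rng.normal(size=8)  # near "water"
pred = zero_shot_classify(image_emb, text_embs, labels)
```

A real pipeline would obtain the embeddings from CLIP's image and text encoders; the ranking logic is unchanged.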
4

Transformer-based Model for Molecular Property Prediction with Self-Supervised Transfer Learning

Lin, Lyu January 2020 (has links)
Molecular property prediction has a vast range of applications in the chemical industry. A powerful molecular property prediction model can promote experiments and production processes. The idea behind this degree project lies in the use of transfer learning to predict molecular properties. The project is divided into two parts. The first part is to build and pre-train the model. The model, which is constructed with pure attention-based Transformer layers, is pre-trained through a Masked Edge Recovery task with large-scale unlabeled data. Then, the performance of this pre-trained model is tested with different molecular property prediction tasks, finally verifying the effectiveness of transfer learning. The results show that after self-supervised pre-training, this model shows excellent generalization capability. It can be fine-tuned within a short period and performs well on downstream tasks. The effectiveness of transfer learning is reflected in the experiments as well. The pre-trained model not only shortens the task-specific training time but also obtains better performance and avoids overfitting due to too little training data for molecular property prediction.
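The Masked Edge Recovery task described above hides a fraction of a molecular graph's bonds and asks the model to recover them. A sketch of the data-preparation step only, with a toy edge list standing in for a real molecule (the Transformer itself is omitted):

```python
import random

def mask_edges(edges, mask_ratio=0.15, seed=0):
    """Split a molecular graph's edge list into visible edges and
    masked targets the model must recover during pre-training."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(edges) * mask_ratio))
    masked = set(rng.sample(range(len(edges)), n_mask))
    visible = [e for i, e in enumerate(edges) if i not in masked]
    targets = [e for i, e in enumerate(edges) if i in masked]
    return visible, targets

# Toy chain molecule: atoms 0-5, bonds as index pairs
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
visible, targets = mask_edges(edges, mask_ratio=0.4)
```

During pre-training, the model would see the atom sequence plus `visible` and be trained to predict the entries of `targets`.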
5

Procedural Pre-Training for Visual Recognition

Anderson, Connor S. 18 June 2024 (has links) (PDF)
Deep learning models can perform many tasks very capably, provided they are trained correctly. Usually, this requires a large amount of data. Pre-training refers to a process of creating a strong initial model by first training it on a large-scale dataset. Such a model can then be adapted to many different tasks, while only requiring a comparatively small amount of task-specific training data. Pre-training is the standard approach in most computer vision scenarios, but it's not without drawbacks. Aside from the cost and effort involved in collecting large pre-training datasets, such data may also contain unwanted biases, violations of privacy, inappropriate content, or copyrighted material used without permission. Such issues can lead to concerns about the ethical use of models trained using the data. This dissertation addresses a different approach to pre-training visual models by using abstract, procedurally generated data. Such data is free from the concerns around human bias, privacy, and intellectual property. It also has the potential to scale more easily, and to provide precisely controllable sources of supervision that are difficult or impossible to extract from data collected in the wild from sources like the internet. The obvious disadvantage of such data is that it does not model real-world semantics, and thus introduces a large domain gap. Surprisingly, however, such pre-training can lead to performance not far below that of models trained in the conventional way. This is shown for different visual recognition tasks, models, and procedural data-generation processes.
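One plausible instantiation of procedural data generation, purely illustrative rather than the dissertation's actual generator, is to render abstract images whose labels are recoverable from the generation parameters:

```python
import numpy as np

def generate_procedural_sample(seed):
    """Generate one abstract image plus a label derived from the
    generation parameters (here: the number of sine stripes)."""
    rng = np.random.default_rng(seed)
    n_stripes = int(rng.integers(2, 6))   # label: stripe count in [2, 5]
    angle = rng.uniform(0, np.pi)         # random orientation
    y, x = np.mgrid[0:32, 0:32] / 32.0    # normalized pixel grid
    coord = x * np.cos(angle) + y * np.sin(angle)
    image = np.sin(2 * np.pi * n_stripes * coord)
    return image.astype(np.float32), n_stripes

image, label = generate_procedural_sample(seed=0)
```

Because the label comes directly from the generator, supervision is exact and arbitrarily scalable, with no human annotation, privacy, or copyright concerns.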
6

Pre-training a knowledge enhanced model in biomedical domain for information extraction

Yan, Xi January 2022 (has links)
While recent years have seen a rise of research in knowledge-graph-enriched pre-trained language models (PLMs), few studies have tried to transfer this work to the biomedical domain. This thesis is a first attempt to pre-train a large-scale biological knowledge-enriched language model (KPLM). Under the framework of CoLAKE (T. Sun et al., 2020), a general-purpose KPLM, this study pre-trains on PubMed abstracts (a large-scale medical text corpus) and BIKG (AstraZeneca's biological knowledge graph). We first obtain abstracts from PubMed and their entity-linking results, then connect the entities from the abstracts to BIKG to form sub-graphs. These sub-graphs and sentences from the PubMed abstracts are then fed to the CoLAKE model for pre-training. By training the model on three objectives (masking word nodes, masking entity nodes, and masking relation nodes), this research aims not only to enhance the model's capacity for modeling natural language but also to infuse in-depth knowledge. The model is then fine-tuned on named entity recognition (NER) and relation extraction tasks on three benchmark datasets: ChemProt (Kringelum et al., 2016), DrugProt (from the text-mining drug-protein/gene interactions shared task), and DDI (Segura-Bedmar et al., 2013). Empirical results show that the model outperforms state-of-the-art models on the relation extraction task on the DDI dataset, with an F1 score of 91.2%. On DrugProt and ChemProt, the model also shows improvement over the SciBERT baseline.
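The three pre-training objectives, masking word, entity, and relation nodes, can be sketched as a single routine that hides nodes of a chosen type and records recovery labels. The node IDs below are illustrative, not drawn from CoLAKE or BIKG:

```python
import random

MASK_ID = 0  # illustrative mask token ID

def mask_nodes(nodes, node_types, target_type, mask_ratio=0.15, seed=0):
    """Replace a fraction of nodes of one type (word/entity/relation)
    with MASK_ID; return the masked sequence and recovery labels."""
    rng = random.Random(seed)
    candidates = [i for i, t in enumerate(node_types) if t == target_type]
    n_mask = max(1, int(len(candidates) * mask_ratio))
    chosen = set(rng.sample(candidates, n_mask))
    masked = [MASK_ID if i in chosen else n for i, n in enumerate(nodes)]
    labels = {i: nodes[i] for i in chosen}  # position -> original ID
    return masked, labels

nodes = [11, 12, 13, 901, 902, 501]  # toy word/entity/relation IDs
types = ["word", "word", "word", "entity", "entity", "relation"]
masked, labels = mask_nodes(nodes, types, "entity", mask_ratio=0.5)
```

Running the same routine with `target_type` set to `"word"` or `"relation"` yields the other two objectives; in practice the three are mixed within each batch.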
7

Structural Self-Supervised Objectives for Transformers

Di Liello, Luca 21 September 2023 (has links)
In this Thesis, we leverage unsupervised raw data to develop more efficient pre-training objectives and self-supervised tasks that align well with downstream applications. In the first part, we present three alternative objectives to BERT's Masked Language Modeling (MLM), namely Random Token Substitution (RTS), Cluster-based Random Token Substitution (C-RTS), and Swapped Language Modeling (SLM). Unlike MLM, all of these proposals involve token swapping rather than replacing tokens with BERT's [MASK]. RTS and C-RTS involve predicting the originality of tokens, while SLM tasks the model with predicting the original token values. Each objective is applied to several models, which are trained using the same computational budget and corpora. Evaluation results reveal that RTS and C-RTS require up to 45% less pre-training time while achieving performance on par with MLM. Notably, SLM outperforms MLM on several Answer Sentence Selection and GLUE tasks, despite utilizing the same computational budget for pre-training. In the second part of the Thesis, we propose self-supervised pre-training tasks that exhibit structural alignment with downstream applications, leading to improved performance and reduced reliance on labeled data to achieve comparable results. We exploit the weak supervision provided by large corpora like Wikipedia and CC-News, challenging the model to recognize whether spans of text originate from the same paragraph or document. To this end, we design (i) a pre-training objective that targets multi-sentence inference models by performing predictions over multiple spans of text simultaneously, (ii) self-supervised objectives tailored to enhance performance in Answer Sentence Selection and its contextual version, and (iii) a pre-training objective aimed at performance improvements in Summarization.
Through continuous pre-training, starting from renowned checkpoints such as RoBERTa, ELECTRA, DeBERTa, BART, and T5, we demonstrate that our models achieve higher performance on Fact Verification, Answer Sentence Selection, and Summarization. We extensively evaluate our proposals on different benchmarks, revealing significant accuracy gains, particularly when annotation in the target dataset is limited. Notably, we achieve state-of-the-art results on the development set of the FEVER dataset, and results close to those of state-of-the-art models that use many more parameters on the test set. Furthermore, our objectives enable us to attain state-of-the-art results on the ASNQ, WikiQA, and TREC-QA test sets across all evaluation metrics (MAP, MRR, and P@1). For Summarization, our objective enhances summary quality, as measured by metrics such as ROUGE and BLEURT. We maintain that our proposals can be seamlessly combined with other techniques from recently proposed works, as they do not require alterations to the internal structure of Transformer models but only involve modifications to the training tasks.
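The RTS objective described in the first part replaces a fraction of tokens with other real tokens (never a [MASK] symbol) and trains a binary classifier to flag which tokens are original. A minimal sketch of the example construction, with a toy vocabulary:

```python
import random

def make_rts_example(tokens, vocab, swap_ratio=0.15, seed=0):
    """Swap a fraction of tokens for random vocabulary tokens and
    emit binary labels: 1 = original, 0 = substituted."""
    rng = random.Random(seed)
    n_swap = max(1, int(len(tokens) * swap_ratio))
    positions = rng.sample(range(len(tokens)), n_swap)
    corrupted = list(tokens)
    labels = [1] * len(tokens)
    for pos in positions:
        # choose a replacement guaranteed to differ from the original
        replacement = rng.choice([v for v in vocab if v != tokens[pos]])
        corrupted[pos] = replacement
        labels[pos] = 0
    return corrupted, labels

tokens = ["the", "cat", "sat", "on", "the", "mat"]
vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
corrupted, labels = make_rts_example(tokens, vocab, swap_ratio=0.34)
```

C-RTS differs only in how replacements are drawn (from clusters of similar tokens rather than uniformly), which makes the originality prediction harder and more informative.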
8

Recurrent neural network language models for automatic speech recognition

Gangireddy, Siva Reddy January 2017 (has links)
The goal of this thesis is to advance the use of recurrent neural network language models (RNNLMs) for large vocabulary continuous speech recognition (LVCSR). RNNLMs are currently state-of-the-art and have been shown to consistently reduce the word error rates (WERs) of LVCSR tasks compared to other language models. In this thesis we propose various advances to RNNLMs: improved learning procedures, enhanced context, and adaptation. We learned better parameters through a novel pre-training approach and enhanced the context using prosody and syntactic features. We present a pre-training method for RNNLMs in which the output weights of a feed-forward neural network language model (NNLM) are shared with the RNNLM. This is accomplished by first fine-tuning the weights of the NNLM, which are then used to initialise the output weights of an RNNLM with the same number of hidden units. To investigate the effectiveness of the proposed pre-training method, we carried out text-based experiments on the Penn Treebank Wall Street Journal data, and ASR experiments on the TED lectures data. Across the experiments, we observe small but significant improvements in perplexity (PPL) and ASR WER. Next, we present unsupervised adaptation of RNNLMs. We adapted the RNNLMs to a target domain (topic, genre, or television programme) at test time using ASR transcripts from first-pass recognition. We investigated two approaches: in the first, the forward-propagating hidden activations are scaled using learning hidden unit contributions (LHUC); in the second, we adapt all parameters of the RNNLM. We evaluated the adapted RNNLMs by reporting WERs on multi-genre broadcast speech data. We observe small (on average 0.1% absolute) but significant improvements in WER compared to a strong unadapted RNNLM. Finally, we present the context-enhancement of RNNLMs using prosody and syntactic features.
The prosody features were computed from the acoustics of the context words and the syntactic features were from the surface form of the words in the context. We trained the RNNLMs with word duration, pause duration, final phone duration, syllable duration, syllable F0, part-of-speech tag and Combinatory Categorial Grammar (CCG) supertag features. The proposed context-enhanced RNNLMs were evaluated by reporting PPL and WER on two speech recognition tasks, Switchboard and TED lectures. We observed substantial improvements in PPL (5% to 15% relative) and small but significant improvements in WER (0.1% to 0.5% absolute).
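The LHUC adaptation mentioned above scales each hidden unit's activation by a learned per-unit factor while the original network weights stay frozen. A sketch of the forward-pass modification, following the common parameterization that bounds each scale in (0, 2):

```python
import numpy as np

def lhuc_scale(hidden, r):
    """Scale hidden activations with learnable per-unit parameters r.

    The sigmoid times 2 keeps each scale in (0, 2), centred at 1,
    so r = 0 reproduces the unadapted network exactly.
    """
    amplitude = 2.0 / (1.0 + np.exp(-r))
    return amplitude * hidden

hidden = np.array([0.5, -1.0, 2.0])   # toy hidden-layer activations
r = np.zeros(3)                        # unadapted: scale factor 1
adapted = lhuc_scale(hidden, r)
```

During unsupervised adaptation only `r` is updated on the first-pass ASR transcripts, which keeps the number of adapted parameters small and limits overfitting to errorful transcripts.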
9

Employing a Transformer Language Model for Information Retrieval and Document Classification: Using OpenAI's generative pre-trained transformer, GPT-2

Bjöörn, Anton January 2020 (has links)
As the information flow on the Internet keeps growing, it becomes increasingly easy to miss important news that does not have mass appeal. Combating this problem calls for increasingly sophisticated information retrieval methods. Pre-trained transformer-based language models have shown great generalization performance on many natural language processing tasks. This work investigates how well such a language model, OpenAI's Generative Pre-trained Transformer 2 (GPT-2), generalizes to information retrieval and classification of online news articles, written in English, with the purpose of comparing this approach with the more traditional method of Term Frequency-Inverse Document Frequency (TF-IDF) vectorization. The aim is to shed light on how useful state-of-the-art transformer-based language models are for the construction of personalized information retrieval systems. Using transfer learning, the smallest version of GPT-2 is trained to rank and classify news articles, achieving results similar to the purely TF-IDF based approach. While the average Normalized Discounted Cumulative Gain (NDCG) achieved by the GPT-2 based model was about 0.74 percentage points higher, the sample size was too small to give these results high statistical certainty.
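The TF-IDF baseline ranks documents by cosine similarity between query and document vectors. A self-contained illustrative sketch (the thesis presumably used a library implementation):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for tokenised documents."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [["election", "results", "polls"],     # toy news snippets
        ["football", "match", "results"],
        ["stock", "market", "crash"]]
vecs = tfidf_vectors(docs)
query = {"election": 1.0, "polls": 1.0}
ranking = sorted(range(3), key=lambda i: cosine(query, vecs[i]), reverse=True)
```

The GPT-2 based ranker in the thesis replaces this vectorization with learned representations; the evaluation (e.g. NDCG over the ranking) is shared between both approaches.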
10

Resource-efficient image segmentation using self-supervision and active learning

Max, Muriel January 2021 (has links)
Neural networks have been demonstrated to perform well in computer vision tasks, especially in the field of semantic segmentation, where classification is performed on a per-pixel level. Using deep learning can reduce time and effort in comparison to manual segmentation; however, the performance of neural networks highly depends on data quality and quantity, which are costly and time-consuming to obtain, especially for image segmentation tasks. In this work, this problem is addressed by investigating a combined approach of self-supervised pre-training and active learning aimed at selecting the most informative training samples. Experiments were performed using the Gland Segmentation and BraTS 2020 datasets. The results indicate that active learning can increase performance for both datasets when only a small percentage of labeled data is used. Furthermore, self-supervised pre-training improves model robustness and in some cases additionally boosts model performance.
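The active-learning step can be sketched as uncertainty sampling: compute per-pixel prediction entropy, average it over each unlabeled image, and send the most uncertain images for annotation. A minimal sketch, not necessarily the exact acquisition function used in the thesis:

```python
import numpy as np

def select_most_uncertain(prob_maps, k):
    """Rank unlabeled images by mean per-pixel entropy of the predicted
    class probabilities; return indices of the k most uncertain."""
    eps = 1e-12  # avoid log(0)
    scores = []
    for probs in prob_maps:  # probs: (classes, H, W), sums to 1 per pixel
        entropy = -(probs * np.log(probs + eps)).sum(axis=0)
        scores.append(entropy.mean())
    return list(np.argsort(scores)[::-1][:k])

# Toy two-class example: one confident map, one maximally uncertain map
confident = np.zeros((2, 4, 4)); confident[0] = 1.0
uncertain = np.full((2, 4, 4), 0.5)
chosen = select_most_uncertain([confident, uncertain], k=1)
```

After annotation, the chosen images join the labeled pool and the segmentation model is retrained, repeating until the labeling budget is spent.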
