1

A Surprise for Horwich (and Some Advocates of the Fine-Tuning Argument (Which Does Not Include Horwich (as Far as I Know)))

Harker, David 01 November 2012 (has links)
The judgment that a given event is epistemically improbable is necessary but insufficient for us to conclude that the event is surprising. Paul Horwich has argued that surprising events are, in addition, more probable given alternative background assumptions that are not themselves extremely improbable. I argue that Horwich's definition fails to capture important features of surprises and offer an alternative definition that accords better with intuition. An important application of Horwich's analysis has arisen in discussions of fine-tuning arguments. In the second part of the paper I consider the implications for this argument of employing my definition of surprise. I argue that advocates of fine-tuning arguments are not justified in attaching significance to the fact that we are surprised by examples of fine-tuning.
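The probabilistic condition reported here can be written out explicitly. The following is only a hedged, informal rendering of the sentence above (E is the event, B the accepted background assumptions, B' a rival background assumption), not Horwich's own formulation:

```latex
% Hedged, informal rendering of the condition described in the abstract:
% E is surprising relative to background B roughly when E is improbable on B,
% and some rival background B' that is not itself extremely improbable makes
% E markedly more probable.
\[
  P(E \mid B) \ll 1
  \qquad\text{and}\qquad
  \exists\, B' :\; P(B') \not\ll 1
  \;\;\text{and}\;\;
  P(E \mid B') \gg P(E \mid B).
\]
```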
2

[en] SUMARIZATION OF HEALTH SCIENCE PAPERS IN PORTUGUESE / [pt] SUMARIZAÇÃO DE ARTIGOS CIENTÍFICOS EM PORTUGUÊS NO DOMÍNIO DA SAÚDE

DAYSON NYWTON C R DO NASCIMENTO 30 October 2023 (has links)
[pt] Neste trabalho, apresentamos um estudo sobre o fine-tuning de um LLM (Modelo de Linguagem Amplo ou Large Language Model) pré-treinado para a sumarização abstrativa de textos longos em português. Para isso, construímos um corpus contendo uma coleção de 7.450 artigos científicos na área de Ciências da Saúde em português. Utilizamos esse corpus para o fine-tuning do modelo BERT pré-treinado para o português brasileiro (BERTimbau). Em condições semelhantes, também treinamos um segundo modelo baseado em Memória de Longo Prazo e Recorrência (LSTM) do zero, para fins de comparação. Nossa avaliação mostrou que o modelo ajustado obteve pontuações ROUGE mais altas, superando o modelo baseado em LSTM em 30 pontos no F1-score. O fine-tuning do modelo pré-treinado também se destaca em uma avaliação qualitativa feita por avaliadores a ponto de gerar a percepção de que os resumos gerados poderiam ter sido criados por humanos em uma coleção de documentos específicos do domínio das Ciências da Saúde. / [en] In this work, we present a study on the fine-tuning of a pre-trained Large Language Model for abstractive summarization of long texts in Portuguese. To do so, we built a corpus gathering a collection of 7,450 public Health Sciences papers in Portuguese. We fine-tuned a BERT model pre-trained for Brazilian Portuguese (BERTimbau) with this corpus. Under similar conditions, we also trained a second model, based on Long Short-Term Memory (LSTM), from scratch for comparison purposes. Our evaluation showed that the fine-tuned model achieved higher ROUGE scores, outperforming the LSTM-based model by 30 points in F1-score. The fine-tuned model also stands out in a qualitative evaluation performed by human assessors, to the point that the generated summaries were perceived as possibly written by humans, on a domain-specific collection of Health Sciences documents.
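The abstract gives no implementation details; as a rough illustration of the kind of setup it describes, the sketch below (assuming the Hugging Face transformers library and the public BERTimbau checkpoint neuralmind/bert-base-portuguese-cased) wires a pre-trained BERT encoder and decoder into a sequence-to-sequence model that could then be fine-tuned on article/abstract pairs. It is a sketch of the general approach, not the thesis's actual training code.

```python
# Hedged sketch only: a BERT2BERT setup of the general kind described above.
from transformers import BertTokenizerFast, EncoderDecoderModel

checkpoint = "neuralmind/bert-base-portuguese-cased"  # assumed BERTimbau checkpoint
tokenizer = BertTokenizerFast.from_pretrained(checkpoint)
model = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint, checkpoint)

# Generation settings required when a BERT checkpoint is reused as the decoder
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# After fine-tuning on (article, abstract) pairs, summarization would look like:
article = "Texto longo de um artigo científico da área da saúde..."
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```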
3

Bimodal Automatic Speech Segmentation And Boundary Refinement Techniques

Akdemir, Eren 01 March 2010 (has links) (PDF)
Automatic segmentation of speech is essential for building large speech databases to be used in speech processing applications. This study proposes a bimodal automatic speech segmentation system that uses either articulator motion information (AMI) or visual information obtained by a camera, in collaboration with auditory information. The presence of the visual modality has been shown to be very beneficial in speech recognition applications, improving the performance and noise robustness of those systems. In this dissertation a significant increase in the performance of the automatic speech segmentation system is achieved by using a bimodal approach. Automatic speech segmentation systems have a tradeoff between precision and the resulting number of gross errors. Boundary refinement techniques are used in order to increase the precision of these systems without decreasing system performance. Two novel boundary refinement techniques are proposed in this thesis: a hidden Markov model (HMM) based fine-tuning system and an inverse filtering based fine-tuning system. The segment boundaries obtained by the bimodal speech segmentation system are improved further by using these techniques. To fulfill these goals, a complete two-stage automatic speech segmentation system is produced and tested on two different databases. A phonetically rich Turkish audiovisual speech database, which contains acoustic data and camera recordings of 1600 Turkish sentences uttered by a male speaker, is built from scratch in order to be used in the experiments. The visual features of the recordings are extracted, and manual phonetic alignment of the database is done to be used as a ground truth for the performance tests of the automatic speech segmentation systems.
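To illustrate what boundary refinement means in general, the sketch below snaps each rough segment boundary to the frame of maximum local spectral change within a small window. This is an illustrative stand-in, not the HMM-based or inverse-filtering-based methods proposed in the thesis; the feature choice and window size are assumptions.

```python
# Illustrative boundary refinement: search a small window around each initial
# boundary and keep the frame with the largest spectral change.
import numpy as np

def refine_boundaries(features, boundaries, window=5):
    """features: (n_frames, n_dims) frame-level features (e.g. MFCCs);
    boundaries: initial boundary frame indices; window: search radius in frames."""
    # Frame-to-frame spectral change
    change = np.linalg.norm(np.diff(features, axis=0), axis=1)
    refined = []
    for b in boundaries:
        lo = max(b - window, 0)
        hi = min(b + window, len(change) - 1)
        refined.append(lo + int(np.argmax(change[lo:hi + 1])))
    return refined

# Example with synthetic features: 100 frames of 13-dimensional "MFCCs"
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 13))
print(refine_boundaries(feats, boundaries=[20, 50, 80]))
```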
4

Automatic fine tuning of cavity filters / Automatisk finjustering av kavitetsfilter

Boyer de la Giroday, Anna January 2016 (has links)
Cavity filters are a necessary component in base stations used for telecommunication. Without these filters it would not be possible for base stations to send and receive signals at the same time. Today these cavity filters require fine-tuning by humans before they can be deployed. In this thesis, a neural network that can tune cavity filters has been designed and implemented. Different types of design parameters have been evaluated, such as neural network architecture, data presentation and data preprocessing. While the results were not comparable to human fine-tuning, a relationship between the error and the number of weights in the neural network was shown. The thesis also presents some rules of thumb for future designs of neural networks used for filter tuning.
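As a purely hypothetical sketch of the general idea (the input features, target quantities, and network size are assumptions, not the thesis's setup), one could frame filter tuning as a regression from a measured frequency response to tuning-screw adjustments:

```python
# Hypothetical sketch: map a sampled filter response to screw adjustments
# with a small MLP. All shapes and targets are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_freq_points, n_screws = 500, 128, 6

X = rng.normal(size=(n_samples, n_freq_points))   # e.g. sampled S21 magnitude (dB)
y = rng.normal(size=(n_samples, n_screws))        # required screw adjustments (turns)

model = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=500, random_state=0)
model.fit(X, y)

# Predict adjustments for a newly measured, detuned filter
new_response = rng.normal(size=(1, n_freq_points))
print(model.predict(new_response))
```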
5

Marine Habitat Mapping Using Image Enhancement Techniques & Machine Learning

Mureed, Mudasar January 2022 (has links)
The mapping of habitats is the first step in policies that target the environment, as well as in spatial planning and management. Biodiversity plans are always centered around habitats. Therefore, constant monitoring of these delicate species in terms of health, changes, and extinction is a must in biodiversity plans. Human activities are constantly growing, resulting in the extinction of land and marine habitats. Land habitats are being destroyed by air pollution and the cutting of forests, while marine habitats are being destroyed due to acidification of ocean waters, industrial waste materials, and pollution. The author has focused on aquatic habitats in this dissertation, mainly coral reefs. An estimated 27% of coral reef ecosystems have been destroyed, and a further 30% are at risk of being damaged in the coming years. Coral reefs occupy 1% of the ocean floor, and yet they provide a home to 30% of marine organisms. To analyze the health of these aquatic habitats, they need to be assessed through habitat mapping. Habitat mapping shows the geographic distribution of different habitats within a particular area. Marine habitats are typically mapped using camera imagery. The quality of underwater images suffers from the characteristics of the marine environment, resulting in blurry images or images containing particles that cover many parts of the scene. To overcome this, underwater image enhancement algorithms are used to preprocess images beforehand. There are many underwater image enhancement algorithms that target different characteristics of the marine environment, but there is no consensus among researchers about a single underwater technique that can be used for any marine dataset. In this dissertation, multiple experiments on seven popular image enhancement techniques were conducted to reach a decision about a single underwater approach for all datasets. The datasets include EILAT, EILAT2, RSMAS, and MLC08. Two state-of-the-art deep convolutional neural networks for habitat mapping, DenseNet and MobileNet, were also tested. The best results were achieved with the combination of Contrast Limited Adaptive Histogram Equalization (CLAHE) as the underwater image enhancement technique and DenseNet as the deep convolutional network.
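A minimal sketch of the winning combination reported above, assuming OpenCV and a recent torchvision: CLAHE applied to the lightness channel as the underwater enhancement step, followed by a DenseNet classifier with its head resized for habitat classes. The file name and class count are placeholders.

```python
# Hedged sketch: CLAHE enhancement followed by a DenseNet habitat classifier.
import cv2
import torch
import torchvision

def clahe_enhance(bgr_image, clip_limit=2.0, tile_grid_size=(8, 8)):
    """Apply CLAHE to the L channel of a BGR underwater image."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    enhanced = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)

image = clahe_enhance(cv2.imread("coral_patch.png"))  # placeholder path

# DenseNet-121 with its classifier head replaced for N habitat classes
num_classes = 8  # placeholder
model = torchvision.models.densenet121(weights="IMAGENET1K_V1")
model.classifier = torch.nn.Linear(model.classifier.in_features, num_classes)
```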
6

[pt] SUMARIZAÇÃO AUTOMÁTICA DE MULTIPLAS AVALIAÇÕES UTILIZANDO AJUSTE FINO DE MODELOS DE LINGUAGEM TRANSFORMERS / [en] UNSUPERVISED MULTI-REVIEW SUMMARIZATION USING FINE-TUNED TRANSFORMER LANGUAGE MODELS

LUCAS ROBERTO DA SILVA 05 July 2021 (has links)
[pt] Sumarização automática é a tarefa de gerar resumos concisos, corretos e com consistência factual. A tarefa pode ser aplicada a diversos estilos textuais, dentre eles notícias, publicações acadêmicas e avaliações de produtos ou lugares. A presente dissertação aborda a sumarização de múltiplas avaliações. Esse tipo de aplicação se destaca por sua natureza não supervisionada e pela necessidade de lidar com a redundância das informações presentes nas avaliações. Os trabalhos de sumarização automática são avaliados utilizando a métrica ROUGE, que se baseia na comparação de n-gramas entre o texto de referência e o resumo gerado. A falta de dados supervisionados motivou a criação da arquitetura MeanSum, que foi a primeira arquitetura de rede neural baseada em um modelo não supervisionado para essa tarefa. Ela é baseada em auto-encoder e foi estendida por outros trabalhos, porém nenhum deles apresentou os efeitos do uso do mecanismo de atenção e tarefas auxiliares durante o treinamento do modelo. O presente trabalho é dividido em duas etapas. A primeira trata de um experimento no qual extensões à arquitetura do MeanSum foram propostas para acomodar mecanismos de atenção e tarefas auxiliares de classificação de sentimento. Ainda nessa etapa, explora-se o uso de dados sintéticos para adaptar modelos supervisionados a tarefas não supervisionadas. Na segunda etapa, os resultados obtidos anteriormente foram utilizados para realizar um estudo sobre o uso de ajuste fino (fine-tuning) de modelos de linguagem Transformers pré-treinados. A utilização desses modelos mostrou ser uma alternativa promissora para enfrentar a natureza não supervisionada do problema, apresentando um desempenho de + 4 ROUGE quando comparado a trabalhos anteriores. / [en] Automatic summarization is the task of generating concise, correct, and factually consistent summaries. The task can be applied to different textual styles, including news, academic publications, and product or place reviews. This dissertation addresses the summarization of multiple reviews. This type of application stands out for its unsupervised nature and the need to deal with the redundancy of the information present in the reviews. Automatic summarization works are evaluated using the ROUGE metric, which is based on the comparison of n-grams between the reference text and the generated summary. The lack of supervised data motivated the creation of the MeanSum architecture, which was the first neural network architecture based on an unsupervised model for this task. It is based on an auto-encoder and has been extended by other works, but none of them explored the effects of using attention mechanisms and auxiliary tasks during training. The present work is divided into two parts: the first deals with an experiment in which we extend the MeanSum architecture, adding attention mechanisms and auxiliary sentiment classification tasks. In the same experiment, we explore synthetic data to adapt supervised models to unsupervised tasks. In the second part, we use the results previously obtained to carry out a study on fine-tuning pre-trained Transformer language models. The use of these models proved to be a promising alternative for addressing the unsupervised nature of the problem, outperforming previous works by +4 ROUGE.
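The abstract describes ROUGE as an n-gram comparison between a reference text and a generated summary. A minimal illustration using the rouge-score package (an assumption; the dissertation does not state which implementation it used):

```python
# Minimal ROUGE illustration with the `rouge-score` package.
from rouge_score import rouge_scorer

reference = "the hotel staff were friendly and the rooms were clean"
generated = "friendly staff and clean rooms"

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, generated)
for name, result in scores.items():
    print(f"{name}: precision={result.precision:.2f} "
          f"recall={result.recall:.2f} f1={result.fmeasure:.2f}")
```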
7

Radiative alpha capture on carbon-12

Gan, Ling 08 December 2023 (has links) (PDF)
In this thesis, we used Effective Field Theory (EFT) to calculate radiative alpha capture on 12C. This reaction is considered the “holy grail” of nuclear astrophysics because it determines the relative abundance of 16O and 12C. We considered the E1 transition from the initial p wave at energies around the Gamow energy of 0.3 MeV. The theoretical formula for the cross section is obtained by fitting the EFT parameters to the phase shift and S-factor data. We find that the Effective Range Expansion (ERE) parameters describing the p-wave phase shift are fine-tuned. The shallow bound state and the resonant p-wave states are also described.
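For context, the standard definitions behind the quantities named above (the astrophysical S-factor and the p-wave effective range expansion), given here in their textbook form rather than the thesis's specific parametrization:

```latex
% Standard definitions (not the thesis's specific parametrization).
% Astrophysical S-factor: removes the Coulomb-barrier exponential from the
% cross section, with Sommerfeld parameter \eta = Z_1 Z_2 e^2 / (\hbar v).
\[
  S(E) = E\,\sigma(E)\,e^{2\pi\eta(E)}
\]
% p-wave (l = 1) effective range expansion of the elastic phase shift
% (for charged particles, the Coulomb-modified version adds known Coulomb
% functions):
\[
  p^{3}\cot\delta_1(p) = -\frac{1}{a_1} + \frac{r_1}{2}\,p^{2} + \mathcal{O}(p^{4})
\]
```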
8

Land Use and Land Cover Classification Using Deep Learning Techniques

January 2016 (has links)
abstract: Large datasets of sub-meter aerial imagery represented as orthophoto mosaics are widely available today, and these datasets may hold a great deal of untapped information. This imagery has the potential to locate several types of features, for example forests, parking lots, airports, residential areas, or freeways. However, the appearance of these features varies based on many factors, including the time the image is captured, the sensor settings, the processing done to rectify the image, and the geographical and cultural context of the region captured by the image. This thesis explores the use of deep convolutional neural networks to classify land use from very high spatial resolution (VHR), orthorectified, visible band multispectral imagery. Recent technological and commercial applications have driven the collection of a massive amount of VHR images in the visible red, green, and blue (RGB) spectral bands; this work explores the potential for deep learning algorithms to exploit this imagery for automatic land use / land cover (LULC) classification. The benefits of automatic visible band VHR LULC classification may include applications such as automatic change detection or mapping. Recent work has shown the potential of deep learning approaches for land use classification; however, this thesis improves on the state of the art by applying additional dataset augmentation approaches that are well suited for geospatial data. Furthermore, the generalizability of the classifiers is tested by extensively evaluating them on unseen datasets, and the accuracy levels of the classifiers are presented in order to show that the results actually generalize beyond the small benchmarks used in training. Deep networks have many parameters, and therefore they are often built with very large sets of labeled data. Suitably large datasets for LULC are not easy to come by, but techniques such as refinement learning allow networks trained for one task to be retrained to perform another recognition task. Contributions of this thesis include demonstrating that deep networks trained for image recognition in one task (ImageNet) can be efficiently transferred to remote sensing applications and perform as well as or better than manually crafted classifiers without requiring massive training datasets. This is demonstrated on the UC Merced dataset, where 96% mean accuracy is achieved using a CNN (Convolutional Neural Network) and 5-fold cross validation. These results are further tested on unrelated VHR images at the same resolution as the training set. / Dissertation/Thesis / Masters Thesis Computer Science 2016
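A hedged sketch of the transfer-learning recipe the abstract describes: take an ImageNet-pretrained CNN and retrain only a new classification head for the 21 UC Merced land-use classes. The backbone choice (ResNet-50) and hyperparameters are assumptions, not the thesis's actual configuration.

```python
# Hedged transfer-learning sketch: ImageNet-pretrained backbone, new LULC head.
import torch
import torchvision

num_classes = 21  # UC Merced land-use categories
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")

# Freeze the pretrained feature extractor; only the new head will be trained
for param in model.parameters():
    param.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB tiles
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```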
9

Klasifikace vztahů mezi pojmenovanými entitami v textu / Classification of Relations between Named Entities in Text

Ondřej, Karel January 2020 (has links)
This master thesis deals with the extraction of relationships between named entities in text. In the theoretical part of the thesis, the issue of natural language representation for machine processing is discussed. Subsequently, two partial tasks of relationship extraction are defined, namely named entity recognition and classification of relationships between entities, including a summary of state-of-the-art solutions. In the practical part of the thesis, a system for automatic extraction of relationships between named entities from downloaded pages is designed. The classification of relationships between entities is based on pre-trained transformers. In this thesis, four pre-trained transformers are compared, namely BERT, XLNet, RoBERTa and ALBERT.
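A minimal sketch of loading the four pre-trained transformers named above as relation classifiers via the Hugging Face Auto classes. The checkpoint names, label count, and entity-marker format are assumptions for illustration, not the thesis's setup.

```python
# Hedged sketch: compare pre-trained transformers as relation classifiers.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoints = {
    "BERT": "bert-base-cased",
    "XLNet": "xlnet-base-cased",
    "RoBERTa": "roberta-base",
    "ALBERT": "albert-base-v2",
}
num_relation_labels = 5  # placeholder number of relation classes

models = {}
for name, ckpt in checkpoints.items():
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForSequenceClassification.from_pretrained(
        ckpt, num_labels=num_relation_labels
    )
    models[name] = (tokenizer, model)

# A sentence with (hypothetical) entity markers would then be classified:
tokenizer, model = models["BERT"]
inputs = tokenizer("[E1] Prague [/E1] is the capital of [E2] the Czech Republic [/E2].",
                   return_tensors="pt")
print(model(**inputs).logits.shape)  # (1, num_relation_labels)
```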
10

Information Extraction for Test Identification in Repair Reports in the Automotive Domain

Jie, Huang January 2023 (has links)
The knowledge of tests conducted on a problematic vehicle is essential for enhancing the efficiency of mechanics. Therefore, identifying the tests performed in each repair case is of utmost importance. This thesis explores techniques for extracting data from unstructured repair reports to identify component tests. The main emphasis is on developing a supervised multi-class classifier to categorize data and extract sentences that describe repair diagnoses and actions. It has been shown that incorporating a category-aware contrastive learning objective can improve the repair report classifier’s performance. The proposed approach involves training a sentence representation model based on a pre-trained model using a category-aware contrastive learning objective. Subsequently, the sentence representation model is further trained on the classification task using a loss function that combines the cross-entropy and supervised contrastive learning losses. By applying this method, the macro F1-score on the test set is increased from 90.45 to 90.73. The attempt to enhance the performance of the repair report classifier using a noisy data classifier proves unsuccessful. The noisy data classifier is trained using a prompt-based fine-tuning method, incorporating open-ended questions and two examples in the prompt. This approach achieves an F1-score of 91.09 and the resulting repair report classification datasets are found easier to classify. However, they do not contribute to an improvement in the repair report classifier’s performance. Ultimately, the repair report classifier is utilized to aid in creating the input necessary for identifying component tests. An information retrieval method is used to conduct the test identification. The incorporation of this classifier and the existing labels when creating queries leads to an improvement in the mean average precision at the top 3, 5, and 10 positions by 0.62, 0.81, and 0.35, respectively, although with a slight decrease of 0.14 at the top 1 position.
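A compact sketch of the loss combination described above: a supervised (category-aware) contrastive term added to the usual cross-entropy. The implementation below is a generic supervised contrastive loss with assumed shapes, temperature, and weighting, not the thesis's exact formulation.

```python
# Generic sketch: cross-entropy plus a supervised contrastive loss over
# sentence embeddings; all shapes and weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, d) sentence vectors; labels: (N,) class ids."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))          # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return per_anchor[pos_mask.any(dim=1)].mean()            # anchors with positives

emb = torch.randn(8, 128, requires_grad=True)     # dummy sentence embeddings
logits = torch.randn(8, 3, requires_grad=True)    # dummy classifier outputs
labels = torch.randint(0, 3, (8,))

total = F.cross_entropy(logits, labels) + 0.5 * supervised_contrastive_loss(emb, labels)
total.backward()
```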
