271

Exploring Data Extraction and Relation Identification Using Machine Learning : Utilizing Machine-Learning Techniques to Extract Relevant Information from Climate Reports

Berger, William, Fooladi, Alex, Lindgren, Markus, Messo, Michel, Rosengren, Jonas, Rådmann, Lukas January 2023 (has links)
Ensuring the accessibility of data from Swedish municipal climate reports is necessary for examining climate work in Sweden. Manual data extraction is time-consuming and prone to errors, necessitating automation of the process. This project presents machine-learning techniques that can be used to extract data and information from Swedish municipal climate plans in order to improve the accessibility of climate data. The proposed solution involves recognizing entities in plain text and extracting predefined relations between them, using Named Entity Recognition and Relation Extraction respectively. Owing to the lack of annotated climate datasets in Swedish, the result of the project is a functioning prototype in the medical domain. Nevertheless, the problem remains the same: how to effectively perform data extraction from reports using machine-learning techniques. The presented prototype demonstrates the potential of automating data extraction from reports. These findings imply that the system could be adapted to handle climate reports when a sufficient dataset becomes available. / Making the information compiled in Swedish municipal climate plans accessible is crucial for evaluating and scrutinizing climate work in Sweden. Manual data extraction is time-consuming and complicated, which underscores the need to automate the process. This project explores machine-learning techniques that can be used to extract data and information from the municipal climate plans. The proposed solution uses Named Entity Recognition to identify entities in text and Relation Extraction to extract predefined relations between the entities. In the absence of annotated Swedish datasets in the climate domain, the result of the project is a working prototype in the medical domain. The research question thus remains the same: how machine learning can be used to perform data extraction from reports. The prototype presented demonstrates the potential of automating this kind of data extraction. This success suggests that the model can be adapted to handle climate reports once an adequate dataset becomes available.
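Concretely, the entity-then-relation pipeline the abstract describes can be sketched with the Hugging Face transformers API; the Swedish checkpoint named below and the example sentence are assumptions for illustration, not the thesis's actual model or data:

```python
# A minimal sketch of the two-stage pipeline: recognize entities in plain
# text, then form candidate pairs for a relation classifier. The checkpoint
# name and sentence are illustrative assumptions.
from itertools import combinations

from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="KB/bert-base-swedish-cased-ner",  # assumed Swedish NER checkpoint
    aggregation_strategy="simple",           # merge word pieces into entity spans
)

text = "Kommunen ska minska utsläppen med 50 procent till 2030."
entities = ner(text)

# Each unordered entity pair is a relation candidate; a trained Relation
# Extraction model would then accept or reject each candidate.
candidates = [(a["word"], b["word"]) for a, b in combinations(entities, 2)]

print(entities)
print(candidates)
```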
272

The Influence of Dispositional and Induced Implicit Theories of Personality on the Relationship between Self-Reported Procrastination and Procrastination Behaviors

Shyamsunder, Aarti 17 December 2008 (has links)
No description available.
273

A System for Automatic Information Extraction from Log Files

Chhabra, Anubhav 15 August 2022 (has links)
The development of technology and data-driven systems and applications is constantly revolutionizing our lives. We are surrounded by digitized systems and solutions that are transforming our lives and making them easier. The criticality and complexity behind these systems are immense. To meet user expectations and keep up with business needs, these digital systems should offer high availability, minimal downtime, and resilience against cyber attacks. Hence, system monitoring becomes an integral part of the lifecycle of a digital product or system. System monitoring often includes monitoring and analyzing the logs emitted by systems, which contain information about the events occurring within them. The first step in log analysis is generally to understand and segregate the logical components within a log line, a task termed log parsing. Traditional log parsers use regular expressions and human-defined grammars to extract information from logs. Human experts are required to create, maintain and update the database containing these regular expressions and rules, and they must keep up with the pace at which new products, applications and systems are developed and deployed, since each application or system has its own logs and logging standards. Logs from new sources tend to break existing parsers because none of the expressions match the signature of the incoming logs. For these reasons, traditional log parsers are time-consuming to maintain, prone to errors, and not scalable. Machine-learning-based methodologies, on the other hand, can help us develop solutions that automate the log parsing process without much intervention from human experts. NERLogParser is one such solution; it uses a Bidirectional Long Short-Term Memory (BiLSTM) architecture to frame the log parsing problem as a Named Entity Recognition (NER) problem. There have been recent advancements in the Natural Language Processing (NLP) domain with the introduction of architectures like the Transformer and Bidirectional Encoder Representations from Transformers (BERT); however, these techniques have not been applied to information extraction from log files. This gives us a clear research gap in which to experiment with recent advanced deep learning architectures. This thesis extensively compares machine-learning-based log parsing approaches that frame the log parsing problem as a NER problem. We compare 14 approaches, including three traditional word-based methods: Naive Bayes, Perceptron and Stochastic Gradient Descent; a graphical model: Conditional Random Fields (CRF); a pre-trained sequence-to-sequence model for log parsing: NERLogParser; an attention-based sequence-to-sequence model: the Transformer; three neural language models: BERT, RoBERTa and DistilBERT; two traditional ensembles; and three cascading classifiers formed from the individual classifiers mentioned above. We evaluate the NER approaches using an evaluation framework that offers four evaluation schemes, which not only help compare the NER approaches but also help us assess the quality of the extracted information. The primary goal of this research is to evaluate the NER approaches on logs from new and unseen sources; to the best of our knowledge, no study in the literature evaluates NER methodologies in such a context.
Evaluating NER approaches on unseen logs helps us understand the robustness and generalization capabilities of the various methodologies. To carry out the experimentation, we use In-Scope and Out-of-Scope datasets. The two datasets originate from entirely different sources and are mutually exclusive. The In-Scope dataset is used for training, validation and testing, whereas the Out-of-Scope dataset is used purely to evaluate the robustness and generalization capability of the NER approaches. To better deal with logs from unknown sources, we propose the Log Diversification Unit (LoDU), a unit of our system that carries out log augmentation and enrichment, helping make the NER approaches more robust to new and unseen logs. We segregate our final results on a use-case basis, since different NER approaches may suit different applications. Overall, traditional ensembles perform best in parsing the Out-of-Scope log files, but they may not be the best option for real-time applications. On the other hand, if we want to balance the trade-off between performance and throughput, cascading classifiers can be considered the go-to solution.
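As a minimal sketch of framing log parsing as per-token classification, in the spirit of the word-based baselines compared above (the features, labels, and toy log lines are our own illustrations, not the thesis's setup):

```python
# A toy version of log parsing as per-token classification: each token in
# a log line receives an entity label (DATE, TIME, SERVICE, ...). Features
# and labels are illustrative only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

def token_features(tokens, i):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "has_colon": ":" in tok,
        "position": i,
    }

# One tiny training example; a real setup would use thousands of lines.
tokens = ["Jan", "12", "10:01:33", "sshd", "Failed", "password"]
labels = ["DATE", "DATE", "TIME", "SERVICE", "MESSAGE", "MESSAGE"]

X = [token_features(tokens, i) for i in range(len(tokens))]
clf = make_pipeline(DictVectorizer(), SGDClassifier(random_state=0))
clf.fit(X, labels)

# Tag a log line from an unseen source, token by token.
new = ["Feb", "03", "22:14:09", "cron", "session", "opened"]
print(clf.predict([token_features(new, i) for i in range(len(new))]))
```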
274

Multilingual Transformer Models for Maltese Named Entity Recognition

Farrugia, Kris January 2022 (has links)
The recently developed state-of-the-art models for Named Entity Recognition depend heavily on huge amounts of annotated data. Consequently, it is extremely challenging for data-scarce languages to obtain significant results. Several approaches have been proposed to circumvent this issue, including cross-lingual transfer learning, which leverages the knowledge obtained from resources available in a source language and transfers it to a low-resource target language. Maltese is one of many severely under-resourced languages. The main purpose of this project is to investigate how recently developed multilingual transformer models (Multilingual BERT and XLM-RoBERTa) perform, and ultimately to establish an evaluation benchmark for zero-shot cross-lingual transfer learning for Maltese Named Entity Recognition. The models are fine-tuned on Arabic, English, Italian, Spanish and Dutch. The experiments evaluated the efficacy of the source languages and the use of multilingual data in both the training and validation stages. The experiments demonstrated that feeding multilingual data to both the training and validation phases was mostly beneficial to performance, whereas adding it to the validation phase only was generally detrimental. Furthermore, XLM-R achieved better scores overall; however, employing mBERT with English as the source language yielded the best performance.
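A minimal sketch of the zero-shot setup, assuming the Hugging Face transformers library (the label set and the Maltese example sentence are illustrative, and fine-tuning on the source language is elided):

```python
# Zero-shot cross-lingual transfer, sketched: fine-tune XLM-RoBERTa for
# token classification on source-language NER data, then apply it to
# Maltese text without any Maltese training data. Labels are illustrative.
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          pipeline)

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# ... fine-tune here on e.g. English (or English plus other source
# languages) NER data, for instance with transformers' Trainer ...

# Zero-shot step: the fine-tuned model tags Maltese directly, relying on
# representations shared across languages during multilingual pretraining.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("Marija tgħix il-Belt Valletta."))
```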
275

Stockholms kultur- och ateljéstrategier : En analys av stadens producerande kulturverksamheters lokalisering och behov / Stockholm's Culture and Studio Strategy : An Analysis of the City's Cultural Entities' Localisation and Needs

Landin, Tim, Holmberg, Otto January 2019 (has links)
Culture, from a social-sustainability perspective, is something that Stockholm stad, Region Stockholm and the Swedish Arts Council all value highly, and cultural production is today often seen as an important factor in the creation of a competitive and attractive city. Yet artists in cultural production earn low wages, despite long educations and extensive professional experience. This is reflected in the media, where Stockholm stad is often described as counteracting cultural activity through densification and conversion rather than promoting it. The purpose of this study is to examine the needs of cultural production, both from Stockholm stad's point of view and from the artists'. By examining different cultural strategies, the goal is to gain a clearer overview of the problems surrounding cultural activities. Through a literature study, a document study and a case study, the needs, localisation and conditions of cultural production have been identified. In Stockholm stad's studio strategy, Kulturförvaltningen expresses a goal of 200 new studios by 2020 and argues that these studios should be built in areas that have few or no studios. Today, Stockholm's studios form clusters, with the highest density around östra Södermalm, Liljeholmen, Telefonplan, Slakthusområdet and Hagalund's industrial area. The localisation of culture is constrained by ability to pay and by the current shortage of premises and properties that, at a reasonable price, meet the requirements the activity places on a space and its location. The shortage of production premises has no single cause that can easily be resolved; it is instead a question of budget and priorities that the municipality and the region should review. Politicians must value cultural activity differently than they do today, or cultural production will never have the right conditions in which to develop. / Today, local production of arts and culture is viewed as an important factor in the making of a competitive and attractive city. Stockholm stad, Region Stockholm and the Swedish Arts Council all place a high value on culture from a social-sustainability perspective. Despite this, the majority of people working with making or producing art have a low median salary in comparison to the general population. The media points towards how Stockholm stad contributes to this situation by densifying and converting areas at the expense of maintaining cultural entities and production venues, and of creating the right opportunities for cultural businesses. The purpose of this report is to examine the needs of the individuals, groups and organisations working with the making and production of arts and culture. Various strategies are examined, with the aim of understanding the challenges cultural practitioners and entities face and how these strategies deal with cultural production. As identified through literature, document and case studies, the report outlines the localisation of art studios and cultural entities, their needs, and the opportunities presented by economic and social structures. In Stockholm stad's studio strategy, the Kulturförvaltning expresses an aim to create 200 new studios for artists before 2020; these studios are mainly to be situated in areas which lack, or have no, spaces for art production. Currently, artist studios in Stockholm are concentrated in clusters, mainly in areas such as östra Södermalm, Liljeholmen, Telefonplan, Slakthusområdet and Hagalund's industrial area. The main factors restricting the localisation of studios in the urban space are the shortage of spaces with adequate facilities and the lack of such facilities at an affordable rental cost. There is no simple solution to the shortage of production facilities for artists, since the problem depends on numerous factors. As a matter of city budgets and political prioritisation, it needs to be handled at both the municipal and regional level. Politicians need to value cultural businesses in a new way: for the cultural field to progress and prosper, artists require the right opportunities and their needs to be met from an economic, geographic and social perspective.
276

Information Extraction of Technical Details From Scholarly Articles

Kaushal, Kulendra Kumar 16 June 2021 (has links)
Researchers have made significant progress in information extraction from short documents in the last few years, including social media interactions, news articles, and email excerpts. This research aims to extract technical entities such as hardware resources, computing platforms, compute time, programming languages, and libraries from scholarly research articles. Research articles are generally long documents containing both salient and non-salient entities. Analyzing cross-sectional relations, filtering the relevant information, measuring the saliency of mentioned entities, and extracting novel entities are some of the technical challenges involved in this research. This work presents a detailed study of the performance, effectiveness, and scalability of rule-based, weakly supervised algorithms. We also develop an automated end-to-end Research Entity and Relationship Extractor (E2R Extractor). Additionally, we perform a comprehensive study of the effectiveness of existing deep-learning-based information extraction tools such as DyGIE, DyGIE++, and SciREX. The research also contributes a dataset containing novel entities annotated in BILUO format and presents baseline results obtained with the E2R Extractor on the proposed dataset. The results indicate that the E2R Extractor successfully extracts salient entities from research articles. / Master of Science / Information extraction is the process of automatically extracting meaningful information from unstructured text, such as articles and news feeds, and presenting it in a structured format. Researchers have made significant progress in this domain over the past few years. However, their work has primarily focused on short documents such as social media interactions, news articles, and email excerpts, not on long documents such as scholarly articles and research papers. Long documents contain a lot of redundant data, so filtering and extracting the meaningful information is quite challenging. This work focuses on extracting entities such as hardware resources, compute platforms, and programming languages used in scholarly articles. We present a deep-learning-based model to extract such entities from research articles and evaluate its performance against simple rule-based algorithms and other state-of-the-art models. Our work also contributes a labeled dataset containing the entities mentioned above, along with the results obtained on this dataset using our deep-learning model.
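For illustration, BILUO tags of the kind used in the contributed dataset can be derived from character-offset annotations with spaCy (the sentence, spans, and label names below are invented examples, not items from the dataset):

```python
# A minimal sketch of BILUO annotation: convert character-offset entity
# spans into per-token B/I/L/U/O tags. Spans and labels are invented.
import spacy
from spacy.training import offsets_to_biluo_tags

nlp = spacy.blank("en")
text = "The model was trained on an NVIDIA V100 GPU using PyTorch."
entities = [(28, 43, "HARDWARE"), (50, 57, "LIBRARY")]  # (start, end, label)

doc = nlp(text)
print(offsets_to_biluo_tags(doc, entities))
# -> ['O', 'O', 'O', 'O', 'O', 'O', 'B-HARDWARE', 'I-HARDWARE',
#     'L-HARDWARE', 'O', 'U-LIBRARY', 'O']
```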
277

Continuous Multidisciplinary Care for Patients With Orofacial Clefts—Should the Follow-up Interval Depend on the Cleft Entity?

Sander, Anna K., Grau, Elisabeth, Kloss-Brandstätter, Anita, Zimmerer, Rüdiger, Neuhaus, Michael, Bartella, Alexander K., Lethaus, Bernd 26 October 2023 (has links)
Objective: The multidisciplinary follow-up of patients with cleft lip with or without palate (CL/P) is organized differently in specialized centers worldwide. The aim of this study was to evaluate the different treatment needs of patients with different manifestations of CL/P and to potentially adapt the frequency and timing of checkup examinations accordingly. Design: We retrospectively analyzed the data of all patients attending the CL/P consultation hour at a tertiary care center between June 2005 and August 2020 (n=1126). We defined 3 groups of cleft entities: (1) isolated clefts of the lip or the lip and alveolus (CL/A), (2) isolated clefts of the hard and/or soft palate, and (3) complete clefts of the lip, alveolus and palate (CLP). Timing and type of therapy recommendations given by the specialists of the different disciplines were analyzed for statistical differences. Results: Patients with CLP made up the largest group (n=537), followed by patients with clefts of the soft palate (n=371) and CL/A (n=218). There were significant differences between the groups with regard to the type and frequency of treatment recommendations. A therapy was recommended in a high proportion of examinations in all groups at all ages. Conclusion: Although there are differences between cleft entities, the treatment need of patients with orofacial clefts is generally high during the growth period. Patients with CL/A showed a similarly high treatment demand and should be monitored closely. Close follow-up of patients diagnosed with CL/P is crucial, and measures should be taken to increase participation in follow-up appointments.
278

Designing Microservices with Use Cases and UML

Akhil Reddy, Bommareddy 03 August 2023 (has links)
No description available.
279

Belief Revision in Dynamic Abducers through Meta-Abduction

Bharathan, Vivek 14 September 2010 (has links)
No description available.
280

Identifying Single and Stacked News Triangles in Online News Articles - An Analysis of 31 Danish Online News Articles Annotated by 68 Journalists

Njor, Miklas January 2015 (has links)
While news articles for print use a single News Triangle, with the most important information at the top of the article, online news articles are supposed to use a series of Stacked News Triangles, owing to online readers' text-skimming habits [1]. To identify the presence of Stacked News Triangles, we analyse how 68 Danish journalists annotated 31 articles, using keyword frequency as the measure of popularity. To explore whether Named Entities influence the presence of News Triangles, we analyse the Named Entities found in the articles and keywords. We find an overall News Triangle in 30 of the 31 articles, while 14 of the 31 articles have Stacked News Triangles. For Named Entities in News Triangles, we cannot determine what their influence is; nonetheless, we find differences in Named Entity types across the categories (Culture, Domestic, Economy, Sports).
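As a rough illustration of the idea (our own simplification, not the paper's exact procedure), a single News Triangle can be checked by testing whether the density of annotated keywords declines from the top of an article to the bottom:

```python
# A toy check for a single News Triangle: keyword density per paragraph
# should be roughly non-increasing from top to bottom. The article and
# keyword set are invented for illustration.
from collections import Counter

def keyword_density(paragraph, keywords):
    tokens = paragraph.lower().split()
    counts = Counter(tokens)
    return sum(counts[k] for k in keywords) / max(len(tokens), 1)

paragraphs = [  # article body, top to bottom
    "Storm surge floods Copenhagen as storm surge peaks",
    "The storm surge flooded several metro stations",
    "Residents were advised to avoid the harbour area",
    "The city will review its defences next year",
]
keywords = {"storm", "surge", "flooded", "floods"}  # annotated as important

densities = [keyword_density(p, keywords) for p in paragraphs]
single_triangle = all(a >= b for a, b in zip(densities, densities[1:]))
print(densities, single_triangle)
```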
