  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Superparsing with Improved Segmentation Boundaries through Nonparametric Context

Pan, Hong January 2015 (has links)
Scene parsing, or segmenting all the objects in an image and identifying their categories, is one of the core problems of computer vision. In order to achieve an object-level semantic segmentation, we build upon the recent superparsing approach by Tighe and Lazebnik, which is a nonparametric solution to the image labeling problem. Superparsing consists of four steps. First, for a new query image, the most similar images from the training dataset of labeled images are retrieved based on global features. In the second step, the query image is segmented into superpixels and 20 different local features are computed for each superpixel. We propose to use the SLICO segmentation method, which allows control of the size, shape and compactness of the superpixels, because SLICO is able to produce accurate boundaries. After all superpixel features have been extracted, feature-based matching of superpixels is performed to find the nearest-neighbour superpixels in the retrieval set for each query superpixel. Based on the neighbouring superpixels, a likelihood score for each class is calculated. Finally, we formulate a Conditional Random Field (CRF) using the likelihoods and a pairwise cost, both computed from nonparametric estimation, to optimize the labeling of the image. Specifically, we define a novel pairwise cost that provides stronger semantic contextual constraints by incorporating the similarity of adjacent superpixels depending on local features. The optimized labeling obtained with the CRF groups superpixels with the same labels together, generating segmentation results that also identify the categories of objects in an image. We evaluate our improvements to the superparsing approach using segmentation evaluation measures as well as the per-pixel rate and average per-class rate in a labeling evaluation. We demonstrate the success of our modified approach on the SIFT Flow dataset, and compare our results with the basic superparsing method proposed by Tighe and Lazebnik.
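The nearest-neighbour likelihood step described above can be sketched as follows. This is an illustrative smoothed ratio estimator in the spirit of superparsing — how over-represented is each class among the retrieved neighbours, relative to its overall frequency — not the authors' exact code; all names and numbers are invented for the example.

```python
from collections import Counter
from math import log

def class_likelihood_scores(neighbor_labels, class_frequencies, smoothing=1.0):
    """Nonparametric class scores for one query superpixel: log-ratio of the
    (smoothed) class frequency among retrieved neighbours to the class prior
    over the whole training set. Illustrative sketch only."""
    counts = Counter(neighbor_labels)
    n = len(neighbor_labels)
    scores = {}
    for c, prior in class_frequencies.items():
        # P(c | neighbours) with additive smoothing, divided by prior P(c)
        p_local = (counts.get(c, 0) + smoothing) / (n + smoothing * len(class_frequencies))
        scores[c] = log(p_local / prior)
    return scores

# Toy example: 10 retrieved neighbours, mostly labeled "sky"
scores = class_likelihood_scores(
    ["sky"] * 7 + ["tree"] * 2 + ["road"],
    {"sky": 0.3, "tree": 0.4, "road": 0.3},
)
best = max(scores, key=scores.get)
```

In the full pipeline these per-class scores become the unary terms of the CRF, with the pairwise cost encouraging adjacent, similar superpixels to share a label.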
92

Security Assessment and *nix Package Vulnerabilities

Sandgren, Per January 2018 (has links)
Background. Vulnerabilities in software provide attackers with the means to carry out unlawful actions. Since software holds so much power, exploiting a vulnerability can grant an attacker unauthorized capabilities. Because vulnerabilities are the keys that let attackers in, they must be discovered and mitigated. Scanning vulnerable machines is not enough: the scan results must also be parsed in order to prioritize vulnerability mitigation and conduct a security assessment. Objectives. The first objective is to create a parser, a tool that takes input, filters it, and produces output in the form the parser specifies. The second objective is to have the parser connect found packages to known vulnerabilities. The last objective is to have the parser enrich the output with more information, sort findings by severity, and indicate in which areas they are vulnerable. Methods. Interviews are conducted with experienced employees at Truesec AB. A parser is implemented with guidance from the supervisor at Truesec, and experiments are run to check the parser's practicality. Results. The parser finds vulnerabilities in the CentOS tests but none in the Debian tests. The interviews show that more information strengthens a security assessment; expanding the scan results therefore provides more information to the person(s) conducting the assessment. Conclusions. The amount of information gathered in a security assessment needs to be expanded to make the assessment more reliable. Found packages can be connected to vulnerabilities by matching them against a vulnerability database. The parser developed here does not yet help in security assessment because its output is not reliable enough, which is caused by the phenomenon of backporting.
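The package-to-vulnerability matching the objectives describe can be sketched roughly as below. The input format, the database layout, and the naive exact-version match are assumptions made for illustration; as the conclusions note, real matching must handle version ranges and backported fixes, which is exactly where a parser like this becomes unreliable.

```python
def parse_packages(scan_output):
    """Parse 'name<TAB>version' lines from a package scan into a dict.
    (Minimal sketch; the thesis' actual input format is not restated here.)"""
    packages = {}
    for line in scan_output.strip().splitlines():
        name, version = line.split("\t")
        packages[name] = version
    return packages

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def match_vulnerabilities(packages, vuln_db):
    """Join parsed packages against a known-vulnerability table and sort the
    findings by severity, most severe first."""
    findings = []
    for name, version in packages.items():
        for vuln_version, cve, severity in vuln_db.get(name, []):
            if version == vuln_version:  # naive exact match; real matching
                findings.append((severity, name, cve))  # must handle ranges
    return sorted(findings, key=lambda f: SEVERITY_ORDER[f[0]])

# Hypothetical scan output and database entries
pkgs = parse_packages("openssl\t1.0.1e\nbash\t4.2.45\n")
db = {"openssl": [("1.0.1e", "CVE-2014-0160", "high")],
      "bash": [("4.2.45", "CVE-2014-6271", "critical")]}
findings = match_vulnerabilities(pkgs, db)
```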
93

Induction, Training, and Parsing Strategies beyond Context-free Grammars

Gebhardt, Kilian 03 July 2020 (has links)
This thesis considers the problem of assigning a sentence its syntactic structure, which may be discontinuous. It proposes a class of models based on probabilistic grammars that are obtained by the automatic refinement of a given grammar. Different strategies for parsing with a refined grammar are developed. The induction, refinement, and application of two types of grammars (linear context-free rewriting systems and hybrid grammars) are evaluated empirically on two German and one Dutch corpus.
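As background for the parsing strategies mentioned above, the context-free baseline — Viterbi parsing with a probabilistic grammar — can be illustrated with a minimal CKY recogniser; linear context-free rewriting systems generalize this chart scheme to discontinuous spans. The toy grammar and probabilities below are invented for the example and are not from the thesis.

```python
def cky_best(words, lexicon, rules):
    """Viterbi CKY for a binary PCFG: chart[i][j] maps each nonterminal to
    the best probability of deriving words[i:j]. Toy sketch of the
    continuous (context-free) special case."""
    n = len(words)
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):  # lexical entries fill the diagonal
        for nt, p in lexicon.get(w, {}).items():
            chart[i][i + 1][nt] = p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for (lhs, b, c), p in rules.items():
                    if b in chart[i][k] and c in chart[k][j]:
                        score = p * chart[i][k][b] * chart[k][j][c]
                        if score > chart[i][j].get(lhs, 0.0):
                            chart[i][j][lhs] = score
    return chart[0][n]

lexicon = {"she": {"NP": 1.0}, "eats": {"V": 1.0}, "fish": {"NP": 1.0}}
rules = {("S", "NP", "VP"): 1.0, ("VP", "V", "NP"): 1.0}
result = cky_best(["she", "eats", "fish"], lexicon, rules)
```

Grammar refinement in this setting splits nonterminals (e.g. NP into NP[1], NP[2], ...) and re-estimates rule probabilities, which leaves the chart algorithm unchanged but enlarges the grammar.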
94

Using a Cognitive Architecture in Incremental Sentence Processing

McGhee, Jeremiah Lane 10 December 2012 (has links)
XNL-Soar is a specialized implementation of the Soar cognitive architecture. The version of XNL-Soar described in this thesis builds upon and extends prior research (Lewis, 1993; Rytting, 2000) using Soar for natural language processing. This thesis describes the updates made to operators creating syntactic structure and the improved coverage of syntactic phenomena. It describes the addition of semantic structure building capability. This thesis also details the implementation of semantic memory and describes two experiments utilizing semantic memory in structural disambiguation. This thesis shows that XNL-Soar, as currently instantiated, resolves ambiguities common in language using strategies and resources including: reanalysis via snip operators, use of data-driven techniques with annotated corpora, and complex part-of-speech and word sense processing based on WordNet.
95

Neural Dependency Parsing of Low-resource Languages: A Case Study on Marathi

Zhang, Wenwen January 2022 (has links)
Cross-lingual transfer has been shown to be effective for dependency parsing of some low-resource languages, but it typically requires closely related high-resource languages. Pre-trained deep language models significantly improve model performance in cross-lingual tasks. We evaluate cross-lingual model transfer on parsing Marathi, a low-resource language that does not have a closely related high-resource language. In addition, we investigate monolingual modeling for comparison. We experiment with two state-of-the-art language models: mBERT and XLM-R. Our experimental results illustrate that the cross-lingual model transfer approach still holds with distantly related source languages, and that models benefit most from XLM-R. We also evaluate the impact of multi-task learning by training all UD tasks simultaneously, and find that it yields mixed results for dependency parsing and degrades the transfer performance of the best-performing source language, Ancient Greek.
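The standard way such dependency parsers are compared — and presumably part of the evaluation here — is by unlabeled and labeled attachment score (UAS/LAS): the fraction of tokens whose predicted head (and, for LAS, dependency relation) matches the gold annotation. A minimal sketch, omitting the usual punctuation handling:

```python
def attachment_scores(gold, predicted):
    """UAS and LAS for one sentence. gold/predicted are lists of
    (head_index, relation) pairs, one per token."""
    assert len(gold) == len(predicted)
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, predicted)) / n
    las = sum(g == p for g, p in zip(gold, predicted)) / n
    return uas, las

# Toy sentence of three tokens; the parser gets every head right but
# mislabels one relation, so UAS = 1.0 and LAS = 2/3.
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "obl")]
uas, las = attachment_scores(gold, pred)
```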
96

Evaluation of web scraping methods : Different automation approaches regarding web scraping using desktop tools / Utvärdering av webbskrapningsmetoder : Olika automatiserings metoder kring webbskrapning med hjälp av skrivbordsverktyg

Oucif, Kadday January 2016 (has links)
A lot of information can be found on the semantic web and extracted in different forms through web scraping, with many techniques having emerged over time. This thesis is written with the objective of evaluating different web scraping methods in order to develop an automated, performance-reliable, easily implemented and solid extraction process. A number of parameters are defined to better evaluate and compare existing techniques. A matrix of desktop tools is examined and two are chosen for evaluation. The evaluation also covers learning to set up the scraping process with so-called agents. A number of links are scraped using the presented techniques, with and without executing JavaScript on the web sources. Prototypes built with the chosen techniques are presented, with Content Grabber as the final solution. The result is a better understanding of the subject, along with a cost-effective extraction process consisting of different techniques and methods, where a good understanding of the structure of the web sources facilitates the data collection. To sum up, the result is discussed and presented with regard to the chosen parameters.
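The "without JavaScript" case above — extracting links from static HTML — needs nothing beyond the standard library, as this sketch shows; pages that build their links client-side instead require a browser-driven agent like those in the evaluated desktop tools. The example markup is invented.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href attributes from <a> tags in static HTML."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

parser = LinkExtractor()
parser.feed('<html><body><a href="/a">A</a><a href="/b">B</a></body></html>')
```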
97

Recovering Chinese Nonlocal Dependencies with a Generalized Categorial Grammar

Duan, Manjuan 03 September 2019 (has links)
No description available.
98

Evaluating Globally Normalized Transition Based Neural Networks for Multilingual Natural Language Understanding

Azzarone, Andrea January 2017 (has links)
We analyze globally normalized, transition-based neural network models for dependency parsing on English, German, Spanish, and Catalan. We compare the results with FreeLing, an open-source language analysis tool developed at the UPC natural language processing research group. Furthermore, we study how the mini-batch size, the number of units in the hidden layers, and the beam width affect the performance of the network. Finally, we propose a multilingual parser with parameter sharing and experiment with German and English, obtaining a significant accuracy improvement over the monolingual parsers. These multilingual parsers can be used for low-resource languages or for applications with low memory requirements, where having one model per language is intractable.
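A transition-based parser of the kind analyzed here learns to predict a sequence of shift/reduce actions. One common transition system is arc-standard (the abstract does not restate which system is used, so this is illustrative); the static oracle below derives the gold action sequence for a projective tree, which is what such a network is trained to reproduce.

```python
def arc_standard_oracle(heads):
    """Gold SHIFT / LEFT-ARC / RIGHT-ARC sequence for a projective sentence,
    given 1-based gold head indices (0 = root). Illustrative static oracle."""
    buffer = list(range(1, len(heads) + 1))
    stack, arcs, transitions = [], set(), []

    def deps_done(i):
        # every gold dependent of token i already has its arc
        return all(h != i or (i, d) in arcs
                   for d, h in enumerate(heads, start=1))

    while buffer or len(stack) > 1:
        if len(stack) >= 2:
            s1, s0 = stack[-2], stack[-1]
            if heads[s1 - 1] == s0 and deps_done(s1):   # s1 attaches to s0
                arcs.add((s0, s1)); stack.pop(-2)
                transitions.append("LEFT-ARC"); continue
            if heads[s0 - 1] == s1 and deps_done(s0):   # s0 attaches to s1
                arcs.add((s1, s0)); stack.pop()
                transitions.append("RIGHT-ARC"); continue
        stack.append(buffer.pop(0))
        transitions.append("SHIFT")
    return transitions

# "She ate fish": token 2 ('ate') is the root, heads = [2, 0, 2]
seq = arc_standard_oracle([2, 0, 2])
```

Beam search with global normalization, as studied in the abstract, scores whole action sequences like this one rather than normalizing each decision locally.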
99

Working Memory in Sentence Comprehension: Processing Hindi Center Embeddings

Vasishth, Shravan 02 July 2002 (has links)
No description available.
100

The interaction of prosodic phrasing, verb bias, and plausibility during spoken sentence comprehension

Blodgett, Allison Ruth 17 June 2004 (has links)
No description available.
