  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Volba obhájce a zmocněnce právnické osoby v trestním řízení / The choice of a defence counsel and attorney for a legal entity in criminal proceedings

Hujerová, Věra. January 2021.
The Master's Thesis deals with the choice of a defence counsel and an attorney for a legal entity in criminal proceedings, in the context of a legal entity's right to a fair trial and right to defence. The defence of a legal entity in criminal proceedings is exercised by the persons authorised to act on its behalf. If those persons find themselves in the incompatible procedural position of an accused, a witness or a victim in the same case, they are excluded by law from all acts on behalf of the legal entity in the proceedings, on the grounds of a presumed conflict of interests. The title of the thesis is based on the case law of the Constitutional Court, which, in order to preserve a legal entity's right of defence, also grants a person in the incompatible procedural position of an accused or a witness the right to choose a defence counsel or an attorney for the legal entity, subject to the further conditions described in the thesis. The introductory part focuses on the rudiments of the criminal liability of legal entities; I present the legal regulation relied on in the thesis and summarize its content. Subsequently, I deal with the definition of the individual persons authorised to act on...
102

Undersökande studie inom Information Extraction: Konsten att klassificera / An exploratory study in Information Extraction: The art of classifying

Torstensson, Erik; Carls, Fredrik. January 2016.
This thesis is an exploratory study in Information Extraction. Its main purpose is to create and evaluate methods within Information Extraction and to examine how they can help improve the scientific quality of classifying text elements. A secondary task is to evaluate the existing market for Information Extraction in Sweden.

For this task, a two-part computer program was created. The first part is a baseline using a simple method; the second is more advanced and uses techniques from the field of Information Extraction. The question we investigate is how often men and women are mentioned in seven different news sources in Sweden. The results compare the two methods and evaluate them using standard Information Extraction performance measures.

The study finds similar frequencies of mentions of men and women for the baseline and the more advanced method, the difference being that the more advanced method has a higher scientific value. The market for Information Extraction in Sweden is dominated by large media-owned corporations, and the media also supply these companies with the data to analyse. This makes it hard to compete without a new, innovative idea.
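The baseline described in this abstract, counting how often men and women are mentioned in news text, can be sketched as a simple word-list count. The gendered word lists and the sample sentence below are illustrative assumptions, not data or code from the study:

```python
# Baseline sketch: count mentions of men and women via gendered word lists.
# The Swedish word lists here are illustrative, not the study's actual lists.
import re
from collections import Counter

MALE_WORDS = {"han", "hans", "honom", "herr", "mannen"}
FEMALE_WORDS = {"hon", "hennes", "henne", "fru", "kvinnan"}

def count_gender_mentions(text: str) -> Counter:
    """Count tokens matching the male/female word lists."""
    tokens = re.findall(r"[a-zåäö]+", text.lower())
    counts = Counter()
    for tok in tokens:
        if tok in MALE_WORDS:
            counts["male"] += 1
        elif tok in FEMALE_WORDS:
            counts["female"] += 1
    return counts

sample = "Hon sa att han och hans kollega skulle träffa henne imorgon."
print(dict(count_gender_mentions(sample)))  # {'female': 2, 'male': 2}
```

A more advanced method would replace the word lists with named-entity recognition and coreference resolution, which is roughly the gap between the two parts of the program the thesis evaluates.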
103

Capturing Knowledge of Emerging Entities from the Extended Search Snippets

Ngwobia, Sunday C. January 2019.
No description available.
104

Entity Information Extraction using Structured and Semi-structured resources

Sil, Avirup. January 2014.
Among the tasks in Information Extraction, Entity Linking, also referred to as entity disambiguation or entity resolution, is a new and important problem that has recently caught the attention of many researchers in the Natural Language Processing (NLP) community. The task involves linking a textual mention of a named entity (such as a person or a movie name) to an appropriate entry in a database (e.g. Wikipedia or IMDB); if the database does not contain the entity, the system should return a NIL (out-of-database) value. Existing techniques for linking named entities in text mostly focus on Wikipedia as the target catalog of entities. Yet for many types of entities, such as restaurants and cult movies, relational databases exist that contain far more extensive information than Wikipedia. In this dissertation, we introduce a new framework, called Open-Database Entity Linking (Open-DB EL), in which a system must be able to resolve named entities to symbols in an arbitrary database, without requiring labeled data for each new database. In experiments on two domains, our Open-DB EL strategies outperform a state-of-the-art Wikipedia EL system by over 25% in accuracy. Existing approaches typically perform EL with a pipeline architecture: they use a Named-Entity Recognition (NER) system to find the boundaries of mentions in text, and an EL system to connect the mentions to entries in structured or semi-structured repositories like Wikipedia. However, the two tasks are tightly coupled, and each type of system can benefit significantly from the kind of information provided by the other. We propose and develop a joint model for NER and EL, called NEREL, that takes a large set of candidate mentions from typical NER systems and a large set of candidate entity links from EL systems, and ranks the candidate mention-entity pairs together to make joint predictions.

In NER and EL experiments across three datasets, NEREL significantly outperforms or comes close to the performance of two state-of-the-art NER systems, and it outperforms six competing EL systems. On the benchmark MSNBC dataset, NEREL provides a 60% reduction in error over the next-best NER system and a 68% reduction in error over the next-best EL system. We also extend the idea of using semi-structured resources to a relatively unexplored area of entity information extraction. Most previous work on information extraction from text has focused on named-entity recognition, entity linking, and relation extraction; much less attention has been paid to extracting the temporal scope of relations between named entities. For example, the relation president-Of (John F. Kennedy, USA) is true only in the time frame (January 20, 1961 - November 22, 1963). In this dissertation we present a system for temporal scoping of relational facts, called TSRF, which is trained with distant supervision on the largest semi-structured resource available: Wikipedia. TSRF employs language models consisting of patterns automatically bootstrapped from sentences collected from Wikipedia pages that contain the main entity of a page and slot-fillers extracted from the infobox tuples. The proposed system achieves state-of-the-art results on 6 out of 7 relations on the benchmark Text Analysis Conference (TAC) 2013 dataset for the task of temporal slot filling (TSF). Overall, the system outperforms the next-best system that participated in the TAC evaluation by 10 points on the TAC-TSF evaluation metric. / Computer and Information Science
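The core contract of the task described in this abstract, resolving a mention against an arbitrary database and returning NIL when no entry is close enough, can be illustrated with a minimal sketch. The toy database, the `difflib`-based scorer, and the threshold are illustrative assumptions, not the Open-DB EL model:

```python
# Sketch of entity linking with a NIL option: score a mention against every
# database entry and return the best id, or "NIL" below a threshold.
# Database contents and the similarity function are illustrative only.
from difflib import SequenceMatcher

DATABASE = {
    "r1": "Joe's Pizza New York",
    "r2": "Joes Pizza Brooklyn",
    "r3": "Lombardi's Pizza",
}

def link_entity(mention: str, db: dict, threshold: float = 0.6) -> str:
    """Return the id of the best-matching entry, or 'NIL' if none is close."""
    best_id, best_score = "NIL", threshold
    for entry_id, name in db.items():
        score = SequenceMatcher(None, mention.lower(), name.lower()).ratio()
        if score > best_score:
            best_id, best_score = entry_id, score
    return best_id

print(link_entity("Joe's Pizza, NY", DATABASE))  # r1
print(link_entity("Burger Palace", DATABASE))    # NIL
```

A real system would rank candidates with learned features (context, entity popularity, attribute overlap) rather than a single string-similarity score, but the NIL-thresholding shape is the same.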
105

Rapprochement de données pour la reconnaissance d'entités dans les documents océrisés / Data matching for entity recognition in OCRed documents

Kooli, Nihel. 13 September 2016.
This thesis focuses on database-driven entity recognition in OCRed documents. An entity is a homogeneous group of attributes, such as a company in a business form described by its name, address, contact numbers, VAT number, etc., or the metadata of a scientific paper: its title, its authors and their affiliations, the name of its journal, etc. Given a database whose records describe entities and a document containing one or more entities from this database, we aim to identify the entities in the document using the database. This work is motivated by an industrial application that seeks to automate the processing of administrative document images arriving in a continuous stream. We address the problem as a matching task between the content of the document and that of the database.

The difficulties of this task stem from the variability of how entity attributes are represented in the database and in the document, and from the presence of similar attributes in different entities. Added to this are record redundancy and typing errors in the database, and the alteration of the structure and content of the document caused by OCR. To deal with these problems, we opted for a two-step approach: entity resolution and entity recognition. The first step links the records referring to the same entity and synthesizes them into an entity model. For this purpose, we propose a supervised approach based on a combination of several similarity measures between attributes; these measures tolerate character errors and take word permutations into account. The second step matches the entities mentioned in a document against the resulting entity model. We proceed in two different ways, one using content matching and the other integrating structure matching. For content matching, we propose two methods: M-EROCS and ERBL. M-EROCS, an improvement and adaptation of a state-of-the-art method, matches OCR blocks with the entity model based on a score that tolerates OCR errors and attribute variability. ERBL labels the document with entity attributes and groups these labels into entities. Structure matching exploits the structural relationships between an entity's labels to correct labelling errors. The proposed method, called G-ELSE, uses inexact matching of attributed graphs modelling local structures against a structural model learned for this purpose. As this thesis was carried out in collaboration with the company ITESOFT-Yooz, we evaluated all the proposed steps on two administrative corpora and on a third corpus extracted from the Web.
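The entity-resolution step described in this abstract rests on combining similarity measures that tolerate character errors and word permutations. A minimal sketch of that idea, with illustrative weights and threshold rather than the thesis's learned combination, might look like this:

```python
# Sketch of combining a character-level similarity (tolerant of typos and
# OCR errors) with a token-level similarity (tolerant of word permutations)
# to decide whether two attribute strings refer to the same entity.
# Weights and threshold are illustrative assumptions.
from difflib import SequenceMatcher

def char_similarity(a: str, b: str) -> float:
    """Character-level ratio: forgiving of small typos/OCR noise."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def token_similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets: insensitive to word order."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def same_entity(a: str, b: str, w: float = 0.5, threshold: float = 0.7) -> bool:
    score = w * char_similarity(a, b) + (1 - w) * token_similarity(a, b)
    return score >= threshold

# Word permutation: same tokens in a different order still matches.
print(same_entity("Dupont Jean", "Jean Dupont"))        # True
print(same_entity("ITESOFT Yooz", "Banque Nationale"))  # False
```

In a supervised setting such as the one the thesis proposes, the weight given to each measure would be learned from labelled record pairs instead of being fixed by hand.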
106

A responsabilidade penal da pessoa jurídica por fato próprio: uma análise de seus critérios de imputação / Criminal liability of the legal entity for its own acts: an analysis of its imputation criteria

Fabris, Gabriel Baingo. 20 December 2016.
Amid social change, Criminal law is increasingly called upon to solve problems that were once unimaginable. As its field of application widens, it comes to encompass new legal interests, above all collective, supra-individual ones. As a result of this expansion, the scope of liability also widens, extending to the legal entity, a tendency observed in other legal systems as well. Using a systemic-constructivist methodology, the research is based on bibliographic sources, mainly theories previously analysed and discussed in the doctrine, and also covers legislative texts and an analysis of the case-law perspective on this criminal-policy option. While problems arise in identifying authorship within business activity, further problems emerge in attributing liability through the imputation rules inherent in Criminal law. In response, the doctrine identifies two ways of solving the problem: applying the imputation rules of the individual who acts within the company, or applying imputation rules specific to the legal entity.

Assuming that imputation rules should be applied directly to the legal entity, in view of the development of business activity, an analysis is needed of the adequacy of the imputation categories (action, subjective typicity and culpability), above all so that they can support the attribution of this liability. To this end, a theory of the offence is developed on the basis of criteria specific to the legal entity, derived from its own organisational structure. This analysis shows that the doctrine is not settled and, although open to criticism, seeks a solution to this problem.
107

Knowledge-Enabled Entity Extraction

Al-Olimat, Hussein S. January 2019.
No description available.
108

Knowledge Extraction for Hybrid Question Answering

Usbeck, Ricardo. 22 May 2017.
Since Tim Berners-Lee's proposal of hypertext to his employer CERN on March 12, 1989, the World Wide Web has grown to more than one billion Web pages and is still growing. With the later Semantic Web vision, Berners-Lee et al. suggested an extension of the existing (Document) Web to allow better reuse, sharing and understanding of data. Both the Document Web and the Web of Data (the current implementation of the Semantic Web) grow continuously. This is a mixed blessing, as the two forms of the Web grow concurrently and most commonly contain different pieces of information. Modern information systems must thus bridge a Semantic Gap to allow holistic and unified access to a given piece of information, independent of the representation of the data. One way to bridge the gap between the two forms of the Web is the extraction of structured data, i.e., RDF, from the growing amount of unstructured and semi-structured information (e.g., tables and XML) on the Document Web. Note that unstructured data stands for any type of textual information, such as news, blogs or tweets. While extracting structured data from unstructured data enables the development of powerful information systems, it requires high-quality and scalable knowledge extraction frameworks to lead to useful results. The dire need for such approaches has led to the development of a multitude of annotation frameworks and tools. However, most of these approaches are not evaluated on the same datasets or using the same measures. The resulting Evaluation Gap needs to be tackled by a concise evaluation framework to foster fine-grained and uniform evaluations of annotation tools and frameworks over any knowledge base. Moreover, with the constant growth of data and the ongoing decentralization of knowledge, intuitive ways for non-experts to access the generated data are required.

Humans have adapted their search behavior to current Web data through access paradigms such as keyword search so as to retrieve high-quality results; hence, most Web users expect only Web documents in return. However, humans think, and most commonly express their information needs, in natural language rather than in keyword phrases. Answering complex information needs often requires the combination of knowledge from various, differently structured data sources. Thus, we observe an Information Gap between natural-language questions and current keyword-based search paradigms, which in addition do not make use of the available structured and unstructured data sources. Question Answering (QA) systems provide an easy and efficient way to bridge this gap by allowing data to be queried in natural language, thus reducing (1) a possible loss of precision and (2) a potential loss of time while reformulating the search intention into a machine-readable form. Furthermore, QA systems enable natural-language queries to be answered with concise results instead of links to verbose Web documents. Additionally, they allow as well as encourage the access to and the combination of knowledge from heterogeneous knowledge bases (KBs) within one answer. Consequently, three main research gaps are considered and addressed in this work. First, addressing the Semantic Gap between the unstructured Document Web and the Web of Data requires the development of scalable and accurate approaches for the extraction of structured data in RDF. This research challenge is addressed by several approaches within this thesis. This thesis presents CETUS, an approach for recognizing entity types to populate RDF KBs. Furthermore, our knowledge-base-agnostic disambiguation framework AGDISTIS can efficiently detect the correct URIs for a given set of named entities.

Additionally, we introduce REX, a Web-scale framework for RDF extraction from semi-structured (i.e., templated) websites, which makes use of the semantics of the reference knowledge base to check the extracted data. The ongoing research on closing the Semantic Gap has already yielded a large number of annotation tools and frameworks. However, these approaches are currently still hard to compare, since the published evaluation results are calculated on diverse datasets and evaluated based on different measures. On the other hand, the issue of comparability of results is not to be regarded as intrinsic to the annotation task. Indeed, it is now well established that scientists spend between 60% and 80% of their time preparing data for experiments. Data preparation being such a tedious problem in the annotation domain is mostly due to the different formats of the gold standards as well as the different data representations across reference datasets. We tackle the resulting Evaluation Gap in two ways. First, we introduce a collection of three novel datasets, dubbed N3, to leverage the possibility of optimizing NER and NED algorithms via Linked Data and to ensure maximal interoperability, overcoming the need for corpus-specific parsers. Second, we present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools and frameworks on multiple datasets. The decentralized architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Moreover, the increasing demand for natural-language interfaces, as exemplified by current mobile applications, requires systems that deeply understand the underlying user information need.

In conclusion, a natural-language interface for asking questions requires a hybrid approach to data usage, i.e., simultaneously searching full texts and semantic knowledge bases. To close the Information Gap, this thesis presents HAWK, a novel entity search approach developed for hybrid QA that combines structured RDF and unstructured full-text data sources.
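The uniform evaluation this abstract calls for, comparing annotation tools on the same data with the same measures, typically reduces to micro-averaged precision, recall and F1 over (mention span, URI) annotations. A minimal sketch of that computation, with toy gold and predicted annotations as illustrative assumptions (not GERBIL's actual implementation), could be:

```python
# Sketch of micro-averaged P/R/F1 for entity annotations. Each document's
# annotations are a set of (start, end, URI) triples; gold/pred below are
# toy data for illustration only.
def micro_prf(gold: list, pred: list) -> tuple:
    """Micro-averaged precision, recall and F1 over per-document sets."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))   # correct annotations
    fp = sum(len(p - g) for g, p in zip(gold, pred))   # spurious annotations
    fn = sum(len(g - p) for g, p in zip(gold, pred))   # missed annotations
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [{(0, 6, "dbpedia:Berlin")},
        {(10, 14, "dbpedia:CERN"), (20, 23, "dbpedia:Web")}]
pred = [{(0, 6, "dbpedia:Berlin")},
        {(10, 14, "dbpedia:CERN"), (30, 33, "dbpedia:RDF")}]
p, r, f = micro_prf(gold, pred)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

Micro-averaging pools counts over all documents before computing the ratios, so large documents weigh more; macro-averaging (mean of per-document scores) is the usual alternative such frameworks also report.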
109

Ocenění podniku Nutricia Deva a.s. / Valuation of Nutricia Deva a.s.

Šťásková, Pavla. January 2010.
The aim of this diploma thesis is to estimate the fair value of the company Nutricia Deva a.s. as of 1 January 2010. The thesis is divided into a theoretical and a practical part. The theoretical part describes the basic steps of the valuation process and selected valuation methods. The first chapter of the practical part deals with strategic analysis. The next chapter contains the financial analysis, whose aim is to assess the financial health of the company. A further part covers the forecast of value generators and the preparation of the financial plan. The valuation itself was performed using the DCF entity method and a market comparison approach.
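The DCF entity method the abstract mentions can be sketched as a two-stage calculation: discount the planned free cash flows to the firm at the weighted average cost of capital, add a discounted terminal value, and subtract net debt to arrive at equity value. All figures and rates below are illustrative assumptions, not the thesis's actual valuation of Nutricia Deva a.s.:

```python
# Sketch of a two-stage DCF entity valuation. Cash flows, WACC, growth
# rate and net debt are hypothetical inputs for illustration only.
def dcf_entity_value(fcff: list, wacc: float, growth: float,
                     net_debt: float) -> float:
    """Equity value = PV(explicit-period FCFF) + PV(terminal value) - net debt."""
    pv_explicit = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcff, start=1))
    terminal = fcff[-1] * (1 + growth) / (wacc - growth)   # Gordon growth model
    pv_terminal = terminal / (1 + wacc) ** len(fcff)
    return pv_explicit + pv_terminal - net_debt

# Hypothetical plan: four years of FCFF (in CZK millions), 9% WACC,
# 2% perpetual growth, net debt of 300.
value = dcf_entity_value([120.0, 130.0, 140.0, 150.0], 0.09, 0.02, 300.0)
print(round(value, 1))  # 1682.3
```

Note how heavily the result depends on the terminal value: with these inputs the continuing value accounts for most of the enterprise value, which is why the growth and WACC assumptions dominate a DCF entity valuation.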
110

Problematika účetnictví a financování školské právnické osoby zřízené církví / The issue of accounting and financing of a school legal entity established by a church

Pechová, Blanka. January 2011.
This diploma thesis deals with the accounting and financing of a school legal entity established by a church. It covers the regulation of the school legal entity and the specifics of its accounting and taxation. Furthermore, the thesis focuses on the sources of financing of a school legal entity. The core of the thesis is a description of the accounting, taxation and funding of a school legal entity under the specific conditions of the Bishop's Gymnasium of J. N. Neumann and the Primary Church School in České Budějovice. The thesis is complemented by an analysis of the organisation's financial management for the period 2008-2010.
