301

Ocenění podniku Hopi Popi, a.s. / Valuation of HOPI POPI, a.s.

Zapletal, Jan January 2014 (has links)
This diploma thesis values the company HOPI POPI, a.s., as of 1. 3. 2015. The purpose of the valuation is to determine the enterprise value both for an external investor and to give the company's management better insight into its performance on the market. The final enterprise value is derived using the income approach, specifically the discounted cash flow (DCF) "entity" method. The thesis is divided into two major parts, a theoretical-methodological part and an analytical part. The theoretical part describes the basic concepts, principles and approaches of valuation that underpin the final enterprise value and the selection of the valuation method. The analytical part carries out the strategic and financial analysis, the analysis and prediction of value drivers, the forecast of the financial plan, and the determination of the final enterprise value with the DCF "entity" income approach. The final enterprise value has been stated at 69 047 426 CZK. The conclusion summarizes the results, the fulfilment of the set goals and the usability of the valuation in practice.
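The DCF "entity" method named above is conventionally a two-stage present-value calculation; the sketch below uses standard textbook notation and is not taken from the thesis itself:

```latex
% Two-stage DCF "entity" valuation (standard textbook form, not the thesis's exact model)
% V_b   ... gross (entity) value of the business
% FCFF  ... free cash flow to the firm in year t
% WACC  ... weighted average cost of capital
% g     ... long-run growth rate in the continuing-value phase
\[
V_b \;=\; \sum_{t=1}^{T} \frac{FCFF_t}{(1+WACC)^t}
\;+\; \frac{FCFF_{T+1}}{(WACC - g)\,(1+WACC)^{T}}
\]
% Equity value is then V_b minus interest-bearing debt plus non-operating assets.
```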
302

  • Objektivizované stanovení hodnoty společnosti / Objectified Assessment Value of Company

Kula, Radomír January 2019 (has links)
This master's thesis deals with the determination of the objectivized value of a company based on the results of strategic and financial analysis, which serve as a basis for splitting assets into operationally needed and operationally not needed, setting the individual value drivers and compiling a financial plan. On this basis, the objectivized value of the company is determined using the discounted cash flow entity method and economic value added.
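The economic value added (EVA) mentioned alongside the DCF entity method is conventionally a residual-income measure; a short sketch in standard notation, not taken from the thesis:

```latex
% Economic value added in year t (standard definition)
% NOPAT   ... net operating profit after tax
% WACC    ... weighted average cost of capital
% C_{t-1} ... invested (operationally needed) capital at the start of year t
\[
EVA_t \;=\; NOPAT_t \;-\; WACC \cdot C_{t-1}
\]
% An EVA-based entity value discounts these residual profits and adds them to invested capital.
```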
303

Knowledge Extraction for Hybrid Question Answering

Usbeck, Ricardo 18 May 2017 (has links)
Since Tim Berners-Lee's proposal of hypertext to his employer CERN on March 12, 1989, the World Wide Web has grown to more than one billion Web pages and is still growing. With the later proposed Semantic Web vision, Berners-Lee et al. suggested an extension of the existing (Document) Web to allow better reuse, sharing and understanding of data. Both the Document Web and the Web of Data (the current implementation of the Semantic Web) grow continuously. This is a mixed blessing, as the two forms of the Web grow concurrently and most commonly contain different pieces of information. Modern information systems must thus bridge a Semantic Gap to allow holistic and unified access to a particular piece of information independent of the representation of the data. One way to bridge the gap between the two forms of the Web is the extraction of structured data, i.e., RDF, from the growing amount of unstructured and semi-structured information (e.g., tables and XML) on the Document Web. Note that unstructured data here stands for any type of textual information such as news, blogs or tweets. While extracting structured data from unstructured data enables the development of powerful information systems, it requires high-quality and scalable knowledge extraction frameworks to lead to useful results. The dire need for such approaches has led to the development of a multitude of annotation frameworks and tools. However, most of these approaches are not evaluated on the same datasets or using the same measures. The resulting Evaluation Gap needs to be tackled by a concise evaluation framework to foster fine-grained and uniform evaluations of annotation tools and frameworks over arbitrary knowledge bases. Moreover, with the constant growth of data and the ongoing decentralization of knowledge, intuitive ways for non-experts to access the generated data are required. Humans have adapted their search behavior to current Web data through access paradigms such as keyword search so as to retrieve high-quality results; hence, most Web users only expect Web documents in return. However, humans think and most commonly express their information needs in natural language rather than in keyword phrases. Answering complex information needs often requires the combination of knowledge from various, differently structured data sources. Thus, we observe an Information Gap between natural-language questions and current keyword-based search paradigms, which in addition do not make use of the available structured and unstructured data sources. Question Answering (QA) systems provide an easy and efficient way to bridge this gap by allowing data to be queried via natural language, thus reducing (1) a possible loss of precision and (2) a potential loss of time while reformulating the search intention into a machine-readable form. Furthermore, QA systems enable answering natural language queries with concise results instead of links to verbose Web documents. Additionally, they allow and encourage access to, and the combination of, knowledge from heterogeneous knowledge bases (KBs) within one answer. Consequently, three main research gaps are considered and addressed in this work: First, bridging the Semantic Gap between the unstructured Document Web and the structured Web of Data requires the development of scalable and accurate approaches for the extraction of structured data in RDF. This research challenge is addressed by several approaches within this thesis.
This thesis presents CETUS, an approach for recognizing entity types to populate RDF KBs. Furthermore, our knowledge-base-agnostic disambiguation framework AGDISTIS can efficiently detect the correct URIs for a given set of named entities. Additionally, we introduce REX, a Web-scale framework for RDF extraction from semi-structured (i.e., templated) websites which makes use of the semantics of the reference knowledge base to check the extracted data. The ongoing research on closing the Semantic Gap has already yielded a large number of annotation tools and frameworks. However, these approaches are currently still hard to compare since the published evaluation results are calculated on diverse datasets and evaluated based on different measures. On the other hand, the issue of comparability of results is not to be regarded as intrinsic to the annotation task. Indeed, it is now well established that scientists spend between 60% and 80% of their time preparing data for experiments. That data preparation is such a tedious problem in the annotation domain is mostly due to the different formats of the gold standards as well as the different data representations across reference datasets. We tackle the resulting Evaluation Gap in two ways: First, we introduce a collection of three novel datasets, dubbed N3, to leverage the possibility of optimizing NER and NED algorithms via Linked Data and to ensure maximal interoperability, overcoming the need for corpus-specific parsers. Second, we present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools and frameworks on multiple datasets. The decentralized architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Moreover, the increasing demand for natural-language interfaces, as exemplified by current mobile applications, requires systems to deeply understand the underlying user information need. A natural-language interface for asking questions therefore requires a hybrid approach to data usage, i.e., simultaneously searching full texts and semantic knowledge bases. To close the Information Gap, this thesis presents HAWK, a novel entity search approach developed for hybrid QA based on combining structured RDF and unstructured full-text data sources.
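To make the "extraction of structured data, i.e., RDF" step concrete, the sketch below shows how recognised and typed entities might be serialised as RDF triples with rdflib; the example entities, URIs and types are illustrative assumptions, not output of CETUS, AGDISTIS or REX:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical output of an entity recognition + disambiguation step;
# the surface forms, URIs and types below are illustrative only.
extracted = [
    ("Leipzig", "http://dbpedia.org/resource/Leipzig", "http://dbpedia.org/ontology/City"),
    ("Germany", "http://dbpedia.org/resource/Germany", "http://dbpedia.org/ontology/Country"),
]

g = Graph()
for surface_form, entity_uri, type_uri in extracted:
    entity = URIRef(entity_uri)
    g.add((entity, RDF.type, URIRef(type_uri)))         # entity typing (the task CETUS targets)
    g.add((entity, RDFS.label, Literal(surface_form)))  # keep the surface form as a label

print(g.serialize(format="turtle"))
```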
304

Doktrína jedné hospodářské jednotky v soutěžním právu EU / The Single Economic Entity Doctrine in EU Competition Law

Lepara, Samir January 2020 (has links)
The Single Economic Entity Doctrine in EU Competition Law: focused on applying the single economic entity doctrine in relation to the merger control of State-Owned Enterprises. This thesis focuses on the issues surrounding the single economic entity doctrine in relation to State-Owned Enterprises (SOEs), in particular on the effects of recital 22 of the preamble of Council Regulation (EC) No 139/2004 of 20 January 2004 on the control of concentrations between undertakings (EUMR), and the Commission Consolidated Jurisdictional Notice under Council Regulation (EC) No 139/2004 on the control of concentrations between undertakings. Together these documents set the standards for merger control practice concerning SOEs. They are centred on the principle of applying the single economic entity doctrine when identifying the turnover of SOEs and when determining the jurisdiction of the Commission. The thesis has two major goals. The primary goal is to elucidate the criteria used for the determination of a single economic unit (single economic entity) with independent decision-making power in the public sector. To this end, in chapter 3, the author has synthesized a list of the relevant criteria used for the determination of an economic entity, based on the criteria used by the Commission that have...
305

Tidsrapporteringssystem för mobila och stationära enheter : Utveckling av en MVC4 Webbapplikation i ASP.NET och PhoneGap / Timesheet system for mobile and stationary devices : Development of a MVC4 Web Application in ASP.NET and PhoneGap

Gandhi, Vicky, Kufa, David January 2014 (has links)
Målet med detta projekt var att utforma ett tidsrapporteringssystem åt Online CC AB för att effektivisera deras kunders tidsrapportering. Systemet är en webbapplikation som ska användas till att rapportera in tid som framdeles kan exporteras till valfritt lönesystem för lönehantering av personal. Detta system är grunden för ett framtida, fulländat system som har utökad funktionalitet. Produkten togs fram med Extreme Programming samt testdriven utveckling. Under utvecklingen jobbade utvecklingsgruppen med välkända och beprövade metoder för att säkerställa ett system av hög kvalité. Webbapplikationen nyttjar moderna teknologier och ramverk för webbutveckling – inklusive Microsofts ASP.NET MVC 4 och Entity Framework. Det visade sig att apputveckling är ett diffust område där de senaste verktygen inom verksamhetsgrenen inte förhållandevis förenklade arbetet. Ett system som fungerar såväl på mobila enheter, i form av en hybridapplikation, som på stationära enheter, som webbapplikation, krävde att utvecklingsgruppen var erfarna inom respektive områden. I slutet av projektet var inte alla ställda krav uppfyllda, men eftersom vi använder oss av testdriven utveckling så är systemet fullt operationsdugligt. De krav som implementerades gjordes till fullo. Till sist så kan det visa sig att de senaste teknologierna och ramverken inte alltid är de bästa att nyttja; mer beprövade metoder och teknologier kan i vissa fall vara mer lämpliga. / The goal of this project was to design a timesheet system for Online CC AB in order to make time reporting more efficient for their customers. The system is a web application to be used for reporting time, which can later be exported to a payroll system of choice for handling staff salaries. This system is the foundation for a future, complete system with extended functionality. The product was developed using Extreme Programming and Test-Driven Development. During development the team worked with well-known and well-tried methods to ensure a system of the utmost quality. The web application utilizes modern technologies and frameworks for web development – including Microsoft's ASP.NET MVC 4 and Entity Framework. It turned out that app development is a diffuse field in which the latest tools do not comparatively simplify the work. A system that works both on mobile devices, as a hybrid application, and on stationary devices, as a web application, required the development team to be experienced in the respective fields. At the end of the project not all requirements had been met, but because Test-Driven Development was used, the system is fully operational, and those requirements that were implemented were implemented fully. Finally, the latest technologies and frameworks are not always the best ones to use; more well-tried methods and technologies can in some cases be more appropriate.
306

Data Preparation from Visually Rich Documents

Sarkhel, Ritesh January 2022 (has links)
No description available.
307

Prerequisites for Extracting Entity Relations from Swedish Texts

Lenas, Erik January 2020 (has links)
Natural language processing (NLP) is a vibrant area of research with many practical applications today, such as sentiment analysis, text labeling, question answering, machine translation and automatic text summarization. At the moment, research is mainly focused on the English language, although many other languages are trying to catch up. This work focuses on an area within NLP called information extraction, and more specifically on relation extraction, that is, extracting relations between entities in a text. This work aims to use machine learning techniques to build a Swedish language processing pipeline with part-of-speech tagging, dependency parsing, named entity recognition and coreference resolution to use as a base for later relation extraction from archival texts. The obvious difficulty lies in the scarcity of annotated Swedish datasets. For example, no sufficiently large Swedish dataset for coreference resolution exists today. An important part of this work, therefore, is to create a Swedish coreference solver using distantly supervised machine learning, which means creating a Swedish dataset by applying an English coreference solver to an unannotated bilingual corpus, then using a word aligner to project this machine-annotated English dataset onto a Swedish dataset, and then training a Swedish model on this dataset. Using AllenNLP's end-to-end coreference resolution model, both for creating the Swedish dataset and for training the Swedish model, this work achieves an F1-score of 0.5. For named entity recognition this work uses the Swedish BERT models released by the Royal Library of Sweden in February 2020 and achieves an overall F1-score of 0.95. To put all of these NLP models within a single language processing pipeline, spaCy is used as a unifying framework. / Natural Language Processing (NLP) är ett stort och aktuellt forskningsområde idag med många praktiska tillämpningar som sentimentanalys, textkategorisering, maskinöversättning och automatisk textsummering. Forskningen är för närvarande mest inriktad på det engelska språket, men många andra språkområden försöker komma ikapp. Det här arbetet fokuserar på ett område inom NLP som kallas informationsextraktion, och mer specifikt relationsextrahering, det vill säga att extrahera relationer mellan namngivna entiteter i en text. Vad det här arbetet försöker göra är att använda olika maskininlärningstekniker för att skapa en svensk Language Processing Pipeline bestående av part-of-speech tagging, dependency parsing, named entity recognition och coreference resolution. Denna pipeline är sedan tänkt att användas som en bas för senare relationsextrahering från svenskt arkivmaterial. Den uppenbara svårigheten med detta ligger i att det är ont om stora, annoterade svenska dataset. Till exempel så finns det inget tillräckligt stort svenskt dataset för coreference resolution. En stor del av detta arbete går därför ut på att skapa en svensk coreference solver genom att implementera distantly supervised machine learning, med vilket menas att använda en engelsk coreference solver på ett oannoterat engelskt-svenskt corpus, och sen använda en word-aligner för att översätta detta maskinannoterade engelska dataset till ett svenskt, och sen träna en svensk coreference solver på detta dataset. Det här arbetet använder AllenNLP:s end-to-end coreference solver, både för att skapa det svenska datasetet och för att träna den svenska modellen, och uppnår en F1-score på 0.5. Vad gäller named entity recognition så använder det här arbetet Kungliga Bibliotekets BERT-modeller som bas, och uppnår genom detta en F1-score på 0.95. spaCy används som ett enande ramverk för att samla alla dessa NLP-komponenter inom en enda pipeline.
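To make the pipeline-assembly idea concrete, a minimal spaCy-based sketch follows; the model name "sv_core_news_sm" and the example sentence are illustrative assumptions and stand in for the thesis's actual KB-BERT NER and distantly supervised coreference components:

```python
import spacy

# "sv_core_news_sm" is a placeholder Swedish pipeline (assumed to be installed); the thesis
# itself combines KB-BERT-based NER with a distantly supervised coreference model instead.
nlp = spacy.load("sv_core_news_sm")

text = "Erik Lenas skrev sin uppsats vid Kungliga biblioteket i Stockholm."  # illustrative sentence
doc = nlp(text)

# Named entities from the pipeline's NER component
for ent in doc.ents:
    print(ent.text, ent.label_)

# Dependency relations, later usable as features for relation extraction
for token in doc:
    print(token.text, token.dep_, token.head.text)
```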
308

[en] A SOFTWARE ARCHITECTURE TO SUPPORT DEVELOPMENT OF MEDICAL IMAGING DIAGNOSTIC SYSTEMS / [pt] UMA ARQUITETURA DE SOFTWARE PARA APOIO AO DESENVOLVIMENTO DE SISTEMAS DE DIAGNÓSTICO MÉDICOS POR IMAGEM

RICARDO ALMEIDA VENIERIS 02 August 2018 (has links)
[pt] O apoio diagnóstico de exames médicos por imagem utilizando técnicas de Inteligência Artificial tem sido amplamente discutido e pesquisado academicamente. Diversas técnicas computacionais para segmentação e classificação de tais imagens são continuamente criadas, testadas e aperfeiçoadas. Destes estudos emergem sistemas com alto grau de especialização que se utilizam de técnicas de visão computacional e aprendizagem de máquina para segmentar e classificar imagens de exames utilizando conhecimento adquirido através de grandes coleções de exames devidamente laudados. No domínio médico há ainda a dificuldade de se conseguir bases de dados qualificadas para realização da extração de conhecimento pelos sistemas de aprendizagem de máquina. Neste trabalho propomos a construção de uma arquitetura de software que suporte o desenvolvimento de sistemas de apoio diagnóstico que possibilite: (i) a utilização em múltiplos tipos de exames, (ii) que consiga segmentar e classificar, (iii) utilizando não só estratégias padrão de aprendizado de máquina como, (iv) o conhecimento do domínio médico disponível. A motivação é facilitar a tarefa de geração de classificadores que possibilite, além de buscar marcadores patológicos específicos, ser aplicado em objetivos diversos da atividade médica, como o diagnóstico pontual, triagem e priorização do atendimento. / [en] Diagnostic support for medical imaging exams using Artificial Intelligence techniques has been extensively discussed and researched academically. Several computational techniques for the segmentation and classification of such images are continuously created, tested and improved. From these studies emerge highly specialized systems that use computer vision and machine learning techniques to segment and classify exam images using knowledge acquired from large collections of properly reported exams. In the medical domain there is also the difficulty of obtaining qualified databases to support the extraction of knowledge by machine learning systems. In this work we propose the construction of a software architecture that supports the development of diagnostic support systems and allows: (i) use with multiple exam types, (ii) both segmentation and classification, (iii) using not only standard machine learning strategies but also (iv) the available medical domain knowledge. The motivation is to facilitate the task of generating classifiers that, besides searching for specific pathological markers, can be applied to diverse objectives of medical practice, such as specific diagnosis, triage and prioritization of care.
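As an illustration of the kind of extension points such an architecture might expose (segmentation, classification, and a hook for medical domain knowledge), the sketch below is a hypothetical outline; the class and method names are assumptions, not the architecture actually proposed in the thesis:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class Segmenter(ABC):
    """Splits an exam image into candidate regions of interest."""
    @abstractmethod
    def segment(self, image: Any) -> List[Any]: ...

class Classifier(ABC):
    """Assigns labels (e.g. pathological marker present/absent) with confidences to a region."""
    @abstractmethod
    def classify(self, region: Any) -> Dict[str, float]: ...

class DomainRules:
    """Hook for encoding medical domain knowledge alongside the learned models."""
    def filter(self, predictions: Dict[str, float]) -> Dict[str, float]:
        return predictions  # identity by default; real rules would prune implausible labels

class DiagnosticPipeline:
    """Composes exam-type-specific components behind a single interface."""
    def __init__(self, segmenter: Segmenter, classifier: Classifier, rules: DomainRules):
        self.segmenter, self.classifier, self.rules = segmenter, classifier, rules

    def run(self, image: Any) -> List[Dict[str, float]]:
        return [self.rules.filter(self.classifier.classify(region))
                for region in self.segmenter.segment(image)]
```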
309

Encyclopaedic question answering

Dornescu, Iustin January 2012 (has links)
Open-domain question answering (QA) is an established NLP task which enables users to search for specific pieces of information in large collections of texts. Instead of using keyword-based queries and a standard information retrieval engine, QA systems allow the use of natural language questions and return the exact answer (or a list of plausible answers) with supporting snippets of text. In the past decade, open-domain QA research has been dominated by evaluation fora such as TREC and CLEF, where shallow techniques relying on information redundancy have achieved very good performance. However, this performance is generally limited to simple factoid and definition questions because the answer is usually explicitly present in the document collection. Current approaches are much less successful in finding implicit answers and are difficult to adapt to more complex question types which are likely to be posed by users. In order to advance the field of QA, this thesis proposes a shift in focus from simple factoid questions to encyclopaedic questions: list questions composed of several constraints. These questions have more than one correct answer which usually cannot be extracted from one small snippet of text. To correctly interpret the question, systems need to combine classic knowledge-based approaches with advanced NLP techniques. To find and extract answers, systems need to aggregate atomic facts from heterogeneous sources as opposed to simply relying on keyword-based similarity. Encyclopaedic questions promote QA systems which use basic reasoning, making them more robust and easier to extend with new types of constraints and new types of questions. A novel semantic architecture is proposed which represents a paradigm shift in open-domain QA system design, using semantic concepts and knowledge representation instead of words and information retrieval. The architecture consists of two phases, analysis – responsible for interpreting questions and finding answers, and feedback – responsible for interacting with the user. This architecture provides the basis for EQUAL, a semantic QA system developed as part of the thesis, which uses Wikipedia as a source of world knowledge and employs simple forms of open-domain inference to answer encyclopaedic questions. EQUAL combines the output of a syntactic parser with semantic information from Wikipedia to analyse questions. To address natural language ambiguity, the system builds several formal interpretations containing the constraints specified by the user and addresses each interpretation in parallel. To find answers, the system then tests these constraints individually for each candidate answer, considering information from different documents and/or sources. The correctness of an answer is not proved using a logical formalism; instead a confidence-based measure is employed. This measure reflects the validation of constraints from raw natural language, automatically extracted entities, relations and available structured and semi-structured knowledge from Wikipedia and the Semantic Web. When searching for and validating answers, EQUAL uses the Wikipedia link graph to find relevant information. This method achieves good precision and allows only pages of a certain type to be considered, but is affected by the incompleteness of the existing markup targeted towards human readers. In order to address this, a semantic analysis module which disambiguates entities is developed to enrich Wikipedia articles with additional links to other pages. The module increases recall, enabling the system to rely more on the link structure of Wikipedia than on word-based similarity between pages. It also allows authoritative information from different sources to be linked to the encyclopaedia, further enhancing the coverage of the system. The viability of the proposed approach was evaluated in an independent setting by participating in two competitions at CLEF 2008 and 2009. In both competitions, EQUAL outperformed standard textual QA systems as well as semi-automatic approaches. Having established a feasible way forward for the design of open-domain QA systems, future work will attempt to further improve performance to take advantage of recent advances in information extraction and knowledge representation, as well as by experimenting with formal reasoning and inferencing capabilities.
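The confidence-based validation of constraints described above can be pictured as aggregating per-constraint scores for each candidate answer; the toy sketch below illustrates the idea and is not EQUAL's actual scoring function:

```python
from typing import Callable, List

# A constraint maps a candidate answer to a confidence in [0, 1]
Constraint = Callable[[str], float]

def score_candidate(candidate: str, constraints: List[Constraint]) -> float:
    """Combine per-constraint confidences; a simple product stands in here for
    whatever aggregation the real system employs."""
    score = 1.0
    for check in constraints:
        score *= check(candidate)
    return score

def answer(candidates: List[str], constraints: List[Constraint], threshold: float = 0.5) -> List[str]:
    """Rank candidates by aggregated confidence and keep those above the threshold."""
    ranked = sorted(candidates, key=lambda c: score_candidate(c, constraints), reverse=True)
    return [c for c in ranked if score_candidate(c, constraints) >= threshold]

# Toy usage: "rivers that flow through Germany" expressed as two independent checks
is_river = lambda c: 0.9 if c in {"Rhine", "Elbe", "Thames"} else 0.1
in_germany = lambda c: 0.8 if c in {"Rhine", "Elbe"} else 0.2
print(answer(["Rhine", "Elbe", "Thames"], [is_river, in_germany]))
```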
310

Semi-automated co-reference identification in digital humanities collections

Croft, David January 2014 (has links)
Locating specific information within museum collections represents a significant challenge for collection users. Even when the collections and catalogues exist in a searchable digital format, formatting differences and the imprecise nature of the information to be searched mean that information can be recorded in a large number of different ways. This variation exists not just between different collections, but also within individual ones. Traditional information retrieval techniques are therefore badly suited to the challenge of locating particular information in digital humanities collections, and searching takes an excessive amount of time and resources. This thesis focuses on a particular search problem, that of co-reference identification: the process of identifying when the same real-world item is recorded in multiple digital locations. A real-world example of a co-reference identification problem for digital humanities collections is identified and explored, in particular the time-consuming nature of identifying co-referent records. To address the identified problem, this thesis presents a novel method for co-reference identification between digitised records in humanities collections. Whilst the specific focus of this thesis is co-reference identification, elements of the method described also have applications for general information retrieval. The new co-reference method uses elements from a broad range of areas, including query expansion, co-reference identification, short-text semantic similarity and fuzzy logic. The new method was tested against real-world collections information, and the results suggest that, in terms of the quality of the co-referent matches found, the new co-reference identification method is at least as effective as a manual search. The number of co-referent matches found, however, is higher using the new method. The approach presented here is capable of searching collections stored using differing metadata schemas. More significantly, the approach is capable of identifying potential co-reference matches despite the highly heterogeneous and syntax-independent nature of the Gallery, Library, Archive and Museum (GLAM) search space and the photo-history domain in particular. The most significant benefit of the new method is, however, that it requires comparatively little manual intervention. A co-reference search using it therefore has significantly lower person-hour requirements than a manually conducted search. In addition to the overall co-reference identification method, this thesis also presents:
• A novel and computationally lightweight short-text semantic similarity metric. This new metric has a significantly higher throughput than the current prominent techniques but a negligible drop in accuracy.
• A novel method for comparing photographic processes in the presence of variable terminology and inaccurate field information. This is the first computational approach to do so.
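As a rough illustration of what a computationally lightweight short-text semantic similarity metric can look like (not the metric proposed in the thesis), the sketch below combines character-level fuzzy token matching with a Jaccard-style overlap using only the Python standard library:

```python
from difflib import SequenceMatcher

def token_similarity(a: str, b: str) -> float:
    """Character-level fuzzy ratio between two tokens (0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def short_text_similarity(text_a: str, text_b: str, match_threshold: float = 0.85) -> float:
    """Greedy soft token overlap: each token of the shorter text is matched to its best
    counterpart in the longer one, and matches above the threshold feed a Jaccard-style
    score. Illustrative only; not the thesis's metric."""
    tokens_a, tokens_b = text_a.split(), text_b.split()
    if not tokens_a or not tokens_b:
        return 0.0
    short, long_ = (tokens_a, tokens_b) if len(tokens_a) <= len(tokens_b) else (tokens_b, tokens_a)
    matches = sum(1 for t in short if max(token_similarity(t, u) for u in long_) >= match_threshold)
    return matches / (len(tokens_a) + len(tokens_b) - matches)

# Example: variant descriptions of the same photographic process
print(short_text_similarity("albumen print photograph", "albumin print"))
```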
