  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

Hydrogeological data modelling in groundwater studies

Wojda, Piotr 19 January 2009 (has links)
Managing, handling, exchanging and accessing hydrogeological information depend mainly on the applied hydrogeological data models, which differ between institutions and across countries. Growing interest in the diffusion of hydrogeological information, combined with a need for information availability, requires the convergence of hydrogeological data models. Model convergence makes hydrogeological information accessible to multiple institutions, universities, administrations, water suppliers, and research organisations at different levels: from the local level (on-site measurement teams) to national and international institutions dealing with water resources management. Furthermore, because hydrogeological studies are complex, they require a large variety of high-quality hydrogeological data with appropriate metadata in clearly designed and coherent structures. To meet the requirements of model convergence, easy information exchange and hydrogeological completeness, new data models have been developed using two different methodologies. At the local-regional level, the HydroCube model has been developed for the Walloon Region in Belgium. This logical data model uses entity-relationship diagrams and has been implemented in the MS Access environment, further enriched with a fully functional user interface. The HydroCube model presents an innovative, holistic, project-based approach, which covers a full set of hydrogeological concepts and features, allowing for effective hydrogeological project management. This approach enables storage of data about project localisation, hydrogeological equipment, and related observations and measurements. Furthermore, topological relationships facilitate management of spatially associated data. Finally, the model focuses on specialized hydrogeological field experiments, such as pumping tests and tracer tests. 
At the international level, a new hydrogeological data model has been developed which guarantees hydrogeological information availability in one standard format within the scope of the FP6 project GABARDINE (Groundwater Artificial recharge Based on Alternative sources of wateR: aDvanced Integrated technologies and management). The model has been implemented in the ArcGIS environment as a Geospatial Database for a decision support system. The GABARDINE Geospatial Database takes advantage of object-oriented modelling (UML), follows standards for geoscientific information exchange (ISO/TC211 and OGC), and is compliant with the recommendations of the European Geospatial Information Working Group. Finally, these two models have been tested with hydrogeological field data on different IT platforms: from MS Access, through the proprietary ArcGIS environment, to the free, open-source Web2GIS on-line application. They have also contributed to the development of the GroundWater Markup Language (GWML), a Canadian exchange standard compliant with the Geography Markup Language (GML). GWML has the potential to become an international HydroGeology Markup Language (HgML) standard with strong and continuous support from the hydrogeological community.
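A rough sketch of the project-based approach described above, with a project grouping equipment and their observations and measurements, might look like the following. All class and field names are invented for illustration; the actual HydroCube model is an entity-relationship schema implemented in MS Access with a full user interface.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical mini-model inspired by the project-based approach described
# in the abstract; the real HydroCube schema is far richer.

@dataclass
class Measurement:
    parameter: str      # e.g. "drawdown" or "tracer concentration"
    value: float
    unit: str

@dataclass
class Equipment:
    kind: str           # e.g. "piezometer", "pumping well"
    x: float            # site coordinates
    y: float
    measurements: List[Measurement] = field(default_factory=list)

@dataclass
class Project:
    name: str
    location: str
    equipment: List[Equipment] = field(default_factory=list)

# A pumping test observation stored under its parent project:
p = Project("Test site", "Walloon Region")
well = Equipment("pumping well", 202145.0, 92210.0)
well.measurements.append(Measurement("drawdown", 1.37, "m"))
p.equipment.append(well)
print(len(p.equipment))  # 1
```

The point of the sketch is the holistic containment structure: every observation is reachable from the project that produced it, which is what makes project-based management of field experiments tractable.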
462

Dealing with unstructured data : A study about information quality and measurement / Hantera ostrukturerad data : En studie om informationskvalitet och mätning

Vikholm, Oskar January 2015 (has links)
Many organizations have realized that the growing amount of unstructured text may contain information that can be used for different purposes, such as making decisions. Organizations can, by using so-called text mining tools, extract information from text documents. For example, within military and intelligence activities it is important to go through reports and look for entities such as names of people and events, and the relationships between them, when criminal or other interesting activities are being investigated and mapped. This study explores how information quality can be measured and what challenges this involves. It is done on the basis of Wang and Strong's (1996) theory about how information quality can be measured. The theory is tested and discussed using empirical material consisting of interviews from two case organizations. The study observed two important aspects to take into consideration when measuring information quality: context dependency and source criticism. Context dependency means that the context in which information quality is measured must be defined based on the consumer's needs. Source criticism implies that it is important to take the original source into consideration, and how reliable it is. Further, data quality and information quality are often used interchangeably, which means that organizations need to decide what they really want to measure. One of the major challenges in developing software for entity extraction is that the system needs to understand the structure of natural language, which is very complicated.
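As a minimal illustration of the entity extraction such text mining tools perform, here is a naive, rule-based sketch. Real systems model natural language structure far more deeply; the pattern below, treating capitalized word pairs as candidate person names, is a deliberately crude assumption, and the sample report is invented.

```python
import re

# Naive candidate-name extractor: two adjacent capitalized words.
# Real entity extraction needs linguistic analysis; this rule both
# misses entities and produces false positives by design.
def extract_candidate_names(text):
    return re.findall(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", text)

report = "Anna Lindqvist met John Smith in Stockholm on Monday."
print(extract_candidate_names(report))  # ['Anna Lindqvist', 'John Smith']
```

Note that single-token entities such as "Stockholm" are missed entirely, which hints at why understanding natural language structure, rather than surface patterns, is the hard part of the problem.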
463

An Exploratory Study of the Effects of Project Finance on Project Risk Management : How the Distinguishing Attributes of Project Finance affects the Prevailing Risk Factor?

Chan, Ka Fai January 2011 (has links)
Project finance is a financing arrangement for projects, characterised by the creation of a legally independent project company financed with non- or limited-recourse loans. The popularity of project finance has been increasing in recent decades, despite the impact of the Asian financial crisis. Especially in emerging markets, project finance is very common among public-private partnership projects. It is possible that project finance yields some benefits in project management that other forms of funding are not able to provide. This research aims to explore the impacts of project finance on the risk management of projects, as well as the mechanisms by which various factors affect project risk management. The research starts with a quantitative analysis consisting of project data from 32 recent projects. The regression analysis of these quantitative data reveals that factors such as the separation of legal entity and the existence of third-party guarantees can effectively reduce the borrowing rates of the projects. The borrowing rates, expressed in terms of credit spreads over LIBOR, are regarded as a proxy for the overall risk level of the projects. The qualitative section, which involves five structured interviews, further explores the effects of the attributes of project finance on project risk management. The interviewees largely agree on the effects of the separation of legal entity, non- or limited-recourse loans, and the existence of third-party guarantees in managing political and country risks, business risks, and principal-agency risks. The involvement of a larger number of stakeholders enables the projects to enhance their risk management ability by gaining external expertise and knowledge, influence on government policies, and, more importantly, closer supervision of project activities. 
Apart from revealing the important features of project finance and the potential benefits they may yield for project risk management, the effectiveness of these features is also discussed. The study also examines the relationships between these features and the common risk factors which may affect all projects. Some recommendations to enhance the benefits of project finance and reduce the associated transaction costs are made based on this study.
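The quantitative step described above, regressing credit spreads on project attributes, can be sketched with ordinary least squares. The data below are invented for illustration; the thesis used 32 real projects with spreads over LIBOR.

```python
import numpy as np

# Columns: intercept, separate legal entity (0/1), third-party guarantee (0/1).
# Rows are hypothetical projects, not the thesis data.
X = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 1],
    [1, 0, 0],
], dtype=float)
y = np.array([1.2, 1.6, 1.5, 2.3, 1.1, 2.2])  # spreads in percentage points

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# Negative coefficients on the two dummy variables would indicate that these
# attributes lower borrowing rates, which is the direction of effect the
# abstract reports for the real data.
print(beta)
```

In the toy data the dummy coefficients come out negative, mirroring the reported finding that legal separation and third-party guarantees reduce spreads.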
464

Serviceorientiertes Text Mining am Beispiel von Entitätsextrahierenden Diensten / Service-oriented text mining using the example of entity-extracting services

Pfeifer, Katja 08 September 2014 (has links) (PDF)
Most business-relevant knowledge today exists as unstructured information in the form of text data on web pages, in office documents, or in forum posts. A multitude of text mining solutions has been developed to extract and exploit this unstructured information. Many of these systems have recently been made accessible as web services in order to simplify their use and integration. Combining several such text mining services to solve concrete extraction tasks appears promising, since existing strengths can be exploited, weaknesses of the individual systems can be minimized, and the use of text mining solutions can be simplified. This thesis addresses the flexible combination of text mining services in a service-oriented system and extends the state of the art with targeted methods for selecting text mining services, aggregating their results, and mapping between the classification schemes used. First, the currently existing service landscape is analysed, and on this basis an ontology for the functional description of the services is provided, enabling function-driven selection and combination of text mining services. Furthermore, using entity-extracting services as an example, algorithms for the quality-improving combination of extraction results are developed and extensively evaluated. The work is complemented by additional mapping and integration processes that ensure applicability even in heterogeneous service landscapes where different classification schemes are in use. Finally, possibilities for transferring the approach to other text mining methods are discussed.
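One way to picture the quality-improving combination of extraction results discussed in this abstract is a simple majority vote over annotations returned by several services. The services and annotations below are invented, and the thesis develops considerably more refined aggregation algorithms; this is only the baseline idea.

```python
from collections import Counter

# Majority vote over (entity, type) annotations from several extraction
# services: keep annotations confirmed by at least `min_votes` services.
def majority_vote(results, min_votes=2):
    votes = Counter(ann for service in results for ann in set(service))
    return sorted(ann for ann, n in votes.items() if n >= min_votes)

# Hypothetical outputs of three entity-extraction services:
service_a = [("Dresden", "LOCATION"), ("SAP", "ORGANIZATION")]
service_b = [("Dresden", "LOCATION"), ("Katja Pfeifer", "PERSON")]
service_c = [("Dresden", "ORGANIZATION"), ("SAP", "ORGANIZATION")]

print(majority_vote([service_a, service_b, service_c]))
# [('Dresden', 'LOCATION'), ('SAP', 'ORGANIZATION')]
```

Note how the vote suppresses both service C's misclassification of "Dresden" and service B's unconfirmed annotation, exploiting strengths and minimizing weaknesses of the individual systems, exactly the motivation stated above.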
465

Extracting Clinical Findings from Swedish Health Record Text

Skeppstedt, Maria January 2014 (has links)
Information contained in the free text of health records is useful for the immediate care of patients as well as for medical knowledge creation. Advances in clinical language processing have made it possible to automatically extract this information, but most research has, until recently, been conducted on clinical text written in English. In this thesis, however, information extraction from Swedish clinical corpora is explored, particularly focusing on the extraction of clinical findings. Unlike most previous studies, Clinical Finding was divided into the two more granular sub-categories Finding (symptom/result of a medical examination) and Disorder (condition with an underlying pathological process). For detecting clinical findings mentioned in Swedish health record text, a machine learning model, trained on a corpus of manually annotated text, achieved results in line with the obtained inter-annotator agreement figures. The machine learning approach clearly outperformed an approach based on vocabulary mapping, showing that Swedish medical vocabularies are not extensive enough for the purpose of high-quality information extraction from clinical text. A rule and cue vocabulary-based approach was, however, successful for negation and uncertainty classification of detected clinical findings. Methods for facilitating expansion of medical vocabulary resources are particularly important for Swedish and other languages with less extensive vocabulary resources. The possibility of using distributional semantics, in the form of Random indexing, for semi-automatic vocabulary expansion of medical vocabularies was, therefore, evaluated. Distributional semantics does not require that terms or abbreviations are explicitly defined in the text, and it is, thereby, a method suitable for clinical corpora. Random indexing was shown useful for extending vocabularies with medical terms, as well as for extracting medical synonyms and abbreviation dictionaries.
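Random Indexing, the distributional-semantics technique evaluated in this thesis for semi-automatic vocabulary expansion, can be sketched in a few lines: each word gets a sparse random index vector, and a word's context vector accumulates the index vectors of its co-occurring words. The corpus, dimensionality, and word pairs below are toy assumptions, not data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, nonzeros = 100, 6  # small, invented settings

corpus = [
    "patient reports severe headache and nausea",
    "patient reports mild headache and dizziness",
    "invoice sent to the billing department",
]

vocab = sorted({w for s in corpus for w in s.split()})

# Each word gets a sparse random "index vector" ...
index_vec = {}
for w in vocab:
    v = np.zeros(dim)
    pos = rng.choice(dim, size=nonzeros, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=nonzeros)
    index_vec[w] = v

# ... and a "context vector": the sum of the index vectors of the words it
# co-occurs with (here, within the same sentence).
ctx = {w: np.zeros(dim) for w in vocab}
for s in corpus:
    words = s.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                ctx[w] += index_vec[c]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Words sharing contexts ("nausea"/"dizziness") end up closer than
# unrelated words, without any term being explicitly defined in the text.
print(cos(ctx["nausea"], ctx["dizziness"]), cos(ctx["nausea"], ctx["invoice"]))
```

This "no explicit definitions needed" property is what the abstract highlights as making the method suitable for clinical corpora, where abbreviations and informal terms abound.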
466

Vaistinės veiklos efektyvumo didinimas klientų lūkesčių tyrimo pagrindu / The improvement of effectiveness in pharmacy service according to the research of customers' expectations

Jurevičiūtė, Vilma 18 June 2014 (has links)
Reflecting the changes in the health care system, pharmaceutical practice is increasingly focused on the individual needs of the patient: providing quality pharmaceutical services to patients and meeting their individual needs have come into the spotlight for pharmacists. 
Even though pharmaceutical services are regulated by the Pharmacy Law of the Republic of Lithuania, the provisions of Good Pharmacy Practice and other legislation, practice in Lithuania and other countries shows that it is not always easy to follow these regulations. In addition, these statutory provisions do not regulate how the individual needs and expectations of customers should be met in pharmacies. Understanding clients' expectations and knowing to what extent the institution is able to meet them is essential for improving the quality of pharmaceutical services and for increasing customers' satisfaction and loyalty to the pharmacy. The aim of this study is to develop methodological guidelines for identifying pharmacy customers' expectations and needs, which could be used to identify pharmaceutical service risk factors and to select opportunities for optimizing pharmaceutical activity. To achieve this goal, an integral research model was created, structured around the identification of pharmaceutical service needs and expectations from the perspective of Lithuanian and European Union health policy guidelines and patterns of human motivation. Based on this model, assessment tools for expectations were created: questionnaires for pharmacy clients and staff. The research results revealed that... [to full text]
467

La situation juridique d’une entité étatique non-reconnue dans l’ordre international / The legal situation of an unrecognised entity in international order

Bozkaya, Ali 24 March 2017 (has links)
An entity that fulfils the classical criteria for statehood, constituting a stable and independent governmental authority exercising effective control over a given population in a defined territory, is a State under international law, notwithstanding its recognition by other States or other subjects of international law. 
A discretionary non-recognition adopted by certain States towards such an entity means at most a refusal to enter into diplomatic or other relations with this unrecognised entity. On the other hand, a non-recognition imposed by general international law or by a mandatory resolution of an international organisation signifies not only a refusal to enter into optional relations with the unrecognised entity but also a denial of its statehood. The study of the legal situation of unrecognised entities shows that international law does not treat these entities as lawless zones that can produce no act or relation in the international order. On the contrary, States take notice of the existence of unrecognised entities and establish relations with them within the framework of general international law or the resolutions of United Nations organs. Non-recognition represents only an unfriendly position adopted by non-recognising States towards the unrecognised entity, for political reasons or as a response to a violation of international law.
468

La structuration dans les entités nommées / Structuration in named entities

Dupont, Yoann 23 November 2017 (has links)
Named entity recognition is a crucial discipline of NLP. It is used to extract relations between named entities, which enables the construction of knowledge bases (Surdeanu and Ji, 2014), automatic summarization (Nobata et al., 2002), and so on. 
Our interest in this thesis revolves around the structuration phenomena that surround named entities. We distinguish two kinds of structural elements in named entities. The first are recurrent substrings, which we call the characteristic affixes of a named entity. The second are tokens with good discriminative power, which we call trigger tokens of named entities. We describe the algorithm we designed to extract such affixes, which we compare to Morfessor (Creutz and Lagus, 2005b). We then apply the same algorithm to extract trigger tokens, which we use for French named entity recognition and postal address extraction. Another form of structuration for named entities is syntactic in nature, generally following an overlapping or tree structure. We propose a novel kind of cascade of linear taggers that had not previously been used for structured named entity recognition, generalising earlier approaches that can only recognise entities of a fixed depth or cannot model certain characteristics of structured named entities; ours can do both. Throughout this thesis, we compare two machine learning methods, CRFs and neural networks, and discuss the respective advantages and drawbacks of each.
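A crude sketch of mining trigger tokens, tokens with high discriminative power for entity mentions, might compare token frequencies inside versus outside annotated entities. The data and scoring function below are invented; the thesis algorithm, compared against Morfessor, is considerably more elaborate.

```python
from collections import Counter

# Invented sample: French street-name entities and some non-entity text.
entity_mentions = [
    "rue de la Paix", "avenue Victor Hugo", "boulevard Saint-Michel",
    "rue du Bac", "avenue des Champs-Elysees",
]
other_text = "il marche dans la ville et regarde les vitrines".split()

inside = Counter(w.lower() for m in entity_mentions for w in m.split())
outside = Counter(w.lower() for w in other_text)

# Score each token by how much more often it occurs inside entities than
# outside (add-one smoothing); high scorers behave like trigger tokens.
def trigger_score(tok):
    return (inside[tok] + 1) / (outside[tok] + 1)

ranked = sorted(inside, key=trigger_score, reverse=True)
print(ranked[:3])  # street-type tokens such as "rue" and "avenue" rank high
```

Tokens like "rue" and "avenue" score highly because they recur inside address entities but rarely elsewhere, which is precisely what makes them useful for postal address extraction.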
469

The trade name and its use by the legal entity / El nombre comercial y su uso por parte de la persona jurídica

Pazos Hayashida, Javier Mihail 10 April 2018 (has links)
This article analyzes the function of the trade name as a distinctive sign of economic agents, and its use, taking into consideration the current regulatory framework and the problems caused by confusion between the legal entity name, the business sign, and the service mark. Furthermore, it discusses the origin of the exclusive right to a trade name, the proof of its use, and the effects when use of the sign ceases.
470

Automatic Identification of Duplicates in Literature in Multiple Languages

Klasson Svensson, Emil January 2018 (has links)
As the number of books available online grows, the sizes of these collections are growing at the same pace, and increasingly they span multiple languages. Many of these corpora contain duplicates in the form of various editions or translations of books. The task of finding these duplicates is usually done manually, but the growing collection sizes make this time consuming and demanding. The thesis set out to find a method in the field of Text Mining and Natural Language Processing that can automate the process of identifying these duplicates in a corpus, mainly consisting of fiction in multiple languages, provided by Storytel. The problem was approached using three different methods to compute distance measures between books. The first approach compared the titles of the books using the Levenshtein distance. The second approach extracted entities from each book using Named Entity Recognition, represented them using tf-idf, and computed distances using cosine dissimilarity. The third approach used a Polylingual Topic Model to estimate each book's distribution of topics and compared the distributions using the Jensen-Shannon distance. In order to estimate the parameters of the Polylingual Topic Model, 8000 books were translated from Swedish to English using Apache Joshua, a statistical machine translation system. For each method, every pair of books written by an author was tested using a hypothesis test where the null hypothesis was that the two books are not editions or translations of each other. Since there is no known distribution to assume as the null distribution for each book, a null distribution was estimated using distance measures of books not written by the author. The methods were evaluated on two sets of data manually labeled by the author of the thesis: 
one randomly sampled using one-stage cluster sampling, and one consisting of books from authors that the corpus provider, prior to the thesis, considered more difficult to label using automated techniques. Of the three methods, title matching performed best in terms of accuracy and precision on the sampled data. The entity matching approach had the lowest accuracy and precision but an almost constant recall of around 50%. It was concluded that there seems to be a set of duplicates that are clearly distinguished from the estimated null distributions; with a higher significance level, better precision and accuracy could have been achieved with similar recall for that method. For topic matching the results were worse than for title matching, and on inspection the estimated model was not able to create quality topics, due to multiple factors. It was concluded that further research is needed for the topic matching approach. None of the three methods was deemed to be a complete solution for automating the detection of book duplicates.
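The best-performing approach, title matching with Levenshtein distance, can be sketched as follows. The dynamic-programming implementation is standard; the titles and the length normalization are invented for illustration and do not come from the Storytel corpus.

```python
# Edit distance between two strings via the classic row-by-row DP.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Normalize by the longer title so distances are comparable across lengths.
def normalized_distance(a, b):
    a, b = a.lower(), b.lower()
    return levenshtein(a, b) / max(len(a), len(b), 1)

t1 = "Mannen som log"           # hypothetical edition of a book
t2 = "Mannen som log : roman"   # another edition of the same book
t3 = "Flickan som lekte med elden"  # a different book

print(normalized_distance(t1, t2) < normalized_distance(t1, t3))  # True
```

Candidate duplicates are pairs whose normalized distance falls well below the null distribution estimated from unrelated book pairs, which is the hypothesis-testing step the abstract describes.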