631

GoPubMed: Ontology-based literature search for the life sciences

Doms, Andreas 06 January 2009 (has links)
Background: Most of our biomedical knowledge is only accessible through texts. The biomedical literature grows exponentially, and PubMed comprises over 18,000,000 literature abstracts. Recently, much effort has been put into the creation of biomedical ontologies which capture biomedical facts. The exploitation of ontologies to explore the scientific literature is a new area of research.

Motivation: When people search, they have questions in mind. Answering questions in a domain requires knowledge of the terminology of that domain. Classical search engines do not provide background knowledge for the presentation of search results. Ontology-annotated structured databases allow for data mining. The hypothesis is that ontology-annotated literature databases allow for text mining. The central problem is to associate scientific publications with ontological concepts. This is a prerequisite for ontology-based literature search. The question then is how to answer biomedical questions using ontologies and a literature corpus. Finally, the task is to automate bibliometric analyses on a corpus of scientific publications.

Approach: Recent joint efforts on automatically extracting information from free text showed that the applied methods are complementary. The idea is to employ the rich terminological and relational information stored in biomedical ontologies to mark up biomedical text documents. Based on established semantic links between documents and ontology concepts, the goal is to answer biomedical questions on a corpus of documents. The fully annotated literature corpus allows, for the first time, bibliometric analyses to be generated automatically for ontological concepts, authors and institutions.

Results: This work includes a novel framework for annotating free texts with ontological concepts. The framework generates recognition-pattern rules from the terminological and relational information in an ontology. Maximum entropy models can be trained to distinguish the meaning of ambiguous concept labels. The framework was used to develop an annotation pipeline for PubMed abstracts with 27,863 Gene Ontology concepts. The evaluation of the recognition performance yielded a precision of 79.9% and a recall of 72.7%, improving on the previously used algorithm by 25.7% F-measure. The evaluation was done on a curation corpus of 689 PubMed abstracts, manually created by the original authors, with 18,356 concept curations. Methods to reason with ontologies over large document collections were developed. The ability to answer questions with the online system was shown on a set of biomedical questions from the TREC Genomics Track 2006 benchmark. This work includes the first ontology-based, large-scale, up-to-date bibliometric analysis, available online, for topics in molecular biology represented by GO concepts. The automatic bibliometric analysis is in line with existing, but often outdated, manual analyses.

Outlook: A number of promising continuations of this work have been spun off. A freely available online search engine has a growing user community. A spin-off company, funded by the High-Tech Gründerfonds, commercializes the new ontology-based search paradigm. Several offshoots of GoPubMed, including GoWeb (general web search), Go3R (search for replacement, reduction and refinement methods for animal experiments) and GoGene (search in gene/protein databases), have been developed.
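
The central step described above, linking documents to ontology concepts before any question answering or bibliometrics, can be illustrated with a minimal sketch. The ontology entries, labels and example abstract below are illustrative only; the thesis's actual pipeline uses recognition-pattern rules and maximum entropy disambiguation rather than plain dictionary matching.

```python
# Minimal sketch of dictionary-based ontology concept recognition and its
# evaluation; the GO IDs, labels and abstract text below are toy examples,
# not taken from the thesis or from the real Gene Ontology files.
import re

ontology = {
    "GO:0006915": ["apoptosis", "programmed cell death"],
    "GO:0016301": ["kinase activity"],
}

def annotate(text, ontology):
    """Return the set of ontology concept IDs whose labels occur in the text."""
    found = set()
    lowered = text.lower()
    for concept_id, labels in ontology.items():
        for label in labels:
            if re.search(r"\b" + re.escape(label) + r"\b", lowered):
                found.add(concept_id)
                break
    return found

def f_measure(predicted, gold):
    """Precision, recall and F1 over sets of (document, concept) pairs."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

abstract = "The protein triggers apoptosis and shows kinase activity in vitro."
print(annotate(abstract, ontology))   # both concepts are found

# precision 1.0, recall 0.5, F1 ≈ 0.67 on this toy pair of annotation sets
print(f_measure({("d1", "GO:0006915")},
                {("d1", "GO:0006915"), ("d1", "GO:0016301")}))
```

Evaluating such (document, concept) pairs against a manually curated corpus is how precision, recall and F-measure figures like the ones reported above are obtained.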
632

Trestní odpovědnost právnických osob / Criminal liability of legal entities

Filipovičová, Gabriela January 2020 (has links)
Criminal liability of legal entities. Abstract: The diploma thesis is focused on the criminal liability of legal entities, which was incorporated into Czech law by Act No. 418/2011 Coll., on Criminal Liability of Legal Entities and Proceedings Against Them (hereinafter "the Act"), which came into effect on 1 January 2012. The phenomenon of criminal liability of legal entities is a controversial topic, because it conflicts with many traditional principles of criminal law in the continental legal culture, to which the Czech Republic belongs. Even eight years after the introduction of criminal liability of legal entities into the Czech legal order, this topic receives constant attention in doctrine and practice. The aim of the diploma thesis is to present the complex issue of how criminal liability of legal entities arises and expires in the Czech Republic and to evaluate the practical and theoretical problems that this legal institution brings. The diploma thesis is divided into five chapters. The first chapter presents the reasons that led the legislature to introduce criminal liability of legal entities, as well as the counter-arguments of opponents that the legislature had to address. The circumstances of the legislative process of the adoption of the Act and its subsequent amendments are also...
633

Efecto de la concentración de mercado sobre el rendimiento de las entidades financieras en el segmento de depósitos bancarios / The effect of concentration and deposits on the performance of financial institutions

Galvez Segura, Edwin Ivan 11 June 2021 (has links)
Using the Generalized Least Squares (GLS) specification of fixed-effects models, this paper analyzes the effect of market concentration on returns relative to deposits for commercial banks during the period 2010-2011. The literature has evaluated this relationship extensively; however, an analysis focused on bank deposits has been absent. The relevance of the hypotheses of the Structure-Conduct-Performance paradigm is refuted in this research. The results suggest that, given the behavior of bank deposits, industry concentration does not have a positive impact on returns, i.e., it does not increase profits. / Research paper
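
As a complement to the abstract, a minimal sketch of an entity fixed-effects ("within") estimator is shown below. It is not the paper's GLS specification; the variable names (roa, hhi, deposits, bank) and the synthetic data are assumptions made purely for illustration.

```python
# Hedged sketch of a fixed-effects panel regression relating bank returns to
# market concentration and deposits; data and column names are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "bank": np.repeat(["A", "B", "C", "D"], 8),
    "roa": rng.normal(0.02, 0.005, 32),        # return on assets (synthetic)
    "hhi": rng.uniform(0.10, 0.25, 32),        # market concentration index
    "deposits": rng.normal(1000, 200, 32),     # bank deposits
})

cols = ["roa", "hhi", "deposits"]
# Within transformation: subtract each bank's mean to absorb fixed effects.
demeaned = df[cols] - df.groupby("bank")[cols].transform("mean")

# After demeaning, the variables have zero mean, so no intercept is needed.
X = demeaned[["hhi", "deposits"]].to_numpy()
y = demeaned["roa"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["hhi", "deposits"], beta)))
```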
634

Specializované knihovny v 21. století - entitně-relační model a model systému služeb knihovny / Special libraries in the 21st century - the entity-relationship model and the model of system library services

Římanová, Radka January 2015 (has links)
The dissertation is devoted to the analysis of the environment and the future prospects of special libraries. Findings were obtained through direct observation, conceptual modeling and content analysis of scientific documents. The universe of the special library can be expressed as an entity-relationship model. Document services, which are grounded in the theoretical framework of information management, prove to be the most important services in special libraries. The management of special libraries is based on knowledge of the information needs of the user community and on the methodology of library processes. Library services are interconnected, and the system can be expressed in the form of mind maps. The processes and services of special libraries can provide data for information-science research.
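
To make the entity-relationship idea concrete, a small sketch follows; the entities, attributes and relationships below are hypothetical examples, not the model defined in the dissertation.

```python
# A minimal sketch of how a fragment of the special-library universe could be
# expressed as entities with relationships; names and attributes are illustrative.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    community: str                  # e.g. research group served by the library

@dataclass
class Document:
    title: str
    subject: str

@dataclass
class DocumentService:
    name: str                                           # e.g. "document delivery"
    documents: list[Document] = field(default_factory=list)
    users: list[User] = field(default_factory=list)     # many-to-many relationship

service = DocumentService("document delivery")
service.users.append(User("R. Novak", "materials science"))
service.documents.append(Document("Annual report 2014", "management"))
print(len(service.users), len(service.documents))       # 1 1
```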
635

Relación entre los estilos de liderazgo y la transformación digital en las sucursales de las entidades financieras en Lima Moderna, 2021 / Relationship between leadership styles and digital transformation in branches of financial institutions in Lima Moderna, 2021

Cloke Pineda, Erika Julissa, Romero Caldas, Renzo Gonzalo 30 November 2021 (has links)
The purpose of this research is to determine the relationship between leadership styles and digital transformation in the branches of financial institutions in Lima Moderna, 2021. In order to validate whether transactional, transformational and transcendental leadership styles are related to digital transformation, we conducted this study to serve as a basis for banking institutions so that they do not lose their positioning in the market. The research design is non-experimental and cross-sectional. To collect the information, a 5-point Likert-type survey was applied to a sample of 349 employees of the 4 most representative banks in Peru operating in Lima Moderna: Banco de Crédito del Perú, BBVA Banco Continental, Interbank and Scotiabank; the number of employees surveyed per bank was calculated according to market share. To determine the reliability of the instrument, Cronbach's alpha was calculated, yielding 0.804 for leadership styles and 0.801 for digital transformation, which indicates that the instrument is reliable, as both values are very close to 1. Finally, a correlation of 0.764 was obtained, which led to rejecting the null hypothesis and accepting the general hypothesis: the relationship between leadership styles and digital transformation in the branches of financial institutions in Lima Moderna, 2021 is significant, positive and corresponds to a high correlation. / Thesis
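
The reliability figures quoted above come from Cronbach's alpha; a minimal sketch of that computation is shown below. The 5-point responses are synthetic, and the 8-item instrument size is an assumption for illustration (only the sample of 349 respondents comes from the abstract).

```python
# Hedged sketch of Cronbach's alpha for assessing the reliability of a
# Likert-scale instrument; the response matrix below is synthetic.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(349, 1))        # shared underlying trait
noise = rng.integers(-1, 2, size=(349, 8))      # item-level noise
responses = np.clip(base + noise, 1, 5)         # 5-point Likert responses
print(round(cronbach_alpha(responses), 3))      # correlated items give a high alpha
```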
636

The Effect of Social Media Subtle Communication on Beliefs About Mental Illness Trajectories

Whitted, Whitney M. 22 December 2022 (has links)
No description available.
637

[en] EXTRACTING RELIABLE INFORMATION FROM LARGE COLLECTIONS OF LEGAL DECISIONS / [pt] EXTRAINDO INFORMAÇÕES CONFIÁVEIS DE GRANDES COLEÇÕES DE DECISÕES JUDICIAIS

FERNANDO ALBERTO CORREIA DOS SANTOS JUNIOR 09 June 2022 (has links)
[en] As a natural consequence of the digitization of the Brazilian judicial system, a large and increasing number of legal documents have become available on the Internet, especially judicial decisions. As an illustration, in 2020 the Brazilian Judiciary produced 25 million decisions. In that same year, the Brazilian Supreme Court (STF), the highest judicial body in Brazil, alone produced 99.5 thousand decisions. In line with those numbers, we face a growing demand for studies focused on extracting and exploring the legal knowledge hidden in those large collections of legal documents. However, unlike typical textual content (e.g., books, news and blog posts), legal text constitutes a particular case of highly conventionalized language, and little attention has been paid to information extraction in specialized domains such as legal texts. From a temporal perspective, the Judiciary itself is a constantly evolving institution, which molds itself to cope with the demands of society. Therefore, our goal is to propose a reliable process for legal information extraction from large collections of legal documents, based on the STF scenario and the monocratic decisions it published between 2000 and 2018. To do so, we explore the combination of different Natural Language Processing (NLP) and Information Extraction (IE) techniques in the legal domain. From NLP, we explore automated named entity recognition strategies in the legal domain. From IE, we explore dynamic topic modeling with tensor decomposition as a tool to investigate the changes in legal reasoning embedded in those decisions over time, through textual evolution and the presence of legal named entities. For reliability, we explore the interpretability of the methods employed and add visual resources to facilitate interpretation by a domain specialist. As a final result, we expect to propose a reliable and cost-effective process to support further studies in the legal domain and to propose new strategies for information extraction over large collections of documents.
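
A minimal sketch of the named-entity-recognition step on a Portuguese decision excerpt is given below, assuming spaCy's general-purpose Portuguese model (pt_core_news_sm) is installed; the thesis develops legal-domain NER, so an off-the-shelf model like this is only a starting point, and the excerpt is invented.

```python
# Hedged sketch of off-the-shelf NER over a (fictitious) decision excerpt.
# Requires: python -m spacy download pt_core_news_sm
import spacy

nlp = spacy.load("pt_core_news_sm")
excerpt = (
    "O Supremo Tribunal Federal julgou o recurso interposto pela União "
    "em Brasília no ano de 2018."
)
doc = nlp(excerpt)
for ent in doc.ents:
    # Prints each recognized entity with its label (organizations, locations, ...)
    print(ent.text, ent.label_)
```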
638

A Step Toward GDPR Compliance : Processing of Personal Data in Email

Olby, Linnea, Thomander, Isabel January 2018 (has links)
The General Data Protection Regulation, enforced on 25 May 2018, is a response to the growing importance of IT in today's society, accompanied by public demand for control over personal data. In contrast to the previous directive, the new regulation applies to personal data stored in an unstructured format, such as email, rather than solely to structured data. Companies are now forced to accommodate this change, among others, in order to be compliant. This study aims to provide a code of conduct for the processing of personal data in email as a measure for reaching compliance. Furthermore, this study investigates whether Named Entity Recognition (NER) can aid this process as a means of finding personal data in the form of names. A literature review of current research and recommendations was conducted for the code-of-conduct proposal. An NER system was constructed using a hybrid approach with binary logistic regression, hand-crafted rules and gazetteers. The model was applied to a selection of emails, including attachments, obtained from a small consultancy company in the automotive industry. The proposed code of conduct consists of six items, applied to the consultancy firm. The NER model demonstrated a low ability to identify names and was therefore deemed insufficient for this task.
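
The hybrid NER idea, combining a gazetteer with a binary logistic regression over simple token features, can be sketched as follows; the toy gazetteer, feature set and training tokens are assumptions for illustration, not the study's actual configuration.

```python
# Hedged sketch of gazetteer + binary logistic regression for name detection;
# the gazetteer, features and labels below are toy examples only.
from sklearn.linear_model import LogisticRegression

gazetteer = {"anna", "erik", "maria"}          # toy list of known first names

def features(token: str) -> list[float]:
    return [
        float(token.istitle()),                # capitalized token
        float(token.lower() in gazetteer),     # gazetteer hit
        float(len(token) > 3),                 # crude length cue
    ]

tokens = ["Anna", "skickade", "mejlet", "till", "Erik", "på", "fredag", "Maria", "svarade"]
labels = [1, 0, 0, 0, 1, 0, 0, 1, 0]           # 1 = part of a personal name

X = [features(t) for t in tokens]
clf = LogisticRegression().fit(X, labels)

for token in ["Johan", "rapporten"]:
    # Prints the predicted label (1 = name); generalization to unseen,
    # non-gazetteer names is exactly where such simple models struggle.
    print(token, clf.predict([features(token)])[0])
```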
639

Financing Public Solar Projects: California Public Jurisdictions' Experiences in Acquiring and Financing Solar Photovoltaic Installations

Hoffman, Dana M.C. 01 May 2013 (has links) (PDF)
More efficient technologies, state laws, and environmental, social, and political pressures have all contributed to placing solar acquisition on the agenda for California's public entities over the last half decade. But a key question for these frequently cash-strapped jurisdictions is how to utilize public dollars and lands, and how to leverage incentives, to obtain solar PV. As an alternative to outright purchase, a promising financing option made available to jurisdictions in recent years is ownership by a third party, usually the solar company, including various forms of Power Purchase Agreements (PPAs) and leasing. Due in part to state and federal incentives available between 2007 and 2012, these third-party provider (TPP) options have been used with increasing frequency; TPP arrangements accounted for "virtually all" larger and mid-size non-residential installations in 2008 (Sherwood 2008). A number of California's early adopters of third-party financing have installations that have now been operational for several years. Consequently, there is a new opportunity to evaluate the effectiveness of third-party financing. This thesis reviews solar acquisition practices in California over the last six years, comparing financing options through document analysis and feedback from jurisdiction staff. It finds that directly buying installations has provided a slight advantage in direct savings and overall satisfaction for jurisdictions on average, but success generally depends upon the jurisdiction having secured upfront capital, usually by accessing very low-interest loans or large grants. TPP projects have provided a good alternative to direct purchase, resulting in significant savings and positive reviews from jurisdictions, allowing them to invest in larger installation sizes and to meet local policy goals or mandates. Additionally, this thesis makes observations about the limitations of installation sizing, the impact of siting on savings, tips for selecting a solar installer, the benefits of cooperative procurement arrangements, and the relative importance of the existing and expired monetary incentives available for solar from 2006 through 2020.
640

Utilizing Transformers with Domain-Specific Pretraining and Active Learning to Enable Mining of Product Labels

Norén, Erik January 2023 (has links)
Structured Product Labels (SPLs), the package inserts that accompany drugs regulated by the Food and Drug Administration (FDA), hold information about Adverse Drug Reactions (ADRs) associated with drugs post-market. This information is valuable for actors working in the field of pharmacovigilance who aim to improve the safety of drugs. One such actor is Uppsala Monitoring Centre (UMC), a non-profit conducting pharmacovigilance research. In order to access the valuable information in the package inserts, UMC has constructed an SPL mining pipeline to mine SPLs for ADRs. This project investigates new approaches to the Scan problem, the part of the pipeline responsible for extracting mentions of ADRs. The Scan problem is approached as a Named Entity Recognition (NER) task, a subtask of Natural Language Processing. Using the transformer-based deep learning model BERT with domain-specific pre-training, an F1-score of 0.8220 was achieved. Furthermore, the chosen model was used in an iteration of Active Learning in order to efficiently extend the available data pool with the most informative examples. Active Learning improved the F1-score to 0.8337. However, Active Learning was benchmarked against a data set extended with random examples, which showed similarly improved scores; therefore, this application of Active Learning could not be determined to be effective in this project.
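
A minimal sketch of the uncertainty-sampling idea behind an Active Learning iteration, on top of a transformer token classifier, is shown below. The checkpoint name (a BioBERT model on the Hugging Face hub), the three-label head and the example sentences are assumptions for illustration; note that the classification head loaded this way is untrained, whereas the project fine-tunes its model before selecting examples.

```python
# Hedged sketch of uncertainty-based example selection for Active Learning
# with a BERT-style token classifier; checkpoint, labels and texts are assumed.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

checkpoint = "dmis-lab/biobert-base-cased-v1.1"      # assumed domain-specific model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# Note: the token-classification head here is newly initialized (untrained);
# in practice the model would be fine-tuned on labeled SPL data first.
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=3)
model.eval()

def uncertainty(text: str) -> float:
    """Mean token entropy; higher means the model is less sure about the text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    entropy = -(probs * probs.log()).sum(dim=-1)
    return entropy.mean().item()

unlabeled = [
    "Headache and nausea were reported in 12% of patients.",
    "Store at room temperature away from light.",
]
# Pick the most informative example to send for manual annotation.
ranked = sorted(unlabeled, key=uncertainty, reverse=True)
print(ranked[0])
```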
