111
Assessing the Readiness for Implementing Linked Open Government Data: A Study on Indonesian Government Agencies' Library
Irhamni, Irhamni, 07 1900
The main purpose of this study is to assess the readiness factors of libraries in Indonesian government agencies for implementing and using linked open government data (LOGD). The study investigated readiness factors within the TOE framework: technology (compatibility, complexity, relative advantage), organization (top management support, internal readiness), and environment (competitive pressure, external support, peer pressure). It employed a mixed-methods approach, encompassing surveys and interviews, to gather data from a representative sample of libraries inside Indonesian government organizations. The quantitative results indicate that compatible technology, external support, and peer pressure are significant factors in the readiness to implement LOGD as perceived by Indonesian librarians. The qualitative phase, conducted to explore the quantitative findings, found that from the technological perspective, data quality policy, metadata standard policy, and privacy and security policy are the main factors that make LOGD compatible with the library. From the environmental perspective, government agency libraries in Indonesia need law and legal policy as well as technical policy on LOGD for bibliographic data. External support also requires commitment and sustained engagement to ensure that government agency libraries in Indonesia are ready to implement LOGD, and librarians should consider peer communication among fellow librarians an essential factor in LOGD implementation. To increase the readiness for LOGD implementation in government agency libraries across Indonesia, the Indonesian government should create compatible technology policies for these libraries, establish a national policy supporting the technical aspects of LOGD, and foster peer partnerships among government agency libraries.
112
Konzeption eines RDF-Vokabulars für die Darstellung von COUNTER-Nutzungsstatistiken (Design of an RDF Vocabulary for Representing COUNTER Usage Statistics)
Domin, Annika, 15 October 2015
This master's thesis documents the creation of an RDF-based vocabulary for representing usage statistics of electronic resources compiled according to the COUNTER standard. The concrete application of this vocabulary is the electronic resource management system (ERMS) currently being developed by Leipzig University Library within the cooperative project AMSL. The system is based on Linked Data, is intended to model the changed management processes for electronic resources, and at the same time to be vendor-independent and flexible. The COUNTER vocabulary, however, is also meant to be usable beyond this application.
The thesis is divided into two parts, fundamentals and modelling. The first part establishes the library-related need for ERM systems and focuses the discussion on the subfield of usage statistics and COUNTER standardization. The technical foundations of the modelling are then covered, to make the thesis accessible to readers unfamiliar with Linked Data. The modelling part follows, beginning with a requirements analysis and an analysis of the XML schema underlying COUNTER files. This is followed by the modelling of the vocabulary using RDFS and OWL. Building on considerations for transferring XML statistics to RDF and for assigning URIs, real example files are then converted manually and successfully checked in a short test. The thesis closes with a conclusion and an outlook on further work with the results.
The resulting RDF vocabulary is available for reuse on GitHub at: https://github.com/a-nnika/counter.vocab
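As an illustration of the kind of modelling such a vocabulary enables, a single COUNTER usage figure can be expressed as RDF triples. The following stdlib-only Python sketch uses a hypothetical namespace and property names for the example, not the actual terms of the vocabulary published at the URL above:

```python
# Express one JR1-style COUNTER observation (42 full-text requests for a
# journal in January 2015) as a set of (subject, predicate, object) triples.
# The counter# namespace and its terms are illustrative assumptions.

RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"  # standard rdf:type
COUNTER = "http://example.org/counter#"   # hypothetical vocabulary namespace
EX = "http://example.org/stats/"          # hypothetical data namespace

def uri(ns, local):
    """Join a namespace and a local name into a full URI string."""
    return ns + local

report_entry = uri(EX, "entry1")
triples = {
    (report_entry, RDF_TYPE, uri(COUNTER, "UsageEntry")),
    (report_entry, uri(COUNTER, "item"), uri(EX, "journalA")),
    (report_entry, uri(COUNTER, "period"), "2015-01"),
    (report_entry, uri(COUNTER, "fullTextRequests"), 42),
}

def objects(triples, subject, predicate):
    """Simple triple-pattern lookup: all objects matching (subject, predicate, ?o)."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects(triples, report_entry, uri(COUNTER, "fullTextRequests")))  # [42]
```

In a real deployment these triples would be serialized (e.g., as Turtle or N-Triples) and loaded into the ERMS triple store rather than kept as Python tuples.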
113
Current and Potential Use of Linked Geodata / Nuvarande och framtida användning av Länkade Geodata
Eduards, Rasmus, January 2017
As of today (2017), geographic information (GI) is a vital part of our daily lives. With applications like Google Maps, it is hard not to come into contact with these platforms. Such applications are becoming more than just maps for finding our way in the real world; they contain important data. At present, some of these datasets are kept by authorities and institutes with no connection to each other. One way to link this information is by using Linked Data and, more specifically for GI, Linked Geodata. By linking datasets, information becomes connected, which can support the structure of Open Data and other data collaborations, and it enables the data to be queried, for example in search engines. This Bachelor of Science thesis was conducted at KTH Royal Institute of Technology in cooperation with Digpro AB. Its purpose is to examine whether Linked Geodata is something to invest in. This was done by investigating current use to understand how Linked Geodata is implemented, and by describing challenges and possibilities with respect to Linked Geodata, through a literature review and interviews with personnel working on Linked Geodata implementations. The results showed some implementations in the Netherlands and Finland, as well as a private initiative from the University of Leipzig called LinkedGeoData. In Sweden, authorities had explored the topic of Linked Geodata and produced guidelines, but without any actual attempt at large-scale implementation. The biggest challenges were that queries did not support all kinds of spatial data, keeping the Linked Geodata consistent, and finding a way to fund the workload. The biggest possibilities were creating cooperation between authorities, integration and discoverability of data in search engines, and improving the environment for publishing open data, which could lead to an improved social and economic situation.
After evaluation, this thesis concludes that there is considerable potential use for Linked Geodata. The most significant use is for authorities with large amounts of geodata, especially for publishing Open Data and integrating their data with search engines to enable more advanced queries. The technology appears to have some problems, mainly the limited support for spatial data and the difficulty of maintaining the connections. However, these problems are not severe enough to advise against investing in the technology; it needs some improvements and more initiatives.
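One of the challenges identified above, incomplete support for spatial data in query engines, is easy to see in miniature. The stdlib-only Python sketch below stores points using the W3C wgs84_pos vocabulary and implements a bounding-box filter by hand, the kind of operation a GeoSPARQL-aware engine would provide natively; the place resource URIs are invented for the example:

```python
# Store places as WGS84 lat/long triples and filter by bounding box manually.
# Engines without spatial extensions force exactly this kind of client-side
# filtering, which is what made spatial queries a pain point in practice.
WGS = "http://www.w3.org/2003/01/geo/wgs84_pos#"  # real W3C vocabulary

places = {
    "Stockholm": (59.33, 18.06),
    "Helsinki": (60.17, 24.94),
    "Amsterdam": (52.37, 4.90),
}
triples = set()
for name, (lat, lon) in places.items():
    s = "http://example.org/place/" + name    # hypothetical resource URIs
    triples.add((s, WGS + "lat", lat))
    triples.add((s, WGS + "long", lon))

def within_bbox(triples, lat_min, lat_max, lon_min, lon_max):
    """Return subjects whose lat/long fall inside the bounding box."""
    lats = {s: o for s, p, o in triples if p == WGS + "lat"}
    lons = {s: o for s, p, o in triples if p == WGS + "long"}
    return sorted(s for s in lats
                  if lat_min <= lats[s] <= lat_max and lon_min <= lons[s] <= lon_max)

# A Nordic bounding box catches Stockholm and Helsinki but not Amsterdam.
print(within_bbox(triples, 55, 70, 0, 30))
```

A GeoSPARQL-capable endpoint would express the same filter declaratively (e.g., with `geof:sfWithin`), keeping the computation server-side.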
114
Avaliação da acurácia do teste imunoenzimático e de sua contribuição no seguimento de pacientes com paracoccidioidomicose em tratamento e no diagnóstico de doença reativada (Evaluation of the accuracy of the enzyme immunoassay and its contribution to the follow-up of patients with paracoccidioidomycosis under treatment and to the diagnosis of relapsed disease)
Sylvestre, Tatiane Fernanda, January 2013
Advisor: Rinaldo Poncio Mendes / Co-advisors: James Venturini, Ana Pardini Vicentini, Daniela Vanessa Moris de Oliveira / Examining board: Mário León Silva-Vergara, Anamaria Mello Miranda Peniago
Abstract: The reappearance of clinical manifestations after efficacious treatment, here identified as relapse, has rarely been evaluated. Thus, the cases of relapse observed in a cohort of 400 patients, 93 with the acute/subacute form (AF) and 307 with the chronic form (CF), were studied. These patients had already reached apparent cure, characterized as clinical cure, normalization of the erythrocyte sedimentation rate, and serological cure, with a negative double agar gel immunodiffusion test (DID) for two years after antifungal discontinuation. Twenty-one (5.2%) of these patients had relapses. Three (14.3%) were female and 18 (85.7%) were male, a male:female ratio of 6:1. Of the 21 relapsed patients, 15 (4.8%) presented the CF and 6 (6.4%) the AF (p>0.05). Relapse occurred 46 to 296 months after introduction of treatment (Md=96 months) and 4 to 267 months (Md=60 months) after withdrawal. The clinical forms did not differ regarding the time elapsed until relapse. DID was positive in only 45% of the relapsed cases, which led to the evaluation of other tests to diagnose this condition. Thus, the enzyme-linked immunosorbent assay (ELISA) was standardized and its cut-off determined using the receiver operating characteristic (ROC) curve at confidence intervals of 95% and 99%, yielding optical densities of 0.710 and 0.850, respectively. Serological evaluation was then performed using ELISA/0.710 and ELISA/0.850, and immunoblotting identifying gp43 (IBgp43) and gp70 (IBgp70). ELISA/0.710 and DID showed the same sensitivity, 76.1%, in serum samples obtained before treatment (p=0.25). DID sensitivity was lower at relapse than before the initial treatment (45% vs 80%; p=0.017), whereas that of ELISA/0.710 did not differ significantly (65% vs 80%; p=0.125). IBgp70 showed a 12.5% and IBgp43 a 43.8% sensitivity for diagnosing relapse (p<0.05). ELISA/0.710 showed 96% sensitivity, 95% specificity, a 95% positive predictive value, a 96% negative predictive value, and an overall accuracy of 95.5%. / Master's
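The cut-off selection described in the abstract can be illustrated with a small ROC-style computation. The following stdlib-only Python sketch chooses a cut-off by maximizing Youden's J statistic on synthetic optical densities; the data, and hence the chosen cut-off, are invented for illustration and will not match the study's 0.710:

```python
# Pick an ELISA optical-density cut-off from case/control readings by
# maximizing Youden's J = sensitivity + specificity - 1, the usual way a
# single operating point is chosen from a ROC curve. All values are synthetic.

def sens_spec(cases, controls, cutoff):
    """Sensitivity/specificity when 'optical density >= cutoff' is a positive call."""
    tp = sum(od >= cutoff for od in cases)
    tn = sum(od < cutoff for od in controls)
    return tp / len(cases), tn / len(controls)

def best_cutoff(cases, controls):
    """Candidate cutoffs are the observed readings; take the one maximizing J."""
    candidates = sorted(set(cases) | set(controls))
    return max(candidates, key=lambda c: sum(sens_spec(cases, controls, c)) - 1)

cases = [0.92, 0.85, 0.78, 0.71, 0.66]      # sera from patients (synthetic)
controls = [0.10, 0.22, 0.35, 0.48, 0.69]   # healthy sera (synthetic)
cut = best_cutoff(cases, controls)
print(cut, sens_spec(cases, controls, cut))  # 0.66 (1.0, 0.8) on this synthetic data
```

Real ROC software additionally reports the area under the curve and confidence intervals, which is how the study arrived at its two candidate cut-offs.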
115
保險連結型證券在台灣市場之應用與未來發展分析 / The implementation and market development of insurance-linked security (ILS) in Taiwan
蔡智聖, Tsai, Chih Sheng, Unknown Date
The past several decades have been an extraordinary period in the history of extreme catastrophes, e.g., the September 11 terrorist attack (2001), the South Asia tsunami (2004), and Hurricane Katrina (2005). The life insurance industry also faces catastrophic risk events: longevity and mortality risks. Facing the resulting insurance/reinsurance capacity shortage, raising additional equity capital is one solution; another response was innovation, and insurance-linked securities (ILS) were created. Insurance-linked securities are a means of transferring insurance risks to the capital market. Since the inception of the market in 1996, ILS has evolved into a strong complement to traditional reinsurance, providing benefits to transaction sponsors, i.e., ceding companies. This study explores the prospects for ILS by focusing on several issues. First, it reviews the product features of ILS: structure, trigger mechanism, perils, capacity, pricing, and costs. Second, it analyzes the international market development of ILS. Third, it turns to the potential market in Taiwan, reviewing it from the perspectives of property-casualty insurance and life insurance respectively. Finally, drawing on these analyses, the study aims to provide solid conclusions for ILS development in Taiwan.
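The trigger mechanisms reviewed in the study can be made concrete with a small sketch. The following stdlib-only Python function models an index-triggered catastrophe bond with a linear principal write-down between attachment and exhaustion points; all figures are illustrative and not drawn from any actual ILS transaction:

```python
# Principal repayment under an industry-loss-index trigger: investors are
# repaid in full below the attachment point, lose everything above the
# exhaustion point, and are written down linearly in between.

def principal_repaid(index_loss, principal, attachment, exhaustion):
    """Principal returned to investors after applying the index trigger."""
    if index_loss <= attachment:
        return principal                      # trigger not reached: full repayment
    if index_loss >= exhaustion:
        return 0.0                            # layer exhausted: total loss
    writedown = (index_loss - attachment) / (exhaustion - attachment)
    return principal * (1.0 - writedown)

# A $100m bond attaching at $20bn industry losses and exhausting at $40bn:
# a $30bn event consumes half the layer.
print(principal_repaid(30e9, 100e6, 20e9, 40e9))  # 50000000.0
```

Indemnity and parametric triggers follow the same shape; only the quantity driving the write-down (actual sponsor losses, or a physical hazard parameter) changes.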
116
On the Neutralome of Great Apes and Nearest Neighbor Search in Metric Spaces
Woerner, August Eric, January 2016
Problems of population genetics are magnified by problems of big data. My dissertation spans the disciplines of computer science and population genetics, leveraging computational approaches to address issues in genomics research. In this dissertation I develop more efficient metric search algorithms. I also show that the vast majority of the genomes of great apes are impacted by the forces of natural selection. Finally, I introduce a heuristic to identify neutralomes, regions that are evolving with minimal selective pressures, and use these neutralomes for inferences on effective population size in great apes. We begin with a formal and far-reaching problem that impacts a broad array of disciplines including biology and computer science: the 𝑘-nearest neighbors problem in generalized metric spaces. The 𝑘-nearest neighbors (𝑘-NN) problem is deceptively simple: given a query q and a dataset D of size 𝑛, find the 𝑘 closest points to q. This problem can be solved easily by algorithms that compute 𝑘th order statistics in O(𝑛) time and space. It follows that if D can be ordered, then it is perhaps possible to solve 𝑘-NN queries in sublinear time. While this is not possible for an arbitrary distance function on the points in D, I show that if the points are constrained by the triangle inequality (as with metric spaces), then the dataset can be properly organized into a dispersion tree (Appendix A). Dispersion trees are a hierarchical data structure built around a large dispersed set of points. They have sub-quadratic construction times (O(𝑛¹·⁵ log 𝑛)), use O(𝑛) space, and employ a provably optimal search strategy that minimizes the number of times the distance function is invoked.
While all metric data structures have worst-case O(𝑛) search times, dispersion trees have average-case search times substantially faster than a large sampling of comparable data structures in the vast majority of spaces sampled. Exceptions include extremely high-dimensional spaces (d>20), which devolve into near-linear scans of the dataset, and unstructured low-dimensional (d<6) Euclidean spaces. Dispersion trees have empirical search times that appear to scale as O(𝑛ᶜ) for 0<c<1. As solutions to the 𝑘-NN problem are in general too slow to be used effectively in the arena of big data in genomics, it is my hope that dispersion trees may help lift this barrier. With source code freely available for academic use, dispersion trees may be useful for nearest neighbor classification in machine learning, fast read-mapping against a reference genome, and as a general computational tool for problems such as clustering. Next, I turn to problems in population genomics. Genomic patterns of diversity are a complex function of the interplay between demographics, natural selection, and mechanistic forces. A central tenet of population genetics is the neutral theory of molecular evolution, which states that the vast majority of changes at the molecular level are (relatively) selectively neutral; that is, they do not affect fitness. A corollary of the neutral theory is that the frequencies of most alleles in populations are dictated by neutral processes, not selective ones. The forces of natural selection impact not just the site of selection but linked neutral sites as well. I propose an empirical assessment of the extent of linked selection in the human genome (Appendix B). Recombination decouples sites of selection from the genomic background, and thus serves to mitigate the effects of linked selection.
I use two metrics of recombination, the minimum genetic distance to genes and local rates of recombination, to parse the effects of linked selection into selection from genic and nongenic sources in the human genome. My empirical assessment shows profound linked selective effects from nongenic sources, with these effects being greater than those of genic sources on the autosomes, and with generally greater effects on the X chromosome than on the autosomes. I quantify these trends using multiple linear regression, and then model the effects of linked selection to conserved elements across the whole of the genome. Unlike the vast majority of the genome, places predicted to be neutral by my model do not show these linked selective effects, demonstrating that linkage to these regulatory elements, and not some other mechanistic force, accounts for the findings. Further, neutrally evolving regions are extremely rare (~1%) in the genome, and despite generally larger linked selective effects on the X chromosome, the size of this “neutralome” is proportionally larger on the X chromosome than on the autosomes. To account for this and to extend my findings to other great apes, I improve on my procedure for finding neutralomes and apply it to the genomes of humans, Nigerian chimpanzees, bonobos, and western lowland gorillas (Appendix C). In doing so I show that, like humans, these other apes are also enormously impacted by linked selection, with their neutralomes being substantially smaller than those of humans. I then use my genomic predictions of neutrality to examine how the landscape of linked selection changes across the X chromosome and the autosomes in regions close to, and far from, genes.
While I had previously demonstrated that the linked selective forces near genes are stronger on the X chromosome than on the autosomes in these taxa, I show that regions far from genes exhibit the opposite pattern: they show more selection from noncoding targets on the autosomes than on the X chromosome. This finding is replicated across our great ape samples. Further, inferences on the relative effective population size of the X chromosome and the autosomes, both near and far from genes, can be biased as a result.
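The triangle-inequality pruning at the heart of metric-search structures like the dispersion trees described at the start of this abstract can be sketched briefly. The stdlib-only Python illustration below uses a single pivot rather than a full tree, and the point set, query, and visit order are invented for the example; it shows only the pruning rule, not the dissertation's actual algorithm:

```python
# For any pivot p, the triangle inequality gives |d(q,p) - d(p,x)| <= d(q,x),
# so a point x can be skipped whenever that cheap lower bound already exceeds
# the best distance found so far. In a real tree, d(p, x) is precomputed at
# build time; here we just sort points by the bound.
import math

def dist(a, b):
    return math.dist(a, b)  # Euclidean metric; any metric works

def nn_with_pivot(query, points, pivot):
    """1-NN search pruning with the bound |d(q,p) - d(p,x)| <= d(q,x)."""
    d_qp = dist(query, pivot)
    by_bound = sorted(((abs(d_qp - dist(pivot, x)), x) for x in points))
    best, best_d, evaluations = None, math.inf, 0
    for bound, x in by_bound:
        if bound > best_d:
            break        # bounds are sorted: everything after is prunable too
        evaluations += 1
        d = dist(query, x)
        if d < best_d:
            best, best_d = x, d
    return best, best_d, evaluations

pts = [(float(i), float(j)) for i in range(10) for j in range(10)]
best, d, evals = nn_with_pivot((3.2, 4.1), pts, pivot=(0.0, 0.0))
print(best, round(d, 3), evals)  # nearest neighbor found with far fewer than 100 distance evaluations
```

Hierarchical structures generalize this by using many pivots arranged in a tree, which is what makes the empirical O(𝑛ᶜ) scaling possible.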
117
Correlation between American mortality and DJIA index price
Ong, Li Kee, 14 September 2016
For an equity-linked insurance, the death benefit is tied to the performance of the company's investment portfolio; hence, both mortality risk and equity return must be considered when pricing such insurance. Several studies have found some dependence between mortality improvement and economic growth. In this thesis, we show that American mortality rates and the Dow Jones Industrial Average (DJIA) index price are negatively dependent, using several copulas to define their joint distribution. We then use these copulas to forecast mortality rates and index prices and to calculate the payoffs of a 10-year term equity-linked insurance. We show that the predicted insurance payoffs are smaller when the dependence between mortality and index price is taken into account. / October 2016
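The modelling idea, coupling mortality and index-return marginals through a copula with negative dependence, can be sketched with a Gaussian copula. This stdlib-only Python sketch uses rho = -0.5 as an assumed illustrative value, not the thesis's fitted parameter:

```python
# Draw (u, v) pairs from a Gaussian copula: correlate two standard normals,
# then push each through the normal CDF so the marginals are uniform while the
# dependence structure is retained. u could then drive a mortality quantile
# and v an index-return quantile via inverse marginal CDFs.
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_pairs(n, rho, seed=0):
    """n (u, v) pairs whose dependence follows a Gaussian copula with parameter rho."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        pairs.append((phi(z1), phi(z2)))
    return pairs

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

pairs = gaussian_copula_pairs(5000, rho=-0.5)
u, v = zip(*pairs)
# Negative dependence: high mortality tends to coincide with poor index returns.
print(round(corr(u, v), 2))
```

The thesis compares several copula families fitted to the data; the Gaussian copula above is just the simplest member of that toolbox.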
118
Geek as a Constructed Identity and a Crucial Component of STEM Persistence
Liggett, Joshua B., 05 1900
The fields of science, technology, engineering, and mathematics (STEM) have long been bastions of the white male elite. Recently, academia has begun to recognize gender and ethnic disparities, and in an effort to expand the recruitment pool for collegiate STEM fields, various efforts have been employed nationally at the secondary level. In California, the latest of these efforts is referred to as Linked Learning, a pedagogy that combines college preparation with career preparation. The current study investigates the connection between what recent scholarship has called "geeking out" and higher academic performance. The phenomenon of geeking out includes a variety of non-school-related activities, ranging from participating in robotics competitions to a simple game of Dungeons & Dragons. The project examines the relationship between long-term success in STEM fields and the current informal behaviors of secondary students, and argues that the particular circumstance in which Linked Learning combines with geeking out is successful due to the associated inclusionary environment. Methods included a yearlong ethnographic study of the Center for Advanced Research and Technology, a Central Valley school with a diverse student body. Through participant observation and interviews, the main goal of this research is to examine the circumstances that account for the effectiveness of the Center's environment.
119
Využití Linked Data pro sdílení dat o smlouvách veřejných institucí / Exploitation of Linked Data for sharing public agreements data
Hryzlík, Pavel, January 2016
Title: Exploitation of Linked Data for sharing public agreements data Author: Bc. Pavel Hryzlík Department: Department of Software Engineering Supervisor: Doc. Mgr. Martin Nečaský, Ph.D., Department of Software Engineering Abstract: The objective of this thesis is to explore the possibilities of using Linked Data principles for publishing and sharing data on contracts of public institutions, and for linking them to related data in the public domain (e.g., the business and trade registers and the register of contracts). The thesis presents the entire process of opening up contract data. It defines a data standard for open contracts and proposes an ontology for publishing data on contracts and their interconnections. Furthermore, it designs and implements a platform for publishing contracts. The first part of the platform is a conversion module enabling the conversion of contracts stored in relational databases into RDF form, employing R2RML mapping techniques. The second part is a uniform repository that harvests data on contracts in Linked Data format. The third part is a web application that makes the data on contracts available to end users. Keywords: Contract, Open Data, Linked Data, RDF, JSON-LD, R2RML, SPARQL
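The conversion module's task, mapping relational contract records to RDF, can be sketched in miniature. The Python sketch below hand-rolls what an R2RML processor would do declaratively; the vocabulary URIs, column names, and the link to a business-register resource are hypothetical illustrations, not the thesis's actual ontology:

```python
# Map one relational row describing a public contract to RDF triples.
# An R2RML mapping would declare the same subject IRI template and
# predicate-object maps; here we express them directly in Python.

EX = "http://example.org/contract#"   # hypothetical contract vocabulary

row = {"id": 42, "title": "Road maintenance", "supplier_ico": "12345678",
       "price_czk": 1500000}

def row_to_triples(row):
    s = f"http://example.org/contracts/{row['id']}"   # subject IRI template
    return {
        (s, EX + "title", row["title"]),
        # Linking the supplier to a business-register resource is what makes
        # this Linked Data rather than an isolated record.
        (s, EX + "supplier",
         f"http://example.org/business-register/{row['supplier_ico']}"),
        (s, EX + "priceCZK", row["price_czk"]),
    }

triples = row_to_triples(row)
print(len(triples))  # 3
```

In the platform itself, the same mapping would be written once as R2RML and executed by the conversion module over the whole contracts table.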
120
Scalable Discovery and Analytics on Web Linked Data
Abdelaziz, Ibrahim, 07 1900
Resource Description Framework (RDF) provides a simple way for expressing facts across the web, leading to Web linked data. Several distributed and federated RDF systems have emerged to handle the massive amounts of RDF data available nowadays. Distributed systems are optimized to query massive datasets that appear as a single graph, while federated systems are designed to query hundreds of decentralized and interlinked graphs.
This thesis starts with a comprehensive experimental study of the state-of-the-art RDF systems. It identifies a set of research problems for improving the state-of-the-art, including: supporting the emerging RDF analytics required by many modern applications, querying linked data at scale, and enabling discovery on linked data. Addressing these problems is the focus of this thesis.
First, we propose Spartex; a versatile framework for complex RDF analytics. Spartex extends SPARQL to seamlessly combine generic graph algorithms with SPARQL queries. Spartex implements a generic SPARQL operator as a vertex-centric program that interprets SPARQL queries and executes them efficiently using a built-in optimizer. We demonstrate that Spartex scales to datasets with billions of edges, and is at least as fast as the state-of-the-art specialized RDF engines. For analytical tasks, Spartex is an order of magnitude faster than existing alternatives.
To address the scalability limitation of federated RDF engines, we propose Lusail; a scalable system for querying geo-distributed RDF graphs. Lusail follows a two-tier strategy: (i) locality-aware decomposition of the query into subqueries to maximize the computations at the endpoints and minimize intermediary results, and (ii) selectivity-aware execution to reduce network latency and increase parallelism. Our experiments on billions of triples show that Lusail outperforms existing systems by orders of magnitude in scalability and response time.
Finally, enabling discovery on linked data is challenging due to the prior knowledge required to formulate SPARQL queries. To address these challenges, we develop novel techniques to (i) predict semantically equivalent SPARQL queries from a set of keywords by leveraging word embeddings, and (ii) generate fine-grained and non-blocking query plans to get fast and early results.
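Spartex's central idea, selecting a subgraph with SPARQL and then running a generic graph algorithm over it, can be sketched in miniature. The stdlib-only Python sketch below hand-rolls the triple-pattern step and a few rounds of PageRank; the data are invented, the FOAF predicate is just an example, and Spartex itself executes such programs vertex-centrically inside the engine rather than in client code:

```python
# Step 1 ("SPARQL"): keep only ?s foaf:knows ?o edges from an RDF triple list.
# Step 2 (analytics): run a simple damped PageRank over the selected subgraph.

KNOWS = "http://xmlns.com/foaf/0.1/knows"   # real FOAF predicate, example data

triples = [
    ("a", KNOWS, "b"), ("b", KNOWS, "c"), ("c", KNOWS, "a"), ("a", KNOWS, "c"),
    ("a", "http://example.org/age", 30),    # filtered out by the pattern below
]

edges = [(s, o) for s, p, o in triples if p == KNOWS]
nodes = {n for e in edges for n in e}

rank = {n: 1.0 / len(nodes) for n in nodes}
out = {n: [o for s, o in edges if s == n] for n in nodes}
for _ in range(30):
    nxt = {n: (1 - 0.85) / len(nodes) for n in nodes}   # damping factor 0.85
    for n in nodes:
        for o in out[n]:
            nxt[o] += 0.85 * rank[n] / len(out[n])
    rank = nxt

print(max(rank, key=rank.get))
```

The point of doing this inside the engine, as Spartex does, is to avoid exporting billions of edges to an external graph framework before the analytics step can even begin.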