81 |
Strategic issues in establishing open APIs at the Swedish Tax Agency whose function is based on existing e-services. Öhman, Emil. January 2018
The thesis proposes a more efficient process for analysing the issues that arise in the Swedish Tax Agency's development of open APIs. The issues are primarily legal requirements, but also technical and security aspects. The Swedish Tax Agency (Skatteverket) currently has a number of e-services with the potential to be developed into open APIs. Skatteverket is the Swedish government agency responsible for public records, taxation, and property valuation, among other things, and making its e-services available as open APIs would be a step forward in modern e-government. The report also provides background on the legal framework, on open data, on different ways of using and making APIs available to the public, and on the differences between open data and APIs. There are laws and government directives that open up and encourage the reuse of and access to data, but also laws that impose restrictions, which makes the analysis required before making more information available very important. In addition to ordinary laws, register laws define more precisely how the Tax Agency is allowed to work with information protected by those laws. The Tax Agency's current e-services and APIs are analysed with further development in mind, together with an examination of how employees at the Tax Agency view the future use of these APIs and the network economy. As an example, a company could use such APIs through its business system to make a monthly CSR request and then adjust tax payments, reducing the risk that employees have too much or too little tax withheld, which can otherwise lead to problems. This also saves time for companies' human resources departments.
The report presents a flow chart constructed to give effective support in analysing whether it is appropriate to make an e-service available through an API, and in developing the API service. The complex set of requirements makes it difficult to give a standard answer as to whether it is appropriate to make e-services available through APIs, and it is important to keep security in mind. It is also important to be aware that technology and law change over time, which means that continuous analysis is required to ensure that all requirements remain fulfilled.
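A minimal sketch of the kind of stepwise screening such a flow chart encodes. The individual checks and field names below are invented for illustration only; they are not Skatteverket's actual legal criteria, which the thesis captures in the flow chart itself:

```python
# Hypothetical screening of an e-service for open-API suitability.
# Each check mirrors one decision node in a flow chart: register-law
# restrictions, personal-data handling, and security review.
def suitable_for_open_api(service: dict) -> tuple[bool, str]:
    if service.get("register_law_restricted", True):
        return False, "blocked by register legislation"
    if service.get("contains_personal_data", True) and not service.get(
        "lawful_basis_for_sharing", False
    ):
        return False, "personal data without a lawful basis for sharing"
    if not service.get("security_review_passed", False):
        return False, "security review pending or failed"
    return True, "candidate for an open API"

# Example: a service that clears every gate in this toy model.
candidate = {
    "register_law_restricted": False,
    "contains_personal_data": True,
    "lawful_basis_for_sharing": True,
    "security_review_passed": True,
}
verdict, reason = suitable_for_open_api(candidate)
```

Because laws and technology change over time, as the abstract notes, such a screening would need to be re-run whenever the underlying requirements change.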
|
82 |
Qalamos: Connecting Manuscript Traditions. Becker, Michael, Krause, Anett, Schmid, Larissa. 09 February 2024
No description available.
|
83 |
MEEDS: A Decision Support System for Selecting the Most Useful Developmental Projects in Developing Countries: Case of Ghana. Heathcote-Fumador, Ida Ey. January 2018
Several sustainable development indicators have been used to monitor and measure the progress of various countries, and the reports and data available about countries' progress show that development has not been equal across all regions. On the brighter side, the data can be used to inform decision making in areas that are experiencing deficiencies. In this research, a decision support system (DSS) is built to help governments and NGOs choose projects that align with the needs of the people. We approached this research by utilizing Abraham Maslow's well-established hierarchy of needs as the main criteria for choosing projects for sustainable development. The system ranks development projects based on the priority of each need and the degree to which it has already been fulfilled: projects that meet an urgent need that is also lacking fulfillment rank higher than other project alternatives. The Social Progress Index (SPI), comprehensive open data that measures the social progress of countries, was correlated with the needs in Maslow's hierarchy. The needs were then used as criteria in the AHP decision analysis model to build a classic DSS to aid in selecting the most appropriate development project.
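The AHP step described above can be sketched as follows. The criteria, pairwise comparison values, and project scores are invented for illustration; AHP derives priority weights from a reciprocal pairwise comparison matrix (here via the common geometric-mean approximation) and ranks alternatives by their weighted scores:

```python
import numpy as np

# Hypothetical pairwise comparison matrix over three needs-based criteria
# (e.g. physiological, safety, social -- names illustrative only).
# Entry [i][j] states how much more important criterion i is than j on
# Saaty's 1-9 scale; the matrix is reciprocal by construction.
comparisons = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

def ahp_weights(matrix):
    """Approximate AHP priority weights via the geometric-mean method."""
    geo_means = np.prod(matrix, axis=1) ** (1.0 / matrix.shape[1])
    return geo_means / geo_means.sum()

weights = ahp_weights(comparisons)

# Score project alternatives: rows = projects, columns = criteria,
# values = how well each project addresses the corresponding unmet need.
projects = np.array([
    [0.7, 0.2, 0.1],  # project A
    [0.3, 0.5, 0.2],  # project B
])
scores = projects @ weights
best = scores.argmax()  # index of the highest-ranked project
```

In a full AHP analysis one would also check the consistency ratio of the comparison matrix before trusting the weights; that step is omitted here for brevity.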
|
84 |
Open Government Data and Value Creation: Exploring the Roles Canadian Data Intermediaries Play in the Value Creation Process. Merhi, Salah. 03 January 2023
Open Government Data, an initiative of the open government movement, is believed to have the potential to increase government transparency, accountability, and citizens' participation in government affairs. It is also posited that open data will contribute to economic growth and value creation. The Canadian federal, provincial, and local governments have been actively opening and releasing datasets about multiple subjects of interest to the public. However, evidence of the benefits of using open datasets is scant, with no empirical research undertaken to understand how the data are used and what value is being created. This study, based on a qualitative, grounded theory method, focuses on the work and experiences of 17 Canadian open data intermediary firms to discover patterns and themes that explain how the data were used, what resources were needed, the value created, and the challenges faced.
The data collection is based on semi-structured interviews conducted virtually with each firm's founder or executives. The data analysis provided insights into how open government data were used, the organizational challenges the open data intermediaries faced, the state of open government data, and the economic value created. The findings highlighted the key similarities and differences in the activities the open data intermediaries performed and the importance of resources and capabilities in developing products and services that contribute to economic value creation. The study concluded by listing five challenges impacting the use of open government data: (a) awareness, (b) quality of open government data, (c) competencies of users, (d) data standards, and (e) value creation.
|
85 |
Connecting Open Data With Transparency and Accountability to Promote Value Generation: A Case Study of Zambia. Hassanali, Anzee. January 2022
This research examines how open data may be accessed and used, and what role it plays in promoting transparency. Through this thesis, I aim to investigate how open data may be utilized to foster accountability and transparency, which would improve productivity and bring about social change. To be effective, transparency needs to be integrated not just into the systems of the 'supply side', the party providing the information, but also into those of the users, the 'demand side'. Initiatives for transparency and accountability have emerged in the development context over the past ten years as a means of addressing democratic and developmental shortcomings. The analysis draws on the existing literature on transparency and accountability initiatives and on what is needed for them to be impactful. The author further examined this concept through interviews with professionals working along the data value chain in Zambia, which allowed for deeper analysis. For open data and transparency initiatives to result in accountability, the right data must first be disclosed, which necessitates an awareness of the politics of data publication. Next, it is important that this data be published with a 'user-centric' frame. Further, it is crucial for intermediaries to gather, analyze, and make use of the data in order to create a system of accountability whereby institutions are enabled to provide better services and citizens are able to demand those services, enabling value creation.
|
86 |
Fictional first memories. Akhtar, Shazia, Justice, L.V., Morrison, Catriona M., Conway, M.A. 17 July 2018
In a large-scale survey, 6,641 respondents provided descriptions of their first memory and their age when they encoded that memory, and they completed various memory judgments and ratings. In good agreement with many other studies, where the mean age at encoding of earliest memories is usually found to fall somewhere in the first half of the 3rd year of life, the mean age at encoding here was 3.2 years. The established view is that the distribution around mean age at encoding is truncated, with very few or no memories dating to the preverbal period, that is, below about 2 years of age. However, we found that 2,487 first memories (nearly 40% of the entire sample) dated to an age at encoding of 2 years and younger, with 893 dating to 1 year and younger. We discuss how such improbable, fictional first memories could have arisen and contrast them with more probable first memories, those with an age at encoding of 3 years and older.
|
87 |
The role of e-participation and open data in evidence-based policy decision making in local government. Sivarajah, Uthayasankar, Weerakkody, Vishanth J.P., Waller, P., Lee, Habin, Irani, Zahir, Choi, Y., Morgan, R., Glikman, Y. 12 February 2015
The relationships between policies, their values, and outcomes are often difficult for citizens and policymakers to assess due to the complex nature of the policy lifecycle. With the opening of data by public administrations, there is now a greater opportunity for transparency, accountability, and evidence-based decision making in the policymaking process. In representative democracies, citizens rely on their elected representatives and local administrations to take policy decisions that address societal challenges and add value to their local communities. Citizens now have the opportunity to assess the impact and values of the policies introduced by their elected representatives and hold them accountable by utilizing historical open data that is publicly available. Using a qualitative case study in a UK Local Government Authority, this article examines how e-participation platforms and the use of open data can facilitate more factual, evidence-based, and transparent policy decision making and evaluation. From a theoretical stance, this article contributes to the policy lifecycle and e-participation literature. The article also offers valuable insights to public administrations on how open data can be utilized for evidence-based policy decision making and evaluation.
|
88 |
A context-consent meta-framework for designing open (qualitative) data studies. Branney, Peter, Reid, K., Frost, N., Coan, S., Mathieson, A., Woolhouse, M. 12 May 2018
To date, open science, and particularly open data, in psychology has focused on quantitative research. This paper aims to explore ethical and practical issues encountered by UK-based psychologists utilising open qualitative datasets. Semi-structured telephone interviews with eight qualitative psychologists were explored using a framework analysis. From the findings, we offer a context-consent meta-framework as a resource to help in the design of studies sharing their data and/or studies using open data. We recommend that 'secondary' studies conduct archaeologies of context and consent to examine whether the available data are suitable for their research questions. This research is the first we know of in the study of 'doing' (or not doing) open science; it could be repeated to develop a longitudinal picture or complemented with additional approaches, such as observational studies of how context and consent are negotiated in pre-registered studies and open data. / The author's manuscript has a slightly different title from the published article: A meta-framework for designing open data studies in psychology: ethical and practical issues of open qualitative data sets
|
89 |
Automating Geospatial RDF Dataset Integration and Enrichment. Sherif, Mohamed Ahmed Mohamed. 12 December 2016
Over the last years, the Linked Open Data (LOD) cloud has evolved from a mere 12 to more than 10,000 knowledge bases. These knowledge bases come from diverse domains including (but not limited to) publications, life sciences, social networking, government, media, and linguistics. Moreover, the LOD cloud also contains a large number of cross-domain knowledge bases such as DBpedia and Yago2. These knowledge bases are commonly managed in a decentralized fashion and contain partly overlapping information. This architectural choice has led to knowledge pertaining to the same domain being published by independent entities in the LOD cloud. For example, information on drugs can be found in Diseasome as well as DBpedia and Drugbank. Furthermore, certain knowledge bases such as DBLP have been published by several bodies, which in turn has led to duplicated content in the LOD cloud. In addition, large amounts of geo-spatial information have been made available with the growth of the heterogeneous Web of Data.
The concurrent publication of knowledge bases containing related information promises to become a phenomenon of increasing importance with the growth of the number of independent data providers. Enabling the joint use of the knowledge bases published by these providers for tasks such as federated queries, cross-ontology question answering and data integration is most commonly tackled by creating links between the resources described within these knowledge bases. Within this thesis, we spur the transition from isolated knowledge bases to enriched Linked Data sets where information can be easily integrated and processed. To achieve this goal, we provide concepts, approaches and use cases that facilitate the integration and enrichment of information with other data types that are already present on the Linked Data Web with a focus on geo-spatial data.
The first challenge that motivates our work is the lack of measures that use the geographic data for linking geo-spatial knowledge bases. This is partly due to the geo-spatial resources being described by the means of vector geometry. In particular, discrepancies in granularity and error measurements across knowledge bases render the selection of appropriate distance measures for geo-spatial resources difficult. We address this challenge by evaluating existing literature for point set measures that can be used to measure the similarity of vector geometries. Then, we present and evaluate the ten measures that we derived from the literature on samples of three real knowledge bases.
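One standard point-set measure of the kind evaluated here is the Hausdorff distance, which compares two vector geometries sampled at different granularities. A minimal sketch (the sample points are invented, and this is only one example of such a measure, not necessarily one of the thesis's ten):

```python
import numpy as np

def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    # Pairwise Euclidean distances between the two point sets via broadcasting.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# Two toy "geometries": the same square boundary sampled with different
# granularity and small positional discrepancies, mimicking the kind of
# mismatch found between independently published knowledge bases.
geom1 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
geom2 = np.array([[0.1, 0.0], [1.0, 0.1], [0.9, 1.0], [0.0, 0.9], [0.5, 0.5]])

distance = hausdorff(geom1, geom2)
```

A link discovery tool would then link two geo-spatial resources when such a distance falls below a chosen threshold; the difficulty the thesis points to is that granularity and measurement error make that threshold hard to choose across knowledge bases.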
The second challenge we address in this thesis is the lack of automatic Link Discovery (LD) approaches capable of dealing with geospatial knowledge bases with missing and erroneous data. To this end, we present Colibri, an unsupervised approach that allows discovering links between knowledge bases while improving the quality of the instance data in these knowledge bases. A Colibri iteration begins by generating links between knowledge bases. Then, the approach makes use of these links to detect resources with probably erroneous or missing information. This erroneous or missing information detected by the approach is finally corrected or added.
The third challenge we address is the lack of scalable LD approaches for tackling big geo-spatial knowledge bases. Thus, we present Deterministic Particle-Swarm Optimization (DPSO), a novel load balancing technique for LD on parallel hardware based on particle-swarm optimization. We combine this approach with the Orchid algorithm for geo-spatial linking and evaluate it on real and artificial data sets. The lack of approaches for automatic updating of links of an evolving knowledge base is our fourth challenge. This challenge is addressed in this thesis by the Wombat algorithm. Wombat is a novel approach for the discovery of links between knowledge bases that relies exclusively on positive examples. Wombat is based on generalisation via an upward refinement operator to traverse the space of Link Specifications (LS). We study the theoretical characteristics of Wombat and evaluate it on different benchmark data sets.
The last challenge addressed herein is the lack of automatic approaches for geo-spatial knowledge base enrichment. Thus, we propose Deer, a supervised learning approach based on a refinement operator for enriching Resource Description Framework (RDF) data sets. We show how we can use exemplary descriptions of enriched resources to generate accurate enrichment pipelines. We evaluate our approach against manually defined enrichment pipelines and show that our approach can learn accurate pipelines even when provided with a small number of training examples.
Each of the proposed approaches is implemented and evaluated against state-of-the-art approaches on real and/or artificial data sets. Moreover, all approaches are peer-reviewed and published in a conference or a journal paper. Throughout this thesis, we detail the ideas, implementation and the evaluation of each of the approaches. Moreover, we discuss each approach and present lessons learned. Finally, we conclude this thesis by presenting a set of possible future extensions and use cases for each of the proposed approaches.
|
90 |
Covering or complete? : Discovering conditional inclusion dependencies. Bauckmann, Jana, Abedjan, Ziawasch, Leser, Ulf, Müller, Heiko, Naumann, Felix. January 2012
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes. Only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs).
We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Further, we define quality measures for conditions inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
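A toy sketch of the core idea behind CINDs (the relations, attribute names, and condition below are invented for illustration, not taken from the paper's use case): a conditional inclusion dependency requires the inclusion to hold only on the tuples matching its condition.

```python
# Toy relations: the plain inclusion dependency
#   orders.customer_id ⊆ customers.id
# fails, but the CIND restricted to status = 'shipped' holds.
customers = [{"id": 1}, {"id": 2}]
orders = [
    {"customer_id": 1, "status": "shipped"},
    {"customer_id": 2, "status": "shipped"},
    {"customer_id": 99, "status": "draft"},  # violates the plain IND
]

def holds_cind(lhs_rows, lhs_attr, rhs_values, condition):
    """Check whether the inclusion holds on the condition's scope."""
    scope = [row for row in lhs_rows if condition(row)]
    return all(row[lhs_attr] in rhs_values for row in scope)

ids = {c["id"] for c in customers}
plain_ind = holds_cind(orders, "customer_id", ids, lambda r: True)
cind = holds_cind(orders, "customer_id", ids, lambda r: r["status"] == "shipped")
```

The algorithms in the paper work in the opposite direction: rather than checking a given condition, they automatically discover condition attributes and values (covering or completeness conditions) that satisfy given quality thresholds.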
|