81 |
Un modèle rétroactif de réconciliation utilité-confidentialité sur les données d’assurance / A retroactive model for reconciling utility and privacy in insurance data. Rioux, Jonathan, 04 1900 (has links)
Privacy-preserving data sharing is a challenge for almost any enterprise nowadays, no matter its field of expertise. Research is evolving at a rapid pace, but there is still a lack of adapted and adaptable solutions for best business practices regarding the management and sharing of privacy-aware datasets.
To this problem, we offer PEPS, a modular, upgradeable and end-to-end system tailored to the needs of insurance companies and researchers. We take into account the entire cycle of sharing data, from data management to publication, while negotiating with external forces and policies. Our system distinguishes itself by taking advantage of domain-specific and problem-specific knowledge to tailor itself to the situation and increase the utility of the resulting anonymized dataset. To this end, we also present a strongly contextualised privacy algorithm and utility measures adapted to evaluating the performance of a disclosure for experience analysis.
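The abstract does not spell out the anonymization step itself, so the following is only a minimal Python sketch, under stated assumptions, of the generic utility-privacy trade-off such a contextualized algorithm negotiates: quasi-identifiers are generalized just far enough that every equivalence class reaches size k, and the generalization level actually needed serves as a crude utility-loss signal. The field names, bin widths and thresholds are invented for illustration and are not the PEPS implementation.

```python
# Hypothetical sketch of a contextual generalization step: coarsen
# quasi-identifiers until every group holds at least k records.
from collections import Counter

def generalize(record, level):
    # Widen age bins and truncate the postal code as the level rises.
    age_bin = (record["age"] // (5 * level)) * (5 * level)
    zip_prefix = record["zip"][: max(1, 5 - level)]
    return (age_bin, zip_prefix)

def k_anonymize(records, k):
    """Raise the generalization level until each equivalence class has >= k rows."""
    for level in range(1, 6):
        classes = Counter(generalize(r, level) for r in records)
        if min(classes.values()) >= k:
            return level, classes
    return None, None

records = [
    {"age": 34, "zip": "75011"}, {"age": 36, "zip": "75012"},
    {"age": 52, "zip": "69001"}, {"age": 55, "zip": "69003"},
]
level, classes = k_anonymize(records, k=2)
# Utility loss grows with the level that was actually needed.
print(f"level={level}, classes={dict(classes)}")
```

In an insurance setting, the same loop would run over policyholder attributes, with this crude signal replaced by utility measures tuned to experience analyses, as the abstract describes.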
|
82 |
公務機關之間傳輸個人資料保護規範之研究-以我國、美國及英國法為中心 / A Comparative Study of Regulations for the Protection of Personal Data Transmitted between Government Agencies in Taiwan, the U.S. and the U.K. 林美婉, Lin, Mei Wan, Unknown Date (has links)
Governments have the power to hold a wide variety of personal information about individuals, such as name, date of birth, ID card number, family, education, and occupation. Due to advanced technology and the use of the Internet, personal data stored in different places can be connected, copied, processed, and used immediately. It is now common for government agencies to provide people with services online and to transmit or share individual information to improve efficiency and reduce bureaucratic costs. These changes clearly deliver great benefits for governments and the public, but they also bring new challenges. In particular, managing the risks around sharing information can become complicated and difficult when more than one agency is involved. If a government agency that keeps personal information cannot prevent it from being stolen, altered, damaged, destroyed or disclosed, this can seriously erode personal privacy and people's trust in the government. Therefore, each agency that maintains personal data should establish appropriate administrative, technical, and physical safeguards to ensure the security and confidentiality of the data and to protect against any anticipated threats or hazards to its integrity that could result in substantial harm or unfairness to any individual.
As the global economy has become more interconnected and the Internet ubiquitous, personal data protection is by now a truly international matter. The trend is fully demonstrated by the growing number of national laws, supranational provisions, and international regulations, such as the OECD, EU and APEC rules. Among developed countries, both the U.S. and the U.K. have their own historical contexts for developing a legal framework for information privacy. U.S. federal agency use of personal information is governed primarily by the Privacy Act of 1974, the Computer Matching and Privacy Protection Act of 1988, the E-Government Act of 2002, the Federal Information Security Management Act of 2002, and related guidance periodically issued by OMB. The U.K. government has to comply with the Human Rights Act and the Data Protection Act of 1998, which implemented Directive 95/46/EC, and its use of individual data is overseen and audited by the independent Information Commissioner. Further, because interagency data sharing is necessary to make government more efficient by reducing the errors, fraud, and costs associated with maintaining segregated systems, both countries have made specific rules or codes of practice for handling the transmission of information among different agencies and levels of government. By contrast, Taiwan's Personal Information Protection Act of 2010, which finally came into force on 1 October 2012, contains no detailed and clear provisions for data transmitted between government agencies. Moreover, there is no internal or external oversight of data-sharing practices in the public sector. These problems increase the risk of inappropriate use and disclosure of personal data.
To protect individual information privacy rights and ensure that government agencies can enhance public services through data sharing without unreasonably impinging on data subjects' interests, I recommend that lawmakers draw on the legal experiences of the U.S. and the U.K. and specify that the Ministry of Justice has a statutory duty to prescribe detailed regulations and procedures for interagency data transmission. This could remove the fog of confusion about the circumstances in which personal information may be shared. Also, besides obtaining the prior consent of the data subject and having a professional task force conduct a review before implementing an interagency data-sharing program, the following important measures should be taken: (1) establish a Personal Information Management System, composed of policies, procedures, human and machine resources, as part of an overall information management infrastructure; (2) appoint accountable senior officials to undertake and maintain the implementation of security controls; (3) educate and train personnel to raise risk awareness and create a good organizational culture; (4) consult interested parties and define the scope, objective, and legal basis for data sharing; (5) conduct privacy impact assessments to identify potential threats to individual privacy and analyze risk-mitigation alternatives; (6) establish a formal written agreement to clarify mutual rights and obligations; (7) enforce internal as well as external auditing to monitor compliance with data protection regulations and promote transparency, integrity and accountability of agency decisions.
Key Words: personal data protection, privacy rights, information privacy, data transmission, data sharing
|
83 |
Checking Compatibility of Programs on Shared Data. Pranavadatta, DN, January 2011 (has links) (PDF)
A large software system is built by composing multiple programs, possibly developed independently. The component programs communicate by sharing data. Data sharing involves the creation of instances of the shared data by one program, called the producer, and their interpretation by another program, called the consumer. Valid instances of shared data and their correct interpretation are usually specified by a protocol or standard that governs the communication. If a consumer misinterprets or does not handle some instances of data produced by a producer, this is called a data compatibility bug. Such bugs manifest as various forms of runtime errors that are difficult to find and fix.
In this work, we define compatibility relations, between producer-consumer programs as well as version-related programs, that characterize various subtle requirements for correct sharing of data. We design and implement a static analysis to infer types and guards over elements of shared data, and the results are used for automatic compatibility checking. As case studies, we consider two widely used kinds of shared data: the TIFF structure, used to store TIFF directory attributes in memory, and the IEEE 802.11 MAC frame header, which forms the layer-2 header in wireless LAN communication. We analyze and check compatibility of 6 pairs of producer-consumer programs drawn from the transmit-receive code of Linux WLAN drivers from 3 different vendors. In the setting of version-related programs, we analyze a total of 48 library and utility routines from 2 pairs of TIFF image library (libtiff) versions. We successfully identify 5 known bugs and 1 new bug. For two of the known bugs, fixes are available, and we verify that they resolve the compatibility issues.
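The thesis tool itself is not reproduced here, but the flavor of such a check can be sketched: compare the set of values a producer's transmit code may emit for a shared field against the guards the consumer's receive code actually handles, and flag the difference. The field name and value sets below are invented; a real analysis would infer them statically from driver code, as the abstract describes.

```python
# Illustrative sketch (not the thesis tool): flag shared-field values a
# consumer would misinterpret or drop at runtime.
PRODUCER_EMITS = {"frame_type": {0, 1, 2, 3}}   # e.g. inferred from transmit code
CONSUMER_HANDLES = {"frame_type": {0, 1, 2}}    # e.g. inferred from receive code

def check_compatibility(produced, consumed):
    """Return, per field, the produced values the consumer does not handle."""
    bugs = {}
    for field, values in produced.items():
        unhandled = values - consumed.get(field, set())
        if unhandled:
            bugs[field] = unhandled
    return bugs

print(check_compatibility(PRODUCER_EMITS, CONSUMER_HANDLES))
# -> {'frame_type': {3}}: a data compatibility bug of the kind described above
```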
|
84 |
Ouverture des données de la recherche : de la vision politique aux pratiques des chercheurs / Open research data : from political vision to research practices. Rebouillat, Violaine, 03 December 2019 (has links)
The thesis investigates research data, as there is a growing demand for opening them. Research data are information collected by scientists in order to be used as evidence for theories. It is a complex notion to define, because it is contextual. Since the 2000s, open access to scientific data has become a strategic axis of research policies. These policies have been relayed by intermediary professions, which have developed dedicated services to support researchers with data management and sharing. The thesis questions the relationship between the ideology of openness and research practices. Which kinds of data management and sharing practices already exist in research communities? What drives them? Do scientists rely on research data services? Fifty-seven interviews were conducted with researchers from the University of Strasbourg in many disciplines.
The survey identifies a wide variety of data management and sharing practices. It appears that data sharing answers a need and is embedded in the researcher's strategy: the main goal is to protect professional interests. Thus, research data are part of a credibility cycle, in which they get both use value (for new publications) and exchange value (as they are traded for other valuable resources in collaborations with partners). The survey also shows that researchers rarely use the services developed in a context of openness. Two explanations can be put forward. (1) The service offer comes too early to meet researchers' needs. Currently, data management and sharing are not among researchers' priorities; the priority is publishing, which is the main source of reward and recognition for scientific activities. (2) Data management services are offered by actors outside the research communities, whereas scientists seem to be more influenced by internal networks close to their research topics (such as journals and infrastructures). These results prompt us to reconsider the mediation between scientific communities and open research data policies.
|
85 |
Hälsodata & smartklockor : En användarundersökning om medvetenhet och attityd / Health data & smartwatches : a user study of awareness and attitudes. Apelthun, Henrietta; Töyrä, Anni, January 2020 (has links)
Digital technology and the ever-advancing digitalization of the world are changing the structures of our society and our way of living, as large amounts of data are collected. Sweden is among the leading countries in the world when it comes to the use of new technology, and every other Swede has at least one connected device at home. Smartwatches have risen in popularity in recent years and provide many opportunities for the collection of health data, which may lead to privacy problems for users. The purpose of this thesis is to study the attitudes of students (20-30 years old) who use smartwatches, based on the theory of the privacy paradox. The privacy paradox describes the gap between users' behavioural intentions, which are grounded in perceived risks, and their actual behaviour, which is grounded in trust. To fulfill this purpose, the researchers conducted five qualitative interviews and then analyzed how the students relate to risk, trust and privacy, and how aware they are of sharing health data. It turned out that the students were positive about data sharing, but their views on privacy differed. Risk was something the students said they took into account, but when they actually made decisions, such as downloading an application, the outcome was based on trust.
|
86 |
Policy-based usage control for trustworthy data sharing in smart cities / Contrôle des politiques d’accès pour les relations de confiance dans les données des smart cities. Cao Huu, Quyet, 08 June 2017 (has links)
In smart cities, information and communication technologies, in particular Internet of Things (IoT) technologies, are integrated into traditional services of the city, for example waste management, air pollution monitoring, and parking, to improve quality while reducing the costs of these services. IoT data in this context are generated by different actors, such as service providers, developers, and municipal authorities, and should be shared among applications or services. In the traditional scenario, however, there is no sharing of IoT data between them: each actor consumes data from sensors deployed on its own behalf, and only the network infrastructure may be shared. In order to encourage IoT data sharing, we need to establish confidence between the actors. Exercising control over the usage of data by other actors is critical in building trust, so the actors should have the ability to control how their data are going to be used. This major issue, namely usage control, has not previously been treated in the IoT. In this thesis, we take into account obligations defined by the actors for their data: (i) abstraction of certain information, (ii) spatial and temporal granularity, (iii) classification of actors and purposes, and (iv) monetization of data.
For example, the data usage requirements in intelligent parking applications are: (i) data owners have full access to all the details; (ii) municipal authorities can access the average occupancy of parking places per street on an hourly basis; (iii) commercial service providers can access only statistical data over a zone and on a weekly basis; and (iv) monetization of data can be based on subscription types or user roles. The thesis contributions include: (i) the policy-based Data Usage Control Model (DUPO), which responds to the obligations defined by actors for their data; (ii) a Trustworthy Data Sharing Platform as a Service, which allows transparency and traceability of data usage through open APIs based on DUPO and semantic technologies; (iii) a visualization tool prototype, which enables actors to exercise control over how their data will be used; and (iv) an evaluation of the performance and impact of our solution. The results show that the added trust layer does not affect the performance of the system. Mistrust might hamper public acceptance of IoT data sharing in smart cities; our solution is key to establishing trust between data owners and consumers by taking into account the obligations of the data owners. It is useful for data operators who would like to provide an open data platform with efficient enablers for partners, data-based services for clients, and the ability to attract partners to share data on their platform. The results of this thesis can also be applied to Orange’s Datavenue IoT platform.
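As a hedged illustration of the parking requirements above, the sketch below applies a DUPO-style obligation table to a single occupancy stream, serving raw readings to the data owner and only aggregates at coarser spatial and temporal granularity to other roles. The role names, policy table and API are assumptions for illustration, not the thesis implementation.

```python
# Minimal sketch of role-dependent granularity, in the spirit of DUPO.
from statistics import mean

POLICY = {
    "owner":      {"spatial": "sensor", "temporal": "raw"},
    "municipal":  {"spatial": "street", "temporal": "hourly"},
    "commercial": {"spatial": "zone",   "temporal": "weekly"},
}

def serve(readings, role):
    """Apply the usage-control obligations for `role` to raw readings."""
    rule = POLICY[role]
    if rule["temporal"] == "raw":
        return readings                      # full detail for the data owner
    # Otherwise release only an aggregate at the allowed granularity.
    return {"granularity": (rule["spatial"], rule["temporal"]),
            "avg_occupancy": mean(r["occupied"] for r in readings)}

readings = [{"street": "A", "occupied": 0.8}, {"street": "A", "occupied": 0.6}]
print(serve(readings, "municipal"))   # aggregated view only
print(serve(readings, "owner"))       # raw view
```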
|
87 |
Crowdtuning : towards practical and reproducible auto-tuning via crowdsourcing and predictive analytics. Memon, Abdul Wahid, 17 June 2016 (has links)
Tuning general compiler optimization heuristics, or optimizing software for rapidly evolving hardware, has become intolerably complex, ad hoc, time-consuming and error-prone due to the enormous number of available design and optimization choices, the complex interactions between all software and hardware components, and the multiple strict requirements placed on performance, power consumption, size, reliability and cost. Iterative feedback-directed compilation, auto-tuning and machine learning have shown high potential to solve the above problems. For example, we successfully used them to enable the world's first machine-learning-based self-tuning compiler, Milepost GCC, which automatically learns the best optimizations across multiple programs, data sets and architectures based on static and dynamic program features. Unfortunately, its practical use was very limited by very long training times and a lack of representative benchmarks and data sets. Furthermore, "black box" machine learning models alone could not give full insight into the correlations between features and best optimizations. In this thesis, we present the first methodology and framework to our knowledge, called Collective Mind (cM), that lets the community share various benchmarks, data sets, compilers, tools and other artifacts while formalizing and crowdsourcing optimization and learning in a reproducible way across many users (platforms). Our open-source framework and public optimization repository help make auto-tuning and machine learning practical. Furthermore, cM lets the community validate optimization results, share unexpected run-time behavior or model mispredictions, provide useful feedback for improvement, customize common auto-tuning and learning modules, improve predictive models and find missing features. Our analysis and evaluation of the proposed framework demonstrate that it can effectively expose, isolate and collaboratively identify the key features that contribute to model prediction accuracy. At the same time, the formalization of auto-tuning and machine learning allows us to continuously apply standard complexity-reduction techniques, leaving a minimal set of influential optimizations and relevant features as well as truly representative benchmarks and data sets. We released most of the experimental results, benchmarks and data sets at http://c-mind.org, and validated our techniques in the EU FP6 MILEPOST project and during a HiPEAC internship at STMicroelectronics.
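As a rough illustration of the learning step described above (not Milepost GCC or cM themselves), the sketch below predicts optimization flags for a new program from the nearest previously tuned program in a static-feature space. The feature vectors and flag strings are invented for illustration.

```python
# Toy sketch of feature-based optimization prediction: 1-nearest-neighbour
# over recorded (program features -> best flags found by auto-tuning) pairs.
import math

TRAINING = [
    ((120, 8, 0.10), "-O3 -funroll-loops"),
    ((15,  2, 0.45), "-O2"),
    ((300, 20, 0.05), "-O3 -ftree-vectorize"),
]

def predict(features):
    """Return the flags of the closest previously tuned program."""
    _, flags = min(((math.dist(features, f), flags) for f, flags in TRAINING),
                   key=lambda t: t[0])
    return flags

print(predict((140, 9, 0.12)))   # -> '-O3 -funroll-loops'
```

A crowdsourced repository in the spirit of cM would grow the training set across many users and platforms, which is what makes the prediction step practical.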
|
88 |
Datové rozhraní pro sdílení "městských dat" / Data Interface for Sharing of "City Data". Fiala, Jan, January 2021 (has links)
The goal of this thesis is to explore existing solutions for closed and open data sharing, propose options for sharing non-public data, implement the selected solution, and demonstrate the functionality of the system for sharing closed data. The implementation output consists of a catalog of non-public datasets, a web application for administering non-public datasets, an application interface (API) gateway, and a demonstration application.
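The thesis interface itself is not shown in this abstract, so the following is only a speculative sketch of the core decision such an API gateway has to make: pass open datasets through and gate non-public ones on the requester's role. Dataset names, roles and the return convention are invented assumptions.

```python
# Hypothetical access decision for a closed-data catalog behind a gateway.
CATALOG = {
    "parking-occupancy": {"open": True},
    "water-consumption": {"open": False, "allowed_roles": {"city-office"}},
}

def authorize(dataset_id, role):
    """Open datasets pass through; closed ones require an allowed role."""
    entry = CATALOG.get(dataset_id)
    if entry is None:
        return False, "unknown dataset"
    if entry["open"] or role in entry.get("allowed_roles", set()):
        return True, "access granted"
    return False, "closed dataset: role not permitted"

print(authorize("water-consumption", "city-office"))  # (True, 'access granted')
print(authorize("water-consumption", "researcher"))   # (False, ...)
```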
|
89 |
The future of agriculture : Creating conditions for a more sustainable agriculture sector with the help of data and connectivity / Framtidens jordbruk : Möjligheten att skapa en mer hållbar jordbrukssektor med hjälp av data och uppkoppling. Ernfors, Märta, January 2021 (has links)
Food production needs to increase in order to feed the ever-growing world population. At the same time, this needs to be done in a sustainable manner, as the agriculture sector today is responsible for a substantial part of the annual carbon dioxide emissions associated with human activities. In this study, eight farmers in the Swedish agricultural sector whose businesses are primarily based on cultivation and crop production were interviewed. This was done to understand farmers' views on connectivity and data, and how these could enable a more productive and sustainable sector in the future. The study has identified future scenarios, enabled by data and a more connected agriculture sector, that have the potential to contribute to the sector's sustainable development. One scenario concerns fleets of small, connected, autonomous agricultural units enabling the electrification of the sector. This would allow small-scale farms focused on quality to have a large positive impact on the food supply and on the sustainability development of the sector. A second scenario is to use data to make it easier to establish a true consumer value for sustainable or high-quality products, and thereby enable consumers to budget their environmental impact related to food from arable land. For this to become feasible, a third scenario requires the agricultural ecosystem to come together and find solutions for data management, creating systems for data handling and analytics to be used by both farmers and decision makers. With this in place, a fourth scenario becomes feasible, in which today's generic laws, regulations, and subsidies could be transformed into a more area-based system that takes into account local conditions on individual farms, as determined by data. There are few contradictions between sustainability and profitability from a farmer's point of view, and with the help of data and a more connected agriculture, the sector could develop in a positive direction and increase food production in a sustainable manner.
|
90 |
Hantering av data genom tid och rum : En records continuum-analys av hur humanistiska forskare hanterar forskningsdata för tillgängliggörande och bevarande / Data management through time and space : A records continuum analysis of how researchers in the humanities manage research data for sharing and preservation purposes. Sundberg, Sara, January 2024 (has links)
While the preservation and sharing of research data are both well-researched topics, the connections between them need to be better understood, especially from a Swedish perspective. In relation to this, it is interesting to investigate how researchers themselves are involved in these processes: where and how they preserve their data, for what reasons, how they manage their data for preservation and sharing, and what consequences this might have for the archiving of research data. The method used in this thesis is semi-structured interviews with 10 researchers from various Swedish universities, conducted in person, via Zoom or by e-mail. The researchers were chosen on the basis of their sharing activities in research data repositories. The interviews were processed in Taguette, where the data were organized by tags; the tags and their related statements were then organized into themes. Furthermore, a theoretical analysis based on the records continuum model was conducted. The primary reason stated for preservation was data sharing: the researchers expressed a wish for their research data to be reused, and also gave reasons related to transparency. The researchers likewise stated that their data management was influenced by future data sharing. One researcher had archived their data at the university, and most of the participants were positive about doing so in the future. The researchers in this study appear to take the initiative when it comes to the preservation and sharing of their data. Most of the participants view the data repository as a good platform for preservation, possibly because such platforms can fulfil their reasons for participating in preservation activities. This is a two-year master's thesis in Archival Science.
|