About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Data tracking and logging of user data with a user interface for the end user

Hult, Gabriel, Åkesson, Henrik, Oscarsson, Joakim, Falk, Jonathan, Wilhelmsson, Hugo, Skanvik, Max, Liao, Douglas, Kågemyr, Joel January 2022 (has links)
This report covers the development of a web application that stores and displays changes from another, already existing system. The existing system, created by the project group's client Personalkollen, is a platform on which companies in the service industry handle staff administration. The system developed in this project is called Loggkollen, since it handles logs. Loggkollen was developed by students in the computer engineering and software engineering Master of Science in Engineering programmes at Linköping University. The system was built with the React and Django frameworks and the languages Python, JavaScript, HTML, and CSS. The report includes, among other things, background, theory, results, and an overview of the work. Each project member also wrote an individual report on a topic related to this work; these individual reports are found at the end of the report. Various aspects of the work, such as working methods and results, are then discussed, and the report's research questions are answered in the conclusion. The result is a web application consisting of a frontend and a backend. The user interface displays logs in a list representing changes made in Personalkollen's system. Logs can be filtered by log type and category, as well as by a free-text search string. The list of logs can also be exported to an Excel document, which helps users who want to handle the information in other ways.
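
The abstract describes filtering logs by type, category, and a free-text search string, plus export for offline use. A minimal, dependency-free Python sketch of that logic follows; all names are hypothetical, and the actual Loggkollen system is built with Django and React and exports to Excel rather than the CSV used here:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional
import csv

@dataclass
class LogEntry:
    timestamp: datetime
    log_type: str   # e.g. "create", "update", "delete"
    category: str   # e.g. "schedule", "payroll"
    message: str

def filter_logs(logs: List[LogEntry],
                log_type: Optional[str] = None,
                category: Optional[str] = None,
                search: Optional[str] = None) -> List[LogEntry]:
    """Keep entries matching the chosen type, category and search string."""
    result = logs
    if log_type is not None:
        result = [e for e in result if e.log_type == log_type]
    if category is not None:
        result = [e for e in result if e.category == category]
    if search:
        result = [e for e in result if search.lower() in e.message.lower()]
    return result

def export_logs(logs: List[LogEntry], path: str) -> None:
    """Write the (filtered) list to a CSV file that Excel can open."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "type", "category", "message"])
        for e in logs:
            writer.writerow([e.timestamp.isoformat(), e.log_type,
                             e.category, e.message])
```
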
2

Towards Measuring Apps' Privacy-Friendliness

Momen, Nurul January 2018 (has links)
Today's phone could be described as a charismatic tool with the ability to keep human beings captivated for a considerable amount of their precious time. Users remain in an illusory wonderland of free services, while their data becomes the subject of monetization by a genie called big data. In other words, users pay with their personal data, but the price is in a way invisible. Poor means to observe and assess the consequences of data disclosure hinder users from becoming aware and taking preventive measures. Mobile operating systems use a permission-based access control mechanism to guard system resources and sensors. Depending on the type, apps require explicit consent from the user in order to gain access to those permissions. Nonetheless, this puts no constraint on access frequency: granted privileges allow apps to access users' personal information for an indefinite period of time until being explicitly revoked. Available control tools lack monitoring facilities, which undermines the performance of the access control model, creates privacy risks, erodes the intervenability of the access control mechanism, and leads to opaque handling of personal information for the data subject. This thesis argues that app behavior analysis yields information with the potential to increase transparency, enhance privacy protection, raise awareness of the consequences of data disclosure, and assist the user in making informed decisions when selecting apps or services. It introduces models and methods, demonstrates the data disclosure risks with experimental results, and, taking these risks into account, makes an effort to determine apps' privacy-friendliness based on empirical data from app-behavior analysis.
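
The thesis's core claim is that app-behavior data, such as how often an app exercises each granted permission, can be turned into a comparable privacy-friendliness measure. A hypothetical sketch of that idea; the event log, weights, and scoring function are illustrative assumptions, not the thesis's actual model:

```python
from collections import Counter
from typing import Dict, List, Tuple

def access_counts(log: List[Tuple[str, str]]) -> Dict[str, Counter]:
    """Tally how often each app touched each permission-guarded resource."""
    counts: Dict[str, Counter] = {}
    for app, permission in log:
        counts.setdefault(app, Counter())[permission] += 1
    return counts

def privacy_score(app_counts: Counter, weights: Dict[str, float]) -> float:
    """Weighted sum of access frequencies; higher = less privacy-friendly."""
    return sum(weights.get(p, 1.0) * n for p, n in app_counts.items())

# Toy event log: each entry is one observed (app, permission) access.
log = [("weather", "LOCATION"), ("weather", "LOCATION"), ("notes", "STORAGE")]
weights = {"LOCATION": 5.0, "CONTACTS": 4.0, "STORAGE": 1.0}
for app, counts in access_counts(log).items():
    print(app, privacy_score(counts, weights))  # weather 10.0, notes 1.0
```
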
3

Investigation of methods for automated continuous data mining of IoT data to extract features

Järte, Erik January 2024 (has links)
Today, the company Cake does not have a complete picture of how its vehicles are used. The company therefore collects user data in the hope of analyzing it to gain insights into how its products are used and what to focus on in the future, but it has no complete solution for analyzing this amount of data. It calls for a survey of the data that is collected and of what analysis tools can be developed. The purpose of the report was to investigate whether the company's user data can be mined and used for visualization and machine learning to get more use out of it. To achieve this, the work began with an investigation of the existing processes and methods, such as how data is collected and can be analyzed at the client, and a review of how the collected data was structured. A solution to visualize and analyze user data was then developed and implemented, including an exploration of the possibilities of machine learning to gain deeper insights into user behavior. The results show that a user-friendly visualization and analysis tool would be superior to the current tools and methods: the prototype enables the company to examine user patterns during specific time periods and to make comparisons between vehicles, which could not be done with existing tools and methods. During implementation, the security of personal data must be taken into account. When analyzing the IoT data with the clustering machine-learning method and the associated preprocessing, it is concluded that it was possible to distinguish and group the hidden user patterns of different users. User patterns refer to the regular and characteristic ways in which users interact with a specific technology, service, or system over time; they may include repeated actions, preferences, time-based activities, and other behavioral aspects.
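
The clustering step described above could, for instance, be prototyped with k-means on per-ride features. A sketch assuming scikit-learn is available; the feature set, values, and choice of k are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-ride features: [duration_min, avg_speed_kmh, max_power_kw]
rides = np.array([
    [12.0, 18.5, 3.2],
    [45.0, 32.0, 8.9],
    [10.0, 15.0, 2.8],
    [50.0, 35.5, 9.5],
    [14.0, 20.0, 3.0],
])

# Scale features so that no single unit dominates the distance metric.
X = StandardScaler().fit_transform(rides)

# Group rides into k tentative usage patterns; k is chosen by inspection here.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 1 0 1 0]: short city rides vs. long, fast rides
```
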
4

An Integrated End-User Data Service for HPC Centers

Monti, Henry Matthew 16 January 2013 (has links)
The advent of extreme-scale computing systems, e.g., Petaflop supercomputers, High Performance Computing (HPC) cyber-infrastructure, Enterprise databases, and experimental facilities such as large-scale particle colliders, is pushing the envelope on dataset sizes. Supercomputing centers routinely generate and consume ever-increasing amounts of data while executing high-throughput computing jobs. These are often result-datasets or checkpoint snapshots from long-running simulations, but can also be input data from experimental facilities such as the Large Hadron Collider (LHC) or the Spallation Neutron Source (SNS). These growing datasets are often processed by a geographically dispersed user base across multiple HPC installations. Moreover, end-user workflows are increasingly distributed in nature, with massive input, output, and even intermediate data often being transported to and from several HPC resources or end-users for further processing or visualization. The growing data demands of applications, coupled with the distributed nature of HPC workflows, have the potential to place significant strain on both the storage and network resources at HPC centers. Despite this potential impact, rather than stringently managing HPC center resources, a common practice is to leave application-associated data management to the end-user, as the user is intimately aware of the application's workflow and data needs. This means end-users must frequently interact with the local storage in HPC centers, the scratch space, which is used for job input, output, and intermediate data. Scratch is built using a parallel file system that supports very high aggregate I/O throughput, e.g., Lustre, PVFS, or GPFS. To ensure efficient I/O and faster job turnaround, use of scratch by applications is encouraged. Consequently, job input and output data must be moved in and out of the scratch space by end-users before and after the job runs, respectively. In practice, end-users arbitrarily stage and offload data as and when they deem fit, without any consideration of the center's performance, often leaving data on the scratch long after it is needed. HPC centers resort to "purge" mechanisms that sweep the scratch space to remove files no longer in use, based on their not having been accessed within a preselected time threshold, called the purge window, that commonly ranges from a few days to a week. This ad-hoc data management ignores the interactions between different users' data storage and transmission demands and their impact on center serviceability, leading to suboptimal use of precious center resources. To address the issues of exponentially increasing data sizes and ad-hoc data management, we present a fresh perspective on scratch storage management by fundamentally rethinking the manner in which scratch space is employed. Our approach is twofold. First, we re-design the scratch system as a "cache" and build "retention", "population", and "eviction" policies that are tightly integrated from the start, rather than being add-on tools. Second, we aim to provide and integrate the necessary end-user data delivery services, i.e., timely offloading (eviction) and just-in-time staging (population), so that the center's scratch space usage can be optimized through coordinated data movement. Together, these two approaches create our Integrated End-User Data Service, wherein data transfer and placement on the scratch space are scheduled with job execution.
This strategy allows us to couple job scheduling with cache management, thereby bridging the gap between system software tools and scratch storage management. It enables the retention of only the relevant data for the duration it is needed. Redesigning the scratch as a cache captures the current HPC usage pattern more accurately and better equips the scratch storage system to serve the growing datasets of workloads. This is a fundamental paradigm shift in the way scratch space has been managed in HPC centers, and it outweighs providing simple purge tools to serve a caching workload.
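
The contrast between a blind purge sweep and schedule-aware eviction can be pictured in a few lines. A hypothetical Python sketch; the data model and the one-week window are assumptions for illustration, not the dissertation's implementation:

```python
from dataclasses import dataclass
from typing import Dict, List

PURGE_WINDOW = 7 * 24 * 3600  # seconds; purge windows often span days to a week

@dataclass
class ScratchFile:
    path: str
    last_access: float   # POSIX timestamp of the last access
    needed_by_job: str   # id of a scheduled job that reads it, or ""

def eviction_candidates(files: List[ScratchFile],
                        pending_jobs: Dict[str, float],
                        now: float) -> List[str]:
    """Evict files that are stale AND not referenced by any scheduled job.

    Unlike a blind purge sweep, retention here is tied to the job schedule,
    so just-in-time staged inputs survive until their job has run.
    """
    candidates = []
    for f in files:
        stale = now - f.last_access > PURGE_WINDOW
        still_needed = f.needed_by_job in pending_jobs
        if stale and not still_needed:
            candidates.append(f.path)
    return candidates
```
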
5

Seeing Yourself Visualized as Data : A Qualitative Study on Users' Interactions and Perceptions of Data Visualizations in Digital Self-Tracking

Lepler, Liis January 2023 (has links)
Effective data visualization is essential for digital self-tracking to help users gain insights into their behavior and habits. Personalized visualizations engage users, making the self-tracking experience more meaningful. However, potential biases and limitations should be considered to ensure an accurate and objective self-tracking process. The study examines users' interactions with visualized data in the digital self-tracking process and their perceptions of the accuracy and objectivity of personal data visualizations and of the self-tracking processes on platforms that offer self-tracking features, including applications for tracking health and fitness, habits, music listening, book reading, and movie watching. The study employs a qualitative method, semi-structured interviews with an ethnographic orientation, selected to investigate users' interactions with and opinions on data visualizations and the digital self-tracking process. The findings show that participants primarily use data visualizations and other personal visualizations as reminders and for comparisons, planning, and motivation. Although they do not extensively analyze the visualized data, participants report heightened self-awareness and motivation. Despite their awareness of potential inaccuracies and subjectivity in the visualizations and the self-tracking process, participants are willing to overlook these aspects because of the perceived benefits, and they generally express trust in the accuracy of their visualized data.
6

A Design-by-Privacy Framework for End-User Data Controls

Zhou, Tangjia January 2021 (has links)
The internet makes data storage and sharing more convenient. An increasing amount of private data is stored on different application platforms, so the security of these data has become a public concern. The European Union's General Data Protection Regulation (GDPR), in force since May 2018, puts forward clear requirements for application platforms to give end users control over their data. However, application platforms, especially at startups, still lack a low-cost, easy-to-manage framework for end-user data control. To address the problem, I apply Amazon Cognito to provide user account management and monitoring: the user information (e.g., username, email) registered on the web application is stored in Cognito to achieve user authentication. I also connect the web application through the Amazon Web Services (AWS) Application Programming Interface (API) Gateway so that data control operations performed on the web application reach the AWS DynamoDB database. The final result shows that the framework can successfully implement data control operations on end-user data under the requirements of the GDPR. Meanwhile, all data operation results can be displayed in real time on the web application and monitored in the corresponding AWS service.
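
The GDPR rights of access and erasure described above map naturally onto single-item DynamoDB operations. A minimal sketch using boto3; the region, table name, and key schema are hypothetical, and valid AWS credentials are assumed:

```python
import boto3

# Hypothetical table keyed on the Cognito user id (the "sub" claim).
dynamodb = boto3.resource("dynamodb", region_name="eu-north-1")
table = dynamodb.Table("user_profiles")

def read_user_data(user_id: str) -> dict:
    """GDPR right of access: return everything stored for one user."""
    response = table.get_item(Key={"user_id": user_id})
    return response.get("Item", {})

def erase_user_data(user_id: str) -> None:
    """GDPR right to erasure: remove the user's record entirely."""
    table.delete_item(Key={"user_id": user_id})
```
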
7

Taking into account the dependencies between user thematic data and topographic data when the level of detail is changed

Jaara, Kusay 10 March 2015 (has links)
With the wide availability of reference topographic data, creating geographic data is no longer exclusive to experts in geographic information. More and more users rely on reference data to create their own data, hereafter called thematic data; reference data then play the role of support for thematic data. Thematic data make sense by themselves, but even more through their relations with topographic data. Not taking the relations between thematic and topographic data into account during processes that modify the former or the latter may cause inconsistencies, especially for processes related to changing the level of detail. The objective of this thesis is to define a methodology to preserve the consistency between thematic and topographic data when the level of detail is modified. The thesis focuses on the adaptation of thematic data after a modification of topographic data, a process we call thematic data migration. We first propose a model for the migration of punctual thematic data hosted by a network, composed of (1) a model to describe the referencing of thematic data on topographic data using spatial relations and (2) a method to relocate thematic data based on these relations. The approach consists in identifying the expected final relations according to the initial relations and the modifications of topographic data between the initial and final states.
The thematic data are then relocated using a multi-criteria method in order to satisfy, as much as possible, the expected relations. An implementation is presented on toy problems and on a real use case provided by a French public authority in charge of road network management. The extension of the proposed model to take relations into account for applications other than thematic data migration is also discussed.
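
The multi-criteria relocation step can be pictured as scoring candidate positions against the expected spatial relations and keeping the best one. A toy Python sketch, where the relations, weights, and the roadside-sign example are illustrative assumptions, not the thesis's actual model:

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float]

def relocate(candidates: List[Point],
             relations: Dict[str, Callable[[Point], float]],
             weights: Dict[str, float]) -> Point:
    """Return the candidate that best satisfies the expected relations,
    scored as a weighted sum of per-relation satisfaction in [0, 1]."""
    def score(p: Point) -> float:
        return sum(weights[name] * satisfied(p)
                   for name, satisfied in relations.items())
    return max(candidates, key=score)

# Toy example: a roadside sign should stay about 10 m from the road axis
# (the x axis) and remain on the same side of the road as before.
relations = {
    "keeps_distance": lambda p: 1.0 / (1.0 + abs(p[1] - 10.0)),
    "same_side": lambda p: 1.0 if p[1] > 0 else 0.0,
}
weights = {"keeps_distance": 0.6, "same_side": 0.4}
print(relocate([(0.0, 9.0), (0.0, -10.0), (5.0, 12.0)], relations, weights))
```
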
8

Transforming user data into user value by novel mining techniques for extraction of web content, structure and usage patterns : the development and evaluation of new Web mining methods that enhance information retrieval and improve the understanding of users' Web behavior in websites and social blogs

Ammari, Ahmad N. January 2010 (has links)
The rapid growth of the World Wide Web in the last decade has made it the largest publicly accessible data source in the world and one of the most significant and influential information revolutions of modern times. The influence of the Web has touched almost every aspect of human life and activity, causing paradigm shifts and transformational changes in business, governance, and education. Moreover, the rapid evolution of Web 2.0 and the Social Web in the past few years, such as social blogs and friendship networking sites, has dramatically transformed the Web from a raw environment for information consumption to a dynamic and rich platform for information production and sharing worldwide. However, this growth and transformation have resulted in an uncontrollable explosion and abundance of textual content, creating a serious challenge for any user trying to find and retrieve the relevant information they seek: finding a relevant Web page in a website easily and efficiently has become very difficult. This challenges researchers to develop new mining techniques that improve the user experience on the Web, and organizations to understand the true informational interests and needs of their customers so that they can improve their targeted services by providing the products, services, and information that truly match the requirements of every online customer. With these challenges in mind, Web mining aims to extract hidden patterns and discover useful knowledge from Web page contents, Web hyperlinks, and Web usage logs. Based on the primary kind of Web data used in the mining process, Web mining tasks fall into three main types: Web content mining, which extracts knowledge from Web page contents using text mining techniques; Web structure mining, which extracts patterns from the hyperlinks that represent the structure of the website; and Web usage mining, which mines users' Web navigational patterns from Web server logs that record the page accesses made by every user, representing the interactional activities between the users and the Web pages of a website. The main goal of this thesis is to contribute toward addressing the challenges that have resulted from the information explosion and overload on the Web by proposing and developing novel Web mining-based approaches. Toward this goal, the thesis presents, analyzes, and evaluates three major contributions: first, an integrated Web structure and usage mining approach that recommends a collection of hyperlinks to be placed on the homepage of a website for its surfers; second, an integrated Web content and usage mining approach that improves the understanding of users' Web behavior and discovers user group interests in a website; and third, a supervised classification model based on recent Social Web concepts, such as tag clouds, that improves the retrieval of relevant articles and posts from Web social blogs.
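
Web usage mining of the kind described here starts from server access logs. As a toy illustration of the usage signal behind homepage hyperlink recommendation, the following sketch ranks pages by request frequency; the log format and function names are assumptions, not the thesis's method:

```python
import re
from collections import Counter
from typing import List

# Matches the request path in a Common Log Format line, e.g.
# 127.0.0.1 - - [10/Oct/2010:13:55:36 +0000] "GET /products.html HTTP/1.1" 200 2326
LOG_RE = re.compile(r'"GET (\S+) HTTP')

def top_pages(log_lines: List[str], n: int = 5) -> List[str]:
    """Rank pages by request frequency -- a simple usage signal for deciding
    which hyperlinks deserve a slot on the homepage."""
    counts: Counter = Counter()
    for line in log_lines:
        match = LOG_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return [page for page, _ in counts.most_common(n)]
```
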
9

Software Tool for Usability Testing

Kubík, Tomáš January 2012 (has links)
This work concerns the implementation of a software framework for usability testing. The extensive network framework and its protocol allow the integration of libraries that collect data from basic peripherals such as the mouse, keyboard, and camera. As long as they implement the protocol rules, these libraries can be platform independent. The client-server architecture keeps all collected data in a central database, which can be queried to evaluate the usability of applications.
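
A platform-independent collection protocol like the one described could, for example, be newline-delimited JSON over TCP. A minimal hypothetical sketch; the field names, host, and port are illustrative, not the framework's actual protocol:

```python
import json
import socket
import time

def make_event(source: str, action: str, payload: dict) -> bytes:
    """Encode one peripheral event as a newline-delimited JSON message."""
    message = {"ts": time.time(), "source": source,
               "action": action, "data": payload}
    return (json.dumps(message) + "\n").encode("utf-8")

def send_event(host: str, port: int, event: bytes) -> None:
    """Ship one event to the central collection server over TCP."""
    with socket.create_connection((host, port)) as connection:
        connection.sendall(event)

# e.g. send_event("localhost", 9000,
#                 make_event("mouse", "click", {"x": 10, "y": 20}))
```
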
