51

Informační systém pro správu vizualizací geografických dat / Information System for Management of Geographical Data Visualizations

Grossmann, Jan January 2021 (has links)
The goal of this thesis is to create an information system for the visualization of geographical data. The main idea is to allow users to create visualizations with their own geographical data, which they can either import from files or directly attach their own database system as a source of data and make use of the data in real time. The result will be a new web information system that will act as a point of contact between users, geographical data, and visualizations.
52

A Conceptual Model for determining the Value of Business Intelligence Systems

Budree, Adheesh January 2014 (has links)
Philosophiae Doctor - PhD / Business Intelligence refers to the use of Information Systems to enable raw data to be collated into information that can be reported, with the end goal of using this information to enhance the business decision-making process. Business Intelligence is enabled by making use of information that is complete, relevant, accurate, timely and accessible. There are currently a number of documented perspectives that can be used to gauge the value of Business Intelligence systems; however, from an overall business value perspective the most robust method would be to identify and analyse the most commonly identified factors that impact the value assigned to Business Intelligence Systems by a company, and the correlation of each of these factors to calculate the overall value. The importance of deriving a conceptual model, representing the major factors identified from literature and moderated by the quantitative research conducted, lies in its enabling companies and government bodies to assess the true value addition of Business Intelligence systems, and to understand the return on investment of these systems for organisations. In doing so, companies can justify or reject any further expenditure on Business Intelligence. The quantitative research for this thesis was conducted together with a project that was run between the University of the Western Cape and the Hochschule Neu-Ulm University of Applied Sciences in Germany. The research was conducted simultaneously across organisations in South Africa and Germany on the use of BI Systems and Corporate Performance Management. The respondents for the research were Chief Executive Officers, Chief Information Officers and Business Intelligence Managers in selected organisations. A Direct Oblimin-factor analysis was conducted on the online survey responses. 
The survey was conducted on a sample of approximately 1,500 Business Intelligence specialists across South Africa and Germany, and 113 responses were gathered. The factor analysis reduced the key factors identified in the literature to a few major factors, namely: Information Quality, Management and Accessibility; Information Usage; and Knowledge-sharing Culture. Thereafter, a Structural Equation Modelling analysis was completed using the Partial Least Squares method. The results indicate a strong relationship between the factor Information Quality, Management and Accessibility and the value of Business Intelligence. It was found that while Information Usage and Culture had no strong direct impact, they correlated strongly with Information Quality, Management and Accessibility. The research findings are significant for academic researchers, information technology experts, Business Intelligence specialists and Business Intelligence users. This study contributes to the body of knowledge by bringing together disparate factors identified in academic journals and assessing the relationship each has with the value of Business Intelligence, as well as the correlations that exist between these factors. From this, the final conceptual model was derived using factors that were identified and tested through the factor analysis and the PLS-SEM. The following conclusions can be drawn from the research: (1) the assurance of quality information, in the form of complete, accurate, relevant and timely information that is efficiently managed, is the most important factor in an organisation's deriving value from Business Intelligence systems; (2) information accessibility is key to realising the value of Business Intelligence systems in organisations; and (3) Business Intelligence systems cannot add value to an organisation if a culture of information use and sharing is absent within that organisation.
The derived model can be practically implemented as a checklist for organisations to assess Business Intelligence system investments as well as current implementations.
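As a rough illustration of the factor-analysis step described above, the sketch below fits a factor model to synthetic survey data. The item count, loadings, and noise level are invented for illustration, and scikit-learn's FactorAnalysis does not offer the Direct Oblimin rotation used in the study, so no rotation is applied:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical survey: 113 responses (as in the study) to six Likert-style
# items driven by two latent factors, with invented loadings.
factors = rng.normal(size=(113, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                     [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
items = factors @ loadings.T + 0.3 * rng.normal(size=(113, 6))

fa = FactorAnalysis(n_components=2)
scores = fa.fit_transform(items)
# fa.components_ (shape 2 x 6) holds the estimated loadings: items 1-3
# should load mainly on one factor and items 4-6 on the other.
```

In practice the extracted factors would then feed a structural model (here, PLS-SEM) linking them to the outcome variable.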
53

DIGITAL TRAILS IN VIRTUAL WORLDS: A FORENSIC INVESTIGATION OF VIRTUAL REALITY SOCIAL COMMUNITY APPLICATIONS ON OCULUS PLATFORMS

Samuel Li Feng Ho (17602290) 12 December 2023 (has links)
<p dir="ltr">Virtual Reality (VR) has become a pivotal element in modern society, transforming interactions with digital content and interpersonal communication. As VR integrates into various sectors, understanding its forensic potential is crucial for legal, investigative, and security purposes. This involves examining the digital footprints and artifacts left by immersive technologies. While previous studies in digital forensics have primarily concentrated on traditional computing devices such as smartphones and computers, research on VR, particularly on specific devices like the Oculus Go, Meta Quest, and Meta Quest 2, has been limited. This thesis explores the digital forensics of VR, focusing on the Oculus Go, Meta Quest and Meta Quest 2, using tools like Magnet AXIOM and Wireshark. The research uncovers specific forensic and network-based artifacts from eight social community applications, revealing user personally identifiable information, application usage history, WiFi network details, and multimedia content. These findings have significant implications for legal proceedings and cybercrime investigations, highlighting the role these artifacts can play in influencing the outcome of cases. This research not only deepens our understanding of VR-related digital forensics but also sets the stage for further investigations in this rapidly evolving domain.</p>
54

IMPROVING THE UTILITY OF DIFFERENTIALLY PRIVATE ALGORITHMS USING DATA CHARACTERISTICS

Farzad Zafarani (11837222) 10 January 2025 (has links)
<p dir="ltr">As data continues to grow rapidly in volume and complexity, there is an increasing need to extract meaningful insights from it. These datasets often contain sensitive individual information, making privacy protection crucial. Differential privacy has become the de facto standard for protecting individuals' privacy. Many datasets also have known constraints and structures. Can these known constraints or structures be leveraged to design mechanisms with better utility?</p><p dir="ltr">The focus of this thesis is to demonstrate that by leveraging the inherent structures and constraints within datasets, it may be possible to design differential privacy mechanisms that offer better utility (i.e., more accurate results) while maintaining the required level of privacy. This involves exploring advanced techniques and modifications to the basic mechanisms that take advantage of dataset-specific properties, such as sparsity, distributional assumptions, or other contextual information. This approach aims to minimize the noise added, thereby improving the utility of differentially private outputs.</p><p dir="ltr">In many scenarios, datasets contain constraints. In this thesis, we show that generating differentially private synthetic data while preserving constraints increases utility across several metrics, including marginal queries, classification task accuracy, and clustering. Smooth sensitivity is a data-dependent sensitivity metric that allows for more precise noise addition based on the actual data distribution, rather than worst-case scenarios. It addresses the limitations of local sensitivity by ensuring robust privacy guarantees, even in the presence of outliers or small changes in the data.</p><p dir="ltr">We have developed a differentially private Naive Bayes model using smooth sensitivity. By using data-dependent sensitivity measures like smooth sensitivity and incorporating known data constraints, we can reduce the amount of noise added, resulting in a more accurate model.</p>
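For context, the baseline that data-dependent measures like smooth sensitivity improve upon is the Laplace mechanism calibrated to worst-case (global) sensitivity. A minimal sketch, assuming a simple clamped-mean query rather than the thesis's Naive Bayes model:

```python
import numpy as np

def dp_mean_global(data, epsilon, lower, upper, rng):
    """Differentially private mean via the Laplace mechanism, with noise
    calibrated to global sensitivity: changing one record moves the mean
    of n values clamped to [lower, upper] by at most (upper - lower) / n."""
    clamped = np.clip(data, lower, upper)
    sensitivity = (upper - lower) / len(clamped)
    return clamped.mean() + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=1.0, size=10_000)
# With n = 10,000 and epsilon = 1, the noise scale is only 0.001, so the
# private mean stays close to the true mean.
private_mean = dp_mean_global(data, epsilon=1.0, lower=0.0, upper=10.0, rng=rng)
```

A smooth-sensitivity variant replaces the worst-case `(upper - lower) / n` bound with a data-dependent one, which can shrink the noise substantially on well-behaved datasets.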
55

The Hindemith string quartets: a computer-assisted study of selected aspects of style.

Kostka, Stefan M. January 1900 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1969. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references.
56

Měření efektivnosti zavedení datových schránek / Measuring the effectiveness of the introduction of data boxes

Ovesný, Tomáš January 2009 (has links)
This work addresses the data boxes information system, which supports electronic communication between public authorities and other bodies. The aim of the work is to identify, measure and evaluate the effectiveness of data boxes at selected public authorities when processing received and sent documents. The effectiveness is assessed using the proposed metrics and on the basis of an analysis of how documents are processed at the selected institutions. The effectiveness of data boxes is researched at the Municipal Office Dub nad Moravou and the Czech Agriculture and Food Inspection Authority. The information needed for the process analysis, and the data required for evaluating effectiveness with the proposed metrics, were obtained from the staff of those institutions. The work is divided into four parts. The first part is a thorough description of the data boxes information system; the functionality, use, technical solutions and legislation of data boxes are described based on available information. The second part focuses on the description and analysis of the selected public authorities and the way they process received and outgoing mail. In the third part, appropriate metrics to assess the effectiveness of data boxes are proposed, followed by a two-phase measurement. In the fourth part, the effectiveness of data boxes is evaluated based on the measurements conducted at the selected public authorities. The main contribution of this work is that it provides the first evaluation of the effectiveness of data boxes at the selected subjects, approximately one month after the official launch. Another benefit is the proposed process for evaluating the effectiveness of a very specific new information system, together with a comprehensive description of the data boxes information system.
57

Integrace datových schránek do informačního systému / Data boxes integration into information systems

Jakeš, Jiří January 2009 (has links)
This diploma thesis discusses the data boxes information system and its possible integration with various information systems using a software tool developed by the company Ixtent s.r.o. The first part of the thesis places the data boxes information system (ISDS) in a wider context and defines the relevant terms. The second part describes the options for connecting ISDS to various business applications; for this purpose, the "ISDS connector" software tool is used. The third part discusses particular completed projects, case studies, and the real benefits of these projects.
58

Návrh informační strategie společnosti / Proposal of the Information Strategy of a Company

Šimek, Václav January 2018 (has links)
This diploma thesis deals with the information strategy of the company EURPAL Ltd. (spol. s r.o.), whose current strategy appears to be outdated. Through an analysis of the current state and proposals of suitable measures, the company's information strategy should be improved by implementing a new information system. The aim of this thesis is first to introduce the company and its structure, then to use analyses to identify possible risks, and finally to propose measures that can help prevent these risks, or at least mitigate their impact.
59

GENERAL-PURPOSE STATISTICAL INFERENCE WITH DIFFERENTIAL PRIVACY GUARANTEES

Zhanyu Wang (13893375) 06 December 2023 (has links)
<p dir="ltr">Differential privacy (DP) uses a probabilistic framework to measure the level of privacy protection of a mechanism that releases data analysis results to the public. Although DP is widely used by both government and industry, there is still a lack of research on statistical inference under DP guarantees. On the one hand, existing DP mechanisms mainly aim to extract dataset-level information instead of population-level information. On the other hand, DP mechanisms introduce calibrated noise into the released statistics, which often results in sampling distributions more complex and intractable than the non-private ones. This dissertation aims to provide general-purpose methods for statistical inference, such as confidence intervals (CIs) and hypothesis tests (HTs), that satisfy the DP guarantees.</p><p dir="ltr">In the first part of the dissertation, we examine a DP bootstrap procedure that releases multiple private bootstrap estimates to construct DP CIs. We present new DP guarantees for this procedure and propose to use deconvolution with DP bootstrap estimates to derive CIs for inference tasks such as population mean, logistic regression, and quantile regression. Our method achieves the nominal coverage level in both simulations and real-world experiments and offers the first approach to private inference for quantile regression.</p><p dir="ltr">In the second part of the dissertation, we propose to use the simulation-based "repro sample" approach to produce CIs and HTs based on DP statistics. Our methodology has finite-sample guarantees and can be applied to a wide variety of private inference problems. It appropriately accounts for biases introduced by DP mechanisms (such as by clamping) and improves over other state-of-the-art inference methods in terms of the coverage and type I error of the private inference.
</p><p dir="ltr">In the third part of the dissertation, we design a debiased parametric bootstrap framework for DP statistical inference. We propose the adaptive indirect estimator, a novel simulation-based estimator that is consistent and corrects the clamping bias in the DP mechanisms. We also prove that our estimator has the optimal asymptotic variance among all well-behaved consistent estimators, and the parametric bootstrap results based on our estimator are consistent. Simulation studies show that our framework produces valid DP CIs and HTs in finite sample settings, and it is more efficient than other state-of-the-art methods.</p>
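As a simplified illustration of simulation-based DP inference (not the repro-sample or debiased frameworks themselves), the sketch below forms a confidence interval for a privately released mean by re-simulating the entire release pipeline, including clamping and noise. The data model and all parameters are assumed for illustration:

```python
import numpy as np

def dp_mean(data, epsilon, lo, hi, rng):
    # Laplace mechanism on the clamped mean; global sensitivity (hi - lo) / n.
    x = np.clip(data, lo, hi)
    return x.mean() + rng.laplace(scale=(hi - lo) / (len(x) * epsilon))

def parametric_bootstrap_ci(dp_est, n, sigma, epsilon, lo, hi, rng,
                            B=2000, alpha=0.05):
    """Re-simulate the whole pipeline (sampling, clamping, noise) around the
    released estimate, then use the spread of the simulated releases."""
    sims = np.empty(B)
    for b in range(B):
        synthetic = rng.normal(dp_est, sigma, size=n)  # assumed data model
        sims[b] = dp_mean(synthetic, epsilon, lo, hi, rng)
    half = np.quantile(np.abs(sims - dp_est), 1.0 - alpha)
    return dp_est - half, dp_est + half

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=5_000)
est = dp_mean(data, epsilon=1.0, lo=-5.0, hi=5.0, rng=rng)
lo_ci, hi_ci = parametric_bootstrap_ci(est, n=5_000, sigma=1.0, epsilon=1.0,
                                       lo=-5.0, hi=5.0, rng=rng)
```

This naive version centers the simulation on the noisy estimate and ignores the clamping bias; correcting that bias with a consistent estimator is precisely what the debiased framework in the third part addresses.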
60

<strong>Deep Learning-Based Anomaly  Detection in TLS Encrypted Traffic</strong>

Kehinde Ayano (16650471) 03 August 2023 (has links)
<p> The growing trend of encrypted network traffic is changing the cybersecurity threat landscape. Most critical infrastructures and organizations enhance service delivery by embracing digital platforms and applications that use encryption, so that data and information move across networks in encrypted form. While this protects data confidentiality, attackers also take advantage of encrypted network traffic to hide malicious software (malware), which easily bypasses conventional detection mechanisms because the traffic is not transparent enough for monitoring tools to analyze. Cybercriminals leverage cryptographic protocols such as SSL/TLS to launch malicious attacks, and this hidden threat exists precisely because benign traffic is also SSL-encrypted; hence, there is a need for visibility into encrypted traffic. This research was conducted to detect malware in encrypted network traffic without decryption. The existing solution involves bulk decryption, analysis, and re-encryption; however, this method is prone to privacy issues, is not cost-efficient, and is time-consuming, creating a huge overhead on the network. In addition, limited research exists on detecting malware in encrypted traffic without decryption. There is a need to strike a balance between security and privacy by building an intelligent framework that can detect malicious activity in encrypted network traffic without decrypting the traffic prior to inspection. With the payload still encrypted, the study focuses on extracting metadata from flow features to train a machine-learning model. It further deploys this set of features as input to an autoencoder, leveraging the autoencoder's reconstruction error for anomaly detection. </p>
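A minimal sketch of the reconstruction-error idea, using an MLP trained to reproduce its input as a stand-in autoencoder; the synthetic "flow features" and the threshold are invented for illustration and are not the features or models used in the thesis:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for flow-metadata features: benign traffic lies near a
# low-dimensional subspace; anomalous flows do not.
latent = rng.normal(size=(1000, 2))
mixing = rng.normal(size=(2, 8))
benign = latent @ mixing + 0.05 * rng.normal(size=(1000, 8))
anomalies = rng.uniform(-4.0, 4.0, size=(50, 8))

# A linear MLP with a narrow hidden layer, trained to reproduce its input,
# acts as a small autoencoder.
ae = MLPRegressor(hidden_layer_sizes=(2,), activation="identity",
                  max_iter=3000, random_state=0)
ae.fit(benign, benign)

def reconstruction_error(model, X):
    # Per-sample mean squared error between input and reconstruction.
    return np.mean((model.predict(X) - X) ** 2, axis=1)

# Flows whose error exceeds the 99th percentile of benign error are flagged.
threshold = np.quantile(reconstruction_error(ae, benign), 0.99)
flagged = reconstruction_error(ae, anomalies) > threshold
```

The key design point is that the autoencoder is trained only on benign traffic, so anything it cannot reconstruct well is a candidate anomaly; no decryption of the payload is required because only flow metadata is used.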
