  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Private and Secure Data Communication: Information Theoretic Approach

Basciftci, Yuksel O. January 2016 (has links)
No description available.
42

ENHANCING PRIVACY OF TRAINING DATA OF DEEP NEURAL NETWORKS ON EDGE USING TRUSTED EXECUTION ENVIRONMENTS

Gowri Ramshankar (18398499) 18 April 2024 (has links)
<p dir="ltr">Deep Neural Networks (DNNs) are deployed in many applications, and protecting the privacy of their training data has become a major concern. Membership Inference Attacks (MIAs) occur when an unauthorized person is able to determine whether a piece of data was used in training a DNN. This paper investigates using Trusted Execution Environments (TEEs) in modern processors to protect the privacy of training data. Running DNNs in a TEE, however, encounters many challenges, including limited computing and storage resources as well as a lack of development frameworks. This paper proposes a new method to partition pre-trained DNNs so that parts of the DNNs can fit into the TEE to protect data privacy. The existing software infrastructure for running DNNs in a TEE requires a significant amount of human effort using C programs, yet most existing DNNs are implemented in Python. This paper presents a framework that automates most of the process of porting Python-based DNNs to the TEE. The proposed method is deployed in Arm TrustZone-A on a Raspberry Pi 3B+ with OP-TEE OS and evaluated on popular image classification models: AlexNet, ResNet, and VGG. Experimental results show that our method reduces the accuracy of gradient-based MIAs on AlexNet, VGG-16, and ResNet-20, evaluated on the CIFAR-100 dataset, by 17.9%, 11%, and 35.3%, respectively. On average, processing an image in the native execution environment takes 4.3 seconds, whereas in the Trusted Execution Environment (TEE) it takes about 10.1 seconds per image.</p>
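The core idea above is splitting a pre-trained network so that the privacy-sensitive later layers run inside the TEE's limited memory while earlier layers run in the normal world. A minimal sketch of that partitioning decision, with invented layer names and sizes (the thesis's actual partitioning criteria are not reproduced here):

```python
# Hypothetical sketch of the partitioning idea: walk backwards from the output
# layer, greedily assigning layers to the TEE until its memory budget is
# exhausted; everything earlier runs in the untrusted world. Layer sizes are
# illustrative, not taken from the thesis.

def split_for_tee(layer_sizes_kb, tee_budget_kb):
    """Return the index i such that layers[i:] fit within the TEE budget."""
    used = 0
    split = len(layer_sizes_kb)
    for i in range(len(layer_sizes_kb) - 1, -1, -1):
        if used + layer_sizes_kb[i] > tee_budget_kb:
            break
        used += layer_sizes_kb[i]
        split = i
    return split

# Illustrative per-layer parameter sizes (KB) for a small CNN.
sizes = [300, 900, 1200, 2500, 40]   # conv1..conv4, fc
split = split_for_tee(sizes, tee_budget_kb=3000)
print("layers executed inside the TEE:", sizes[split:])
```

Running the tail of the network in the TEE is what limits what a gradient-based MIA can observe, at the cost of the slower TEE execution the abstract reports.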
43

Differentially Private Federated Learning Algorithms for Sparse Basis Recovery

Ajinkya K Mulay (18823252) 14 June 2024 (has links)
<p dir="ltr">Sparse basis recovery is an important learning problem when the number of model dimensions (<i>p</i>) is much larger than the number of samples (<i>n</i>). However, there has been little work that studies sparse basis recovery in the Federated Learning (FL) setting, where the Differential Privacy (DP) of the client data must also be simultaneously protected. Notably, the performance guarantees of existing DP-FL algorithms (such as DP-SGD) degrade significantly when the system is under-determined (i.e., <i>p >> n</i>), and thus they fail to accurately learn the true underlying sparse model. The goal of my thesis is therefore to develop DP-FL sparse basis recovery algorithms that can provably and accurately recover the true underlying sparse basis even when <i>p >> n</i>, while still guaranteeing the differential privacy of the client data.</p><p dir="ltr">During my PhD studies, we developed three DP-FL sparse basis recovery algorithms for this purpose. Our first algorithm, SPriFed-OMP, based on the Orthogonal Matching Pursuit (OMP) algorithm, can achieve high accuracy even when <i>n = O(\sqrt{p})</i> under the stronger Restricted Isometry Property (RIP) assumption for least-squares problems. Our second algorithm, Humming-Bird, based on a carefully modified variant of the Forward-Backward Algorithm (FoBA), can achieve differentially private sparse recovery in the same setup while requiring only the much weaker Restricted Strong Convexity (RSC) condition. We further extend Humming-Bird to support loss functions beyond least squares that satisfy the RSC condition. To the best of our knowledge, these are the first DP-FL results guaranteeing sparse basis recovery in the <i>p >> n</i> setting.</p>
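For readers unfamiliar with the building block behind SPriFed-OMP, here is a minimal, non-private sketch of plain Orthogonal Matching Pursuit: greedily pick the column most correlated with the residual, re-fit on the selected support, and repeat. The federated aggregation and DP noise that the thesis adds are omitted, and the problem sizes are invented:

```python
import numpy as np

# Minimal (non-private) Orthogonal Matching Pursuit sketch. SPriFed-OMP
# extends this idea with federated aggregation and DP noise, which are
# deliberately not shown here.

def omp(A, y, k):
    """Recover a k-sparse x from y = A x by greedy support selection."""
    n, p = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit least squares on the selected support.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(p)
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)   # n << p
x_true = np.zeros(200)
x_true[[5, 40, 120]] = [2.0, -1.5, 3.0]
x_hat = omp(A, A @ x_true, k=3)
print("support recovered:", sorted(np.flatnonzero(np.abs(x_hat) > 1e-6)))
```

Note that with <i>n</i> = 60 and <i>p</i> = 200 the linear system is badly under-determined, yet the greedy method recovers the 3-sparse signal exactly, which is the regime the abstract is concerned with.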
44

Analysis of the survival patterns of United States naval officers

Korkmaz, Ibrahim 03 1900 (has links)
Approved for public release, distribution is unlimited / The goal of this thesis is to identify and quantitatively evaluate the factors, especially commissioning source, that affect the longevity of officers in the U.S. Navy. To reach this goal, a survival analysis is conducted on the survival patterns of officer cohorts who entered the service between 1983 and 1990. Using data created from Navy Officer Data Card information and annual promotion board results, three survival analysis procedures, LIFETEST, LIFEREG and PHREG, were used to examine the factors that influence the survival of U.S. Naval officers. The results of the survival analysis indicate that commissioning source has a strong, significant effect on survival rates, with Naval Academy graduates having a better survival rate than officers from other commissioning sources. The analysis also shows that females and African-Americans have better survival rates than males and whites, respectively, and that prior-enlisted officers, older officers, and graduates of non-selective colleges have higher survival rates than their counterparts. Additionally, Surface Warfare, Fleet Support and Supply Corps officers were found to have lower survival rates than officers in other communities. When survival functions for involuntary and voluntary separations were analyzed separately, the results differed. Commissioning age, being African-American, being single with children, commissioning from the NROTC Contract Program, commissioning from OTHERSOURCE, being prior enlisted, having a high GPA and designation in the AIR community had significant negative effects on involuntary separations and significant positive effects on voluntary separations. / Captain, Turkish General Command of Gendarmerie
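The LIFETEST procedure mentioned above produces nonparametric Kaplan-Meier survival curves. A minimal Python sketch of that estimator (not the thesis's SAS code), with a made-up officer cohort for illustration:

```python
# Illustrative Kaplan-Meier estimator, the nonparametric survival curve that
# SAS PROC LIFETEST computes. The cohort data below are invented, not taken
# from the thesis.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at distinct event times.

    times  : years of service observed for each officer
    events : 1 if the officer separated at that time, 0 if censored
    """
    surv, prob = [], 1.0
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)   # officers still at risk
        if d > 0:
            prob *= (1 - d / n)
            surv.append((t, round(prob, 4)))
    return surv

# Hypothetical cohort: (years served, separated? 1=yes, 0=censored)
times  = [4, 4, 5, 6, 6, 8, 10, 10]
events = [1, 1, 0, 1, 1, 0, 1, 0]
print(kaplan_meier(times, events))
```

Comparing such curves across groups (e.g. by commissioning source) is exactly the kind of analysis the abstract describes; the regression procedures LIFEREG and PHREG then model how covariates shift these curves.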
45

Model distribuiranja geopodataka u komunalnim sistemima / Model of Spatial Data Distribution in Municipal Systems

Bulatović Vladimir 14 May 2011 (has links)
<p>U radu su prikazani Open Geospatial Consortium (OGC) web servisi, iz aspekta serverskih i klijentskih aplikacija. Analizirani su problemi razmene prostornih podataka u složenim sistemima sa naglaskom na komunalne službe gradova. Na osnovu analize razmene podataka, predložen je model koji unapređuje komunikaciju i pospešuje napredak celokupnog sistema implementacijom distribuiranih OGC web servisa. Predloženi model distribucije prostornih podataka može se primenjivati na sve složene sisteme, ali i unutar manjih sistema kao što su kompanije koje se sastoje iz više sektora ili podsistema.</p> / <p>A short review of Open Geospatial Consortium (OGC) web services is given in this work from the perspective of server and client applications. The problems of exchanging spatial data in complex systems, such as municipal services, have been analysed. Based on an analysis of data exchange, a model has been proposed to improve communication and the progress of the whole system by implementing distributed OGC web services. The described model of spatial data distribution can be applied to all complex systems, but also within smaller systems such as companies that consist of multiple sectors or subsystems.</p>
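The OGC web services discussed above standardize how clients request spatial data. A minimal sketch of constructing a WMS 1.3.0 GetMap request, the most common such call; the endpoint and layer name are placeholders, not from the thesis:

```python
from urllib.parse import urlencode

# Sketch of an OGC WMS 1.3.0 GetMap request URL, the kind of standardized
# call a distributed spatial-data model relies on. The base URL and layer
# name below are hypothetical.

def wms_getmap_url(base, layer, bbox, width=800, height=600):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",          # WMS 1.3.0 uses CRS (1.1.1 used SRS)
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "municipal:water_network",
                     (45.0, 19.0, 45.5, 20.0))
print(url)
```

Because every municipal service exposes the same request interface, clients need no knowledge of the underlying GIS, which is what makes the distributed model in the abstract workable.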
46

Analýza fungování datových schránek ve veřejné správě na příkladu ČSSZ / Analysis of operation of data boxes in public administration at the example of the CSSA

Pavelec, Jan January 2011 (has links)
This work deals with the operation of Data Boxes in relation to public institutions, namely the Czech Social Security Administration. The introduction is devoted to a summary of the historical development of Data Boxes and a description of the basic rules for using the Information System of Data Boxes. The role of Data Boxes in Czech eGovernment and their relationship to its other elements is mentioned. Finally, the basic legislation related to Data Boxes is listed and briefly discussed, along with the associated problems. The next part analyses the situation at the Czech Social Security Administration with regard to Data Boxes. It describes the delivery of documents before Data Boxes were commissioned and the situation that arose after their commissioning. A comparison of these periods follows, together with an evaluation of the impact the introduction of Data Boxes has had on the operation of the Czech Social Security Administration. The final part of the work focuses on analysing the problems the Czech Social Security Administration has encountered in connection with the operation of Data Boxes and suggests possible options for addressing these problems.
47

A web-based graphical user interface to display spatial data

Fiedeldey, Claudia Alexandra 23 February 2007 (has links)
This dissertation presents the design and implementation of a graphical user interface (GUI) to display spatial data in a web-based environment. The work is a case study for a web-based framework for distributed applications, the Web Computing Skeleton, using a distributed open spatial query mechanism to display the geographic data. The design is based on investigation of geographic information systems (GISs), GUI design and properties of spatial query mechanisms. The purpose of the GUI is to integrate information about a geographic area; display, manipulate and query geographic-based spatial data; execute queries about spatial relationships and analyse the attribute data to calculate the shortest routes for emergency response. The GUI is implemented as a Java applet embedded in a web document that communicates with the application server via generic GIS classes that provide a common interface to various GIS data sources used in the spatial query mechanism to access a geographic database. Features that are supported by the distributed open spatial query mechanism include a basic set of spatial selection criteria, spatial selection based on pointing, specification of a query window, description of a map scale and identification of a map legend. The design is based on a formal design process that includes the selection of a conceptual model, identification of task flow, major windows and dialog flow, the definition of fields and detailed window layout and finally the definition of field constraints and defaults. The conceptual model characterises the application and provides a framework for users to learn the system model. This model is conceptualised as a map that the user manipulates directly. Unlike a typical map, which just shows spatial data such as roads, cities, and country borders, the GIS links attribute data like population statistics to the spatial data. 
This link between the map data and the attribute data makes the GIS a powerful tool to manipulate and display data. To measure the performance of displaying spatial data, two main factors are considered, namely processing speed and display quality. Factors that affect the processing speed include the rate of data transfer from the generic GIS classes, the rate data is downloaded over the network and the speed of execution of the drawing. Two factors that influence the spatial data display quality are pixel distance and bitmap quality. The pixel distance set in the geographic database is represented by two pixels on the display screen, which affects the display quality since the pixel distance is the upper limit for display granularity. This means that setting the pixel distance is a trade-off between the processing speed and the display quality. Bitmaps are raster images that are made up of pixels or cells. To improve the raster image quality, the bitmap resolution can be adjusted to display more pixels per centimetre. / Dissertation (MSc (Computer Science))--University of Pretoria, 2007. / Computer Science / unrestricted
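The abstract mentions analysing attribute data to calculate shortest routes for emergency response. The standard tool for that is Dijkstra's algorithm over a weighted road graph; a self-contained sketch follows (in Python rather than the dissertation's Java, with an invented toy road network):

```python
import heapq

# Dijkstra's shortest-path sketch over a toy road graph, illustrating the
# emergency-response routing the abstract refers to. Node names and travel
# times (minutes) are invented for illustration.

def shortest_route(graph, start, goal):
    """graph: {node: [(neighbour, minutes), ...]} -> (total_minutes, path)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

roads = {
    "station": [("A", 4), ("B", 2)],
    "A": [("incident", 5)],
    "B": [("A", 1), ("incident", 10)],
}
print(shortest_route(roads, "station", "incident"))
```

In a GIS setting the edge weights would come from the attribute data linked to the road geometry (speed limits, lengths), which is precisely the map-to-attribute link the abstract highlights.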
48

Komplexní řízení kvality dat a informací / Towards Complex Data and Information Quality Management

Pejčoch, David January 2010 (has links)
This work deals with the issue of Data and Information Quality. It critically assesses the current state of knowledge within the various methods used for Data Quality Assessment and Data (Information) Quality improvement. It proposes new principles where this critical assessment revealed gaps. The main idea of this work is the concept of Data and Information Quality Management across the entire universe of data. This universe represents all data sources which the respective subject comes into contact with and which are used in its existing or planned processes. For all these data sources, this approach considers setting a consistent set of rules, policies and principles with respect to the current and potential benefits of these resources, while also taking into account the potential risks of their use. An imaginary red thread that runs through the text is the importance of additional knowledge in the process of Data (Information) Quality Management. The introduction of a knowledge base oriented to support Data (Information) Quality Management (QKB) is therefore one of the fundamental principles of the author's proposed set of best practices.
49

Výběr a implementace informačního systému / Information System Selection

Odehnal, Ondřej January 2017 (has links)
This master's thesis deals with the selection of a suitable information system for the company ELKOV elektro a.s., which deals with the sale of electronic material and luminaires. A suitable system is selected using rough and fine selection according to the company's requirements. The thesis also contains a timetable for implementation and an economic evaluation.
50

Posouzení informačního systému firmy a návrh změn / Information System Assessment and Proposal of ICT Modification

Čelko, Boris January 2020 (has links)
This diploma thesis focuses on the company INAT s.r.o., operating in the engineering industry. The focus of the work is to assess the currently used information system and propose appropriate changes to improve it. The work is divided into three main parts. The first part is an overview of the theory of the issue, the second part provides detailed analyses of the company and its information system, and the last part consists of the author's own proposals for improving the information system, along with an economic evaluation.
