1061

Diseño de una infraestructura centralizada de servidores virtuales en el centro de datos de una empresa pesquera / Design of a centralized infrastructure of virtual servers in the data center of a fishing company

Castañeda Alanya, Elmer Alfonso 17 April 2021 (has links)
This thesis develops a study aimed at designing a system that centralizes servers on a virtualization platform in order to support the essential services and applications of a given company. Chapter I introduces the organization for which the project is intended and describes its organizational environment; it then identifies the existing problem, defines the general and specific objectives, and closes with the justification for the design of the project. Chapter II lays out the theoretical framework, gathering the fundamental background required to support the project. Chapter III substantiates the problem identified in Chapter I with the information needed to assess its significance, including the characterization and justification of the project's requirements. Chapter IV addresses the essential characteristics of the proposed design, which solves the existing problem by building a server virtualization platform. Finally, the results and the validations of the project are presented, showing that the proposed objectives were met according to the indicators that define the expected outcome. / Tesis
1062

Intégration de l’utilisateur au contrôle d’accès : du processus cloisonné à l’interface homme-machine de confiance / Involving the end user in access control : from confined processes to trusted human-computer interface

Salaün, Mickaël 02 March 2018 (has links)
This thesis provides tools that let users contribute actively to the security of their own use of a computer system. First, user activities of different sensitivities need to be confined in dedicated domains by an access control that adjusts to the user's needs. To preserve this confinement, users must then be able to reliably identify, from their machine's interface, the domains they are interacting with. In the first part, we propose a new confinement mechanism that adapts transparently to changes in user activity without altering the behavior of existing access controls or degrading the security of the system. We describe a first implementation, named StemJail, based on Linux namespaces, and we improve on it with a new Linux security module, named Landlock, which can be used without privileges. In a second step, we identify and model the security properties a human-computer interface (HCI) must provide for the user to understand the system reliably and securely. In particular, the goal is to establish a link between the entities the users think they are communicating with and those they are actually communicating with. This model makes it possible to evaluate the impact of the compromise of individual HCI components and helps in assessing a given architecture.
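Since the abstract names Linux namespaces as the primitive StemJail builds on, a minimal sketch of that mechanism may help. The Python snippet below drops the current process into fresh user and mount namespaces via libc's unshare(), the unprivileged building block such a confinement domain can start from. It illustrates the kernel feature only, not StemJail or Landlock code; the constants come from <linux/sched.h>, and the shell launched at the end is just an example.

```python
import ctypes
import os

CLONE_NEWNS = 0x00020000    # fresh mount namespace for per-domain mounts
CLONE_NEWUSER = 0x10000000  # fresh user namespace, creatable without privileges

libc = ctypes.CDLL(None, use_errno=True)

def enter_confined_domain():
    """Move the calling process into an unprivileged confinement domain."""
    if libc.unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0:
        raise OSError(ctypes.get_errno(), "unshare() failed")
    uid, gid = os.getuid(), os.getgid()
    # Map our identity into the new user namespace so mounts can be set up
    # for this domain without real root privileges.
    with open("/proc/self/setgroups", "w") as f:
        f.write("deny")
    with open("/proc/self/uid_map", "w") as f:
        f.write(f"0 {uid} 1")
    with open("/proc/self/gid_map", "w") as f:
        f.write(f"0 {gid} 1")

if __name__ == "__main__":
    enter_confined_domain()
    # Mounts performed from here on stay private to this domain.
    os.execvp("bash", ["bash"])
```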
1063

Dolování asociačních pravidel z datových skladů / Association Rules Mining over Data Warehouses

Hlavička, Ladislav January 2009 (has links)
This thesis deals with mining association rules over data warehouses. The first part familiarizes the reader with terms such as knowledge discovery in databases and data mining. The next part deals with data warehouses. Association analysis, association rules, their types, and the possibilities for mining them are then described, and the architecture of Microsoft SQL Server and its tools for working with data warehouses are presented. The rest of the thesis covers the description and analysis of the Star-miner algorithm and the design, implementation, and testing of the application.
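As a reminder of the standard measures that association rule mining (the Apriori family as well as algorithms such as Star-miner) is built on, here is a minimal, self-contained sketch of support and confidence; the toy transactions are invented for illustration and are unrelated to the thesis's data.

```python
# Support and confidence of a rule X -> Y over a transaction database.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]

def support(itemset, db):
    """Fraction of transactions that contain every item of the itemset."""
    return sum(1 for t in db if itemset <= t) / len(db)

def confidence(antecedent, consequent, db):
    """How often the consequent holds in transactions matching the antecedent."""
    return support(antecedent | consequent, db) / support(antecedent, db)

print(support({"bread", "milk"}, transactions))        # 0.5
print(confidence({"bread"}, {"milk"}, transactions))   # ~0.67
```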
1064

Dostupná řešení pro clustrování serverů / Available Solutions for Server Clustering

Bílek, Václav January 2008 (has links)
The goal of this master's thesis is to analyze open-source solutions for load balancing and high availability, with a focus on their typical areas of use. These areas are in particular network infrastructure (routers, load balancers), network and Internet services in general, and parallel filesystems. The next part of the thesis analyzes the design, implementation, and planned evolution of a fast-growing Internet project, whose growth makes scalability an issue at every level. The last part is a performance analysis of the individual load-balancing methods in the Linux Virtual Server project.
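For readers unfamiliar with the schedulers compared in the last part, the sketch below illustrates the idea behind one of them, weighted round-robin, in plain Python. It is a simplified illustration, not the in-kernel Linux Virtual Server implementation, and the backend addresses and weights are made up.

```python
from itertools import cycle

def weighted_round_robin(backends):
    """Yield real servers in proportion to their configured weights."""
    expanded = [addr for addr, weight in backends for _ in range(weight)]
    return cycle(expanded)

# Hypothetical real servers behind one virtual service.
scheduler = weighted_round_robin([("10.0.0.1", 3), ("10.0.0.2", 1)])
for _ in range(8):
    print(next(scheduler))  # 10.0.0.1 is chosen three times as often as 10.0.0.2
```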
1065

Systém pro dolování z dat v prostředí Oracle / Data Mining System in Oracle

Krásný, Michal January 2008 (has links)
This MSc Project deals with the system of Knowledge Discovery in Databases. It is a client application which uses the Oracle Data Mining Server's 10.g Release 2 (10.2) services. The application is implemented in Java, the graphical user interface is built on the NetBeans Rich Client Platform. The theoretical part introduces the Knowledge Discovery in Databases, while the practical part describes functionality of the original system, it's deficiencies, documents sollutions of theese deficiencies and there are proposed improvements for further development. The goal of this project is to modify the system to increase the application usability.
1066

Application interference analysis: Towards energy-efficient workload management on heterogeneous micro-server architectures

Hähnel, Markus, Arega, Frehiwot Melak, Dargie, Waltenegus, Khasanov, Robert, Castrillo, Jeronimo 11 May 2023 (has links)
The ever-increasing demand for Internet traffic, storage, and processing requires an ever-increasing amount of hardware resources. In addition, infrastructure providers over-provision their system architectures in order to serve users at peak times without performance delays. Over-provisioning leads to underutilization and thus to unnecessary power consumption. Workload management strategies are therefore needed to map and schedule different services simultaneously in an energy-efficient manner without compromising performance, especially on heterogeneous micro-server architectures. This requires statistical models of how services interfere with each other and thereby affect both performance and energy consumption; indeed, the performance-energy behavior of mixed workloads is not well understood. This paper presents an interference analysis for heterogeneous workloads (i.e., CPU- and memory-intensive) on a big.LITTLE MPSoC architecture. We employ state-of-the-art tools to generate multiple single-application mappings and characterize the interference between two different services. We observed performance degradation factors between 1.1 and 2.5. For some configurations, executing on different clusters resulted in reduced energy consumption with no performance penalty. This kind of detailed analysis gives us first insights towards more general models for future workload management systems.
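The abstract reports degradation factors between 1.1 and 2.5 without defining the metric; a common reading, assumed here purely for illustration, is the ratio of a service's runtime when co-located to its runtime when running alone, with energy obtained from mean power draw times runtime. The figures in the sketch are invented.

```python
def degradation_factor(runtime_colocated_s, runtime_solo_s):
    """Interference factor: how much longer a service runs when co-located."""
    return runtime_colocated_s / runtime_solo_s

def energy_joules(mean_power_w, runtime_s):
    """Energy consumed over a run, from mean power draw and runtime."""
    return mean_power_w * runtime_s

solo_s, colocated_s = 40.0, 58.0                # made-up runtimes in seconds
print(degradation_factor(colocated_s, solo_s))  # 1.45, inside the reported 1.1-2.5 range
print(energy_joules(mean_power_w=3.8, runtime_s=colocated_s))  # 220.4 J
```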
1067

PREVENTING DATA POISONING ATTACKS IN FEDERATED MACHINE LEARNING BY AN ENCRYPTED VERIFICATION KEY

Mahdee, Jodayree 06 1900 (has links)
Federated learning has recently gained attention for its ability to protect data privacy and distribute computing loads [1]. It overcomes the limitations of traditional machine learning algorithms by allowing computers to train on remote data inputs and build models while keeping participant privacy intact. Traditional machine learning enabled computers to learn patterns and make decisions from data without explicit programming, opening up new possibilities for automating tasks, recognizing patterns, and making predictions. With the exponential growth of data and advances in computational power, machine learning has become a powerful tool in many domains, driving innovations in fields such as image recognition, natural language processing, autonomous vehicles, and personalized recommendations. In traditional machine learning, however, data is usually transferred to a central server, raising concerns about privacy and security: centralizing data exposes sensitive information and makes it vulnerable to breaches or unauthorized access. Centralized machine learning also assumes that all data is available at a central location, which is not always practical or feasible; some data may be distributed across different locations, owned by different entities, or subject to legal or privacy restrictions. Finally, training a global model in traditional machine learning involves frequent communication between the central server and the participating devices, and this communication overhead can be substantial, particularly with large-scale datasets or resource-constrained devices. / Recent studies have uncovered security issues in most federated learning models. One common false assumption in these models is that participants are not attackers and would not use polluted data. This vulnerability enables attackers to train their models on polluted data and then send the polluted updates to the training server for aggregation, potentially poisoning the overall model. In such a setting, it is challenging for an edge server to thoroughly inspect the data used for model training or to supervise every edge device. This study evaluates the vulnerabilities present in federated learning, explores the types of attacks that can occur, and presents a robust prevention scheme to address these vulnerabilities. The proposed scheme enables federated learning servers to monitor participants actively in real time and to identify infected participants by introducing an encrypted verification scheme. The thesis outlines the protocol design of this prevention scheme and presents experimental results that demonstrate its effectiveness. / Thesis / Doctor of Philosophy (PhD) / Federated learning models face significant security challenges and can be vulnerable to attacks. For instance, they assume that participants are not attackers and will not manipulate the data. In reality, attackers can compromise the data of remote participants by inserting fake data or altering existing data, so that polluted training results are sent to the server; if a sample is an animal image, for example, attackers can modify it to contaminate the training data. This thesis introduces a robust preventive approach that counters data poisoning attacks in real time. It incorporates an encrypted verification scheme into the federated learning model, preventing poisoning attacks without the need for attack-specific detection programming.
The main contribution of this work is a detection and prevention mechanism that allows the training server to supervise training in real time and to stop data modifications in each client's storage before and between training rounds. With this scheme, the training server can identify modifications as they happen and remove infected remote participants.
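The abstract does not spell out the protocol, so the sketch below only illustrates the general idea of gating aggregation on a verification key: each enrolled client binds its model update to a previously registered digest of its local data with a keyed hash, and the server discards updates whose tag does not verify. The key handling and message formats are assumptions made for illustration, not the scheme proposed in the thesis.

```python
import hashlib
import hmac

ROUND_KEY = b"per-round secret shared with enrolled clients"  # assumption

def data_digest(local_data: bytes) -> bytes:
    """Digest of the client's data snapshot, registered before the round."""
    return hashlib.sha256(local_data).digest()

def client_tag(registered_digest: bytes, update: bytes) -> bytes:
    """Tag binding a model update to the registered data snapshot."""
    return hmac.new(ROUND_KEY, registered_digest + update, hashlib.sha256).digest()

def server_accepts(registered_digest: bytes, update: bytes, tag: bytes) -> bool:
    """Server-side check: drop updates whose tag does not verify before aggregation."""
    expected = hmac.new(ROUND_KEY, registered_digest + update, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# One illustrative round.
digest = data_digest(b"client training data at enrolment time")
update = b"serialized model weights"
assert server_accepts(digest, update, client_tag(digest, update))
assert not server_accepts(digest, b"poisoned weights", client_tag(digest, update))
```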
1068

Biometric Multi-modal User Authentication System based on Ensemble Classifier

Assaad, Firas Souhail January 2014 (has links)
No description available.
1069

Extending the Cutting Stock Problem for Consolidating Services with Stochastic Workloads

Hähnel, Markus, Martinovic, John, Scheithauer, Guntram, Fischer, Andreas, Schill, Alexander, Dargie, Waltenegus 16 May 2023 (has links)
Data centres and similar server clusters consume a large amount of energy, but not all of the energy consumed produces useful work: servers consume a disproportionate amount of energy when they are idle, underutilised, or overloaded. The effect of these conditions can be minimised by balancing the demand for and the supply of resources through careful prediction of future workloads and their efficient consolidation. In this paper we extend the cutting stock problem to consolidate workloads with stochastic characteristics. We employ the aggregate probability density function of co-located, simultaneously executing services to establish valid patterns, where a valid pattern is one that keeps the overall resource utilisation below a set threshold. We tested the scope and usefulness of our approach on a 16-core server with 29 different benchmarks. The workloads of these benchmarks were generated from the CPU utilisation traces of 100 real-world virtual machines, obtained from a Google data centre hosting more than 32000 virtual machines. Altogether, we considered 600 different consolidation scenarios in our experiment. We compared the performance of our approach (system overload probability, job completion time, and energy consumption) with four existing or proposed scheduling strategies. In each category, our approach incurred a modest penalty with respect to the best-performing approach in that category, but overall it performed remarkably well, clearly demonstrating its capacity to achieve the best trade-off between resource consumption and performance.
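As one concrete reading of the pattern-validity test described above, the sketch below assumes, purely for illustration, that each service's CPU demand is an independent Gaussian; the aggregate demand of a pattern is then Gaussian with summed means and variances, and the pattern counts as valid when its overload probability stays below a chosen threshold. The demand figures are invented.

```python
import math

def overload_probability(services, capacity):
    """P(aggregate demand > capacity) for independent Gaussian per-service demands."""
    mean = sum(m for m, _ in services)
    std = math.sqrt(sum(s * s for _, s in services))
    z = (capacity - mean) / std
    return 0.5 * math.erfc(z / math.sqrt(2))  # 1 - Phi(z)

def is_valid_pattern(services, capacity, epsilon=0.05):
    """A pattern is valid if its overload probability stays below epsilon."""
    return overload_probability(services, capacity) <= epsilon

pattern = [(3.0, 0.8), (2.5, 0.6), (4.0, 1.1)]  # (mean, std) CPU demand per service
print(overload_probability(pattern, capacity=12.0))
print(is_valid_pattern(pattern, capacity=12.0))
```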
1070

Mitteilungen des URZ 2/2005

Blumtritt, Clauß, Fischer, Kempe, Trapp, Richter, Wolf, Ziegler 03 May 2005 (has links)
Information from the University Computing Centre (URZ): - The Campusnetz II and IP telephony (VoIP) projects - PROWeb, a new service for project WWW servers - Supported Linux distributions - WUSCH, the Windows update service at TU Chemnitz - Information from the URZ on the 'framework agreement for the purchase of standard PC equipment' - Migration of the University Library's local system to LIBERO 5 - Electronic publishing at TU Chemnitz - 10 years of MONARCH - Brief notices - Software news
