21 |
Design of a generic client-server messaging interface using XML
Rimer, Suvendi Chinnappen 21 September 2005 (has links)
Applications that use directory services or relational databases operate in client-server mode where a client requests information from a server, and the server returns a response to the client. Communication between each client-server application is achieved by using separate custom-built front-ends with non-portable data formats. A need exists to access information from different heterogeneous client-server systems in a standard message request-response format. This research proposes a generic XML document that presents a common request-response interface to clients from which they can access network protocol or database information. The XML component is easily adaptable to accessing any new client-server type protocol or database data that may be added to a server. The approach to determining the XML elements is to first review each system's command and data structures separately, and then determine whether there are commonalities across the protocols that allow a common representation of both the data and the command structure. For the purposes of this project, three different data sources that are typically used in an Internet application were analysed, namely: -- a TCP based server program; -- a relational type database; and -- a directory service. The solution was implemented using Linux as the operating system, Java as the programming language, MySQL as the relational database, openLDAP as the directory server and a proprietary TCP based server application. Initially the complete system was developed for the proprietary TCP-based application. The other systems were added with minimal additional work. The result of the implementation was that it is relatively easy to add new protocols (e.g. LDAP) on an as-needed basis with minimal changes required on the server side. A client will receive XML responses that the client can either adapt (typically using a separate style-sheet) to their specific needs or use the existing front-ends if they are suitable. After the design was implemented and tested, the performance of XML and non-XML messages was evaluated. As expected, the increased verbosity of XML results in a larger footprint that requires more processing time and resources. This means that any implementation using XML has to carefully weigh the benefits of flexibility, extensibility and standard message formats against reduced performance. After evaluating XML type messages in an Internet type environment that involved human-computer interaction, it was concluded that the slower response times are not significant enough to negate the benefits of a common message interface provided by using XML. / Dissertation (M Eng (Computer Engineering))--University of Pretoria, 2006. / Electrical, Electronic and Computer Engineering / unrestricted
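To make the request-response idea concrete, the following is a minimal sketch of how such a generic XML envelope could be built and consumed. The element and attribute names and the "ldap" source value are assumptions chosen for illustration; the thesis defines its own schema and implements the server side in Java.

```python
# Minimal sketch of a generic XML request/response envelope (element and
# attribute names are illustrative assumptions, not the thesis' schema).
import xml.etree.ElementTree as ET

def build_request(source: str, command: str, params: dict) -> bytes:
    """Build one common request shape regardless of the back-end (LDAP, SQL, TCP server)."""
    req = ET.Element("request", attrib={"source": source})
    ET.SubElement(req, "command").text = command
    for name, value in params.items():
        ET.SubElement(req, "param", attrib={"name": name}).text = str(value)
    return ET.tostring(req, encoding="utf-8")

def parse_response(payload: bytes) -> dict:
    """Flatten a generic <response><field name="...">...</field></response> document into a dict."""
    root = ET.fromstring(payload)
    return {field.get("name"): field.text for field in root.findall("field")}

# The client issues the same call shape whether the server resolves it against
# MySQL, OpenLDAP or the proprietary TCP application.
print(build_request("ldap", "search", {"uid": "jsmith"}).decode("utf-8"))
print(parse_response(b'<response><field name="cn">John Smith</field></response>'))
```

A client-side style sheet, as the abstract notes, could then transform such a response for a specific front-end.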
|
22 |
Efektivní metoda čtení adresářových položek v souborovém systému Ext4 / An Efficient Way to Allocate and Read Directory Entries in the Ext4 File System
Pazdera, Radek January 2013 (has links)
The aim of this thesis is to improve the performance of sequential directory traversal in the ext4 file system. The HTree data structure currently used to implement directories in ext4 handles random accesses into a directory very well, but it is not optimised for sequential traversal. This thesis provides an analysis of the problem. It first studies the implementation of the ext4 file system and the related subsystems of the Linux kernel. A suite of tests was created to evaluate the performance of the current directory index implementation. Based on the results of these tests, a solution was proposed and subsequently implemented in the Linux kernel. The thesis concludes with an evaluation of the benefits of the new implementation and a comparison of its performance with other Linux file systems.
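The performance issue the thesis addresses can be illustrated from user space: HTree hands back directory entries in hash order, so reading their metadata in that order hits the inode table almost randomly. A common user-space workaround, sketched below, is to sort the listing by inode number before touching the metadata; the thesis itself works on the kernel side, so this only illustrates the problem, not its solution.

```python
# Illustrative sketch: list a large directory and read metadata in inode order,
# which roughly matches on-disk layout and avoids the random access pattern
# caused by ext4/HTree hash ordering. Not the kernel-side fix from the thesis.
import os

def scan_sorted_by_inode(path: str):
    entries = list(os.scandir(path))            # cheap: names + inode numbers only
    entries.sort(key=lambda e: e.inode())       # reorder to match inode table layout
    for entry in entries:
        st = entry.stat(follow_symlinks=False)  # metadata reads are now mostly sequential
        yield entry.name, st.st_size

if __name__ == "__main__":
    for name, size in scan_sorted_by_inode("."):
        print(f"{size:>10}  {name}")
```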
|
23 |
Verteilte Autorisierung innerhalb von Single Sign-On-Umgebungen : Analyse, Architektur und Implementation eines Frameworks für verteilte Autorisierung in einer ADFS-Umgebung / Distributed authorization within single sign on environments : analysis, architecture, and implementation of a framework for distributed authorization within an ADFS environment
Kirchner, Peter January 2007 (has links)
Aktuelle Softwaresysteme erlauben die verteilte Authentifizierung von Benutzern über Verzeichnisdienste, die sowohl im Intranet als auch im Extranet liegen und die über Domänengrenzen hinweg die Kooperation mit Partnern ermöglichen. Der nächste Schritt ist es nun, die Autorisierung ebenfalls aus der lokalen Anwendung auszulagern und diese extern durchzuführen – vorzugsweise unter dem Einfluss der Authentifizierungspartner.
Basierend auf der Analyse des State-of-the-Art wird in dieser Arbeit ein Framework vorgestellt, das die verteilte Autorisierung von ADFS (Active Directory Federation Services) authentifizierten Benutzern auf Basis ihrer Gruppen oder ihrer persönlichen Identität ermöglicht. Es wird eine prototypische Implementation mit Diensten entwickelt, die für authentifizierte Benutzer Autorisierungsanfragen extern delegieren, sowie ein Dienst, der diese Autorisierungsanfragen verarbeitet. Zusätzlich zeigt die Arbeit eine Integration dieses Autorisierungs-Frameworks in das .NET Framework, um die praxistaugliche Verwendbarkeit in einer aktuellen Entwicklungsumgebung zu demonstrieren.
Abschließend wird ein Ausblick auf weitere Fragestellungen und Folgearbeiten gegeben. / Current software systems allow distributed authentication of users via directory services, located both in the intranet and in the extranet, to establish cooperation with partners across domain boundaries. The next step is to move authorization out of the local applications and to delegate authorization decisions to external parties; in particular, the authorization request is delegated back to the authentication partner.
Based on an analysis of the state of the art, this paper presents a framework that allows the distributed authorization of ADFS-authenticated users. The authorization decisions are based on the user’s identity and groups. A prototypical implementation is developed, consisting of services that delegate authorization requests externally for authenticated users, as well as a service that processes these requests. Additionally, this work shows the integration of these services into the .NET Framework to demonstrate their usability in a modern development environment.
Finally, an outlook on further questions and follow-up work is given.
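As an illustration of the externalized-authorization pattern described above (not of the ADFS/.NET framework itself), a relying application can reduce its local logic to a single call that forwards the authenticated identity, its group claims and the requested action to an external decision service. The endpoint, payload fields and permit/deny contract below are assumptions for illustration only.

```python
# Sketch of delegating an authorization decision to an external service.
# The endpoint and message format are hypothetical; the thesis' framework
# targets ADFS-authenticated users and is implemented on .NET.
import json
from urllib import request

AUTHZ_ENDPOINT = "https://authz.partner.example/decide"  # hypothetical decision service

def is_authorized(user: str, groups: list[str], resource: str, action: str) -> bool:
    payload = json.dumps({
        "user": user,        # authenticated identity (e.g. from the SSO token)
        "groups": groups,    # group claims issued by the authentication partner
        "resource": resource,
        "action": action,
    }).encode("utf-8")
    req = request.Request(AUTHZ_ENDPOINT, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp).get("decision") == "permit"

# Usage inside the relying application:
# if is_authorized("alice@example.org", ["Sales"], "report/42", "read"): ...
```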
|
24 |
Nástroj pro správu Active Directory / Active Directory Management Dashboard
Radimk, Samuel January 2016 (has links)
This thesis is focused on the main concepts of Active Directory and the creation of an application allowing basic management tasks. It introduces the logical as well as the physical components and provides an overview of existing servers that use the services of Active Directory. The functionality of existing management applications is discussed and the desired properties of management applications are identified. On these grounds, a new application concept is introduced and the benefits of the new application over the existing ones are shown. According to the concept, a new application is developed supporting the management of users and groups and implementing additional features such as profile photo editing and the definition of a customized object-creation process. The application is also tested on several levels, and possibilities for future improvement are given.
|
25 |
Správa podnikových datových sítí / Enterprise network management
Vaclík, Michal January 2017 (has links)
This master’s thesis discusses the design and implementation of a network infrastructure for a computer laboratory in the Department of Communications. The thesis focuses on VLAN definitions and the deployment of server virtualization, including a network monitoring station.
|
26 |
Detecting Lateral Movement in Microsoft Active Directory Log Files : A supervised machine learning approach
Uppströmer, Viktor, Råberg, Henning January 2019 (has links)
Cyberattacker utgör ett stort hot för dagens företag och organisationer, med en genomsnittlig kostnad för ett intrång på ca 3,86 miljoner USD. För att minimera kostnaden av ett intrång är det viktigt att detektera intrånget i ett så tidigt stadium som möjligt. Avancerande långvariga hot (APT) är en sofistikerad cyberattack som har en lång närvaro i offrets nätverk. Efter attackerarens första intrång kommer fokuset av attacken skifta till att få kontroll över så många enheter som möjligt på nätverket. Detta steg kallas för lateral rörelse och är ett av de mest kritiska stegen i en APT. Syftet med denna uppsats är att undersöka hur och hur väl lateral rörelse kan upptäckas med hjälp av en maskininlärningsmetod. I undersökningen jämförs och utvärderas fem maskininlärningsalgoritmer med upprepad korsvalidering följt av statistisk testning för att bestämma vilken av algoritmerna som är bäst. Undersökningen konkluderar även vilka attributer i det undersökta datasetet som är väsentliga för att detektera laterala rörelser. Datasetet kommer från en Active Directory domänkontrollant där datasetets attributer är skapade av korrelerade loggar med hjälp av datornamn, IP-adress och användarnamn. Datasetet består av en syntetisk, samt, en verklig del vilket skapar ett semi-syntetiskt dataset som innehåller ett multiklass klassifierings problem. Experimentet konkluderar att alla fem algoritmer klassificerar rätt med en pricksäkerhet (accuracy) på 0.998. Algoritmen RF presterar med den högsta f-measure (0.88) samt recall (0.858), SVM är bäst gällande precision (0.972) och DT har den lägsta inlärningstiden (1237 ms). Baserat på resultaten indikerar undersökningen att algoritmerna RF, SVM och DT presterar bäst i olika scenarier. Till exempel kan SVM användas om en låg mängd falskt positiva larm är viktigt. Om en balanserad prestation av de olika prestandamätningarna är viktigast ska RF användas. Undersökningen konkluderar även att en stor mängd utav de undersökta attributerna av datasetet kan bortses i framtida experiment, då det inte påverkade prestandan på någon av algoritmerna. / Cyber attacks pose a major threat for companies and organisations worldwide. With the cost of a data breach reaching $3.86 million on average, the demand is high for a rapid solution to detect cyber attacks as early as possible. Advanced persistent threats (APT) are sophisticated cyber attacks which have long persistence inside the network. During an APT, the attacker will spread its foothold over the network. This stage, which is one of the most critical steps in an APT, is called lateral movement. The purpose of the thesis is to investigate lateral movement detection with a machine learning approach. Five machine learning algorithms are compared using repeated cross-validation followed by statistical testing to determine the best performing algorithm and feature importance. Features used for learning the classifiers are extracted from Active Directory log entries that relate to each other, with a similar workstation, IP, or account name. These features are the basis of a semi-synthetic dataset, which consists of a multiclass classification problem. The experiment concludes that all five algorithms perform with an accuracy of 0.998. RF displays the highest f1-score (0.88) and recall (0.858), SVM performs the best with the performance metric precision (0.972), and DT has the lowest computational cost (1237 ms). Based on these results, the thesis concludes that the algorithms RF, SVM, and DT perform best in different scenarios.
For instance, SVM should be used if a low number of false positives is favoured. If a generally balanced performance across multiple metrics is preferred, then RF will perform best. The results also show that a significant number of the examined features can be disregarded in future experiments, as they do not impact the performance of any of the classifiers.
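A sketch of the evaluation protocol described in this abstract, repeated stratified cross-validation over several classifiers on a shared feature matrix, is shown below. The synthetic data, the subset of three algorithms and the chosen metric are placeholders; the study's Active Directory-derived features and its statistical testing step are not reproduced here.

```python
# Compare classifiers with repeated stratified cross-validation (placeholder data;
# the thesis uses features correlated from Active Directory logs and also applies
# statistical tests to the per-fold scores, which this sketch omits).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((600, 12))        # placeholder feature matrix
y = rng.integers(0, 3, 600)      # placeholder multiclass labels

models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf", random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
}
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="f1_macro")
    print(f"{name}: f1_macro = {scores.mean():.3f} +/- {scores.std():.3f}")
```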
|
27 |
Improving efficiency of standardized workplace processes through automatization utilizing scripts : Using PowerShell to optimize onboarding / Förbättring av standardiserade processer genom automatisering med hjälp av scripts : Onboarding optimisering med hjälp av PowerShell
Krtalic, Dragan January 2022 (has links)
As companies continue to rely more and more on information technology (IT) infrastructure, automation of most processes becomes increasingly viable as a way of improving efficiency and cutting costs. One such application area is staff and user management. This becomes increasingly important as a company grows and hires more people, as manually performing these management tasks becomes ever more time-consuming and repetitive. Moreover, these management tasks become very costly for the company in terms of cumulative man-hours and decrease the workplace enjoyment of the staff that handles this processing. Many companies rely on Active Directory (AD) as a way of managing their staff and users. With the help of PowerShell, onboarding and offboarding of users can be entirely automated, as will be described in this thesis. This thesis describes the design and analysis of the entire onboarding process for a company. This description covers the connection between the human resources (HR) and information technology (IT) systems, as well as the script that takes data from the HR system and uses it to create users in AD and assign basic access rights to the user. In addition, the scripts also handle offboarding, which involves disabling users, removing their access rights, and eventually deleting them from the AD. As the host company already had strict requirements for the onboarding process, there was little room for researching alternative models and designs for the system; therefore, this thesis focuses on the design and execution of the script rather than the design and analysis of the onboarding process. The results are analyzed both quantitatively and qualitatively in terms of accuracy and time saved in comparison with the manual execution of the same tasks. The conclusion is that using automation saves on average 585.155 seconds per new user. If a single person were to do 10 onboardings per day, this amounts to 40 hours per month, saving one entire work week each month. This automation can save money and resources by freeing this employee to focus on other projects and tasks. / Automatisering i de flesta verksamheter blir alltmer genomförbart som en strategi för att öka effektiviteten och minska kostnaderna eftersom företag fortsätter att förlita sig mer och mer på informationsteknologi (IT) infrastruktur. Administration av användare och anställda är ett sådant applikationsområde. Detta är viktigt när ett företag expanderar och anställer fler anställda eftersom att utföra dessa förvaltningsuppgifter manuellt kräver oerhört mycket tid och upprepning. Därtill ökar det ledningsansvaret, företagets totala arbetskostnader samt sänker personalens tillfredsställelse med sina jobb. Active Directory (AD) är ett populärt verktyg som används av många företag för att hantera sina anställda och användare. PowerShell kan användas för att helt automatisera användares onboarding och offboarding, vilket denna avhandling kommer att förklara. Den fullständiga introduktionsproceduren för ett företag beskrivs i denna avhandling, tillsammans med dess design och analys. Förhållandet mellan mänskliga resurser (HR) och informationsteknologi (IT) täcks i den här beskrivningen, liksom skriptet som exploaterar information från HR-systemet för att skapa användare i AD och ge användaren minimala åtkomstbehörigheter. Skripten hanterar också offboarding, vilket innebär att blockera användare, ta bort deras åtkomstprivilegier och slutligen ta bort dem från AD:t.
I denna avhandling fokuserar vi på designen och utförandet av skriptet snarare än designen och analysen av onboardingprocessen av den orsaken att värdorganisationen redan hade specifika kriterier för onboardingprocessen, vilket lämnar begränsad flexibilitet för andra modeller och konstruktioner. Jämfört med manuellt slutförande av identiska aktiviteter utvärderas resultaten både statistiskt och kvalitativt i termer av noggrannhet och tidsbesparing. Studiens resultat visar att varje ny användare som använder automation sparar i genomsnitt 585.155 sekunder. En arbetsvecka skulle sparas varje månad om en enskild person genomförde 10 onboarding per dag, eller 40 timmar per månad. Genom att låta medarbetaren koncentrera sig på andra uppgifter och projekt kan denna automatisering hjälpa till att spara pengar och resurser.
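The thesis implements the onboarding and offboarding scripts in PowerShell against the ActiveDirectory module. Purely as an illustration of the same HR-record-to-AD-account flow, the sketch below uses Python with the ldap3 library; the domain controller address, OU structure, attribute choices and retention policy are assumptions, not the company's actual setup.

```python
# Illustrative onboarding/offboarding flow against AD over LDAP (ldap3).
# Server, OUs and attributes are hypothetical; the thesis' scripts use PowerShell.
from ldap3 import Server, Connection, MODIFY_ADD, MODIFY_REPLACE

server = Server("ldaps://dc01.example.local")
conn = Connection(server, user="EXAMPLE\\svc-onboarding", password="***", auto_bind=True)

def onboard(hr_record: dict) -> str:
    """Create the user from an HR record and grant basic access via a default group."""
    dn = f"CN={hr_record['first']} {hr_record['last']},OU=Staff,DC=example,DC=local"
    conn.add(dn, ["top", "person", "organizationalPerson", "user"], {
        "sAMAccountName": hr_record["username"],
        "givenName": hr_record["first"],
        "sn": hr_record["last"],
        "mail": hr_record["email"],
    })
    conn.modify("CN=AllStaff,OU=Groups,DC=example,DC=local",
                {"member": [(MODIFY_ADD, [dn])]})   # basic access rights
    return dn

def offboard(user_dn: str) -> None:
    """Disable the account first; deletion happens later in the retention window."""
    # 514 = NORMAL_ACCOUNT (512) + ACCOUNTDISABLE (2) in userAccountControl
    conn.modify(user_dn, {"userAccountControl": [(MODIFY_REPLACE, [514])]})
```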
|
28 |
Exploring knowledge bases for engineering a user interests hierarchy for social network applications
Haridas, Mandar January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / Gurdip Singh / In recent years, social networks have become an integral part of our lives. Their growth has resulted in opportunities for interesting data mining problems, such as interest or friendship recommendations. A global ontology over the interests specified by the users of a social network is essential for accurate recommendations. The focus of this work is on engineering such an interest ontology. In particular, given that the resulting ontology is meant to be used for data mining applications to social network problems, we explore only hierarchical ontologies. We propose, evaluate and compare three approaches to engineer an interest hierarchy. The proposed approaches make use of two popular knowledge bases, Wikipedia and Directory Mozilla, to extract interest definitions and/or relationships between interests. More precisely, the first approach uses Wikipedia to find interest definitions, the latent semantic analysis technique to measure the similarity between interests based on their definitions, and an agglomerative clustering algorithm to group similar interests into higher-level concepts. The second approach uses the Wikipedia Category Graph to extract relationships between interests. Similarly, the third approach uses Directory Mozilla to extract relationships between interests. Our results indicate that the third approach, although the simplest, is the most effective for building an ontology over user interests. We use the ontology produced by the third approach to construct interest-based features. These features are further used to learn classifiers for the friendship prediction task. The results show the usefulness of the ontology compared with the results obtained in the absence of the ontology.
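A compact sketch of the first approach, definitions vectorised, reduced with latent semantic analysis, then grouped bottom-up, is given below. The toy interests and definitions are placeholders; the work draws its definitions from Wikipedia and tunes the clustering itself.

```python
# Sketch of approach one: LSA over interest definitions + agglomerative clustering.
# The four toy definitions stand in for text fetched from Wikipedia.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import AgglomerativeClustering

interests = ["rock climbing", "bouldering", "jazz", "blues guitar"]
definitions = [
    "Rock climbing is a sport in which participants climb natural rock formations.",
    "Bouldering is a form of climbing performed on small rock formations without ropes.",
    "Jazz is a music genre that originated in African-American communities.",
    "Blues guitar is a style of guitar playing rooted in the blues music genre.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(definitions)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)  # latent semantic space
labels = AgglomerativeClustering(n_clusters=2).fit_predict(lsa)          # bottom-up grouping

for interest, label in sorted(zip(interests, labels), key=lambda x: x[1]):
    print(f"concept {label}: {interest}")
```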
|
29 |
Scale and Concurrency of Massive File System Directories
Patil, Swapnil 01 May 2013
File systems store data in files and organize these files in directories. Over decades, file systems have evolved to handle increasingly large files: they distribute files across a cluster of machines, they parallelize access to these files, they decouple data access from metadata access, and hence they provide scalable file access for high-performance applications. Sadly, most cluster-wide file systems lack any sophisticated support for large directories. In fact, most cluster file systems continue to use directories that were designed for humans, not for large-scale applications. The former use-case typically involves hundreds of files and infrequent concurrent mutations in each directory, while the latter use-case consists of tens of thousands of concurrent threads that simultaneously create large numbers of small files in a single directory at very high speeds. As a result, most cluster file systems exhibit a very poor file create rate in a directory, either due to limited scalability from using a single centralized directory server or due to reduced concurrency from using a system-wide synchronization mechanism.
This dissertation proposes a directory architecture called GIGA+ that enables a directory in a cluster file system to store millions of files and sustain hundreds of thousands of concurrent file creations every second. GIGA+ makes two contributions: a concurrent indexing technique to scale out a growing directory on many servers and an efficient layered design to scale up performance. GIGA+ uses a hash-based, incremental partitioning algorithm that enables highly concurrent directory indexing through asynchrony and eventual consistency of the internal indexing state (while providing strong consistency guarantees to the application data). This dissertation analyzes several trade-offs between data migration overhead, load balancing effectiveness, directory scan performance, and entropy of indexing state made by the GIGA+ design, and compares them with policies used in other systems. GIGA+ also demonstrates a modular implementation that separates directory distribution from directory representation. It layers a client-server middleware, which spreads work among many GIGA+ servers, on top of a backend storage system, which manages on-disk directory representation. This dissertation studies how system behavior is tightly dependent on both the indexing scheme and the on-disk implementations, and evaluates how the system performs for different backend configurations including local and shared-disk stores. The GIGA+ prototype delivers highly scalable directory performance (that exceeds the most demanding Petascale-era requirements), provides the traditional UNIX file system interface (that can run applications without any modifications) and offers new functionality layered on existing cluster file systems (that lack support for distributed directories).
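The flavour of incremental, hash-based splitting can be conveyed with a much-simplified, single-process sketch: a directory starts as one partition and, when a partition overflows, only that partition splits, redistributing its entries by the next bit of their hash. The threshold, the hash function and the absence of servers, asynchrony and eventual consistency are deliberate simplifications, so this illustrates the indexing idea rather than the GIGA+ implementation.

```python
# Simplified illustration of incremental hash partitioning for a directory.
# Real GIGA+ maps partitions to servers and tolerates stale client mappings;
# none of that is modelled here.
import hashlib

SPLIT_THRESHOLD = 4   # toy value: entries a partition may hold before it splits

def h(name: str) -> int:
    return int(hashlib.md5(name.encode()).hexdigest(), 16)

class Directory:
    def __init__(self):
        # partition key (radix, index): holds names whose low `radix` hash bits == index
        self.partitions = {(0, 0): set()}

    def _partition_for(self, name: str):
        for radix, index in self.partitions:
            if h(name) & ((1 << radix) - 1) == index:
                return (radix, index)
        raise KeyError(name)   # unreachable: partitions always cover the hash space

    def create(self, name: str) -> None:
        key = self._partition_for(name)
        self.partitions[key].add(name)
        if len(self.partitions[key]) > SPLIT_THRESHOLD:
            self._split(key)

    def _split(self, key) -> None:
        radix, index = key
        old = self.partitions.pop(key)
        self.partitions[(radix + 1, index)] = set()
        self.partitions[(radix + 1, index | (1 << radix))] = set()
        for name in old:   # redistribute by the next hash bit
            self.partitions[self._partition_for(name)].add(name)
        # (a fuller implementation would re-check the threshold after redistribution)

d = Directory()
for i in range(40):
    d.create(f"file{i:03d}")
print({key: len(entries) for key, entries in sorted(d.partitions.items())})
```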
|
30 |
Investigating and Implementing a DNS Administration System
Brännström, Anders, Nilsson, Rickard January 2007 (has links)
NinetechGruppen AB is an IT service providing company with about 30 employees, primarily based in Karlstad, Sweden. The company began to have problems with their DNS administration because the number of administrated domains had grown too large. A single employee was responsible for all the administration, and text editors were used for modifying the DNS configuration files directly on the name servers. This was an error-prone process which also easily led to inconsistencies between the documentation and the real world.

NinetechGruppen AB decided to solve the administrative problems by incorporating a DNS administration system, either by using an existing product or by developing a new system internally. This thesis describes the process of simplifying the DNS administration procedures of NinetechGruppen AB.

Initially, an investigation was conducted in which existing DNS administration tools were sought and evaluated against the requirements the company had on the new system.

The system was to have a web administration interface, developed in ASP.NET 2.0 with C# as the programming language. The administration interface had to run on Windows, use SQL Server 2005 as the backend database server, and base access control on Active Directory. Further, the system had to be able to integrate customer handling with the domain administration, and any changes to the system information had to follow the Information Technology Infrastructure Library change management process.

The name servers ran the popular name server software BIND on two different Linux distributions: Red Hat Linux 9 and SUSE Linux 10.0.

The investigation concluded that no existing system satisfied the requirements; hence a new system was to be developed, streamlined for use at NinetechGruppen AB. A requirement specification and a functional description were created and used as the basis for the development. The finalized system satisfies all necessary requirements to some extent, and most of them are fully satisfied.
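One small but central piece of such a system is turning database-held records back into name-server configuration instead of hand-editing zone files. The sketch below renders BIND-style zone records from structured rows; the record layout and example zone are assumptions for illustration, not NinetechGruppen's schema, and the actual system generates its configuration from SQL Server.

```python
# Render BIND zone-file lines from structured records (illustrative layout only).
RECORDS = [
    {"name": "www",  "ttl": 3600, "type": "A",     "value": "192.0.2.10"},
    {"name": "mail", "ttl": 3600, "type": "A",     "value": "192.0.2.25"},
    {"name": "ftp",  "ttl": 3600, "type": "CNAME", "value": "www"},
]

def render_zone(origin: str, records: list[dict]) -> str:
    lines = [f"$ORIGIN {origin}."]
    for r in records:
        lines.append(f"{r['name']:<12}{r['ttl']:<8}IN  {r['type']:<8}{r['value']}")
    return "\n".join(lines)

print(render_zone("example.com", RECORDS))
```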
|