821 |
NNTP server jako služba pro systémy založené na technologii Windows-NT / NNTP Server as a Windows Network Service Loupanec, Josef January 2007 (has links)
This work covers the requirements specification and analysis, design, and implementation of an Internet news server. The server manages newsgroups and the associated news articles, and makes the articles available over the NNTP protocol and over HTTP (through a web interface). The server supports user authentication and an optional proxy mode, in which all NNTP requests are forwarded to another remote NNTP server. A mechanism that downloads news from remote NNTP servers and performs the distribution function is included as well. The application is designed to run on MS Windows NT (and later versions) as an NT service and is configurable through a graphical user interface. The work also includes the theoretical background needed to accomplish the above-mentioned requirements.
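To make the protocol concrete, here is a minimal sketch (an illustration only, not the thesis code) of the NNTP conversation such a server answers; the host name and newsgroup are assumed placeholders:

    import socket

    # Minimal NNTP session (RFC 3977 / RFC 977): select a group, fetch an article.
    HOST, PORT = "news.example.org", 119  # hypothetical server address

    def send(sock, line):
        sock.sendall((line + "\r\n").encode("utf-8"))

    with socket.create_connection((HOST, PORT), timeout=10) as s:
        reader = s.makefile("r", encoding="utf-8", newline="\r\n")
        print(reader.readline().strip())    # server greeting, e.g. "200 ..."
        send(s, "GROUP comp.lang.python")   # select a newsgroup
        print(reader.readline().strip())    # "211 <count> <first> <last> <group>"
        send(s, "ARTICLE")                  # fetch the current article
        if reader.readline().startswith("220"):
            for line in reader:             # multi-line body ends with a lone "."
                if line.strip() == ".":
                    break
                print(line.rstrip())
        send(s, "QUIT")

GROUP, ARTICLE, and QUIT are core commands that any news server, including the one described above, must implement.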
|
822 |
A semi-formal comparison between the Common Object Request Broker Architecture (CORBA) and the Distributed Component Object Model (DCOM) Conradie, Pieter Wynand 06 1900 (has links)
The way in which application systems and software are built has changed dramatically over the past few
years. This is mainly due to advances in hardware technology, programming languages, as well as the
requirement to build better software application systems in less time. The importance of mondial (worldwide)
communication between systems is also growing exponentially. People are using network-based
applications daily, communicating not only locally, but also globally. The Internet, the global network,
therefore plays a significant role in the development of new software. Distributed object computing is one
of the computing paradigms that promise to meet the need to develop client/server application systems,
communicating over heterogeneous environments.
This study, of limited scope, concentrates on one crucial element without which distributed object computing
cannot be implemented. This element is the communication software, also called middleware, which allows
objects situated on different hardware platforms to communicate over a network. Two of the most important
middleware standards for distributed object computing today are the Common Object Request Broker
Architecture (CORBA) from the Object Management Group, and the Distributed Component Object
Model (DCOM) from Microsoft Corporation. Each of these standards is implemented in commercially
available products, allowing distributed objects to communicate over heterogeneous networks.
In studying each of the middleware standards, a formal way of comparing CORBA and DCOM is presented,
namely meta-modelling. For each of these two distributed object infrastructures (middleware), meta-models
are constructed. Based on this uniform and unbiased approach, a comparison of the two distributed object
infrastructures is then performed. The results are given as a set of tables in which the differences and
similarities of each distributed object infrastructure are exhibited. By adopting this approach, errors caused
by misunderstanding or misinterpretation are minimised. Consequently, an accurate and unbiased
comparison between CORBA and DCOM is made possible, which constitutes the main aim of this
dissertation. / Computing / M. Sc. (Computer Science)
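As a purely illustrative sketch of what meta-modelling for comparison means (these are not the dissertation's meta-models), one can picture a small model whose instances record how each standard realizes a shared middleware concept, after which the comparison falls out as a table:

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        name: str    # abstract middleware concept
        corba: str   # how CORBA realizes it
        dcom: str    # how DCOM realizes it

    @dataclass
    class MetaModel:
        concepts: list = field(default_factory=list)

        def compare(self):
            # Emit the comparison row by row, mirroring the dissertation's tables.
            for c in self.concepts:
                print(f"{c.name:22} | CORBA: {c.corba:20} | DCOM: {c.dcom}")

    mm = MetaModel([
        Concept("interface definition", "OMG IDL", "MIDL"),
        Concept("object reference", "IOR", "interface pointer"),
        Concept("wire protocol", "GIOP/IIOP", "ORPC (DCE RPC based)"),
        Concept("naming/lookup", "Naming Service", "registry / monikers"),
    ])
    mm.compare()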
|
823 |
Requirements analysis and architectural design of a web-based integrated weapons of mass destruction toolset Jones, Richard B. 06 1900 (has links)
Approved for public release, distribution is unlimited / In 1991, shortly after the combat portion of the Gulf War, key military and government leaders identified an urgent requirement for an accurate on-site tool for analysis of chemical, biological, and nuclear hazards. The Defense Nuclear Agency (now the Defense Threat Reduction Agency, DTRA) was tasked with developing a software tool to address the requirement. Drawing on extensive technical background, DTRA developed the Hazard Prediction and Assessment Capability (HPAC). For over a decade HPAC addressed users' requirements through on-site training, exercise support, and operational reachback. During this period the HPAC code was iteratively improved, but the basic architecture remained constant until 2002, when the core requirements of the users started to evolve toward more net-centric applications and DTRA began to investigate modifying its core capability into a new design architecture. This thesis documents the requirements, analysis, and architectural design of the newly prototyped architecture, the Integrated Weapons of Mass Destruction Toolset (IWMDT). The primary goal of the IWMDT effort is to provide accessible, visible, and shared data through shared information resources and templated assessments of CBRNE scenarios. The effort integrates a collection of computational capabilities as server components accessible through a web interface. Using the results from this thesis, DTRA developed a prototype of the IWMDT software. Lessons learned from the prototype and suggestions for follow-on work are presented in the thesis. / Major, United States Army
|
824 |
Správa sítí na bázi protokolu IP / Management of data networks based on IP protocol Patala, Petr January 2012 (has links)
The objective of this master's thesis is the monitoring and management of computer networks via the SNMP protocol, and its practical application. The main part describes work with the SNMPc program in an experimental network: deploying its components into the network and configuring SNMP agents on routers, a switch, and an end station. The thesis includes the results of traffic testing, link-disconnection scenarios, and the effects of traffic load on QoS parameters, as well as long-term statistics, baselines, and alarms. It also presents the parameters obtained via the SNMP protocol from network nodes and the end station.
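For illustration (the thesis itself used the SNMPc program), a single SNMP GET of the kind behind such measurements can be issued with the pysnmp library; the 4.x-style synchronous API is assumed here, as are the agent address and community string:

    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    # Query sysUpTime from an SNMP agent such as a router configured as above.
    errorIndication, errorStatus, errorIndex, varBinds = next(
        getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),      # SNMPv2c community
            UdpTransportTarget(("192.0.2.1", 161)),  # placeholder agent address
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0)),
        )
    )

    if errorIndication:
        print(errorIndication)
    elif errorStatus:
        print(errorStatus.prettyPrint(), "at index", errorIndex)
    else:
        for varBind in varBinds:
            print(" = ".join(x.prettyPrint() for x in varBind))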
|
825 |
Master Data Management a jeho využití v praxi / Master Data Management and its usage in practice Kukačka, Pavel January 2011 (has links)
This thesis deals with Master Data Management (MDM), specifically its implementation. The main objectives are to analyze and capture general approaches to MDM implementation, including best practices; to describe and evaluate an MDM project implemented with Microsoft SQL Server 2008 R2 Master Data Services (MDS) in the Czech environment; and, building on this theoretical background, the experience from the implemented project, and the available technical literature, to create a general procedure for implementing the MDS tool. To achieve these objectives the following methods are used: exploration of information resources (printed, electronic, and personal meetings with consultants of Clever Decision), cooperation on a project realized by Clever Decision, and analysis of the Microsoft SQL Server 2008 R2 Master Data Services tool. The contributions of this work largely coincide with its goals; the main contribution is the general procedure for implementing the MDS tool. The thesis is divided into two parts. The first (theoretical) part deals with basic concepts (including the delimitation of MDM against other systems), architecture, implementation styles, market trends, and best practices. The second (practical) part first describes the implemented MDS project and then presents the general procedure for implementing the MDS tool.
|
826 |
Podpora pro práci s XML u databázového serveru Microsoft SQL Server 2008 / Support for XML in Microsoft SQL Server 2008 Bábíčková, Radka Unknown Date (has links)
This thesis is focused on XML and related technologies, in particular XML support in databases. An overview of the XML support provided by various database products and systems is presented. The support in MS SQL Server 2008 is discussed in more detail, from the mapping of relational data to XML and vice versa, through the XML data type, to querying it with XQuery. Some indexing techniques are also briefly presented. Finally, the support in MS SQL Server 2008 is demonstrated by means of a sample application, which verifies the theoretical knowledge in practice.
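As a hedged sketch of how such XML support looks from application code (the table, column, and connection string are assumptions, not taken from the thesis), the xml data type's query(), value(), and exist() methods can be exercised from Python through pyodbc:

    import pyodbc

    # Connect to a SQL Server instance; driver name and database are placeholders.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=Demo;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    # .value() extracts a scalar, .query() returns an XML fragment,
    # .exist() filters rows by an XQuery predicate.
    cur.execute("""
        SELECT OrderInfo.value('(/order/@id)[1]', 'int') AS order_id,
               OrderInfo.query('/order/item')            AS items_xml
        FROM   Orders
        WHERE  OrderInfo.exist('/order/item[@sku="A-42"]') = 1
    """)
    for order_id, items_xml in cur.fetchall():
        print(order_id, items_xml)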
|
827 |
Hantering av nätverkscache i DNS / Managing the network cache in DNS Lindqvist, Hans January 2019 (has links)
The Domain Name System, DNS, is a fundamental part of the usability of the Internet, but its caching function is challenged by the increase in address size, the number of addresses, and automation. Meanwhile, the memory capacity of certain devices at the Internet's edge towards the Internet of Things is limited. This study has taken a closer look at present-day needs of DNS resolution and considered how DNS is affected by IPv6 address propagation, mobile devices, content delivery networks, and web browser functions. The investigation has, in two freely available DNS resolver implementations, searched for the optimal cache memory management in constrained devices on, or at the border of, the Internet of Things. Thanks to open-source access to the programs, Unbound and PowerDNS Recursor, the structure of each has been interpreted in order to estimate and compare memory requirements. Afterwards, a laboratory simulation was made using fictitious DNS data with real-world characteristics to measure the actual memory consumption of the server process. The simulation avoided individually adapting program settings, involving DNSSEC data, and imposing memory constraints on the test environment. The source code analysis estimated that Unbound handled A+AAAA records more optimally, while PowerDNS Recursor was more efficient for PTR records. Taking both record types as a whole, the measurements in the simulation showed that Unbound was able to store DNS data more densely than PowerDNS Recursor. The result has shown that the standardized wire format for DNS data used in Unbound is less optimal than the object-based format of PowerDNS Recursor. On the other hand, the study showed that Unbound, written procedurally in the C language, was able to manage the cache more sparingly than the object-oriented PowerDNS Recursor, which is developed in C++.
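The wire-format versus object-representation trade-off at the heart of the comparison can be sketched with the dnspython library (a toy illustration only; the study instrumented Unbound and PowerDNS Recursor themselves):

    import sys
    import dns.message
    import dns.rrset

    # One A record, stored two ways a resolver cache might keep it:
    # as raw wire bytes (Unbound-style) or as parsed objects.
    rrset = dns.rrset.from_text("www.example.com.", 300, "IN", "A", "192.0.2.10")
    query = dns.message.make_query("www.example.com.", "A")
    response = dns.message.make_response(query)
    response.answer.append(rrset)

    print("wire-format size (bytes):", len(response.to_wire()))
    # Shallow size only; referenced rdata objects are excluded, so this is
    # a lower bound on the object representation's real footprint.
    print("shallow object size (bytes):", sys.getsizeof(rrset))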
|
828 |
Web Font Optimization for Mobile Internet Users : A performance study of resource prioritization approaches for optimizing custom fonts on the web Nygren, Maria January 2019 (has links)
According to the HTTP Archive, 75% of websites are using web fonts. Multiple conditions have to be met before modern web browsers like Chrome, Firefox and Safari decide to download the web fonts needed on a page. As a result, web fonts are late-discovered resources that can delay the First Meaningful Paint (FMP). Improving the FMP is relevant for the web industry, particularly for performance-conscious web developers. This paper gives insight into how the resource prioritization approaches HTTP/2 Preload and HTTP/2 Server Push can be used to optimize the delivery of web fonts for first-time visitors. Five font loading strategies that use HTTP/2 Server Push and/or Preload were implemented on replicas of the landing pages from five real-world websites. The font loading strategies were evaluated against each other, and against the non-optimized version of each landing page. All the evaluated font loading strategies in this degree project improved the time it took to deliver the first web font content to the user's screen, resulting in a faster FMP. It was also discovered that HTTP/2 Server Push, on its own, is not a more performance-efficient resource prioritization approach than HTTP/2 Preload when it comes to delivering web font content to the client. Further, HTTP/2 Server Push and HTTP/2 Preload appear to be more efficient when used together in the context of optimizing the delivery of web font content. However, all conclusions in this paper are based on results gathered from testing the font loading strategies in an emulated environment and are yet to be confirmed on actual mobile devices under real network conditions.
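For concreteness, a minimal sketch of the preload hint that these strategies build on, served from Python's standard library (the font path is an assumption; Server Push itself requires an HTTP/2-capable front end such as nginx, since http.server speaks HTTP/1.1):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PreloadHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/":
                body = b"<link rel='stylesheet' href='/style.css'><p>Hello</p>"
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                # The Link header lets the browser start fetching the font as
                # soon as it sees the response headers, before CSS is parsed.
                # crossorigin is required for font preloads, even same-origin.
                self.send_header(
                    "Link", "</fonts/custom.woff2>; rel=preload; as=font; crossorigin"
                )
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), PreloadHandler).serve_forever()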
|
829 |
Compliance Issues In Cloud Computing Systems Unknown Date (has links)
Appealing features of cloud services such as elasticity, scalability, universal access, low entry cost, and flexible billing motivate consumers to migrate their core businesses into the cloud. However, there are challenges about security, privacy, and compliance. Building compliant systems is difficult because of the complex nature of regulations and cloud systems. In addition, the lack of complete, precise, vendor-neutral, and platform-independent software architectures makes compliance even harder. We have attempted to make regulations clearer and more precise with patterns and reference architectures (RAs). We have analyzed regulation policies, identified overlaps, and abstracted them as patterns to build compliant RAs. RAs should be complete, precise, abstract, vendor neutral, platform independent, and free of implementation details; however, their levels of detail and abstraction are still debatable and there is no commonly accepted definition of what an RA should contain. Existing approaches to building RAs lack structured templates and systematic procedures. In addition, most approaches do not take full advantage of patterns and best practices that promote architectural quality. We have developed a five-step approach by analyzing features of available approaches and refining and combining them in a new way. We consider an RA to be a big compound pattern that can improve the quality of the concrete architectures derived from it and from which we can derive more specialized RAs for cloud systems. We have built an RA for HIPAA, a compliance RA (CRA), and a specialized compliance and security RA (CSRA) for cloud systems. These RAs take advantage of patterns and best practices that promote software quality. We evaluated the architecture by creating profiles. The proposed approach can be used to build RAs from scratch or to build new RAs by abstracting real RAs for a given context. We have also described an RA itself as a compound pattern by using a modified POSA template. Finally, we have built a concrete deployment and availability architecture derived from the CSRA that can be used as a foundation to build compliant systems in the cloud. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2015. / FAU Electronic Theses and Dissertations Collection
|
830 |
Avaliação de desempenho e dimensionamento de redes de teleinformática centralizada para tráfego de dados corporativos / Performance evaluation and dimensioning of centralized teleinformatics networks for corporate data traffic Garcez, Osvaldo Luis 28 June 2007 (has links)
This work presents a study of Server-Based Computing, an Information Technology (IT) architecture in which applications are delivered, managed, supported, and executed entirely on the server, and of Distributed Computing (Distributed Systems), which refers to parallel, decentralized computation carried out by two or more computers connected through a network to complete a common task. Both models are examined with the aim of providing information and simulations to serve as a theoretical basis for decision-making. The two structural models have advantages and disadvantages depending on the type of transactions requested by users; through a detailed survey of those transactions, their impact on network traffic, and the processes inherent to the activity, the alternatives are weighed and tabulated, yielding conclusive data for choosing the ideal model in view of each company's transaction profile.
|