161

Server a klient pro správu hostingových služeb s využitím frameworku Qt4 a Linuxu / Server and client management hosting services using Qt4 framework and Linux

Matas, Jakub January 2011 (has links)
This thesis deals with the design and implementation of a client/server application for the administration of hosting services. Other solutions for hosting services administration are listed as well and contrasted and compared with the assigned solution. A description of the particular hosting services and their settings for the Ubuntu Linux distribution is provided. A communication protocol and a data store for saving all the client accounts and servers were designed. Basic principles of working with the C++ framework Qt and its usage for the implementation of both the server and the client application are demonstrated. The basic settings that enable the server application to be launched on the server as a service are mentioned as well. The last part describes working with the client application and the administration of client accounts.
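The abstract mentions a custom communication protocol between server and client but does not specify its wire format. As a hedged illustration only — the framing scheme, field names, and the `create_account` command are assumptions, not the thesis's actual protocol — a minimal length-prefixed JSON framing might look like:

```python
import json
import struct

def encode_message(command: str, payload: dict) -> bytes:
    """Frame a command and its payload as length-prefixed JSON.

    A 4-byte big-endian length header tells the receiver how many
    bytes to read from the socket before decoding.
    """
    body = json.dumps({"command": command, "payload": payload}).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def decode_message(data: bytes) -> tuple[dict, bytes]:
    """Decode one framed message, returning it and any trailing bytes."""
    (length,) = struct.unpack(">I", data[:4])
    body = json.loads(data[4:4 + length].decode("utf-8"))
    return body, data[4 + length:]

# Hypothetical administration command, for illustration only.
msg = encode_message("create_account", {"user": "alice", "quota_mb": 500})
decoded, rest = decode_message(msg)
```

Length-prefixing is a common choice for custom TCP protocols because it makes message boundaries explicit on a byte stream.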
162

Návrh a realizace síťové aplikace pro audit a monitorování počítačů / Design and Implementation of Network Application for Audit and Monitoring of Computers

Krym, David January 2014 (has links)
This diploma thesis deals with the design and implementation of a network application for monitoring computers for a chosen company. The application allows administrators to automate the gathering of hardware and software information. The purpose of the application is also to monitor hardware values, such as processor temperature or free hard-disk space. The design uses a client-server architecture. Three applications were created: a server, a client, and a graphical management console.
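Gathering a value such as free disk space can be sketched with standard-library calls; the threshold-checking helper and the metric names below are illustrative assumptions, not the application's actual design (processor temperature is omitted here because reading it is platform-specific):

```python
import shutil

def disk_free_ratio(path: str = "/") -> float:
    """Return the fraction of free space on the filesystem holding `path`."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def check_thresholds(metrics: dict[str, float], limits: dict[str, float]) -> list[str]:
    """Return the names of metrics that fall below their configured limit."""
    return [name for name, value in metrics.items()
            if name in limits and value < limits[name]]

ratio = disk_free_ratio()
# Hypothetical client-side check before reporting to the server.
alerts = check_thresholds({"disk_free": 0.05, "mem_free": 0.40},
                          {"disk_free": 0.10, "mem_free": 0.20})
```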
163

Ablaufszenarien für Client-Server-Anwendungen mit CORBA 2.0

Falk, Edelmann 12 November 1997 (has links)
The Common Object Request Broker Architecture (CORBA) of the Object Management Group (OMG) offers the chance not only to be a platform for new distributed applications, but also to integrate existing applications and legacy software across vendors and systems. This property sets CORBA apart from other programming platforms and gives it the potential to be a promising basis for future application systems. The goal of this student project is to examine how various interaction styles can be realized in CORBA and to try them out in practical examples. The first part of this work systematically summarizes possible interaction forms drawn from the literature, from the DCE and MPI systems, and from the author's own considerations. This is followed by a detailed treatment of the CORBA architecture and the interaction forms and scenarios it supports. Finally, eight versions of a simple distributed dictionary are presented to illustrate some of the concepts realized in CORBA with a practical example. The CORBA platform used was Orbix-MT 2.0.1 (multi-threaded) from IONA Technologies Ltd. under Solaris 2.x.
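CORBA distinguishes interaction forms such as synchronous, deferred-synchronous, and oneway invocations. As a rough analogy only — this is plain Python with a local function standing in for the remote dictionary object, not CORBA code — the first two styles can be sketched as:

```python
from concurrent.futures import ThreadPoolExecutor

def lookup(word: str, dictionary: dict[str, str]) -> str:
    """Stand-in for a remote dictionary operation (hypothetical example)."""
    return dictionary.get(word, "<unknown>")

dictionary = {"Server": "server", "Wort": "word"}

# Synchronous call: the client blocks until the result arrives.
sync_result = lookup("Wort", dictionary)

# Deferred-synchronous call: the client submits the request,
# may continue working, and collects the result later.
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(lookup, "Server", dictionary)
    deferred_result = future.result()
```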
164

Nachrichtenklassifikation als Komponente in WEBIS

Krellner, Björn 25 September 2006 (has links)
This diploma thesis describes the further development of a prototype for message classification and its integration into the existing web-oriented information system (WEBIS). Classifications performed with the resulting software are presented and compared with previous findings.
165

Distributed Occlusion Culling for Realtime Visualization

Domaratius, Uwe 18 December 2006 (has links)
This thesis describes the development of a distributed occlusion-culling solution for complex generic scenes. Moving these calculations onto a second computer should decrease the load on the actual rendering system and therefore allow higher frame rates. This work includes an introduction to parallel rendering systems and a discussion of suitable culling algorithms. Based on these parts, a client-server system for occlusion culling is developed. The test results of a prototypical implementation form the last part of this thesis.
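A conservative screen-space occlusion test of the kind such a culling server might apply can be sketched as follows; the rectangle representation and depth convention here are assumptions for illustration, not the thesis's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Screen-space bounding rectangle with a nearest depth value."""
    x0: float
    y0: float
    x1: float
    y1: float
    depth: float  # distance to the camera; smaller means closer

def is_occluded(obj: Rect, occluder: Rect) -> bool:
    """Conservative test: the object's rectangle must lie entirely
    inside the occluder's and be farther from the camera."""
    return (occluder.x0 <= obj.x0 and obj.x1 <= occluder.x1 and
            occluder.y0 <= obj.y0 and obj.y1 <= occluder.y1 and
            obj.depth > occluder.depth)

def cull(objects: list[Rect], occluders: list[Rect]) -> list[Rect]:
    """Return only the objects not hidden by any occluder; this set
    would be sent back to the rendering client."""
    return [o for o in objects
            if not any(is_occluded(o, occ) for occ in occluders)]

wall = Rect(0, 0, 10, 10, depth=1.0)
hidden = Rect(2, 2, 8, 8, depth=5.0)     # fully behind the wall
visible = Rect(12, 0, 14, 2, depth=5.0)  # outside the wall's extent
survivors = cull([hidden, visible], [wall])
```

The test is conservative: it never culls a visible object, at the cost of sometimes keeping a hidden one.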
166

Data Processing and Collection in Distributed Systems

Andersson, Sara January 2021 (has links)
Distributed systems can be seen in a variety of applications in use today. Tritech provides several systems that to some extent consist of distributed systems of nodes. These nodes collect data, and the data has to be processed. A problem that often appears when designing these systems is deciding where the data should be processed, i.e., which architecture is the most suitable one for the system. Deciding the architecture for these systems is not simple, especially since the field changes rather quickly. The thesis aims to study which factors affect the choice of architecture in a distributed system and how these factors relate to each other. To analyze which factors affect the choice of architecture, and to what extent, a simulator was implemented. The simulator received information about the factors as input and returned one or several architecture configurations as output. The input factors to the simulator were chosen by performing qualitative interviews. The factors analyzed in the thesis were: security, storage, working memory, size of data, number of nodes, data processing per data set, robust communication, battery consumption, and cost. From the qualitative interviews as well as from the prestudy, five architecture configurations were chosen: thin-client server, thick-client server, three-tier client-server, peer-to-peer, and cloud computing. The simulator was validated against the three given use cases: agriculture, the train industry, and the industrial Internet of Things. The validation consisted of five existing projects from Tritech; the simulator produced correct results for three of the five projects. 
From the simulator results, it could be seen which factors affect the choice of architecture more than others and which are hard to provide in the same architecture because they conflict. The conflicting factors were security together with working memory and robust communication; working memory together with battery consumption also proved conflicting and hard to provide within the same architecture. Therefore, according to the simulator, the factors that most affect the choice of architecture were working memory, battery consumption, security, and robust communication. Using the simulator results, a decision matrix was designed to facilitate the choice of architecture. The evaluation of the decision matrix consisted of four projects from Tritech covering the three given use cases: agriculture, the train industry, and the industrial Internet of Things. The evaluation showed that of the two architectures that received the most points, one was the architecture used in the validated project.
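A weighted decision matrix of the kind described can be sketched as follows. Only the factor and architecture names come from the abstract; the weights and ratings below are made-up illustrative values, not the thesis's measured data:

```python
def score_architectures(weights: dict[str, float],
                        ratings: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Score each architecture as the weighted sum of its factor ratings,
    returning candidates sorted best-first."""
    scores = {arch: sum(weights.get(factor, 0.0) * rating
                        for factor, rating in factors.items())
              for arch, factors in ratings.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical project: security matters most, then battery consumption.
weights = {"security": 3, "battery_consumption": 2, "working_memory": 1}
# Hypothetical 1-3 ratings of how well each architecture provides a factor.
ratings = {
    "thin-client server": {"security": 2, "battery_consumption": 3, "working_memory": 3},
    "peer-to-peer":       {"security": 1, "battery_consumption": 1, "working_memory": 2},
    "cloud computing":    {"security": 3, "battery_consumption": 2, "working_memory": 3},
}
ranked = score_architectures(weights, ratings)
```

The matrix makes the trade-off between conflicting factors explicit: changing the weights reorders the recommended architectures.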
167

A Client-Server Architecture for Collection of Game-based Learning Data

Jones, James R. 27 January 2015 (has links)
Advances in information technology are driving massive improvements in the education industry. The ubiquity of mobile devices has triggered a shift in the delivery of educational content. More lessons in a wide range of subjects are being disseminated by allowing students to access digital materials through mobile devices. One of the key materials is digital educational games. These games merge education with digital games to maximize engagement while somewhat obfuscating the learning process. Their effectiveness is generally measured by assessments, either after or during gameplay, in the form of quizzes, data dumps, and/or manual analyses. Valuable gameplay information is lost during the student's play sessions. This gameplay data shows educators and researchers the specific actions students perform in order to arrive at a solution, not just the correctness of the solution. This problem illustrates the need for a tool that enables educators and players to quickly analyze gameplay data in conjunction with correctness, in an unobtrusive manner, while the student is playing the game. This thesis describes a client-server software architecture that enables the collection of game-based data during gameplay. We created a collection of web services that enable games to transmit game data for analysis. Additionally, the web application provides players with a portal to log in and view various visualizations of the captured data. Lastly, we created a game called "Taffy Town", a mathematics-based game that requires the player to manipulate taffy pieces in order to solve various fractions. Taffy Town transmits students' taffy transformations along with correctness to the web application. Students are able to view several dynamically created visualizations of the data sent by Taffy Town. Researchers are able to log in to the web application and see the same visualizations, aggregated across all Taffy Town players. 
This end-to-end mapping of problems, actions, and results will enable researchers, pedagogists, and teachers to improve the effectiveness of educational games. / Master of Science
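One way to represent a single gameplay action on the wire is a JSON document posted to a collection web service; the field names and the `split_taffy` action below are illustrative assumptions, not the actual Taffy Town schema:

```python
import json
import time

def make_event(player_id: str, action: str, state: dict, correct: bool) -> str:
    """Serialize one gameplay action as a JSON event, ready to send to a
    collection endpoint (hypothetical schema for illustration)."""
    return json.dumps({
        "player_id": player_id,
        "action": action,
        "state": state,
        "correct": correct,
        "timestamp": time.time(),
    })

event = make_event("student42", "split_taffy",
                   {"pieces": 2, "fraction": "1/2"}, correct=True)
```

Capturing each action with its correctness flag is what lets the server reconstruct the path a student took to a solution, not just the final answer.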
168

A semi-formal comparison between the Common Object Request Broker Architecture (CORBA) and the Distributed Component Object Model (DCOM)

Conradie, Pieter Wynand 06 1900 (has links)
The way in which application systems and software are built has changed dramatically over the past few years. This is mainly due to advances in hardware technology, programming languages, as well as the requirement to build better software application systems in less time. The importance of mondial (worldwide) communication between systems is also growing exponentially. People are using network-based applications daily, communicating not only locally, but also globally. The Internet, the global network, therefore plays a significant role in the development of new software. Distributed object computing is one of the computing paradigms that promise to solve the need to develop client/server application systems communicating over heterogeneous environments. This study, of limited scope, concentrates on one crucial element without which distributed object computing cannot be implemented. This element is the communication software, also called middleware, which allows objects situated on different hardware platforms to communicate over a network. Two of the most important middleware standards for distributed object computing today are the Common Object Request Broker Architecture (CORBA) from the Object Management Group, and the Distributed Component Object Model (DCOM) from Microsoft Corporation. Each of these standards is implemented in commercially available products, allowing distributed objects to communicate over heterogeneous networks. In studying each of the middleware standards, a formal way of comparing CORBA and DCOM is presented, namely meta-modelling. For each of these two distributed object infrastructures (middleware), meta-models are constructed. Based on this uniform and unbiased approach, a comparison of the two distributed object infrastructures is then performed. The results are given as a set of tables in which the differences and similarities of each distributed object infrastructure are exhibited. 
By adopting this approach, errors caused by misunderstanding or misinterpretation are minimised. Consequently, an accurate and unbiased comparison between CORBA and DCOM is made possible, which constitutes the main aim of this dissertation. / Computing / M. Sc. (Computer Science)
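The comparison yields tables of differences and similarities between the two meta-models. As a toy sketch of that tabulation — the feature sets below are illustrative, not the dissertation's actual meta-model elements:

```python
def compare_models(a: set[str], b: set[str]) -> dict[str, set[str]]:
    """Tabulate shared and distinct features of two middleware meta-models."""
    return {"common": a & b, "only_a": a - b, "only_b": b - a}

# Hypothetical feature sets for illustration only.
corba = {"object references", "interface definition language", "dynamic invocation"}
dcom = {"object references", "interface definition language", "reference counting"}
diff = compare_models(corba, dcom)
```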
169

Requirements analysis and architectural design of a web-based integrated weapons of mass destruction toolset

Jones, Richard B. 06 1900 (has links)
Approved for public release, distribution is unlimited / In 1991, shortly after the combat portion of the Gulf War, key military and government leaders identified an urgent requirement for an accurate on-site tool for analysis of chemical, biological, and nuclear hazards. The Defense Nuclear Agency (now the Defense Threat Reduction Agency, DTRA) was tasked with the responsibility to develop a software tool to address the requirement. Based on extensive technical background, DTRA developed the Hazard Prediction Assessment Capability (HPAC). For over a decade HPAC addressed the users' requirements through on-site training, exercise support, and operational reachback. During this period the HPAC code was iteratively improved, but the basic architecture remained constant until 2002. In 2002, when the core requirements of the users started to evolve into more net-centric applications, DTRA began to investigate the potential of modifying their core capability into a new design architecture. This thesis documents the requirements, analysis, and architectural design of the newly prototyped architecture, the Integrated Weapons of Mass Destruction Toolset (IWMDT). The primary goal of the IWMDT effort is to provide accessible, visible, and shared data through shared information resources and templated assessments of CBRNE scenarios. This effort integrates a collection of computational capabilities as server components accessible through a web interface. Using the results from this thesis, DTRA developed a prototype of the IWMDT software. Lessons learned from the prototype and suggestions for follow-on work are presented in the thesis. / Major, United States Army
170

Compliance Issues In Cloud Computing Systems

Unknown Date (has links)
Appealing features of cloud services such as elasticity, scalability, universal access, low entry cost, and flexible billing motivate consumers to migrate their core businesses into the cloud. However, there are challenges about security, privacy, and compliance. Building compliant systems is difficult because of the complex nature of regulations and cloud systems. In addition, the lack of complete, precise, vendor neutral, and platform independent software architectures makes compliance even harder. We have attempted to make regulations clearer and more precise with patterns and reference architectures (RAs). We have analyzed regulation policies, identified overlaps, and abstracted them as patterns to build compliant RAs. RAs should be complete, precise, abstract, vendor neutral, platform independent, and with no implementation details; however, their levels of detail and abstraction are still debatable and there is no commonly accepted definition about what an RA should contain. Existing approaches to build RAs lack structured templates and systematic procedures. In addition, most approaches do not take full advantage of patterns and best practices that promote architectural quality. We have developed a five-step approach by analyzing features from available approaches but refined and combined them in a new way. We consider an RA as a big compound pattern that can improve the quality of the concrete architectures derived from it and from which we can derive more specialized RAs for cloud systems. We have built an RA for HIPAA, a compliance RA (CRA), and a specialized compliance and security RA (CSRA) for cloud systems. These RAs take advantage of patterns and best practices that promote software quality. We evaluated the architecture by creating profiles. The proposed approach can be used to build RAs from scratch or to build new RAs by abstracting real RAs for a given context. 
We have also described an RA itself as a compound pattern by using a modified POSA template. Finally, we have built a concrete deployment and availability architecture derived from the CSRA that can be used as a foundation to build compliant systems in the cloud. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2015. / FAU Electronic Theses and Dissertations Collection
