  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Využití protokolu TCP v simulačním prostředí OPNET Modeler / Application of TCP in OPNET Modeler simulation environment

Tirinda, Viktor January 2008 (has links)
This diploma thesis describes how an application protocol can be implemented in the OPNET Modeler simulation environment, assuming that the application protocol uses TCP for its communication at the transport layer. The first part of the thesis describes TCP: a connection-oriented, reliable, acknowledged protocol that preserves the order of transmitted data and positively acknowledges data after reception. The second chapter describes the main functions of the OPNET Modeler simulation environment. OPNET is hierarchically divided into four editors, each with a specific role in building a network and configuring its behavior. The two lowest layers of OPNET Modeler, and the components involved in using TCP at the transport layer, are covered in detail. The implemented applications communicate through sockets, which are created and destroyed on request. Communication is controlled by a manager process that maintains the individual connections and redirects data flows to the relevant processes; the manager also launches a process that simulates a single TCP connection. In the practical part, I created two applications, a client and a server, both using TCP at the transport layer. The client initiates the connection by sending a request to the server for data, the server sends back the requested amount of data, and after all data has been transferred the client terminates the connection. The result of the simulation is a set of statistics showing the amount of transferred data, the number of transferred packets, and other parameters typical for TCP.
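The request/response pattern described in the abstract can be illustrated with a minimal TCP client and server. This is a plain-Python sketch of the general idea, not the OPNET Modeler process model; the port number and the request format (the client sends the desired byte count as text) are assumptions made for illustration.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5000  # assumed values for this sketch

def server():
    # Server: accept one connection, read the requested byte count, send that many bytes.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            requested = int(conn.recv(16).decode())   # client asks for N bytes
            conn.sendall(b"x" * requested)            # server returns N bytes of payload

def client(n_bytes):
    # Client: open the connection, request n_bytes, read until done, then close.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(str(n_bytes).encode())
        received = 0
        while received < n_bytes:
            chunk = cli.recv(4096)
            if not chunk:
                break
            received += len(chunk)
        print(f"received {received} bytes")           # closing the socket ends the connection

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening
client(100_000)
```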
92

Aplikace pro PDA umožňující příjem a práci s multimediálním obsahem / Application for PDA for multimedia stream presentation and processing

Javorček, Martin January 2009 (has links)
This thesis deals with creating an application for PDAs for multimedia stream presentation and processing. The introduction is devoted to a theoretical explanation of terms such as PDA (Personal Digital Assistant) and Windows Mobile 6.0. Readers are acquainted with the .NET Framework technology and with its version for mobile devices, the .NET Compact Framework. The advantages and disadvantages of the .NET technology are discussed in this section, together with those of the C# programming language, in which the application is written. The next chapters deal with my own design of the multimedia application, especially from the point of view of its classes. For better orientation and understanding, a UML diagram is used; UML (Unified Modeling Language) is a graphical language intended for visualization and documentation of software systems. In addition to a general description of the classes, there is a detailed explanation of individual attributes and methods of the application, including their function and purpose. The next part of the thesis is devoted to the network communication of the application, covering both the transmission of individual multimedia files and stream transmission; the principles of connecting and disconnecting the client and the server are analyzed here. The client part of the application is designed for mobile devices running the Microsoft Windows Mobile 6.0 operating system, and the server part is designed for computers running Microsoft Windows XP. A concise manual for operating both applications is included, together with a description of exceptions that may occur in case of connection problems. The last chapters describe synchronization of the mobile device or emulator with a desktop computer and summarize the audio formats supported by the multimedia application. All achieved results are summarized in the conclusion.
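The file and stream transfer between client and server described above can be sketched as chunked, length-prefixed transmission over a socket. This is a hypothetical Python illustration of the general pattern only; the original application is written in C# on the .NET Compact Framework, and the chunk size and framing are assumptions.

```python
import socket
import struct

CHUNK = 32 * 1024  # assumed chunk size

def recv_exact(conn: socket.socket, n: int) -> bytes:
    # Read exactly n bytes or raise if the peer closes early.
    buf = b""
    while len(buf) < n:
        part = conn.recv(n - len(buf))
        if not part:
            raise ConnectionError("connection closed early")
        buf += part
    return buf

def send_file(conn: socket.socket, path: str) -> None:
    # Server side: send an 8-byte length prefix, then the file body in fixed-size chunks.
    with open(path, "rb") as f:
        data = f.read()
    conn.sendall(struct.pack("!Q", len(data)))
    for i in range(0, len(data), CHUNK):
        conn.sendall(data[i:i + CHUNK])

def receive_file(conn: socket.socket, path: str) -> None:
    # Client side: read the length prefix, then read exactly that many bytes to disk.
    (size,) = struct.unpack("!Q", recv_exact(conn, 8))
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            chunk = recv_exact(conn, min(CHUNK, remaining))
            f.write(chunk)
            remaining -= len(chunk)
```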
93

Srovnání serverových virtualizačních platforem pro potřeby MENDELU / Comparison of server virtualization platforms for the needs of MENDELU

Šupola, Martin January 2017 (has links)
The thesis deals with a comparison of server virtualization platforms with respect to the needs of Mendel University. Real tests of each selected virtualization platform are performed as required by the Department of Information Technology. Based on the test results, a suitable solution for the university is suggested, and the costs of deploying and operating the various virtualization platforms are economically evaluated.
94

Measuring, Rating, and Predicting the Energy Efficiency of Servers / Messung, Bewertung und Vorhersage von Serverenergieeffizienz

von Kistowski, Jóakim Gunnarsson January 2019 (has links) (PDF)
Energy efficiency of computing systems has become an increasingly important issue over the last decades. In 2015, data centers were responsible for 2% of the world's greenhouse gas emissions, which is roughly the same as the amount produced by air travel. In addition to these environmental concerns, power consumption of servers in data centers results in significant operating costs, which increase by at least 10% each year. To address this challenge, the U.S. EPA and other government agencies are considering the use of novel measurement methods in order to label the energy efficiency of servers. The energy efficiency and power consumption of a server is subject to a great number of factors, including, but not limited to, hardware, software stack, workload, and load level. This huge number of influencing factors makes measuring and rating of energy efficiency challenging. It also makes it difficult to find an energy-efficient server for a specific use-case. Among others, server provisioners, operators, and regulators would profit from information on the servers in question and on the factors that affect those servers' power consumption and efficiency. However, we see a lack of measurement methods and metrics for energy efficiency of the systems under consideration. Even assuming that a measurement methodology existed, making decisions based on its results would be challenging. Power prediction methods that make use of these results would aid in decision making. They would enable potential server customers to make better purchasing decisions and help operators predict the effects of potential reconfigurations. Existing energy efficiency benchmarks cannot fully address these challenges, as they only measure single applications at limited sets of load levels. In addition, existing efficiency metrics are not helpful in this context, as they are usually a variation of the simple performance per power ratio, which is only applicable to single workloads at a single load level. Existing data center efficiency metrics, on the other hand, express the efficiency of the data center space and power infrastructure, not focusing on the efficiency of the servers themselves. Power prediction methods for not-yet-available systems that could make use of the results provided by a comprehensive power rating methodology are also lacking. Existing power prediction models for hardware designers have a very fine level of granularity and detail that would not be useful for data center operators. This thesis presents a measurement and rating methodology for energy efficiency of servers and an energy efficiency metric to be applied to the results of this methodology. We also design workloads, load intensity and distribution models, and mechanisms that can be used for energy efficiency testing. Based on this, we present power prediction mechanisms and models that utilize our measurement methodology and its results for power prediction. Specifically, the six major contributions of this thesis are: We present a measurement methodology and metrics for energy efficiency rating of servers that use multiple, specifically chosen workloads at different load levels for a full system characterization. We evaluate the methodology and metric with regard to their reproducibility, fairness, and relevance. We investigate the power and performance variations of test results and show fairness of the metric through a mathematical proof and a correlation analysis on a set of 385 servers. 
We evaluate the metric's relevance by showing the relationships that can be established between metric results and third-party applications. We create models and extraction mechanisms for load profiles that vary over time, as well as load distribution mechanisms and policies. The models are designed to be used to define arbitrary dynamic load intensity profiles that can be leveraged for benchmarking purposes. The load distribution mechanisms place workloads on computing resources in a hierarchical manner. Our load intensity models can be extracted in less than 0.2 seconds and our resulting models feature a median modeling error of 12.7% on average. In addition, our new load distribution strategy can save up to 10.7% of power consumption on a single server node. We introduce an approach to create small-scale workloads that emulate the power consumption-relevant behavior of large-scale workloads by approximating their CPU performance counter profile, and we introduce TeaStore, a distributed, micro-service-based reference application. TeaStore can be used to evaluate power and performance model accuracy, elasticity of cloud auto-scalers, and the effectiveness of power saving mechanisms for distributed systems. We show that we are capable of emulating the power consumption behavior of realistic workloads with a mean deviation less than 10% and down to 0.2 watts (1%). We demonstrate the use of TeaStore in the context of performance model extraction and cloud auto-scaling also showing that it may generate workloads with different effects on the power consumption of the system under consideration. We present a method for automated selection of interpolation strategies for performance and power characterization. We also introduce a configuration approach for polynomial interpolation functions of varying degrees that improves prediction accuracy for system power consumption for a given system utilization. We show that, in comparison to regression, our automated interpolation method selection and configuration approach improves modeling accuracy by 43.6% if additional reference data is available and by 31.4% if it is not. We present an approach for explicit modeling of the impact a virtualized environment has on power consumption and a method to predict the power consumption of a software application. Both methods use results produced by our measurement methodology to predict the respective power consumption for servers that are otherwise not available to the person making the prediction. Our methods are able to predict power consumption reliably for multiple hypervisor configurations and for the target application workloads. Application workload power prediction features a mean average absolute percentage error of 9.5%. Finally, we propose an end-to-end modeling approach for predicting the power consumption of component placements at run-time. The model can also be used to predict the power consumption at load levels that have not yet been observed on the running system. We show that we can predict the power consumption of two different distributed web applications with a mean absolute percentage error of 2.2%. In addition, we can predict the power consumption of a system at a previously unobserved load level and component distribution with an error of 1.2%. The contributions of this thesis already show a significant impact in science and industry. The presented efficiency rating methodology, including its metric, have been adopted by the U.S. 
EPA in the latest version of the ENERGY STAR Computer Server program. They are also being considered by additional regulatory agencies, including the EU Commission and the China National Institute of Standardization. In addition, the methodology's implementation and the underlying methodology itself have already found use in several research publications. Regarding future work, we see a need for new workloads targeting specialized server hardware. At the moment, we are witnessing a shift in execution hardware to specialized machine learning chips, general purpose GPU computing, FPGAs being embedded into compute servers, etc. To ensure that our measurement methodology remains relevant, workloads covering these areas are required. Similarly, power prediction models must be extended to cover these new scenarios. / In den vergangenen Jahrzehnten hat die Energieeffizienz von Computersystemen stark an Bedeutung gewonnen. Bereits 2015 waren Rechenzentren für 2% der weltweiten Treibhausgasemissionen verantwortlich, was mit der durch den Flugverkehr verursachten Treibhausgasmenge vergleichbar ist. Dabei wirkt sich der Stromverbrauch von Rechenzentren nicht nur auf die Umwelt aus, sondern verursacht auch erhebliche, jährlich um mindestens 10% steigende, Betriebskosten. Um sich diesen Herausforderungen zu stellen, erwägen die U.S. EPA und andere Behörden die Anwendung von neuartigen Messmethoden, um die Energieeffizienz von Servern zu bestimmen und zu zertifizieren. Die Energieeffizienz und der Stromverbrauch eines Servers wird von vielen verschiedenen Faktoren, u.a. der Hardware, der zugrundeliegenden Ausführungssoftware, der Arbeitslast und der Lastintensität, beeinflusst. Diese große Menge an Einflussfaktoren führt dazu, dass die Messung und Bewertung der Energieeffizienz herausfordernd ist, was die Auswahl von energieeffizienten Servern für konkrete Anwendungsfälle erheblich erschwert. Informationen über Server und ihre Energieeffizienz bzw. ihren Stromverbrauch beeinflussenden Faktoren wären für potentielle Kunden von Serverhardware, Serverbetreiber und Umweltbehörden von großem Nutzen. Im Allgemeinen mangelt es aber an Messmethoden und Metriken, welche die Energieeffizienz von Servern in befriedigendem Maße erfassen und bewerten können. Allerdings wäre es selbst unter der Annahme, dass es solche Messmethoden gäbe, dennoch schwierig Entscheidungen auf Basis ihrer Ergebnisse zu fällen. Um derartige Entscheidungen zu vereinfachen, wären Methoden zur Stromverbrauchsvorhersage hilfreich, um es potentiellen Serverkunden zu ermöglichen bessere Kaufentscheidungen zu treffen und Serverbetreibern zu helfen, die Auswirkungen möglicher Rekonfigurationen vorherzusagen. Existierende Energieeffizienzbenchmarks können diesen Herausforderungen nicht vollständig begegnen, da sie nur einzelne Anwendungen bei wenigen Lastintensitätsstufen ausmessen. Auch sind die vorhandenen Energieeffizienzmetriken in diesem Kontext nicht hilfreich, da sie normalerweise nur eine Variation des einfachen Verhältnisses von Performanz zu Stromverbrauch darstellen, welches nur auf einzelne Arbeitslasten bei einer einzigen gemessenen Lastintensität angewandt werden kann. Im Gegensatz dazu beschreiben die existierenden Rechenzentrumseffizienzmetriken lediglich die Platz- und Strominfrastruktureffizienz von Rechenzentren und bewerten nicht die Effizienz der Server als solche. 
Methoden zur Stromverbrauchsvorhersage noch nicht für Kunden verfügbarer Server, welche die Ergebnisse einer ausführlichen Stromverbrauchsmessungs- und Bewertungsmethodologie verwenden, gibt es ebenfalls nicht. Stattdessen existieren Stromverbrauchsvorhersagemethoden und Modelle für Hardwaredesigner und Hersteller. Diese Methoden sind jedoch sehr feingranular und erfordern Details, welche für Rechenzentrumsbetreiber nicht verfügbar sind, sodass diese keine Vorhersage durchführen können. In dieser Arbeit werden eine Energieeffizienzmess- und Bewertungsmethodologie für Server und Energieeffizienzmetriken für diese Methodologie vorgestellt. Es werden Arbeitslasten, Lastintensitäten und Lastverteilungsmodelle und -mechanismen, die für Energieeffizienzmessungen und Tests verwendet werden können, entworfen. Darauf aufbauend werden Mechanismen und Modelle zur Stromverbrauchsvorhersage präsentiert, welche diese Messmethodologie und die damit produzierten Ergebnisse verwenden. Die sechs Hauptbeiträge dieser Arbeit sind: Eine Messmethodologie und Metriken zur Energieeffizienzbewertung von Servern, die mehrere, verschiedene Arbeitslasten unter verschiedenen Lastintensitäten ausführt, um die beobachteten Systeme vollständig zu charakterisieren. Diese Methodologie wird im Bezug auf ihre Wiederholbarkeit, Fairness und Relevanz evaluiert. Es werden die Stromverbrauchs- und Performanzvariationen von wiederholten Methodologieausführungen untersucht und die Fairness der Methodologie wird durch mathematische Beweise und durch eine Korrelationsanalyse anhand von Messungen auf 385 Servern bewertet. Die Relevanz der Methodologie und der Metrik wird gezeigt, indem Beziehungen zwischen Metrikergebnissen und der Energieeffizienz von anderen Serverapplikationen untersucht werden. Modelle und Extraktionsverfahren für sich mit der Zeit verändernde Lastprofile, sowie Lastverteilungsmechanismen und -regeln. Die Modelle können dazu verwendet werden, beliebige Lastintensitätsprofile, die zum Benchmarking verwendet werden können, zu entwerfen. Die Lastverteilungsmechanismen, hingegen, platzieren Arbeitslasten in hierarchischer Weise auf Rechenressourcen. Die Lastintensitätsmodelle können in weniger als 0,2 Sekunden extrahiert werden, wobei die jeweils resultierenden Modelle einen durchschnittlichen Medianmodellierungsfehler von 12,7% aufweisen. Zusätzlich dazu kann die neue Lastverteilungsstrategie auf einzelnen Servern zu Stromverbrauchseinsparungen von bis zu 10,7% führen. Ein Ansatz um kleine Arbeitslasten zu erzeugen, welche das Stromverbrauchsverhalten von größeren, komplexeren Lasten emulieren, indem sie ihre CPU Performance Counter-Profile approximieren sowie den TeaStore: Eine verteilte, auf dem Micro-Service-Paradigma basierende Referenzapplikation. Der TeaStore kann verwendet werden, um Strom- und Performanzmodellgenauigkeit, Elastizität von Cloud Autoscalern und die Effektivität von Stromsparmechanismen in verteilten Systemen zu untersuchen. Das Arbeitslasterstellungsverfahren kann das Stromverbrauchsverhalten von realistischen Lasten mit einer mittleren Abweichung von weniger als 10% und bis zu einem minimalen Fehler von 0,2 Watt (1%) nachahmen. Die Anwendung des TeaStores wird durch die Extraktion von Performanzmodellen, die Anwendung in einer automatisch skalierenden Cloudumgebung und durch eine Demonstration der verschiedenen möglichen Stromverbräuche, die er auf Servern verursachen kann, gezeigt. 
Eine Methode zur automatisierten Auswahl von Interpolationsstrategien im Bezug auf Performanz und Stromverbrauchscharakterisierung. Diese Methode wird durch einen Konfigurationsansatz, der die Genauigkeit der auslastungsabhängigen Stromvorhersagen von polynomiellen Interpolationsfunktionen verbessert, erweitert. Im Gegensatz zur Regression kann der automatisierte Interpolationsmethodenauswahl- und Konfigurationsansatz die Modellierungsgenauigkeit mit Hilfe eines Referenzdatensatzes um 43,6% verbessern und kann selbst ohne diesen Referenzdatensatz eine Verbesserung von 31,4% erreichen. Einen Ansatz, der explizit den Einfluss von Virtualisierungsumgebungen auf den Stromverbrauch modelliert und eine Methode zur Vorhersage des Stromverbrauches von Softwareapplikationen. Beide Verfahren nutzen die von der in dieser Arbeit vorgegestellten Stromverbrauchsmessmethologie erzeugten Ergebnisse, um den jeweiligen Stromverbrauch von Servern, die den Vorhersagenden sonst nicht zur Verfügung stehen, zu ermöglichen. Die vorgestellten Verfahren können den Stromverbrauch für verschiedene Hypervisorkonfigurationen und für Applikationslasten zuverlässig vorhersagen. Die Vorhersage des Stromverbrauchs von Serverapplikationen erreicht einen mittleren absoluten Prozentfehler von 9,5%. Ein Modellierungsansatz zur Stromverbrauchsvorhersage für Laufzeitplatzierungsentscheidungen von Softwarekomponenten, welcher auch dazu verwendet werden kann den Stromverbrauch für bisher nicht beobachtete Lastintensitäten auf dem laufenden System vorherzusagen. Der Modellierungsansatz kann den Stromverbrauch von zwei verschiedenen, verteilten Webanwendungen mit einem mittleren absoluten Prozentfehler von 2,2% vorhersagen. Zusätzlich kann er den Stromverbrauch von einem System bei einer in der Vergangenheit nicht beobachteten Lastintensität und Komponentenverteilung mit einem Fehler von 1,2% vorhersagen. Die Beiträge in dieser Arbeit haben sich bereits signifikant auf Wissenschaft und Industrie ausgewirkt. Die präsentierte Energieeffizienzbewertungsmethodologie, inklusive ihrer Metriken, ist von der U.S. EPA in die neueste Version des ENERGY STAR Computer Server-Programms aufgenommen worden und wird zurzeit außerdem von weiteren Behörden, darunter die EU Kommission und die Nationale Chinesische Standardisierungsbehörde, in Erwägung gezogen. Zusätzlich haben die Implementierung der Methodologie und die zugrundeliegende Methodologie bereits Anwendung in mehreren wissenschaftlichen Arbeiten gefunden. In Zukunft werden im Rahmen von weiterführenden Arbeiten neue Arbeitslasten erstellt werden müssen, um die Energieeffizienz von spezialisierter Hardware zu untersuchen. Zurzeit verändert sich die Server-Rechenlandschaft in der Hinsicht, dass spezialisierte Ausführungseinheiten, wie Chips zum maschinellen Lernen, GPGPU Rechenchips und FPGAs in Servern verbaut werden. Um sicherzustellen, dass die Messmethodologie aus dieser Arbeit weiterhin relevant bleibt, wird es nötig sein, Arbeitslasten zu erstellen, welche diese Fälle abdecken, sowie Stromverbrauchsmodelle zu entwerfen, die in der Lage sind, derartige spezialisierte Hardware zu betrachten.
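The rating methodology summarized in this abstract characterizes a server with multiple workloads at multiple load levels and aggregates the results into a single efficiency score. The sketch below shows one plausible form of such an aggregation (a hypothetical Python illustration; the actual workloads, load levels, and aggregation rules of the methodology adopted by the ENERGY STAR program are not reproduced here, and the measurement values are invented).

```python
from statistics import geometric_mean

# Hypothetical measurement results: for each workload, (performance in ops/s, power in W)
# observed at several load levels (fractions of full load).
measurements = {
    "compress": {0.25: (1200, 95.0), 0.50: (2300, 130.0), 1.00: (4100, 210.0)},
    "crypto":   {0.25: (800, 90.0),  0.50: (1500, 120.0), 1.00: (2600, 190.0)},
    "storage":  {0.25: (300, 85.0),  0.50: (550, 105.0),  1.00: (900, 160.0)},
}

def workload_efficiency(levels):
    # Efficiency at each load level is performance per watt; the per-workload score is
    # the geometric mean over load levels, so no single load level dominates the result.
    return geometric_mean(perf / power for perf, power in levels.values())

def server_efficiency(results):
    # The server-level score is the geometric mean over the per-workload scores.
    return geometric_mean(workload_efficiency(levels) for levels in results.values())

print(f"server efficiency score: {server_efficiency(measurements):.2f}")
```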
95

Selection of best server to work on a network request of a client based on its physical and virtual location and distance to the server

Braeuning, Paul 24 October 2023 (has links)
When a service on the internet is scaled horizontally with multiple server instances, there are different solutions for mapping a client request to one of those instances. In this paper I evaluate a select few solutions for using the nearest server instance to handle a client request. I classified those solutions by the following criteria: ease of use, whether the solution requires a change in program behavior, how many resources are required for setup, the response time, whether open source software or open data solutions already exist, how accurate the solution is, whether it scales horizontally, and how robust it is. I evaluated GeoDNS, a central Hypertext Transfer Protocol (HTTP) redirect server, decentralized instances, and an Anycast Internet Protocol (IP) address. Based on the described evaluation criteria, I found that the central redirect server in combination with a GeoDNS server works best for mapping a client request to the nearest server instance; the decentralized instances are a specialization of the redirect server, and setting up a publicly routable Anycast address is complicated. I also compared three methods of matching a client IP address to a geolocation. In the practical implementation, using local files that map IP ranges to countries worked better than using the registration data access protocol (RDAP) endpoints provided by the regional internet registries, with or without a cache: the local mapping file implementation is the fastest of the compared implementations and less error-prone. The entire source code of this work and the implemented programs can be found at https://paulgo.dev/mrpaulblack/bachelor-thesis. Contents: 1 Introduction and Intention; 2 Solutions (2.1 Geolocation DNS, 2.2 Central Redirect Server, 2.3 Decentralized Implementation, 2.4 IP Anycast); 3 Implementation (3.1 Experiment Setup, 3.2 Method of Observation, 3.3 Observations and Analysis); 4 Conclusion; Bibliography; List of Figures; List of Tables; List of Source Codes
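The local IP-range-to-country lookup that the thesis found fastest can be sketched as follows. This is a hypothetical Python illustration, not the author's implementation; the file format (one "start_ip,end_ip,country" entry per line, sorted and disjoint) and the country-to-server table are assumptions.

```python
import bisect
import ipaddress

def load_ranges(path):
    # Each line: "start_ip,end_ip,country_code". Ranges are assumed sorted and disjoint.
    starts, ends, countries = [], [], []
    with open(path) as f:
        for line in f:
            start, end, cc = line.strip().split(",")
            starts.append(int(ipaddress.ip_address(start)))
            ends.append(int(ipaddress.ip_address(end)))
            countries.append(cc)
    return starts, ends, countries

def lookup_country(ranges, client_ip):
    # Binary search for the range whose start <= client_ip <= end.
    starts, ends, countries = ranges
    ip = int(ipaddress.ip_address(client_ip))
    i = bisect.bisect_right(starts, ip) - 1
    if i >= 0 and ip <= ends[i]:
        return countries[i]
    return None

# Hypothetical mapping of countries to the nearest server instance.
NEAREST_SERVER = {"DE": "eu.example.org", "US": "us.example.org"}

ranges = load_ranges("ip_ranges.csv")
country = lookup_country(ranges, "192.0.2.17")
print(NEAREST_SERVER.get(country, "default.example.org"))
```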
96

Secure Network-Centric Application Access

Varma, Nitesh 23 December 1998 (has links)
In the coming millennium, the establishment of virtual enterprises will become increasingly common. In the engineering sector, global competition will require corporations to create agile partnerships to use each other's engineering resources in mutually profitable ways. The Internet offers a medium for accessing such resources in a globally networked environment. However, remote access to resources requires a secure and mutually trustable environment, which is lacking in the basic infrastructure on which the Internet is based. Fortunately, efforts are under way to provide the required security services on the Internet. This thesis presents a model for making distributed engineering software tools accessible via the Internet. The model consists of an extensible client-server system interfaced with the engineering software tool on the server side. The system features robust security support based on public-key and symmetric cryptography. The system has been demonstrated by providing Web-based access to a .STL file repair program through a Java-enabled Web browser. / Master of Science
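The combination of public-key and symmetric cryptography mentioned above typically takes the form of a hybrid scheme: a symmetric session key is exchanged under the server's public key and then used for the bulk of the traffic. Below is a minimal sketch of that general idea using the third-party Python cryptography package; it is an illustration of the pattern only, not the thesis's Java-based implementation.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Server side: long-term RSA key pair; the public key is distributed to clients.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_pub = server_key.public_key()

# Client side: generate a fresh symmetric session key and wrap it with the server's public key.
session_key = Fernet.generate_key()
wrapped_key = server_pub.encrypt(session_key, oaep)

# Server side: unwrap the session key with the private key.
unwrapped_key = server_key.decrypt(wrapped_key, oaep)

# Both sides now share a symmetric key and can exchange encrypted application messages.
client_channel = Fernet(session_key)
server_channel = Fernet(unwrapped_key)
token = client_channel.encrypt(b"request: repair part.stl")
print(server_channel.decrypt(token))
```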
97

Lagringsmätning för AD-information / Measurement Storage for AD-information

Ishak, Michel January 2012 (has links)
Målet med detta projekt var att hjälpa IT-Mästaren att kunna fakturera sina kunder på ett smidigare sätt. Deras dåvarande lösning av kundernas dataanvändning var tidskrävande och ineffektivt, därför ville de automatisera systemet. Det nya systemet skulle användas för att administrera kundernas servrar med hjälp av ett grafiskt gränssnitt och sedan kontakta dem för att få fram olika värden för respektive kund som ska presenteras på IT-Mästarens hemsida. På så sätt skulle de kunna fakturera sina kunder på ett mer effektivt sätt och även få bättre överblick av kundernas användning. I och med utvecklingen utav detta system så fördjupande jag mig dels i egna kända områden men jag lärde mig också nya kunskaper. Det gällde inte enbart teknisk utveckling utan också undersökning av exempelvis hämtningen av alla data som skulle göras. Jag använde utvalda metoder för att kunna utföra detta projekt på ett smidigt och bra sätt. Metoderna beskrivs i rapporten. / The goal of this project was to help IT-Mästaren bill their customers in a more convenient way. Their previous way of tracking customers' server usage was time-consuming and inefficient, so they wanted to automate it. The new system manages the customers' servers through a graphical interface and then contacts them to retrieve various values for each customer, which are presented on IT-Mästaren's website. In this way they can bill their customers more efficiently and get a better overview of their customers' usage. During the development of this system I deepened my knowledge in areas I already knew and also learned new things, not only about technical development but also about investigating, among other things, how all the required data should be retrieved. I used selected methods in order to carry out this project in a smooth and effective manner. The methods are described in the report.
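The kind of automated usage collection described above can be sketched as a small script that gathers per-customer storage usage and turns it into billing line items. This is a purely hypothetical Python illustration; the directory layout, the rate, and the report format are assumptions and do not reflect IT-Mästaren's actual system.

```python
import os
from pathlib import Path

RATE_PER_GB = 0.5  # assumed price per gigabyte per billing period

def directory_size_bytes(root: Path) -> int:
    # Sum the sizes of all files below the customer's directory.
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += (Path(dirpath) / name).stat().st_size
    return total

def billing_report(customers_root: Path):
    # One line item per customer directory: (name, gigabytes used, amount to bill).
    report = []
    for customer_dir in sorted(p for p in customers_root.iterdir() if p.is_dir()):
        gb = directory_size_bytes(customer_dir) / 1e9
        report.append((customer_dir.name, gb, gb * RATE_PER_GB))
    return report

for name, gb, amount in billing_report(Path("/srv/customers")):
    print(f"{name}: {gb:.2f} GB -> {amount:.2f}")
```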
98

DNS library for a high-performance DNS server

Slovák, Ľuboš January 2011 (has links)
In this thesis I design and implement a high-performance library for developing authoritative name server software. The library supports all basic as well as several advanced features of the DNS protocol, such as EDNS0, DNSSEC, or zone transfers. It is designed to be modular, extensible, and easy to use. The library was integrated into an experimental server implementation used for testing and benchmarking. Its performance was evaluated and shown to be superior to prevalent implementations in most cases. The thesis also provides theoretical background and a deep analysis of the task, together with a detailed description of the implemented solutions.
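The protocol features listed above (EDNS0 and DNSSEC) can be exercised against any authoritative server with a short query script. The sketch below uses the third-party dnspython package; the queried zone and server address are placeholders, and this is a test client, not part of the library described in the thesis.

```python
import dns.message
import dns.query

# Build a query for the zone's SOA record, advertising EDNS0 and asking for
# DNSSEC records (the DO bit), as a test client for an authoritative server might.
query = dns.message.make_query("example.com.", "SOA", use_edns=0, want_dnssec=True)

# Send it over UDP to the authoritative server under test (placeholder address).
response = dns.query.udp(query, "192.0.2.53", timeout=2.0)

print(response.rcode())       # 0 (NOERROR) if the zone is served
for rrset in response.answer:
    print(rrset)              # SOA plus RRSIG records when DNSSEC data is returned
```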
99

Mobilní aplikace typu klient-server / Mobile Client Server Application

Dohnal, Jakub Unknown Date (has links)
This master's thesis deals with the development of a client-server mobile application on the Windows Phone platform. It describes the Windows Phone platform, the development environment, and its tools for debugging and for monitoring resources on this platform. It discusses the architectures and protocols used for the client-server model, describes data sharing and message passing between users, the client, and the server, and compares the available communication protocols. The next chapter deals with the use of client-side storage when an internet connection is unavailable. The conclusion is devoted to a vision of the project's further development.
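The client-side handling of an unavailable internet connection mentioned above is commonly solved with an outgoing message queue that is persisted locally and flushed once connectivity returns. Below is a minimal sketch of that pattern (a hypothetical Python illustration; the original application targets Windows Phone and C#, and the send function and outbox file here are stand-ins).

```python
import json
from pathlib import Path

QUEUE_FILE = Path("outbox.json")  # assumed local storage for unsent messages
ONLINE = False                    # flip to True to simulate a restored connection

def send_to_server(message: dict) -> bool:
    # Stand-in for the real network call; pretends the network is down when ONLINE is False.
    return ONLINE

def queue_message(message: dict) -> None:
    # Append the message to the persisted outbox so it survives application restarts.
    outbox = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else []
    outbox.append(message)
    QUEUE_FILE.write_text(json.dumps(outbox))

def send_or_queue(message: dict) -> None:
    # Try to send immediately; fall back to the local queue when offline.
    if not send_to_server(message):
        queue_message(message)

def flush_queue() -> None:
    # Called when connectivity is restored: resend everything still in the outbox.
    if not QUEUE_FILE.exists():
        return
    remaining = [m for m in json.loads(QUEUE_FILE.read_text()) if not send_to_server(m)]
    QUEUE_FILE.write_text(json.dumps(remaining))

send_or_queue({"user": "alice", "text": "hello"})
flush_queue()
```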
100

Evaluation of virtual servers for use in computer science education : Utvärdering av virtuella servrar för användning inom undervisning i datateknik

Jernlås, Johan January 2011 (has links)
Virtual servers are being increasingly utilized in higher education for client computers; this thesis investigates whether virtualization could also be beneficial for servers. By providing three general models (the first being the current situation and the two latter leveraging virtualization) and evaluating each, a broad sense of the applicability, possibilities, and remaining problems of introducing server virtualization is provided. For one specific course, TDDD27 - Advanced web programming, a more concrete analysis is done and specific recommendations are provided. The conclusion is that there is still more work to be done, but both of the proposed models are possible and suitable for some courses. Their introduction should have several positive effects, for instance fairer courses and more focus on the subject at hand.
