  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Implementace CDN a clusteringu v prostředí GNU/Linux s testy výkonnosti. / CDN and clustering in GNU/Linux with performance testing

Mikulka, Pavel January 2008
Fault tolerance is essential in a production-grade service delivery network. One solution is to build a clustered environment that keeps system failures to a minimum. This thesis examines high-availability and load-balancing services built with open-source tools on GNU/Linux. It discusses general technologies of high-availability computing such as virtualization, synchronization and mirroring. DRBD, a tool for building synchronized Linux block devices, is well suited to building relatively cheap high-availability clusters. The thesis also examines the Linux-HA project, Red Hat Cluster Suite, LVS and related tools. Content Delivery Networks (CDNs) replicate content across several mirrored web servers strategically placed at various locations in order to cope with flash crowds. A CDN combines a request-routing mechanism with a replication mechanism, and thus offers fast and reliable applications and services by distributing content to cache servers located close to end users. This work examines the open-source CDNs Globule and CoralCDN and tests their performance in a global deployment.
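The request-routing idea can be illustrated with a small sketch: given a few hypothetical cache servers with fixed coordinates, the router redirects each client to the closest one. The server names and placements are invented for illustration; neither Globule's nor CoralCDN's actual redirection policy is shown here.

```python
import math

# Hypothetical cache servers with (latitude, longitude) placements.
CACHE_SERVERS = {
    "eu-cache": (50.1, 14.4),    # Prague
    "us-cache": (40.7, -74.0),   # New York
    "asia-cache": (35.7, 139.7)  # Tokyo
}

def route_request(client_lat, client_lon):
    """Return the name of the cache server geographically closest to the client."""
    def dist(pos):
        lat, lon = pos
        # Plain Euclidean distance in degrees -- crude, but enough for a sketch.
        return math.hypot(lat - client_lat, lon - client_lon)
    return min(CACHE_SERVERS, key=lambda name: dist(CACHE_SERVERS[name]))

print(route_request(48.2, 16.4))  # a client in Vienna -> "eu-cache"
```

A real CDN would of course route on measured latency and server load rather than raw geographic distance, but the shape of the decision is the same.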
252

Elektronický záznam o pacientovi / Electronic Patient Record

Cáb, Tomáš January 2009
This thesis deals with information technologies applied in the health sector. Under the concept of electronic health care we can understand systems that significantly simplify and streamline the work of physicians, with regard to the legislation of the Czech Republic. For this reason, only those data standards for electronic health documentation are considered that guarantee adequate technical protection and safeguard personal data against possible misuse. Part of this diploma thesis deals with the design of an Internet health system realized through a web interface, using the languages HTML, CSS and PHP and the MySQL database system. The information system allows remote editing and administration of patients' data, for example browsing anamneses, diagnoses, medications, lists of physicians, medical devices and laboratory values. A further feature is compatibility with the IZIP information system, which enables sending reports in the XML language.
253

Informační systém pro správu bezdrátové sítě s využitím routerboardu MikroTik / Information system for wireless network management based on MikroTik routerboard

Hromádko, Petr January 2009
The master's thesis deals with the concept of a web information system designed to manage clients of a wireless network built on MikroTik routerboards. The system gives the internet provider a clear overview of all clients of the wireless network and allows managing the settings of MikroTik-based network hardware. Furthermore, it offers signal-strength monitoring and data accounting. The thesis is split into two sections. The first describes the information system itself, the platforms used, equipment, running services and server configuration. The second section focuses on the analysis of application security, database schema design and functional forms. The strongest emphasis has been put on the interconnection of MikroTik OS based routerboards with a web server and on testing in a real environment.
254

Web server s mikroprocesorem ARM / Web server with ARM microprocessor

Tesař, Jan January 2013
This diploma thesis deals with the design and implementation of a web server and management website on the FriendlyARM Mini6410 development kit running GNU/Linux. The PTXdist embedded build system is described in terms of kernel configuration and the selection of suitable applications. The device is later to be used for the remote management of an atmospheric optical link.
255

Databázová podpora analýzy rizik při konstrukci strojů / Database Assistance of Risk Assessments

Sýkora, Ondřej January 2008
This diploma thesis is concerned with creation of a computer program for management of technical documentation and its retrieval over a network. The program allows keyword search in documents in a database, either in all of them or in those selected according to given criteria. Any document found is then available for viewing. Moreover, users can create custom shortcuts to arbitrary places in documents for quick access. The program has been written in the PHP scripting language and employs an HTTP server, thus it can be used not only on a local area network but also remotely from other places with internet connectivity.
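The keyword-search pattern the program implements can be sketched as follows. The thesis used PHP behind an HTTP server; this is a Python/SQLite stand-in with an invented `documents` table, showing only the query idea.

```python
import sqlite3

# In-memory stand-in for the documentation database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO documents (title, body) VALUES (?, ?)",
    [
        ("Gearbox manual", "Assembly instructions for the gearbox unit."),
        ("Safety sheet", "Risk assessment notes for rotating machinery."),
    ],
)

def search(keyword):
    """Return titles of documents whose body contains the keyword."""
    # Parameter binding keeps the user-supplied keyword out of the SQL text.
    cur = conn.execute(
        "SELECT title FROM documents WHERE body LIKE ?", ("%" + keyword + "%",)
    )
    return [row[0] for row in cur]

print(search("gearbox"))  # ['Gearbox manual']
```

Filtering by the selection criteria the abstract mentions would just add further `WHERE` clauses to the same query.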
256

BigData řešení pro zpracování rozsáhlých dat ze síťových toků / BigData Approach to Management of Large Netflow Datasets

Melkes, Miloslav January 2014
This master's thesis focuses on distributed processing of big data from network communication. It begins by exploring network communication in terms of the TCP/IP model, with a focus on the data units at each layer that must be processed during analysis. Regarding the actual processing of big data, the MapReduce programming model and the architecture of the Apache Hadoop technology are described, together with their use for processing network flows on a computer cluster. The second part of the thesis deals with the design and subsequent implementation of an application for processing network flows from network communication. The main and problematic parts of the implementation are discussed. The thesis ends with a comparison with available applications for network analysis and with the evaluation of a set of tests, which confirmed linear growth of acceleration.
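The MapReduce model mentioned above can be illustrated with a toy per-source traffic aggregation: the map phase emits (source IP, byte count) pairs, a shuffle groups them by key, and the reduce phase sums each group. The flow records are invented and Hadoop itself is not involved.

```python
from collections import defaultdict

# Invented flow records, standing in for parsed network flows.
flows = [
    {"src": "10.0.0.1", "dst": "10.0.0.9", "bytes": 1200},
    {"src": "10.0.0.2", "dst": "10.0.0.9", "bytes": 300},
    {"src": "10.0.0.1", "dst": "10.0.0.7", "bytes": 800},
]

def map_phase(record):
    # Emit one (key, value) pair per flow: traffic attributed to its source.
    yield record["src"], record["bytes"]

def reduce_phase(key, values):
    # Sum all byte counts attributed to one source IP.
    return key, sum(values)

# Shuffle: group intermediate pairs by key, as the framework would.
groups = defaultdict(list)
for record in flows:
    for key, value in map_phase(record):
        groups[key].append(value)

result = dict(reduce_phase(k, v) for k, v in groups.items())
print(result)  # {'10.0.0.1': 2000, '10.0.0.2': 300}
```

On a Hadoop cluster the map and reduce functions keep this shape, but the shuffle runs across machines, which is what makes the approach scale to large flow datasets.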
257

Jämförelser av MySQL och Apache Spark : För aggregering av smartmätardata i Big Data format för en webbapplikation / Comparisons between MySQL and Apache Spark : For aggregation of smartmeter data in Big Data format for a web application

Danielsson, Robin January 2020
Smart electricity meters are a domain that generates data at Big Data scale. Such data volumes are difficult to handle with traditional database solutions like MySQL. One framework that has emerged to address these difficulties is Apache Spark, which implements the MapReduce model for clustered networks of computers. The research question of this work is whether Apache Spark has advantages over MySQL on a single machine for handling large amounts of JSON-formatted data in aggregations serving web applications. The results show that Apache Spark has a lower aggregation time than MySQL towards a web application from roughly ~6.7 GB of JSON data upwards for more complex aggregation queries on a single machine. The results also show that MySQL is better suited than Apache Spark for simpler aggregation queries across all data volumes in the experiment.
258

New Primitives for Tackling Graph Problems and Their Applications in Parallel Computing

Zhong, Peilin January 2021
We study fundamental graph problems under parallel computing models. In particular, we consider two parallel computing models: the Parallel Random Access Machine (PRAM) and Massively Parallel Computation (MPC). The PRAM model is a classic model of parallel computation. The efficiency of a PRAM algorithm is measured by its parallel time and the number of processors needed to achieve that time. The MPC model is an abstraction of modern massively parallel computing systems such as MapReduce, Hadoop and Spark. The MPC model captures coarse-grained computation on large data well: data is distributed to processors, each of which has a sublinear (in the input data) amount of local memory, and the computation alternates between rounds of computation and rounds of communication, where each machine can communicate an amount of data as large as the size of its memory. We usually desire fully scalable MPC algorithms, i.e., algorithms that work for any local memory size. The efficiency of a fully scalable MPC algorithm is measured by its parallel time and its total space usage (the local memory size times the number of machines). Consider an n-vertex m-edge undirected graph G (either weighted or unweighted) with diameter D (the largest diameter of its connected components). Let N = m + n denote the size of G. We present a series of efficient (randomized) parallel graph algorithms with theoretical guarantees. Several results are listed as follows: 1) Fully scalable MPC algorithms for graph connectivity and spanning forest using O(N) total space and O(log D · loglog_{N/n} n) parallel time. 2) Fully scalable MPC algorithms for 2-edge and 2-vertex connectivity using O(N) total space, where the 2-edge connectivity algorithm needs O(log D · loglog_{N/n} n) parallel time and the 2-vertex connectivity algorithm needs O(log D · log² log_{N/n} n + log D' · loglog_{N/n} n) parallel time. Here D' denotes the bi-diameter of G.
3) PRAM algorithms for graph connectivity and spanning forest using O(N) processors and O(log D · loglog_{N/n} n) parallel time. 4) PRAM algorithms for (1 + ε)-approximate shortest path and (1 + ε)-approximate uncapacitated minimum cost flow using O(N) processors and poly(log n) parallel time. These algorithms are built on a series of new graph algorithmic primitives which may be of independent interest.
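For orientation, a sequential union-find sketch shows what the connectivity and spanning-forest problems compute; the MPC/PRAM algorithms above achieve this in parallel rounds with far more involved machinery (graph contraction and the new primitives), which this sketch does not attempt.

```python
def connected_components(n, edges):
    """Return (set of component roots, spanning forest edges) of an n-vertex graph."""
    parent = list(range(n))

    def find(x):
        # Path halving: flatten the tree while walking to the root.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []  # each edge that merges two components belongs to a spanning forest
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((u, v))
    return {find(x) for x in range(n)}, forest

components, forest = connected_components(5, [(0, 1), (1, 2), (3, 4)])
print(len(components), len(forest))  # 2 components, 3 forest edges
```

Connectivity is exactly the partition this loop produces; the research contribution above is doing it in O(log D · loglog_{N/n} n) parallel time with O(N) total space rather than one edge at a time.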
259

Similarity Search in Document Collections / Similarity Search in Document Collections

Jordanov, Dimitar Dimitrov January 2009
The main goal of this thesis is to evaluate the performance of the freely distributed Semantic Vectors package and of the MoreLikeThis class from the Apache Lucene package. The thesis offers a comparison of these two approaches and introduces methods that may lead to improved search quality.
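The vector-space idea underlying both Semantic Vectors and MoreLikeThis can be sketched as cosine similarity between term-frequency vectors; this toy function ignores stemming, stop words and TF-IDF weighting, which the real implementations use.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine of the angle between the term-frequency vectors of two texts."""
    va = Counter(text_a.lower().split())
    vb = Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0

print(round(cosine_similarity("open source search", "open source tools"), 2))  # 0.67
```

Ranking a collection by this score against a query document is, in miniature, what a "more like this" search does.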
260

Anomaly Detection in Wait Reports and its Relation with Apache Cassandra Statistics

Madhu, Abheyraj Singh, Rapolu, Sreemayi January 2021
Background: Apache Cassandra is a highly scalable distributed system that can handle large amounts of data through several nodes / virtual machines grouped together as Apache Cassandra clusters. When one such node in an Apache Cassandra cluster is down, there is a need for a tool or an approach that can identify the failed virtual machine by analyzing the data generated by each of the virtual machines in the cluster. Manual analysis of this data is tedious and can be quite strenuous. Objectives: The objective of the thesis is to identify, build and evaluate a solution that can detect and report the behaviour of the erroneous or failed virtual machine by analyzing the data generated by each virtual machine in an Apache Cassandra cluster. In the study, we analyzed two specific data sources from each virtual machine, i.e., the wait reports and Apache Cassandra statistics, and proposed a tool named AnoDect to realize this objective. The tool has been built using input provided by the technical support team at Ericsson through interviews and was also evaluated by them to assess its reliability, usability and usefulness in an industrial setting. Methods: A case study methodology has been piloted at Ericsson and semi-structured interviews have been conducted to identify the key features in the data along with the functionalities AnoDect needs to perform to assist the CIL team (technical support team at Ericsson) in rectifying the erroneous virtual machine in the cluster. An experimental evaluation and a static user evaluation have been conducted as part of the case study evaluation, where the experimental evaluation is conducted to identify the best technique for AnoDect's anomaly detection in wait reports and the static evaluation has been conducted to evaluate AnoDect for its reliability and usability once it is deployed for use.
Results: From the feedback provided by the CIL team through the questionnaire, it has been observed that the results provided by the tool are quite satisfactory, in terms of usability and reliability of the tool.
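One simple way to flag an abnormal node from per-node metrics, offered here only as a hedged illustration (the abstract does not disclose AnoDect's actual technique, and the metric values are invented), is a z-score rule over aggregated wait times:

```python
import statistics

def flag_anomalous_nodes(metrics, threshold=1.5):
    """Return node names whose metric deviates more than `threshold`
    standard deviations from the cluster mean."""
    values = list(metrics.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all nodes identical: nothing stands out
    return [name for name, v in metrics.items()
            if abs(v - mean) / stdev > threshold]

# Invented per-node aggregated wait times (ms); node4 is the sick one.
wait_ms = {"node1": 12.0, "node2": 11.5, "node3": 12.3, "node4": 95.0}
print(flag_anomalous_nodes(wait_ms))  # ['node4']
```

The threshold is arbitrary here; with few nodes an outlier inflates the standard deviation, so real tooling would likely use a robust statistic or a learned model instead.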
