1 |
Avoiding Bad Query Mixes to Minimize Unsuccessful Client Requests Under Heavy Loads. Tozer, Sean (January 2009)
In three-tiered web applications, some form of admission control is required to ensure
that throughput and response times are not significantly harmed during periods of heavy
load. We propose Q-Cop, a prototype system for improving admission control decisions
that computes a measure of load on the system based on the actual mix of queries being
executed. This measure of load is used to estimate execution times for incoming queries,
which allows Q-Cop to make control decisions with the goal of minimizing the number
of requests that are not serviced before the client, or their browser, times out.
Using TPC-W queries, we show that the response times of different types of queries
can vary significantly, in excess of 50% in our experiments, depending not just on the
number of queries being processed but also on the mix of other queries running simultaneously.
The variation implies that admission control can benefit from taking into
account not just the number of queries being processed, but also the mix of queries. We
develop a model of expected query execution times that accounts for the mix of queries
being executed and integrate that model into a three-tiered system to make admission
control decisions. Our results show that this approach makes more informed decisions about
which queries to reject and, as a result, significantly reduces the number of unsuccessful
client requests.
For comparison, we develop several other models that represent related work in
the field, including an MPL-based approach and an approach that considers the type of
each query but not the mix of queries. We show that Q-Cop does not need to re-compute
any modelling information in order to perform well, a strong advantage over most other
approaches. Across the range of workloads examined, Q-Cop denies an average of 47%
fewer requests than the next best approach.
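As a rough illustration of the core idea, the toy sketch below admits a query only if its predicted execution time, computed from the current mix of in-flight queries, fits within the client timeout. The query types, the linear interaction model, and all coefficients are illustrative assumptions, not the fitted model from the thesis.

    from collections import Counter

    CLIENT_TIMEOUT = 8.0  # seconds; assumed client/browser timeout

    # Base cost per query type, plus the extra seconds a type-q query incurs
    # for every in-flight query of type p (hypothetical numbers).
    BASE_TIME = {"browse": 0.05, "search": 0.20, "checkout": 0.40}
    INTERACTION = {
        "browse":   {"browse": 0.01, "search": 0.03, "checkout": 0.05},
        "search":   {"browse": 0.02, "search": 0.08, "checkout": 0.10},
        "checkout": {"browse": 0.02, "search": 0.06, "checkout": 0.15},
    }

    in_flight = Counter()  # current mix: query type -> number running

    def predicted_time(qtype):
        """Estimate execution time of a new query given the current mix."""
        return BASE_TIME[qtype] + sum(
            INTERACTION[qtype][p] * n for p, n in in_flight.items())

    def admit(qtype):
        """Admit only if the query should finish before the client times out."""
        if predicted_time(qtype) > CLIENT_TIMEOUT:
            return False  # reject now rather than waste work on a doomed request
        in_flight[qtype] += 1
        return True

    def complete(qtype):
        in_flight[qtype] -= 1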
|
2 |
AN INTERNET-BASED REMOTE COMMAND AND TELEMETRY SYSTEM FOR A MICROWAVE PROPAGATION STUDY. Colapelle, Mario; Zamore, Brian; Kopp, Brian; Pierce, Randy (10 1900)
A research project investigating microwave radio frequency propagation over a 500-mile link across the Gulf of Mexico requires a remote-control process to command microcontroller-based devices, including power control modules and antenna feedhorn positioners, and to telemeter system parameters back to the operators. The solution that was developed is a simple, webserver-based user interface that can be accessed both locally and remotely via the Internet. To interface the webservers with the microcontroller-based devices, a polling protocol based on MODBUS was developed that provides an efficient command and telemetry link over a serial RS-485 interface.
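The abstract names a MODBUS-based polling protocol over RS-485 but gives no frame details; the sketch below shows what one poll cycle could look like under standard MODBUS RTU conventions, using the pyserial library. The serial port, device address, and register map are assumptions.

    import serial  # pyserial

    def crc16_modbus(frame):
        """Standard MODBUS CRC-16 (polynomial 0xA001), appended low byte first."""
        crc = 0xFFFF
        for byte in frame:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return crc.to_bytes(2, "little")

    def poll(port, unit, register, count):
        """One poll cycle: read `count` holding registers from device `unit`."""
        # Function 0x03 request: unit, function, register hi/lo, count hi/lo, CRC.
        request = bytes([unit, 0x03, register >> 8, register & 0xFF,
                         count >> 8, count & 0xFF])
        port.write(request + crc16_modbus(request))
        # Response: unit, function, byte count, data..., CRC (no error handling here).
        header = port.read(3)
        payload = port.read(header[2] + 2)
        return payload[:-2]  # telemetry bytes; real code would verify the CRC

    if __name__ == "__main__":
        # Serial settings are assumptions; RS-485 adapters typically appear as a tty.
        with serial.Serial("/dev/ttyUSB0", 9600, timeout=1.0) as rs485:
            print(poll(rs485, unit=1, register=0x0000, count=2))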
|
3 |
SW modul TCP/IP a Modbus pro OS FreeRTOS / TCP/IP and Modbus Modules for OS FreeRTOS. Šťastný, Ladislav (January 2012)
The aim of this work is to become familiar with the FreeRTOS operating system and its use in device design. It also explains the use of the SW modules LwIP (a TCP/IP stack) and FreeMODBUS. An example device, a simple operator panel, is then designed. The panel communicates over an Ethernet interface using the Modbus TCP protocol with PLCs connected on a separate network. It also functions as a webserver, an HID, and a real-time clock source.
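Since the panel speaks Modbus TCP, a minimal client poll can be sketched directly over a TCP socket; the MBAP framing below follows the Modbus TCP specification (unlike serial MODBUS RTU, there is no CRC and requests are prefixed with a 7-byte header), while the host, unit id, and register addresses are assumptions.

    import socket
    import struct

    def read_holding_registers(host, unit, register, count):
        """Read `count` holding registers from `unit` via Modbus TCP (port 502)."""
        pdu = struct.pack(">BHH", 0x03, register, count)        # function 0x03
        # MBAP header: transaction id, protocol id (0), remaining length, unit id.
        mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)
        with socket.create_connection((host, 502), timeout=2.0) as sock:
            sock.sendall(mbap + pdu)
            _, _, length, _ = struct.unpack(">HHHB", sock.recv(7))
            body = sock.recv(length - 1)                        # fn, byte count, data
            n = body[1] // 2
            return list(struct.unpack(">%dH" % n, body[2:2 + 2 * n]))

    # e.g. read_holding_registers("192.168.1.50", unit=1, register=0, count=4)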
|
4 |
Decentralized Web Search. Haque, Md Rakibul (08 June 2012)
Centrally controlled search engines will not be sufficient or reliable for indexing and searching the rapidly growing World Wide Web in the near future. A better solution is to enable the Web to index itself in a decentralized manner. Existing distributed approaches to ranking search results do not provide flexible searching, complete results, or highly accurate ranking. This thesis presents a decentralized Web search mechanism, named DEWS, which enables existing webservers to collaborate with each other to form a distributed index of the Web. DEWS can rank search results based on query keyword relevance and the relative importance of websites in a distributed manner, preserving a hyperlink overlay on top of a structured P2P overlay. It also supports approximate matching of query keywords using phonetic codes and n-grams, along with list decoding of a linear covering code. DEWS supports incremental retrieval of search results in a decentralized manner, which reduces the network bandwidth required for query resolution. It uses an efficient routing mechanism that extends the Plexus routing protocol with a message aggregation technique. DEWS maintains replicas of indexes, which reduces routing hops and makes DEWS robust to webserver failures. The standard LETOR 3.0 dataset was used to validate the DEWS protocol. Simulation results show that the ranking accuracy of DEWS is close to the centralized case, while the network overhead for collaborative search and indexing is logarithmic in the network size. The results also show that DEWS is resilient to changes in the available pool of indexing webservers and works efficiently even under heavy query load.
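As a toy illustration of the approximate keyword matching the abstract mentions, the sketch below combines a Soundex phonetic code with character n-gram overlap; DEWS additionally uses list decoding of a linear covering code, which this sketch omits, and the acceptance threshold is an assumption.

    def soundex(word):
        """Classic 4-character Soundex phonetic code."""
        codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
                 **dict.fromkeys("dt", "3"), "l": "4",
                 **dict.fromkeys("mn", "5"), "r": "6"}
        word = word.lower()
        out, prev = word[0].upper(), codes.get(word[0], "")
        for ch in word[1:]:
            code = codes.get(ch, "")
            if code and code != prev:
                out += code
            if ch not in "hw":  # h/w do not reset the previous code
                prev = code
        return (out + "000")[:4]

    def ngram_similarity(a, b, n=3):
        """Jaccard similarity over character n-grams."""
        grams = lambda s: {s[i:i + n] for i in range(max(1, len(s) - n + 1))}
        ga, gb = grams(a.lower()), grams(b.lower())
        return len(ga & gb) / len(ga | gb)

    def approx_match(query, indexed, threshold=0.5):
        # Accept on phonetic agreement or high n-gram overlap (threshold assumed).
        return (soundex(query) == soundex(indexed)
                or ngram_similarity(query, indexed) > threshold)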
|
5 |
Dienstbar - aber sicher / At Your Service, but Secure. Schreiber, Alexander (20 March 2000)
The server is on the network and running; FTP requests are answered,
HTTP requests are no problem, everything is fine - as long as no
uninvited guests show up.
The talk covers the measures a system operator can or should take to
guard against some of the common attacks. It describes the balancing
act between the demand for high security on the one hand and the full
functionality of many Internet services on the other.
|
6 |
Secure WebServer. Neubert, Janek (01 July 2004)
Description and implementation of a solution for the secure use of OpenAFS, Apache, and server-side scripting languages such as PHP or Perl in multi-user environments.
|
7 |
Genix: desenvolvimento de uma nova pipeline automatizada para anotação de genomas microbianos / Genix: development of a new automated pipeline for microbial genome annotation. Kremer, Frederico Schmitt (17 February 2016)
The advent of next-generation sequencing (NGS) has significantly reduced the cost of genome sequencing projects. The easier it is to generate genomic data, the more accurate the annotation steps must be, both to avoid the loss of relevant information and to prevent the accumulation of erroneous features that may affect the accuracy of further analysis. For bacterial genomes, a range of annotation software has been developed; however, many applications do not incorporate the steps required to improve their output, such as filtering of false-positive/spurious ORFs and more complete annotation of non-coding RNAs. The present work describes the implementation of Genix, a new bacterial genome annotation pipeline that combines the functionality of the programs Prodigal, tRNAscan-SE, RNAmmer, Aragorn, INFERNAL, NCBI-BLAST+, CD-HIT, Rfam and UniProt, with the intention of increasing the effectiveness of the annotation results.
To evaluate the accuracy of Genix, we used as study models the reference genomes of Escherichia coli K-12, Leptospira interrogans strain Fiocruz L1-130, Listeria monocytogenes EGD-e and Mycobacterium tuberculosis H37Rv. The results obtained by Genix were compared to the original annotations and to those from the annotation pipelines RAST and BASys, considering new, missing and exclusive genes, functional annotation information, and the prediction of spurious ORFs. To quantify annotation accuracy, a new metric, called "annotation discrepancy", was developed. In a comparative analysis, Genix showed the smallest discrepancy for all four genomes, ranging from 0.96 to 5.71%; the highest discrepancy was observed in the L. interrogans genome, for which RAST and BASys produced discrepancies greater than 14.0%. Additionally, several spurious proteins were identified in the annotations generated by RAST and BASys and, in smaller numbers, in the reference annotations, indicating that use of the AntiFam database allows better control of the number of false-positive genes. Based on these evaluations, we show that Genix is able to generate annotations with good accuracy (low discrepancy), low omission of relevant (functional) genes, and a small number of false-positive genes.
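A pipeline of this kind is typically a thin orchestration layer that runs each tool as a subprocess and feeds its output to the next stage. The sketch below illustrates that pattern for three of the listed tools; the file layout and the exact flags shown are assumptions, not the Genix implementation itself.

    import subprocess
    from pathlib import Path

    def run(cmd):
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)  # fail fast if a stage breaks

    def annotate(genome_fasta, outdir="annotation"):
        out = Path(outdir)
        out.mkdir(exist_ok=True)
        # Stage 1: gene calling with Prodigal (protein and coordinate output).
        run(["prodigal", "-i", genome_fasta,
             "-a", str(out / "proteins.faa"), "-o", str(out / "genes.gbk")])
        # Stage 2: tRNA prediction with tRNAscan-SE in bacterial mode.
        run(["tRNAscan-SE", "-B", "-o", str(out / "trna.txt"), genome_fasta])
        # Stage 3: homology search of predicted proteins against UniProt via BLAST+.
        run(["blastp", "-query", str(out / "proteins.faa"), "-db", "uniprot_sprot",
             "-outfmt", "6", "-max_target_seqs", "1",
             "-out", str(out / "hits.tsv")])

    # annotate("genome.fasta")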
|
8 |
Webserver-Techniken (eingebettete Interpreter mod_perl, mod_dtcl ...) / Webserver Techniques (embedded interpreters mod_perl, mod_dtcl ...). Schmidt, Jürgen (08 May 2000)
Joint workshop of the University Computing Center and the Chair of
Computer Networks and Distributed Systems (Faculty of Computer
Science) of TU Chemnitz.
Workshop topic: infrastructure of the "Digital University".
There are many ways to generate dynamic web content.
This talk gives an overview of the extension options available on
the server side. With a view to performance, embedded interpreters
are examined in comparison with CGI, and dedicated scripting
languages such as PHP, Perl, and Tcl are discussed.
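The talk's performance argument, that an embedded interpreter avoids starting a new interpreter process per request as CGI does, can be illustrated with a small WSGI application: the app below lives inside one long-running server process, analogous to mod_perl or mod_dtcl, so state such as the request counter persists across requests. The counter and port are illustrative assumptions.

    from wsgiref.simple_server import make_server

    REQUESTS_SERVED = 0  # persists across requests: the interpreter is embedded

    def app(environ, start_response):
        global REQUESTS_SERVED
        REQUESTS_SERVED += 1
        start_response("200 OK", [("Content-Type", "text/plain")])
        body = "request #%d for %s\n" % (REQUESTS_SERVED, environ["PATH_INFO"])
        return [body.encode()]

    if __name__ == "__main__":
        with make_server("", 8000, app) as httpd:  # one process, no per-request fork
            httpd.serve_forever()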
|