11
An advanced signal processing toolkit for Java applications. Shah, Vijay Pravin, January 2002.
Thesis (M.S.)--Mississippi State University. Department of Electrical and Computer Engineering. Title from title screen. Includes bibliographical references.

12
Adaptable stateful application server replication. Wu, Huaigu, 1975-, January 2008.
In recent years, multi-tier architectures have become the standard computing environment for web and enterprise applications. The application server tier is often the heart of the system, embedding the business logic. Adaptability, in particular the capability to adjust to the load submitted to the system and to handle the failure of individual components, is of utmost importance in order to provide 24/7 access and high performance. Replication is a common means to achieve these reliability and scalability requirements. With replication, the application server tier consists of several server replicas. Thus, if one replica fails, others can take over. Furthermore, the load can be distributed across the available replicas. Although many replication solutions have been proposed so far, most of them have been developed either for fault tolerance or for scalability. Furthermore, only a few have considered that the application server tier is only one tier in a multi-tier architecture, that this tier maintains state, and that execution in this environment can follow complex patterns. Thus, existing solutions often do not provide correctness beyond some basic application scenarios.

In this thesis we tackle the issue of replication of the application server tier from the ground up and develop a unified solution that provides both fault tolerance and scalability. We first describe a set of execution patterns that capture how requests are typically executed in multi-tier architectures. They consider the flow of execution across the client tier, application server tier, and database tier. In particular, the execution patterns describe how requests are associated with transactions, the fundamental execution units at the application server and database tiers. With these execution patterns in mind, we provide a formal definition of what it means to provide a correct execution across all tiers, even when failures occur and the application server tier is replicated. Informally, a replicated system is correct if it behaves exactly like a non-replicated system that never fails. From there, we propose a set of replication algorithms for fault tolerance that provide correctness for the execution patterns we have identified. The main principle is to let a primary application server (AS) replica execute all client requests and to propagate any state changes performed by a transaction to backup replicas at transaction commit time. The challenges arise because requests can be associated with transactions in different ways. We then extend our fault-tolerance solution and develop a unified solution that provides both fault tolerance and load balancing. In this extended solution, each application server replica is able to execute client requests as a primary and at the same time serve as a backup for other replicas. The framework provides a transparent, truly distributed, and lightweight load-distribution mechanism that takes advantage of the fault-tolerance infrastructure. Our replication tool is implemented as a plug-in for the JBoss application server, and its performance is carefully evaluated in comparison with JBoss's own replication solutions. The evaluation shows that our protocols have very good performance and compare favorably with existing solutions.
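The commit-time state propagation described in the abstract can be pictured with a small sketch. The following Python fragment is a minimal illustration only, assuming an in-memory key-value session state and synchronous propagation to backups; the class and method names (PrimaryReplica, BackupReplica, apply_changes) are invented for illustration and are not the interfaces of the thesis's JBoss plug-in.

    # Minimal sketch of primary-backup replication with state transfer
    # at transaction commit time. All names are illustrative assumptions.

    class BackupReplica:
        """Backup that installs state changes of committed transactions."""

        def __init__(self):
            self.state = {}

        def apply_changes(self, tx_id, changes):
            # Install the write set of a committed transaction.
            self.state.update(changes)


    class PrimaryReplica:
        """Primary that executes requests and propagates changes at commit."""

        def __init__(self, backups):
            self.state = {}
            self.backups = backups
            self.pending = {}  # tx_id -> accumulated state changes (write set)

        def execute(self, tx_id, key, value):
            # Execute a client request inside transaction tx_id and record
            # the state change it performs.
            self.state[key] = value
            self.pending.setdefault(tx_id, {})[key] = value

        def commit(self, tx_id):
            # At commit time, push the transaction's write set to every
            # backup before acknowledging the client, so any backup can
            # take over consistently if the primary later fails.
            changes = self.pending.pop(tx_id, {})
            for backup in self.backups:
                backup.apply_changes(tx_id, changes)


    # Usage: one primary, two backups.
    backups = [BackupReplica(), BackupReplica()]
    primary = PrimaryReplica(backups)
    primary.execute("tx1", "cart:42", ["book", "pen"])
    primary.commit("tx1")
    assert all(b.state["cart:42"] == ["book", "pen"] for b in backups)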

13
An automated XPATH to SQL transformation methodology for XML data. Jandhyala, Sandeep. January 2006.
Thesis (M.S.)--Georgia State University, 2006. Rajshekhar Sunderraman, committee chair; Sushil Prasad, Alex Zelikovsky, committee members. Electronic text (58 p.): digital, PDF file. Description based on contents viewed Aug. 13, 2007. Includes bibliographical references (p. 58).

14
Satellite-based web server. Maluleke, Enock Vongani, 12 1900.
Thesis (MScEng)--Stellenbosch University, 2002.

ENGLISH ABSTRACT: There is a large variety of telemetry receiving software currently available for the reception of telemetry information from different satellites. Most of the software used in receiving telemetry data is satellite specific. Hence, a user-friendly way is needed to make telemetry data easily accessible. A satellite-based web server is aimed at providing telemetry information to any standard web browser as a way of bringing space technology awareness to the people. Two different satellite-based web server methods are examined in this thesis. Based on the evaluation, the on-board File server with a proxy server was proposed for satellite-based web server development. This requires that the File server be ported to the on-board computer of the satellite. The web proxy server is placed on the ground segment with the necessary communication requirements to communicate with the on-board File server. In the absence of a satellite, the satellite-based web server was successfully implemented on two computers, laying a good foundation for implementation on the on-board computer of the satellite (OBC).

AFRIKAANSE OPSOMMING (translated): There is a large variety of telemetry receiving software currently available for the reception of telemetry information from different satellites. Most of the software used to receive telemetry data is satellite specific. Consequently, a user-friendly method is needed to make telemetry data easily available. A satellite-based web server is intended to provide telemetry information to any standard web browser as a way of making people aware of space technology. Two different satellite-based web server methods are examined in this thesis. Based on an evaluation, the on-board file server with a proxy server is proposed for satellite-based web server development. This requires that the file server be ported to the on-board computer of the satellite. The web proxy server is placed on the ground segment, with the necessary communication requirements, to communicate with the on-board file server. In the absence of the satellite, the satellite-based web server was successfully implemented on two computers, so that a good foundation has been laid for implementation on the on-board computer of the satellite (OBC).
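The architecture described in the abstract, an on-board file server reached through a web proxy on the ground segment, can be sketched roughly as follows. The Python fragment below is a minimal illustration only: the host name, port number, and newline-terminated request framing toward the on-board file server are assumptions, not the communication protocol used in the thesis.

    # Minimal sketch of a ground-segment proxy that serves standard web
    # browsers and fetches the requested file from an on-board file
    # server over a satellite link. Host, port, and framing are assumed.
    import socket
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ONBOARD_HOST, ONBOARD_PORT = "obc.groundstation.local", 9000  # assumed

    def fetch_from_onboard(path: str) -> bytes:
        """Request a file from the on-board file server over the space link."""
        with socket.create_connection((ONBOARD_HOST, ONBOARD_PORT), timeout=30) as s:
            s.sendall(path.encode() + b"\n")        # assumed request framing
            chunks = []
            while data := s.recv(4096):
                chunks.append(data)
        return b"".join(chunks)

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            try:
                body = fetch_from_onboard(self.path)
                self.send_response(200)
            except OSError:
                body = b"on-board file server unreachable"
                self.send_response(502)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Serve browsers on the ground segment; each GET is forwarded
        # to the on-board file server.
        HTTPServer(("", 8080), ProxyHandler).serve_forever()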

15
Surveygen: A web-based survey editor. Han, Kwon Soo, 01 January 1998.
No description available.

16
Platforma pro tvorbu elektronického obchodu / Platform for Creating E-commerce. Hladiš, Petr, January 2009.
The work describes the features that an electronic store can offer its customers and through which it is possible to gain a competitive advantage over other stores. Besides describing how the store itself works, it also covers options for promoting and marketing the store on the Internet, as well as the basic programming techniques that should be used to build it.

17
Adaptable stateful application server replication. Wu, Huaigu, 1975-, January 2008.
No description available.

18
Internet-Scale Information Monitoring: A Continual Query Approach. Tang, Wei, 08 December 2003.
Information monitoring systems are publish-subscribe systems that
continuously track information changes and notify users (or
programs acting on behalf of humans) of relevant updates according
to specified thresholds. Internet-scale information monitoring
presents a number of new challenges. First, automated change
detection is harder when sources are autonomous and updates are
performed asynchronously. Second, information source heterogeneity
makes the problem of modelling and representing changes harder
than ever. Third, efficient and scalable mechanisms are needed to
handle a large and growing number of users and thousands or even
millions of monitoring triggers fired at multiple sources.
In this dissertation, we model users' monitoring requests using
continual queries (CQs) and present a suite of efficient and
scalable solutions to large-scale information monitoring over
structured or semi-structured data sources. A CQ is a standing
query that monitors information sources for interesting events
(triggers) and notifies users when new information changes meet
specified thresholds. In this dissertation, we first present the
system-level facilities for building an Internet-scale continual
query system, including the design and development of two
operational CQ monitoring systems OpenCQ and WebCQ, the
engineering issues involved, and our solutions. We then describe a
number of research challenges that are specific to large-scale
information monitoring and the techniques developed in the context
of OpenCQ and WebCQ to address these challenges. Example issues include how to efficiently process a large number of continual queries, what mechanisms are effective for building a scalable distributed trigger system capable of handling tens of thousands of triggers firing at hundreds of data sources, and how to effectively disseminate fresh information to the right users at the right time. We have developed a suite of techniques to
optimize the processing of continual queries, including an
effective CQ grouping scheme, an auxiliary data structure to
support group-based indexing of CQs, and a differential CQ
evaluation algorithm (DRA). The third contribution is the design
of an experimental evaluation model and testbed to validate the
solutions. We have conducted our evaluation using both measurements on real systems (OpenCQ/WebCQ) and a simulation-based approach. To our knowledge, the research documented in this dissertation is, to date, the first to present a focused study of research and
engineering issues in building large-scale information monitoring
systems using continual queries.
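As a rough illustration of the continual-query abstraction summarized above, the sketch below registers standing queries, each pairing a monitored source with a trigger condition and a notification callback, and re-evaluates only the queries registered for a source when that source changes. The names and the simple grouping-by-source are assumptions made for illustration; this is not the OpenCQ or WebCQ implementation.

    # Minimal sketch of continual queries (CQs): each CQ pairs a standing
    # query over a source with a trigger condition and a notification
    # target. Names and structure are illustrative, not OpenCQ/WebCQ code.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class ContinualQuery:
        cq_id: str
        source: str                          # which data source to watch
        query: Callable[[dict], Any]         # extracts the monitored value
        trigger: Callable[[Any, Any], bool]  # (old, new) -> fire?
        notify: Callable[[str, Any], None]   # user notification callback
        last_value: Any = None

    class CQEngine:
        def __init__(self):
            self.by_source = {}  # source -> list of CQs (crude grouping)

        def install(self, cq: ContinualQuery):
            self.by_source.setdefault(cq.source, []).append(cq)

        def on_update(self, source: str, snapshot: dict):
            # Only CQs registered for the changed source are re-evaluated;
            # a crude stand-in for grouping and differential evaluation.
            for cq in self.by_source.get(source, []):
                new_value = cq.query(snapshot)
                if cq.trigger(cq.last_value, new_value):
                    cq.notify(cq.cq_id, new_value)
                cq.last_value = new_value

    # Example: notify when a monitored price changes by more than 5%.
    engine = CQEngine()
    engine.install(ContinualQuery(
        cq_id="ibm-price",
        source="quotes",
        query=lambda snap: snap["IBM"],
        trigger=lambda old, new: old is not None and abs(new - old) / old > 0.05,
        notify=lambda cq_id, value: print(f"[{cq_id}] new value: {value}"),
    ))
    engine.on_update("quotes", {"IBM": 100.0})   # first observation, no alert
    engine.on_update("quotes", {"IBM": 107.0})   # 7% change -> alert fires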

19
Modeling performance of internet-based services using causal reasoning. Tariq, Muhammad Mukarram Bin, 06 April 2010.
The performance of Internet-based services depends on many server-side, client-side, and network-related factors. Often, the
interaction among the factors or their effect on service performance
is not known or well-understood. The complexity of these services
makes it difficult to develop analytical models. Lack of models
impedes network management tasks, such as predicting performance while
planning for changes to service infrastructure, or diagnosing causes
of poor performance.
We posit that we can use statistical causal methods to model
performance for Internet-based services and facilitate performance-related network management tasks. Internet-based services are
well-suited for statistical learning because the inherent variability
in many factors that affect performance allows us to collect
comprehensive datasets that cover service performance under a wide
variety of conditions. The conditional distributions estimated from such datasets represent the functions that govern service performance and the dependencies that are inherent in the service infrastructure. These functions and dependencies are accurate and can be used in lieu of analytical models to reason about system performance, such as predicting the performance of a service when changing some factors, finding causes of poor performance, or isolating the contribution of individual factors to observed performance.
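A rough sketch of that idea: empirical conditional distributions learned from observed data can stand in for an analytical model when answering a what-if question about one factor. The Python fragment below is an assumed toy model with invented variable names (a single latency factor and a response time); it only illustrates reusing an empirical conditional distribution and is not the dissertation's actual estimation procedure.

    # Toy sketch of what-if estimation from empirical conditional
    # distributions, in place of an analytical performance model.
    import random
    import statistics

    random.seed(7)

    # Synthetic "observed" dataset of (rtt_ms, response_time_ms) pairs.
    observations = []
    for _ in range(5000):
        rtt = random.uniform(20, 200)
        response_time = 2 * rtt + random.gauss(50, 10)  # unknown to the estimator
        observations.append((rtt, response_time))

    def bin_of(rtt_ms: float, width: float = 10.0) -> int:
        # Discretize the conditioning factor into bins.
        return int(rtt_ms // width)

    # Empirical conditional distribution: rtt bin -> observed response times.
    cond = {}
    for rtt, rt in observations:
        cond.setdefault(bin_of(rtt), []).append(rt)

    def what_if_response_time(new_rtt: float) -> float:
        """Predict response time for a hypothetical rtt by reusing the
        observed response times whose rtt fell in the same bin."""
        samples = cond.get(bin_of(new_rtt), [])
        return statistics.median(samples) if samples else float("nan")

    baseline = statistics.median(rt for _, rt in observations)
    predicted = what_if_response_time(60.0)   # what if rtt were 60 ms?
    print(f"baseline median: {baseline:.1f} ms, predicted at 60 ms rtt: {predicted:.1f} ms")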
We present three systems, What-if Scenario Evaluator (WISE), How to
Improve Performance (HIP), and Network Access Neutrality Observatory
(NANO), that use statistical causal methods to facilitate network
management tasks. WISE predicts performance for what-if configurations
and deployment questions for content distribution networks. For this,
WISE learns the causal dependency structure among the latency-causing
factors, and when one or more factors are changed, WISE estimates the effect on other factors using the dependency structure. HIP extends
WISE and uses the causal dependency structure to invert the
performance function, find causes of poor performance, and help answer questions about how to improve performance or achieve
performance goals. NANO uses causal inference to quantify the impact
of discrimination policies of ISPs on service performance. NANO is the
only tool to date for detecting destination-based discrimination
techniques that ISPs may use.
We have evaluated these tools by applying them to large-scale Internet-based services and by experiments on the wide-area Internet.
WISE is actively used at Google for predicting network-level and
browser-level response time for Web search for new datacenter
deployments. We have used HIP to find causes of high-latency Web
search transactions in Google, and identified many cases where
high-latency transactions can be significantly mitigated with simple
infrastructure changes. We have evaluated NANO using experiments on the wide-area Internet and also made the tool publicly available to
recruit users and deploy NANO at a global scale.

20
Proportional integrator with short-lived flows adjustment. Kim, Minchong. January 2004.
Thesis (M.S.)--Worcester Polytechnic Institute. Keywords: PI; PISA; PIMC; cwnd; TCP. Includes bibliographical references (p. 49-50).