1 |
Distributed Crawling of Rich Internet Applications
Mir Taheri, Seyed Mohammad. January 2015 (has links)
Web crawlers visit internet applications, collect data, and learn about new web pages from visited pages. Web crawlers have a long and interesting history. The rapid expansion of the web and the complexity added to web applications have made crawling a very challenging process. Different solutions have been proposed to reduce the time and cost of crawling. A new generation of web applications, known as Rich Internet Applications (RIAs), poses major challenges to web crawlers. RIAs shift a portion of the computation to the client side. Shifting a portion of the application to the client browser affects the web crawler in two ways: first, the one-to-one correlation between the URL and the state of the application that exists in traditional web applications is broken; second, reaching a state of the application is no longer a simple matter of navigating to the target URL, but often means navigating to a seed URL and executing a chain of events from it. Due to these challenges, crawling a RIA can take a prohibitively long time. This thesis studies the application of distributed computing and parallel processing principles to RIA crawling in order to reduce crawl time. We propose different algorithms to crawl a RIA concurrently over several nodes. The proposed algorithms are used as building blocks to construct a distributed crawler of RIAs. The different algorithms represent different trade-offs between communication and computation, and this thesis explores the effect of making different trade-offs on the time it takes to crawl RIAs. We study the cost of running a distributed RIA crawl with a client-server architecture and compare it with a peer-to-peer architecture. We further study the distribution of different crawling strategies, namely Breadth-First search, Depth-First search, a Greedy algorithm, and a Probabilistic algorithm. To measure the effect of different design decisions in practice, a prototype of each algorithm is implemented. The implemented prototypes are used to obtain empirical performance measurements and to refine the algorithms. The final refined algorithm is used for experimentation with a wide range of applications under different circumstances. The thesis concludes with two theoretical studies of load balancing algorithms and distributed component-based crawling, setting the stage for future work.
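As a concrete illustration of the state-based crawling the thesis distributes, the short Python sketch below explores a toy event/state model breadth-first and assigns each discovered state to a crawler node by hashing. The transition table, node count and hash partitioning are illustrative assumptions, not the thesis's actual design.

from collections import deque

# Toy RIA model: each state offers events, and each (state, event)
# pair deterministically leads to a resulting state.
TRANSITIONS = {
    "seed": {"openMenu": "menu", "search": "results"},
    "menu": {"close": "seed", "selectItem": "detail"},
    "results": {"back": "seed"},
    "detail": {"close": "menu"},
}

NUM_NODES = 2  # assumed size of the crawling cluster

def owner(state):
    # Partition states across nodes by hashing (stable within one run).
    return hash(state) % NUM_NODES

def crawl_bfs(seed):
    # Breadth-first exploration; "executing" an event is a table lookup
    # here, whereas a real RIA crawler drives a browser and awaits the DOM.
    visited = {seed}
    frontier = deque([seed])
    while frontier:
        state = frontier.popleft()
        for event, nxt in TRANSITIONS[state].items():
            print(f"node {owner(state)}: {state!r} --{event}--> {nxt!r}")
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return visited

print(f"explored {len(crawl_bfs('seed'))} states")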
|
2 |
M-crawler: Crawling Rich Internet Applications Using Menu Meta-model
Choudhary, Suryakant. 27 July 2012 (has links)
Web applications have come a long way, both in terms of adoption to provide information and services and in terms of the technologies used to develop them. With the emergence of richer and more advanced technologies such as Ajax, web applications have become more interactive, responsive and user-friendly. These applications, often called Rich Internet Applications (RIAs), changed traditional web applications in two primary ways: dynamic manipulation of client-side state and asynchronous communication with the server. At the same time, such techniques introduce new challenges. Among these, an important one is the difficulty of automatically crawling these new applications. Crawling is not only important for indexing content but also critical to web application assessment, such as testing for security vulnerabilities or accessibility. Traditional crawlers are no longer sufficient for these newer technologies, and crawling support for RIAs is either nonexistent or far from perfect. There is a need for an efficient crawler for web applications developed using these new technologies. Further, as more and more enterprises use these technologies to provide their services, the requirement for a better crawler becomes inevitable. This thesis studies the problems associated with crawling RIAs, which is fundamentally more difficult than crawling traditional multi-page web applications. It also presents an efficient RIA crawling strategy and compares it with existing methods.
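As a rough, hedged sketch of one intuition behind a menu meta-model (the observation format below is invented for illustration, not taken from the thesis), an event can be treated as "menu-like" when every observed execution of it, regardless of source state, led to the same resulting state, so re-executing it elsewhere can be deprioritised:

from collections import defaultdict

# (source_state, event, resulting_state) triples from prior exploration
observations = [
    ("home", "helpLink", "helpPage"),
    ("cart", "helpLink", "helpPage"),
    ("home", "addItem", "cart"),
    ("detail", "addItem", "cartWithTwo"),
]

def classify(obs):
    # Group resulting states per event; a single resulting state across
    # all observations marks the event as menu-like.
    targets = defaultdict(set)
    for _src, event, dst in obs:
        targets[event].add(dst)
    return {e: ("menu" if len(d) == 1 else "non-menu") for e, d in targets.items()}

print(classify(observations))  # {'helpLink': 'menu', 'addItem': 'non-menu'}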
|
3 |
Efficient Reconstruction of User Sessions from HTTP Traces for Rich Internet Applications
Hooshmand, Salman. January 2017 (has links)
The HTTP traffic generated by users' interactions with a Web application can be logged for further analysis. In this thesis, we present the ``Session Reconstruction'' problem: reconstructing user interactions from the recorded request/response logs of a session. Reconstruction is especially useful when the only available information about the session is its HTTP trace, as could be the case during a forensic analysis of an attack on a website.
New Web technologies such as AJAX and DOM manipulation have made Web applications more responsive and smoother, leading to what are sometimes called ``Rich Internet Applications'' (RIAs). Despite the benefits of RIAs, previous session reconstruction methods designed for traditional Web applications are no longer effective. Recovering information from a log is significantly more challenging for RIAs than for classical Web applications, because the HTTP traffic often contains only application data and no obvious clues about what the user did to trigger that traffic.
This thesis studies different techniques for the efficient reconstruction of RIA sessions. We define the problem in the context of client/server applications and propose a solution. We present different algorithms that make session reconstruction practical: learning mechanisms that guide the reconstruction process efficiently, techniques for recovering user inputs and handling client-side randomness, and algorithms for detecting actions that generate no HTTP traffic. In addition, to further reduce reconstruction time, we propose a distributed architecture to reconstruct a RIA session concurrently over several nodes.
To measure the effectiveness of the proposed algorithms, a prototype called D-ForenRIA was implemented. The prototype consists of a proxy and a set of browsers. The browsers are responsible for trying candidate actions on each state, and the proxy, which holds the observed HTTP trace, is responsible for responding to the browsers' requests and validating the actions attempted on each state. We have used this tool to measure the effectiveness of the proposed techniques during the reconstruction process. The results of our evaluation on several RIAs show that the proposed solution can efficiently reconstruct user sessions in practice.
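The browser/proxy division of labour can be pictured with a small Python sketch: given the recorded trace and a catalogue of candidate actions with the requests each would generate, a greedy matcher recovers an action sequence that replays the trace. The URL matching and the action catalogue are simplifying assumptions, not D-ForenRIA's actual mechanics.

recorded_trace = ["/api/login", "/api/cart", "/api/checkout"]

# Candidate user actions and the requests each would generate if tried.
candidate_actions = {
    "click_login": ["/api/login"],
    "click_cart": ["/api/cart"],
    "click_checkout": ["/api/checkout"],
    "click_about": [],  # generates no traffic; needs separate detection
}

def reconstruct(trace, actions):
    # Greedily pick, at each position, an action whose generated requests
    # match the next unconsumed requests of the trace.
    session, pos = [], 0
    while pos < len(trace):
        for name, reqs in actions.items():
            if reqs and trace[pos:pos + len(reqs)] == reqs:
                session.append(name)
                pos += len(reqs)
                break
        else:
            return None  # no candidate action explains the next request
    return session

print(reconstruct(recorded_trace, candidate_actions))
# ['click_login', 'click_cart', 'click_checkout']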
|
4 |
A Priority-Based Admission Control Scheme for Commercial Web Servers
Nafea, Ibtehal T., Younas, M., Holton, Robert, Awan, Irfan U. January 2014 (has links)
This paper investigates the performance and load management of web servers deployed on commercial websites. Such websites offer various services, such as flight and hotel booking, online banking, stock trading, and product purchases. Customers increasingly rely on these round-the-clock services, which are easier and (generally) cheaper to order. However, the growing number of customer requests places greater demand on the web servers, leading to server overload and, consequently, an inadequate level of service. This paper addresses these issues and proposes an admission control scheme based on a class-based priority scheme that classifies customers' requests into different classes. The proposed scheme is formally specified using the π-calculus and is implemented as a Java-based prototype system. The prototype system is used to simulate the behaviour of commercial website servers and to evaluate their performance in terms of response time, throughput, arrival rate, and the percentage of dropped requests. Experimental results demonstrate that the proposed scheme significantly improves the performance of high-priority requests without adversely affecting low-priority requests.
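A hedged sketch of the class-based idea (not the paper's formal specification): incoming requests map to priority classes, and lower-priority classes are shed first as server load rises. The request types and thresholds below are invented for illustration.

# 0 = highest priority; request types and class assignments are assumed.
PRIORITY = {"checkout": 0, "browse": 1, "static": 2}
# A class is admitted only while measured load stays below its threshold.
LOAD_THRESHOLDS = {0: 0.95, 1: 0.80, 2: 0.60}

def admit(request_type, current_load):
    cls = PRIORITY.get(request_type, 2)  # unknown types get lowest priority
    return current_load < LOAD_THRESHOLDS[cls]

for load in (0.5, 0.7, 0.9):
    print(f"load={load:.1f}:", {t: admit(t, load) for t in PRIORITY})
# At load 0.9 only high-priority 'checkout' requests are still admitted.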
|
5 |
Model-driven development of Rich Internet Applications on the Semantic Web
Hermida Carbonell, Jesús María. 09 April 2013 (has links)
In the last decade, the Web 2.0 brought technological changes in how users and applications, and applications themselves, interact and communicate. Rich Internet Applications (RIAs) offer user interfaces with a higher level of interactivity, similar to desktop interfaces, embed multimedia contents and minimise the communication between client and server components. Nonetheless, RIAs behave as black boxes that show information in a user-friendly manner, but this information can only be visualised gradually, according to the events the user triggers in the Web browser, which limits access by software agents, e.g., Web search engines. In the context of the present Internet, where value has moved from Web applications to the data they manage, the use of open technological solutions is a necessity. The Semantic Web was aimed at solving issues of semantic incompatibility among systems by means of standard techniques and technologies (from knowledge representation and sharing to trust and security), which can be the key to solving the issues detected in RIAs. Although some solutions exist, they do not cover all the possible types of RIA, or they depend on the technology chosen to implement the Web application.
As a first contribution, this thesis introduces the concept of Semantic Rich Internet Application (SRIA), which can be defined as a RIA that extensively uses Semantic Web technologies to provide a representation of its contents and to reuse existing knowledge sources on the Web. The solution proposed is adapted to the existing RIA types and technologies. The thesis presents the architecture proposed for this type of application, describing its software modules and components. The solution was evaluated on a collection of case studies.
The development of Web applications, especially in the context of the Semantic Web, is traditionally a manual process and, given the complexity of SRIA applications, one prone to errors. The application of model-driven engineering techniques can reduce the cost of development and maintenance (in terms of time and resources) of the proposed applications, as their use in other types of Web applications has demonstrated. Moreover, it can facilitate the adoption of the solution by the community. In the light of these issues, as a second contribution, this thesis presents the Sm4RIA methodology (Semantic Models for RIA) for the development of SRIAs, as an extension of the OOH4RIA methodology. The thesis describes the development process, the models (with the corresponding metamodels) and the transformations included in the methodology. The evaluation of the methodology consisted in the development of the proposed case studies. The application of this model-driven methodology can speed up the development of these Web applications and simplify the reuse of external knowledge sources. Finally, the thesis describes the Sm4RIA extension for OIDE, an extension of the OIDE CASE tool that implements all the elements of the Sm4RIA methodology.
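One way to picture the core idea of an SRIA is that each client-side state mirrors its content as RDF so software agents can read what the browser renders. The minimal Python sketch below emits N-Triples for one state; the subject and vocabulary URIs are placeholders, not Sm4RIA's own vocabulary.

def state_to_ntriples(state_id, properties):
    # Serialise one application state's content as N-Triples.
    subject = f"<http://example.org/app/state/{state_id}>"
    lines = []
    for prop, value in properties.items():
        lines.append(f'{subject} <http://example.org/vocab#{prop}> "{value}" .')
    return "\n".join(lines)

print(state_to_ntriples("productView42", {"title": "Blue Mountain Bike", "price": "349.00"}))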
|
6 |
Web-based Geographical Information Systems Modeling and Development with Rich Internet Applications Technologies (Modelagem e desenvolvimento de sistemas de informações geográficas para web com tecnologias de rich internet applications)
Leonardo Chaves Machado. 29 January 2009 (has links)
GIS are becoming increasingly popular, mainly through the Internet. The so-called Web-GIS, however, when developed with traditional web technologies, inherit the same weaknesses as those technologies, namely synchronous communication and poor user interaction. Rich Internet Application (RIA) technologies are an alternative that addresses these problems. This dissertation demonstrates the feasibility of using them to develop Web-GIS, offering a set of code and strategies for future development, based on a basic set of operations that a Web-GIS must support. Additionally, it proposes UWE-R, an extension to an existing web engineering methodology, for modeling RIAs and Web-GIS.
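As an example of the basic Web-GIS operations the dissertation builds on, the sketch below computes which map tile a rich client would fetch asynchronously for a given coordinate and zoom level, using the standard Web Mercator tiling scheme; the tile server URL is a placeholder.

import math

def tile_for(lat, lon, zoom):
    # Standard Web Mercator (slippy map) tile indices for a WGS84 coordinate.
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

zoom = 12
x, y = tile_for(-22.91, -43.17, zoom)  # Rio de Janeiro
print(f"https://tiles.example.org/{zoom}/{x}/{y}.png")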
|
7 |
Interactive digital technologies and the user experience of time and place
Fishenden, Jerry. January 2013 (has links)
This research examines the relationship between the development of a portfolio of interactive digital techniques and compositions and its impact on user experiences of time and place. It is designed to answer two research questions: (i) What are some effective methods and techniques for evoking an enhanced awareness of past time and place using interactive digital technologies (IDTs)? (ii) How can users play a role in improving the development and impact of interfaces made with IDTs? The principal creative and thematic element of the portfolio is the concept of the palimpsest and its artistic potential to reveal visual and aural layers that lie behind the landscapes and soundscapes around us. This research thus contributes to an evolving body of creative interest in palimpsests, developing techniques and compositions in the context of testing, collating user experience feedback, and improving the ways in which IDTs enable an artistic exploration and realisation of hidden layers, both aural and visual, of the past of place. An iterative theory-composition-testing methodology is developed and applied to optimise techniques for enabling users to navigate multiple layers of content, as well as to find methods that evoke an increased emotional connection with the past of place. This iterative realisation cycle comprises four stages: content origination, pre-processing, mapping and user interaction. The user interaction stage forms an integral element of the research methodology, with the techniques subjected to formalised user experience testing, both to assist their further refinement and to assess their value in evoking an increased awareness of time and place. Online usability testing gathered 5,451 responses over three years of iterative cycles of composition development and refinement, with more detailed usability lab sessions involving eighteen participants. Usability lab response categories span efficiency, accuracy, recall and emotional response. The portfolio includes a variety of interactive techniques developed and improved during its testing and refinement. User experience feedback data plays an essential role in influencing the development and direction of the portfolio, helping refine techniques to evoke an enhanced awareness of the past of place by identifying those that worked most, and least, effectively for users. This includes an analysis of the role of synthetic and authentic content in user perception of various digital techniques and compositions.
The contributions of this research include:
• the composition portfolio and the associated IDT techniques originated, developed, tested and refined in its research and creation
• the research methodology developed and applied, utilising iterative development of aspects of the portfolio informed by user feedback obtained both online and in usability labs
• the findings from user experience testing, in particular the extent to which various visual and aural techniques help evoke a heightened sense of the past of place
• an exploration of the extent to which the usability testing substantiates that user responses to the compositions have the potential to establish an evocative connection that communicates a sense close to that of Barthes' punctum (something that pierces the viewer) rather than solely that of the studium
• the role of synthetic and authentic content on user perception and appreciation of the techniques and compositions
• the emergence of an analytical framework with the potential for wider application to the development, analysis and design of IDT compositions
|