1 |
Logic Programming Tools for Dynamic Content Generation and Internet Data Mining. Gupta, Anima, 12 1900
The phenomenal growth of Information Technology requires us to elicit, store and maintain huge volumes of data. Analyzing this data for various purposes is becoming increasingly important. Data mining consists of applying data analysis and discovery algorithms that, under acceptable computational efficiency limitations, produce a particular enumeration of patterns over the data. We present two techniques for data mining based on Logic Programming tools. Data mining analyzes data by extracting patterns that describe its structure and discovers correlations in the form of rules. We distinguish analysis methods as visual and non-visual and present one application of each. We show that grounding our work in Logic Programming makes some of the very complex tasks related to Web-based data mining and dynamic content generation simple to implement in a uniform framework.
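As a rough illustration of rule-style pattern extraction of the kind described above, the following Python sketch counts item co-occurrences and keeps pairs whose support clears a threshold; the thesis itself uses Logic Programming tools, and the data and threshold here are purely hypothetical.

```python
from itertools import combinations
from collections import Counter

# Hypothetical transaction data; each set lists items seen together.
transactions = [
    {"news", "sports", "weather"},
    {"news", "weather"},
    {"sports", "scores"},
    {"news", "sports", "scores"},
]

min_support = 0.5  # fraction of transactions a pattern must appear in

pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        pair_counts[pair] += 1

# Keep pairs whose support clears the threshold; each surviving pair can be
# read as a simple rule "item A co-occurs with item B".
rules = {pair: count / len(transactions)
         for pair, count in pair_counts.items()
         if count / len(transactions) >= min_support}
print(rules)
```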
|
2 |
Redakční systém s podporou dynamicky generovaného obsahu / CMS Supporting Dynamically Generated Content. Nádraský, Václav, January 2012
This thesis covers the design and development of an easily extensible content management system. It allows websites to be built out of components that can be placed anywhere in a web site. These components can contain complex application logic that is independent of the layout of user controls. The content management system also contains a component that allows any data from any database to be placed into web site content without programming or writing SQL queries.
|
3 |
Reducing Data Copying Overhead in Web Servers. Yeung, Gary, 06 1900
Web servers that generate dynamic content are widely used in the development of Internet applications. With the Internet deeply integrated into people’s lifestyles, the service requirements of Internet applications have increased significantly. This trend intensifies the need to improve server performance in dynamic content generation.
In this thesis, we describe the opportunity to improve server performance by co-locating the web server and the application server on the same machine. We identify related work and discuss their respective advantages and deficiencies. We then introduce and explain our technique that passes the client socket’s file descriptor from the web server process to the application server. This allows the application server to reply to the client directly, reducing the amount of data copied and improving server performance. Experiments were designed to evaluate the performance of this technique and provide a detailed analysis of processor time and data copying during response delivery. A performance comparison against alternative approaches has been performed. We analyze the results to understand factors in data copying efficiency and determine that cache misses are an important factor in server performance.
There are four major contributions in this thesis. First, we show that in multiprocessor environments, co-locating web servers and application servers can take advantage of faster communication. Second, we introduce a new technique that reduces the amount of data copied by two-thirds. This technique requires no modifications to the application server code (unlike other existing techniques), and it is applicable in a variety of systems, allowing easy adoption in production environments. Third, we provide a performance comparison against other approaches and raise questions regarding data copying efficiency. Our technique attains an average peak throughput of 1.27 times that of FastCGI with Unix domain sockets in both uniprocessor and multiprocessor environments. Finally, our analysis of the effect of cache misses on server performance provides valuable insights into why these benefits are obtained.
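A minimal sketch of the core mechanism described above: passing an open client socket from one process to another over a Unix domain socket so the receiver can reply to the client directly. It uses Python's socket.send_fds/recv_fds (available since Python 3.9) rather than the thesis's actual server code; the process layout and the canned response are illustrative.

```python
import socket

# Web server side: after accept(), hand the client's file descriptor to the
# application server over a pre-established AF_UNIX stream socket.
def hand_off_client(client_sock: socket.socket, app_server_sock: socket.socket) -> None:
    # A one-byte payload is enough; the ancillary data carries the fd itself.
    socket.send_fds(app_server_sock, [b"x"], [client_sock.fileno()])
    client_sock.close()  # the application server now owns a duplicate of the fd

# Application server side: receive the fd and reply to the client directly,
# avoiding the extra copy back through the web server.
def serve_handed_off_client(app_server_sock: socket.socket) -> None:
    _msg, fds, _flags, _addr = socket.recv_fds(app_server_sock, 1, 1)
    client_sock = socket.socket(fileno=fds[0])
    client_sock.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    client_sock.close()
```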
|
4 |
Caching dynamic data for web applications. Mahdavi, Mehregan, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2006
Web portals are a rapidly growing class of applications, providing a single interface for accessing different sources (providers). The results from the providers are typically obtained by each provider querying a database and returning an HTML or XML document. Performance, and in particular fast response time, is one of the critical issues in such applications. User dissatisfaction increases dramatically with increasing response time, resulting in abandonment of Web sites, which in turn could result in loss of revenue for the providers and the portal. Caching is one of the key techniques that address the performance of such applications. In this work we focus on improving the performance of portal applications via caching. We discuss the limitations of existing caching solutions in such applications and introduce a caching strategy based on collaboration between the portal and its providers. Providers trace their logs, extract information to identify good candidates for caching, and notify the portal. Caching at the portal is decided based on scores calculated by providers and associated with objects. We evaluate the performance of the collaborative caching strategy using simulation data. We show how providers can trace their logs, calculate cache-worthiness scores for their objects and notify the portal. We also address the issue of heterogeneous scoring policies across providers and introduce mechanisms to regulate caching scores. Finally, we show how the portal and providers can synchronize their metadata in order to minimize the overhead associated with collaboration for caching.
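A toy sketch, assuming hypothetical log fields and a made-up scoring formula, of how a provider might turn its access log into per-object cache-worthiness scores before notifying the portal:

```python
from collections import defaultdict

def cache_worthiness(log_entries):
    """Score each object by how often it is read relative to how often it is
    updated; higher scores suggest better caching candidates."""
    reads = defaultdict(int)
    updates = defaultdict(int)
    for _timestamp, obj, kind in log_entries:
        if kind == "read":
            reads[obj] += 1
        else:  # "update"
            updates[obj] += 1
    return {obj: reads[obj] / (1 + updates[obj]) for obj in reads}

# The provider would report these scores to the portal, which then caches
# the highest-scoring objects first.
log = [(1, "catalog", "read"), (2, "catalog", "read"),
       (3, "news", "update"), (4, "news", "read")]
scores = cache_worthiness(log)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```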
|
5 |
Web Market Analysis: Static, Dynamic and Content Evaluation. Erdal, Feride, 01 September 2012
The importance of web services increases as technology improves and the need for challenging e-commerce strategies grows. This thesis focuses on web market analysis of web sites, evaluating them from static, dynamic and content perspectives. Firstly, web site evaluation methods and web analytic tools are introduced. Then the evaluation methodology is described from these three perspectives. Finally, results obtained from the evaluation of 113 web sites are presented as well as their correlations.
|
6 |
Dynamically Downloading Games to Minimise Start-up Time, Disk Space and Bandwidth Requirements. Ek Johansson, Filip, January 2022
Video games are increasing in size. Many computer and console games nowadays are well over a hundred gigabytes, which can create significant delays between starting the download and being able to play the game. The game might also take up a great percentage of the user’s storage drive. This paper creates a content subsystem for the Unreal Engine that allows packages of game content to be downloaded and mounted into the game at runtime. It also provides a method of building Unreal Engine games in such a way that they can be split into packages. Finally, the subsystem manages all the packages and their relations to each other, downloading dependent ones and removing ones that will not be used again. The solution is evaluated on how much it decreases the time it takes to download and start a game, how much disk space it saves and how it affects the environment in comparison to a conventionally downloaded game. Results show that such a system significantly reduces start-up time and disk usage, and can also reduce greenhouse gas emissions, depending on how interconnected the game packages are.
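A simplified Python sketch of the package-management logic described above (the actual subsystem is built inside the Unreal Engine in C++): it resolves which packages must be downloaded before a requested package can be mounted, and identifies installed packages that no longer have any dependents. The package names and dependency map are invented for illustration.

```python
def packages_to_download(requested, dependencies, installed):
    """Return the packages needed for `requested`, dependencies first,
    skipping anything already installed."""
    order, seen = [], set()

    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in dependencies.get(pkg, []):
            visit(dep)
        if pkg not in installed:
            order.append(pkg)

    visit(requested)
    return order

def removable_packages(installed, dependencies, still_needed):
    """Installed packages that no still-needed package depends on can be removed."""
    keep, stack = set(), list(still_needed)
    while stack:
        pkg = stack.pop()
        if pkg in keep:
            continue
        keep.add(pkg)
        stack.extend(dependencies.get(pkg, []))
    return installed - keep

deps = {"level2": ["shared_art"], "level3": ["shared_art", "boss_music"]}
print(packages_to_download("level3", deps, installed={"shared_art"}))
print(removable_packages({"level2", "shared_art"}, deps, still_needed={"level3"}))
```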
|
7 |
Towards Efficient Delivery of Dynamic Web Content. Ramaswamy, Lakshmish Macheeri, 26 August 2005
Advantages of cache cooperation in edge cache networks serving dynamic web content were studied. The design of a cooperative edge cache grid, a large-scale cooperative edge cache network for delivering highly dynamic web content with varying server update frequencies, was presented. A cache clouds-based architecture was proposed to promote low-cost cache cooperation in the cooperative edge cache grid. An Internet landmarks-based scheme, called the selective landmarks-based server-distance-sensitive clustering scheme, for grouping edge caches into cooperative clouds was presented. A dynamic hashing technique for efficient, load-balanced, and reliable document lookups and updates was presented. A utility-based scheme for cooperative document placement in cache clouds was proposed. The proposed architecture and techniques were evaluated through trace-based simulations using both real-world and synthetic traces. Results showed that the proposed techniques provide significant performance benefits.
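As a rough illustration of hash-based document lookup within a cache cloud, the following Python sketch uses a plain consistent-hashing ring; this is an assumption for illustration, not the dynamic hashing technique of the thesis, and the cache names are made up.

```python
import hashlib
from bisect import bisect_right

class CacheCloudRing:
    """Map document URLs to caches in a cloud via a hash ring, so lookups
    and updates stay balanced as caches join or leave."""

    def __init__(self, caches, replicas=64):
        # Each cache gets several virtual positions on the ring.
        self.ring = sorted(
            (self._hash(f"{cache}#{i}"), cache)
            for cache in caches for i in range(replicas)
        )

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def cache_for(self, url):
        # Walk clockwise to the first cache position at or after the URL's hash.
        keys = [h for h, _ in self.ring]
        idx = bisect_right(keys, self._hash(url)) % len(self.ring)
        return self.ring[idx][1]

ring = CacheCloudRing(["edge-a", "edge-b", "edge-c"])
print(ring.cache_for("https://example.com/news/today"))
```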
A framework for automatically detecting cache-effective fragments in dynamic web pages was presented. Two types of fragments in web pages, namely shared fragments and lifetime-personalization fragments, were identified and formally defined. A hierarchical fragment-aware web page model, called the augmented-fragment tree model, was proposed. An efficient algorithm to detect maximal fragments that are shared among multiple documents was proposed. A practical algorithm for detecting fragments based on their lifetime and personalization characteristics was designed. The proposed framework and algorithms were evaluated through experiments on real web sites. The effect of adopting the detected fragments on web caches and origin servers was studied experimentally.
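To make shared-fragment detection concrete, here is a toy Python sketch that serializes whole subtrees of parsed pages and reports those appearing in more than one document; it ignores the maximality and lifetime/personalization analysis of the actual framework, and the sample pages are invented.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def shared_fragments(documents):
    """Map each serialized subtree to the documents containing it; subtrees
    found in two or more documents are candidate shared fragments."""
    seen = defaultdict(set)
    for doc_id, markup in documents.items():
        root = ET.fromstring(markup)
        for elem in root.iter():
            seen[ET.tostring(elem)].add(doc_id)
    return {frag: docs for frag, docs in seen.items() if len(docs) > 1}

docs = {
    "home": "<div><nav><a>Top stories</a></nav><p>weather</p></div>",
    "sports": "<div><nav><a>Top stories</a></nav><p>scores</p></div>",
}
for frag, where in shared_fragments(docs).items():
    print(frag.decode(), "shared by", sorted(where))
```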
|
8 |
How to Build a Web Scraper for Social Media. Lloyd, Oskar; Nilsson, Christoffer, January 2019
In recent years, the act of scraping websites for information has become increasingly relevant. However, along with this increase in interest, the internet has also grown substantially, and advances and improvements to websites over the years have in fact made them more difficult to scrape. One key reason for this is that scrapers account for a significant portion of the traffic to many websites, and so developers often implement anti-scraping measures along with the Robots Exclusion Protocol (robots.txt) to try to stymie this traffic. The popular use of dynamically loaded content – content which loads after user interaction – poses another problem for scrapers. In this paper, we have researched what kinds of issues commonly occur when scraping and crawling websites – more specifically when scraping social media – and how to solve them. In order to understand these issues better and to test solutions, a literature review was performed and design and creation methods were used to develop a prototype scraper using the frameworks Scrapy and Selenium. We found that automating interaction with dynamic elements worked best to solve the problem of dynamically loaded content. We also theorize that adding an artificial random delay when scraping and randomizing the intervals between visits to a website would counteract some of the anti-scraping measures. Another, smaller aspect of our research was the legality and ethics of scraping. Further thoughts and comments on potential solutions to other issues have also been included.
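A condensed sketch of the two mitigations discussed above: using Selenium to wait for dynamically loaded elements, and sleeping for a randomized interval between visits. The URLs and CSS selector are placeholders, and this is a sketch rather than the thesis's prototype.

```python
import random
import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()

def scrape_page(url, css_selector, max_wait=10):
    """Load a page and wait until the dynamically loaded content appears."""
    driver.get(url)
    element = WebDriverWait(driver, max_wait).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, css_selector))
    )
    return element.text

for url in ["https://example.com/feed?page=1", "https://example.com/feed?page=2"]:
    print(scrape_page(url, "div.post"))
    # Randomized delay between visits to look less like an automated client.
    time.sleep(random.uniform(2.0, 6.0))

driver.quit()
```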
|
9 |
Towards electronic assessment of web-based textual responses. Conradie, Martha Maria, 30 June 2003
Web-based learning should move away from static transmission of instruction to dynamic pages for effective interactive learning. Furthermore, automated assessment of learning should move beyond rigid quizzes or multiple-choice questions.
This study describes the design, development, implementation, testing and evaluation of two prototypes of an electronic assessment tool to enhance the effectiveness of automated assessment. The tool was developed in the context of a distance-learning organisation and was built according to a development research model entailing a cyclic design-intervention-outcomes process.
The first variant, E-Grader, was developed to test an algorithm for assigning marks to open-ended textual responses. The second variant, Web-Grader, was an interactive web-based extension of E-Grader. It provided immediate interactive support to students as they responded textually to content-based questions.
This multi-disciplinary study incorporates principles and techniques from software engineering, formal computer science, database development and instructional design in the quest towards electronic assessment of web-based textual inputs. / Computing / M.Sc. (Information Systems)
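The abstract does not spell out E-Grader's marking algorithm, but a simple keyword-weighting scheme of the kind often used for open-ended textual responses might look like the following hypothetical Python sketch (keywords, weights and marks are invented, not taken from the thesis):

```python
def score_response(response: str, keywords: dict[str, float], max_mark: float) -> float:
    """Award a fraction of max_mark for each expected keyword found in the
    student's answer; purely illustrative, not E-Grader's actual algorithm."""
    words = set(response.lower().split())
    earned = sum(weight for kw, weight in keywords.items() if kw.lower() in words)
    total = sum(keywords.values())
    return round(max_mark * earned / total, 2) if total else 0.0

print(score_response(
    "Dynamic pages support interactive learning",
    {"dynamic": 1.0, "interactive": 1.0, "assessment": 0.5},
    max_mark=5,
))
```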
|