1. XML Enabled Page-Grouping. Lee, Hor-Tzung. 04 July 2000.
As more and more services are provided via the WWW, reducing the perceived delay in Web interaction becomes very important for service providers who want to keep their users. Pre-fetching is an important technique for reducing latency in distributed systems like the WWW. Page pre-fetching exploits the idle period on the local machine, while the user is viewing the current page, to deliver pages the user is likely to access in the near future. Motivated by the pre-fetching idea and its practical obstacles, we propose a server-initiated page pre-fetching method, XML enabled page-grouping, to reduce Web latency.
In our page-grouping scheme, we anticipate the pages that the user is likely to access in the near future based on the hyperlink and referral access probabilities of each page. The predicted pages are grouped and converted into an XML file embedded in the page the user currently requests. If the user clicks a predicted link, the corresponding HTML is regenerated directly from the embedded XML document. The proposed scheme allows either batch grouping or on-line grouping. To avoid extra server load, we suggest that the grouping of static pages be performed periodically during the server's off-peak hours. Besides static pages, we also group dynamic pages generated by CGI and illustrate the feasibility with an example of a Web-based database query.
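The abstract does not include the grouping code itself; a minimal server-side sketch of the idea, assuming a static document root, illustrative referral probabilities, and a client-side script that later unpacks the XML island, might look like the following (all file paths, names, and thresholds are hypothetical):

```python
import xml.etree.ElementTree as ET

# Hypothetical referral access probabilities, e.g. estimated from server logs:
# P(next = target | current = page). Values are illustrative only.
REFERRAL_PROB = {
    "/index.html": {"/news.html": 0.41, "/services.html": 0.27, "/contact.html": 0.04},
}

PROB_THRESHOLD = 0.2  # group only links the user is reasonably likely to follow


def read_page(path):
    """Fetch the HTML body of a static page from an assumed document root."""
    with open("htdocs" + path, encoding="utf-8") as f:
        return f.read()


def build_group(current_page):
    """Return an XML document that bundles the predicted next pages."""
    group = ET.Element("page-group", attrib={"for": current_page})
    for target, prob in REFERRAL_PROB.get(current_page, {}).items():
        if prob >= PROB_THRESHOLD:
            page = ET.SubElement(group, "page", attrib={"href": target, "prob": str(prob)})
            page.text = read_page(target)  # page content carried as escaped text
    return ET.tostring(group, encoding="unicode")


def serve(current_page):
    """Append the XML group to the requested page; a client-side script would
    regenerate the HTML of a predicted page from this island when it is clicked."""
    html = read_page(current_page)
    island = '<script type="text/xml" id="page-group">' + build_group(current_page) + "</script>"
    return html.replace("</body>", island + "\n</body>")
```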
Compared to previous page pre-fetching techniques, our page-grouping method is simple and practical. By using an XML document, no add-on application modules are needed, because an XML processor is built into newer-generation browsers such as Microsoft IE 5.0. Furthermore, converting the grouped pages into an embedded XML document makes the predicted pages transparent to proxy servers, so the server-side speculative service works regardless of whether proxy servers sit between the server and its clients.
Using trace simulations based on the logs of the HTTP server http://www.kcg.gov.tw, we show that 67.84% of URL requests are referral requests. This means the probability is about 2/3 that a user retrieves the next Web page by clicking a hyperlink on the page currently being viewed. The logs are categorized by the kind of official service, and the statistics for every class of logs indicate that a page keeps a persistent referral access probability over a period of a few days. This encourages us to obtain a high hit rate for a predicted page by selecting it according to its high referral access probability.
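The exact log analysis is not shown in the abstract; one plausible way to estimate the referral share and per-target referral counts from Combined Log Format lines (the field layout and site prefix are assumptions) is:

```python
from collections import Counter

def referral_statistics(log_lines, site_prefix="http://www.kcg.gov.tw"):
    """Estimate the fraction of requests that are referral requests (Referer points
    back to one of our own pages) and count referral hits per target URL."""
    total, referral = 0, 0
    per_target = Counter()
    for line in log_lines:
        parts = line.split('"')
        if len(parts) < 6:          # not a well-formed combined-log line
            continue
        request, referer = parts[1], parts[3]   # '"GET /x HTTP/1.0"' and the Referer field
        fields = request.split()
        url = fields[1] if len(fields) > 1 else ""
        total += 1
        if referer.startswith(site_prefix):     # followed a hyperlink on one of our pages
            referral += 1
            per_target[url] += 1
    share = referral / total if total else 0.0
    return share, per_target
```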
Considering the bandwidth tradeoff, we discuss the hit rate, the traffic increase due to grouping, and the traffic intensity based on an M/M/1 model.
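The abstract does not spell out the queueing formulation. Under an M/M/1 model with request arrival rate λ and service rate μ, one hedged way to express the tradeoff (the symbols h for hit rate and g for relative traffic increase per grouped response are assumptions, not taken from the thesis) is:

```latex
% Traffic intensity without and with grouping:
% h = hit rate of the embedded predictive pages,
% g = relative traffic increase per grouped response.
\rho = \frac{\lambda}{\mu},
\qquad
\rho_{\mathrm{group}} \approx \frac{(1-h)(1+g)\,\lambda}{\mu}
```

Under this reading, grouping pays off when (1 - h)(1 + g) < 1, i.e. when the requests absorbed by the embedded pages outweigh the extra bytes shipped with each grouped response.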
For on-line grouping of dynamic pages, we take as an example a database-querying page on our simulated HTTP server. The experimental results lead to the conclusion that grouping the pages of a Web-based database query can reduce the server load of CGI processing, as the hit rate of the next page is about 18.48%.
2. The Impact of Database Querying Exactitude in Intellectual Property Law Practice in Brazil. Hemerly, Henrique. January 2020.
In current business affairs, most executive professions require one or several kinds of data consultation in their practice. Nowadays, the majority of data either is or has been digitalized, and digital data is defined as information represented in a discrete and discontinuous manner. For accessibility purposes, data are often stored in databases that organize information via design and modeling techniques to facilitate querying. Data retrieval is crucial, and if this process lacks efficacy, users are either presented incomplete information or forced to perform repetitive queries. Intellectual property (IP) lawyers in Brazil are among that group and must regularly access a private database for trademark information. While it contains all the data they require, the database's querying mechanisms are not tailored for IP law practice. The existing filters and lack of replacement algorithms often yield incomplete results, increasing the time and resources expended. With millions of dollars in potential lawsuits and work-hours at stake, the purpose of this study is to investigate whether an IP-focused querying system could help mitigate this resource waste, facilitating the trademark comparison work of IP lawyers. For this, a new orthographically and phonetically focused querying logic was implemented. ANOVA tests and a questionnaire were used to compare the existing querying mechanism with the new one in terms of time, work satisfaction and querying accuracy. Results indicate the new querying system significantly decreased the number of searches needed to execute a complete trademark analysis, while lawyers averaged the same amount of time to complete their work. Lawyers also reported higher work satisfaction levels and a perceived increase in work efficiency.
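The thesis's actual matching rules are not given in the abstract; a minimal sketch of an orthographically and phonetically blended similarity search, with purely illustrative folding rules, weights, and threshold, could look like this:

```python
import unicodedata
from difflib import SequenceMatcher

def normalize(mark: str) -> str:
    """Strip accents, case and outer whitespace so orthographic comparison
    ignores superficial differences between marks."""
    decomposed = unicodedata.normalize("NFD", mark)
    stripped = "".join(c for c in decomposed if unicodedata.category(c) != "Mn")
    return stripped.lower().strip()

# Rough phonetic folding for Portuguese-like spelling variants; these rules are
# illustrative only and are not taken from the thesis.
_PHONETIC_RULES = [("ph", "f"), ("ch", "x"), ("sh", "x"), ("ss", "s"), ("z", "s"),
                   ("w", "v"), ("y", "i"), ("qu", "c"), ("k", "c")]

def phonetic_key(mark: str) -> str:
    """Collapse a mark to a crude consonant skeleton so that sound-alike
    spellings (e.g. 'Kolor' vs 'Color') map to similar keys."""
    s = normalize(mark)
    for old, new in _PHONETIC_RULES:
        s = s.replace(old, new)
    return "".join(c for c in s if c not in "aeiou ")

def similarity(a: str, b: str) -> float:
    """Blend orthographic and phonetic similarity into one score in [0, 1];
    the 50/50 weighting is an assumption."""
    ortho = SequenceMatcher(None, normalize(a), normalize(b)).ratio()
    phono = SequenceMatcher(None, phonetic_key(a), phonetic_key(b)).ratio()
    return 0.5 * ortho + 0.5 * phono

def search(candidate: str, registry: list[str], threshold: float = 0.75) -> list[str]:
    """Return registered marks scoring above a (hypothetical) similarity threshold."""
    return [m for m in registry if similarity(candidate, m) >= threshold]

# Example: 'Kolor' should surface 'Color' even though a literal filter would miss it.
print(search("Kolor", ["Color", "Colibri", "Cortex"]))
```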
3. Developing an emulator for 360° video: intended for algorithm development. Lindskog, Eric. January 2020.
Streaming 360° video has become more commonplace, with content delivery services such as YouTube supporting it. By its nature, 360° video requires more bandwidth, as only a fraction of the image is actually in view while the user expects the same "in view" quality as with a regular video. Several studies and a lot of work have been done to mitigate this higher demand for bandwidth. One solution is advanced algorithms that take into account the direction the user is looking when fetching the video from the server; e.g., by fetching content that is not in the user's view at a lower quality, or by not fetching that data at all. Developing these algorithms is a time-consuming process, especially in the later stages where tweaking one parameter might require the video to be re-encoded, taking up time that could otherwise be spent on getting results and continuing to iterate on the algorithm. The viewer should also be considered, as the best experience might not correlate with the mathematically best solution calculated by the algorithm. This thesis presents a modular emulator that allows for easy implementation of fetching algorithms that make use of state-of-the-art techniques. It intends to reduce the time it takes to iterate over an algorithm by removing the need to set up a server and re-encode the video in all of the wanted quality levels whenever a parameter change would require it. It also makes it easy to include the viewer in the process, so that subjective performance is taken into consideration. The emulator is evaluated through the implementation and evaluation of two algorithms, one serving as a baseline to the second, which is based on an algorithm developed by another group of researchers. These algorithms are tested on two different types of 360° videos, under four different network conditions and with two values for the maximum buffer size. The results from the evaluation of the two algorithms suggest that the emulator functions as intended from a technical point of view, and as such fulfills its purpose. There is, however, future work that would further validate the emulator's performance with regard to replicating real scenarios, and a few examples are suggested.
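To make the viewport-dependent fetching idea concrete, a minimal sketch of one such algorithm (not the thesis's baseline or reference algorithm; tile layout, bitrate ladder, and budget are invented for illustration) might look like this:

```python
# Illustrative per-tile bitrate ladder in Mbit/s; real encodings would define this.
QUALITY_LADDER = [0.5, 1.5, 4.0]

def angular_distance(a, b):
    """Smallest absolute difference between two yaw angles in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def select_tile_qualities(tile_yaws, view_yaw, budget_mbits):
    """Assign a quality index to each tile: start everything at the lowest level,
    then spend the remaining bandwidth budget on the tiles closest to the viewing
    direction. This mirrors the general idea of viewport-dependent fetching."""
    order = sorted(range(len(tile_yaws)), key=lambda i: angular_distance(tile_yaws[i], view_yaw))
    choice = [0] * len(tile_yaws)                  # lowest quality everywhere
    spent = QUALITY_LADDER[0] * len(tile_yaws)
    for i in order:                                # upgrade in-view tiles first
        while choice[i] + 1 < len(QUALITY_LADDER):
            extra = QUALITY_LADDER[choice[i] + 1] - QUALITY_LADDER[choice[i]]
            if spent + extra > budget_mbits:
                break
            choice[i] += 1
            spent += extra
    return choice

# Example: eight tiles around the equator, user looking at 90 degrees, 10 Mbit/s budget.
print(select_tile_qualities([i * 45 for i in range(8)], view_yaw=90, budget_mbits=10))
```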
4. Performance of frameworks for declarative data fetching: An evaluation of Falcor and Relay+GraphQL. Cederlund, Mattias. January 2016.
With the rise of mobile devices claiming a greater and greater portion of internet traffic, optimizing the performance of data fetching becomes more important. A common technique for communication between subsystems of online applications is web services using the REpresentational State Transfer (REST) architectural style. However, REST imposes restrictions on flexibility when creating APIs, potentially introducing suboptimal performance and implementation difficulties. One proposed solution for increasing efficiency in data fetching is the use of frameworks for declarative data fetching. During 2015, two open source frameworks for declarative data fetching, Falcor and Relay+GraphQL, were released. Because of their recency, no information on how they impact performance could be found. Using the experimental approach, the frameworks were evaluated in terms of latency, data volume and number of requests, using test cases based on a real-world news application. The test cases were designed to test single requests as well as parallel and sequential data flows. The filtering abilities of the frameworks were also tested. The results showed that Falcor introduced an increase in response time for all test cases and an increased transfer size for all test cases but one, a case where the data was filtered extensively. The results for Relay+GraphQL showed a decrease in response time for parallel and sequential data flows, but an increase for data fetching corresponding to a single REST API access. The results for transfer size were also inconclusive, but the majority showed an increase; only when extensive data filtering was applied could the transfer size be decreased. Both frameworks could reduce the number of requests to a single request, independent of how many requests the corresponding REST API needed. These results led to the conclusion that, whenever possible, the best performance is achieved by creating custom REST endpoints. However, if this is not feasible, or there are other implementation benefits and the alternative is to resort to a "one-size-fits-all" API, Relay+GraphQL can be used to reduce response times for parallel and sequential data flows, but not for single request-response interactions. Data transfer size can only be reduced if the filtering offered by the frameworks reduces the response size more than the frameworks' increased request size.
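To illustrate why declarative data fetching can collapse parallel and sequential REST flows into a single request, here is a hedged sketch (the news-application endpoints, schema, and field names are hypothetical, not taken from the thesis):

```python
import requests

REST_BASE = "https://api.example-news.test"        # hypothetical REST endpoints
GRAPHQL_URL = "https://api.example-news.test/graphql"  # hypothetical GraphQL endpoint

def fetch_article_view_rest(article_id):
    """Sequential REST flow: the article must be fetched before its author can be
    requested, and the comments need a third round trip."""
    article = requests.get(f"{REST_BASE}/articles/{article_id}").json()
    author = requests.get(f"{REST_BASE}/users/{article['authorId']}").json()
    comments = requests.get(f"{REST_BASE}/articles/{article_id}/comments").json()
    return {"article": article, "author": author, "comments": comments}   # 3 round trips

def fetch_article_view_graphql(article_id):
    """Declarative flow: one request describes exactly the fields the view needs."""
    query = """
      query($id: ID!) {
        article(id: $id) {
          title
          body
          author { name }
          comments { text }
        }
      }"""
    resp = requests.post(GRAPHQL_URL, json={"query": query, "variables": {"id": article_id}})
    return resp.json()["data"]                                            # 1 round trip
```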
5. Jämförelse av prestanda mellan GraphQL och REST / Comparison of performance between GraphQL and REST. Onval, Sara; Dualeh, Iman. January 2020.
With today's rapid development of information technology, and with the increase in the number of people connected to the Internet, the development of web services is becoming more important. As web services play a significant role in the development of the Internet, the question arises as to which tools should be used to achieve the performance required by today's users. A common approach to implementing web services is the REST architecture. However, REST has performance weaknesses such as overfetching, underfetching, and maintenance of endpoints, which arise in cases where multiple endpoints are accessed. An alternative to REST is the GraphQL query language, which was developed to eliminate the weaknesses of REST and thus improve performance in data retrieval. In this work, performance measurements were conducted in which latency and data volume were measured for different types of queries for GraphQL, REST without cache, and REST with cache. Latency is the time interval between a client sending a request and the client receiving the response, and data volume refers to the size of the data in a response packet transmitted from a server to a client. REST with cache was included in the measurements because it has not been investigated in previous work comparing the performance of GraphQL and REST. The results showed that GraphQL performs better, in terms of both latency and data volume, than the other systems in cases where requests are made to two or more endpoints in REST. GraphQL performed worse than the other systems in terms of latency when only one endpoint in REST was contacted. However, GraphQL performed better than the other systems in terms of data volume in all cases. When comparing REST with and without cache, it turned out that the more endpoints that were contacted, the better REST without cache performed in terms of data volume, while REST with cache performed better in terms of latency.
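The abstract does not show the measurement setup; a minimal client-side harness for the two metrics, latency and response data volume, under assumed URLs, payloads, and repetition counts, might look like this:

```python
import time
import requests

def measure(url, method="GET", payload=None, repetitions=30):
    """Return mean latency (seconds) and mean response body size (bytes) for one
    query type. The URL, payload, and repetition count are placeholders; the
    thesis's actual test configuration is not given in the abstract."""
    latencies, sizes = [], []
    for _ in range(repetitions):
        start = time.perf_counter()
        if method == "GET":
            resp = requests.get(url)
        else:
            resp = requests.post(url, json=payload)
        latencies.append(time.perf_counter() - start)   # request sent -> response received
        sizes.append(len(resp.content))                  # size of the response body
    return sum(latencies) / len(latencies), sum(sizes) / len(sizes)

# Example comparison: one REST endpoint vs. an equivalent GraphQL query (hypothetical URLs).
rest_latency, rest_bytes = measure("https://api.example.test/articles/1")
gql_latency, gql_bytes = measure(
    "https://api.example.test/graphql", method="POST",
    payload={"query": "{ article(id: 1) { title author { name } } }"},
)
```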