51

A study of transient bottlenecks: understanding and reducing latency long-tail problem in n-tier web applications

Wang, Qingyang 21 September 2015 (has links)
An essential requirement of cloud computing and data centers is to simultaneously achieve good performance and high utilization for cost efficiency. High utilization through virtualization and hardware resource sharing is critical for both cloud providers and cloud consumers to reduce management and infrastructure costs (e.g., energy cost, hardware cost) and to increase cost-efficiency. Unfortunately, achieving good performance (e.g., low latency) for web applications at high resource utilization remains an elusive goal. Both practitioners and researchers have experienced the latency long-tail problem in clouds during periods of even moderate utilization (e.g., 50%). In this dissertation, we show that transient bottlenecks are an important contributing factor to the latency long-tail problem. Transient bottlenecks are bottlenecks with a short lifespan, on the order of tens of milliseconds. Though short-lived, transient bottlenecks can cause a long-tail response time distribution spanning two to three orders of magnitude, from tens of milliseconds to tens of seconds, due to the propagation and amplification of queuing effects through complex inter-tier resource dependencies in the system. Transient bottlenecks can arise from a wide range of factors at different system layers. For example, we have identified transient bottlenecks caused by CPU dynamic voltage and frequency scaling (DVFS) control at the CPU architecture layer, Java garbage collection (GC) at the system software layer, and virtual machine (VM) consolidation at the application layer. These factors interact with naturally bursty workloads from clients, often leading to transient bottlenecks that degrade overall performance even when all system resources are far from saturated (e.g., less than 50% utilized). By combining fine-grained monitoring tools with a sophisticated analytical method to generate and analyze monitoring data, we are able to detect and study transient bottlenecks in a systematic way.
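Detecting such bottlenecks relies on monitoring at a granularity finer than the bottleneck lifespan. As a rough illustration of that idea only (the window size, capacity model, and log format below are invented assumptions, not the dissertation's actual toolchain), one can bucket per-request arrival and departure timestamps into 50 ms windows and flag windows where concurrency exceeds a tier's capacity:

```python
from collections import Counter

def transient_bottlenecks(requests, capacity, window_ms=50):
    """Flag short windows in which concurrency exceeds a tier's capacity.

    requests: list of (arrival_ms, departure_ms) pairs from fine-grained logs.
    capacity: requests the tier can serve concurrently without queuing.
    Returns the start times (ms) of windows holding a transient bottleneck.
    """
    load = Counter()
    for arrival, departure in requests:
        w = int(arrival // window_ms)
        while w * window_ms < departure:  # count the request in every window it overlaps
            load[w] += 1
            w += 1
    return [w * window_ms for w, n in sorted(load.items()) if n > capacity]

# Example: a ~100 ms burst saturates the tier although it is idle afterwards.
reqs = [(0, 30), (10, 90), (15, 95), (20, 110), (400, 430)]
print(transient_bottlenecks(reqs, capacity=2))  # [0, 50]
```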
52

Μοντελοποίηση εφαρμογών Παγκόσμιου Ιστού μέσω τεχνικών αντίστροφης μηχανίκευσης / Modeling web applications through reverse engineering techniques

Μποβίλας, Κώστας 24 November 2014 (has links)
The goals of this thesis are to study reverse engineering techniques for web applications and to evaluate them, drawing useful conclusions about the current state of the art and the future directions taking shape in this research area. We first survey the web application modeling methods that have been proposed by the research community and present the design patterns that have been defined on top of these methods. We then present the basic concepts of reverse engineering, along with specific techniques that have been developed to achieve it. Finally, we state useful conclusions drawn from comparing and evaluating the proposed reverse engineering techniques.
53

Explorative authoring of Active Web content in a mobile environment

Calmez, Conrad, Hesse, Hubert, Siegmund, Benjamin, Stamm, Sebastian, Thomschke, Astrid, Hirschfeld, Robert, Ingalls, Dan, Lincke, Jens January 2013 (has links)
Developing rich Web applications can be a complex job, especially when it comes to mobile device support. Web-based environments such as Lively Webwerkstatt can help developers implement such applications by making the development process more direct and interactive. Furthermore, developing software is a collaborative process, which creates the need for the development environment to offer collaboration facilities. This report describes extensions to the web-based development environment Lively Webwerkstatt that allow it to be used in a mobile environment. The extensions range from collaboration mechanisms and user-interface adaptations to event processing and performance measurement on mobile devices.
54

Systemutveckling av Trouble Report : Hur väljer och prioriterar man tekniska funktioner i vidareutveckling av ett etablerat system? / System development of Trouble Report : How to choose and prioritize technical functions when redeveloping an established system?

Tjörnebro, Anna January 2013 (has links)
As part of an internship at Ericsson, this report was written to improve understanding of what it is like to further develop a system that is already well established in the workplace. Improving an existing system is not always as easy as many developers assume. This report investigates and analyzes the pros and cons of developing an already existing system. Note that the results stem from a single development effort on one specific system; comparisons with other development efforts were drawn from other reports rather than from first-hand experience. It was found that the choices made can affect further development, and that it is important to record what has been done: writing down why you chose to do something can help with later decisions and explain, further along in the process, why you did what you did.
55

Approaches for contextualization and large-scale testing of mobile applications

Wang, Jiechao 15 May 2013 (has links)
In this thesis, we focused on two problems in mobile application development: contextualization and large-scale testing. We identified the limitations of current contextualization and testing solutions. On the one hand, advanced-remote-computing-based mobilization does not provide context awareness to the mobile applications it mobilizes, so we presented contextify to provide context awareness to them without rewriting the applications or changing their source code. Evaluation results and user surveys showed that contextify-contextualized applications reduce the time and effort users need to complete tasks. On the other hand, current mobile application testing solutions cannot conduct tests at the UI level and in a large-scale manner simultaneously, so we presented and implemented automated cloud computing (ACT) to achieve this goal. Evaluation results showed that ACT can support a large number of users and that it is stable, cost-efficient, and time-efficient.
56

Model-based Crawling - An Approach to Design Efficient Crawling Strategies for Rich Internet Applications

Dincturk, Mustafa Emre 02 August 2013 (has links)
Rich Internet Applications (RIAs) are a new generation of web applications that break away from the concepts on which traditional web applications are based. RIAs are more interactive and responsive than traditional web applications, since RIAs allow client-side scripting (such as JavaScript) and asynchronous communication with the server (using AJAX). Although these are improvements in terms of user-friendliness, they have a big impact on our ability to automatically explore (crawl) these applications. Traditional crawling algorithms are not sufficient for crawling RIAs. We need to crawl RIAs in order to search their content and build their models for various purposes, such as reverse engineering, detecting security vulnerabilities, assessing usability, and applying model-based testing techniques. One important problem is designing efficient crawling strategies for RIAs. It seems possible to design crawling strategies that are more efficient than the standard strategies, Breadth-First and Depth-First. In this thesis, we explore the possibilities of designing efficient crawling strategies. We use a general approach that we call Model-based Crawling and present two crawling strategies designed using this approach. We show through experimental results that model-based crawling strategies are more efficient than the standard strategies.
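As a rough sketch of what separates a model-based strategy from plain Breadth-First or Depth-First crawling (the state/event bookkeeping below is an invented simplification, not either of the thesis's two strategies): rather than exploring in a fixed order, the crawler consults its partial model and heads for the nearest state that still has unexplored events.

```python
from collections import deque

def choose_next_event(model, current):
    """Greedy model-based step: fire an unexplored event at the current state
    if one exists; otherwise walk to the nearest state with unexplored events,
    found by BFS over the known part of the model.

    model: dict state -> {"unexplored": set of events,
                          "edges": dict event -> next_state}
    Returns (path_of_states, event), or None when the model is fully explored.
    """
    if model[current]["unexplored"]:
        return [current], next(iter(model[current]["unexplored"]))
    frontier, seen = deque([[current]]), {current}
    while frontier:
        path = frontier.popleft()
        for nxt in model[path[-1]]["edges"].values():
            if nxt in seen:
                continue
            if model[nxt]["unexplored"]:
                return path + [nxt], next(iter(model[nxt]["unexplored"]))
            seen.add(nxt)
            frontier.append(path + [nxt])
    return None  # model fully explored

# Tiny example model: s0 is exhausted, s1 still has an unexplored event.
model = {
    "s0": {"unexplored": set(),       "edges": {"click_a": "s1"}},
    "s1": {"unexplored": {"click_b"}, "edges": {}},
}
print(choose_next_event(model, "s0"))  # (['s0', 's1'], 'click_b')
```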
57

Development of an interactive energy management web application for residential end users

Du Preez, Catharina 12 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Energy efficiency, as the effective use of energy, is recognized as one of the simplest ways to improve the sustainable use of resources, and by implication it involves the end-user. The 2008 power crisis that South Africa experienced highlighted supply exigencies and prompted a subsequent emphasis on affordable, rapidly scalable solutions, notably energy efficiency. As the establishment of new supply capacity is both costly and time-consuming, the logical alternative has been to focus intervention on the demand side. Residential electrical end-use has been identified as an area where the potential for change exists, and strategies to address residential demand have gained momentum. The vulnerability of energy systems affects energy security on technical, economic, and social levels. South African consumers are confronted with rising living costs and, according to the Integrated Resource Plan for Electricity (2010-2030), a substantial increase in electricity prices. Integral to addressing end-use is the ensuing behaviour of the end-user. End-use analysis aims to grasp and model customer usage by considering the electric demand per customer type, end-use category, appliance type, and time of use. This project focused on the development of an interactive web application as a tool for residential end-users to improve energy efficiency through modified consumption behaviour and the adoption of energy-efficient habits. The objectives were aimed at educating an end-user through exposure to energy-efficiency guidelines and consumption analysis. Within a Time-Of-Use framework, a consumer's understanding of appliance usage profiles can help realize the cost benefits associated with appliance scheduling. To achieve the desired functionality, and with extendibility and ease of maintenance in mind, the application relies on the provision of dynamic content by means of a relational database structured around end-use categories and appliance types. In an effort to convey only relevant information in the simplest way, current web technology was evaluated; the resulting design favours an interactive, minimalistic, graphical presentation of content in the form of a Rich Internet Application. The development process was divided into two phases. The residential energy consumption context was substantiated with a case study whose main objective and outcome was to devise a methodology for generating usage profiles for household appliances. Phase one of the development process has been completed, as has the case study. The conceptualization and framework for phase two have been established, and the recommendation is to incorporate the methodology and usage-profile results from the case study in the implementation of the second phase. The effectiveness of the tool can only be evaluated once phase two of the application is complete; a beta release of the final product can then be made available to a focus group for feedback.
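The cost benefit of appliance scheduling under a Time-Of-Use tariff comes down to simple arithmetic, which a minimal sketch can make concrete. The two-rate tariff and appliance figures below are invented for illustration; the application itself would draw such data from its relational database of end-use categories and appliance types.

```python
# Hypothetical two-rate Time-Of-Use tariff in rand per kWh: (start_h, end_h, rate).
TOU_TARIFF = [(0, 6, 0.80), (6, 22, 1.60), (22, 24, 0.80)]

def run_cost(power_kw, start_h, duration_h):
    """Cost of running an appliance of power_kw from start_h for duration_h hours."""
    cost, t = 0.0, float(start_h)
    while t < start_h + duration_h:
        for lo, hi, rate in TOU_TARIFF:
            if lo <= t % 24 < hi:
                # Charge only up to the end of this tariff band or of the run.
                step = min(hi - t % 24, start_h + duration_h - t)
                cost += power_kw * step * rate
                t += step
                break
    return cost

# A 2 kW washing machine with a 2-hour cycle: evening-peak vs. off-peak start.
print(run_cost(2.0, 18, 2))  # 6.40 during the peak band
print(run_cost(2.0, 23, 2))  # 3.20 off-peak, crossing midnight
```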
58

Usando Assertivas de Correspondência para Especificação e Geração de Visões XML para Aplicações Web / Using Correspondence Assertions for the Specification and Generation of XML Views for Web Applications

Lemos, Fernando Cordeiro de January 2007 (has links)
LEMOS, Fernando Cordeiro de. Usando Assertivas de Correspondência para Especificação e Geração de Visões XML para Aplicações Web. 2007. 115 f. Dissertação (mestrado) - Universidade Federal do Ceará, Centro de Ciências, Departamento de Computação, Fortaleza-CE, 2007. / Web applications that have a large number of pages, whose contents are dynamically extracted from one or more databases, and that require intensive data access and updating are known as "data-intensive Web applications" (DIWA applications) [7]. In this work, the content requirements of each page of the application are specified by an XML view, which we call a Navigation View (NV). We assume that the data of the NVs are stored in a relational or XML database. We propose an approach for specifying and generating NVs for Web applications whose content is extracted from one or more data sources. In the proposed approach, an NV is specified conceptually with the help of a set of Correspondence Assertions [44], so that the definition of the NV can be generated automatically from the assertions of the view.
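As a rough illustration of the mapping idea behind Correspondence Assertions, the sketch below generates a small XML view instance from one joined relational row. The assertion format, table names, and view paths are invented for illustration; the thesis's assertion language and generation algorithm are considerably richer.

```python
import xml.etree.ElementTree as ET

# Hypothetical assertions: each maps a path in the XML view to a source column.
ASSERTIONS = {
    "book/title":       "tbl_book.title",
    "book/author/name": "tbl_author.name",
    "book/price":       "tbl_book.price",
}

def build_view(row):
    """Generate one XML view instance from a joined relational row."""
    root = ET.Element("book")
    for path, column in ASSERTIONS.items():
        node = root
        for tag in path.split("/")[1:]:  # walk the view path, creating nodes as needed
            child = node.find(tag)
            node = child if child is not None else ET.SubElement(node, tag)
        node.text = str(row[column])
    return root

row = {"tbl_book.title": "Web Data Management",
       "tbl_author.name": "S. Abiteboul",
       "tbl_book.price": "59.90"}
print(ET.tostring(build_view(row), encoding="unicode"))
```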
59

Geração de testes de aceitação a partir de modelos U2TP para sistemas web / Acceptance test generation from U2TP models for web applications

Feller, Nadjia Jandt January 2015 (has links)
The testing activity throughout software development is fundamental to the pursuit of software quality and reliability, finding faults so that they can be removed. Despite its importance, however, software testing is often an underutilized phase of software development. Moreover, tests prove expensive, difficult, and problematic when not done appropriately. A new paradigm for software testing is model-driven testing (MDT), in which test cases are derived from a model that describes some aspect of the system under test, such as its behavior. This description, often given as UML diagrams and/or their profiles, can be processed to produce a set of test cases. Software specifications based on usage scenarios expressed by appropriate UML diagrams are considered significant and effective because they describe the system's requirements from an intuitive and visual perspective. They can therefore be used to describe acceptance tests, which validate that the system meets user requirements, and they facilitate the automation of this kind of test; test automation decreases the time spent on testing and thereby reduces its cost. This work proposes an approach for the automated generation of acceptance tests from U2TP (UML 2.0 Testing Profile) diagrams for web applications, based on the behavior-driven development (BDD) paradigm, producing acceptance scenarios and executable test code supported by an acceptance-testing automation framework. The approach was applied in an actual development environment by means of an experiment. Using it in a web application development cycle has several advantages: only the behavior model of each application feature must be produced manually, since the remaining artifacts are generated automatically, which consumes less time and is less error-prone; and it prevents stakeholders, developers, and testers from interpreting the requirements differently. The time spent specifying the models is repaid by the time saved generating the scenarios and the test code.
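To suggest the flavor of the BDD pipeline, the sketch below renders a tiny hand-written behavior model as a Gherkin feature, the scenario format consumed by acceptance-test frameworks such as Cucumber or behave. The dictionary model is a stand-in invented for illustration; the thesis derives its scenarios from U2TP diagrams, which carry far more structure.

```python
# Hypothetical, simplified behavior model for one feature of a web application.
LOGIN_MODEL = {
    "feature": "User login",
    "scenarios": [
        {"name": "Successful login",
         "given": ["the user is on the login page"],
         "when":  ["the user submits valid credentials"],
         "then":  ["the dashboard page is displayed"]},
    ],
}

def to_gherkin(model):
    """Render a behavior model as Gherkin scenario text."""
    lines = [f"Feature: {model['feature']}"]
    for sc in model["scenarios"]:
        lines.append(f"  Scenario: {sc['name']}")
        for kw in ("given", "when", "then"):
            for i, step in enumerate(sc[kw]):
                word = kw.capitalize() if i == 0 else "And"  # chain extra steps with And
                lines.append(f"    {word} {step}")
    return "\n".join(lines)

print(to_gherkin(LOGIN_MODEL))
```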
60

Penetration testing for the inexperienced ethical hacker : A baseline methodology for detecting and mitigating web application vulnerabilities / Penetrationstestning för den oerfarne etiska hackaren : En gedigen grundmetodologi för detektering och mitigering av sårbarheter i webbapplikationer

Ottosson, Henrik, Lindquist, Per January 2018 (has links)
Having a proper method of defense against attacks is crucial for web applications, to ensure the safety of both the application itself and its users. Penetration testing (or ethical hacking) has long been one of the primary methods for detecting vulnerabilities to such attacks, but it is costly and requires considerable ability and knowledge. As this expertise remains largely individual and undocumented, the industry remains dependent on individual experts, and a lack of comprehensive methodologies accessible to inexperienced ethical hackers is clearly observable. While attempts to automate the process have yielded some results, automated tools are often specific to certain types of flaws and lack contextual flexibility. A clear, simple, and comprehensive methodology that uses automatic vulnerability scanners complemented by manual methods is therefore necessary to achieve a basic level of security across the entirety of a web application. This master's thesis describes the construction of such a methodology. To define its requirements, a literature study was performed to identify the types of vulnerabilities most critical to web applications and the applicability of automated tools to each of them. These tools were tested against various existing applications, both intentionally vulnerable ones and ones intended to be secure. The methodology was constructed as a four-step process: Manual Review, Testing, Risk Analysis, and Reporting. The testing step was further defined as an iterative process in three parts: Tool/Method Selection, Vulnerability Testing, and Verification. To verify the sufficiency of the methodology, it was subjected to peer review and field experiments.
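As an example of the kind of single-flaw check an automated scanner contributes within the Vulnerability Testing part, the sketch below probes whether a query parameter is echoed back unescaped, a common hint of reflected cross-site scripting. The probe, marker string, and URL are invented for illustration, and a positive result would still require manual confirmation, in line with the methodology's Verification step.

```python
import requests  # assumes the requests library is installed

MARKER = "xss-probe-7f3a"  # arbitrary token, unlikely to occur in page content

def reflects_unescaped(url, param):
    """Return True if `param` is reflected unescaped in the response body,
    a hint (not proof) of reflected XSS that a tester should verify manually."""
    payload = f"<i>{MARKER}</i>"
    resp = requests.get(url, params={param: payload}, timeout=10)
    return payload in resp.text  # escaped output would not match verbatim

# Example against an intentionally vulnerable test application (placeholder URL):
# print(reflects_unescaped("http://testapp.example/search", "q"))
```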
