
Evaluating and comparing the web application security testing tools: Identifying and Applying Key Metrics

Thota, Sanmay Bhavanish, Vemula, Sai Ajit Jayasimha January 2024
Background: Web application security (WAS) testing is crucial for protecting web applications from cyber threats. However, organizations often struggle to select effective WAS testing tools because there is no well-defined set of evaluation criteria. This research addresses that need by identifying the key metrics for evaluating and comparing WAS testing tools. Objectives: The primary objectives are to identify the key metrics for comparing WAS testing tools, to validate the significance of these metrics through semi-structured interviews, and to compare WAS testing tools using the validated metrics. Methods: The research methodology consisted of three main phases: a literature review to compile a comprehensive set of technical and non-technical metrics commonly used for assessing and comparing WAS testing tools; semi-structured interviews with security experts to validate the significance of the identified metrics; and an experiment comparing three WAS testing tools - ZAP, Burp Suite, and Acunetix - using the OWASP Benchmark project. The three tools were selected based on recommendations in the literature. Results: The literature review yielded 37 evaluation metrics for WAS testing tools. The interviews confirmed the importance of most of these metrics, identified some as of little practical use, and suggested new metrics not found in the literature. Incorporating this feedback, the final list was refined to 35 metrics. The experiment then compared ZAP, Burp Suite, and Acunetix on the OWASP Benchmark project using the validated metrics; the results revealed differences in the tools' performance, with Burp Suite emerging as the best performer. Conclusions: This research provides a validated set of metrics for comparing and evaluating WAS testing tools, enabling organizations to make more informed decisions. Security professionals can optimise their WAS testing tool selection by understanding the key metrics and their relative significance, as established through the literature and interviews. Based on the experimental analysis, Burp Suite outperformed the other tools; for organizations beginning the selection of a WAS testing tool, it therefore stands out as a good choice.
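
For illustration, scanner accuracy on a benchmark such as the OWASP Benchmark is commonly summarized by the true-positive rate, the false-positive rate, and their difference (the Youden index, which the Benchmark uses as its score). A minimal sketch of such scoring, assuming a simple labelled-test-case representation (the data layout and names are illustrative, not the thesis's):

```python
# Hedged sketch: scoring a scanner against a benchmark of labelled test cases.
# "expected" maps test-case IDs to True (real vulnerability) or False (false-
# alarm trap); "reported" is the set of IDs the tool flagged. The structure is
# an illustrative assumption, not the thesis's actual data model.

def score_tool(expected: dict[str, bool], reported: set[str]) -> dict[str, float]:
    tp = sum(1 for case, vuln in expected.items() if vuln and case in reported)
    fp = sum(1 for case, vuln in expected.items() if not vuln and case in reported)
    pos = sum(1 for vuln in expected.values() if vuln)    # real vulnerabilities
    neg = len(expected) - pos                             # false-alarm traps
    tpr = tp / pos if pos else 0.0                        # recall / sensitivity
    fpr = fp / neg if neg else 0.0
    return {"tpr": tpr, "fpr": fpr, "youden": tpr - fpr}  # Youden index = TPR - FPR

# Example: a tool that finds 80 of 100 real flaws and flags 15 of 100 traps
# would score tpr=0.80, fpr=0.15, youden=0.65.
```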

MealMate: A study on how to create a web application that is navigable and trustworthy

Lovén, Johan, Tetlie, Andreas, Rydersten, Jacob, Eriksson, Louise, Hallbäck, Martin, Carlstedt, Olle, Atuma Löfgren, Pontus, Malmström, Richard January 2024
The purpose of this study was to develop a web application where users can advertise home-cooked meals for other users to buy, while identifying the critical design and technological features that enhance navigability and credibility in a web application for advertising and selling home-cooked lunch boxes to students, and examining how these features contribute to optimising the purchasing process. Based on a pre-study that examined market analysis, user testing of similar applications, and prototypes, the team defined an outline for the agile development process. The development followed established best practices regarding version handling, development methodology, database management, and related concerns. The research culminated in a fully functional and aesthetically pleasing web application, and the results and conclusions were solidified by extensive user and unit testing. The study concluded that a navigable and credible web application can be created by implementing a simple base layout with a few interactive elements and carefully chosen colours, and by building credibility through user ratings and high-quality pictures. Moreover, the analysed factors proved to streamline the purchasing process.

A knowledge based system for construction health and safety competence assessment

Yu, Hao January 2009
Organisational and individual Health and Safety (H&S) competence is an essential element of completing a construction project safely and without hazards to the health of the workforce. Under the Construction (Design and Management) (CDM) Regulations 2007, the client should take reasonable steps to ensure that the appointed duty-holders and engaged people are H&S competent to design, build or co-ordinate the project. Although the CDM Regulations 2007 and their Approved Code of Practice (ACoP) have established ‘Core Criteria’ to guide the client in assessing duty-holders’ H&S competence at the outset of a project, it is still difficult for most inexperienced clients to discharge the duty of making the key decisions in H&S competence assessment. In order to help the client implement H&S competence assessment, it is important to develop a tool that can effectively and efficiently support the client in making reasonable decisions when selecting H&S-competent duty-holders. According to the findings of the case study of existing formal H&S competence assessment schemes undertaken as part of this work, H&S competence assessment is a subjective, qualitative and non-linear regulation-compliance checking process. In addition, the case study helped identify latent shortcomings in the ‘Core Criteria’ and operational drawbacks in the current practice of H&S competence assessment. Based on a review of Information Technology (IT) and Artificial Intelligence (AI) applications in construction, a Knowledge-Based System (KBS) was identified as a suitable tool to support decision-making in H&S competence assessment, mainly because of its appropriateness for solving regulation-compliance checking problems and for supporting subjective, qualitative decision-making processes. Following a decision-making framework for H&S competence assessment, a KBS decision-support model was developed, applying three mechanisms to support reasonable decision-making in H&S competence assessment. In order to develop an appropriate and practical KBS, a textual knowledge base was developed, specifying minimum satisfaction standards and a rating-indicator system for the ‘Core Criteria’. On this basis, an online KBS was developed using Java Server Pages (JSP) technology and MySQL. The online KBS applied the textual knowledge base to support the screening, rating, ranking and reporting decision-support mechanisms, and case-inquiry and expert-inquiry facilities were also included for effective decision-making. Finally, construction experts and practitioners in H&S management evaluated the validity and usability of the KBS through a questionnaire survey. The prototype KBS proved to be an effective and efficient decision-support tool for H&S competence assessment, with the potential to be applied in practice.
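
To make the screening, rating and ranking mechanisms concrete, here is a minimal sketch of how a knowledge base of minimum satisfaction standards and rating weights might drive them. The criteria names, thresholds and weights are invented for illustration and are not the thesis's actual ‘Core Criteria’ knowledge base:

```python
# Hedged sketch of a screen-then-rate knowledge-base check. The criteria,
# minimum standards, and weights below are illustrative stand-ins.

MIN_STANDARDS = {"hs_policy": 2, "training": 2, "accident_reporting": 1}
WEIGHTS = {"hs_policy": 0.5, "training": 0.3, "accident_reporting": 0.2}

def screen(ratings: dict[str, int]) -> bool:
    """Reject any duty-holder that misses a minimum satisfaction standard."""
    return all(ratings.get(c, 0) >= floor for c, floor in MIN_STANDARDS.items())

def rate(ratings: dict[str, int]) -> float:
    """Weighted score used to rank duty-holders that pass screening."""
    return sum(WEIGHTS[c] * ratings.get(c, 0) for c in WEIGHTS)

candidates = {"A": {"hs_policy": 3, "training": 2, "accident_reporting": 2},
              "B": {"hs_policy": 1, "training": 3, "accident_reporting": 3}}
ranked = sorted((name for name, r in candidates.items() if screen(r)),
                key=lambda name: rate(candidates[name]), reverse=True)
print(ranked)  # B fails screening on hs_policy, so only A is ranked
```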

Maintaining Web Applications Integrity Running on RADIUM

Ur-Rehman, Wasi 08 1900
Computer security attacks take place because of vulnerabilities and bugs in software applications. Bugs and vulnerabilities are the result of weak software architecture and a lack of standard software development practices. Although software companies invest millions of dollars in the research and development of software designs, security risks remain at large; in some cases, software applications are found to have carried vulnerabilities for many years before being identified, a recent example being the Heartbleed bug in OpenSSL/TLS. In today's world, where new software applications are continuously being developed for a varied community of users, it is highly unlikely that software applications will run without flaws. Attackers exploit these vulnerabilities and bugs and threaten privacy without leaving any trace. The most critical vulnerabilities are those related to the integrity of software applications, because integrity is directly linked to the credibility of an application and the data it contains. This thesis presents a solution for maintaining the integrity of web applications running on RADIUM using Daikon. Daikon generates invariants, and these invariants are used to maintain the integrity of the web application and to check its correct behavior at run time on the RADIUM architecture in the presence of an attack or malware. The solution uses data invariants and program-flow invariants to maintain the integrity of the web application against such attacks or malware, and the behavior of the proposed invariants is checked at run time using the LibVMI/Volatility memory-introspection tools. This is a novel approach and a proof of concept toward maintaining web application integrity on RADIUM.
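
As a rough illustration of the approach, the sketch below checks Daikon-style data invariants against application state at run time and reports violations as possible tampering. The invariants shown are hypothetical examples of what Daikon might infer from execution traces, not ones taken from the thesis:

```python
# Hedged sketch: enforcing Daikon-style data invariants at run time. Daikon
# infers invariants such as "0 <= cart_total" or "user_id != None" from
# execution traces; here we assume a couple of such inferred invariants and
# check them on each request, flagging a violation as possible tampering.

INVARIANTS = [
    ("cart_total is non-negative", lambda s: s["cart_total"] >= 0),
    ("user_id present when logged_in",
     lambda s: not s["logged_in"] or s["user_id"] is not None),
]

def check_integrity(state: dict) -> list[str]:
    """Return the descriptions of all violated invariants (empty = intact)."""
    return [desc for desc, holds in INVARIANTS if not holds(state)]

violations = check_integrity({"cart_total": -50, "logged_in": True, "user_id": 7})
print(violations)  # ['cart_total is non-negative'] -> possible attack/malware
```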

Application of Web Mashup Technology to Oyster Information Services

Chuindja Ngniah, Christian 15 December 2012
Web mashup is a lightweight technology used to integrate data from remote sources without direct access to their databases. As a data consumer, a Web mashup application creates new content by retrieving data through the Web application programming interfaces (APIs) provided by the external sources. As a data provider, the service program publishes its Web API and implements the specified functions. In the project reported by this thesis, we implemented two Web mashup applications to enhance the Web site oystersentinel.org: the Perkinsus marinus model and the Oil Spill model. Each model overlays geospatial data from a local database on top of a coastal map from Google Maps. In addition, we designed a Web-based data-publishing service. In this experimental system, we demonstrated a Web mashup interface that allows outside developers to access data about the local oyster stock assessment.
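
For a sense of the data-provider side, the sketch below publishes local records as GeoJSON that a map client such as Google Maps could overlay. It is a minimal stand-in using Python's standard library; the station records and endpoint are invented, not the actual oystersentinel.org API:

```python
# Hedged sketch: a data-provider endpoint of the kind a mashup consumes.
# It serves local geospatial records as GeoJSON for a map client to overlay.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STATIONS = [  # illustrative stand-ins for rows from a local database
    {"name": "Station 1", "lon": -90.05, "lat": 30.02, "salinity": 14.2},
    {"name": "Station 2", "lon": -89.97, "lat": 29.95, "salinity": 18.7},
]

class OysterAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        features = [{"type": "Feature",
                     "geometry": {"type": "Point",
                                  "coordinates": [s["lon"], s["lat"]]},
                     "properties": {"name": s["name"], "salinity": s["salinity"]}}
                    for s in STATIONS]
        body = json.dumps({"type": "FeatureCollection", "features": features}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/geo+json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), OysterAPI).serve_forever()
```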

Supporting processes for Web application development

Souza, Osnete Ribeiro de 06 May 2005
Within a short period, the Web has become the central aspect of many applications in different areas. Its scope and use have grown and diversified rapidly, and diverse business sectors now conduct their operations in the Web environment. However, as Web applications grow in reach and complexity, so does concern over the way they are developed. Frequently, the development approach is ad hoc, lacking systematic techniques and sound methodologies, and the result is poor-quality applications. To build successful Web applications, better development principles are needed, ones that take the special characteristics of these applications into account. Accordingly, the objective of this work is to propose Supporting Processes for the development of Web applications, based on the characteristics of these applications and on the Supporting Processes of ISO/IEC 12207. The Supporting Processes aim to assist the other processes of the application life cycle in achieving quality products and project success.

Proposal of a methodology for measuring and prioritizing access security for Web applications

Colombo, Regina Maria Thienne 26 March 2014
In a technological and globally interconnected world, in which individuals and organizations frequently perform transactions on the Web, software security is essential, and it is needed in several niches: the security of computer networks, of computers and of software. Implementing a security system that covers all these aspects is extensive and complex, while the exploitation of vulnerabilities and attacks grows exponentially. Because of the nature of software and its availability on the Web, security assurance will never be complete, but it is possible to plan, implement, measure and evaluate a security system and ultimately improve it. Currently, specific knowledge in security is detailed and fragmented across its various niches, and the view among security experts is usually tied to the internal computing environment. Measuring security attributes is a way to know and monitor the state of a software system's security. This research presents a top-down approach for measuring the access security of web applications. Starting from a set of globally recognized, yet intangible, security properties, it proposes a methodology for measuring and prioritizing security attributes in order to establish the security level of a web application and take the actions needed to improve it. A reference model for access security is defined, and the analytic hierarchy process supports the derivation of measurable attributes and the visualization of the access-security status of a web application.
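
As an illustration of the analytic-hierarchy-process step, the sketch below derives priority weights for security attributes from a pairwise comparison matrix using the common geometric-mean approximation of the principal eigenvector. The attributes and judgments are invented examples, not the thesis's actual model:

```python
# Hedged sketch of an AHP prioritization step: derive priority weights for
# security attributes from a pairwise comparison matrix (Saaty scale).

import math

attributes = ["authentication", "authorization", "session management"]
# M[i][j] = how much attribute i matters relative to attribute j.
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]

# Geometric-mean approximation of the principal eigenvector.
geo = [math.prod(row) ** (1 / len(row)) for row in M]
weights = [g / sum(geo) for g in geo]

for attr, w in zip(attributes, weights):
    print(f"{attr}: {w:.3f}")  # e.g. authentication carries the most weight
```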

Development of an economic-legal information web system

PECH, Jan January 2019
The first part of the thesis deals with web applications, MVC architecture and web frameworks. Their role in application development is discussed, and the advantages and disadvantages of their use are outlined. The Symfony 3.4 web framework is then examined in more detail; specifically, this section describes its core components and the functionality it provides (structure, configuration files, controllers, entities, templates, models, security, etc.). The second part of the thesis is dedicated to the development of the economic-legal information system EPIS Online, which was built with Symfony 3.4. Its three modules (security and the user-licence structure, legislation, and the payment gateway) are described in detail, and the specific processes in the system, controller actions, templates, services and many other components are explained.

Real-time motion tracking via a portable (wearable) device: data analytics on a portable platform synched with a training and physiological database

Carbillet, Thomas 12 April 2019
Improving running performance on flat courses is a timely topic, with the prospect of an athlete breaking two hours in the marathon. However, few teams work transversally on race preparation and race strategy for the general public. Training programmes, built on coaches' experience, are rarely personalized, which leads to injury risk and loss of motivation. The current analysis of training programmes seems to have reached a limit, and the company BillaTraining® aims to build a bridge between scientific research and amateur runners. This PhD work has two main goals. The first is to contribute to research on running: after gathering and formatting training and race data from different sources, we sought to describe phenomena such as human acceleration and the different racing strategies employed in marathons by runners finishing between 2 hours 30 minutes and 4 hours. We studied acceleration, speed and heart-rate time series, among other things, with the aim of understanding the different running strategies. The second goal is the development of a web application embracing the three phases of the BillaTraining® method. The first phase is an energetic audit, a thirty-minute run guided by the runner's sensations that establishes the runner's physical capacities. The second phase, the energetic radar, is the outcome of the audit and serves as the entry point to the tailor-made training that constitutes the third and final phase. Meeting these two goals required bringing together physiology, mathematics and computer science. The knowledge in physiology is based on the state of the art and on professor Véronique Billat's past and present research, which, by extension, is the core business of BillaTraining®; the research ideas in this thesis stem from physiology. Mathematics, in turn, allowed us to describe physiological phenomena through statistics: applying the mean-reverting Ornstein-Uhlenbeck model, we found that humans are able to run at an even acceleration, and the PELT (Pruned Exact Linear Time) method let us automate changepoint detection in time series. Finally, computer science enabled physiology and mathematics to communicate, both for scientific research and for marketing training tools at the forefront of innovation.
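
As a sketch of the changepoint-detection step, the example below applies PELT to a synthetic speed series via the third-party ruptures package (one common PELT implementation; the thesis does not name its tooling). The pacing profile and penalty value are illustrative assumptions:

```python
# Hedged sketch: detecting pace-strategy changepoints in a speed series
# with PELT. The synthetic series stands in for a runner's recorded
# speed (m/s): fast start, steady middle, slowing end.

import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
speed = np.concatenate([rng.normal(4.2, 0.1, 300),
                        rng.normal(3.9, 0.1, 500),
                        rng.normal(3.5, 0.1, 200)])

algo = rpt.Pelt(model="l2", min_size=50).fit(speed)
breakpoints = algo.predict(pen=5)  # penalty trades off fit vs. segment count
print(breakpoints)                 # e.g. [300, 800, 1000] -> three pacing regimes
```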

Understanding and automating application-level caching

Mertz, Jhonny Marcos Acordi January 2017
Latency and cost of Internet-based services are encouraging the use of application-level caching to continue satisfying users' demands and to improve the scalability and availability of origin servers. Application-level caching, in which developers manually control cached content, is adopted when traditional forms of caching are insufficient to meet such requirements. Despite its popularity, this level of caching is typically addressed in an ad-hoc way, given that it depends on specific details of the application. Furthermore, it forces application developers to reason about a crosscutting concern that is unrelated to the application's business logic. As a result, application-level caching is a time-consuming and error-prone task, and a common source of bugs. This dissertation advances work on application-level caching by providing an understanding of its state of practice and by automating the decision regarding cacheable content, thus giving developers substantial support to design, implement and maintain application-level caching solutions. More specifically, we provide three key contributions: structured knowledge derived from a qualitative study, a survey of the state of the art in static and adaptive caching approaches, and a technique and framework that automate the challenging task of identifying caching opportunities. The qualitative study, which involved the investigation of ten web applications (open-source and commercial) with different characteristics, allowed us to determine the state of practice of application-level caching, along with practical guidance for developers in the form of patterns and guidelines. Based on the derived patterns and guidelines, we also propose an approach to automate the identification of cacheable methods, a task that is usually performed manually and is not supported by existing approaches to application-level caching. We implemented a caching framework that can be seamlessly integrated into web applications to automatically identify caching opportunities at runtime and act on them, by monitoring system execution and adaptively managing caching decisions. We evaluated our approach empirically with three open-source web applications, and the results indicate that it identifies adequate caching opportunities, improving application throughput by up to 12.16%. Furthermore, our approach can prevent code tangling and raise the abstraction level of caching.
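
The core idea of identifying caching opportunities at run time can be sketched as a decorator that monitors method executions and switches caching on once repeated inputs make it look profitable. The thresholds and policy below are invented illustrations, not the dissertation's actual framework:

```python
# Hedged sketch: monitor method executions at run time and start caching a
# method once it looks profitable (frequently called, same inputs recurring).

import functools
import time
from collections import Counter

def adaptive_cache(min_calls=10, min_repeat_ratio=0.5):
    def decorator(fn):
        stats = Counter()   # per-argument-tuple call counts
        cache = {}

        @functools.wraps(fn)
        def wrapper(*args):
            stats[args] += 1
            total = sum(stats.values())
            repeats = total - len(stats)  # calls with previously seen args
            caching_on = total >= min_calls and repeats / total >= min_repeat_ratio
            if caching_on and args in cache:
                return cache[args]
            result = fn(*args)
            if caching_on:
                cache[args] = result
            return result
        return wrapper
    return decorator

@adaptive_cache()
def render_product_page(product_id):  # hypothetical expensive method
    time.sleep(0.01)                  # stand-in for template rendering / DB work
    return f"<html>product {product_id}</html>"
```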
