41 |
Dynamic Scoping for Browser Based Access Control System. Nadipelly, Vinaykumar, 25 May 2012
We have increased our use of web applications to the point of relying on them for almost everything, making them an essential part of our everyday lives. As a result, strengthening the privacy and security policies of web applications is becoming increasingly essential. The importance and stateless nature of the web infrastructure have made the web a preferred target of attacks, and the current web access control system is a key reason such attacks succeed. The web consists of two major components, the browser and the server, and an effective access control system must be implemented in both. For access control, the current web has adopted the same-origin policy in the browser and the same-session policy on the server. These policies were sufficient for the web of earlier days but are inadequate for the protection needs of today's web.
To protect web applications from untrusted content, we provide an enhanced browser-based access control system that enables dynamic scoping. Our security model for the browser allows the client and trusted web application contents to share a common library while protecting web contents from each other, even though they execute at different trust levels. We have implemented a working model of this enhanced browser-based access control system in Java, under the Lobo browser.
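As a loose illustration of the idea (a minimal sketch with a hypothetical trust-stack API; the thesis itself implements its model in Java inside Lobo), dynamic scoping means a shared library decides access based on who is calling at run time, not where the code was defined:

```javascript
// Minimal sketch (hypothetical API): each execution context carries a
// trust level, and a shared library checks the trust of its *caller*
// at call time (dynamic scope) rather than at its definition site.
const trustStack = [];

function runWithTrust(level, fn) {
  trustStack.push(level);              // enter a new dynamic scope
  try {
    return fn();
  } finally {
    trustStack.pop();                  // restore the enclosing scope
  }
}

const currentTrust = () => trustStack[trustStack.length - 1] ?? "untrusted";

// A shared library routine callable by trusted and untrusted content alike.
function readSessionData() {
  if (currentTrust() !== "trusted") {
    throw new Error("access denied: called from an untrusted dynamic scope");
  }
  return document.cookie;              // only reachable from trusted callers
}

runWithTrust("trusted", () => readSessionData());   // allowed
runWithTrust("untrusted", () => readSessionData()); // throws
```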
|
42 |
Cross-platform testing and maintenance of web and mobile applications. Roy Choudhary, Shauvik, 08 June 2015
Modern software applications need to run on a variety of web and mobile platforms with diverse software- and hardware-level features. Developers of such software must therefore duplicate testing and maintenance effort across a wide range of platforms. Often they cannot cope with this increasing demand and release software that is broken on certain platforms, affecting the customers who use those platforms. Hence, there is a need to automate such duplicate activities to help developers cope with the ever-increasing demand. The goal of my work is to improve the testing and maintenance of cross-platform web and mobile applications by developing automated techniques for comparing and matching the behavior of such applications across different platforms.
To achieve this goal, I have identified three problems that are relevant in the context of cross-platform testing and maintenance: 1) automated identification of inconsistencies in the same application's behavior across multiple platforms, 2) detection of features that are present in the application on one platform but missing from another platform version of the same application, and 3) automated migration of test suites and possibly other software artifacts across platforms. I present three different scenarios for the development of cross-platform web and mobile applications and formulate each of the three problems in the scenario where it is most relevant. To address these problems in their corresponding scenarios, I present the principled design, development, and evaluation of two techniques, plus a third, preliminary technique that highlights the research challenges of test migration. The first technique, X-pert, identifies inconsistencies in a web application running on multiple web browsers. The second technique, FMAP, matches features between the desktop and mobile versions of a web application and reports any features found missing on either platform version. The final technique, MigraTest, attempts to automatically migrate test cases from a mobile application on one platform to its counterpart on another platform.
To evaluate these techniques, I implemented them as prototype tools and ran them on real-world subject applications. The empirical evaluation of X-pert shows that it is accurate and effective in detecting real-world inconsistencies in web applications. In the case of FMAP, my evaluation shows that it correctly identified missing features between the desktop and mobile versions of the web applications considered, as confirmed by my analysis of user reports and software fixes for these applications. The third technique, MigraTest, was able to efficiently migrate test cases between two mobile platform versions of the subject applications.
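As an illustration of the first problem, here is a simplified sketch of cross-browser layout comparison in the spirit of X-pert (not the actual tool; the data shapes and the size-based check are assumptions):

```javascript
// Each layout maps an element's XPath to its rendered bounding box,
// captured separately in each browser (e.g. via getBoundingClientRect()).
function findLayoutInconsistencies(layoutA, layoutB, tolerance = 5) {
  const issues = [];
  for (const [xpath, boxA] of Object.entries(layoutA)) {
    const boxB = layoutB[xpath];
    if (!boxB) {
      issues.push({ xpath, kind: "missing in browser B" });
      continue;
    }
    // Compare sizes rather than absolute positions, since viewport
    // differences shift everything uniformly.
    if (Math.abs(boxA.width - boxB.width) > tolerance ||
        Math.abs(boxA.height - boxB.height) > tolerance) {
      issues.push({ xpath, kind: "size mismatch", boxA, boxB });
    }
  }
  return issues;
}

const a = { "/html/body/div[1]": { width: 300, height: 40 } };
const b = { "/html/body/div[1]": { width: 300, height: 90 } };
console.log(findLayoutInconsistencies(a, b));
// -> [ { xpath: '/html/body/div[1]', kind: 'size mismatch', ... } ]
```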
|
43 |
Failų tvarkymas taikant trimatį interfeisą / Using a 3D interface for file management. Mitrikevičius, Gediminas, 25 November 2010
It is not convenient to browse large file systems (more than 1000 files) with ordinary 2D file browsers. The purpose of this work is to propose a file tree visualization suitable for large file trees and to implement a 3D file system browser prototype. Usability criteria were defined for the browser; after reviewing visualization approaches against them, cone tree visualization was selected, system requirements were specified, a suitable technology was chosen, and a prototype was implemented. The resulting system, FSN, satisfies the stated criteria and is suitable for visualizing large file trees.
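For intuition, a minimal sketch of a cone-tree layout (one plausible scheme, not the FSN code): each node's children are placed on a circle below it, forming a cone, with smaller radii at deeper levels.

```javascript
// Assign 3D positions to a file tree: parent at the cone apex,
// children spread evenly on the cone's rim one level below.
function layoutConeTree(node, x, y, z, radius, levelHeight) {
  node.pos = { x, y, z };
  const n = node.children ? node.children.length : 0;
  for (let i = 0; i < n; i++) {
    const angle = (2 * Math.PI * i) / n;   // spread children evenly
    layoutConeTree(
      node.children[i],
      x + radius * Math.cos(angle),
      y - levelHeight,                     // one level down the cone
      z + radius * Math.sin(angle),
      radius * 0.5,                        // shrink the child cones
      levelHeight
    );
  }
  return node;
}

const tree = { name: "/", children: [{ name: "usr", children: [] },
                                     { name: "home", children: [] }] };
layoutConeTree(tree, 0, 0, 0, 10, 5);
console.log(tree.children[0].pos); // position of "usr" on the cone rim
```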
|
44 |
Visualizing things in construction photos: time, spatial coverage, and content for construction management. Wu, Fuqu, 30 July 2009
PhotoScope is a novel visualization of the spatiotemporal coverage of photos in a construction photo collection. It extends the standard photo-browsing paradigm in two main ways: visualizing the spatial coverage of photos on floor plans, and indexing photos by a combination of spatial coverage, time, and content specifications. This approach enables users to browse and search space- and time-indexed photos more effectively. We designed PhotoScope specifically to address challenges in the construction management industry, where large photo collections are amassed to document project progress. These ideas may also apply to any photo collection that is spatially constrained and must be searched using spatial, temporal, and content criteria. Design choices made when developing PhotoScope are also described.
Civil, mechanical and electrical engineers, and professionals from construction management validated the visualization mechanisms and functionalities of PhotoScope in a usability study. Empirical findings on the cognitive behaviors of participants are also discussed in this thesis.
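A minimal sketch of the kind of combined spatial-temporal-content query such a browser supports (the data model and field names are illustrative assumptions, not the thesis code):

```javascript
// Filter a photo collection by floor-plan region, time window, and
// content tags at once.
function queryPhotos(photos, { region, from, to, tags = [] }) {
  const overlaps = (cov) =>
    cov.x < region.x + region.w && cov.x + cov.w > region.x &&
    cov.y < region.y + region.h && cov.y + cov.h > region.y;
  return photos.filter((p) =>
    overlaps(p.coverage) &&                    // spatial coverage on the plan
    p.takenAt >= from && p.takenAt <= to &&    // time window
    tags.every((t) => p.tags.includes(t))      // content specification
  );
}

const photos = [
  { coverage: { x: 0, y: 0, w: 5, h: 5 }, takenAt: 20090412, tags: ["hvac"] },
];
console.log(queryPhotos(photos, {
  region: { x: 2, y: 2, w: 10, h: 10 },
  from: 20090101, to: 20091231,
  tags: ["hvac"],
}));
```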
|
45 |
Interaktivní procházení webu a extrakce dat / Interactive web crawling and data extraction. Fejfar, Petr, January 2018
Title: Interactive crawling and data extraction
Author: Bc. Petr Fejfar (pfejfar@gmail.com)
Department: Department of Distributed and Dependable Systems
Supervisor: Mgr. Pavel Ježek, Ph.D., Department of Distributed and Dependable Systems
Abstract: The subject of this thesis is Web crawling and data extraction from Rich Internet Applications (RIAs). The thesis starts with an analysis of modern Web pages and the techniques used for crawling and data extraction. Based on this analysis, we designed a tool which crawls RIAs according to instructions defined by the user via a graphical interface. In contrast with other currently popular tools for RIAs, our solution is targeted at users with no programming experience, including business and analyst users. The designed solution is itself implemented as an RIA, using the WebDriver protocol to automate multiple browsers according to user-defined instructions. Our tool allows the user to inspect browser sessions by displaying the pages being crawled simultaneously, which enables the user to troubleshoot the crawlers. The outcome of this thesis is a fully designed and implemented tool that enables business users to extract data from RIAs. This opens new opportunities for this type of user to collect data from Web pages for use...
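For flavor, a minimal WebDriver-based extraction sketch using the selenium-webdriver Node.js bindings (the URL and CSS selectors are hypothetical placeholders; the thesis tool generates such instructions from GUI interactions rather than from code):

```javascript
// Drive a real browser over the WebDriver protocol and pull text
// out of repeated page elements.
const { Builder, By } = require("selenium-webdriver");

async function extractItems(url, itemSelector, fieldSelector) {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get(url);                       // load the RIA
    const items = await driver.findElements(By.css(itemSelector));
    const rows = [];
    for (const item of items) {
      const field = await item.findElement(By.css(fieldSelector));
      rows.push(await field.getText());          // extract visible text
    }
    return rows;
  } finally {
    await driver.quit();                         // close the session
  }
}

extractItems("https://example.com/catalog", ".product", ".price")
  .then((rows) => console.log(rows));
```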
|
46 |
Caravela: um navegador para metagenomas / Caravela: a new metagenomic browser. Gianluca Major Machado da Silva, 12 June 2017
Metagenomics makes it possible to analyze the genomes of the microorganisms inhabiting a given environmental niche without isolating and culturing each one separately; the set of microorganisms inhabiting a niche is called a microbiome. Profiling the taxonomic and functional diversity of microbial communities is a common task in metagenomic studies. However, current general-purpose platforms (such as MG-RAST and IMG/M) tend to separate read-based analyses (unassembled sequences) from contig-based analyses (assembled sequences), which makes integrating these results difficult. Motivated by this split, we developed a web platform named CARAVELA that connects read-based taxonomic diversity results with contig-based functional annotations. One of its main functions is to associate the taxonomic identification of each read with the contig that the read is part of, together with the contig's functional annotations when they exist. This enables quick identification of potentially chimeric contigs as well as taxonomically well-resolved ones. The platform also supports searches, such as listing all contigs that contain one or more reads classified as Pseudoxanthomonas suwonensis, and allows browsing contigs in a manner similar to traditional metagenome browsers. Output files from other programs can be used as input, as long as their formats follow certain standards. CARAVELA was implemented with Java, HTML, CSS, JavaScript, and MySQL, and it was tested on a metagenomic dataset obtained from the composting operation of the São Paulo Zoological Park.
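A small sketch of the read-to-contig association at the heart of such a browser (the data model is an illustrative assumption, not CARAVELA's actual schema): grouping read-level taxonomic assignments by contig makes contigs whose reads disagree taxonomically stand out as potentially chimeric.

```javascript
// readAssignments: [{ readId, contigId, taxon }]
function summarizeContigs(readAssignments) {
  const byContig = new Map();
  for (const { contigId, taxon } of readAssignments) {
    const counts = byContig.get(contigId) ?? new Map();
    counts.set(taxon, (counts.get(taxon) ?? 0) + 1);
    byContig.set(contigId, counts);
  }
  return [...byContig].map(([contigId, counts]) => {
    const total = [...counts.values()].reduce((a, b) => a + b, 0);
    const top = Math.max(...counts.values());
    return {
      contigId,
      dominantFraction: top / total,   // low fraction -> possible chimera
      taxa: [...counts.keys()],
    };
  });
}

console.log(summarizeContigs([
  { readId: "r1", contigId: "c1", taxon: "Pseudoxanthomonas suwonensis" },
  { readId: "r2", contigId: "c1", taxon: "Pseudoxanthomonas suwonensis" },
  { readId: "r3", contigId: "c1", taxon: "Thermobifida fusca" },
]));
```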
|
47 |
Leveraging Scalable Data Analysis to Proactively Bolster the Anti-Phishing Ecosystem. January 2020
Despite an abundance of defenses that work to protect Internet users from online threats, malicious actors continue deploying relentless large-scale phishing attacks that target these users. Effectively mitigating phishing attacks remains a challenge for the security community due to attackers' ability to evolve and adapt to defenses, the cross-organizational nature of the infrastructure abused for phishing, and discrepancies between theoretical and realistic anti-phishing systems. Although technical countermeasures cannot always compensate for the human weakness exploited by social engineers, maintaining a clear and up-to-date understanding of the motivation behind, and execution of, modern phishing attacks is essential to optimizing such countermeasures.
In this dissertation, I analyze the state of the anti-phishing ecosystem and show that phishers use evasion techniques, including cloaking, to bypass anti-phishing mitigations in hopes of maximizing the return on investment of their attacks. I develop three novel, scalable data-collection and analysis frameworks to pinpoint the ecosystem vulnerabilities that sophisticated phishing websites exploit. The frameworks, which operate on real-world data and are designed for continuous deployment by anti-phishing organizations, empirically measure the robustness of industry-standard anti-phishing blacklists (PhishFarm and PhishTime) and proactively detect and map phishing attacks prior to launch (Golden Hour). Using these frameworks, I conduct a longitudinal study of blacklist performance and the first large-scale end-to-end analysis of phishing attacks (from spamming through monetization). As a result, I thoroughly characterize modern phishing websites and identify desirable characteristics for enhanced anti-phishing systems, such as more reliable methods for the ecosystem to collectively detect phishing websites and meaningfully share the corresponding intelligence. In addition, findings from these studies led to actionable security recommendations that were implemented by key organizations within the ecosystem to help improve the security of Internet users worldwide. (Doctoral Dissertation, Computer Science, 2020)
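To make the cloaking idea concrete, here is a deliberately simplified probe sketch (the real frameworks vary IPs, geolocation, and full browser fingerprints; this toy version only swaps the User-Agent header and compares response sizes):

```javascript
// Fetch the same URL as a "victim" and as a "crawler" and flag large
// divergence between the two responses as a cloaking signal.
async function looksCloaked(url) {
  const profiles = [
    { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" },          // victim-like
    { "User-Agent": "Googlebot/2.1 (+http://www.google.com/bot.html)" },    // crawler-like
  ];
  const bodies = [];
  for (const headers of profiles) {
    const res = await fetch(url, { headers });
    bodies.push(await res.text());
  }
  const diff = Math.abs(bodies[0].length - bodies[1].length);
  return diff > 0.5 * Math.max(bodies[0].length, bodies[1].length);
}

looksCloaked("https://example.com/landing").then(console.log);
```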
|
48 |
Visualisering av och mätning i punktmoln : En jämförelse av fyra mjukvaror / Visualization of and measurement in point clouds: a comparison of four software packages. Niklasson, Pierre; Kalén, Niclas, January 2017
In this thesis, various software packages for point cloud visualization have been investigated. Laser scanning is widely used to create three-dimensional models, but there is a lack of software for visualization. Point clouds usually have a large file size and need convenient methods for visualization and presentation to third parties. The development of web browsers means that there are now good opportunities to visualize point clouds in web-based services. The purpose has been to compare professional software with open-source and free software in how well they visualize, measure in, and present point clouds.

Detail in a point cloud is determined by its point density. Higher point density gives better detail but takes longer to scan and requires more storage space; the density is set by the client's requirements, so a high point density is not necessarily worth striving for, given that it results in more data to handle. The software investigated comprises Autodesk ReCap, Leica Truview, Pointscene, and Potree, all compared against Leica Cyclone. Only three of them were able to read the PTS file format, while Potree and Truview received the point cloud converted and exported to their proprietary file formats. The comparison between the packages was based mainly on differences in length measurements, as angle and area tools are not available in all of them. The length measurements were repeated 30 times, and the mean and the uncertainty for each package were used in the comparison. The survey shows that the differences between the packages are small, except for Truview, which is the only one with significant deviations from Cyclone. No significant differences in length measurements arise from the conversions to Potree. Pointscene and Potree are visually similar; Pointscene is nevertheless the preferred software because it provides its own servers, which simplifies sharing point clouds with other users.
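A brief sketch of the comparison procedure described, assuming repeated measurements per package (the numbers below are made up for illustration; the thesis does not publish its raw values here):

```javascript
// Summarize repeated length measurements as a mean plus the standard
// uncertainty of the mean, then test a package against the reference.
function summarize(measurements) {
  const n = measurements.length;
  const mean = measurements.reduce((a, b) => a + b, 0) / n;
  const variance =
    measurements.reduce((s, x) => s + (x - mean) ** 2, 0) / (n - 1);
  return { mean, uncertainty: Math.sqrt(variance / n) }; // std error of mean
}

const cyclone = summarize([12.031, 12.029, 12.030]);   // reference (Cyclone)
const candidate = summarize([12.034, 12.036, 12.033]); // package under test
const differs =
  Math.abs(candidate.mean - cyclone.mean) >
  2 * Math.hypot(candidate.uncertainty, cyclone.uncertainty);
console.log({ cyclone, candidate, differs }); // significant deviation?
```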
|
49 |
Vizualizace rozsáhlých grafových dat na webu / Large Graph Data Visualisation on the Web. Jarůšek, Tomáš, January 2020
Graph databases provide a form of data storage that is fundamentally different from the relational model. The goal of this thesis is to visualize such data and determine the maximum volume that current web browsers are able to process at once. For this purpose, an interactive web application was implemented. Data are stored using the RDF (Resource Description Framework) model, which represents them as triples of the form subject - predicate - object. Communication between this database, which runs on a server, and the client is realized via a REST API. The client itself is implemented in JavaScript, and visualization is performed on an HTML canvas element in different ways by applying three specially designed methods: greedy, greedy-swap, and force-directed. The resulting limits were determined primarily by measuring the running times of the different parts and depend heavily on the user's goals. If it is necessary to visualize as much data as possible, 150,000 triples proved to be the limiting volume; if the goal is maximum quality and application smoothness, the limit does not exceed a few thousand.
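For reference, a basic force-directed iteration such as the thesis's force-directed method might build on (a standard spring-embedder scheme, not the author's exact formulas):

```javascript
// One layout step: all node pairs repel, edges act as springs.
function forceStep(nodes, edges, k = 50, dt = 0.05) {
  for (const a of nodes) { a.fx = 0; a.fy = 0; }
  for (const a of nodes)                       // repulsion between all pairs
    for (const b of nodes) {
      if (a === b) continue;
      const dx = a.x - b.x, dy = a.y - b.y;
      const d2 = dx * dx + dy * dy + 0.01;
      a.fx += (k * k * dx) / d2;
      a.fy += (k * k * dy) / d2;
    }
  for (const [i, j] of edges) {                // spring attraction on edges
    const a = nodes[i], b = nodes[j];
    const dx = b.x - a.x, dy = b.y - a.y;
    const d = Math.hypot(dx, dy) || 1;
    const f = (d * d) / k;
    a.fx += (f * dx) / d; a.fy += (f * dy) / d;
    b.fx -= (f * dx) / d; b.fy -= (f * dy) / d;
  }
  for (const a of nodes) { a.x += a.fx * dt; a.y += a.fy * dt; }
}

const nodes = [{ x: 0, y: 0 }, { x: 10, y: 0 }, { x: 5, y: 8 }];
forceStep(nodes, [[0, 1], [1, 2]]);
console.log(nodes); // positions after one layout iteration
```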
|
50 |
Segmentace stránky ve webovém prohlížeči / Page Segmentation in a Web Browser. Zubrik, Tomáš, January 2021
This thesis deals with web page segmentation in a web browser. An implementation of the Box Clustering Segmentation (BCS) method was created in JavaScript using an automated browser. The implementation consists of two main steps: extracting boxes (leaf DOM nodes) from the browser context, and then clustering them based on the similarity model defined in BCS. The main result of this thesis is a functional implementation of the BCS method usable for web page segmentation. Its functionality and accuracy were evaluated by comparison with a reference implementation written in Java.
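A hedged sketch of the box-extraction step described (runs in the page context, e.g. injected through an automated browser; the visibility filter and output shape are assumptions, not the thesis code):

```javascript
// Collect visible leaf DOM elements with their rendered bounding boxes,
// the raw input that BCS-style clustering operates on.
function extractBoxes(root = document.body) {
  const boxes = [];
  const walk = (node) => {
    const kids = [...node.children];
    if (kids.length === 0) {                     // leaf element
      const r = node.getBoundingClientRect();
      if (r.width > 0 && r.height > 0) {         // skip invisible nodes
        boxes.push({
          x: r.x, y: r.y, width: r.width, height: r.height,
          text: node.textContent.trim(),
        });
      }
      return;
    }
    kids.forEach(walk);
  };
  walk(root);
  return boxes;
}

// In an automated-browser setup this might be evaluated remotely, e.g.:
// const boxes = await page.evaluate(extractBoxes);  // Puppeteer-style call
```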
|