1 |
Nové přístupy k automatické detekci XSS chyb / New Approaches Towards Automated XSS Flaw Detection. Steinhauser, Antonín. January 2020.
Cross-site scripting (XSS) flaws are a class of security flaws particular to web applications. XSS flaws generally allow an attacker to supply the affected web application with a malicious input that is then included in an output page without being properly encoded (sanitized). Recent advances in web application technologies and web browsers have introduced various prevention mechanisms, narrowing down the scope of possible XSS attacks, but those mechanisms are usually selective and prevent only a subset of XSS flaws. Among the types of XSS flaws that are largely omitted are context-sensitive XSS flaws. A context-sensitive XSS flaw occurs when the potentially malicious input is sanitized by the affected web application before being included in the output page, but the sanitization is not appropriate for the browser context of the sanitized value. Another type of XSS flaw, already better known but still insufficiently prevented, is the stored XSS flaw. Applications affected by stored XSS flaws store unsafe client input in persistent storage and return it in another HTTP response to (possibly) another client. Our work focuses on advancing state-of-the-art automated detection of those two types of XSS flaws, using analysis techniques ranging from purely static analysis to dynamic graybox analysis.
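To make the "browser context" idea concrete, here is a minimal sketch (our illustration, not code from the thesis) showing that an encoder that is correct for one output context can be insufficient, or entirely useless, in another:

```python
# Hedged sketch: the same input needs different sanitization depending on the
# browser context it is emitted into.
import html

payload = "' onmouseover='alert(1)"

# Adequate for an HTML text-node context (escapes &, <, >):
text_node = f"<p>{html.escape(payload, quote=False)}</p>"

# Context-sensitive flaw: the same encoder inside a single-quoted attribute
# lets the payload break out of the attribute value.
flawed_attr = f"<img title='{html.escape(payload, quote=False)}'>"

# Quote-aware escaping is the appropriate encoder for this context:
safe_attr = f"<img title='{html.escape(payload, quote=True)}'>"

# And in a URL context, character escaping does not help at all; the
# scheme itself must be validated.
url = "javascript:alert(1)"
flawed_link = f'<a href="{html.escape(url)}">click</a>'  # escape() changes nothing here

print(text_node, flawed_attr, safe_attr, flawed_link, sep="\n")
```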
|
2 |
On the learnability of Mildly Context-Sensitive languages using positive data and correction queries. Becerra Bonache, Leonor. 06 March 2006.
With this dissertation we bring together the theory of grammatical inference and studies of language acquisition, in pursuit of one final goal: to deepen our understanding of how children acquire their first language by exploiting the theory of inference of formal grammars. Our three main contributions are:

1. Introduction of a new class of languages called Simple p-dimensional External Contextual (SEC). Although research in grammatical inference has focused on learning regular or context-free languages, we propose to focus these studies on classes of languages that are more relevant from a linguistic point of view (families of languages that occupy an orthogonal position in the Chomsky hierarchy and are Mildly Context-Sensitive, for example SEC).

2. Presentation of a new learning paradigm based on correction queries. One of the main results in formal learning theory is that deterministic finite automata (DFA) are efficiently learnable from membership queries and equivalence queries. Taking into account that the correction of errors can play an important role in first language acquisition, we introduce a novel learning model that replaces membership queries with correction queries.

3. Presentation of results based on the two previous contributions. First, we prove that SEC languages are learnable from positive data only. Second, we prove that it is possible to learn DFA from corrections and that the number of queries is reduced considerably.

The results obtained in this dissertation represent an important contribution to grammatical inference, where research so far has focused mainly on the mathematical aspects of the models. Moreover, these results could be extended to highly topical fields of application such as machine learning, robotics, natural language processing, and bioinformatics.
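A rough sketch of the distinction (our illustration, using a toy regular target language rather than SEC or a DFA learner): where a membership oracle answers only yes/no, a correction oracle answers a failed query with a correcting string, here the shortest suffix that completes the queried string into a member of the language:

```python
# Hedged sketch of a correction-query oracle: for a queried string w, return
# "" if w is in the target language, otherwise the shortest suffix w' (in
# length-lexicographic order) such that w + w' is in the target language.
from itertools import product

ALPHABET = "ab"

def in_target(s: str) -> bool:
    # Toy target language (an assumption for the demo): strings ending in "ab".
    return s.endswith("ab")

def correction_query(w: str, max_len: int = 4):
    if in_target(w):
        return ""  # membership confirmed; empty correction
    for n in range(1, max_len + 1):
        for suffix in product(ALPHABET, repeat=n):
            cand = "".join(suffix)
            if in_target(w + cand):
                return cand
    return None  # no correction found within the search bound

print(correction_query("a"))   # 'b'  -- corrects "a" into "ab"
print(correction_query("ab"))  # ''   -- already a member
print(correction_query("b"))   # 'ab' -- corrects "b" into "bab"
```

Each answer carries strictly more information than a plain "no", which is the intuition behind the reduced query counts reported above.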
|
3 |
The design and implementation of an interactive proof editor. Ritchie, Brian. January 1988.
This thesis describes the design and implementation of the IPE, an interactive proof editor for first-order intuitionistic predicate calculus, developed at the University of Edinburgh during 1983-1986, by the author together with John Cartmell and Tatsuya Hagino. The IPE uses an attribute grammar to maintain the state of its proof tree as a context-sensitive structure. The interface allows free movement through the proof structure, and encourages a "proof-by-experimentation" approach, since no proof step is irrevocable. We describe how the IPE's proof rules can be derived from natural deduction rules for first-order intuitionistic logic, how these proof rules are encoded as an attribute grammar, and how the interface is constructed on top of the grammar. Further facilities for the manipulation of the IPE's proof structures are presented, including a notion of IPE-tactic for their automatic construction. We also describe an extension of the IPE to enable the construction and use of simply-structured collections of axioms and results, the main provision here being an interactive "theory browser" which looks for facts which match a selected problem.
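As a hedged illustration of the kind of rule encoding described (ours, not the IPE's actual attribute-grammar implementation), a natural-deduction rule can be read as a revocable goal-refinement step on a proof tree:

```python
# Sketch: implication introduction refines the goal  Γ ⊢ A -> B  into the
# subgoal  Γ, A ⊢ B. A refinement only fills in children, so undoing a step
# (clearing `children` and `rule`) is always possible -- no step is irrevocable.
from dataclasses import dataclass, field

@dataclass
class Imp:
    left: object   # antecedent formula
    right: object  # consequent formula

@dataclass
class Goal:
    hypotheses: list      # the context Γ of the sequent
    conclusion: object    # the formula to prove
    children: list = field(default_factory=list)
    rule: str = ""

def imp_intro(goal: Goal) -> Goal:
    assert isinstance(goal.conclusion, Imp), "rule applies only to implications"
    goal.rule = "->I"
    subgoal = Goal(goal.hypotheses + [goal.conclusion.left], goal.conclusion.right)
    goal.children = [subgoal]
    return subgoal

g = Goal([], Imp("A", Imp("B", "A")))        # ⊢ A -> (B -> A)
sub = imp_intro(imp_intro(g))
print(sub.hypotheses, "⊢", sub.conclusion)   # ['A', 'B'] ⊢ A, closable by assumption
```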
|
4 |
Partnering with poetry: poetry in American education standards from 1971-2010. Van Zant, Melissa G.
Doctor of Philosophy / Department of Curriculum and Instruction / F. Todd Goodson / American education is increasingly driven by standards and high-stakes tests. This creates a dynamic in which curricular content addressed in the standards is subjected to high-stakes tests, while content not addressed in the standards risks being ignored. Such a dynamic threatens poetry—a subject whose strength resides in its ambiguity rather than in one correct answer. The literature review establishes poetry as an important area of study for K-12 students and explores how the Standards Movement has affected poetry instruction in other English-speaking countries. This research used context-sensitive textual analysis to examine the treatment of poetry in American English language arts standards from 1971 to 2010 as demonstrated in the following three documents: (1) Representative Performance Objectives for High School English, written by the Tri-University Project in 1971; (2) Standards for the English Language Arts, written by the National Council of Teachers of English and the International Reading Association in 1996; and (3) the Common Core State Standards for English Language Arts, written in 2010. Context-sensitive textual analysis (Huckin, 1992) presumes that the contexts in which texts are written and read impact their meanings. The study describes those impacts, their implications, and suggestions for continued study.
|
5 |
A context-based name resolution approach for semantic schema integration. BELIAN, Rosalie Barreto. 31 January 2008.
One of the goals of the Semantic Web is to provide a wide diversity of services from different domains on the Web. Most of these services are collaborative, with tasks that rest on decision-making processes. Those decisions, in turn, are better grounded the more task-related information they can take into account. This scenario encourages the development of techniques and tools oriented towards information integration, seeking solutions to the heterogeneity of data sources.
The mediation-based architecture used in the development of information integration systems aims to isolate the user from the distributed data sources by means of an intermediate software layer called the mediator. The mediator in an information integration system uses a global schema to execute user queries, which are reformulated into sub-queries according to the local schemas of the data sources. A schema integration process generates the global schema (the mediation schema) as the result of integrating the individual schemas of the data sources.
The main problem in schema integration is the heterogeneity of the local data sources, which makes semantic resolution essential. Purely structural and syntactic methods for schema integration are of little use unless the real meaning of the schema elements has been identified first. A schema integration process yields an integrated global schema and a set of inter-schema mappings, and usually comprises a few basic phases: pre-integration, schema comparison, schema mapping, schema unification, and mediation schema generation.
In schema integration, name resolution is the process that determines which real-world entity a given schema element refers to, taking into account a set of available semantic information. The semantic information needed for name resolution is generally obtained from generic vocabularies and/or vocabularies specific to a given knowledge domain. Element names can have different meanings depending on the semantic context to which they are related. Thus, using contextual information in addition to domain information can bring greater precision to the interpretation of schema elements, allowing their meaning to change according to a given context.
This work proposes a context-based name resolution approach for schema integration. One of its strengths is the modeling and use of the contextual information required for name resolution at different stages of the schema integration process. The contextual information is modeled using an ontology, which favors the use of inference mechanisms and the sharing and reuse of information. In addition, this work proposes a simple and extensible schema integration process, designed so that its development could concentrate mainly on the requirements related to name resolution. This process was developed for a mediation-based information integration system that adopts the GAV approach and XML as the common model for data interchange and for integrating data sources on the Web.
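As a loose illustration of the core idea (a hedged sketch with a hypothetical mini-vocabulary, not the thesis's ontology), context-based name resolution maps the same element name to different real-world concepts depending on the semantic context of its schema:

```python
# Hypothetical (name, context) -> concept table standing in for an ontology.
CONTEXT_SENSES = {
    ("title", "publication"): "document title",
    ("title", "person"):      "honorific (Mr, Dr, ...)",
    ("state", "address"):     "federative unit",
    ("state", "order"):       "processing status",
}

def resolve_name(element, context, default=None):
    """Return the real-world concept an element name denotes in a context."""
    return CONTEXT_SENSES.get((element.lower(), context), default)

# Two source schemas both expose a "state" element, but the elements denote
# different concepts -- so they must NOT be unified in the mediation schema:
print(resolve_name("state", "address"))  # 'federative unit'
print(resolve_name("state", "order"))    # 'processing status'
```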
|
6 |
Individualized Virtual Reality Rehabilitation after Brain Injuries. Koenig, Sebastian. January 2012.
Context-sensitive cognitive rehabilitation aims to address the specific deficits of patients by taking into account the unique strengths and weaknesses of each brain-injured individual. However, this approach requires customized assessments and training tasks that are difficult to validate, time-consuming, or simply unavailable for daily clinical use. Given the currently struggling economy and an increasing number of patients with brain injuries, a feasible and efficient solution for this individualized rehabilitation concept is needed.
This dissertation addresses the development and evaluation of a VE-based training and assessment for context-sensitive cognitive rehabilitation. The proposed application is designed to closely resemble real-world places that are relevant to each individual neurological patient. Despite such an ecologically valid approach to rehabilitation, the application also integrates traditional process-specific tasks that offer potential for standardization and collection of normative data across patient populations.
Three cognitive tasks (navigation, orientation, spatial memory) have been identified for use in individualized VEs. In three experimental trials the feasibility and validity of the technological implementation and theoretical foundation of these tasks have been assessed. In a fourth trial one of the tasks was used for the rehabilitation of a brain-injured patient. Based on the results of these studies, a workflow for the rapid development of VEs has been established that allows a VR developer to provide clinicians with individualized cognitive tasks. In addition, promising results for the clinical use and validation of the proposed system form the basis for future randomized controlled clinical trials.
In conclusion, this dissertation elaborates how context-sensitive and process-specific rehabilitation approaches each offer a unique perspective on cognitive rehabilitation and how combining both through the means of VR technology may offer new opportunities to further this clinical discipline.
|
7 |
Complexities of Order-Related Formal Language Extensions / Komplexiteter hos ordnings-relaterade utökningar av formella språk. Berglund, Martin. January 2014.
The work presented in this thesis discusses various formal language formalisms that extend classical formalisms like regular expressions and context-free grammars with additional abilities, most relating to order. This is done while focusing on the impact these extensions have on the efficiency of parsing the languages generated. That is, rather than taking a step up on the Chomsky hierarchy to the context-sensitive languages, which makes parsing very difficult, a smaller step is taken, adding some mechanisms which permit interesting spatial (in)dependencies to be modeled. The most immediate example is shuffle formalisms, where existing language formalisms are extended by introducing operators which generate arbitrary interleavings of argument languages. For example, introducing a shuffle operator to the regular expressions does not make it possible to recognize context-free languages like a^n b^n, but it does capture some non-context-free languages, like the language of all strings containing the same number of a's, b's, and c's. The impact these additions have on parsing has many facets. Other than shuffle operators we also consider formalisms enforcing repeating substrings, formalisms moving substrings around, and formalisms that restrict which substrings may be concatenated. The formalisms studied here all have a number of properties in common:
- They are closely related to existing regular and context-free formalisms.
- They operate in a step-wise fashion, deriving strings by sequences of rule applications of individually limited power. Each step generates a constant number of symbols and does not modify parts that have already been generated. That is, strings are built in an additive fashion that does not explode in size (in contrast to, e.g., Lindenmayer systems). All languages here have a semi-linear Parikh image.
- They feature some interesting characteristic involving order or other spatial constraints. In the example of the shuffle, multiple derivations are in a sense interspersed in a way that each is unaware of.
- They are all intended to be limited enough to make an efficient parsing algorithm, at least for some cases, a reasonable goal.
This thesis gives intuitive explanations of a number of formalisms fulfilling these requirements, and sketches some results relating to the parsing problem for them. This should all be viewed as preparation for the more complete results and explanations featured in the papers given in the appendices.
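To make the shuffle operator concrete, here is a small sketch (our illustration, not from the thesis) of shuffling two strings, i.e., computing all interleavings that preserve the internal order of each argument:

```python
# Sketch of the string shuffle operation: shuffle(u, v) is the set of all
# interleavings of u and v in which the letters of u and of v each keep
# their original relative order.
def shuffle(u: str, v: str) -> set[str]:
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})

print(sorted(shuffle("ab", "c")))  # ['abc', 'acb', 'cab']
# Each interleaving is built from two derivations that proceed independently,
# unaware of each other -- the ordering phenomenon described above.
```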
|
8 |
Complexities of Parsing in the Presence of Reordering. Berglund, Martin. January 2012.
The work presented in this thesis discusses various formalisms for representing the addition of order-controlling and order-relaxing mechanisms to existing formal language models. An immediate example is shuffle expressions, which can represent not only all regular languages (a regular expression is a shuffle expression), but also feature additional operations that generate arbitrary interleavings of their argument strings. This defines a language class which, on the one hand, does not contain all context-free languages, but, on the other hand, contains an infinite number of languages that are not context-free. Shuffle expressions are, however, not themselves the main interest of this thesis. Instead we consider several formalisms that share many of their properties, where some are direct generalisations of shuffle expressions, while others feature very different methods of controlling order. Notably, all formalisms that are studied here:
- have a semi-linear Parikh image,
- are structured so that each derivation step generates at most a constant number of symbols (as opposed to the parallel derivations in, for example, Lindenmayer systems),
- feature interesting ordering characteristics, created either by derivation steps that may generate symbols in multiple places at once, or by multiple generating processes that produce output independently in an interleaved fashion, and
- are limited enough to make the question of efficient parsing an interesting and reasonable goal.
This vague description already hints towards the formalisms considered: the different classes of mildly context-sensitive devices and concurrent finite-state automata. This thesis will first explain and discuss these formalisms, and will then primarily focus on the associated membership problem (or parsing problem). Several parsing results are discussed here, and the papers in the appendix give a more complete picture of these problems and some related ones.
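The simplest instance of the membership problem behind all of this is efficiently decidable, which is part of what makes the harder variants an interesting target. A hedged sketch (ours, not one of the thesis's algorithms): deciding whether w is an interleaving of two fixed strings u and v takes O(|u|·|v|) time by dynamic programming:

```python
# Sketch: is w in shuffle(u, v) for fixed strings u, v? Classic DP where
# dp[i][j] records whether w[:i+j] can be formed by interleaving u[:i], v[:j].
def is_interleaving(u: str, v: str, w: str) -> bool:
    if len(w) != len(u) + len(v):
        return False
    dp = [[False] * (len(v) + 1) for _ in range(len(u) + 1)]
    dp[0][0] = True
    for i in range(len(u) + 1):
        for j in range(len(v) + 1):
            if i > 0 and dp[i - 1][j] and u[i - 1] == w[i + j - 1]:
                dp[i][j] = True
            if j > 0 and dp[i][j - 1] and v[j - 1] == w[i + j - 1]:
                dp[i][j] = True
    return dp[len(u)][len(v)]

print(is_interleaving("ab", "cd", "acbd"))  # True
print(is_interleaving("ab", "cd", "adbc"))  # False
```

Membership becomes much harder once the arguments are languages rather than fixed strings, which is the territory the thesis explores.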
|
9 |
CEManTIKA: a Domain-independent framework for designing context sensitive systems. SANTOS, Vaninha Vieira dos. 31 January 2008.
Conselho Nacional de Desenvolvimento Científico e Tecnológico / At a time when users must process an ever-increasing amount of information and perform ever more complex tasks in less time, introducing the concept of context into computing systems becomes a necessity. Context is defined as "the interrelated conditions in which something exists or occurs"; context is what makes it possible to identify what is or is not relevant in a given situation. Context-sensitive systems are those that use context to provide information or services relevant to the execution of a task. Designing a context-sensitive system is not trivial, since it requires dealing with questions such as what kind of information to consider as context, how to represent that information, how it can be acquired and processed, and how to design the system's use of context. Although there is work addressing specific challenges involved in the development of context-sensitive systems, most solutions are proprietary or restricted to a particular type of application and are not easily replicated across different application domains. A further problem is that software designers have difficulty specifying what exactly to consider as context and how to design its representation, management, and use. This thesis proposes a framework to support the design of context-sensitive systems in different domains, composed of four main elements: (i) a generic architecture for context-sensitive systems; (ii) a domain-independent context metamodel, which guides context modeling in different applications; (iii) a set of UML profiles covering the structure of context and of context-sensitive behavior; and (iv) a process that directs the execution of activities related to context specification and to the design of context-sensitive systems. To investigate the feasibility of the proposal, we designed two applications in different domains; for one of them, a functional prototype was built and evaluated by end users.
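As a loose illustration of element (ii), here is a hedged sketch of what a domain-independent context metamodel can look like; the names are hypothetical, not CEManTIKA's actual metamodel. Contextual entities carry context elements, and only the elements relevant to the task in focus count as context:

```python
# Hypothetical metamodel sketch: context is not all available information,
# only the part relevant to the current focus (task).
from dataclasses import dataclass, field

@dataclass
class ContextElement:
    name: str            # e.g. "location", "role"
    value: object
    acquisition: str     # how it was acquired: "sensed", "profiled", "derived"

@dataclass
class ContextualEntity:
    name: str
    elements: list = field(default_factory=list)

@dataclass
class Focus:
    task: str
    relevant: set        # context element names that matter for this task

    def context_of(self, entity: ContextualEntity) -> list:
        return [e for e in entity.elements if e.name in self.relevant]

user = ContextualEntity("user", [
    ContextElement("location", "room 3", "sensed"),
    ContextElement("role", "physician", "profiled"),
    ContextElement("shoe_size", 42, "profiled"),
])
focus = Focus(task="recommend nearby equipment", relevant={"location", "role"})
print([e.name for e in focus.context_of(user)])  # ['location', 'role']
```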
|
10 |
To Force a Bug: Extending Hybrid Fuzzing. Näslund, Johan; Nero, Henrik. January 2020.
One of the more promising approaches to automated binary testing today is hybrid fuzzing, a combination of two established techniques, fuzzing and symbolic execution, for detecting errors in code. Hybrid fuzzing was pioneered by the authors of Angr and Driller, opening the door for more specialized tools such as QSYM. These hybrid fuzzers are coverage-guided, meaning they measure their success by how much code they have covered. This is a typical approach but, as with many, it is not flawless: just because a region of code has been covered does not mean it has been fully tested. Some flaws depend on the context in which the code is being executed, such as double-free vulnerabilities. Even if the free routine has been invoked twice, it does not mean that a double-free bug has occurred; to cause such a vulnerability, one has to free the same memory chunk twice without it being reallocated between the two invocations of free. In this research, we extend one of the current state-of-the-art hybrid fuzzers, QSYM, an open-source project, by adding double-free detection in a tool we call QSIMP. We then investigate our hypothesis that it is possible to implement such functionality without losing so much performance that the tool becomes impractical. To test the hypothesis we designed two experiments: one tests the ability of our tool to find double-free bugs (the type of context-sensitive bug we chose to test with); the second explores the scalability of the tool when this functionality is enabled. Our experiments showed that we were able to implement context-sensitive bug detection within QSYM. QSIMP finds most of the double-free vulnerabilities we tested it on, though not all, because of some optimizations we were unable to work around. According to our tests, this was achieved with only a small effect on scalability: our tool finds the same bugs as the original QSYM while adding the ability to find double-free vulnerabilities.
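To illustrate why double-free detection is context-sensitive (a toy sketch of ours, not QSIMP's actual mechanism), consider replaying an allocation trace and flagging a free only when the chunk has not been reallocated since its previous free:

```python
# Toy shadow-state checker over an allocation trace: a second free of the
# same address is a bug only without an intervening reallocation.
def check_trace(trace):
    live = set()    # addresses currently allocated
    freed = set()   # addresses freed and not yet reallocated
    bugs = []
    for i, (op, addr) in enumerate(trace):
        if op == "malloc":
            live.add(addr)
            freed.discard(addr)  # reallocation resets the free state
        elif op == "free":
            if addr in freed:
                bugs.append(f"double free of {addr:#x} at event {i}")
            elif addr not in live:
                bugs.append(f"invalid free of {addr:#x} at event {i}")
            else:
                live.discard(addr)
                freed.add(addr)
    return bugs

# Two frees of the same chunk with a malloc in between: NOT a double free.
print(check_trace([("malloc", 0x10), ("free", 0x10),
                   ("malloc", 0x10), ("free", 0x10)]))  # []
# Two frees with no intervening reallocation: a real double free.
print(check_trace([("malloc", 0x20), ("free", 0x20), ("free", 0x20)]))
```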
|