  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Investigating the Neural Representations of Taste and Health

Londeree, Allison M. 23 October 2019
No description available.
2

Similarity analysis of industrial alarm flood data

Ahmed, Kabir Unknown Date
No description available.
3

Similarity Based Large Scale Malware Analysis: Techniques and Implications

Li, Yuping 07 June 2018
Malware analysis and detection remains one of the central battlefields for the cybersecurity industry. In the desktop domain, the past several years saw multiple significant ransomware attacks; the 2017 WannaCry attack, for example, was estimated to have affected more than 200,000 computers across 150 countries, with damages in the hundreds of millions of dollars. Similarly, Android malware has had a growing impact on individuals worldwide due to the popularity of smartphones and IoT devices. In this dissertation, we describe novel similarity-comparison techniques for large-scale desktop and Android malware analysis, and the practical implications of machine-learning-based approaches to malware detection. First, we propose a generic and effective solution for accurate and efficient binary similarity analysis of desktop malware. Binary similarity analysis is an essential technique for a variety of security analysis tasks, including malware detection and malware clustering. Even though various solutions have been developed, existing binary similarity analysis methods still suffer from limited efficiency, accuracy, and usability. In this work, we propose a novel graphical fuzzy hashing scheme for accurate and efficient binary similarity analysis. We first abstract the control flow graphs (CFGs) of binary code to extract blended n-gram graphical features, and then encode these features into numeric vectors (called graph signatures), measuring similarity by comparing the graph signatures. We further leverage a fuzzy hashing technique to convert the numeric graph signatures into smaller, fixed-size fuzzy hash outputs for efficient comparison. Our comprehensive evaluation demonstrates that our blended n-gram graphical feature based CFG comparison is more effective and efficient than existing CFG comparison techniques.
Based on our CFG comparison method, we develop BingSim, a binary similarity analysis tool, and show that BingSim outperforms existing binary similarity analysis tools at similarity-based malware detection and malware clustering. Second, we identify the challenges faced by overall-similarity-based Android malware clustering and design a specialized system to address them. Clustering has been well studied for desktop malware analysis as an effective triage method. Conventional similarity-based clustering techniques, however, cannot be immediately applied to Android malware analysis because of the excessive use of third-party libraries in Android application development and the widespread use of repackaging in malware development. We design and implement an Android malware clustering system that iteratively mines malicious payloads and checks whether malware samples share the same version of a malicious payload. Our system utilizes a hierarchical clustering technique and an efficient bit-vector format to represent Android apps. Experimental results demonstrate that our clustering approach achieves a precision of 0.90 and recall of 0.75 on the Android Genome malware dataset, and an average precision of 0.98 and recall of 0.96 with respect to manually verified ground truth. Third, we study the fundamental issues faced by traditional machine learning (ML) based Android malware detection systems and examine the role of ML for Android malware detection in practice, which leads to a revised evaluation strategy: an ML-based malware detection system should be evaluated by checking its zero-day detection capabilities. Existing machine-learning-based Android malware research obtains its ground truth by consulting AV products and uses the same label set for training and testing. However, there is a mismatch between how such ML systems have been evaluated and the true purpose of using an ML system in practice.
The goal of applying ML is not to reproduce or verify the same potentially imperfect knowledge, but rather to produce something better: something closer to the ultimate ground truth about the apps' maliciousness. It is therefore more meaningful to check a system's zero-day detection capability than its detection accuracy on known malware. This evaluation strategy is aligned with how an ML algorithm can benefit malware detection in practice, by acknowledging that any ML classifier has to be trained on imperfect knowledge, and that such knowledge evolves over time. Besides traditional malware prediction approaches, we also examine mislabel identification approaches. Through extensive experiments, we demonstrate that: (a) it is feasible to evaluate ML-based Android malware detection systems with regard to their zero-day malware detection capabilities; and (b) both malware prediction and mislabel identification approaches can achieve verifiable zero-day malware detection, even when trained on an old and noisy ground-truth dataset.
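The pipeline the abstract describes (abstract a CFG, extract blended n-gram graphical features over it, encode them as a numeric graph signature, compare signatures) can be illustrated with a toy sketch. This is my own minimal illustration, not the dissertation's BingSim implementation: the single-label blocks, bigram features, and cosine comparison stand in for its richer feature encoding and fuzzy hashing.

```python
# Sketch: encode a control-flow graph as a bag of label n-grams taken along
# its edges ("graph signature"), then compare signatures numerically.
from collections import Counter
from math import sqrt

def graph_signature(cfg_edges, block_labels):
    """Build a bag of (src-label, dst-label) bigrams by walking each CFG edge.

    cfg_edges: list of (src, dst) basic-block ids
    block_labels: dict mapping block id -> abstracted instruction label
    """
    grams = Counter()
    for src, dst in cfg_edges:
        grams[(block_labels[src], block_labels[dst])] += 1
    return grams

def cosine_similarity(sig_a, sig_b):
    """Cosine similarity of two graph signatures; 1.0 means identical vectors."""
    dot = sum(sig_a[k] * sig_b[k] for k in sig_a.keys() & sig_b.keys())
    norm = sqrt(sum(v * v for v in sig_a.values())) * \
           sqrt(sum(v * v for v in sig_b.values()))
    return dot / norm if norm else 0.0

labels = {0: "mov", 1: "cmp", 2: "jmp", 3: "call"}
sig1 = graph_signature([(0, 1), (1, 2), (1, 3)], labels)  # full CFG
sig2 = graph_signature([(0, 1), (1, 2)], labels)          # truncated variant
print(round(cosine_similarity(sig1, sig1), 2))  # identical graphs -> 1.0
```

In the real scheme the signatures are further converted to fixed-size fuzzy hashes so that millions of binaries can be compared cheaply; this sketch compares the raw vectors directly.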
4

Social Cohesion Analysis of Networks: A Novel Method for Identifying Cohesive Subgroups in Social Hypertext

Chin, Alvin Yung Chian 23 September 2009
Finding subgroups within social networks is important for understanding and possibly influencing the formation and evolution of online communities. This thesis addresses the problem of finding cohesive subgroups within social networks inferred from online interactions. The dissertation begins with a review of relevant literature and identifies existing methods for finding cohesive subgroups. This is followed by the introduction of the SCAN method for identifying subgroups in online interaction. The SCAN (Social Cohesion Analysis of Networks) methodology involves three steps: selecting the possible members (Select), collecting those members into possible subgroups (Collect) and choosing the cohesive subgroups over time (Choose). Social network analysis, clustering and partitioning, and similarity measurement are then used to implement each of the steps. Two further case studies are presented, one involving the TorCamp Google group and the other involving YouTube vaccination videos, to demonstrate how the methodology works in practice. Behavioural measures of Sense of Community and the Social Network Questionnaire are correlated with the SCAN method to demonstrate that the SCAN approach can find meaningful subgroups. Additional empirical findings are reported. Betweenness centrality appears to be a useful filter for screening potential subgroup members, and members of cohesive subgroups have stronger community membership and influence than others. Subgroups identified using weighted average hierarchical clustering are consistent with the subgroups identified using the more computationally expensive k-plex analysis. The value of similarity measurement in assessing subgroup cohesion over time is demonstrated, and possible problems with the use of Q modularity to identify cohesive subgroups are noted. Applications of this research to marketing, expertise location, and information search are also discussed.
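The "Select" step above, screening potential subgroup members by betweenness centrality, can be sketched as follows. The toy graph, the threshold, and the use of Brandes' algorithm are my own illustrative choices, not the thesis's implementation.

```python
# Sketch: filter candidate subgroup members by betweenness centrality.
from collections import deque, defaultdict

def betweenness(adj):
    """Unnormalized betweenness centrality (Brandes' algorithm, unweighted)."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        stack, preds = [], defaultdict(list)
        sigma = dict.fromkeys(adj, 0)   # number of shortest s->v paths
        dist = dict.fromkeys(adj, -1)
        sigma[s], dist[s] = 1, 0
        queue = deque([s])
        while queue:                    # BFS, recording the shortest-path DAG
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        while stack:                    # back-propagate pair dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def select_candidates(adj, threshold=0.0):
    """Keep only members whose centrality exceeds the screening threshold."""
    return {v for v, score in betweenness(adj).items() if score > threshold}

# Toy interaction network: "b" and "c" bridge the other members.
adj = {"a": ["b"], "b": ["a", "c", "e"], "c": ["b", "d"], "d": ["c"], "e": ["b"]}
print(select_candidates(adj))  # {'b', 'c'} (in some order)
```

Members who never lie on a shortest path between others score zero and are filtered out, which matches the finding that betweenness is a useful screen for potential subgroup members.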
5

Social Cohesion Analysis of Networks: A Novel Method for Identifying Cohesive Subgroups in Social Hypertext

Chin, Alvin Yung Chian 23 September 2009 (has links)
Finding subgroups within social networks is important for understanding and possibly influencing the formation and evolution of online communities. This thesis addresses the problem of finding cohesive subgroups within social networks inferred from online interactions. The dissertation begins with a review of relevant literature and identifies existing methods for finding cohesive subgroups. This is followed by the introduction of the SCAN method for identifying subgroups in online interaction. The SCAN (Social Cohesion Analysis of Networks) methodology involves three steps: selecting the possible members (Select), collecting those members into possible subgroups (Collect) and choosing the cohesive subgroups over time (Choose). Social network analysis, clustering and partitioning, and similarity measurement are then used to implement each of the steps. Two further case studies are presented, one involving the TorCamp Google group and the other involving YouTube vaccination videos, to demonstrate how the methodology works in practice. Behavioural measures of Sense of Community and the Social Network Questionnaire are correlated with the SCAN method to demonstrate that the SCAN approach can find meaningful subgroups. Additional empirical findings are reported. Betweenness centrality appears to be a useful filter for screening potential subgroup members, and members of cohesive subgroups have stronger community membership and influence than others. Subgroups identified using weighted average hierarchical clustering are consistent with the subgroups identified using the more computationally expensive k-plex analysis. The value of similarity measurement in assessing subgroup cohesion over time is demonstrated, and possible problems with the use of Q modularity to identify cohesive subgroups are noted. Applications of this research to marketing, expertise location, and information search are also discussed.
6

Sherlock N-Overlap: invasive normalization and overlap coefficient for similarity analysis between source codes in programming courses / Sherlock N-Overlap: normalização invasiva e coeficiente de sobreposição para análise de similaridade entre códigos-fonte em disciplinas de programação

Danilo Leal Maciel 07 July 2014
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / This work addresses the problem of plagiarism detection among source codes in programming classes. Despite the wide range of tools available for plagiarism detection, few are able to effectively identify all lexical and semantic similarities between pairs of codes, owing to the complexity inherent in this type of analysis. Given this problem and scenario, we surveyed the main approaches to source code plagiarism detection discussed in the literature and, as the main contribution, built a tool applicable to laboratory practice. The tool is based on the Sherlock algorithm, which was enhanced from two perspectives: first, the similarity coefficient used by the algorithm was modified to improve its sensitivity when comparing signatures; second, invasive preprocessing techniques were proposed that, besides eliminating irrelevant information, also emphasize structural aspects of the programming language, joining or separating character sequences whose meaning is more significant for the comparison, or eliminating less relevant sequences to highlight others that enable better inference about the degree of similarity. The tool, called Sherlock N-Overlap, was subjected to a rigorous evaluation methodology, both in simulated scenarios and in real programming classes, with results surpassing tools currently highlighted in the plagiarism detection literature.
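The overlap coefficient named in the tool's title can be sketched on hashed token windows ("signatures"). The whitespace tokenization and window size here are my simplifications; the actual tool applies its invasive normalization pipeline before hashing.

```python
# Sketch: hash overlapping k-token windows of source code into a signature
# set, then compare two signature sets with the overlap coefficient.
def signatures(source, k=3):
    """Tokenize source code and hash every overlapping k-token window."""
    words = source.split()
    return {hash(" ".join(words[i:i + k])) for i in range(len(words) - k + 1)}

def overlap_coefficient(a, b):
    """|A ∩ B| / min(|A|, |B|): 1.0 when one signature set contains the other."""
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

code1 = "int main ( ) { return 0 ; }"
code2 = "int main ( ) { return 1 ; }"
# The snippets differ in a single token, so most trigram windows coincide.
print(overlap_coefficient(signatures(code1), signatures(code2)))  # ~0.57
```

Unlike Jaccard similarity, the overlap coefficient divides by the smaller set, so a short plagiarized fragment embedded in a much longer file still scores highly; this is plausibly why it suits plagiarism detection better than symmetric measures.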
7

UMA ABORDAGEM PARA A JUNÇÃO DE ONTOLOGIAS E SUA UTILIZAÇÃO NO DESENVOLVIMENTO DE ONTOLOGIAS DE APLICAÇÃO / AN APPROACH TO ONTOLOGY JOINING AND ITS USE IN THE DEVELOPMENT OF APPLICATION ONTOLOGIES

Silva, Antonio Fhillipi Maciel 07 November 2014
The reuse of ontologies is a process in which existing ontological knowledge is used as input to generate new ontologies, in order to reduce costs and increase the quality of the final product. However, techniques for building ontologies do not address reuse satisfactorily, even though it is an indispensable phase of ontology engineering. This work presents OntoJoin, a process for joining ontologies that employs lexical, structural and relational similarity analysis as mapping mechanisms. These mechanisms are responsible for identifying correspondences between elements of two ontologies given as input; matched elements are then combined, resulting in a new ontology generated through reuse. The combined use of these mechanisms: "Lexical Comparison", which compares the labels of the terms of the ontology elements; "Structural Comparison", which analyzes the concepts and their respective hierarchical structure; "Relational Comparison", which analyzes the concepts, their properties and non-taxonomic relationships; and "Index of Terms", which alters the hierarchy of concepts to better represent the terms to be combined, is the main feature that gives OntoJoin the potential to achieve greater effectiveness in joining ontologies than previously proposed techniques. An experimental evaluation was performed according to two procedures based on the principle of comparing the joined ontology against a reference one. The experiment consisted in measuring, with recall and precision, the effectiveness of the process in combining two ontologies in the tourism and sales domains. The preliminary results demonstrate the feasibility of the proposed process for joining ontologies.
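The "Lexical Comparison" mechanism described above can be sketched as string matching over element labels. The use of `difflib.SequenceMatcher`, the toy label lists, and the 0.8 cutoff are my own illustrative assumptions, not OntoJoin's actual matcher.

```python
# Sketch: pair up ontology element labels whose string similarity clears a
# cutoff, as a stand-in for the lexical mapping mechanism.
from difflib import SequenceMatcher

def lexical_matches(labels_a, labels_b, cutoff=0.8):
    """Return (label_a, label_b, score) triples with similarity >= cutoff."""
    pairs = []
    for a in labels_a:
        for b in labels_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= cutoff:
                pairs.append((a, b, round(score, 2)))
    return pairs

# Toy labels from a "tourism" and a "sales" ontology; note the misspelling
# "Accomodation" still matches thanks to the fuzzy ratio.
tourism = ["Hotel", "Accommodation", "TouristAttraction"]
sales = ["hotel", "Accomodation", "Invoice"]
print(lexical_matches(tourism, sales))
```

In the full process, the structural and relational comparisons would then check whether the lexically matched concepts also agree in their hierarchy and non-taxonomic relations before the elements are merged.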
8

USING MOLECULAR SIMILARITY ANALYSIS FOR STRUCTURE-ACTIVITY RELATIONSHIP STUDIES

FAN, WEIGUO 27 November 2012
No description available.
9

Components of the Neural Valuation Network of Monetary Rewards

Kanayet, Frank Joseph 30 August 2012
No description available.
10

Hippocampal Representations of Targeted Memory Reactivation and Reactivated Temporal Sequences

Alm, Kylie H January 2017 (has links)
Why are some memories easy to retrieve, while others are more difficult to access? Here, we tested whether memory replay, a process whereby newly learned information is reinforced by reinstating the neuronal patterns of activation that were present during learning, could be biased towards particular memory traces. The goal of this biasing is to strengthen some memory traces, making them more easily retrieved. To test this, participants were scanned during interleaved periods of encoding and rest. Throughout the encoding runs, participants learned triplets of images that were paired with semantically related sound cues. During two of the three rest periods, novel, irrelevant sounds were played. During one critical rest period, however, the sound cues learned in the preceding encoding period were played in an effort to preferentially increase reactivation of the associated visual images, a manipulation known as targeted memory reactivation. Representational similarity analyses were used to compare multi-voxel patterns of hippocampal activation across encoding and rest periods. Our index of reactivation was selectively enhanced for memory traces that were targeted for preferential reactivation during offline rest, both compared to information that was not targeted and compared to a baseline rest period. Importantly, this neural effect of targeted reactivation was related to the difference in delayed order memory for cued versus uncued information, suggesting that preferential replay may be a mechanism by which specific memory traces can be selectively strengthened for enhanced subsequent memory retrieval. We also found partial evidence of discrimination of unique temporal sequences within the hippocampus. Over time, multi-voxel patterns associated with a given triplet sequence became more dissimilar to the patterns associated with the other sequences.
Furthermore, this neural marker of sequence preservation was correlated with the difference in delayed order memory for cued versus uncued triplets, signifying that the ability to reactivate particular temporal sequences within the hippocampus may be related to enhanced temporal order memory for the cued information. Taken together, these findings support the claim that awake replay can be biased towards preferential reactivation of particular memory traces and also suggest that this preferential reactivation, as well as representations of reactivated temporal sequences, can be detected within patterns of hippocampal activation. / Psychology
