1 |
Malicious DHTML Detection by Model-based Reasoning
Lin, Shih-Fen 21 August 2007 (has links)
Dynamic HTML (DHTML) is a mechanism for creating dynamic content in a web page, comprising HTML, client-side script, and other related technologies. Nowadays, because of the demand for dynamic web pages and the spread of web applications, attackers have gained a new, easily spread, and hard-to-detect intrusion vector: DHTML. Commercial anti-virus software, which commonly relies on pattern matching, remains weak against commonly obfuscated malicious DHTML.
Given this situation, we propose a new detection algorithm, Model-based Reasoning (MoBR), based on the notions of models and reasoning, that is resilient to common obfuscations used by attackers and can correctly determine whether a webpage is malicious. By describing textual and semantic signatures, we construct the model of a malicious DHTML page through a mechanism of templates. Experimental evaluation on actual DHTML demonstrates that our detection algorithm is tolerant to obfuscation and performs far better than commercial anti-virus software. Furthermore, it can detect variants of malicious DHTML with a low false positive rate.
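The template mechanism the abstract describes could be sketched as follows. This is a minimal illustration, not the thesis's actual system: the signature names, regular-expression patterns, and normalization steps here are all assumptions chosen to show how semantic signatures can survive simple obfuscations such as split string literals.

```python
import re

# Hypothetical semantic signatures: each template pairs a name with
# patterns that must all occur after obfuscation is undone.
TEMPLATES = {
    "shellcode-unescape": [r"unescape\s*\(", r"%u[0-9a-f]{4}"],
    "hidden-iframe": [r"<iframe[^>]*width\s*=\s*[\"']?0",
                      r"<iframe[^>]*height\s*=\s*[\"']?0"],
}

def normalize(dhtml: str) -> str:
    """Undo common obfuscations before matching: lowercase, rejoin
    split string literals ('une'+'scape'), collapse whitespace."""
    text = dhtml.lower()
    text = re.sub(r"[\"']\s*\+\s*[\"']", "", text)  # 'une'+'scape' -> 'unescape'
    return re.sub(r"\s+", " ", text)

def match_templates(dhtml: str) -> list:
    """Return names of templates whose every pattern occurs in the page."""
    text = normalize(dhtml)
    return [name for name, pats in TEMPLATES.items()
            if all(re.search(p, text) for p in pats)]
```

Because matching runs on the normalized text rather than raw bytes, trivial case changes and string concatenation tricks no longer defeat the signatures, which is the kind of obfuscation tolerance the abstract claims.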
|
2 |
Energy Efficient Computing in FPGA Through Embedded RAM Blocks
Ghosh, Anandaroop 16 August 2013 (has links)
No description available.
|
3 |
Semantic Role Labeling with Analogical Modeling
Casbeer, Warren C. 14 July 2008 (has links) (PDF)
Semantic role labeling has become a popular natural language processing task in recent years. A number of conferences have addressed this task for the English language, and many different approaches have been applied to it. In particular, some have used a memory-based learning approach. This thesis further develops the memory-based learning approach to semantic role labeling through the use of analogical modeling of language. Data for this task were taken from a previous conference (CoNLL-2005) so that a direct comparison could be made with other algorithms that attempted the task. It will be shown here that the current approach compares closely to other memory-based learning systems on the same task. Future work is also addressed.
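Memory-based learning of the kind the abstract builds on keeps labeled exemplars in memory and classifies a new instance by its most similar stored neighbors. The sketch below is a simplified nearest-neighbor stand-in, not analogical modeling proper (which forms supracontexts rather than a fixed-k vote); the feature tuples and role labels are illustrative assumptions.

```python
from collections import Counter

# Toy exemplar base: feature tuples (predicate, position relative to
# the verb, phrase type) with PropBank-style role labels.  Illustrative only.
EXEMPLARS = [
    (("give", "before", "NP"), "A0"),   # agent
    (("give", "after",  "NP"), "A1"),   # theme
    (("give", "after",  "PP"), "A2"),   # recipient
    (("send", "before", "NP"), "A0"),
    (("send", "after",  "NP"), "A1"),
]

def overlap(a, b):
    """Similarity = number of matching feature positions."""
    return sum(x == y for x, y in zip(a, b))

def label(features, k=3):
    """Memory-based labeling: vote among the k most similar exemplars."""
    ranked = sorted(EXEMPLARS, key=lambda ex: overlap(features, ex[0]),
                    reverse=True)
    votes = Counter(role for _, role in ranked[:k])
    return votes.most_common(1)[0][0]
```

The appeal of this family of methods for role labeling is that no abstraction step discards rare but informative exemplars; every training instance can influence a decision.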
|
4 |
Memory-efficient graph search applied to multiple sequence alignment
Zhou, Rong 06 August 2005 (has links)
Graph search is used in many areas of computer science. It is well-known that the scalability of graph-search algorithms such as A* is limited by their memory requirements. In this dissertation, I describe three complementary strategies for reducing the memory requirements of graph-search algorithms, especially for multiple sequence alignment (a central problem in computational molecular biology). These search strategies dramatically increase the range and difficulty of multiple sequence alignment problems that can be solved. The first strategy uses a divide-and-conquer method of solution reconstruction, and one of my contributions is to show that when divide-and-conquer solution reconstruction is used, a layer-by-layer strategy for multiple sequence alignment is more memory-efficient than a best-first strategy. The second strategy is a new approach to duplicate detection in external-memory graph search that involves partitioning the search graph based on an abstraction of the state space. For graphs with sufficient local structure, it allows graph-search algorithms to use external memory, such as disk storage, almost as efficiently as internal memory. The third strategy is a technique for reducing the memory requirements of sub-alignment search heuristics that are stored in lookup tables. It uses the start and goal states of a problem instance to restrict the region of the state space for which a table-based heuristic is needed, making it possible to store more accurate heuristic estimates in the same amount of memory. These three strategies dramatically improve the scalability of graph search not only for multiple sequence alignment, but for many other graph-search problems, and generalizations of these search strategies for other graph-search problems are discussed throughout the dissertation.
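The layer-by-layer idea can be seen in miniature on pairwise alignment, where the dynamic-programming lattice is searched one layer (row) at a time and only the previous layer is retained. This sketch computes only the optimal cost with unit gap and mismatch costs (an assumption for brevity); recovering the alignment itself requires the divide-and-conquer reconstruction the abstract describes, which is omitted here.

```python
def alignment_cost(a, b, gap=1, mismatch=1):
    """Optimal pairwise alignment cost, computed layer by layer.
    Only the previous row of the DP lattice is kept, so memory is
    O(min(len(a), len(b))) instead of O(len(a) * len(b))."""
    if len(b) > len(a):
        a, b = b, a                      # keep the shorter sequence as columns
    prev = [j * gap for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        curr = [i * gap]                 # cost of aligning a[:i] against gaps
        for j in range(1, len(b) + 1):
            sub = prev[j - 1] + (0 if a[i - 1] == b[j - 1] else mismatch)
            curr.append(min(sub, prev[j] + gap, curr[j - 1] + gap))
        prev = curr                      # discard the older layer entirely
    return prev[-1]
```

For k sequences the same layering applies to a k-dimensional lattice, which is where the memory savings become decisive.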
|
5 |
The effects of performance goals on the automaticity of cognitive skills
Wilkins, Nicolas Jon 06 July 2010 (has links)
No description available.
|
6 |
High-Performance Knowledge-Based Entity Extraction
Middleton, Anthony M. 01 January 2009 (has links)
Human language records most of the information and knowledge produced by organizations and individuals. The machine-based process of analyzing information in natural language form is called natural language processing (NLP). Information extraction (IE) is the process of analyzing machine-readable text and identifying and collecting information about specified types of entities, events, and relationships.
Named entity extraction is an area of IE concerned specifically with recognizing and classifying proper names for persons, organizations, and locations from natural language. Extant approaches to the design and implementation of named entity extraction systems include: (a) knowledge-engineering approaches which utilize domain experts to hand-craft NLP rules to recognize and classify named entities; (b) supervised machine-learning approaches in which a previously tagged corpus of named entities is used to train algorithms which incorporate statistical and probabilistic methods for NLP; or (c) hybrid approaches which incorporate aspects of both methods described in (a) and (b).
Performance for IE systems is evaluated using the metrics of precision and recall which measure the accuracy and completeness of the IE task. Previous research has shown that utilizing a large knowledge base of known entities has the potential to improve overall entity extraction precision and recall performance. Although existing methods typically incorporate dictionary-based features, these dictionaries have been limited in size and scope.
The problem addressed by this research was the design, implementation, and evaluation of a new high-performance knowledge-based hybrid processing approach and associated algorithms for named entity extraction, combining rule-based natural language parsing and memory-based machine learning classification facilitated by an extensive knowledge base of existing named entities. The hybrid approach implemented by this research resulted in improved precision and recall performance approaching human-level capability compared to existing methods measured using a standard test corpus. The system design incorporated a parallel processing system architecture with capabilities for managing a large knowledge base and providing high throughput potential for processing large collections of natural language text documents.
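The precision and recall metrics the abstract evaluates against can be made concrete with a small scoring function. This is a standard exact-match formulation, assumed here for illustration; the thesis's test corpus and matching criteria may differ.

```python
def precision_recall_f1(predicted, gold):
    """Entity-extraction scoring: predicted and gold are sets of
    (entity_text, entity_type) pairs; an exact pair match is correct.
    Precision = correctness of what was extracted; recall = completeness."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if tp else 0.0
    return precision, recall, f1
```

A large knowledge base of known entities tends to raise recall (fewer missed names) while the rule-based parse guards precision, which is why the hybrid combination matters.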
|
7 |
How are Three-Dimensional Objects Represented in the Brain?
Buelthoff, Heinrich H., Edelman, Shimon Y., Tarr, Michael J. 01 April 1994 (has links)
We discuss a variety of object recognition experiments in which human subjects were presented with realistically rendered images of computer-generated three-dimensional objects, with tight control over stimulus shape, surface properties, illumination, and viewpoint, as well as subjects' prior exposure to the stimulus objects. In all experiments recognition performance was: (1) consistently viewpoint dependent; (2) only partially aided by binocular stereo and other depth information; (3) specific to viewpoints that were familiar; (4) systematically disrupted by rotation in depth more than by deforming the two-dimensional images of the stimuli. These results are consistent with recently advanced computational theories of recognition based on view interpolation.
|
8 |
Scaling real-time event detection to massive streams
Wurzer, Dominik Stefan January 2017 (has links)
In today’s world the internet and social media are omnipresent and information is accessible to everyone. This shifted the advantage from those who have access to information to those who do so first. Identifying new events as they emerge is of substantial value to financial institutions who consider real-time information in their decision making processes, as well as for journalists that report about breaking news and governmental agencies that collect information and respond to emergencies. First Story Detection is the task of identifying those documents in a stream of documents that talk about new events first. This seemingly simple task is non-trivial as the computational effort increases with every processed document. Standard approaches to solve First Story Detection determine a document’s novelty by comparing it to previously seen documents. This results in the highest reported accuracy, but even the currently fastest system only scales to 10% of the Twitter stream. In this thesis, we propose a new algorithm family, called memory-based methods, able to scale to the full Twitter stream on a single core. Our memory-based method computes a document’s novelty up to two orders of magnitude faster than state-of-the-art systems without sacrificing accuracy. This thesis additionally provides original work on the impact of processing unbounded data streams on detection accuracy. Our experiments reveal for the first time that the novelty scores of state-of-the-art comparison-based and memory-based methods decay over time. We show how to counteract the discovered novelty decay and increase detection accuracy. Additionally, we show that memory-based methods are applicable beyond First Story Detection by building the first real-time rumour detection system on social media streams.
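One way to see why a memory-based method's cost stays flat as the stream grows is the following sketch: novelty is read from a fixed-size table of term hashes rather than from comparisons against every stored document. This is an illustrative stand-in for the abstract's idea, not the thesis's actual algorithm; the bucket count and scoring rule are assumptions.

```python
import zlib

NUM_BUCKETS = 2 ** 20          # fixed memory footprint, independent of stream length
seen = bytearray(NUM_BUCKETS)  # one flag per hashed-term bucket

def novelty(document: str) -> float:
    """Fraction of the document's distinct terms whose bucket is still
    empty; those buckets are then marked.  Cost per document is
    O(number of terms), no matter how many documents have already been
    processed -- unlike comparison-based detection, whose cost grows
    with every stored document."""
    terms = set(document.lower().split())
    if not terms:
        return 0.0
    unseen = 0
    for t in terms:
        bucket = zlib.crc32(t.encode()) % NUM_BUCKETS
        if not seen[bucket]:
            unseen += 1
            seen[bucket] = 1
    return unseen / len(terms)
```

A first story about an unreported event scores high because its vocabulary has not marked the table yet; repeats of the same story score near zero.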
|
9 |
Consider the forest or the trees? The effects of mindset abstraction on memory-based consideration set formation
Lu, Fang-Chi 01 May 2013 (has links)
Consideration set formation has been suggested as an important decision-making stage prior to choice. The current research focuses on consideration sets in the memory-based choice context and addresses gaps in the existing literature by investigating the effects of mindset abstraction on memory retrieval and on the number of considered choice alternatives retrieved from memory. I propose that individuals in a concrete (vs. abstract) mindset think of more contextual and specific details (vs. fewer abstract essences) about a given decision situation; therefore concrete and fine-grained mental representations, compared to abstract and coarse ones, will activate more associated cues in memory and lead to larger memory-based consideration sets. Through a word association task, studies 1a and 1b show that concrete mindsets lead to more proliferative associations and a greater number of conceptual cues than abstract mindsets. In the domain of product consideration (i.e., snack and dinner), studies 2a and 2b directly demonstrate that individuals in concrete mindsets form a larger memory-based consideration set than those in abstract mindsets. I further propose the Hypothesis of Top-down versus Bottom-up Approach of Memory Retrieval to explain the mechanism that underlies the mindset abstraction effect on the size of memory-based consideration sets. Studies 3 and 4, using an episodic memory paradigm, support this hypothesis and reveal that the type of retrieval cues (superordinate vs. subordinate cues) used by individuals in an abstract versus a concrete mindset determines the likelihood that a brand is considered, and that the richer associations located at the subordinate level contribute to a greater number of choice alternatives that people consider in a concrete mindset. The theoretical contributions, practical implications, and future research directions of this research are finally discussed.
|
10 |
Latency Bounds for Memory-Based FFTs with Applications in OFDM Communication
Tan, Xiangbin, Negash, Tadesse Hadush January 2023 (has links)
Future communication systems require low latency Fast Fourier transform (FFT) computation with a small cost of area. In this study, a memory-based FFT processor with low latency is designed. To reduce latency and maintain a constant output sample rate, a scheduling method suited to the input sample rate and clock rate is used in the radix-2 butterfly processing elements. The scheduling scheme employs a combination of ASAP and ALAP scheduling strategies. A mathematical expression that models the FFT's latency is given. The size of the FFT, the input sample rate, and the number of processing elements are the input parameters of the expression. The effect of using pipelined processing elements is also studied. Lastly, the proposed low-latency design is compared with other low-latency FFT designs. The results show that, in the 4G LTE application scenario, our memory-based design can perform the FFT computations faster with a small area.
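A latency expression of the kind the abstract mentions can be sketched from first principles for a radix-2 memory-based architecture: log2(N) stages of N/2 butterflies shared among the processing elements, plus input collection at the sample rate. This is a simplified back-of-the-envelope model under those assumptions, not the thesis's actual expression or scheduling scheme.

```python
from math import ceil, log2

def fft_latency_cycles(n, num_pe, pipeline_depth=1):
    """Rough cycle count for a memory-based radix-2 FFT of size n:
    log2(n) stages of n/2 butterflies, shared among num_pe processing
    elements, plus pipeline fill of the butterfly datapath."""
    stages = int(log2(n))
    cycles_per_stage = ceil((n // 2) / num_pe)
    return stages * cycles_per_stage + pipeline_depth

def fft_latency_seconds(n, num_pe, clock_hz, sample_rate_hz, pipeline_depth=1):
    """Latency from the first input sample to the last output: the
    transform cannot finish before all n samples have arrived, so we take
    the simple bound of collection time plus computation time (a real
    schedule overlaps the two, which is what ASAP/ALAP scheduling exploits)."""
    collect = n / sample_rate_hz
    compute = fft_latency_cycles(n, num_pe, pipeline_depth) / clock_hz
    return collect + compute
```

The model makes the trade-off visible: doubling the number of processing elements roughly halves the compute term but leaves the sample-collection term untouched, so beyond some point more area buys no latency.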
|