161
DECOMPOSITION OF FUZZY SWITCHING FUNCTIONS
Functional decomposition is the process of expressing a function of n variables as a composition of a number of functions, each depending on fewer than n variables. If the function to be decomposed is a Boolean function, the methods proposed by Ashenhurst and Curtis can be used. However, if the function is a fuzzy-valued function, a different method is necessary to decompose it. / In this paper, after the basic definitions of fuzzy algebra are given, a brief review of the work in fuzzy switching logic is presented. The shortcomings of Boolean methods and of a previous method by Kandel for the decomposition of a fuzzy switching function are discussed. A new theorem for determining whether a function possesses a fuzzy simple disjunctive decomposition is developed, as well as a method for decomposing such a function. / Source: Dissertation Abstracts International, Volume: 42-03, Section: B, page: 1083. / Thesis (Ph.D.)--The Florida State University, 1981.
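To make the notion concrete, the following is a minimal sketch (not taken from the dissertation) of a simple disjunctive decomposition of a fuzzy switching function, with AND and OR interpreted as min and max over [0, 1]; the particular function and the grid of test values are illustrative assumptions.

    import itertools

    def f(x1, x2, x3):
        # Original fuzzy switching function: (x1 AND x3) OR (x2 AND x3).
        return max(min(x1, x3), min(x2, x3))

    def g(x1, x2):
        # Inner function over the bound set {x1, x2}.
        return max(x1, x2)

    def F(y, x3):
        # Outer function combining g's output with the free variable x3.
        return min(y, x3)

    # The three-variable function factors through a function of two variables:
    # f(x1, x2, x3) = F(g(x1, x2), x3), checked here on a grid of fuzzy values.
    grid = [i / 4 for i in range(5)]
    assert all(abs(f(a, b, c) - F(g(a, b), c)) < 1e-12
               for a, b, c in itertools.product(grid, repeat=3))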
162
Methods of Detecting Intrusions in Security Protocols
With the explosion of computer systems and computer networks over the past decade, e-commerce, online banking, and other Internet-oriented applications have grown exponentially. According to Forrester Research Group, online shopping in the US grew 580% from 1998 to 2000, accounting for more than $45 billion in sales [10]. Online Banking Report states that there are over 100 million people participating in online banking worldwide, an increase of 80% since 1984. This number is expected to rise to 300 million households by 2012 [3]. These applications rely on secure communications for passing information such as credit card numbers and bank account details. The secure communication is realized through the use of cryptography and security protocols for key exchange, authentication, and so on. These protocols can be attacked, possibly resulting in vital information being compromised. This paper discusses classic intrusion detection methodologies and how they are being applied to security protocols. Three methods are presented for detecting and/or preventing intrusions in security protocols. The first is a simple method aimed at detecting intrusions by attackers with rudimentary skills. The second, a modified version of the original model, provides a more formidable defense against sophisticated attackers. Lastly, this paper discusses the third method, IPSec, and how it provides the strongest security for detecting intrusions in security protocols. Each method is tested with known attacks and the results are discussed. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Fall Semester, 2004. / Date of Defense: July 20, 2004. / Intrusion detection, security protocols, IPSec, AH, ESP, tunnel mode, transport mode, IKE / Includes bibliographical references. / Mike Burmester, Professor Directing Thesis; Alec Yasinsac, Committee Member; Lois Hawkes, Committee Member.
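As a rough illustration of the kind of per-message check such methods rely on (a hedged sketch, not the thesis's detectors or the IPSec implementation), the snippet below flags a protocol message as suspicious when its integrity tag fails to verify or its nonce has been seen before; the key and message fields are invented for the example.

    import hmac, hashlib

    shared_key = b"example-session-key"   # assumed pre-established key
    seen_nonces = set()                   # state for replay detection

    def check_message(payload: bytes, nonce: bytes, tag: bytes) -> str:
        expected = hmac.new(shared_key, nonce + payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return "ALERT: integrity check failed (message modified or forged)"
        if nonce in seen_nonces:
            return "ALERT: replayed message detected"
        seen_nonces.add(nonce)
        return "ok"

    nonce, payload = b"\x00\x01", b"transfer request"
    tag = hmac.new(shared_key, nonce + payload, hashlib.sha256).digest()
    print(check_message(payload, nonce, tag))   # ok
    print(check_message(payload, nonce, tag))   # replay alert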
163
Multischema: Dynamic view management for object database systems
This dissertation reports on an investigation of dynamic views for object database systems in support of multiple applications. It develops object views as a way to support multiple (possibly conflicting) applications on shared data, and demonstrates that dynamic views can improve the data abstractions available to specific applications, enhance the security of object database systems, and provide a way to adapt views dynamically. / In this dissertation, we present a comprehensive model of object views for object database systems. The model is compatible with the ODMG-93 standard. In this model, object views are characterized by view object types and view relationship types within view schemas. Object views can be derived in many different ways from the descriptions of object databases. / An object view management system and some tools are also presented in this dissertation. The management system serves as the kernel of the view system based on the object view model, and provides an application programming interface. The tools built on top of the kernel include an object view definition language and an object view manipulation language. / The investigation is carried out along two aspects: the intension of object views and the extension of object views. The intension aspect is concerned with view types, view characteristics, and view relationship types. The extension aspect is concerned with view objects and their common operations. / Source: Dissertation Abstracts International, Volume: 57-04, Section: B, page: 2667. / Major Professor: Gregory A. Riccardi. / Thesis (Ph.D.)--The Florida State University, 1996.
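The core idea can be sketched as follows (a hedged illustration in plain Python, not the dissertation's view definition language or ODMG bindings): two applications see different view types derived from the same stored object type, each exposing only the attributes that application is allowed to use.

    class Employee:                       # stored (base) object type
        def __init__(self, name, salary, ssn):
            self.name, self.salary, self.ssn = name, salary, ssn

    class PayrollView:                    # view type for a payroll application
        def __init__(self, emp):
            self._emp = emp
        @property
        def name(self): return self._emp.name
        @property
        def salary(self): return self._emp.salary

    class DirectoryView:                  # view type for a public directory
        def __init__(self, emp):
            self._emp = emp
        @property
        def name(self): return self._emp.name   # salary and ssn stay hidden

    e = Employee("Ada", 120000, "000-00-0000")
    print(PayrollView(e).salary, DirectoryView(e).name)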
164
A fault-tolerant multiple bus interconnection network
Fault tolerance and interconnection networks are two areas of computer science that have captured the attention of researchers for several decades. While early computer systems were designed mainly to avoid faults, today's technology has made it both possible and practical for a system to continue to function efficiently even in the presence of faults. Thus the current emphasis is on how many faults can be tolerated rather than on whether a single fault in the system can be tolerated at all. / A typical computing environment today consists of several computers working together to solve a given problem, rather than a single computer working in isolation. These cooperating computers require a fast, flexible, efficient, and reasonably priced communication system. This research proposes such a system. / The multiple bus interconnection network described herein uses projective geometry as its basis. Even though not every processor is connected to every bus, the network has a diameter of 1, which means that it allows the fastest possible communication between processors in a fault-free environment. Other attractive features of this design include a small number of connections per node, several routing paths between nodes, a high degree of uniformity and balance, and a high degree of fault tolerance. In addition, it permits distributed routing and fault-diagnosis algorithms. / Source: Dissertation Abstracts International, Volume: 56-02, Section: B, page: 0931. / Major Professor: Lois W. Hawkes. / Thesis (Ph.D.)--The Florida State University, 1994.
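A small example of how a projective plane yields such a network (a hedged sketch using the order-2 plane; the dissertation's actual parameters may differ): with processors as points and buses as lines of the Fano plane, every pair of processors shares a bus, yet each processor attaches to only three of the seven buses.

    from itertools import combinations

    # Lines of the Fano plane, read as buses listing the processors attached to them.
    buses = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6},
             {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

    # Diameter 1: every pair of processors lies on some common bus.
    assert all(any({p, q} <= bus for bus in buses)
               for p, q in combinations(range(7), 2))
    # Balance: each processor is connected to exactly 3 buses.
    assert all(sum(p in bus for bus in buses) == 3 for p in range(7))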
165
Hierarchical analysis of visual motion
The problem addressed in this dissertation is the hierarchical analysis of visual motion. Visual motion analysis is a fundamental task of any low-level visual system. Visual perception and motion are linked in a single model that satisfies Daugman's properties for visual processing. Following the neurophysiological hypothesis of a hexagonal tessellation of the visual plane, a heptarchy was built that satisfies this hypothesis. A model of the receptive fields at the level of the striate cortex, and more specifically at the level of the simple cells, was realized. A pyramidal implementation of the Walsh transform was performed. A Fourier analysis of the SC (the synaptic links) is implemented, and agreement between the results obtained and the neurophysiological data on receptive fields is confirmed. The set of operators obtained was applied to different kinds of moving geometric objects, with excellent results. This first model corresponds to the component-direction-selective (C.D.S.) type of neurons. A second stage of disambiguation is necessary at the level of MT to solve the ambiguity problem. This stage corresponds to the pattern-direction-selective (P.D.S.) type of neurons, which take the component-direction-selective neurons as inputs. / A hierarchical cross-correlator on a hexagonal grid has been realized and applied to the same moving objects mentioned above. The results of this hierarchical correlator are compared with those of a serial correlator, and the improvement in computation time is shown. This motion analyzer gives the direction of motion of the whole object. It is a viable and simpler alternative to the first model, which makes the task needlessly difficult. If the first stage of motion analysis is non-oriented, this model has no problem computing the pattern's motion. / The results of the two models are compared, and the linkage between the two stages of the first model at the neuronal level remains an important question for neurophysiologists. / Source: Dissertation Abstracts International, Volume: 49-08, Section: B, page: 3299. / Major Professor: Abraham Kandel. / Thesis (Ph.D.)--The Florida State University, 1988.
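As a rough illustration of the hierarchical cross-correlation idea (a hedged sketch on a square grid with synthetic frames; the dissertation works on a hexagonal tessellation, which is not reproduced here), the snippet estimates a displacement coarsely on downsampled frames and then refines it at full resolution, keeping the full-resolution search small.

    import numpy as np

    def best_shift(f0, f1, candidates):
        # Pick the displacement (dy, dx) maximizing the correlation of f0 with
        # f1 shifted back by that displacement.
        scores = {d: float(np.sum(f0 * np.roll(f1, (-d[0], -d[1]), axis=(0, 1))))
                  for d in candidates}
        return max(scores, key=scores.get)

    rng = np.random.default_rng(0)
    f0 = rng.random((64, 64))
    f1 = np.roll(f0, (6, -4), axis=(0, 1))        # pattern moved by (6, -4)

    # Coarse level: 2x-downsampled frames, wide search window.
    coarse = best_shift(f0[::2, ::2], f1[::2, ::2],
                        [(dy, dx) for dy in range(-8, 9) for dx in range(-8, 9)])
    # Fine level: refine around the doubled coarse estimate with a +/-1 search.
    fine = best_shift(f0, f1, [(2 * coarse[0] + dy, 2 * coarse[1] + dx)
                               for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    print("estimated motion:", fine)              # expected (6, -4)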
166
Neptune: The application of coarse-grain data flow methods to scientific parallel programming
This dissertation investigates a data flow programming style for developing efficient and machine-independent scientific programs on multiprocessor computers. I have designed and implemented a programming system called Neptune. Both functional and data parallelism are incorporated into Neptune. A coarse-grain data flow model is used to specify functional parallelism explicitly. Data parallelism is represented within the data flow model by an activity decomposition model, which ensures efficient execution of data-parallel computation. Four scientific applications have been implemented in this data flow style to evaluate execution performance. / The machine independence of the data flow model is demonstrated by obtaining speedup on three different parallel architectures (a Sun workstation network, a Sequent Balance 12000, and a Cray Y-MP/464). The dissertation provides a detailed description of the implementation of the runtime environment on three main classes of parallel architectures (network, shared memory, and distributed memory). Each implementation takes advantage of its architecture and provides a suitable data mapping strategy for minimizing overhead and optimizing resource utilization. / Developing parallel programs presents a major difficulty: in addition to the sequential programming problems, the programmer has to design and debug the parallel constructs that express concurrency. The dissertation describes the Neptune programming system, which is specially designed to support a data flow methodology. Neptune provides an effective visual environment for designing and debugging data flow programs. Applications developed and debugged on a network of workstations can be scaled up and run on multiprocessor computers without requiring any software modifications. / Source: Dissertation Abstracts International, Volume: 51-12, Section: B, page: 5982. / Major Professor: Gregory A. Riccardi. / Thesis (Ph.D.)--The Florida State University, 1990.
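To illustrate the coarse-grain data flow idea in miniature (a hedged sketch, not Neptune's actual API or runtime): each node is a coarse task that fires once all of its inputs are available, so independent nodes can execute concurrently.

    from concurrent.futures import ThreadPoolExecutor

    graph = {                         # node -> (task function, input nodes)
        "a": (lambda: 2.0, []),
        "b": (lambda: 3.0, []),
        "sq_a": (lambda a: a * a, ["a"]),
        "sq_b": (lambda b: b * b, ["b"]),
        "sum": (lambda x, y: x + y, ["sq_a", "sq_b"]),
    }

    def run(graph):
        results, pending = {}, dict(graph)
        with ThreadPoolExecutor() as pool:
            while pending:
                # Nodes whose inputs are all computed fire in parallel.
                ready = [n for n, (_, deps) in pending.items()
                         if all(d in results for d in deps)]
                futures = {n: pool.submit(pending[n][0],
                                          *[results[d] for d in pending[n][1]])
                           for n in ready}
                for n, fut in futures.items():
                    results[n] = fut.result()
                    del pending[n]
        return results

    print(run(graph)["sum"])          # 2*2 + 3*3 = 13.0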
167
Improving Monte Carlo Linear Solvers Through Better Iterative Processes
Monte Carlo (MC) linear solvers are fundamentally based on the ability to estimate a matrix-vector product using a random sampling process. They use the fact that the deterministic stationary iterative processes used to solve linear systems can be written as sums of series of matrix-vector products. Replacing the deterministic matrix-vector products with MC estimates yields an MC linear solver. While MC linear solvers have a long history, they did not gain widespread acceptance in the numerical linear algebra community, for the following reasons: (i) their slow convergence, and (ii) the limited class of problems for which they converge. Slow convergence is caused both by the MC process for estimating the matrix-vector product and by the stationary iterative process underlying the MC technique, while the limited applicability is caused primarily by the stationary iterative process. The MC linear algebra community has made significant advances in reducing the errors from slow convergence through better techniques for estimating the matrix-vector product, and also through a variety of variance reduction techniques. However, use of MC linear algebra is still limited, since the techniques use only stationary iterative processes resulting from a diagonal splitting (for example, Jacobi), which have poor convergence properties. The reason for using such splittings is the belief that efficient MC implementations of more sophisticated splittings are not feasible. Consequently, little effort has been devoted by the MC community to addressing this important issue. In this thesis, we address the issue of improving the iterative process underlying MC linear solvers. In particular, we demonstrate that the reasons for considering only diagonal splittings are not valid, and show a specific non-diagonal splitting for which an efficient MC implementation is feasible, even though it superficially suffers from the drawbacks for which non-diagonal splittings were dismissed by the MC linear algebra community. We also show that conventional techniques for improving deterministic iterative processes, such as the Chebyshev method, show promise in improving MC techniques too. Despite such improvements, we do not expect MC techniques to be competitive with modern deterministic techniques for accurately solving linear systems. However, MC techniques have the advantage that they can obtain approximate solutions fast. For example, an estimate of the solution can be obtained in constant time, independent of the size of the matrix, if we permit a small amount of preprocessing. There are other advantages too, such as the ability to estimate specific components of a solution, and latency tolerance and fault tolerance in parallel and distributed environments. There are a variety of applications where fast, approximate solutions are useful, such as preconditioning, graph partitioning, and information retrieval. Thus MC linear algebra techniques are relevant to important classes of applications. We demonstrate this by showing their benefits in an application to dynamic load balancing of parallel computations. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Summer Semester, 2004. / Date of Defense: May 12, 2004. / Linear solvers, Monte Carlo, Chebyshev, Jacobi / Includes bibliographical references. / Ashok Srinivasan, Professor Directing Thesis; Michael Mascagni, Committee Member; Robert van Engelen, Committee Member.
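The basic mechanism being improved can be sketched as follows (a hedged illustration, not the thesis's improved solvers): the stationary iteration x <- Hx + c for the system x = Hx + c is run with each matrix-vector product replaced by an unbiased Monte Carlo estimate built from randomly sampled columns of H; the matrix, sample count, and iteration count are arbitrary choices for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    H = rng.random((n, n))
    H *= 0.5 / np.abs(H).sum(axis=1).max()        # scale so the iteration converges
    c = rng.random(n)
    x_exact = np.linalg.solve(np.eye(n) - H, c)

    def mc_matvec(H, v, samples=4000):
        # Unbiased estimate of H @ v: sample a column index j uniformly and
        # average n * H[:, j] * v[j] over the samples.
        cols = rng.integers(0, n, size=samples)
        return n * (H[:, cols] * v[cols]).mean(axis=1)

    x = np.zeros(n)
    for _ in range(30):
        x = mc_matvec(H, x) + c                   # Monte Carlo version of x <- H x + c

    print("relative error:", np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact))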
168
Enhancing Pattern Classification with Relational Fuzzy Neural Networks and Square BK-Products
This research presents several important developments in pattern classification using fuzzy neural networks and BK-square products, and extends max-min fuzzy neural network research. In this research, the min and max operations used in the fuzzy relational compositions are replaced by more general t-norms and co-norms. In addition, instead of the Łukasiewicz equivalence connective used in the network of Reyes-Garcia and Bandler, this research introduces a variety of equivalence connectives. A new software tool was developed specifically for this research, allowing for greater experimental flexibility as well as some interesting options that better exploit the merits of the relational BK-square network. The effectiveness of this classifier is explored in the domains of phoneme recognition, taxonomic classification, and diabetes diagnosis. This research finds that varying the fuzzy operations in the equivalence and implication formulae, in complete divergence from classical composition, produces drastically different performance within this classifier. Techniques are presented that select effective combinations of fuzzy operations. In addition, this classifier is shown to be effective at feature selection by using a technique that would usually be impractical with standard neural networks, but is made practical by the unique nature of this classifier. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Degree Awarded: Summer Semester, 2006. / Date of Defense: May 10, 2006. / Pattern Classification, Fuzzy Set, Fuzzy Logic, Neural Network / Includes bibliographical references. / Ladislav J. Kohout, Professor Directing Dissertation; Anke Meyer-Bäse, Outside Committee Member; Robert van Engelen, Committee Member; R. Christopher Lacher, Committee Member; Ernest L. McDuffie, Committee Member.
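A minimal sketch of the square BK-product used as a similarity measure (a simplified illustration with invented data, not the dissertation's network): the product takes the infimum, over the middle index, of an equivalence degree between the two relations, here the Łukasiewicz equivalence a <-> b = 1 - |a - b|, for which other connectives can be substituted.

    import numpy as np

    def square_product(R, S, equiv=lambda a, b: 1.0 - np.abs(a - b)):
        # (R [] S)(i, k) = inf over j of equiv(R(i, j), S(j, k)).
        m, p = R.shape[0], S.shape[1]
        out = np.empty((m, p))
        for i in range(m):
            for k in range(p):
                out[i, k] = equiv(R[i, :], S[:, k]).min()
        return out

    samples = np.array([[0.9, 0.1, 0.4],       # fuzzy relation: samples x features
                        [0.2, 0.8, 0.5]])
    classes = np.array([[0.9, 0.2],            # fuzzy relation: features x classes
                        [0.1, 0.9],
                        [0.4, 0.5]])
    print(square_product(samples, classes))    # higher value = closer match to class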
169
Instruction Caching in Multithreading Processors Using Guarantees
The OpenSPARC T1 is a multithreading processor developed and open-sourced by Sun Microsystems (now Oracle). This paper presents an implementation of the low-power Tagless-Hit Instruction Cache (TH-IC) for the T1, after adapting it to the multithreading architecture found in that processor. The TH-IC eliminates the need for many instruction cache and ITLB accesses by guaranteeing that accesses within a much smaller L0-style cache will hit. The OpenSPARC T1 uses a 16 KB, 4-way set-associative instruction cache and a 64-entry fully associative ITLB. The addition of the TH-IC eliminates approximately 75% of accesses to these structures, instead serving those fetches directly from a much smaller 128-byte data array. Adding the TH-IC to the T1 also demonstrates that even already power-efficient processors can be made more efficient using this technique. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Fall Semester, 2010. / Date of Defense: October 29, 2010. / Simultaneous Multithreading, Low Power / Includes bibliographical references. / Gary Tyson, Professor Directing Thesis; David Whalley, Committee Member; Xin Yuan, Committee Member.
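A rough sketch of the guarantee principle (heavily simplified; the real TH-IC also tracks next-line and branch-target bits, and the line size here is an assumption): a fetch may skip the L1 instruction cache and ITLB only when it is guaranteed to hit in the tiny cache, for instance a sequential fetch that stays within the same small line the previous fetch just touched.

    LINE_BYTES = 16      # assumed TH-IC line size
    INSN_BYTES = 4

    def guaranteed_fraction(fetch_addresses):
        guaranteed, last = 0, None
        for addr in fetch_addresses:
            sequential = last is not None and addr == last + INSN_BYTES
            same_line = last is not None and addr // LINE_BYTES == last // LINE_BYTES
            if sequential and same_line:
                guaranteed += 1      # guaranteed hit: no I-cache or ITLB access
            last = addr
        return guaranteed / len(fetch_addresses)

    # A loop body of 12 sequential instructions executed 100 times.
    trace = [pc for _ in range(100)
             for pc in range(0x1000, 0x1000 + 12 * INSN_BYTES, INSN_BYTES)]
    print(f"{guaranteed_fraction(trace):.0%} of fetches bypass the I-cache and ITLB")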
170
Formal Security Evaluation of Ad Hoc Routing Protocols
Research into routing protocol development for mobile ad hoc networks has been a significant undertaking since the late 1990s. Secure routing protocols for mobile ad hoc networks provide the necessary functionality for proper network operation. If the underlying routing protocol cannot be trusted to follow the protocol operations, additional trust layers cannot be obtained. For instance, authentication between nodes is meaningless without a trusted underlying route. Security analysis procedures to formally evaluate these developing protocols have lagged significantly, resulting in unstructured security analysis approaches and numerous secure ad hoc routing protocols that can easily be broken. Evaluation techniques for analyzing security properties in ad hoc routing protocols generally rely on manual, non-exhaustive approaches. Non-exhaustive analysis techniques may conclude that a protocol is secure, while in reality the protocol contains unapparent or subtle flaws. Using formalized, exhaustive evaluation techniques to analyze security properties increases confidence in a protocol. Intertwined with the security evaluation process is the threat model chosen to frame the analysis. Threat models drive analysis capabilities, affecting how we evaluate trust. Current attacker threat models limit the results obtained during security analysis of ad hoc routing protocols. Developing a proper threat model to evaluate security properties in mobile ad hoc routing protocols presents a significant challenge. If the attacker is too weak, we miss vital security flaws; if the attacker is too strong, we cannot identify the minimum attacker capabilities needed to break the routing protocol. To solve these problems, we contribute to the field in the following ways. Adaptive threat modeling: we develop an adaptive threat model to evaluate route discovery attacks against ad hoc routing protocols. Adaptive threat modeling enables us to evaluate trust in the ad hoc routing process and allows us to identify the minimum requirements an attacker needs to break a given routing protocol. Automated security evaluation: we develop an automated evaluation process to analyze security properties in the route discovery phase for on-demand source routing protocols. Using the automated security evaluation process, we are able to produce and analyze all topologies for a given network size. The individual network topologies are fed into the SPIN model checker to exhaustively evaluate protocol models against an attacker attempting to corrupt the route discovery process. Our contributions provide the first automated exhaustive analysis approach for evaluating ad hoc on-demand source routing protocols. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Degree Awarded: Fall Semester, 2007. / Date of Defense: November 13, 2007. / Model Checking, Security Analysis, Formal Methods, Secure Routing Protocols / Includes bibliographical references. / Alec Yasinsac, Professor Directing Dissertation; Michelle Kazmer, Outside Committee Member; Sudhir Aggarwal, Committee Member; Breno de Medeiros, Committee Member; Gary Tyson, Committee Member.
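As a rough sketch of the exhaustive topology-generation step (the actual analysis feeds each topology into a SPIN/Promela model of the routing protocol together with an attacker node, which is not reproduced here), the snippet below enumerates every undirected topology on a small number of labeled nodes and hands each one to a placeholder checker.

    from itertools import combinations

    def all_topologies(n):
        # Every undirected topology on n labeled nodes: one subset of possible edges each.
        edges = list(combinations(range(n), 2))
        for mask in range(2 ** len(edges)):
            yield [e for i, e in enumerate(edges) if mask >> i & 1]

    def check_with_model_checker(n, topology):
        # Placeholder for emitting a Promela model of this topology plus an
        # attacker and running the SPIN model checker on it.
        return "no attack found"

    n = 4
    topologies = list(all_topologies(n))
    print(len(topologies), "topologies for", n, "nodes")    # 2**6 = 64
    for topology in topologies[:3]:
        print(topology, "->", check_with_model_checker(n, topology))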