141

An investigation of guarded assignments as a coordination language for large grain parallel programming

Unknown Date (has links)
In this dissertation we propose guarded assignments (GA) as a model for coordination languages for parallel processing. GA is a combination of guarded commands, proposed by Dijkstra, and Unity, proposed by Chandy and Misra. Gerth and Pnueli have shown that Unity could have been derived as a specialization of Manna and Pnueli's temporal logic proof methodology together with Gerth's transition logic. We show a stronger equivalence between Gerth and Pnueli's theory and Unity than had been asserted. / The semantics of GA are specified using an operational model similar to Unity. GA can be used as a coordination language for parallel processing by using computational language subroutines to implement assignments. The addition of guards makes GA more suited to implementation of programs than Unity, which is intended only as a design methodology. But we show that GA is so closely rooted in Unity that the proof theory of Unity, and indirectly Manna and Pnueli's temporal logic, can be applied. / As an example of the use of GA we specify efficient parallel execution strategies for Unity and large grain GA programs. We do this by specifying translations from Unity or GA to a GA program which correctly implements the properties of the source program. The resulting GA program expresses the potential for parallel execution which existed in the source program and limits the unproductive execution of assignments in Unity and evaluation of guards in GA. / Determinism is defined for parallel programs and conditions ensuring determinism are given and proved to be sufficient. Programs which are deterministic do not depend upon fairness for any initial state or input which can lead to a final or fixed point state. / We show that large grain dataflow can be represented in GA, making it possible to define new types of nonstandard nodes without resorting to ad hoc semantics. Extensions to the dataflow model based on GA are proposed. / Source: Dissertation Abstracts International, Volume: 53-11, Section: B, page: 5825. / Major Professor: Gregory Riccardi. / Thesis (Ph.D.)--The Florida State University, 1992.
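To make the execution model concrete, here is a minimal sketch of Unity-style guarded-assignment execution, assuming the usual semantics of repeatedly executing enabled assignments until a fixed point is reached; the class and method names are illustrative, not taken from the dissertation.

```java
import java.util.List;
import java.util.function.BooleanSupplier;

public class GuardedAssignmentProgram {
    // A guarded assignment: `body` runs only when `guard` holds;
    // `body` returns true if it changed the program state.
    record GuardedAssignment(BooleanSupplier guard, BooleanSupplier body) {}

    private final List<GuardedAssignment> assignments;

    GuardedAssignmentProgram(List<GuardedAssignment> assignments) {
        this.assignments = assignments;
    }

    // Execute enabled assignments until a full pass changes nothing
    // (a fixed point). A fair scheduler would interleave selections;
    // this sketch simply sweeps in order.
    void runToFixedPoint() {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (GuardedAssignment ga : assignments) {
                if (ga.guard().getAsBoolean()) {
                    changed |= ga.body().getAsBoolean();
                }
            }
        }
    }
}
```

In a parallel implementation, disjoint sets of enabled assignments would be executed concurrently by computational-language subroutines, which is the large-grain coordination opportunity the abstract describes.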
142

Use of fuzzy relational information retrieval technique for generating control strategies in resolution-based automated reasoning

Unknown Date (has links)
Current resolution-based automated theorem provers, such as ITP and OTTER, adopt strategies to avoid many fruitless paths through their judicious and "informed" application. Without a suitable strategy guiding the inference, too many often-irrelevant clauses may be derived, and those clauses can easily lead the program into a blind alley. Strategies are therefore a must in any serious use of automated reasoning. The weighting strategy, along with the set-of-support strategy, is one of the strategies necessary to produce an answer in acceptably short time and space in resolution-based automated reasoning. But the weighting strategy still relies on the user's knowledge of, or intuition about, the problem to be solved. / This dissertation suggests a method for controlling the inferential strategies of resolution-based architectures, and then applies the method to several domains in automated reasoning to examine the effect of the new scheme. The results of using various fuzzy implication operators with the new weighting mechanism were also compared, to guide the choice of a fuzzy implication operator in a specific domain of automated reasoning. The method for speeding up logical inference is tested in conjunction with both the ITP and OTTER theorem provers. / The new weighting mechanism helps the user of a resolution-based mechanical theorem prover decide the weighting pattern and the weights automatically from the given input problem, reducing deduction time and space. The new mechanism employs triangle fuzzy relational products and a fast fuzzy relational algorithm. / Source: Dissertation Abstracts International, Volume: 53-07, Section: B, page: 3598. / Major Professor: Ladislav J. Kohout. / Thesis (Ph.D.)--The Florida State University, 1992.
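As a rough illustration of fuzzy-weighted clause selection, the sketch below scales a clause's weight by a fuzzy relevance score computed with the Łukasiewicz implication, one common fuzzy implication operator; the dissertation's actual triangle fuzzy relational products and fast fuzzy relational algorithm are not reproduced here, and all names and numbers are hypothetical.

```java
public class FuzzyWeighting {
    // Lukasiewicz fuzzy implication, one of the operators such studies compare.
    static double lukasiewiczImplication(double a, double b) {
        return Math.min(1.0, 1.0 - a + b);
    }

    // Hypothetical weighting: scale a clause's symbol count by its fuzzy
    // relevance, so more relevant clauses get lower weights and are picked first.
    static double weight(int symbolCount, double relevance) {
        return symbolCount * (1.0 - relevance);
    }

    public static void main(String[] args) {
        double relevance = lukasiewiczImplication(0.8, 0.6); // min(1, 0.8) = 0.8
        System.out.printf("weight = %.2f%n", weight(12, relevance)); // 2.40
    }
}
```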
143

A Mechanism for Tracking the Effects of Requirement Changes in Enterprise Software Systems

Unknown Date (has links)
Managing the effects of changing requirements remains one of the greatest challenges of enterprise software development. The iterative and incremental model provides an expedient framework for addressing such concerns. This thesis proposes a set of metrics (Mutation Index, Component Set, Dependency Index) and a methodology to measure the effects of requirement changes from one iteration to another. To evaluate the effectiveness of the proposed metrics, sample calculations and results from a real-life case study are included. Future directions of our work based on this mechanism are also discussed. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Summer Semester, 2006. / Date of Defense: June 30, 2006. / Requirements, Analysis, Measurement, Management, Algorithms, Software Engineering, Computer Science, Design, Iterative and Incremental Model, Metrics / Includes bibliographical references. / Robert van Engelen, Professor Directing Thesis; Lois Hawkes, Committee Member; Alec Yasinsac, Committee Member.
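Since the abstract names the metrics without defining them, the following is a purely hypothetical sketch of one plausible iteration-to-iteration measure in their spirit; the thesis's actual definitions of Mutation Index, Component Set, and Dependency Index may differ substantially.

```java
import java.util.HashSet;
import java.util.Set;

public class ChangeMetrics {
    // Hypothetical "mutation index": fraction of requirements added,
    // removed, or modified between two iterations.
    static double mutationIndex(Set<String> previous, Set<String> current) {
        Set<String> union = new HashSet<>(previous);
        union.addAll(current);
        Set<String> unchanged = new HashSet<>(previous);
        unchanged.retainAll(current);
        return union.isEmpty() ? 0.0
                : (double) (union.size() - unchanged.size()) / union.size();
    }

    public static void main(String[] args) {
        Set<String> iter1 = Set.of("R1", "R2", "R3");
        Set<String> iter2 = Set.of("R1", "R3", "R4"); // R2 removed, R4 added
        // union = {R1,R2,R3,R4}, unchanged = {R1,R3} -> (4-2)/4 = 0.50
        System.out.printf("mutation index = %.2f%n", mutationIndex(iter1, iter2));
    }
}
```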
144

An Interface for Collaborative Digital Forensics

Unknown Date (has links)
This thesis presents a novel interface for collaborative Digital Forensics. It describes improvements in process management and remote access for current Digital Forensic tools. The architecture presented uses current technology and implements standard security procedures. In addition, the development of software modules, elaborated later in this thesis, makes this architecture a secure, portable, robust, reliable, scalable, and convenient solution. The solution presented in this thesis is not specific to any Digital Forensics tool or operating platform, making it a portable architecture. A primary goal of this thesis has been the development of a solution that could support law-enforcement agency needs for remote digital decryption, and the interface presented here aims to achieve this goal. The integration of two popular Digital Forensic tools with this interface has led to a fully operational portal with 24x7 digital decryption processing capabilities for agents to use. A secondary goal was to investigate ideas and techniques that could be helpful in the field of "pass phrase" generation and recovery. The implementation of certain computational models to support this research is under way. The interface has been designed with features that will form part of the foundational work of developing new pass-phrase-breaking software components. Establishing a dedicated setup for the Digital Forensic tools and creating a secure, reliable, and user-friendly interface for it has been a major component of the overall development of the portal. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Fall Semester, 2007. / Date of Defense: September 19, 2007. / Digital Decryption, Security, User Interface, Digital Forensics, Java, Jsp / Includes bibliographical references. / Sudhir Aggarwal, Professor Directing Thesis; Breno de Medeiros, Committee Member; Zhenhai Duan, Committee Member.
145

Metrics and Techniques to Guide Software Development

Unknown Date (has links)
The objective of my doctoral dissertation research is to formulate, implement, and validate metrics and techniques for perceiving some of the influences on software development, predicting the impact of user-initiated changes on a software system, and prescribing guidelines to aid decisions affecting software development. Some of the topics addressed in my dissertation are: analyzing the extent to which changing requirements affect a system's design; how the delegation of responsibilities to software components can be guided; how Aspect Oriented Programming (AOP) may be combined with Object Oriented Programming (OOP) to best deliver a system's functionality; and whether and how characteristics of a system's design are influenced by outsourced and offshore development. The metrics and techniques developed in my dissertation serve as heuristics across the software development life cycle, helping practitioners evaluate options and make decisions. By way of validation, the metrics and techniques have been applied to more than 10 real-life software systems. To facilitate the application of the metrics and techniques, I have led the development of automated tools that can process software development artifacts such as code and Unified Modeling Language (UML) diagrams. The design and implementation of these tools are also discussed in the dissertation. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Degree Awarded: Spring Semester, 2009. / Date of Defense: March 2, 2009. / Software Architecture, Software Design, Software, Software Engineering, Software Metrics / Includes bibliographical references. / Robert van Engelen, Professor Directing Dissertation; Ian Douglas, Outside Committee Member; Lois Hawkes, Committee Member; Theodore Baker, Committee Member; Daniel Schwartz, Committee Member; Michael Mascagni, Committee Member.
146

Detection Framework for Phishing Websites

Unknown Date (has links)
This paper discusses a combined, platform-independent solution for detecting websites that fake their identity. The approach combines white-listing, black-listing, and heuristic strategies to provide an optimal detection ratio for these so-called phishing websites, while ensuring that the number of wrongly classified legitimate websites remains as low as possible. For the implementation, a prototype solution was written in platform-independent Java. Practical challenges during the implementation, as well as first practical results, are discussed. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Spring Semester, 2009. / Date of Defense: April 7, 2009. / Wolff, Heuristics, Phishing, Antiphishing, Security, It-Security, Internet, Computer Science / Includes bibliographical references. / Sudhir Aggarwal, Professor Directing Thesis; Zhenhai Duan, Committee Member; Zhenghao Zhang, Committee Member.
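A minimal sketch of the described cascade, assuming an illustrative host-based lookup: a white-list is consulted first, then a black-list, and heuristics handle only unknown sites. The heuristic rules and thresholds here are placeholders, not those of the thesis prototype.

```java
import java.util.Set;

public class PhishingDetector {
    enum Verdict { LEGITIMATE, PHISHING, SUSPICIOUS }

    private final Set<String> whiteList;
    private final Set<String> blackList;

    PhishingDetector(Set<String> whiteList, Set<String> blackList) {
        this.whiteList = whiteList;
        this.blackList = blackList;
    }

    Verdict classify(String host) {
        if (whiteList.contains(host)) return Verdict.LEGITIMATE; // known good
        if (blackList.contains(host)) return Verdict.PHISHING;   // known bad
        // Unknown site: fall back to heuristics.
        return heuristicScore(host) > 0.5 ? Verdict.SUSPICIOUS
                                          : Verdict.LEGITIMATE;
    }

    // Hypothetical heuristics: phishing hosts often use raw IPs, many
    // subdomains, or brand-lookalike strings.
    private double heuristicScore(String host) {
        double score = 0.0;
        if (host.matches("\\d{1,3}(\\.\\d{1,3}){3}")) score += 0.6; // raw IP
        if (host.chars().filter(c -> c == '.').count() > 3) score += 0.3;
        if (host.contains("-login") || host.contains("secure-")) score += 0.3;
        return Math.min(score, 1.0);
    }
}
```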
147

Comparing Samos Document Search Performance between Apache Solr and Neo4j

Unknown Date (has links)
The Distributed Oceanographic Match-Up Service (DOMS) currently under development is a centralized service that allows researchers to easily match in situ and satellite oceanographic data from distributed sources to facilitate satellite calibration, validation, and retrieval algorithm development. The Shipboard Automated Meteorological and Oceanographic System (SAMOS) initiative provides routine access to high-quality marine meteorological and near-surface oceanographic observations from research vessels. SAMOS is one of several endpoints connected to the DOMS network, providing in situ data for the match-up service. DOMS in situ endpoints currently use Apache Solr as a backend search engine on each node in the distributed network. While Solr is a high-performance solution that facilitates the creation and maintenance of indexed data, it is limited in that its schema is fixed. The property graph model escapes this limitation by removing any prohibitive requirements on the data model and permitting relationships between data objects. This paper documents the development of the SAMOS Neo4j property graph database, including new search possibilities that take advantage of the property graph model, performance comparisons with Apache Solr, and a vision for graph databases as a storage tool for oceanographic data. The integration of the SAMOS Neo4j graph into DOMS is also described. Various data models are explored, including spatio-temporal records from SAMOS added to a time tree using GraphAware technology. This extension provides callable Java procedures within the Cypher query language that generate in-graph structures used in data retrieval. Neo4j excels at relationship- and path-based queries, which challenge relational SQL databases because their design requires memory-intensive joins. Consider a user who wants to find records over several years, but only for specific months. If a traditional database stores only timestamps, this type of query can be complex and likely prohibitively slow. Using the time tree model in a graph, one can specify a path from the root to the data that restricts resolution to certain time frames (e.g., months). Such a query can be executed without joins, unions, or other compute-intensive operations, putting Neo4j at a computational advantage over the SQL alternative. While this advantage may be useful, it should not be interpreted as an advantage over Solr in the context of DOMS. Solr uses Apache Lucene indexing at its core, while Neo4j provides its own native schema indexes. Ultimately, each provides a unique solution for data retrieval geared to specific tasks. In the DOMS setting, Solr appears to be the more suitable option, as there seem to be very few use cases where Neo4j outperforms Solr. This is primarily because the subsetting use case does not require the flexibility and path-based queries that graph database tools offer. Rather, DOMS nodes use high-performance indexing structures to quickly filter large amounts of raw data that are not deeply connected; graph queries become useful only for large data sets that are. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Spring Semester 2017. / April 17, 2017. / CYPHER, database, graph, Neo4j, SAMOS, Solr / Includes bibliographical references. / Peixiang Zhao, Professor Co-Directing Thesis; Shawn Smith, Professor Co-Directing Thesis; Sonia Haiduc, Committee Member; Adrian Nistor, Committee Member.
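As a hedged illustration of the month-restricted traversal discussed above, the sketch below issues a Cypher path query through the Neo4j Java driver against an assumed time-tree layout (:Year)-[:HAS_MONTH]->(:Month)-[:HAS_RECORD]->(:Record); the actual SAMOS graph model, labels, and GraphAware procedures may differ.

```java
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Result;
import org.neo4j.driver.Session;

public class TimeTreeQuery {
    public static void main(String[] args) {
        // Connection details are placeholders.
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {
            // Walk the tree from each year directly to June -- a path
            // traversal, with no joins or unions over raw timestamps.
            Result result = session.run(
                "MATCH (y:Year)-[:HAS_MONTH]->(m:Month {value: 6})"
              + "-[:HAS_RECORD]->(r:Record) "
              + "WHERE y.value >= 2010 AND y.value <= 2015 "
              + "RETURN y.value AS year, count(r) AS records");
            result.forEachRemaining(rec ->
                System.out.println(rec.get("year").asInt() + ": "
                                 + rec.get("records").asLong()));
        }
    }
}
```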
148

Feistel-Inspired Scrambling Improves the Quality of Linear Congruential Generators

Unknown Date (has links)
Pseudorandom number generators (PRNGs) are an essential tool in many areas, including simulation studies of stochastic processes, modeling, randomized algorithms, and games. The performance of any PRNG depends on the quality of the generated random sequences: they must be produced quickly and have good statistical properties. Several statistical test suites have been developed to evaluate a single stream of random numbers, such as TestU01, DIEHARD, the tests from the SPRNG package, and a set of tests developed at NIST to evaluate bit sequences. TestU01 provides predefined batteries of tests drawn from these suites: SmallCrush (10 tests, 16 p-values), which runs quickly, and the Crush (96 tests, 187 p-values) and BigCrush (106 tests, 2254 p-values) batteries, which take longer to run. Most pseudorandom generators use a recurrence to produce sequences of numbers that appear random. The linear congruential generator is one of the best-known pseudorandom generators; the next number in the sequence is determined by the previous one. The recurrence starts from a value called the seed, and each time it starts from the same seed, the same sequence is produced. This dissertation develops a new pseudorandom number generation scheme that produces random sequences with good statistical properties by scrambling the output of linear congruential generators. The scrambling technique is based on a simplified version of the Feistel network, a symmetric structure used in the construction of cryptographic block ciphers. The proposed research seeks to improve the quality of linear congruential generators' output streams and to break up the regularities present in these generators. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester 2017. / May 4, 2017. / Feistel network, Linear congruential generators, Pseudorandom numbers / Includes bibliographical references. / Michael Mascagni, Professor Directing Dissertation; Dennis Duke, University Representative; Ashok Srinivasan, Committee Member; Robert van Engelen, Committee Member.
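A minimal sketch of the scheme, assuming illustrative parameters: a 64-bit LCG produces the raw stream, and a simplified two-round Feistel network scrambles each output. The round function, keys, and LCG constants here are stand-ins; the dissertation's exact construction may differ.

```java
public class FeistelScrambledLcg {
    private long state;
    // Constants of a well-known 64-bit LCG (Knuth's MMIX); illustrative only.
    private static final long A = 6364136223846793005L;
    private static final long C = 1442695040888963407L;

    FeistelScrambledLcg(long seed) { this.state = seed; }

    // Hypothetical round function mixing a 32-bit half-block with a round key.
    private static int round(int half, int key) {
        int x = half * 0x9E3779B9 + key;  // multiply-add mixing
        return x ^ (x >>> 16);            // fold high bits down
    }

    // Two Feistel rounds over the 64-bit LCG output, split into 32-bit halves:
    // (L, R) -> (R, L ^ F(R, k)), which is invertible by construction.
    private static long feistel(long v) {
        int left = (int) (v >>> 32), right = (int) v;
        for (int key : new int[] {0x517CC1B7, 0x27220A95}) { // arbitrary keys
            int tmp = right;
            right = left ^ round(right, key);
            left = tmp;
        }
        return ((long) left << 32) | (right & 0xFFFFFFFFL);
    }

    long next() {
        state = A * state + C;  // raw LCG step
        return feistel(state);  // scrambled output breaks up LCG regularities
    }
}
```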
149

Dependency Collapsing in Instruction-Level Parallel Architectures

Unknown Date (has links)
Processors that employ instruction fusion can improve performance and energy usage beyond traditional processors by collapsing and simultaneously executing dependent instruction chains on the critical path. This paper describes compiler mechanisms that can facilitate and guide instruction fusion in processors built to execute fused instructions. The compiler support discussed includes annotations to guide fusion, the exploration of multiple new fusion configurations, and scheduling algorithms that effectively select and order fusible instructions. The benefits of compiler support for dependent-instruction fusion include statically detecting fusible instruction chains without the need for hardware dynamic-detection support, and improved performance through increased available parallelism. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Summer Semester 2017. / July 21, 2017. / compiler, computer architecture, computer science, dependent instruction, parallelism / Includes bibliographical references. / David Whalley, Professor Directing Thesis; Gary Tyson, Committee Member; Xin Yuan, Committee Member.
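As an illustration of static detection of fusible chains, the sketch below pairs a producer instruction with an adjacent consumer of its result; the thesis's actual annotations, fusion configurations, and scheduling algorithms are more involved, and all names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class FusionCandidates {
    record Insn(int id, String dest, List<String> sources) {}
    record Pair(Insn producer, Insn consumer) {}

    // Within a basic block, mark adjacent pairs where the consumer
    // immediately uses the producer's result, so the pair can be annotated
    // for collapsed execution on a fused unit.
    static List<Pair> findFusiblePairs(List<Insn> block) {
        List<Pair> pairs = new ArrayList<>();
        for (int i = 0; i < block.size() - 1; i++) {
            Insn p = block.get(i);
            Insn c = block.get(i + 1);
            if (p.dest() != null && c.sources().contains(p.dest())) {
                pairs.add(new Pair(p, c)); // one link of a dependence chain
            }
        }
        return pairs;
    }
}
```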
150

Matching Physical File Representation to Logical Access Patterns for Better Performance

Unknown Date (has links)
Over the years, the storage substrate of operating systems has evolved with changing storage devices and workloads [2, 6, 7, 8, 12, 15, 18, 26, 29, 33, 34, 35, 39, 41, 42, 44, 47, 48, 54]. Both academia and industry have devoted significant research effort to the file system, a critical component of the storage system. A file system directs the underlying device-specific software to perform data reads and writes, while providing the notion of files for interacting with users and applications. To achieve this, a file system represents logical files internally, or physically, with data (the file content) and metadata (the information required to locate, index, and operate on data). Most file system optimizations assume this one-to-one coupling of logical and physical representations [2, 7, 8, 18, 25, 26, 29, 33, 34, 35, 48]. This dissertation presents the design, implementation, and evaluation of two new systems that decouple these representations and offer a class of optimization opportunities not previously possible. First, the Composite-File File System (CFFS) exploits the observation that many files are frequently accessed together. By consolidating related file metadata, it improves performance by up to 27%. Second, the Fine-grained Journal Store (FJS) exploits the observation that typically only subregions of a metadata entry are updated, while the heavyweight reliability and storage mechanisms affect the entire entry. The result is many unnecessary metadata writes that harm both the performance and the lifespan of certain storage devices. By focusing on only the updated metadata regions and consolidating storage and reliability mechanisms, the Fine-grained Journal Store improves performance by up to 15x and reduces unnecessary writes by up to 5.8x. Overall, decoupling the logical and physical representations allows more flexible matching of physical representations to workload patterns, and the results show that this approach is promising. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester 2018. / June 26, 2018. / File system, Metadata / Includes bibliographical references. / Andy An-I Wang, Professor Directing Dissertation; Jinfeng Zhang, University Representative; David Whalley, Committee Member; Peixiang Zhao, Committee Member.
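To illustrate the decoupling idea, here is a hypothetical sketch of a composite-file lookup table in which several logical files resolve to one consolidated physical entry; the real CFFS on-disk layout and consistency machinery are not shown, and all field names are assumptions.

```java
import java.util.List;
import java.util.Map;

public class CompositeFileTable {
    // One consolidated physical entry serving many logical files whose
    // metadata is stored together because they are accessed together.
    record CompositeEntry(long physicalId,
                          Map<String, Long> logicalOffsets, // path -> offset
                          long sharedMetadataBlock) {}

    // Logical lookup resolves through the composite entry, so opening one
    // member of a frequently co-accessed group brings in metadata for all.
    static Long resolve(List<CompositeEntry> table, String path) {
        for (CompositeEntry e : table) {
            Long off = e.logicalOffsets().get(path);
            if (off != null) return off; // offset within the composite file
        }
        return null; // not part of any composite
    }
}
```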
