
Confidentiality enforcement using dynamic information flow analyses

Le Guernic, Gurvan January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / David A. Schmidt, Anindya Banerjee, Thomas Jensen / With the intensification of communication in information systems, interest in security has increased. The notion of noninterference is typically used as a baseline security policy to formalize confidentiality of secret information manipulated by a program. This notion, based on ideas from classical information theory, was first introduced by Goguen and Meseguer (1982) as the absence of strong dependency (Cohen, 1977): "information is transmitted from a source to a destination only when variety in the source can be conveyed to the destination" (Cohen, 1977). Building on the notion proposed by Goguen and Meseguer, a program is typically said to be noninterfering if the values of its public outputs do not depend on the values of its secret inputs. If that is not the case, then there exist illegal information flows that allow an attacker with knowledge of the program's source code to deduce information about the secret inputs from the public outputs of an execution. In contrast to the vast majority of previous work on noninterference, which is based on static analyses (especially type systems), this PhD thesis considers dynamic monitoring of noninterference. A monitor enforcing noninterference is more complex than standard execution monitors, because "the information carried by a particular message depends on the set it comes from. The information conveyed is not an intrinsic property of the individual message." (Ashby, 1956). The work presented in this report is based on the combination of dynamic and static information flow analyses. The practicality of such an approach is demonstrated by the development of a monitor for concurrent programs including synchronization commands. This report also elaborates on the soundness (with regard to noninterference) and precision of such approaches.
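To make the noninterference condition concrete, here is a minimal sketch (an illustration of the general concept, not code from the thesis) of a program with an illegal implicit flow, of the kind a dynamic monitor must detect:

```python
# Hypothetical example: a program whose public output depends on a secret
# input via an implicit flow. No secret value is copied directly, yet the
# branch on secret data lets an observer learn the secret's sign.

def leaky(secret: int) -> int:
    public_out = 0
    if secret > 0:          # branching on secret data taints what follows
        public_out = 1
    return public_out       # observing this reveals whether secret > 0

# Two runs differing only in the secret input yield different public
# outputs, so the program is interfering (it violates noninterference).
assert leaky(5) != leaky(-5)
```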

Pointer analysis and separation logic

Sims, Elodie-Jane January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / David A. Schmidt / We are interested in modular static analysis to analyze software automatically. We focus on programs with data structures, and in particular, programs with pointers. The final goal is to find errors in a program (problems of dereferencing, aliasing, etc.) or to prove that a program is correct (with regard to those problems) in an automatic way. Ishtiaq, Pym, O'Hearn, and Reynolds have recently developed separation logics: Hoare logics with an assertion and predicate language that allows one to prove the correctness of programs that manipulate pointers. The semantics of the logic's triples ({P}C{P'}) is defined by predicate transformers in the style of weakest preconditions. We expressed and proved the correctness of those weakest preconditions (wlp) and strongest postconditions (sp), in particular in the case of while-loops. The advance over existing work is that wlp and sp are defined for any formula, whereas previously existing rules had syntactic restrictions. We added fixpoints to the logic, as well as a postponed substitution, which together allow recursive formulas to be expressed. We expressed wlp and sp in the extended logic and proved their correctness. The postponed substitution is directly useful for expressing recursive formulas. For example, [equations removed, still appears in abstract] describes the set of memory states in which x points to a list of integers. Next, the goal was to use separation logic with fixpoints as an interface language for pointer analyses: translating the domains of those analyses into formulas of the logic (and conversely) and proving the translations correct. One might also use the translations to prove the correctness of the pointer analysis itself. We illustrate this approach with a simple pointer-partitioning analysis. We translate the logic formulas into an abstract language we designed, which allows us to describe the types of values stored in memory (nil, integers, booleans, pointers to pairs of some types, etc.) as well as the aliasing and non-aliasing relations between variables and locations in memory. The main contribution is the definition of the abstract language and its semantics in a concrete domain, which is the same as the one used for the semantics of formulas. In particular, the semantics of the auxiliary variables, which is usually a matter of implementation, is explicit in our language and its semantics. The abstract language is a partially reduced product of several subdomains and can be parametrized with existing numerical domains. We created a subdomain, a tabular data structure, to cope with the imprecision caused by not having sets of graphs. We expressed and proved the translations of formulas into this abstract language.
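The abstract's own equations were stripped during extraction and are not reconstructed here. Purely as a hedged illustration, a standard recursive list predicate in separation logic with fixpoints (textbook form, not necessarily the author's exact formula) can be written as:

```latex
% Assumed textbook-style list predicate, not the thesis's removed equation:
% mu binds the recursive predicate variable X; emp is the empty heap;
% x |-> (v, y) asserts a single cell holding value v and tail pointer y.
\[
  \mathit{list}(x) \;=\; \mu X.\;
    \bigl(x = \mathit{nil} \wedge \mathbf{emp}\bigr)
    \;\vee\;
    \bigl(\exists v, y.\; x \mapsto (v, y) \,*\, X(y)\bigr)
\]
```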

A simulation framework to ensure data consistency in sensor networks

Shah, Nikhil Jeevanlal January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / The objective of this project is to address the problem of data consistency in sensor network applications. An application may involve data being gathered from several sources and delivered to multiple sinks, resulting in multiple data streams, each with several sources and sinks. There may be several inter-stream constraints to satisfy in order to ensure data consistency. In this report, we model this problem as one of variable sharing between the components of an application, and propose a framework for implementing variable sharing in a distributed sensor network. In this framework, we define the notion of variable sharing in component-based systems and allow the application designer to specify data consistency constraints. We implement a tool that, given an application, identifies the various types of shared variables it contains. Given the shared variables and the data consistency constraints, we provide an infrastructure to implement the shared variables, including tools to synthesize the code deployed on each node of the physical topology. The infrastructure has been built for the TinyOS platform, and we have evaluated the framework on several examples using the TOSSIM simulator.
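A minimal sketch of the identification step described above (hypothetical component and variable names; the project's actual tool works on TinyOS applications, not this toy model): a variable written by one component and read by another is shared, and needs a consistency protocol when the components sit on different nodes.

```python
# Toy model: each component declares the variables it writes and reads.
components = {
    "TempSource": {"writes": {"temp"}, "reads": set()},
    "Aggregator": {"writes": {"avg"},  "reads": {"temp"}},
    "AlarmSink":  {"writes": set(),    "reads": {"avg", "temp"}},
}

def shared_variables(components):
    """Map each shared variable to its writer and its readers."""
    shared = {}
    for name, c in components.items():
        for var in c["writes"]:
            readers = [n for n, d in components.items()
                       if var in d["reads"] and n != name]
            if readers:
                shared[var] = {"writer": name, "readers": readers}
    return shared

print(shared_variables(components))
# {'temp': {'writer': 'TempSource', 'readers': ['Aggregator', 'AlarmSink']},
#  'avg': {'writer': 'Aggregator', 'readers': ['AlarmSink']}}
```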

Toward autism recognition using hidden Markov models

Lancaster, Joseph Paul Jr. January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / David A. Gustafson / The use of hidden Markov models in autism recognition and analysis is investigated. More specifically, we would like to be able to determine a person's level of autism (AS, HFA, MFA, LFA) using hidden Markov models trained on observations of a subject's behavior in an experiment. A preliminary model is described that includes three mental states: self-absorbed, attentive, and joint-attentive. Furthermore, observations are included that are more or less indicative of each of these states. Two experiments are described, the first on a single subject and the second on two subjects. Data was collected from one individual in the second experiment, prepared as observation sequences for input to hidden Markov models, and the resulting models were studied. Several questions subsequently arose, and tests, written in Java using the JaHMM hidden Markov model toolkit, were conducted to learn more about the hidden Markov models being used as autism recognizers and the algorithms used to train them. The tests are described along with the corresponding results and implications. It turns out that we are not yet able to produce hidden Markov models indicative of a person's level of autism; the problems encountered are discussed, and suggestions are made for future work intended to further investigate the use of hidden Markov models in autism recognition.
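A minimal sketch of the recognizer idea (assumed, not the thesis's JaHMM code; the three states follow the abstract, but all probabilities are invented placeholders): score an observation sequence under a 3-state HMM with the forward algorithm, and pick the per-level model that assigns the highest likelihood.

```python
import numpy as np

states = ["self-absorbed", "attentive", "joint-attentive"]
pi = np.array([0.5, 0.3, 0.2])       # initial state distribution
A = np.array([[0.7, 0.2, 0.1],       # state transition probabilities
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
B = np.array([[0.8, 0.1, 0.1],       # emission probabilities: P(obs | state)
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

def forward_likelihood(obs):
    """P(observation sequence | model), via the forward algorithm."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# One model per autism level would be trained; the level whose model gives
# a subject's observations the highest likelihood is the predicted level.
print(forward_likelihood([0, 0, 1, 2, 2]))
```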

Universal object segmentation in fused range-color data

Finley, Jeffery Michael January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Christopher L. Lewis / This thesis presents a method for universal object segmentation on fused SICK laser range data and color CCD camera images collected from a mobile robot, and details the method of fusion itself. Fused data allows for higher resolution than range-only data and provides more information than color-only data. The segmentation method uses the Expectation-Maximization (EM) algorithm to detect the location and number of universal objects, each modeled by a six-dimensional Gaussian distribution. This is achieved by repeatedly subdividing objects previously identified by EM; after several iterations, objects with similar traits are merged. The universal object model performs well in environments consisting of both man-made objects (walls, furniture, pavement) and natural objects (trees, bushes, grass), making it well suited to both indoor and outdoor environments. The algorithm requires neither the number of objects to be known in advance nor a training data set. Once the universal objects have been segmented, they can be processed and classified, or used as-is inside robotic navigation algorithms such as SLAM.
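A rough sketch of the core fitting step (assumed, not the thesis's implementation): each fused point is a 6-D vector (x, y, z, r, g, b), and scikit-learn's GaussianMixture runs the same EM iteration the abstract describes. The split-and-merge refinement around EM is only gestured at here, and the data below is synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic fused data: two "objects" in joint position-color space.
wall = rng.normal([0, 0, 2, 200, 200, 190], 0.5, size=(300, 6))
bush = rng.normal([3, 1, 1, 40, 120, 50], 0.5, size=(300, 6))
points = np.vstack([wall, bush])

# EM fits one full-covariance 6-D Gaussian per object.
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(points)
labels = gmm.predict(points)   # per-point object assignment
```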

Comparative text summarization of product reviews

Singi Reddy, Dinesh Reddy January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / William H. Hsu / This thesis presents an approach to summarizing product reviews using comparative sentences and sentiment analysis. Specifically, we consider the problem of extracting and scoring features from natural language text for qualitative reviews in a particular domain. When shopping for a product, customers do not have the time to learn about every product on the market; similarly, manufacturers lack good written sources from which to learn about customer opinions. The available alternative, gathering customer opinions in text form from e-commerce and social networking web sites and analyzing them, is a costly and time-consuming process. In this work I address these issues by applying sentiment analysis, an automated method of finding the opinion stated by an author about some entity in a text document. I first gather information about smart phones from many e-commerce web sites. I then present a method to differentiate comparative sentences from normal sentences, form feature sets for each domain, and assign each feature of a product a numerical score, together with a weight coefficient obtained by statistical machine learning, so that products can be ranked by linear combinations of their weighted feature scores. I also explain what role comparative sentences play in summarizing the product. To find the polarity of each feature, a statistical algorithm is defined using a small-to-medium-sized data set. I then present my experimental environment and results, and conclude with a review of the claims and hypotheses stated at the outset. The approach is evaluated using manually annotated training data and data from domain experts. I also demonstrate empirically how different summarization algorithms can be derived from the annotation technique. Finally, I review options offered to customers, such as alternative products for a given feature, the top features of a product, and overall product rankings.
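A minimal sketch of the ranking step described above (all feature names, weights, and scores are invented placeholders; the thesis learns the weights by statistical machine learning): each product's rank comes from a linear combination of its weighted feature scores.

```python
# Hypothetical learned weights and per-feature sentiment scores.
weights = {"battery": 0.4, "camera": 0.35, "screen": 0.25}
phone_scores = {
    "PhoneA": {"battery": 0.8, "camera": 0.6, "screen": 0.7},
    "PhoneB": {"battery": 0.5, "camera": 0.9, "screen": 0.6},
}

def rank(phones, weights):
    """Rank products by the weighted sum of their feature scores."""
    total = {p: sum(weights[f] * s for f, s in feats.items())
             for p, feats in phones.items()}
    return sorted(total.items(), key=lambda kv: kv[1], reverse=True)

print(rank(phone_scores, weights))
# approximately [('PhoneA', 0.705), ('PhoneB', 0.665)]
```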

Slicing of extended finite state machines

Atchuta, Kaushik January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Torben Amtoft / An EFSM (Extended Finite State Machine) is a tuple (S, T, E, V), where S is a finite set of states, T is a finite set of transitions, E is a finite set of events, and V is a finite set of variables. Every transition t in T has a source state and a target state, both in S. There was a need for a GUI that aids in building such machines and simulating them, so that a slicing algorithm can be applied to the resulting graphs. This was the idea of Dr. Torben Amtoft, who wrote the slicing algorithm and wanted it implemented in code. The project aims at implementing a GUI that builds and simulates such graphs with minimum user effort. Poor design often fails to attract users, so the initial effort is to build a simple and effective GUI that takes input from the user, builds graphs, and simulates them. The scope of this project is to build an interface with which users can, in an effective way: input a specification of an EFSM; store and later retrieve EFSMs; display an EFSM in graphical form; simulate an EFSM; modify an EFSM; and apply the slicing algorithm. All of these features must be integrated into the GUI, which should fail only if the input specification is wrong.
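A minimal sketch of the EFSM tuple (S, T, E, V) and one simulation step (assumed, not the project's GUI code; states, events, and guards below are invented): a transition fires when its event arrives and its guard over the variables V holds.

```python
efsm = {
    "states": {"Idle", "Running"},
    "variables": {"count": 0},
    "transitions": [
        # (source, event, guard, action, target)
        ("Idle", "start", lambda v: True,
         lambda v: v.update(count=0), "Running"),
        ("Running", "tick", lambda v: v["count"] < 3,
         lambda v: v.update(count=v["count"] + 1), "Running"),
        ("Running", "tick", lambda v: v["count"] >= 3,
         lambda v: None, "Idle"),
    ],
}

def step(state, event, variables, efsm):
    """Fire the first enabled transition for this event, if any."""
    for src, ev, guard, action, dst in efsm["transitions"]:
        if src == state and ev == event and guard(variables):
            action(variables)
            return dst
    return state  # no enabled transition: stay in the current state

state, v = "Idle", efsm["variables"]
for e in ["start", "tick", "tick", "tick", "tick"]:
    state = step(state, e, v, efsm)
print(state, v)   # Idle {'count': 3}
```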

A transaction model for environmental resource dependent Cyber-Physical Systems

Zhu, Huang January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / Gurdip Singh / Cyber-Physical Systems (CPSs) represent the next-generation systems characterized by strong coupling of computing, sensing, communication, and control technologies. They have the potential to transform our world with more intelligent and efficient systems, such as Smart Home, Intelligent Transportation System, Energy-Aware Building, Smart Power Grid, and Surgical Robot. A CPS is composed of a computational and a physical subsystem. The computational subsystem monitors, coordinates, and controls operations of the physical subsystem to create desired physical effects, while the physical subsystem performs physical operations and gives feedback to the computational subsystem. This dissertation contributes to the research of CPSs by proposing a new transaction model for Environmental Resource Dependent Cyber-Physical Systems (ERDCPSs). The physical operations of this type of CPS rely on environmental resources; such systems are commonly seen in areas such as transportation and manufacturing. For example, an autonomous car views road segments as resources to make movements, and a warehouse robot views storage spaces as resources to fetch and place goods. The operating environment of such CPSs, the CPS Network, contains multiple CPS entities that share common environmental resources and interact with each other through their usage of these resources. We model the physical operations of an ERDCPS as a set of transactions of different types that achieve different goals, where each transaction consists of a sequence of actions. A transaction or an action may require environmental resources for its operations, and the usage of an environmental resource is precise in both time and space. Moreover, a successful execution of a transaction or an action requires exclusive access to certain resources. Transactions from the different CPS entities of a CPS Network constitute a schedule. Since environmental resources are shared, transactions in the schedule may conflict in their use of these resources, and a schedule must remain consistent to avoid the unexpected consequences of such conflicts. A two-phase commit algorithm is proposed to process transactions. In the pre-commit phase, a transaction is scheduled by reserving usage times of required resources, and potential conflicts are detected and resolved using different strategies, such as Win-Lose, Win-Win, and Transaction Preemption. Two general algorithms are presented to process transactions in the pre-commit phase, for both centralized and distributed resource management environments. In the commit phase, a transaction is executed using the reserved resources. An exception occurs when the real-time resource usage differs from what was predicted; by performing internal and external checks before a scheduled transaction is executed, such exceptions can be detected and handled properly. A simulation platform (CPSNET) is developed to simulate the transaction model. The simulation platform simulates a CPS Network, where different CPS entities coordinate the resource usages of their transactions through a Communication Network. Depending on the resource management environment, a Resource Server may exist in the CPS Network to manage the resource usages of all CPS entities. The simulation platform is highly configurable: the simulation environment, the CPS entities, and the two-phase commit algorithm can all be configured, and various statistics and operation logs are provided to monitor and evaluate the platform itself and the transaction model. Seven groups of simulation experiments were carried out to verify the simulation platform and the transaction model. Simulation results show that the platform is capable of simulating a large load of CPS entities and transactions, and that entities and components perform their functions correctly with respect to transaction processing. The two-phase commit algorithm is evaluated, and the results show that, compared with traditional cases where no conflict resolution is applied or a conflicting transaction is directly aborted, the proposed conflict resolution strategies improve schedule productivity, by allowing more transactions to be executed, and scheduling throughput, by maintaining a higher concurrency level.
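A hedged illustration of the pre-commit phase described above (a toy sketch, far simpler than the dissertation's model; resource and transaction names are invented): a transaction reserves time intervals on shared resources, and overlapping reservations on the same resource surface as conflicts to be resolved (e.g., by a Win-Lose strategy) before the commit phase.

```python
reservations = {}  # resource -> list of (start, end, txn_id)

def try_reserve(resource, start, end, txn_id):
    """Reserve [start, end) on resource; return conflicting txn ids, if any."""
    conflicts = [t for (s, e, t) in reservations.get(resource, [])
                 if s < end and start < e]   # interval overlap test
    if not conflicts:
        reservations.setdefault(resource, []).append((start, end, txn_id))
    return conflicts

print(try_reserve("road_segment_7", 10, 20, "carA"))  # [] -> reserved
print(try_reserve("road_segment_7", 15, 25, "carB"))  # ['carA'] -> conflict
```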

Predicting the behavior of robotic swarms in discrete simulation

Lancaster, Joseph Paul, Jr January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / David Gustafson / We use probabilistic graphs to predict the location of swarms over 100 steps in simulations in grid worlds. One graph can be used to make predictions for worlds of different dimensions. The worlds are constructed from a single 5x5 square pattern, each square of which may be either unoccupied or occupied by an obstacle or a target. Simulated robots move through the worlds avoiding the obstacles and tagging the targets. The interactions among the robots, and between the robots and the environment, lead to behavior that, even in deterministic simulations, can be difficult to anticipate. The graphs capture the local rate and direction of swarm movement through the pattern. The graphs are used to create a transition matrix, which, along with an occupancy matrix, can be used to predict the occupancy of the patterns over the 100 steps using 100 matrix multiplications. In the future, the graphs could be used to predict the movement of physical swarms through patterned environments such as city blocks in applications such as disaster-response search and rescue. The predictions could assist in the design and deployment of such swarms and help rule out undesirable behavior.
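A minimal sketch of the prediction step described above (assumed, with a toy 3-cell world in place of the 5x5 pattern and invented transition probabilities): the occupancy vector is repeatedly multiplied by the transition matrix, one multiplication per step.

```python
import numpy as np

# T[i, j] = estimated probability that a robot in cell i moves to cell j
# in one step (each row sums to 1).
T = np.array([[0.6, 0.4, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.2, 0.8]])

occupancy = np.array([1.0, 0.0, 0.0])   # swarm mass starts in cell 0
for _ in range(100):                    # 100 steps = 100 multiplications
    occupancy = occupancy @ T
print(occupancy)                        # predicted distribution after 100 steps
```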

A comprehensive approach to enterprise network security management

Homer, John January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / Xinming (Simon) Ou / Enterprise network security management is a vitally important task, more so now than ever before. Networks grow ever larger and more complex, and corporations, universities, and government agencies rely heavily on their availability. Security in enterprise networks is constantly threatened by thousands of known software vulnerabilities, with thousands more discovered annually in a wide variety of applications, so an overwhelming amount of data is relevant to the ongoing protection of an enterprise network. Previous works have addressed the identification of vulnerabilities in a given network and the aggregation of these vulnerabilities in an attack graph, clearly showing how an attacker might gain access to or control over network resources. These works, however, do little to address how to evaluate or properly utilize this information. I have developed a comprehensive approach to enterprise network security management. Compared with previous methods, my approach unifies these issues under a single goal: provable mitigation of risk within an enterprise network. Attack graph simplification is used to improve user comprehension of the graph data and to enable more efficient use of the data in risk assessment. A sound and effective quantification of risk within the network produces values that can form a basis for the valuation policies needed to apply a SAT solving technique. SAT solving resolves policy conflicts and produces an optimal reconfiguration, based on the provided values, which a knowledgeable human user can verify for accuracy and applicability within the context of the enterprise network. Empirical study shows the effectiveness and efficiency of these approaches, and indicates promising directions to be explored in future work. Overall, this research comprises an important step toward a more automated security management initiative.
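A minimal sketch of risk quantification over an attack graph (assumed; the dissertation's quantification is more involved, and the nodes and probabilities below are invented): an attack step succeeds with its exploit probability once at least one prerequisite has been reached, with independence assumed between steps.

```python
from math import prod

attack_graph = {
    # node: (exploit success probability, prerequisite nodes)
    "internet":   (1.0, []),
    "webserver":  (0.8, ["internet"]),
    "database":   (0.6, ["webserver"]),
    "fileserver": (0.4, ["webserver", "database"]),
}

def risk(node, graph, memo=None):
    """Likelihood the attacker compromises `node` in an acyclic graph."""
    if memo is None:
        memo = {}
    if node not in memo:
        p, prereqs = graph[node]
        if not prereqs:
            reach = 1.0
        else:
            # P(at least one prerequisite compromised), assuming independence
            reach = 1.0 - prod(1.0 - risk(q, graph, memo) for q in prereqs)
        memo[node] = p * reach
    return memo[node]

for n in attack_graph:
    print(n, round(risk(n, attack_graph), 4))
# internet 1.0, webserver 0.8, database 0.48, fileserver 0.3584
```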
