About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Athena: An online proposal development system

Rahim, Humaira 01 January 2005 (has links)
Athena, the Online Proposal Development System, is the first version of a system envisioned by Dr. Richard Botting, Professor, Department of Computer Science at California State University, San Bernardino. The program, a JSP-based system backed by a MySQL database, moves the writing, review, and annotation of project proposals into the digital environment. It allows Computer Science Master's students to submit their project proposals online for review and annotation by committee members.
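To make the proposal/annotation workflow concrete, here is a minimal sketch of the kind of data model such a system might sit on; it substitutes SQLite for MySQL, and every table, column, and sample name is an invented illustration, not taken from Athena itself.

```python
# A minimal sketch of a proposal-review schema; names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the MySQL database
conn.executescript("""
CREATE TABLE proposal (
    id      INTEGER PRIMARY KEY,
    student TEXT NOT NULL,
    title   TEXT NOT NULL,
    body    TEXT NOT NULL,
    status  TEXT DEFAULT 'submitted'   -- submitted / under_review / approved
);
CREATE TABLE annotation (
    id          INTEGER PRIMARY KEY,
    proposal_id INTEGER REFERENCES proposal(id),
    reviewer    TEXT NOT NULL,         -- committee member
    comment     TEXT NOT NULL
);
""")

# A student submits a proposal; a committee member annotates it.
conn.execute("INSERT INTO proposal (student, title, body) VALUES (?, ?, ?)",
             ("student1", "Sample project", "Proposal text ..."))
conn.execute("INSERT INTO annotation (proposal_id, reviewer, comment) VALUES (?, ?, ?)",
             (1, "committee_member1", "Please clarify the scope."))
for row in conn.execute("SELECT reviewer, comment FROM annotation WHERE proposal_id = 1"):
    print(row)
```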
2

System based ladder logic simulation and debugging

Krishnan, Krishna Kumar 07 November 2008 (has links)
PLCs are extensively used for the discrete and continuous control of non-intelligent shop-floor devices. The debugging phase of ladder logic development for PLCs is very cumbersome and difficult. Most often, on-line debugging, which is expensive and time-consuming, is used. Computer simulation techniques applied to this problem leave much to be desired. The best technique developed for ladder logic debugging is the use of ladder-based triggers. A ladder-based trigger is a function which suspends simulation execution whenever a vector of ladder variables equates to a vector of predefined states. System-based debugging facilities are those which aid a programmer in error detection at the system level. System-based triggers identify system faults and set traps within a simulation model to detect their occurrence; once a trigger is activated, this approach provides the information necessary for a faster correction of the ladder logic. The system-based debugging tool developed is capable of scanning a Boolean representation of a PLC program with input coils, counters, timers, "and" conditions, "or" conditions, and output coils. The program provides the following facilities: 1. Graphics programs can be attached to the simulation program for better visualization. 2. The simulation program allows interactive control over the test bed developed; in a non-interactive simulation it can be executed in a timed sequential mode or a random mode. 3. Triggers can be set by the user depending on the conditions that are to be monitored. 4. The program stops execution whenever a trigger is activated. 5. The program provides a trace of the output that caused the trigger and of the inputs to that output, along with their state values at the time of activation. The use of system-based techniques and graphics in the debugging of PLC ladder logic is demonstrated, as is the use of an object-oriented framework in the development of the debugging software. / Master of Science
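As an illustration of the trigger mechanism described above, here is a minimal sketch that models ladder state as a dictionary of named Boolean variables and suspends the scan loop when a watched vector matches its predefined states; the one-rung scan function and all variable names are invented for illustration.

```python
# A ladder-based trigger sketch: simulation halts when a vector of watched
# ladder variables equals a vector of predefined states.
def make_trigger(watched, expected):
    """watched: list of variable names; expected: list of target states."""
    def fired(state):
        return [state[v] for v in watched] == expected
    return fired

# Toy one-rung program: output = input1 AND (input2 OR timer_done)
def scan(state):
    state["output"] = state["input1"] and (state["input2"] or state["timer_done"])
    return state

trigger = make_trigger(["input1", "output"], [True, True])

state = {"input1": False, "input2": False, "timer_done": False, "output": False}
for step, (i1, i2, td) in enumerate([(False, True, False),
                                     (True, False, True),
                                     (True, True, False)]):
    state.update(input1=i1, input2=i2, timer_done=td)
    state = scan(state)
    if trigger(state):
        # Trace of the state vector at activation time, as the tool provides.
        print(f"trigger fired at scan {step}: {state}")
        break
```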
3

Improvements to the complex question answering models

Imam, Md. Kaisar January 2011 (has links)
In recent years the amount of information on the web has increased dramatically. As a result, it has become a challenge for researchers to find effective ways to query and extract meaning from these large repositories. Standard document search engines try to address the problem by presenting the user a ranked list of relevant documents. In most cases this is not enough, as the end-user has to go through the entire document to find the answer they are looking for. Question answering, the retrieval of answers to natural language questions from a document collection, tries to remove this onus from the end-user by providing direct access to relevant information. This thesis is concerned with open-domain complex question answering. Unlike simple questions, complex questions cannot be answered easily, as they often require inferencing and synthesizing information from multiple documents. Hence, we treat the task of complex question answering as query-focused multi-document summarization. In this thesis, to improve complex question answering we experimented with both empirical and machine learning approaches. We extracted several features of different types (i.e. lexical, lexical-semantic, syntactic, and semantic) for each of the sentences in the document collection in order to measure its relevancy to the user query. We formulated the task of complex question answering in a reinforcement learning framework, which to the best of our knowledge has not been applied to this task before and has the potential to improve itself by fine-tuning the feature weights from user feedback. We also used unsupervised machine learning techniques (random walk, manifold ranking) and augmented them with semantic and syntactic information. Finally, we experimented with question decomposition: instead of trying to find the answer to the complex question directly, we decomposed the complex question into a set of simple questions and synthesized their answers to get our final result. / x, 128 leaves : ill. ; 29 cm
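Here is a minimal sketch of the feedback-driven weight tuning the abstract alludes to, assuming a linear relevance score over four feature types and a simple reward-based update; the update rule and feature values are illustrative placeholders, not the thesis's actual formulation.

```python
# Sentence relevance as a weighted feature sum; user feedback nudges weights.
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros(4)  # e.g. lexical, lexical-semantic, syntactic, semantic

def score(features, weights):
    return float(features @ weights)

def feedback_update(weights, features, reward, lr=0.1):
    # Reward +1 if the user found the sentence relevant, -1 otherwise.
    return weights + lr * reward * features

sentences = rng.random((5, 4))  # feature vectors for 5 candidate sentences
best = max(range(5), key=lambda i: score(sentences[i], weights))
weights = feedback_update(weights, sentences[best], reward=+1)
print("updated weights:", weights)
```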
4

Novel Cryptographic Primitives and Protocols for Censorship Resistance

Dyer, Kevin Patrick 24 July 2015 (has links)
Internet users rely on the availability of websites and digital services to engage in political discussions, report on newsworthy events in real-time, watch videos, etc. However, sometimes those who control networks, such as governments, censor certain websites, block specific applications, or throttle encrypted traffic. Understandably, when users are faced with egregious censorship, where certain websites or applications are banned, they seek reliable and efficient means to circumvent such blocks. This tension is evident in countries such as Iran and China, where the Internet censorship infrastructure is pervasive and continues to increase in scope and effectiveness. An arms race is unfolding with two competing threads of research: (1) network operators' ability to classify traffic and subsequently enforce policies, and (2) network users' ability to control how network operators classify their traffic. Our goal is to understand and advance the state of the art on both sides. First, we present novel traffic analysis attacks against encrypted communications. We show that state-of-the-art cryptographic protocols leak private information about users' communications, such as the websites they visit, the applications they use, or the languages used for communication. Then, we investigate means to mitigate these privacy-compromising attacks. Towards this, we present a toolkit of cryptographic primitives and protocols that simultaneously (1) achieve traditional notions of cryptographic security and (2) enable users to conceal information about their communications, such as the protocols used or websites visited. We demonstrate the utility of these primitives and protocols in a variety of real-world settings. As a primary use case, we show that these new primitives and protocols protect network communications and bypass the policies of state-of-the-art hardware- and software-based network monitoring devices.
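As a rough illustration of the concealment idea, the sketch below encrypts first and then wraps the ciphertext in an innocuous-looking cover format so a classifier sees an ordinary-looking request. It assumes the third-party `cryptography` package; the cover format and helper names are invented, and this illustrates the general concept only, not the thesis's actual primitives.

```python
# Concept sketch: encrypt, then encode ciphertext into a benign-looking cover.
from cryptography.fernet import Fernet  # pip install cryptography
import base64

key = Fernet.generate_key()
f = Fernet(key)

def conceal(plaintext: bytes) -> str:
    token = f.encrypt(plaintext)                      # standard encryption
    cover = base64.b16encode(token).decode().lower()  # hex "resource id"
    return f"GET /img/{cover}.png HTTP/1.1\r\nHost: example.com\r\n\r\n"

def reveal(request: str) -> bytes:
    cover = request.split("/img/")[1].split(".png")[0]
    return f.decrypt(base64.b16decode(cover.upper()))

msg = conceal(b"hello censored world")
assert reveal(msg) == b"hello censored world"
print(msg.splitlines()[0])  # looks like an ordinary image request
```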
5

Computing Research in Academia: Classifications, Keywords, Perceptions, and Connections

Kim, Sung Han 01 May 2016 (has links)
The Association for Computing Machinery (ACM) recognizes five computing disciplines: Computer Science (CS), Computer Engineering (CE), Information Technology (IT), Information Systems (IS), and Software Engineering (SE). Founded in 1947, the ACM is the world's largest society for computing educators, researchers, and professionals. While Computer Science has been a degree program since 1962, the other four are relatively new. This research focuses on understanding graduate research in four of the five ACM disciplines (CS, CE, IT, and IS) using a large body of thesis and dissertation metadata. SE is not found in the metadata, so graduate work in SE is not included. IS is no longer officially found in the metadata either, so its ProQuest replacement, Information Science, is used in its place, based on the commonality of the associated ProQuest Classification code, even though Information Science is not an ACM-recognized discipline. The research is performed using co-word and graph analysis of author-supplied Classifications, Departments, and keywords. Similarities and differences between the disciplines are identified. Whether the computing discipline is the primary or the secondary focus of the research makes a large difference in the connections it makes with other academic disciplines. The Departments from which computing research originates vary widely, but the majority are computing-related. Finally, gaps are apparent between practitioners' views of the computing disciplines and the public's view.
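A minimal sketch of the co-word analysis described above: keywords that appear together on the same thesis record become weighted edges in a graph, and the heaviest edges suggest the strongest connections between topics. The sample records are invented, not drawn from the metadata studied.

```python
# Co-word analysis: co-occurring keywords form a weighted co-occurrence graph.
from itertools import combinations
from collections import Counter

records = [  # each record: the author-supplied keywords of one thesis
    {"data mining", "machine learning", "computer science"},
    {"networks", "computer engineering", "machine learning"},
    {"data mining", "information systems", "machine learning"},
]

edges = Counter()
for keywords in records:
    for a, b in combinations(sorted(keywords), 2):
        edges[(a, b)] += 1

for (a, b), w in edges.most_common(3):
    print(f"{a} -- {b}: {w}")
```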
6

A framework for high speed lexical classification of malicious URLs

Egan, Shaun Peter January 2014 (has links)
Phishing attacks employ social engineering to target end-users, with the goal of stealing identifying or sensitive information. This information is used in activities such as identity theft or financial fraud. During a phishing campaign, attackers distribute URLs which, along with false information, point to fraudulent resources in an attempt to deceive users into requesting the resource. These URLs are made obscure through the use of several techniques which make automated detection difficult. Current methods used to detect malicious URLs face multiple problems which attackers use to their advantage. These problems include the time required to react to new attacks, shifts in trends in URL obfuscation, and usability problems caused by the latency of the lookups these approaches require. A new method of identifying malicious URLs using Artificial Neural Networks (ANNs) has been shown to be effective by several authors. The simple method of classification performed by ANNs results in very high classification speeds with little impact on usability. Samples used for the training, validation, and testing of these ANNs are gathered from PhishTank and the Open Directory. Words selected from the different sections of the samples are used to create a 'Bag of Words' (BOW), which is used as a binary input vector indicating the presence of a word in a given sample. Twenty additional features which measure lexical attributes of the sample are used to increase classification accuracy. A framework capable of generating these classifiers in an automated fashion is implemented. The classifiers are automatically stored on a remote update distribution service built to supply updates to classifier implementations. An example browser plugin is created which uses the ANNs provided by this service; it is capable both of classifying URLs requested by a user in real time and of blocking these requests. The framework is tested in terms of training time and classification accuracy, and classification speed and the effectiveness of compression algorithms on the data required to distribute updates are also tested. It is concluded that it is possible to generate these ANNs frequently, and in a form small enough to distribute easily. It is also shown that classifications are made at high speed with high accuracy, resulting in little impact on usability.
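A minimal sketch of the classification pipeline described above, using scikit-learn's MLPClassifier: a binary bag-of-words vector over URL tokens plus a few lexical features feed a small neural network. The toy URLs, labels, and the three lexical features standing in for the twenty are all invented for illustration.

```python
# BOW binary vector + lexical features -> small ANN URL classifier.
import re
import numpy as np
from sklearn.neural_network import MLPClassifier

urls = ["http://paypal.example-login.xyz/verify/account",
        "https://www.rhodes.ac.za/computerscience",
        "http://secure-update.bank.example.ru/login",
        "https://docs.python.org/3/library/re.html"]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = benign

tokens = sorted({t for u in urls for t in re.split(r"\W+", u) if t})

def features(url):
    parts = re.split(r"\W+", url)
    bow = [1 if t in parts else 0 for t in tokens]          # binary BOW vector
    lexical = [len(url), url.count("."), url.count("-")]    # 3 lexical features
    return bow + lexical

X = np.array([features(u) for u in urls])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print(clf.predict([features("http://account-verify.example.xyz/login")]))
```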
7

Pharmacodynamics Miner: an automated extraction of pharmacodynamic drug interactions

Lokhande, Hrishikesh 11 December 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Pharmacodynamics (PD) studies the relationship between drug concentration and drug effect at target sites. This field has recently gained attention, as studies involving PD drug-drug interactions (DDIs) promise the discovery of multi-targeted drug agents and novel efficacious drug combinations. A PD drug combination can be synergistic, additive, or antagonistic depending upon the summed effect of the drug combination at a target site. The PD literature has grown immensely, and most of its knowledge is dispersed across different scientific journals, so the manual identification of PD DDIs is a challenge. To support an automated means of extracting PD DDIs, we propose Pharmacodynamics Miner (PD-Miner). PD-Miner is a text-mining tool capable of identifying PD DDIs from in vitro PD experiments. It is powered by two major features: a collection of full-text articles and an in vitro PD ontology. The in vitro PD ontology currently has four classes and more than one hundred subclasses; based on these classes and subclasses, the full-text corpus is annotated. The annotated full-text corpus forms a database of articles which can be queried by drug keywords and ontology subclasses. Since the ontology covers term and concept meanings, the system is capable of formulating semantic queries. PD-Miner extracts in vitro PD DDIs based upon references to cell lines and cell phenotypes. The results are presented as fragments of sentences in which important concepts are visually highlighted. To determine the accuracy of the system, we used a gold standard of 5 expert-curated articles. PD-Miner identified DDIs with a recall of 75% and a precision of 46.55%. Along with the development of PD-Miner, we also report the development of a semantically annotated in vitro PD corpus. This corpus includes term- and sentence-level annotations and serves as a gold standard for future text mining.
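A minimal sketch of the ontology-driven annotation step, assuming the ontology reduces to a mapping from classes to term sets: each sentence is tagged with the ontology classes and drug keywords it mentions, so the corpus can be queried by both. The classes, terms, and drug names are invented placeholders, not the thesis's actual ontology.

```python
# Tag sentences with ontology classes and drug keywords they mention.
ontology = {  # class -> terms (stand-in for the in vitro PD ontology)
    "drug_effect": {"synergistic", "additive", "antagonistic"},
    "assay": {"cell line", "cell phenotype"},
}
drugs = {"gemcitabine", "cisplatin"}

def annotate(sentence):
    text = sentence.lower()
    classes = {cls for cls, terms in ontology.items()
               if any(t in text for t in terms)}
    mentioned = {d for d in drugs if d in text}
    return classes, mentioned

sentence = ("Gemcitabine and cisplatin showed a synergistic effect "
            "in the A549 cell line.")
classes, found_drugs = annotate(sentence)
print(classes, found_drugs)  # drug_effect + assay; both drugs found
```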
8

Mining Biomedical Literature to Extract Pharmacokinetic Drug-Drug Interactions

Karnik, Shreyas 03 February 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Polypharmacy is a common clinical practice, and there is a high chance that multiple administered drugs will interfere with each other; this phenomenon is called a drug-drug interaction (DDI). A DDI occurs when drugs administered together change each other's pharmacokinetic (PK) or pharmacodynamic (PD) response. DDIs can reduce the overall effectiveness of a drug or at times pose a risk of serious side effects to patients, which makes them very challenging for successful drug development and clinical patient care. Biomedical literature is a rich source of in-vitro and in-vivo DDI reports, and there is a growing need for automated methods to extract DDI-related information from unstructured text. In this work we present an ontology (the PK ontology), which defines guidelines for annotating PK DDI studies. Using the ontology, we have put together a corpus of PK DDI studies, which serves as an excellent resource for training machine-learning-based DDI extraction algorithms. Finally, we demonstrate the use of the PK ontology and corpus for extracting PK DDIs from biomedical literature using machine learning algorithms.
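A minimal sketch of the corpus-to-classifier step using a standard scikit-learn text pipeline: sentences annotated as reporting a PK DDI (or not) train a text classifier. The example sentences and labels are invented, and the thesis's actual algorithms may differ.

```python
# Train a sentence-level PK-DDI classifier from an annotated corpus.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

sentences = [
    "Ketoconazole increased midazolam AUC five-fold.",
    "Coadministration reduced the clearance of drug X by 40%.",
    "Patients were recruited at three study sites.",
    "The tablets were stored at room temperature.",
]
labels = [1, 1, 0, 0]  # 1 = reports a PK DDI, 0 = does not

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["Rifampin decreased the AUC of the probe drug."]))
```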
9

Characterizing software components using evolutionary testing and path-guided analysis

McNeany, Scott Edward 16 December 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Evolutionary testing (ET) techniques (e.g., mutation, crossover, and natural selection) have been applied successfully to many areas of software engineering, such as error/fault identification, data mining, and software cost estimation. Previous research has also applied ET techniques to performance testing, but its application there only goes as far as finding the best- and worst-case execution times. Although such performance testing is beneficial, it provides little insight into the performance characteristics of complex functions with multiple branches. This thesis therefore provides two contributions towards performance testing of software systems. First, it demonstrates how ET and genetic algorithms (GAs), which are search heuristics for solving optimization problems using mutation, crossover, and natural selection, can be combined with a constraint solver to target specific paths in the software. Second, it demonstrates how such an approach can identify local minimum and maximum execution times, which provide a more detailed characterization of software performance. The results from applying our approach to example software applications show that it is able to characterize different execution paths in relatively short amounts of time. This thesis also examines a modified exhaustive approach which can be plugged in when the constraint solver cannot provide the information needed to target specific paths.
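A minimal sketch of the ET loop described above, assuming fitness is the measured execution time of a branchy target function: inputs evolve through selection, crossover, and mutation toward slow paths. The target function, operators, and parameters are invented for illustration, and no constraint solver is involved here.

```python
# Evolve inputs whose measured execution time is maximal (worst-case search).
import random, time

def target(x):                            # branchy function under test
    if x % 3 == 0:
        time.sleep(0.001 * (x % 7))       # some paths are slower than others
    return x * x

def fitness(x):
    start = time.perf_counter()
    target(x)
    return time.perf_counter() - start

random.seed(0)
pop = [random.randrange(1000) for _ in range(20)]
for gen in range(10):
    pop.sort(key=fitness, reverse=True)        # natural selection
    parents = pop[:10]
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        child = (a + b) // 2                   # crossover
        if random.random() < 0.3:
            child += random.randrange(-5, 6)   # mutation
        children.append(abs(child))
    pop = parents + children

print("slowest input found:", pop[0], "fitness:", fitness(pop[0]))
```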
10

Context specific text mining for annotating protein interactions with experimental evidence

Pandit, Yogesh 03 January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Proteins are the building blocks of a biological system. They interact with other proteins to produce unique biological phenomena, and protein-protein interactions play a valuable role in understanding the molecular mechanisms occurring in any biological system. Protein interaction databases are a rich source of protein-interaction-related information; they gather large amounts of information from published literature to enrich their data, and expert curators put in most of this effort manually. The amount of accessible and publicly available literature is growing very rapidly, and manual annotation is a time-consuming process; at the current rate of growth, the literature cannot be handled by manual curation alone. Tools are needed to process these huge amounts of data and extract the valuable gist that can help curators proceed faster. When extracting protein-protein interaction evidence from literature, a mere mention of a certain protein found by look-up approaches cannot validate the interaction; supporting protein interaction information with experimental evidence can help this cause. In this study, we apply machine-learning-based classification techniques to classify a given protein-interaction-related document into an interaction detection method. We use biological attributes and experimental factors, different combinations of which define any particular interaction detection method. Then, using the predicted detection methods, proteins identified using named entity recognition techniques, and the parts-of-speech composition of each sentence, we search for sentences with experimental evidence for a protein-protein interaction. We report an accuracy of 75.1% and an F-score of 47.6% on a dataset containing 2035 training documents and 300 test documents.
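A minimal sketch of the evidence-search step: a sentence is flagged as experimental evidence when it mentions at least two recognized proteins together with cue terms tied to a predicted interaction detection method. It assumes NER output reduces to a fixed set of protein names and each method to a set of cue terms; all names and terms are invented placeholders.

```python
# Flag sentences carrying experimental evidence of a protein interaction.
method_terms = {
    "two-hybrid": {"yeast two-hybrid", "y2h"},
    "coimmunoprecipitation": {"co-immunoprecipitation", "co-ip", "pulled down"},
}
known_proteins = {"BRCA1", "BARD1", "TP53"}  # stand-in for NER output

def evidence(sentence, predicted_method):
    text = sentence.lower()
    proteins = {p for p in known_proteins if p.lower() in text}
    has_method = any(t in text for t in method_terms[predicted_method])
    return proteins if (len(proteins) >= 2 and has_method) else None

s = "BRCA1 was pulled down together with BARD1 in the lysate."
print(evidence(s, "coimmunoprecipitation"))  # {'BRCA1', 'BARD1'}
```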
