651

A finite difference approach to buckling of concrete plates

Wiley, Francis Alan January 2011 (has links)
Digitized by Kansas Correctional Industries
652

Loading the numerical control machine code from AD-APT onto a microcomputer controlled floppy disk

Li, Xiaowen January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
653

"Space and function analysis" : A computer system for the generation of functional layouts in the S.A.R. methodology.

Govela, Alfonso January 1977 (has links)
Thesis. 1977. M.Arch.A.S.--Massachusetts Institute of Technology. Dept. of Architecture. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ROTCH. / Blank leaf bound in after leaf 5; no leaf 148. / Includes bibliographical references. / M.Arch.A.S.
654

Design criteria for a knowledge-based English language system for management : an experimental analysis

Malhotra, Ashok January 1975 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Alfred P. Sloan School of Management, 1975. / "February 1975." Vita. / Bibliography: leaves 240-246. / by Ashok Malhotra. / Ph.D.
655

Secure Computation in Heterogeneous Environments: How to Bring Multiparty Computation Closer to Practice?

Raykova, Mariana Petrova January 2012 (has links)
Many services that people use daily require computation that depends on the private data of multiple parties. While the utility of the final result of such interactions outweighs the privacy concerns related to output release, the inputs for such computations are much more sensitive and need to be protected. Secure multiparty computation (MPC) considers the question of constructing computation protocols that reveal nothing more about their inputs than what is inherently leaked by the output. There have been strong theoretical results demonstrating that every functionality can be computed securely. However, these protocols remain unused in practical solutions since they introduce efficiency overhead prohibitive for most applications. Generic multiparty computation techniques address homogeneous setups with respect to the resources available to the participants and the adversarial model. Realistic scenarios, on the other hand, present a wide diversity of heterogeneous environments where different participants have different available resources and different incentives to misbehave and collude. In this thesis we introduce techniques for multiparty computation that focus on heterogeneous settings. We present solutions tailored to different types of asymmetric constraints and improve the efficiency of existing approaches in these scenarios. We tackle the question from three main directions.

New Computational Models for MPC - We explore computational models that enable us to overcome inherent inefficiencies of generic MPC solutions that use circuit representations of the evaluated functionality. First, we show how to use random access machines to construct MPC protocols that add only polylogarithmic overhead to the running time of the insecure version of the underlying functionality. This allows us to achieve MPC constructions with computational complexity sublinear in the size of their inputs, which is very important for computations over large databases. We also consider multivariate polynomials, which yield more succinct representations of the functionalities they implement than circuits, while a large collection of problems are naturally and efficiently expressed as multivariate polynomials. We construct an MPC protocol for multivariate polynomials that improves the communication complexity of corresponding circuit solutions and provides the currently most efficient solution for multiparty set intersection in the fully malicious case.

Outsourcing Computation - The goal in this setting is to utilize the resources of a single powerful service provider for the work that computationally weak clients need to perform on their data. We present a new paradigm for constructing verifiable computation (VC) schemes, which enables a computationally limited client to efficiently verify the result of a large computation. Our construction is based on attribute-based encryption and avoids expensive primitives such as fully homomorphic encryption and probabilistically checkable proofs, which underlie existing VC schemes. Additionally, our solution enjoys two new useful properties: public delegation and public verification. We further introduce the model of server-aided computation, where we utilize the computational power of an outsourcing party to assist the execution and improve the efficiency of MPC protocols. For this purpose we define a new adversarial model of non-collusion, which provides room for more efficient constructions that rely almost entirely on symmetric-key operations, and at the same time captures realistic settings of adversarial behavior. In this model we propose protocols for generic secure computation that offload the work of most of the parties to the computation server. We also construct a specialized server-aided two-party set intersection protocol that achieves better efficiency for the two participants than existing solutions. Outsourcing in many cases concerns only data storage, and while outsourcing the data of a single party is useful, providing a way for data sharing among different clients of the service is the more interesting and useful setup. However, this scenario brings new challenges for access control, since the access control rules and the data accesses become private data of the clients with respect to the service provider. We propose an approach that offers trade-offs between the privacy provided for the clients and the communication overhead incurred for each data access.

Efficient Private Search in Practice - We consider the question of private search from a different perspective than traditional MPC settings. We start with strict efficiency requirements motivated by the speeds of available hardware and what is considered acceptable overhead from a practical point of view. We then adopt relaxed definitions of privacy, which still provide meaningful security guarantees while allowing us to meet the efficiency requirements. In this setting we design a security architecture and implement a system for data sharing based on encrypted search that achieves only 30% overhead compared to non-secure solutions on realistic workloads.
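To make the flavor of these protocols concrete, here is a minimal Python sketch of additive secret sharing, a basic building block behind many generic MPC constructions. It illustrates the general technique only, not the specific protocols of this thesis: three parties learn the sum of their inputs, while any subset of shares short of all of them reveals nothing about an individual input.

```python
import secrets

PRIME = 2**61 - 1  # field modulus; all shares are elements of Z_PRIME

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(private_inputs: list[int]) -> int:
    """Each party splits its input into one share per party. Every party then
    adds up the shares it holds (a purely local step); only the sum of these
    partial sums is reconstructed, so no individual input is revealed as long
    as not all parties collude."""
    n = len(private_inputs)
    all_shares = [share(x, n) for x in private_inputs]
    partial_sums = [sum(s[j] for s in all_shares) % PRIME for j in range(n)]
    return sum(partial_sums) % PRIME

if __name__ == "__main__":
    salaries = [61_000, 83_000, 47_000]  # each known only to its owner
    assert secure_sum(salaries) == sum(salaries)
```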
656

Learning cell states from high-dimensional single-cell data

Levine, Jacob Harrison January 2016 (has links)
Recent developments in single-cell measurement technologies have yielded dramatic increases in throughput (measured cells per experiment) and dimensionality (measured features per cell). In particular, the introduction of mass cytometry has made possible the simultaneous quantification of dozens of protein species in millions of individual cells in a single experiment. The raw data produced by such high-dimensional single-cell measurements provide unprecedented potential to reveal the phenotypic heterogeneity of cellular systems. In order to realize this potential, novel computational techniques are required to extract knowledge from these complex data. Analysis of single-cell data is a new challenge for computational biology, as early development in the field was tailored to technologies that sacrifice single-cell resolution, such as DNA microarrays. The challenges for single-cell data are quite distinct and require multidimensional modeling of complex population structure. Particular challenges include nonlinear relationships between measured features and non-convex subpopulations. This thesis integrates methods from computational geometry and network analysis to develop a framework for identifying the population structure in high-dimensional single-cell data. At the center of this framework is PhenoGraph, an algorithmic approach to defining subpopulations which, when applied to healthy bone marrow data, was shown to reconstruct known immune cell types automatically and without prior information. PhenoGraph demonstrated superior accuracy, robustness, and efficiency compared to other methods. The data-driven approach becomes truly powerful when applied to less characterized systems, such as malignancies, in which the tissue diverges from its healthy population composition. Applying PhenoGraph to bone marrow samples from a cohort of acute myeloid leukemia (AML) patients, the thesis presents several insights into the pathophysiology of AML, extracted by virtue of the computational isolation of leukemic subpopulations. For example, it is shown that leukemic subpopulations diverge from healthy bone marrow but not without bound: leukemic cells are apparently free to explore only a restricted phenotypic space that mimics normal myeloid development. Further, the phenotypic composition of a sample is associated with its cytogenetics, demonstrating a genetic influence on the population structure of leukemic bone marrow. The thesis goes on to show that functional heterogeneity of leukemic samples can be computationally inferred from molecular perturbation data. Using a variety of methods that build on PhenoGraph's foundations, the thesis presents a characterization of leukemic subpopulations based on an inferred stem-like signaling pattern. Through this analysis, it is shown that surface phenotypes often fail to reflect the true underlying functional state of the subpopulation, and that this functional stem-like state is in fact a powerful predictor of survival in large, independent cohorts. Altogether, the thesis takes the existence and importance of cellular heterogeneity as its starting point and presents a mathematical framework and computational toolkit for analyzing samples from this perspective. It is shown that phenotypic and functional heterogeneity are robust characteristics of acute myeloid leukemia with clinically significant ramifications.
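The graph-based clustering idea behind PhenoGraph can be sketched in a few lines: build a k-nearest-neighbor graph over cells, reweight edges by the Jaccard overlap of neighborhoods, and partition the graph with community detection. The sketch below is a simplified illustration of that pipeline, assuming scikit-learn and networkx (>= 2.8, for Louvain support); the parameter choices are illustrative, not the thesis's.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors
from networkx.algorithms.community import louvain_communities

def knn_graph_clusters(X: np.ndarray, k: int = 30, seed: int = 0) -> np.ndarray:
    """Cluster rows of X (cells x markers): build a k-nearest-neighbor graph,
    weight edges by the Jaccard overlap of neighborhoods, then partition the
    graph with Louvain community detection."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)  # column 0 is the point itself
    neighbors = [set(int(j) for j in row[1:]) for row in idx]

    G = nx.Graph()
    G.add_nodes_from(range(len(X)))
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            # connect cells whose local neighborhoods overlap
            w = len(neighbors[i] & neighbors[j]) / len(neighbors[i] | neighbors[j])
            if w > 0:
                G.add_edge(i, j, weight=w)

    labels = np.empty(len(X), dtype=int)
    for c, members in enumerate(louvain_communities(G, weight="weight", seed=seed)):
        labels[list(members)] = c
    return labels
```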
657

Compiler-assisted Adaptive Software Testing

Petsios, Theofilos January 2018 (has links)
Modern software is becoming increasingly complex and is plagued with vulnerabilities that are constantly exploited by attackers. The vast numbers of bugs found in security-critical systems and the diversity of errors presented in commercial off-the-shelf software require effective, scalable testing frameworks. Unfortunately, the current testing ecosystem is heavily fragmented, with the majority of toolchains targeting limited classes of errors and applications without offering provably strong guarantees. With software codebases continuously becoming more diverse and complex, the large-scale deployment of monolithic, non-adaptive analysis engines is likely to increase the aforementioned fragmentation. Instead, modern software testing requires adaptive, hybrid techniques that target errors selectively. This dissertation argues that adopting context-aware analyses will enable us to set the foundations for retargetable testing frameworks while further increasing the accuracy and extensibility of existing toolchains. To this end, we initially examine how compiler analyses can become context-aware, prioritizing certain errors over others of the same type. As a use case of our proposed approach, we extend a state-of-the-art compiler's integer error detection pipeline to suppress reports of benign errors by up to 89% in real-world workloads, while still reporting serious errors. Subsequently, we demonstrate how compiler-based instrumentation can be utilized by feedback-driven evolutionary fuzzers to provide multifaceted analyses targeting broader classes of bugs. In this direction, we present differential diversity (δ-diversity), propose a generic methodology for offering state-aware guidance in feedback-driven frameworks, and demonstrate how to retrofit state-of-the-art fuzzers to target broader classes of errors. We provide two such prototype implementations: NEZHA, the first generic differential fuzzer capable of handling logic bugs, and SlowFuzz, the first generic fuzzer targeting complexity vulnerabilities. We applied both prototypes to production software and demonstrated their effectiveness: NEZHA discovered hundreds of logic discrepancies across a wide variety of applications (SSL/TLS libraries, parsers, etc.), while SlowFuzz successfully generated inputs triggering slowdowns in complex, real-world software, including zip parsers, regular expression libraries, and hash table implementations.
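The evolutionary, resource-guided loop behind a complexity fuzzer like SlowFuzz can be illustrated with a toy sketch: mutate an input, run the target while counting the work performed, and keep the mutant only if it is slower. The target, mutation operators, and fitness below are deliberately simplified stand-ins, not SlowFuzz's actual implementation (which instruments native code).

```python
import random

def target(data: bytes) -> int:
    """Toy target: a naive substring scan whose step count depends heavily
    on the input (runs of 'a' make the inner loop do more work)."""
    pattern, steps = b"aaaa", 0
    for i in range(len(data)):
        for j in range(len(pattern)):
            steps += 1
            if i + j >= len(data) or data[i + j] != pattern[j]:
                break
    return steps

def mutate(data: bytes) -> bytes:
    """Random byte-level mutation: overwrite, insert, or delete."""
    b = bytearray(data)
    op = random.randrange(3)
    if op == 0 and b:
        b[random.randrange(len(b))] = random.randrange(256)
    elif op == 1:
        b.insert(random.randrange(len(b) + 1), random.randrange(256))
    elif op == 2 and len(b) > 1:
        del b[random.randrange(len(b))]
    return bytes(b)

def complexity_fuzz(seed: bytes, budget: int = 20000, max_len: int = 64) -> bytes:
    """Evolutionary loop: keep a mutant only if it increases the resource
    usage of the target -- the fitness a complexity fuzzer maximizes."""
    best, best_cost = seed, target(seed)
    for _ in range(budget):
        cand = mutate(best)
        if len(cand) > max_len:  # bound input size, as complexity fuzzers do
            continue
        cost = target(cand)
        if cost > best_cost:
            best, best_cost = cand, cost
    return best  # drifts toward pathological, slowdown-triggering inputs
```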
658

Combining Programs to Enhance Security Software

Kang, Yuan Jochen January 2018 (has links)
Automatic threats require automatic solutions, which become automatic threats themselves. When software grows in functionality, it grows in complexity and in the number of bugs. To keep track of and counter all of the ways that a malicious party can exploit these bugs, we need security software. Such software helps human developers identify and remove bugs, or system administrators detect attempted attacks. But like any other software, and likely more so, security software itself can have blind spots or flaws. In the best case, it stops working and becomes ineffective. In the worst case, the security software has privileged access to the system it is supposed to protect, and an attacker can hijack those privileges for their own purposes. So we need external programs to compensate for these weaknesses. At the same time, we need to minimize the additional attack surface and development time incurred by creating new solutions. To address both points, this thesis explores how to combine multiple programs to overcome a number of weaknesses in individual security software: (1) when login authentication and the physical protections of a smartphone fail, fake, decoy applications detect unauthorized usage and draw the attacker away from truly sensitive applications; (2) when a fuzzer, an automatic software testing tool, requires a diverse set of initial test inputs, manipulating the tools that a human uses to generate these inputs multiplies the generated inputs; (3) when the software responsible for detecting attacks, known as an intrusion detection system, itself needs protection against attacks, a simplified state machine tracks the software's interaction with the underlying platform, without the complexity and risks of a fully functional intrusion detection system (a sketch of such a monitor follows below); (4) when intrusion detection systems run on multiple, independent machines, a graph-theoretic framework drives the design of how the machines cooperatively monitor each other, forcing the attacker not only to perform more work, but also to do so faster. Instead of introducing new, stand-alone security software, the above solutions require only a fixed number of new tools that rely on a diverse selection of programs that already exist. None of the programs, old or new, require privileges that the old programs did not already have. In other words, we multiply the power of security software without multiplying its risks.
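As a rough illustration of point (3), a simplified monitor can be written as an explicit transition table over the security software's expected interactions with the platform; any transition outside the table is flagged. The states and event names below are hypothetical, chosen only to show the shape of such a monitor, not taken from the thesis.

```python
# Hypothetical expected behavior of a monitored IDS process, written as an
# explicit transition table: (current state, observed event) -> next state.
ALLOWED = {
    ("init", "open_config"): "configured",
    ("configured", "open_socket"): "monitoring",
    ("monitoring", "read_packet"): "monitoring",
    ("monitoring", "write_alert"): "monitoring",
    ("monitoring", "close_socket"): "shutdown",
}

def check_trace(events: list[str]) -> tuple[bool, str]:
    """Replay observed events through the state machine; any transition
    missing from the table flags possible compromise of the monitored IDS."""
    state = "init"
    for ev in events:
        nxt = ALLOWED.get((state, ev))
        if nxt is None:
            return False, f"unexpected event {ev!r} in state {state!r}"
        state = nxt
    return True, state

ok, info = check_trace(["open_config", "open_socket", "read_packet", "close_socket"])
assert ok and info == "shutdown"
```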
659

Semi-supervised document clustering with active learning. / CUHK electronic theses & dissertations collection

January 2008 (has links)
Most existing semi-supervised document clustering approaches are model-based and can be treated as parametric models that assume the underlying clusters follow a certain pre-defined distribution. In our semi-supervised document clustering, each cluster is represented by a non-parametric probability distribution. Two approaches are designed for incorporating pairwise constraints into the document clustering approach. The first approach, the term-to-term relationship approach (TR), uses pairwise constraints to capture term-to-term dependence relationships. The second approach, the linear combination approach (LC), combines the clustering objective function with the user-provided constraints linearly. Extensive experimental results show that our proposed framework is effective. / This thesis presents a new framework for automatically partitioning text documents taking into consideration constraints given by users. Semi-supervised document clustering is developed based on pairwise constraints. Different from traditional semi-supervised document clustering approaches, which assume pairwise constraints to be prepared by the user beforehand, we develop a novel framework for automatically discovering pairwise constraints revealing the user's grouping preference. An active learning approach for choosing informative document pairs is designed by measuring the amount of information that can be obtained by revealing judgments of document pairs. For this purpose, three models, namely the uncertainty model, the generation error model, and the term-to-term relationship model, are designed for measuring the informativeness of document pairs from different perspectives. A dependent active learning approach is developed by extending the active learning approach to avoid redundant document pair selection. Two models are investigated for estimating the likelihood that a document pair is redundant to previously selected document pairs, namely the KL divergence model and the symmetric model. / Huang, Ruizhang. / Adviser: Wai Lam. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3600. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 117-123). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
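To show how pairwise constraints steer a clustering, the sketch below implements a COP-KMeans-style assignment step: each document goes to the nearest cluster whose current membership does not violate a must-link or cannot-link constraint. The thesis uses non-parametric cluster models rather than k-means, so this is only an illustration of the constraint mechanics, with NumPy arrays assumed for documents and cluster centers.

```python
import numpy as np

def violates(i: int, c: int, labels: np.ndarray, must_link, cannot_link) -> bool:
    """Would assigning document i to cluster c break a constraint against
    an already-assigned document?"""
    for a, b in must_link:
        j = b if a == i else a if b == i else None
        if j is not None and labels[j] != -1 and labels[j] != c:
            return True
    for a, b in cannot_link:
        j = b if a == i else a if b == i else None
        if j is not None and labels[j] == c:
            return True
    return False

def constrained_assign(X: np.ndarray, centers: np.ndarray,
                       must_link, cannot_link) -> np.ndarray:
    """One constrained assignment pass: each document goes to the nearest
    cluster center whose cluster does not violate any pairwise constraint."""
    labels = np.full(len(X), -1)
    for i in range(len(X)):
        for c in np.argsort(np.linalg.norm(centers - X[i], axis=1)):
            if not violates(i, int(c), labels, must_link, cannot_link):
                labels[i] = int(c)
                break
    return labels  # -1 marks documents with no constraint-satisfying cluster
```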
660

A robust anti-tampering scheme for software piracy protection. / 有效防止盜版軟件的防篡改解決方案 / You xiao fang zhi dao ban ruan jian de fang cuan gai jie jue fang an

January 2011 (has links)
Tsang, Hing Chung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 79-92). / Abstracts in English and Chinese. / Contents:
- Abstract (p.i)
- Acknowledgement (p.iv)
- Chapter 1: Introduction (p.1)
  - 1.1 Motivation (p.1)
  - 1.2 Software Cracking (p.2)
  - 1.3 Objectives (p.4)
  - 1.4 Contributions (p.5)
  - 1.5 Thesis Outline (p.6)
- Chapter 2: Related Work (p.8)
  - 2.1 Hardware-based Protection (p.8)
  - 2.2 Network-based Protection (p.9)
  - 2.3 Software-based Protection (p.11)
    - 2.3.1 Obfuscation (p.11)
    - 2.3.2 Code Encryption (p.13)
    - 2.3.3 Virtual Machine (p.15)
    - 2.3.4 Self-checksumming (p.16)
    - 2.3.5 Watermarking (p.20)
    - 2.3.6 Self-modifying Code (p.22)
    - 2.3.7 Software Aging (p.23)
- Chapter 3: Proposed Protection Scheme (p.24)
  - 3.1 Introduction (p.24)
  - 3.2 Protector (p.27)
    - 3.2.1 A Traditional Protector Structure (p.28)
    - 3.2.2 Protector Construction (p.31)
    - 3.2.3 Protector Implementation - Version 1 (p.32)
    - 3.2.4 Protector Implementation - Version 2 (p.35)
    - 3.2.5 Tamper Responses (p.37)
  - 3.3 Protection Tree (p.39)
  - 3.4 Non-deterministic Execution of Functions (p.43)
    - 3.4.1 Introduction to n-version Functions (p.44)
    - 3.4.2 Probability Distributions (p.45)
    - 3.4.3 Implementation Issues (p.47)
  - 3.5 Desired Properties (p.49)
- Chapter 4: Cracking Complexity and Security Analysis (p.52)
  - 4.1 Cracking Complexity (p.52)
  - 4.2 Security Analysis (p.55)
    - 4.2.1 Automation Attacks (p.55)
    - 4.2.2 Control Flow Graph Analysis (p.55)
    - 4.2.3 Cloning Attack (p.56)
    - 4.2.4 Dynamic Tracing (p.56)
- Chapter 5: Experiments (p.58)
  - 5.1 Execution Time Overhead (p.59)
  - 5.2 Tamper Responses (p.67)
- Chapter 6: Conclusion and Future Work (p.73)
  - 6.1 Conclusion (p.73)
  - 6.2 Comparison (p.75)
  - 6.3 Future Work (p.77)
- Bibliography (p.79)
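Self-checksumming (Chapter 2.3.4 above, and the general idea behind a protector node) has code verify its own integrity before running. A minimal Python sketch of the idea follows; real protectors checksum native code ranges and arrange themselves into a protection tree, so this is a toy illustration only, with a hypothetical `license_check` as the protected function.

```python
import hashlib

def code_digest(func) -> str:
    """Hash a function's compiled bytecode and constants."""
    code = func.__code__
    h = hashlib.sha256(code.co_code)
    h.update(repr(code.co_consts).encode())
    return h.hexdigest()

def protect(func, expected_digest: str):
    """Wrap `func` so every call first verifies its digest; on mismatch,
    trigger a tamper response instead of running the patched code."""
    def wrapper(*args, **kwargs):
        if code_digest(func) != expected_digest:
            raise RuntimeError("tamper response: code integrity check failed")
        return func(*args, **kwargs)
    return wrapper

def license_check(key: str) -> bool:  # hypothetical protected function
    return key == "VALID-KEY"

license_check = protect(license_check, code_digest(license_check))
assert license_check("VALID-KEY")  # fails loudly if the bytecode is patched
```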
