101

The Palliser survey: 1857-1860

Denholm, James J. January 1950
The pages of history are dotted with the names of men who have made only a small contribution to the sum of human knowledge. Often only a name, linked with a brief mention of some achievement, is all that remains to remind us that a man existed. This thesis is an attempt to save one such man from near-obscurity. Much about Captain John Palliser has already been forgotten - his early life, his background, his character are at least veiled if not completely obscured. All that remains is the record of his achievement: the report of the surveying expedition which, between 1857 and 1860, he led across the plains and mountains of what is now western Canada. Many historians and agriculturalists have consulted this report, but in my opinion only a few demonstrate more than a superficial knowledge of the document, and most have misinterpreted the conclusions set down there. This thesis is an attempt to reassess the Palliser survey. The report prepared by Captain John Palliser is well written, very detailed, and comprehensive; in short, a perfect hunting ground for the research student. On the surface the study of this report is an integral unit falling within easily definable limits, but in reality a complete reappraisal of its contents would require the combined skills of scholars in many fields, from anthropology through to astronomy. The problems of the scientist have largely been set aside in this study; a criticism of the geological, botanical, meteorological, and other similar observations has been left to the specialists in those particular fields. Except where it has been necessary to draw upon the knowledge of the agronomist or economist, this thesis is an attempt to study the Palliser survey from the point of view of the historian. It has already been noted that the Palliser surveying expedition was in the field from 1857 to 1860. Between 1860 and the opening decades of the twentieth century, many other surveying parties traversed the plains and mountains of western Canada. This thesis is not an attempt to compare the Palliser survey with surveys conducted in the late nineteenth and early twentieth centuries; it is an attempt to evaluate Palliser's observations in the light of present-day knowledge. Finally, I would like to thank the members of the Faculty without whose assistance this thesis would not have been completed. The advice of Dr. M.Y. Williams and Dr. J.L. Robinson of the Department of Geology and Geography was invaluable in the preparation of the final chapter. Nevertheless, the opinions expressed in this thesis are my own. / Arts, Faculty of / History, Department of / Graduate
102

Bezpečné objevování sousedů / Secure Neighbor Discovery Protocol

Bezdíček, Lukáš January 2014
This report deals with the design and implementation of a complete SEND protocol for the GNU/Linux operating system. The first part of the document describes the ND and SEND protocols. The second part defines the security threats connected with unsecured ND. The third part describes the design and implementation of a SEND implementation named sendd. The conclusion of the document summarizes the accomplished results and outlines the future development of this project.
103

Säker grannupptäck i IPv6 / Secure Neighbor Discovery in IPv6

Huss, Philip January 2011
The IPv6 protocol offers some new functions; one of them is auto-configuration. With auto-configuration it is possible for nodes, i.e. hosts and routers, to be automatically assigned IPv6 addresses without manual configuration. Auto-configuration relies on Neighbor Discovery protocol (ND) messages (ND is mandatory in the IPv6 stack). The main purpose of ND is to let nodes discover other nodes on the local link, perform address resolution, check that addresses are unique, and check the reachability of active nodes. IPv6 has exactly the same vulnerabilities as IPv4, and ND is no exception if it is not properly secured. IPsec is a standard security mechanism for IPv6, but it does not solve the problem of securing auto-configuration due to the bootstrapping problem. Therefore the Internet Engineering Task Force (IETF) introduced Secure Neighbor Discovery (SEND). SEND is a mechanism for authentication, message protection, and router authentication. One important element of SEND is the use of Cryptographically Generated Addresses (CGA), an important mechanism to prove that the sender of an ND message is the actual owner of the address it claims. NDprotector is an open-source implementation of SEND that served as the basis for the analysis presented in this report. This implementation was evaluated in a small lab environment against some attacks in order to establish whether it can defend itself against them. / With the IPv6 protocol came a couple of new functions, one of which is auto-configuration. Auto-configuration makes it possible for nodes, i.e. hosts and routers, to be automatically assigned IPv6 addresses without manual configuration. For auto-configuration to work, Neighbor Discovery (ND) messages are used; ND is a mandatory protocol in the IPv6 stack. The main task of ND is to let nodes discover other nodes on the local link, perform address resolution, check that addresses are unique, and check the reachability of active nodes. Just like IPv4, IPv6 has quite a few vulnerabilities, and ND is no exception, since it is not secured. IPsec, which is the standard security mechanism for IPv6, does not solve the problem because of the bootstrapping problem. That is why the Internet Engineering Task Force (IETF) introduced Secure Neighbor Discovery (SEND). SEND is a mechanism for authentication, message protection, and router authentication. An important part of SEND is Cryptographically Generated Addresses (CGA), a technique used to ensure that the sender of an ND message is the real owner of the claimed address. NDprotector is an open-source implementation of SEND that I have chosen as the basis for this report. I set up NDprotector in a small lab environment where I carry out various attacks and check whether it can defend itself against them.
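To make the CGA idea concrete, the sketch below derives an IPv6 interface identifier from a hash over the owner's public key and the subnet prefix, roughly in the spirit of RFC 3972. It is a deliberately simplified illustration, not the NDprotector code or the full RFC algorithm: the sec parameter, the Hash2 brute-force step, the DER-encoded key format, and the signature over ND messages are all omitted, and the key material shown is a random placeholder.

```python
# Simplified illustration of the CGA idea: the IPv6 interface identifier is
# derived from a hash of the owner's public key, so a receiver can check that
# the (signed) ND message really comes from the owner of the claimed address.
# Didactic sketch only -- the sec parameter, Hash2 search, DER key encoding and
# the RSA signature of RFC 3972/3971 are omitted; the "public key" is random bytes.
import hashlib
import os

def cga_interface_id(public_key: bytes, subnet_prefix: bytes, modifier: bytes,
                     collision_count: int = 0) -> bytes:
    """Derive a 64-bit interface identifier from CGA-style parameters."""
    params = modifier + subnet_prefix + bytes([collision_count]) + public_key
    digest = hashlib.sha1(params).digest()
    iid = bytearray(digest[:8])      # leftmost 64 bits of the hash
    iid[0] &= 0b11111100             # clear the "u" and "g" bits
    return bytes(iid)

# Owner side: build the address from its own parameters.
prefix = bytes.fromhex("20010db8000000aa")   # 64-bit subnet prefix
pubkey = os.urandom(64)                      # stand-in for a real RSA public key
modifier = os.urandom(16)
address = prefix + cga_interface_id(pubkey, prefix, modifier)

# Verifier side: recompute the identifier from the advertised parameters and compare.
assert cga_interface_id(pubkey, prefix, modifier) == address[8:]
```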
104

FURTHER CONTRIBUTIONS TO MULTIPLE TESTING METHODOLOGIES FOR CONTROLLING THE FALSE DISCOVERY RATE UNDER DEPENDENCE

Zhang, Shiyu, 0000-0001-8921-2453 12 1900
This thesis presents innovative approaches for controlling the False Discovery Rate (FDR) in both high-dimensional statistical inference and finite-sample settings, addressing challenges arising from various dependency structures in the data. The first project introduces novel multiple testing methods for matrix-valued data, motivated by an electroencephalography (EEG) experiment, where we model the inherent complex row-column cross-dependency using a matrix normal distribution. We propose two methods designed for structured matrix-valued data that approximate the true false discovery proportion (FDP), capturing the underlying cross-dependency with statistical accuracy. In the second project, we focus on simultaneous testing of multivariate normal means under diverse covariance matrix structures. By adjusting p-values using a BH-type step-up procedure tailored to the known correlation matrix, we achieve robust finite-sample FDR control. Both projects demonstrate superior performance through extensive numerical studies and real-data applications, significantly advancing the field of multiple testing under dependency. The third project presents exploratory simulation results for methods built on a paired-p-values framework for controlling the FDR in the multivariate normal means testing setting. / Statistics
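For reference, the classical Benjamini-Hochberg step-up procedure that such BH-type adjustments build on can be sketched in a few lines. The snippet below is plain BH on a vector of p-values; the thesis' dependence-tailored adjustment to a known correlation matrix is not reproduced here, and the p-values in the example are invented.

```python
# Minimal sketch of the classical Benjamini-Hochberg (BH) step-up procedure.
# The thesis adjusts p-values to the known correlation structure before the
# step-up; that adjustment is not reproduced here -- this is plain BH.
import numpy as np

def bh_rejections(pvalues: np.ndarray, q: float = 0.05) -> np.ndarray:
    """Return a boolean mask of hypotheses rejected at FDR level q."""
    m = len(pvalues)
    order = np.argsort(pvalues)                  # ascending p-values
    sorted_p = pvalues[order]
    thresholds = q * (np.arange(1, m + 1) / m)   # BH critical values i*q/m
    below = np.nonzero(sorted_p <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if below.size:                               # step-up: largest i with p_(i) <= i*q/m
        reject[order[: below.max() + 1]] = True
    return reject

# Tiny usage example with made-up p-values.
p = np.array([0.001, 0.008, 0.039, 0.041, 0.20, 0.74])
print(bh_rejections(p, q=0.05))   # rejects the two smallest p-values here
```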
105

Service Discovery in Pervasive Computing Environments

Thompson, Michael Stewart 17 October 2006
Service discovery is a driving force in realizing pervasive computing. It provides a way for users and services to locate and interact with other services in a pervasive computing environment. Unfortunately, current service discovery solutions do not capture the effects of the human or physical world and do not deal well with diverse device populations, both of which are characteristic of pervasive computing environments. This research concentrates on the examination and fulfillment of the goals of two of the four components of service discovery: service description and dissemination. It begins with a review of and commentary on current service discovery solutions. Following this review is the formulation of the problem statement, including a full explanation of the problems mentioned above. The problem formulation is followed by an explanation of the process followed to design and build solutions to these problems. These solutions include the Pervasive Service Description Language (PSDL), the Pervasive Service Query Language (PSQL), and the Multi-Assurance Delivery Protocol (MADEP). Prototype implementations of the components are used to validate feasibility and evaluate performance. Experimental results are presented and analyzed. This work concludes with a discussion of overall conclusions, directions for future work, and a list of contributions. / Ph. D.
106

Dependency discovery for data integration

Bauckmann, Jana January 2013
Data integration aims to combine data from different sources and to provide users with a unified view of these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery that provide the necessary information for data integration. We focus on inclusion dependencies (INDs) in general and a special form named conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema. (ii) INDs and CINDs support the discovery of cross-references or links between schemas. An IND “A in B” simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task is the complexity of testing all attribute pairs and, further, of comparing all of each attribute pair's values. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Thus, our algorithm makes it possible to profile entirely unknown data sources with large schemas by discovering all INDs. Further, we provide an approach to extract foreign keys from the identified INDs. We extend our IND discovery algorithm to also find three special types of INDs: (i) composite INDs, such as “AB in CD”, (ii) approximate INDs, which allow a certain fraction of the values of A not to be included in B, and (iii) prefix and suffix INDs, which represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies with a limited scope defined by conditions over several attributes. Only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs by distinguishing covering and completeness conditions, and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. The challenge for this task is twofold: (i) Which (and how many) attributes should be used for the conditions? (ii) Which attribute values should be chosen for the conditions? Previous approaches rely on pre-selected condition attributes or can only discover conditions with quality thresholds of 100%. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains. / Data integration aims to combine data from different sources and to provide users with a unified view of these data. This task is as demanding as it is valuable. This dissertation presents algorithms for detecting data dependencies that supply the information needed for data integration. The focus of this work is on inclusion dependencies (INDs) in general and on the special form of conditional inclusion dependencies (CINDs): (i) INDs make it possible to find structures in a given schema. (ii) INDs and CINDs support finding references between data sources. An IND “A in B” states that all values of attribute A are contained in the set of values of attribute B. This work provides an algorithm that detects all INDs in a relational data source. The challenge of this task lies in the complexity of testing all attribute pairs and comparing all values of those attribute pairs. The complexity of existing approaches depends on the number of attribute pairs, whereas the approach presented here depends only on the number of attributes. The presented algorithm thus makes it possible to examine unknown data sources with large schemas. In addition, the algorithm is extended to find three special forms of INDs, and an approach is presented that filters foreign keys from the detected INDs. Conditional inclusion dependencies (CINDs) are inclusion dependencies whose scope is restricted by conditions over certain attributes. Only the matching part of the instance must satisfy the inclusion dependency. The definition of CINDs is generalized in this work by distinguishing covering and completeness conditions; furthermore, quality measures for conditions are defined. Efficient algorithms are presented that find covering and completeness conditions with given quality measures, selecting the attributes, attribute combinations, and attribute values automatically. Existing approaches rely on a pre-selection of attributes for the conditions or detect only conditions with thresholds of 100% for the quality measures. The approaches in this work were motivated by two application domains: data integration in the life sciences and link discovery for Linked Open Data. The efficiency and benefits of the presented approaches are demonstrated with use cases from these domains.
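To make the IND definition concrete, the toy sketch below checks the containment condition for every pair of attributes in a couple of in-memory tables. It is only the naive quadratic baseline implied by the definition, not the thesis' algorithm (whose cost depends on the number of attributes rather than attribute pairs); the table and column names are invented.

```python
# Naive illustration of unary inclusion dependency (IND) discovery: "A in B"
# holds when every value of attribute A also occurs in attribute B.  This
# brute-force check over all attribute pairs is the quadratic baseline that the
# thesis' algorithm avoids; it only serves to make the definition concrete.
from itertools import permutations

def discover_unary_inds(tables: dict[str, dict[str, list]]) -> list[tuple[str, str]]:
    # Flatten to (qualified attribute name -> set of its non-null values).
    columns = {
        f"{table}.{attr}": {v for v in values if v is not None}
        for table, attrs in tables.items()
        for attr, values in attrs.items()
    }
    return [(a, b) for a, b in permutations(columns, 2) if columns[a] <= columns[b]]

# Toy example: orders.customer_id is included in customers.id (a foreign-key candidate).
data = {
    "customers": {"id": [1, 2, 3, 4]},
    "orders": {"customer_id": [2, 3, 3, 1]},
}
print(discover_unary_inds(data))   # [('orders.customer_id', 'customers.id')]
```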
107

Data driven approaches to improve the drug discovery process : a virtual screening quest in drug discovery

Ebejer, Jean-Paul January 2014
Drug discovery has witnessed an increase in the application of in silico methods to complement existing in vitro and in vivo experiments, in an attempt to 'fail fast' and reduce the high attrition rates of clinical phases. Computer algorithms have been successfully employed for many tasks including biological target selection, hit identification, lead optimization, binding affinity determination, ADME and toxicity prediction, side-effect prediction, drug repurposing, and, in general, to direct experimental work. This thesis describes a multifaceted approach to virtual screening, to computationally identify small-molecule inhibitors against a biological target of interest. Conformer generation is a critical step in all virtual screening methods that make use of atomic 3D data. We therefore analysed the ability of computational tools to reproduce high-quality, experimentally resolved conformations of small organic molecules. We selected the best performing method (RDKit), and developed a protocol that generates a non-redundant conformer ensemble which tends to contain low-energy structures close to those experimentally observed. We then outline the steps we took to build a multi-million, small-molecule database (including molecule standardization and efficient exact, substructure and similarity searching capabilities), for use in our virtual screening experiments. We generated conformers and descriptors for the molecules in the database. We tagged a subset of the database as 'drug-like' and clustered it to provide a reduced, diverse set of molecules for use in more computationally-intensive virtual screening protocols. We next describe a novel virtual screening method we developed, called Ligity, that makes use of known protein-ligand holo structures as queries to search the small-molecule database for putative actives. Ligity has been validated against targets from the DUD-E dataset, and has shown, on average, better performance than other 3D methods. We also show that performance improved when we fused the results from multiple input structures. This bodes well for Ligity's future use, especially when considering that protein structure databases such as the Protein Data Bank are growing exponentially every year. Lastly, we describe the fruitful application of structure-based and ligand-based virtual screening methods to Plasmodium falciparum Subtilisin-like Protease 1 (PfSUB1), an important drug target in the human stages of the life-cycle of the malaria parasite. Our ligand-based virtual screening study resulted in the discovery of novel PfSUB1 inhibitors. Further lead optimization of these compounds, to improve binding affinity into the nanomolar range, may promote them as drug candidates. In this thesis we postulate that the accuracy of computational tools in drug discovery may be enhanced to take advantage of the exponential increase of experimental data and the availability of cheaper computational power such as cloud computing.
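As an illustration of the conformer-generation step, the sketch below builds a pruned conformer ensemble with RDKit, the toolkit the thesis selected, and minimises it with the MMFF force field. The specific protocol parameters reported in the thesis (ensemble sizes, RMSD pruning thresholds, energy windows) are not reproduced; the molecule, settings, and the ETKDGv3 embedder from current RDKit releases are stand-ins.

```python
# Sketch of conformer-ensemble generation with RDKit.  The molecule, ensemble
# size and pruning threshold are placeholders, not the protocol from the thesis.
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin as a stand-in

params = AllChem.ETKDGv3()          # knowledge-based distance-geometry embedding
params.pruneRmsThresh = 0.5         # drop near-duplicate conformers -> non-redundant ensemble
params.randomSeed = 42
conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=50, params=params)

# Force-field minimisation, then report the lowest-energy conformer.
results = AllChem.MMFFOptimizeMoleculeConfs(mol, maxIters=500)  # (convergence flag, energy) pairs
best_energy, best_id = min((energy, cid) for (_, energy), cid in zip(results, conf_ids))
print(f"{len(conf_ids)} conformers kept; lowest MMFF energy {best_energy:.2f} (conformer {best_id})")
```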
108

Automated discovery of inductive lemmas

Johansson, Moa January 2009
The discovery of unknown lemmas, case-splits and other so-called eureka steps is a challenging problem for automated theorem proving and has generally been assumed to require user intervention. This thesis is mainly concerned with the automated discovery of inductive lemmas. We have explored two approaches based on failure recovery and theory formation, with the aim of improving automation of first- and higher-order inductive proofs in the IsaPlanner system. We have implemented a lemma speculation critic which attempts to find a missing lemma using information from a failed proof-attempt. However, we found few proofs for which this critic was applicable and successful. We have also developed a program for inductive theory formation, which we call IsaCoSy. IsaCoSy was evaluated on different inductive theories about natural numbers, lists and binary trees, and was found to successfully produce many relevant theorems and lemmas. Using a background theory produced by IsaCoSy, it was possible for IsaPlanner to automatically prove more new theorems than with lemma speculation. In addition to the lemma discovery techniques, we also implemented an automated technique for case-analysis. This allows IsaPlanner to deal with proofs involving conditionals, expressed as if- or case-statements.
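As a rough illustration of the theory-formation flavour (synthesise candidate conjectures, then filter out those refuted by testing), the toy sketch below enumerates small equations over list operations and keeps the ones no random test falsifies. It is not IsaCoSy, which works inside Isabelle, uses constraints to avoid synthesising redundant terms, and passes surviving conjectures to an inductive prover; the term set here is hand-picked and nothing is proved.

```python
# Toy generate-and-test sketch in the spirit of inductive theory formation:
# enumerate candidate equations over list operations and keep those that
# survive random testing.  Survivors are conjectures, not theorems.
import random

# Candidate term builders, each a function of a single list variable xs.
TERMS = {
    "xs": lambda xs: xs,
    "rev(xs)": lambda xs: xs[::-1],
    "rev(rev(xs))": lambda xs: xs[::-1][::-1],
    "xs ++ xs": lambda xs: xs + xs,
    "rev(xs) ++ rev(xs)": lambda xs: xs[::-1] + xs[::-1],
    "rev(xs ++ xs)": lambda xs: (xs + xs)[::-1],
}

def surviving_conjectures(trials: int = 200):
    """Yield candidate equations l = r that no random test refutes."""
    names = list(TERMS)
    tests = [[random.randint(0, 9) for _ in range(random.randint(0, 6))]
             for _ in range(trials)]
    for i, lhs in enumerate(names):
        for rhs in names[i + 1:]:
            if all(TERMS[lhs](xs) == TERMS[rhs](xs) for xs in tests):
                yield f"{lhs} = {rhs}"

for conjecture in surviving_conjectures():
    print(conjecture)   # e.g. "xs = rev(rev(xs))" and "rev(xs) ++ rev(xs) = rev(xs ++ xs)"
```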
109

Facilitating Web Service Discovery and Publishing: A Theoretical Framework, A Prototype System, and Evaluation

Hwang, Yousub January 2007
The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web services repositories not only be well-structured but also provide efficient tools for an environment supporting reusable software components for both service providers and consumers. As the potential of Web services for service-oriented computing is becoming widely recognized, the demand for an integrated framework that facilitates service discovery and publishing is concomitantly growing. In our research, we propose a framework that facilitates Web service discovery and publishing by combining clustering techniques and leveraging the semantics of the XML-based service specification in WSDL files. We believe that this is one of the first attempts at applying unsupervised artificial neural network-based machine-learning techniques in the Web service domain. Our proposed approach has several appealing features: (1) It minimizes the requirements of prior knowledge from both service providers and consumers, (2) It avoids exploiting domain-dependent ontologies, (3) It is able to visualize the information space of Web services by providing a category map that depicts the semantic relationships among them, (4) It is able to semi-automatically generate Web service taxonomies that reflect both capability and geographic context, and (5) It allows service consumers to combine multiple search strategies in a flexible manner. We have developed a Web service discovery tool based on the proposed approach using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web services repositories. We believe that both service providers and consumers in a service-oriented computing environment can benefit from our Web service discovery approach.
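To sketch the general pipeline of grouping services by the terms in their WSDL descriptions, the snippet below tokenises invented operation/message names, vectorises them with TF-IDF, and clusters them with k-means. Note the substitution: the thesis trains an unsupervised artificial neural network that yields a self-organising category map, whereas k-means is used here only to keep the illustration short; the service names and term lists are made up.

```python
# Sketch of the general pipeline: extract terms from WSDL service descriptions,
# vectorise them, and cluster similar services.  k-means stands in for the
# unsupervised neural network ("category map") used in the thesis, and the
# WSDL-derived term lists below are invented for illustration.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

services = {
    "WeatherLookup": "GetForecast GetTemperature CityName CountryCode",
    "ClimateService": "CurrentWeather Forecast ZipCode",
    "StockQuotes": "GetQuote TickerSymbol LastTradePrice",
    "EquityPricing": "SharePrice Ticker Exchange",
}

def tokenize(text: str) -> str:
    # Split camel-case identifiers from WSDL operation/message names into words.
    return " ".join(re.findall(r"[A-Z][a-z]+|[a-z]+|[A-Z]+(?![a-z])", text)).lower()

docs = [tokenize(t) for t in services.values()]
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for name, label in zip(services, labels):
    print(label, name)   # weather-related services typically land in one cluster, finance in the other
```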
110

Actionable Knowledge Discovery using Multi-Step Mining

DharaniK, Kalpana Gudikandula 01 December 2012
Data mining at the enterprise level operates on huge amounts of data, such as the records of government transactions, banks, insurance companies, and so on. Inevitably, these businesses produce complex data that may be distributed in nature. When such data is mined in a single step, it produces business intelligence reflecting only a particular aspect. However, this is not sufficient in an enterprise where different aspects and standpoints are to be considered before taking business decisions. It is required that enterprises perform mining based on multiple features, data sources and methods. This is known as combined mining. Combined mining can produce patterns that reflect all aspects of the enterprise. Thus the derived intelligence can be used to take business decisions that lead to profits. This kind of knowledge is known as actionable knowledge. / Data mining is a process of obtaining trends or patterns in historical data. Such trends form business intelligence that in turn leads to taking well-informed decisions. However, data mining with a single technique does not yield actionable knowledge. This is because enterprises have huge databases that are heterogeneous in nature. They also have complex data, and mining such data needs multi-step mining instead of single-step mining. When multiple approaches are involved, they provide business intelligence covering all aspects. That kind of information can lead to actionable knowledge. Recently data mining has seen tremendous usage in the real world. The drawback of existing approaches is that they yield insufficient business intelligence in the case of huge enterprises. This paper presents a combination of existing works and algorithms. We work on multiple data sources, multiple methods and multiple features. The combined patterns thus obtained from complex business data provide actionable knowledge. A prototype application has been built to test the efficiency of the proposed framework, which combines multiple data sources, multiple methods and multiple features in the mining process. The empirical results revealed that the proposed approach is effective and can be used in the real world.
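As a toy illustration of the multi-source side of this idea, the sketch below mines one simple kind of pattern (frequent item pairs) from two data sources independently and then keeps only the patterns supported in every source, so the combined result reflects all sources. The source names, transactions, support threshold, and the intersection rule are invented for illustration; combined mining in the thesis also spans multiple methods and multiple features, not just multiple sources.

```python
# Toy sketch of the combined-mining idea: mine simple frequent patterns from
# multiple data sources independently, then combine the per-source results so
# the final patterns reflect every source.  All names and data are invented.
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support=2):
    """One 'method': frequent item pairs found by simple counting."""
    counts = Counter(pair for t in transactions
                     for pair in combinations(sorted(set(t)), 2))
    return {pair for pair, c in counts.items() if c >= min_support}

# Two hypothetical data sources from different business units.
branch_sales = [["loan", "insurance"], ["loan", "insurance"], ["card", "loan"]]
online_sales = [["insurance", "loan"], ["loan", "insurance", "card"], ["card"]]

per_source = [frequent_pairs(branch_sales), frequent_pairs(online_sales)]
combined = set.intersection(*per_source)     # patterns supported in every source
print(combined)                              # {('insurance', 'loan')}
```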
