101

Knowledge-Discovery Incorporated Evolutionary Search for Microcalcification Detection in Breast Cancer Diagnosis.

Peng, Yonghong, Yao, Bin, Jiang, Jianmin January 2006 (has links)
Objectives: The presence of microcalcifications (MCs), clusters of tiny calcium deposits that appear as small bright spots in a mammogram, is considered a very important indicator for breast cancer diagnosis. Much research has gone into computer-aided systems for the accurate identification of MCs; however, automatic detection of MCs has proven difficult because of the complicated nature of the surrounding breast tissue and the variation of MCs in shape, orientation, brightness and size.
Methods and materials: This paper presents a new approach for the effective detection of MCs by incorporating a knowledge-discovery mechanism into a genetic algorithm (GA). In the proposed approach, called the knowledge-discovery incorporated genetic algorithm (KD-GA), the GA searches for bright spots in a mammogram while an integrated knowledge-discovery mechanism improves its performance. This mechanism evaluates the possibility of a bright spot being a true MC and adaptively adjusts the associated fitness values, indirectly guiding the GA to retain the true MCs and eliminate the false MCs (FMCs).
Results and conclusions: The experimental results demonstrate that incorporating the knowledge-discovery mechanism into the genetic algorithm eliminates FMCs and improves performance compared with conventional GA methods. Furthermore, the results show that the proposed KD-GA method provides a promising and generic approach for the development of computer-aided diagnosis for breast cancer.
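As a rough illustration of the KD-GA idea, the following is a minimal sketch assuming a pixel-coordinate encoding and a local-contrast plausibility heuristic; the paper's actual operators, encoding, and knowledge model are not specified here.

```python
# Sketch: a GA searches a mammogram for bright spots, and a knowledge step
# rescales fitness by how plausible a spot is as a true microcalcification.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))          # stand-in for a mammogram

def brightness_fitness(pop):
    """Raw fitness: pixel intensity at each candidate (row, col)."""
    return image[pop[:, 0], pop[:, 1]]

def plausibility(pop, win=3):
    """Hypothetical knowledge-discovery score: true MCs are small and locally
    contrastive, so score each spot by its contrast against a neighbourhood
    (an assumption, not the paper's exact rule)."""
    scores = np.empty(len(pop))
    for i, (r, c) in enumerate(pop):
        r0, r1 = max(r - win, 0), min(r + win + 1, image.shape[0])
        c0, c1 = max(c - win, 0), min(c + win + 1, image.shape[1])
        scores[i] = image[r, c] - image[r0:r1, c0:c1].mean()
    return np.clip(scores, 0.0, None)

pop = rng.integers(0, 256, size=(50, 2))      # 50 candidate spots
for _ in range(100):
    fit = brightness_fitness(pop) * (1.0 + plausibility(pop))  # knowledge-adjusted
    parents = pop[rng.choice(len(pop), size=len(pop), p=fit / fit.sum())]
    pop = np.clip(parents + rng.integers(-2, 3, size=pop.shape), 0, 255)  # mutate
```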
102

An Integrated Knowledge Discovery and Data Mining Process Model

Sharma, Sumana 30 September 2008 (has links)
Enterprise decision making is continuously transforming in the wake of ever-increasing amounts of data. Organizations are collecting massive amounts of data in their quest for knowledge nuggets: novel, interesting, understandable patterns that underlie these data. The search for knowledge is a multi-step process comprising various phases, including development of domain (business) understanding, data understanding, data preparation, modeling, evaluation and, ultimately, deployment of the discovered knowledge. These phases are represented in Knowledge Discovery and Data Mining (KDDM) process models, which are meant to provide explicit support for executing the complex and iterative knowledge discovery process. A review of existing KDDM process models reveals certain limitations (fragmented design, only a checklist-type description of tasks, lack of support for executing tasks, especially those of the business understanding phase, etc.) which are likely to affect the efficiency and effectiveness with which KDDM projects are currently carried out. This dissertation addresses these identified limitations through an improved model, the Integrated Knowledge Discovery and Data Mining (IKDDM) process model, which presents an integrated view of the KDDM process and provides explicit support for executing each of the tasks outlined in the model. We also evaluate the effectiveness and efficiency of the IKDDM model against CRISP-DM, a leading KDDM process model, in aiding data mining users to execute various tasks of the KDDM process. Results of statistical tests indicate that the IKDDM model outperforms CRISP-DM in terms of efficiency and effectiveness, as well as in the quality of the process model itself.
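The phases named in the abstract can be pictured as an ordered pipeline; the sketch below lists them with placeholder tasks (illustrative only, not the IKDDM model's actual task breakdown).

```python
# KDDM phases from the abstract, rendered as an ordered pipeline.
# The per-phase task lists are hypothetical placeholders.
KDDM_PHASES = [
    ("business understanding", ["define objectives", "assess situation"]),
    ("data understanding",     ["collect data", "explore data quality"]),
    ("data preparation",       ["clean", "transform", "select features"]),
    ("modeling",               ["choose technique", "fit model"]),
    ("evaluation",             ["validate against objectives"]),
    ("deployment",             ["deliver knowledge", "monitor"]),
]

for phase, tasks in KDDM_PHASES:       # in practice the process is iterative
    for task in tasks:
        print(f"{phase}: {task}")
```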
103

The exploration of the South Sea, 1519 to 1644 : a study of the influence of physical factors, with a reconstruction of the routes of the explorers

Wallis, Helen January 1954 (has links)
No description available.
104

The Palliser survey: 1857-1860

Denholm, James J. January 1950 (has links)
The pages of history are dotted with the names of men who have made only a small contribution to the sum of human knowledge. Often a name, linked with a brief mention of some achievement, is all that remains to remind us that a man did exist. This thesis is an attempt to save one such man from near-obscurity. Much of Captain John Palliser has already been forgotten - his early life, his background, his character are at least veiled if not completely obscured. All that remains is the record of his achievement: the report of the surveying expedition which, between 1857 and 1860, he led across the plains and mountains of what is now western Canada. Many historians and agriculturalists have consulted this report, but in my opinion only a few demonstrate more than a superficial knowledge of the document, and most have misinterpreted the conclusions there set down. This thesis is an attempt to reassess the Palliser survey. The report prepared by Captain John Palliser is well-written, very detailed, and comprehensive; in short, a perfect hunting ground for the research student. On the surface the study of this report is an integral unit falling within easily definable limits, but in reality a complete reappraisal of its contents would require the combined skills of scholars in many fields, from anthropology through to astronomy. The problems of the scientist have been largely set aside in this study; a criticism of the geological, botanical, meteorological, and other similar observations has been left to the specialists in those particular fields. Except where it has been necessary to draw upon the knowledge of the agronomist or economist, this thesis is an attempt to study the Palliser survey from the point of view of the historian. It has already been noted that the Palliser surveying expedition was in the field from 1857 to 1860. Between 1860 and the opening decades of the twentieth century, many other surveying parties traversed the plains and mountains of western Canada. This thesis is not an attempt to compare the Palliser survey with surveys conducted in the late nineteenth and early twentieth centuries; it is an attempt to evaluate Palliser's observations in the light of present-day knowledge. Finally, I would like to thank the members of the Faculty without whose assistance this thesis would not have been completed. The advice of Dr. M.Y. Williams and Dr. J.L. Robinson of the Department of Geology and Geography was invaluable in the preparation of the final chapter. Nevertheless, the opinions expressed in this thesis are my own. / Arts, Faculty of / History, Department of / Graduate
105

Bezpečné objevování sousedů / Secure Neighbor Discovery Protocol

Bezdíček, Lukáš January 2014 (has links)
This report deals with the design and implementation of a complete SEND protocol for GNU/Linux operating systems. The first part of the document describes the ND and SEND protocols. The second part defines the security threats connected with unsecured ND. The third part describes the design and implementation of a SEND implementation named sendd. The document concludes with a summary of the accomplished results and information about the future development of this project.
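The threat class the second part addresses is easy to demonstrate: with plain, unsecured ND, nothing stops a node from forging a Neighbor Advertisement for another node's address. A sketch using scapy follows; the interface name and addresses are assumptions, and this is for isolated lab use only.

```python
# Why unsecured ND needs SEND: any on-link node can forge a Neighbor
# Advertisement claiming another node's IPv6 address, poisoning caches.
from scapy.all import IPv6, ICMPv6ND_NA, ICMPv6NDOptDstLLAddr, send

victim = "fe80::1"                    # hypothetical target address
spoofed_mac = "00:11:22:33:44:55"     # attacker's link-layer address

na = (IPv6(dst="ff02::1") /                      # all-nodes multicast
      ICMPv6ND_NA(tgt=victim, R=0, S=0, O=1) /   # Override flag set
      ICMPv6NDOptDstLLAddr(lladdr=spoofed_mac))
send(na, iface="eth0")   # neighbors now map the victim's address to our MAC
```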
106

Säker grannupptäck i IPv6 / Secure Neighbor Discovery in IPv6

Huss, Philip January 2011 (has links)
The IPv6 protocol offers several new functions; one of them is auto-configuration. With auto-configuration it is possible for nodes, i.e. hosts and routers, to be automatically assigned IPv6 addresses without manual configuration. Auto-configuration relies on Neighbor Discovery protocol (ND) messages (ND is mandatory in the IPv6 stack). The main purpose of ND is to let nodes discover other nodes on the local link, perform address resolution, check that addresses are unique, and check the reachability of active nodes. IPv6 has many of the same vulnerabilities as IPv4, and ND is no exception if it is not properly secured. IPsec is the standard security mechanism for IPv6, but it does not solve the problem of secure auto-configuration because of the bootstrapping problem. The Internet Engineering Task Force (IETF) therefore introduced Secure Neighbor Discovery (SEND). SEND is a mechanism for authentication, message protection, and router authentication. One important element of SEND is the use of Cryptographically Generated Addresses (CGAs), a mechanism for proving that the sender of an ND message is the actual owner of the address it claims. NDprotector, an open-source implementation of SEND, served as the basis for the analysis presented in this report. This implementation was evaluated in a small lab environment against several attacks in order to establish whether it can defend itself against them.
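The CGA mechanism at the heart of SEND can be sketched as follows; this is a simplified rendering of RFC 3972 that omits the Sec parameter, the second hash condition, and the DER encoding of the public key.

```python
# Core CGA idea (RFC 3972), simplified: the interface identifier of an IPv6
# address is derived from a hash over the owner's public key, so address
# ownership is verifiable without any PKI.
import hashlib

def cga_interface_id(public_key: bytes, modifier: bytes, prefix: bytes,
                     collision_count: int = 0) -> bytes:
    data = modifier + prefix + bytes([collision_count]) + public_key
    digest = hashlib.sha1(data).digest()
    iid = bytearray(digest[:8])          # leftmost 64 bits become the IID
    iid[0] &= 0b11111100                 # clear the "u" and "g" bits
    return bytes(iid)

# 16-byte modifier and 8-byte subnet prefix, as RFC 3972 lays them out
iid = cga_interface_id(b"example-public-key", b"\x00" * 16,
                       b"\xfe\x80" + b"\x00" * 6)
print(iid.hex())
```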
107

FURTHER CONTRIBUTIONS TO MULTIPLE TESTING METHODOLOGIES FOR CONTROLLING THE FALSE DISCOVERY RATE UNDER DEPENDENCE

Zhang, Shiyu, 0000-0001-8921-2453 12 1900 (has links)
This thesis presents innovative approaches for controlling the false discovery rate (FDR) in both high-dimensional statistical inference and finite-sample cases, addressing challenges arising from various dependency structures in the data. The first project introduces novel multiple testing methods for matrix-valued data, motivated by an electroencephalography (EEG) experiment, in which we model the inherent row-column cross-dependency using a matrix normal distribution. We propose two methods designed for structured matrix-valued data that approximate, with statistical accuracy, the true false discovery proportion (FDP) capturing the underlying cross-dependency. In the second project, we focus on simultaneous testing of multivariate normal means under diverse covariance matrix structures. By adjusting p-values using a BH-type step-up procedure tailored to the known correlation matrix, we achieve robust finite-sample FDR control. Both projects demonstrate superior performance through extensive numerical studies and real-data applications, significantly advancing the field of multiple testing under dependence. The third project presents exploratory simulation results for methods built on a paired-p-values framework that control the FDR in the same multivariate normal means testing setting. / Statistics
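For reference, the classical Benjamini-Hochberg step-up procedure looks like the sketch below; the thesis's method additionally tailors the adjustment to the known correlation matrix, which this plain sketch does not attempt.

```python
# Plain Benjamini-Hochberg step-up procedure (assumes independence or PRDS).
import numpy as np

def benjamini_hochberg(pvals: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Return a boolean mask of rejected hypotheses at FDR level alpha."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m        # i/m * alpha
    below = pvals[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0  # largest passing rank
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True           # reject the k smallest p-values
    return rejected

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205])
print(benjamini_hochberg(pvals, alpha=0.05))   # rejects the two smallest
```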
108

Service Discovery in Pervasive Computing Environments

Thompson, Michael Stewart 17 October 2006 (has links)
Service discovery is a driving force in realizing pervasive computing. It provides a way for users and services to locate and interact with other services in a pervasive computing environment. Unfortunately, current service discovery solutions do not capture the effects of the human or physical world and do not deal well with diverse device populations, both of which are characteristics of pervasive computing environments. This research concentrates on the examination and fulfillment of the goals of two of the four components of service discovery: service description and dissemination. It begins with a review of and commentary on current service discovery solutions. Following this review is the formulation of the problem statement, including a full explanation of the problems mentioned above. The problem formulation is followed by an explanation of the process followed to design and build solutions to these problems. These solutions include the Pervasive Service Description Language (PSDL), the Pervasive Service Query Language (PSQL), and the Multi-Assurance Delivery Protocol (MADEP). Prototype implementations of the components are used to validate feasibility and evaluate performance. Experimental results are presented and analyzed. This work concludes with a discussion of overall conclusions, directions for future work, and a list of contributions. / Ph. D.
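Since the abstract does not show PSDL or PSQL syntax, the following is a purely hypothetical illustration of the underlying idea: a service description carries attributes, including physical-world context such as location, and a query matches descriptions against required attributes.

```python
# Hypothetical attribute-based service description and query matching;
# not PSDL/PSQL, whose actual syntax is defined in the dissertation.
service = {
    "type": "printer",
    "location": "room-204",   # physical-world context
    "color": True,
}

query = {"type": "printer", "location": "room-204"}

def matches(description: dict, query: dict) -> bool:
    """A description satisfies a query if it agrees on every queried attribute."""
    return all(description.get(k) == v for k, v in query.items())

print(matches(service, query))   # True
```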
109

Dependency discovery for data integration

Bauckmann, Jana January 2013 (has links)
Data integration aims to combine data from different sources and to provide users with a unified view of these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery that provide the information necessary for data integration. We focus on inclusion dependencies (INDs) in general and a special form named conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema. (ii) INDs and CINDs support the discovery of cross-references or links between schemas. An IND "A in B" simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task lies in the complexity of testing all attribute pairs and, further, of comparing all of each attribute pair's values. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Thus, our algorithm makes it possible to profile entirely unknown data sources with large schemas by discovering all INDs. Further, we provide an approach to extract foreign keys from the identified INDs. We extend our IND discovery algorithm to also find three special types of INDs: (i) composite INDs, such as "AB in CD"; (ii) approximate INDs, which allow a certain number of values of A to be absent from B; and (iii) prefix and suffix INDs, which represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies with a limited scope defined by conditions over several attributes; only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs, distinguishing covering and completeness conditions, and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. The challenge of this task is twofold: (i) which (and how many) attributes should be used for the conditions? (ii) which attribute values should be chosen for the conditions? Previous approaches rely on pre-selected condition attributes or can only discover conditions applying to quality thresholds of 100%. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains.
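The key complexity claim, scaling with the number of attributes rather than attribute pairs, can be illustrated with a value-centred inverted index, as in this sketch; the thesis's actual algorithm handles large, disk-resident data and is more involved.

```python
# Unary IND discovery via an inverted index: each value is read once, and
# candidate referenced attributes are narrowed by set intersection.
from collections import defaultdict

tables = {
    "orders.customer_id": [1, 2, 3],
    "customers.id":       [1, 2, 3, 4],
    "customers.zip":      [10115, 14482],
}

# value -> set of attributes that contain it
occurs_in = defaultdict(set)
for attr, values in tables.items():
    for v in values:
        occurs_in[v].add(attr)

# start with all other attributes as candidates, narrow one value at a time
candidates = {attr: set(tables) - {attr} for attr in tables}
for attr, values in tables.items():
    for v in values:
        candidates[attr] &= occurs_in[v]

for attr, refs in candidates.items():
    for ref in refs:
        print(f"IND: {attr} \u2286 {ref}")   # orders.customer_id ⊆ customers.id
```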
110

Data driven approaches to improve the drug discovery process : a virtual screening quest in drug discovery

Ebejer, Jean-Paul January 2014 (has links)
Drug discovery has witnessed an increase in the application of in silico methods to complement existing in vitro and in vivo experiments, in an attempt to 'fail fast' and reduce the high attrition rates of the clinical phases. Computer algorithms have been successfully employed for many tasks including biological target selection, hit identification, lead optimization, binding affinity determination, ADME and toxicity prediction, side-effect prediction, drug repurposing, and, in general, to direct experimental work. This thesis describes a multifaceted approach to virtual screening, to computationally identify small-molecule inhibitors against a biological target of interest. Conformer generation is a critical step in all virtual screening methods that make use of atomic 3D data. We therefore analysed the ability of computational tools to reproduce high-quality, experimentally resolved conformations of organic small molecules. We selected the best-performing method (RDKit) and developed a protocol that generates a non-redundant conformer ensemble which tends to contain low-energy structures close to those experimentally observed. We then outline the steps we took to build a multi-million-entry, small-molecule database (including molecule standardization and efficient exact, substructure and similarity searching capabilities) for use in our virtual screening experiments. We generated conformers and descriptors for the molecules in the database, tagged a subset of the database as 'drug-like', and clustered this subset to provide a reduced, diverse set of molecules for use in more computationally intensive virtual screening protocols. We next describe a novel virtual screening method we developed, called Ligity, that makes use of known protein-ligand holo structures as queries to search the small-molecule database for putative actives. Ligity has been validated against targets from the DUD-E dataset and has shown, on average, better performance than other 3D methods. We also show that performance improved when we fused the results from multiple input structures. This bodes well for Ligity's future use, especially considering that protein structure databases such as the Protein Data Bank are growing exponentially every year. Lastly, we describe the fruitful application of structure-based and ligand-based virtual screening methods to Plasmodium falciparum Subtilisin-like Protease 1 (PfSUB1), an important drug target in the human stages of the life cycle of the malaria parasite. Our ligand-based virtual screening study resulted in the discovery of novel PfSUB1 inhibitors. Further lead optimization of these compounds, to improve binding affinity into the nanomolar range, may promote them to drug candidates. In this thesis we postulate that the accuracy of computational tools in drug discovery may be enhanced by taking advantage of the exponential increase in experimental data and the availability of cheaper computational power such as cloud computing.
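A minimal conformer-ensemble sketch with RDKit, the tool the thesis selected, might look like the following; the conformer counts and the pruning criterion here are arbitrary illustrative choices, not the thesis protocol.

```python
# Embed multiple conformers, minimize them, and keep a low-energy subset.
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=50, randomSeed=42)

# MMFF-minimize every conformer; each result is a (converged_flag, energy) pair
results = AllChem.MMFFOptimizeMoleculeConfs(mol)
energies = sorted(zip(conf_ids, (e for _, e in results)), key=lambda t: t[1])

keep = [cid for cid, energy in energies[:10]]   # lowest-energy subset
print(f"kept {len(keep)} of {len(conf_ids)} conformers")
```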
