About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
611

Implementation and Evaluation of the Service Peer Discovery Protocol

Urdiales Delgado, Diego January 2004
This document is the final report of the master's thesis "Implementation and Evaluation of the Service Peer Discovery Protocol", carried out at the Center for Wireless Systems, KTH, Stockholm. This thesis addresses the problem of service discovery in peer-to-peer mobile networks by implementing and evaluating a previously designed protocol (the Service Peer Discovery Protocol). The main feature of peer-to-peer networks is that users connected to them can communicate directly with each other, without the need for interaction via a central point. However, in order for two network users (or peers) to communicate, they must have a means to locate and address each other, which is in general called a discovery protocol. There are many discovery protocol solutions that work efficiently in fixed or slow-moving networks, but full mobility introduces a set of new difficulties for the discovery of peers and their services. The potential changes in location, which can occur very often, the changes in IP address that these changes cause, and roaming between networks of different kinds are good examples of these difficulties. To solve these problems, a new Service Peer Discovery Protocol was designed and a test application built. The next step towards the introduction of this protocol was creating a working implementation, setting up a suitable test environment, performing experiments, and evaluating its performance. This evaluation could lead to improvements in the protocol. The aim of this thesis is to implement and document the Service Peer Discovery Protocol, to carry out measurements of it, to evaluate the efficiency of the protocol, and to suggest ways in which it could be improved. The Service Peer Discovery Protocol was found to be well targeted to wireless, peer-to-peer networks, although improvements in the protocol could make it more time- and traffic-efficient while maintaining the same level of performance.
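The abstract does not spell out the protocol's mechanics, so the sketch below is only a generic illustration of the kind of mechanism peer and service discovery protocols build on: a peer announces a service on a well-known multicast group and another peer listens for announcements. This is not the Service Peer Discovery Protocol itself; the multicast group, port, and JSON message format are arbitrary assumptions made for the example.

```python
import json
import socket
import struct

# Hypothetical multicast group and port for the illustration.
MCAST_GRP, MCAST_PORT = "239.255.0.42", 5007

def announce(peer_id, service):
    """Send a one-shot service announcement to the multicast group."""
    msg = json.dumps({"peer": peer_id, "service": service}).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(msg, (MCAST_GRP, MCAST_PORT))
    sock.close()

def listen(timeout=5.0):
    """Wait for a single announcement from any peer on the same group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(1024)
        return {"from": addr[0], **json.loads(data)}
    except socket.timeout:
        return None
    finally:
        sock.close()

if __name__ == "__main__":
    announce("peer-42", "printing")
    print(listen())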
612

Reconfigurable Application Networks through Peer Discovery and Handovers

Gioacchino Cascella, Roberto January 2003
This Master's thesis work was carried out at the Wireless Center at KTH and is part of a pilot project. The thesis was conducted for the Institute for Microelectronics and Information Technology (IMIT) at the Royal Institute of Technology (KTH) in Stockholm (Sweden) and for the Department of Telecommunications at Politecnico di Torino in Turin (Italy). The thesis addresses an area with significant potential for offering services to mobile users. In such a scenario, users should have minimal interaction with applications, which, by taking into account available context information, should be able to make decisions such as setting up delivery paths between peers without requiring a third party for the negotiation. In wireless reconfigurable networks, mobile users are on the move and must deal with dynamic changes of network resources. In such a network, mobile users should be able to contact other peers or resources by using the current route. Thus, although manual configuration of the network is a possible solution, it is impractical because the dynamic properties of the system would demand too much user interaction. Moreover, existing discovery protocols fall short of accommodating the complexity of reconfigurable and heterogeneous networks. The primary objective of this thesis work was to investigate a new application-level approach to signaling by taking advantage of SIP's features. The Session Initiation Protocol (SIP) is used to provide naming and localization of the user, and to provide functionality to invite users to establish sessions and to agree on communication parameters. SIP's Specific Event Notification extension provides a framework for the notification of specific events, and I believed that it could be instantiated as a solution to the problem of reconfigurable application networks. This thesis proposes a method for providing localization information to SIP User Agents in order to establish sessions for service discovery. Furthermore, this method should consider context metadata to design strategies that are effective in heterogeneous networks. A viable solution must support (re)location of users at the application layer when they roam between different wireless networks, such as GPRS and WLAN. An analysis of the implications of the proposed model is presented; in this analysis, emphasis has been placed on how the model interacts with existing services.
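As a rough illustration of the Specific Event Notification machinery (SIP SUBSCRIBE/NOTIFY) that the thesis builds on, the sketch below assembles a minimal SIP SUBSCRIBE request as raw text. The "peer-discovery" event package name, the addresses, and the header values are hypothetical, and a real client would use a full SIP stack rather than hand-built messages over raw UDP.

```python
import socket

def build_subscribe(watcher, target, event, expires=3600):
    """Assemble a minimal SIP SUBSCRIBE request (RFC 3265-style) as raw text.
    The event package, addresses, branch and tag values below are illustrative only."""
    lines = [
        f"SUBSCRIBE sip:{target} SIP/2.0",
        "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK776asdhds",
        f"From: <sip:{watcher}>;tag=1928301774",
        f"To: <sip:{target}>",
        "Call-ID: a84b4c76e66710@192.0.2.10",
        "CSeq: 1 SUBSCRIBE",
        f"Event: {event}",
        f"Expires: {expires}",
        "Contact: <sip:192.0.2.10:5060>",
        "Content-Length: 0",
        "",
        "",
    ]
    return "\r\n".join(lines).encode()

if __name__ == "__main__":
    request = build_subscribe("alice@example.org", "bob@example.org", "peer-discovery")
    # A real client would send this through a SIP stack; plain UDP is shown for illustration.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(request, ("127.0.0.1", 5060))
    sock.close()
```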
613

Causal Discovery Algorithms for Context-Specific Models / Kausala Upptäckts Algoritmer för Kontext-Specifika Modeller

Ibrahim, Mohamed Nazaal January 2021
Despite having a philosophical grounding in empiricism that spans several centuries, the algorithmization of causal discovery started only a few decades ago. This formalization of the study of causal relationships relies on connections between graphs and probability distributions. In this setting, the task of causal discovery is to recover the graph that best describes the causal structure based on the available data. A particular class of causal discovery algorithms, called constraint-based methods, relies on Directed Acyclic Graphs (DAGs) as an encoding of Conditional Independence (CI) relations that carry some level of causal information. However, a CI relation such as X and Y being independent conditioned on Z assumes that the independence holds for all possible values Z can take, which can be unrealistic in practice, where causal relations are often context-specific. In this thesis we aim to develop constraint-based algorithms to learn causal structure from Context-Specific Independence (CSI) relations in the discrete setting, where the independence relations are of the form X and Y being independent given Z and C = a for some value a. This is done by using Context-Specific trees, or CStrees for short, which can encode CSI relations.
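To make the notion of a context-specific independence relation concrete, here is a minimal sketch of the kind of statistical check a constraint-based learner could run on discrete data: a chi-square test of independence between X and Y restricted to the sub-sample where the context C = a holds. The toy data, the choice of test, and the significance level are illustrative assumptions, not the CStree algorithm developed in the thesis.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def ci_pvalue(df, x, y):
    """Chi-square p-value for marginal independence of two discrete columns."""
    table = pd.crosstab(df[x], df[y])
    _, p, _, _ = chi2_contingency(table)
    return p

def csi_holds(df, x, y, context, alpha=0.05):
    """Test the context-specific independence 'X independent of Y given C = a':
    restrict the sample to the given context and test independence there."""
    mask = np.logical_and.reduce([df[c] == v for c, v in context.items()])
    return ci_pvalue(df[mask], x, y) > alpha

# Toy example: X and Y are dependent when C = 0 but independent when C = 1.
rng = np.random.default_rng(0)
c = rng.integers(0, 2, 5000)
x = rng.integers(0, 2, 5000)
y = np.where(c == 0, x, rng.integers(0, 2, 5000))
data = pd.DataFrame({"X": x, "Y": y, "C": c})

print(csi_holds(data, "X", "Y", {"C": 1}))  # expected True: independence holds in context C=1
print(csi_holds(data, "X", "Y", {"C": 0}))  # expected False: X determines Y when C=0
```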
614

Die Entdeckung des Elementes 91 durch Kasimir Fajans und Oswald Göhring im Jahr 1913 und die Namensgebung durch Otto Hahn und Lise Meitner 1918

Niese, Siegfried 21 February 2013 (has links)
In 1913, Kasimir Fajans and Oswald Göhring discovered element 91 as its short-lived isotope 234mPa and named it brevium (Bv). The discovery followed directly from the radioactive displacement law discovered by Alexander Smith Russell, Frederick Soddy and Fajans: according to this law and Dimitri Mendeleev's periodic system, the thorium-like daughter product of uranium designated UX had to contain an unknown radioelement chemically similar to tantalum. In 1918, while searching for the long-lived mother substance of actinium, Otto Hahn and Lise Meitner found the long-lived isotope of brevium (231Pa), which they named protactinium. Although they described it as an isotope of brevium, in the following years they were cited not only as the element's namers but usually also as the discoverers of element 91.
615

Rough Sets Bankruptcy Prediction Models Versus Auditor Signalling Rates

McKee, Thomas E. 01 December 2003 (has links)
Rough set prediction capability was compared with actual auditor signaling rates for a large sample of United States companies from the 1991 to 1997 time period. Prior bankruptcy prediction research was carefully reviewed to identify 11 possible predictive factors which both had significant theoretical support and were present in multiple studies. Rough sets theory was used to develop two different bankruptcy prediction models, each containing four variables from the 11 possible predictive variables. In contrast with prior rough sets research, which suggested that rough sets theory offered significant bankruptcy prediction improvements for auditors, the rough sets models did not provide any significant comparative advantage in prediction accuracy over the actual auditors' methodologies.
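For readers unfamiliar with rough sets, the sketch below computes the lower and upper approximations of a "bankrupt" concept with respect to the indiscernibility relation induced by a few discretised condition attributes; this is the basic construction on which rough-set prediction models rest. The attribute names and toy data are invented for illustration and are not the variables used in the study.

```python
from collections import defaultdict
import pandas as pd

def approximations(df, condition_attrs, decision_attr, target):
    """Lower/upper approximation of the set {rows with decision == target}
    under the indiscernibility relation induced by condition_attrs."""
    blocks = defaultdict(set)  # equivalence classes of indiscernible objects
    for idx, row in df.iterrows():
        blocks[tuple(row[a] for a in condition_attrs)].add(idx)
    positive = set(df.index[df[decision_attr] == target])
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= positive:      # block lies entirely inside the target concept
            lower |= block
        if block & positive:       # block overlaps the target concept
            upper |= block
    return lower, upper

# Toy data: two discretised ratios as condition attributes, a bankruptcy flag as decision.
data = pd.DataFrame({
    "liquidity":     ["low", "low", "high", "high", "low"],
    "profitability": ["low", "low", "high", "low",  "low"],
    "bankrupt":      [1, 1, 0, 0, 0],
})
low, up = approximations(data, ["liquidity", "profitability"], "bankrupt", 1)
print(low, up)  # the boundary region (upper minus lower) marks firms the attributes cannot classify
```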
616

Stepping Stones and Pathways: Improving Retrieval by Chains of Relationships between Documents

Das Neves, Fernando Adrian 08 December 2004 (has links)
The information retrieval (IR) field has been successful in developing techniques to address many types of information needs. However, there are cases in which traditional approaches to IR are not able to produce adequate results. Examples include: when a small set of (2-3) documents is needed as an answer rather than a single document, or when "query splitting" is required to satisfactorily explore the document space. We explore an alternative model of building and presenting retrieval results for such cases. In particular, we research effective methods for handling information needs that may: 1. Include multiple topics: A typical query is interpreted by current IR systems as a request to retrieve documents, each of which discusses all topics included in that query. We propose an alternative interpretation based on query splitting. It allows queries to be interpreted as requests to retrieve sets of documents rather than individual documents, with meaningful relationships among the members of each such set. 2. Be interpreted as parts of a chain of relationships: Suppose a query concerns topics t1 and tm. Is there a relation between topics t1 and tm that involves t2 and possibly other topics, as in {t1, t2, …, tm}? Thus, we propose an alternative interpretation of user queries and presentation of the results. Our interpretation has the potential to improve retrieval results whenever there is a mismatch between the user's understanding of the collection and the actual collection content. We define and refine a retrieval scheme that enhances retrieval through a framework that combines multiple sources of evidence. Query results in our interpretation are networks of document groups representing topics, each group relating to and connecting to other groups in the network that partially answer the user's information need. We devise new and more effective representations and techniques to visualize results, and incorporate the user as part of the retrieval process. We also evaluate the improvement of the query results based on multiple measures. In particular, we verify the validity of our approach through a study involving a collection of Operating Systems research papers that was specially built for this dissertation. / Ph. D.
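As an informal sketch of the query-splitting idea, the code below splits a two-topic query into sub-queries, scores documents against each sub-query with TF-IDF, and ranks document pairs whose members are also related to each other, so the answer is a small linked set of documents rather than a single one. The toy corpus and the pair-scoring formula are assumptions for illustration, not the dissertation's actual framework.

```python
from itertools import product
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "scheduling policies in distributed operating systems",
    "distributed operating systems and message passing",
    "message passing interfaces for cluster computing",
    "cluster computing benchmarks and evaluation",
]

def chain_pairs(documents, topic_a, topic_b, top_k=3):
    """Score document pairs (d_a, d_b) where d_a answers topic_a, d_b answers topic_b,
    and the two documents are themselves related -- a two-step 'pathway'."""
    vec = TfidfVectorizer()
    d = vec.fit_transform(documents)
    sim_a = cosine_similarity(d, vec.transform([topic_a])).ravel()  # relevance to first sub-query
    sim_b = cosine_similarity(d, vec.transform([topic_b])).ravel()  # relevance to second sub-query
    dd = cosine_similarity(d)                                       # document-document link strength
    scored = [
        (sim_a[i] * dd[i, j] * sim_b[j], i, j)
        for i, j in product(range(len(documents)), repeat=2) if i != j
    ]
    return sorted(scored, reverse=True)[:top_k]

# "Query splitting": a query touching both operating systems and cluster computing is split
# into two sub-queries, and the answer is a pair of linked documents rather than one document.
for score, i, j in chain_pairs(docs, "operating systems", "cluster computing"):
    print(f"{score:.3f}: doc{i} -> doc{j}")
```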
617

Multi-Layer Web Services Discovery using Word Embedding and Clustering Techniques

Obidallah, Waeal 25 February 2021 (has links)
Web services discovery is the process of finding the right Web services that best match the end-users' functional and non-functional requirements. Artificial intelligence, natural language processing, data mining, and text mining techniques have been applied by researchers in Web services discovery to facilitate the process of matchmaking. This thesis contributes to the area of Web services discovery and recommendation, adopting the Design Science Research Methodology to guide the development of useful knowledge, including design theory and artifacts. The lack of a comprehensive review of Web services discovery and recommendation in the literature motivated us to conduct a systematic literature review. Our main purpose in conducting the systematic literature review was to identify and systematically compare current clustering and association rules techniques for Web services discovery and recommendation by providing answers to various research questions, investigating the prior knowledge, and identifying gaps in the related literature. We then propose a conceptual model and a typology of Web services discovery systems. The conceptual model provides a high-level representation of Web services discovery systems, including their various elements, tasks, and relationships. The proposed typology of Web services discovery systems is composed of five groups of characteristics: storage and location characteristics, formalization characteristics, matchmaking characteristics, automation characteristics, and selection characteristics. We reference the typology to compare Web services discovery methods and architectures from the extant literature by linking them to the five proposed characteristics. We employ the proposed conceptual model with its specified characteristics to design and develop the multi-layer data mining architecture for Web services discovery using word embedding and clustering techniques. The proposed architecture consists of five layers: Web services description and data preprocessing; word embedding and representation; syntactic similarity; semantic similarity; and clustering. In the first layer, we identify the steps to parse and preprocess the Web services documents. Bag of Words with Term Frequency–Inverse Document Frequency and three word-embedding models are employed for Web services representation in the second layer. Then, in the third layer, four distance measures, including Cosine, Euclidean, Minkowski, and Word Mover, are studied to find the similarities between Web services documents. In layer four, WordNet and Normalized Google Distance are employed to represent and find the similarity between Web services documents. Finally, in the fifth layer, three clustering algorithms, including affinity propagation, K-means, and hierarchical agglomerative clustering, are investigated to cluster Web services based on the observed document similarities. We demonstrate how each component of the five layers is employed in the process of Web services clustering using randomly selected Web services documents. We conduct an experimental analysis to cluster Web services using a collected dataset of Web services documents and evaluate their clustering performance. Using a ground truth for evaluation purposes, we observe that clusters built on the word embedding models performed better than those built using the Bag of Words with Term Frequency–Inverse Document Frequency model. Among the three word embedding models, the pre-trained Word2Vec skip-gram model reported the highest performance in clustering Web services. Among the three semantic similarity measures, path-based WordNet similarity reported the highest clustering performance. Considering the different word representation models and the syntactic and semantic similarity measures, the affinity propagation clustering technique performed best at discovering similarities among Web services.
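A compressed sketch of two of the five layers (the Bag-of-Words representation and the clustering layer) using scikit-learn is shown below. The word-embedding, WordNet, and Normalized Google Distance layers are omitted, and the example service descriptions are invented, so this is an outline of the general approach rather than the thesis's implementation.

```python
from sklearn.cluster import AffinityPropagation, AgglomerativeClustering, KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical Web service descriptions standing in for a real corpus.
service_docs = [
    "currency exchange rate conversion service",
    "foreign exchange rates and money conversion",
    "convert currency amounts using daily exchange rates",
    "weather forecast by city and postal code",
    "daily temperature and weather conditions lookup",
    "five day weather forecast and humidity report",
]

# Layers 1-2: preprocessing and Bag-of-Words TF-IDF representation.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(service_docs)

# Layer 3: pairwise cosine similarity between service descriptions.
sim = cosine_similarity(tfidf)

# Layer 5: three clustering algorithms over the same representation.
ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(sim)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(tfidf)
hc = AgglomerativeClustering(n_clusters=2).fit(tfidf.toarray())  # default Euclidean/Ward linkage

print("affinity propagation:", ap.labels_)
print("k-means:            ", km.labels_)
print("agglomerative:      ", hc.labels_)
```

With a labelled ground truth, the label assignments could be compared across algorithms and representations, which is essentially the evaluation the thesis performs at a much larger scale.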
618

From Discovery to Delivery : An Evaluation of Discovery Service WorldCat Discovery at Skövde University Library

Boers, Qiuhong January 2018 (has links)
This paper evaluates the discovery service WorldCat Discovery (WCD) at Skövde University Library (SUL) through a usability study of the discovery tool. With reference to the concepts of "Information Portal" and "Next-generation catalogue", as well as Dillon's (2001) evaluation model, the overall impression of WCD as perceived by users at SUL is investigated, and the benefits and problems of the discovery tool are examined and discussed. Data were collected through a two-stage survey among the users of Skövde University Library, targeting students and researchers at the University of Skövde. The results show that the discovery service WCD is evaluated positively in general, and most of the target group members at Skövde University Library confirmed that they would continue to use it. The single search interface and basic filter functions are the major benefits, while access to full-text articles in less widely used languages and metadata quality are the main problems perceived by target group members when performing common search tasks through the WCD interface. By identifying the benefits and problems in relation to discovery and delivery, this study calls for a cooperative effort between academic libraries, discovery service vendors and content providers towards a "seamless" integration of discovery services into academic libraries.
619

Development of DNA Aptamers Targeting Breast Cancer Derived Extracellular Vesicles for Biomarker Discovery

Susevski, Vanessa 18 September 2020 (has links)
Detection of cancer at an early stage greatly increases the chance of successful treatment and a favourable prognosis for patients. However, a liquid-based biopsy has yet to be developed for most cancers. Extracellular vesicles (EVs) are an attractive candidate for early cancer detection since their surface proteome mirrors the cell of origin. Thus, there is a need for the development of reliable probes that can detect cancer-derived EVs. In this thesis, the VBS-1 aptamer was developed to selectively bind EVs derived from a triple-negative breast cancer cell line. Initially, several EV isolation methods were compared, and the isolated EVs were validated and characterized. Aptamer clones were developed by Systematic Evolution of Ligands by Exponential Enrichment (SELEX) against EVs isolated by differential ultracentrifugation, and their binding was validated by flow cytometry. The binding partner of the selected VBS-1 aptamer was identified by LC-MS/MS as the transmembrane protein ATP1A1, and the presence of an ATP1A1-positive EV population was validated by flow cytometry. The selected aptamer may find further application in biosensors for the detection of EVs as cancer biomarkers in biological fluids.
620

Class discovery via feature selection in unsupervised settings

Curtis, Jessica 13 February 2016 (has links)
Identifying genes linked to the appearance of certain types of cancers and their phenotypes is a well-known and challenging problem in bioinformatics. Discovering marker genes which, upon genetic mutation, drive the proliferation of different types and subtypes of cancer is critical for the development of advanced tests and therapies that will specifically identify, target, and treat certain cancers. Therefore, it is crucial to find methods that are successful in recovering "cancer-critical genes" from the (usually much larger) set of all genes in the human genome. We approach this problem in the statistical context as a feature (or variable) selection problem for clustering, in the case where the number of important features is typically small (or rare) and the signal of each important feature is typically minimal (or weak). Genetic datasets typically consist of hundreds of samples (n), each with tens of thousands of gene-level measurements (p), resulting in the well-known statistical "large p, small n" problem. The class or cluster identification is based on the clinical information associated with the type or subtype of the cancer (either known or unknown) for each individual. We discuss and develop novel feature ranking methods, which complement and build upon current methods in the field. These ranking methods are used to select features which contain the most significant information for clustering. Retaining only a small set of useful features based on this ranking aids in both a reduction in data dimensionality and the identification of a set of genes that are crucial in understanding cancer subtypes. In this paper, we present an outline of cutting-edge feature selection methods, and provide a detailed explanation of our own contributions to the field. We explain both the practical properties and theoretical advantages of the new tools that we have developed. Additionally, we explore a well-developed case study applying these new feature selection methods to different levels of genetic data to explore their practical implementation within the field of bioinformatics.
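A small sketch of the underlying idea, under the simplifying assumption that informative features show excess variance in a two-class mixture: rank features by a simple score, keep the top-ranked ones, and compare clustering quality with and without selection. The variance ranking and the simulated data are illustrative stand-ins for the ranking methods developed in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n, p, informative = 200, 2000, 10            # "large p, small n": few, weak, rare signals
labels = rng.integers(0, 2, n)                # latent subtype, unknown to the method
X = rng.normal(size=(n, p))
X[:, :informative] += labels[:, None] * 1.5   # only the first 10 features carry the subtype

# Feature ranking: in a two-class mixture, informative features are over-dispersed,
# so rank features by sample variance and keep the top-scoring ones.
scores = X.var(axis=0)
top = np.argsort(scores)[::-1][:informative]

full = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
selected = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[:, top])

# Adjusted Rand Index against the latent subtype; selection should typically help here.
print("ARI, all features:     ", round(adjusted_rand_score(labels, full), 3))
print("ARI, selected features:", round(adjusted_rand_score(labels, selected), 3))
```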
