About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Constraint-based software for broadband networks planning : a software framework for planning with the holistic approach /

Manaf, Afwarman, 1962- January 2000 (has links)
52

Hybrid Methods for Feature Selection

Cheng, Iunniang 01 May 2013 (has links)
Feature selection is one of the important data preprocessing steps in data mining. The feature selection problem involves finding a feature subset such that a classification model built only with this subset has better predictive accuracy than a model built with the complete set of features. In this study, we propose two hybrid methods for feature selection. The best features are selected through either the hybrid methods or existing feature selection methods. Next, the reduced dataset is used to build classification models using five classifiers. Classification accuracy is evaluated in terms of the area under the Receiver Operating Characteristic (ROC) curve (AUC) as the performance metric. The proposed methods have been shown empirically to improve the performance of existing feature selection methods.
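As a minimal sketch of the workflow this abstract describes (select a feature subset, train a classifier on the reduced data, score it by AUC), the snippet below uses scikit-learn; the specific hybrid selection methods and the five classifiers from the thesis are not reproduced, so SelectKBest and logistic regression stand in purely for illustration.

```python
# Sketch: filter-style feature selection, then classifier training and AUC
# evaluation on the reduced dataset. Synthetic data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=50, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

selector = SelectKBest(f_classif, k=10).fit(X_train, y_train)      # feature selection step
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_train), y_train)

scores = clf.predict_proba(selector.transform(X_test))[:, 1]
print("AUC on reduced feature set:", roc_auc_score(y_test, scores))
```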
53

Selection of clinical trials [electronic resource] : knowledge representation and acquisition / by Savvas Nikiforou.

Nikiforou, Savvas. January 2002 (has links)
Title from PDF of title page. / Document formatted into pages; contains 42 pages. / Thesis (M.S.C.S.)--University of South Florida, 2002. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / ABSTRACT: When medical researchers test a new treatment procedure, they recruit patients with appropriate health problems and medical histories. An experiment with a new procedure is called a clinical trial. The selection of patients for clinical trials has traditionally been a labor-intensive task, which involves matching of medical records with a list of eligibility criteria. A recent project at the University of South Florida has been aimed at the automation of this task. The project has involved the development of an expert system that selects matching clinical trials for each patient. / If a patient's data are not sufficient for choosing a trial, the system suggests additional medical tests. We report the work on the representation and entry of the related selection criteria and medical tests. We first explain the structure of the system's knowledge base, which describes clinical trials and criteria for selecting patients. We then present an interface that enables a clinician to add new trials and selection criteria without the help of a programmer. Experiments show that the addition of a new clinical trial takes ten to twenty minutes, and that novice users learn the full functionality of the interface in about an hour. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
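The knowledge-representation idea described here (clinical trials defined by eligibility criteria that are matched against a patient record, with missing data triggering suggested tests) can be illustrated with a small sketch. The trial names, fields, and thresholds below are hypothetical and only show the general matching pattern, not the USF system's actual knowledge base.

```python
# Illustrative sketch: criteria as predicates over a patient record; report
# which hypothetical trials match and which fields would need extra tests.
from dataclasses import dataclass
from typing import Callable, Dict, List

Criterion = Callable[[dict], bool]

@dataclass
class Trial:
    name: str
    required_fields: List[str]
    criteria: List[Criterion]

trials = [
    Trial("Trial A (hypothetical)", ["age", "hemoglobin"],
          [lambda p: p["age"] >= 18, lambda p: p["hemoglobin"] >= 10.0]),
    Trial("Trial B (hypothetical)", ["age", "creatinine"],
          [lambda p: p["age"] >= 40, lambda p: p["creatinine"] < 1.5]),
]

def match(patient: Dict[str, float], trial: Trial):
    missing = [f for f in trial.required_fields if f not in patient]
    if missing:
        return "needs tests", missing           # suggest additional medical tests
    return ("eligible" if all(c(patient) for c in trial.criteria) else "ineligible"), []

patient = {"age": 52, "hemoglobin": 11.2}       # creatinine not yet measured
for t in trials:
    print(t.name, match(patient, t))
```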
54

Case-driven collaborative classification

Vazey, Megan Margaret January 2007 (has links)
Thesis (PhD) -- Macquarie University, Division of Information and Communication Sciences, Department of Computing, 2007. / "Submitted January 27 2007, revised July 27 2007". / Bibliography: p. 281-304. / Mode of access: World Wide Web. / xiv, 487 p., bound ill. (some col.)
55

Improving service delivery at the National University of Lesotho Library through knowledge sharing

Tahleho, Tseole Emmanuel January 2016 (has links)
Knowledge is now considered the most important organizational resource, surpassing other resources such as land and capital, and it is acknowledged that knowledge can play an important role in securing an organization's competitive edge. The purpose of this study was to investigate whether knowledge sharing is being used to improve service delivery at the National University of Lesotho's Thomas Mofolo Library (TML). The researcher held the view that librarians at Thomas Mofolo Library have different sets of skills which, if combined, could improve service delivery. If this existing wealth of knowledge is not shared and retained, the researcher argued, librarians who retire or resign take the knowledge they possess with them; the result of this loss is that the Library may be unable to learn from past experiences, leading to reinvented wheels, unlearned lessons and repeated mistakes. Both qualitative and quantitative methods were employed in the case study design to allow for multiple methods of data collection. Data were collected by means of questionnaires and interviews: questionnaires were administered to all librarians available at the time, and purposive sampling was used to select interview participants. Of the 25 questionnaires administered, 15 were returned, a response rate of 60%. The questionnaire data were processed using Microsoft Access and analyzed with the Statistical Package for the Social Sciences (SPSS), Version 17; the results were exported to Microsoft Excel for visual presentation and reporting. The interview data were analyzed manually by content analysis, using the notes taken by the researcher during the interview sessions. The findings showed that knowledge sharing does occur at TML, although mostly informally, largely because of impediments such as a lack of trust and the absence of motivation and rewards. The study concluded by recommending a number of initiatives that could be implemented to retain knowledge within the Library, including developing a knowledge management strategy and formalizing knowledge sharing through appropriate policies. / Information Science / M.A. (Information Science)
56

The Acquisition Of Lexical Knowledge From The Web For Aspects Of Semantic Interpretation

Schwartz, Hansen A 01 January 2011 (has links)
This work investigates the effective acquisition of lexical knowledge from the Web to perform semantic interpretation. The Web provides an unprecedented amount of natural language from which to gain knowledge useful for semantic interpretation. The knowledge acquired is described as common sense knowledge, information one uses in his or her daily life to understand language and perception. Novel approaches are presented for both the acquisition of this knowledge and the use of the knowledge in semantic interpretation algorithms. The goal is to increase accuracy over other automatic semantic interpretation systems, and in turn enable stronger real-world applications such as machine translation, advanced Web search, sentiment analysis, and question answering. The major contributions of this dissertation consist of two methods of acquiring lexical knowledge from the Web, namely a database of common sense knowledge and Web selectors. The first method is a framework for acquiring a database of concept relationships. To acquire this knowledge, relationships between nouns are found on the Web and analyzed over WordNet using information theory, producing information about concepts rather than ambiguous words. For the second contribution, words called Web selectors are retrieved which take the place of an instance of a target word in its local context. The selectors allow the system to learn the types of concepts to which the sense of a target word should be similar. Web selectors are acquired dynamically as part of a semantic interpretation algorithm, while the relationships in the database are useful to stand-alone programs. A final contribution of this dissertation concerns a novel semantic similarity measure and an evaluation of similarity and relatedness measures on tasks of concept similarity. Such tasks are useful when applying acquired knowledge to semantic interpretation. Applications to word sense disambiguation, an aspect of semantic interpretation, are used to evaluate the contributions. Disambiguation systems which utilize semantically annotated training data are considered supervised. The algorithms of this dissertation are considered minimally supervised; they do not require training data created by humans, though they may use human-created data sources. In the case of evaluating a database of common sense knowledge, integrating the knowledge into an existing minimally-supervised disambiguation system significantly improved results: a 20.5% error reduction. Similarly, the Web selectors disambiguation system, which acquires knowledge directly as part of the algorithm, achieved results comparable with top minimally-supervised systems, an F-score of 80.2% on a standard noun disambiguation task. This work enables the study of many subsequent related tasks for improving semantic interpretation and its application to real-world technologies. Other aspects of semantic interpretation, such as semantic role labeling, could utilize the same methods presented here for word sense disambiguation. As the Web continues to grow, the capabilities of the systems in this dissertation are expected to increase. Although the Web selectors system achieves strong results, a study in this dissertation shows likely improvements from acquiring more data. Furthermore, the methods for acquiring a database of common sense knowledge could be applied in a more exhaustive fashion for other types of common sense knowledge.
Finally, perhaps the greatest benefits from this work will come from the enabling of real world technologies that utilize semantic interpretation.
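As a rough illustration of the selector idea (choose the sense of a target word whose concept is most similar to words that could stand in its place), here is a minimal sketch using NLTK's WordNet interface. The selector words are invented for the example, and a simple path-similarity vote stands in for the dissertation's actual Web acquisition and scoring pipeline; it requires NLTK with the WordNet corpus downloaded.

```python
# Sketch: pick the WordNet noun sense of a target word that is most similar,
# on average, to a set of "selector" words (hard-coded here; the dissertation
# acquires them from the Web).
from nltk.corpus import wordnet as wn

def best_sense(target: str, selectors: list):
    scored = []
    for sense in wn.synsets(target, pos=wn.NOUN):
        sims = []
        for sel in selectors:
            # best path similarity between this sense and any sense of the selector
            best = max((sense.path_similarity(s) or 0.0
                        for s in wn.synsets(sel, pos=wn.NOUN)), default=0.0)
            sims.append(best)
        scored.append((sum(sims) / len(sims), sense))
    return max(scored, key=lambda t: t[0])[1] if scored else None

# Hypothetical selectors that could replace "bass" in a fishing context.
sense = best_sense("bass", ["trout", "salmon", "fish"])
print(sense, "-", sense.definition())
```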
57

The construction and use of an ontology to support a simulation environment performing countermeasure evaluation for military aircraft

Lombard, Orpha Cornelia 05 1900 (has links)
This dissertation describes a research study conducted to determine the benefits and use of ontology technologies to support a simulation environment that evaluates countermeasures employed to protect military aircraft. Within the military, aircraft represent a significant investment, and these valuable assets need to be protected against various threats, such as man-portable air-defence systems. To counter attacks from these threats, countermeasures are developed, deployed and evaluated using modelling and simulation techniques. The system described in this research simulates real-world scenarios of aircraft, missiles and countermeasures in order to assist in the evaluation of infra-red countermeasures against missiles in specified scenarios. Traditional ontology has its origin in philosophy, describing what exists and how objects relate to each other. The use of formal ontologies in Computer Science has brought new possibilities for the modelling and representation of information and knowledge in several domains. These advantages also apply to military information systems, where ontologies support the complex nature of military information. After weighing ontologies and their advantages against the requirements for enhancing the simulation system, an ontology was constructed by following a formal development methodology. Design research, combined with an adaptive development methodology, was conducted in a unique way, thereby contributing to establishing design research as a formal research methodology. The ontology was constructed to capture the knowledge of the simulation system environment, and its use supports the functions of the simulation system in the domain. The research study contributes to better communication among the people involved in the simulation studies, accomplished through a shared vocabulary and a knowledge base for the domain. These contributions affirm that ontologies can successfully be used to support military simulation systems / Computing / M. Tech. (Information Technology)
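To make the ontology idea concrete, here is a minimal sketch of how a few of the domain concepts mentioned above (aircraft, missile threats, countermeasures) could be expressed as an RDF/OWL graph with rdflib. The class names, namespace, and the single property are invented for illustration and do not reflect the actual ontology built in the dissertation.

```python
# Illustrative sketch: a tiny RDF/OWL class hierarchy for the simulation
# domain described above, built with rdflib. All names are hypothetical.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

SIM = Namespace("http://example.org/countermeasure-sim#")
g = Graph()
g.bind("sim", SIM)

# Classes: platforms, threats, and countermeasures.
for cls in (SIM.Aircraft, SIM.Threat, SIM.Missile, SIM.Countermeasure, SIM.Flare):
    g.add((cls, RDF.type, OWL.Class))
g.add((SIM.Missile, RDFS.subClassOf, SIM.Threat))
g.add((SIM.Flare, RDFS.subClassOf, SIM.Countermeasure))

# A property linking countermeasures to the threats they are evaluated against.
g.add((SIM.counters, RDF.type, OWL.ObjectProperty))
g.add((SIM.counters, RDFS.domain, SIM.Countermeasure))
g.add((SIM.counters, RDFS.range, SIM.Threat))

print(g.serialize(format="turtle"))
```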
58

Socio-semantic conversational information access

Sahay, Saurav 15 November 2011 (has links)
The main contributions of this thesis revolve around the development of an integrated conversational recommendation system, combining data and information models with community networks and interactions to leverage multi-modal information access. We have developed a real-time conversational information access community agent that leverages community knowledge by pushing relevant recommendations to users of the community. The recommendations are delivered in the form of web resources, past conversations and people to connect to. The information agent (cobot, for community/collaborative bot) monitors the community conversations and is 'aware' of users' preferences by implicitly capturing their short-term and long-term knowledge models from conversations. The agent draws on health and medical domain knowledge to extract concepts, associations and relationships between concepts; it formulates queries for semantic search and provides socio-semantic recommendations in the conversation after applying various relevance filters to the candidate results. The agent also takes into account users' verbal intentions in conversations while making recommendation decisions. One of the goals of this thesis is to develop an innovative approach to delivering relevant information using a combination of social networking, information aggregation, semantic search and recommendation techniques. The idea is to facilitate timely and relevant social information access by mixing past community-specific conversational knowledge with web information access to recommend and connect users with relevant information. Language and interaction create usable memories, useful for making decisions about what actions to take and what information to retain. Cobot leverages these interactions to maintain users' episodic and long-term semantic models. The agent analyzes these memory structures to match and recommend users in conversations according to the contextual information need. The social feedback on the recommendations is registered in the system so that the algorithms can promote community-preferred, contextually relevant resources. The nodes of the semantic memory are frequent concepts extracted from users' interactions. The concepts are connected by associations that develop when concepts co-occur frequently. Over time, as the user participates in more interactions, new concepts are added to the semantic memory. Different conversational facets are matched with episodic memories, and a spreading activation search on the semantic net is performed to generate the top candidate user recommendations for the conversation. The unifying themes of this thesis revolve around the informational and social aspects of a unified information access architecture that integrates semantic extraction and indexing with user modeling and recommendations.
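The spreading-activation step mentioned above can be illustrated with a short, self-contained sketch: concepts observed in the current conversation seed the search, activation is propagated over a small co-occurrence graph, and the highest-activated nodes are returned as recommendation candidates. The graph, decay factor, and concepts below are all invented for the example and are not taken from the thesis.

```python
# Sketch of spreading activation over a concept co-occurrence graph:
# seed nodes get activation 1.0, which decays as it spreads to neighbours;
# the most activated non-seed concepts become recommendation candidates.
from collections import defaultdict

# Hypothetical weighted co-occurrence graph built from past conversations.
graph = {
    "diabetes": {"insulin": 0.9, "diet": 0.6},
    "insulin":  {"diabetes": 0.9, "dosage": 0.7},
    "diet":     {"diabetes": 0.6, "exercise": 0.8},
    "exercise": {"diet": 0.8},
    "dosage":   {"insulin": 0.7},
}

def spread(seeds, decay=0.5, steps=2):
    activation = defaultdict(float)
    frontier = {s: 1.0 for s in seeds}
    for _ in range(steps):
        nxt = defaultdict(float)
        for node, act in frontier.items():
            activation[node] += act
            for neighbour, weight in graph.get(node, {}).items():
                nxt[neighbour] += act * weight * decay
        frontier = nxt
    # rank non-seed concepts by accumulated activation
    return sorted(((a, n) for n, a in activation.items() if n not in seeds), reverse=True)

print(spread({"diabetes"}))   # e.g. "insulin" and "diet" surface as candidates
```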
59

Análise de competição em licitações brasileiras de áreas de exploração e produção de petróleo / Competition analysis in Brazilian petroleum exploration and production auctions

Rodriguez, Monica Rebelo 12 June 2010 (has links)
Orientadores: Osvair Vidal Trevisan, Boris Asrilhant / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Made available in DSpace on 2018-08-17T11:45:57Z (GMT). No. of bitstreams: 1 Rodriguez_MonicaRebelo_D.pdf: 37752110 bytes, checksum: fe86a28596b16f107e45c39acbce7fdd (MD5) Previous issue date: 2010 / Resumo: Há 10 anos da quebra do monopólio para a exploração e produção (E&P) de petróleo no Brasil o mercado se mostrou estável, competitivo e gerando resultados positivos que atraem o interesse das companhias nacionais e estrangeiras a investir no setor de "upstream". O processo de cessão de direitos e obrigações sobre as áreas de E&P é conduzido pela Agência Nacional de Petróleo, Gás Natural e Biocombustíveis (ANP) por meio de licitação pública, com regras bem definidas, onde o vencedor assina um contrato de concessão com a ANP. Esta pesquisa apresenta e analisa o histórico destas licitações para áreas de exploração e produção e áreas inativas com acumulações marginais, dentro do cenário econômico brasileiro e do potencial exploratório do país, e compara o desempenho das empresas no Brasil e no Golfo do México Americano, segundo os investimentos realizados para aquisição dessas áreas. Apresenta, ainda, um modelo estocástico para estimativa do valor dos blocos desenvolvido a partir das ofertas realizadas para áreas da Bacia de Campos em licitações pretéritas. Para analisar o nível de competição esperado para essas áreas, este estudo descreve também o desenvolvimento de um sistema especialista com a ferramenta Exsys Corvid®, baseado no julgamento de 36 especialistas da indústria do petróleo que trabalham em 20 companhias de pequeno, médio e grande porte. A aplicação desta metodologia permite que estas companhias estimem o nível de competição (alto, moderado, ou baixo) para áreas da Bacia de Campos. Conhecendo o valor das áreas e a estimativa do nível de competição, é possível subsidiar o processo decisório na elaboração de estratégias de oferta que permitam uma melhor alocação financeira dos recursos e a gestão ótima do portfólio exploratório pretendido pela companhia / Abstract: Ten years after the end of the petroleum exploration and production (E&P) monopoly in Brazil, the market for those activities has proven to be stable and competitive, providing positive results that have attracted both national and international investment in the upstream oil and gas sector. The regulatory agency promotes public licensing of E&P areas through a competitive sealed-bid auction, whose rules are clear and known in advance by the companies. This research describes and evaluates the historical data for these E&P licensing rounds, as well as for tenders of inactive areas with marginal accumulations, against the Brazilian economic scenario and the geological potential of the country. It also compares oil companies' performance, in terms of the investments made to acquire areas, in Brazil and in the US Gulf of Mexico. A stochastic model for block-value estimation is presented and applied to previous data from Campos Basin licensed areas. In order to estimate the level of competition expected for those areas, an expert system was built using Exsys Corvid®, based on the knowledge captured from 36 specialists in Brazilian public licensing working for 20 oil companies. The proposed methodology is applied to the case of Campos Basin areas and is shown to properly estimate the expected level of competition (high, moderate or low) in the bids.
By knowing the block value and the expected level of competition, decision makers are better prepared to formulate bidding strategies that can result in better allocation of financial resources and optimal management of the company's intended exploration portfolio / Doutorado / Reservatórios e Gestão / Doutor em Ciências e Engenharia de Petróleo
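As a toy illustration of what a stochastic block-value estimate can look like (not the model developed in the thesis), the sketch below samples uncertain inputs from assumed distributions and reports percentiles of the resulting block value; every figure and distribution here is hypothetical.

```python
# Toy Monte Carlo sketch of stochastic block-value estimation: sample
# uncertain inputs (reserves, price, cost) and summarise the value
# distribution. All distributions and numbers are hypothetical.
import random

def sample_block_value():
    reserves_mmbbl = random.lognormvariate(mu=3.0, sigma=0.6)    # million barrels
    price_usd = random.gauss(70, 15)                             # USD per barrel
    unit_cost_usd = random.uniform(25, 45)                       # USD per barrel
    return max(reserves_mmbbl * (price_usd - unit_cost_usd), 0)  # MM USD, floored at 0

values = sorted(sample_block_value() for _ in range(10_000))
p10, p50, p90 = (values[int(q * len(values))] for q in (0.10, 0.50, 0.90))
print(f"Block value (MM USD): P10={p10:,.0f}  P50={p50:,.0f}  P90={p90:,.0f}")
```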
60

Aquisição de conhecimento de agentes textuais baseada em MORPH / Knowledge acquisition of textual agents based on MORPH

Costa, Fabiana Marques, 1974- 19 August 2018 (has links)
Orientador: Antonio Carlos Zambon / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Tecnologia / Made available in DSpace on 2018-08-19T23:02:19Z (GMT). No. of bitstreams: 1 Costa_FabianaMarques_M.pdf: 2422520 bytes, checksum: 966c5b82f59168e9a1400b9b58760301 (MD5) Previous issue date: 2012 / Resumo: Esta pesquisa fundamenta-se no desenvolvimento de um método de aquisição de conhecimento de agentes textuais baseada em MORPH - Modelo Orientado à Representação do Pensamento Humano - que permite que se extraia o modelo mental de agentes textuais. O objetivo é evidenciar o conhecimento contido no agente textual, representá-lo graficamente para compreendê-lo, facilitando o processo de aprendizagem e refinando o estudo dos conteúdos de um texto, pois considera-se que nem sempre autores deixam as ideias (suas estruturas mentais) explícitas em artigos científicos, de forma clara e objetiva. O MACAT é um processo composto por três etapas, estruturadas em diretrizes para a extração de objetos de agentes textuais diversos. Apresenta-se, além do desenvolvimento do método, a aplicação do MACAT baseado em MORPH para investigação de artigos científicos, visando à exemplificação de sua utilização e demonstrando sua utilidade na explicitação de conhecimento. Com isso, é possível evidenciar a dinâmica dos processos contidos nos sistemas organizacionais, que apresentam dificuldades de construir o aprendizado, em razão da ausência de instrumentos pelos quais se possa avaliar a progressão do conhecimento. Como resultado, demonstra-se que o método torna possível a extração e representação do conhecimento de agentes humanos externalizados em agentes textuais, permitindo a compreensão de modelos mentais, alavancando a tomada de decisão em situações complexas / Abstract: This research is based on the development of a method for knowledge acquisition from textual agents based on MORPH - a Model Oriented to the Representation of Human Thought - which allows the extraction of a textual agent's mental model. The goal is to make the knowledge contained in the textual agent explicit and to represent it graphically so that it can be understood, facilitating the learning process and refining the study of the contents of a text, since authors do not always state their ideas (mental structures) clearly and objectively in scientific articles. MACAT is a process composed of three steps, structured as guidelines for the extraction of objects from diverse textual agents. In addition to the development of the method, the application of MORPH-based MACAT to the investigation of scientific articles is presented, exemplifying its use and demonstrating its usefulness in making knowledge explicit. This makes it possible to reveal the dynamics of the processes within organizational systems, which have difficulty building learning because they lack instruments for evaluating the progression of knowledge. As a result, it is shown that the method makes possible the extraction and representation of the knowledge of human agents externalized in textual agents, enabling the understanding of mental models and supporting decision-making in complex situations / Mestrado / Tecnologia e Inovação / Mestre em Tecnologia
