71 |
Querying Structured Data via Informative Representations
Bandyopadhyay, Bortik January 2020 (has links)
No description available.
|
72 |
An Application of Artificial Intelligence Techniques in Classifying Tree Species with LiDAR and Multi-Spectral Scanner Data
Posadas, Benedict Kit A 09 August 2008 (has links)
Tree species identification is an important element in many forest resource applications such as wildlife habitat management, inventory, and forest damage assessment. Field data collection for large or mountainous areas is often cost-prohibitive, and good estimates of the number and spatial arrangement of species or species groups cannot be obtained. Knowledge-based and neural network species classification models were constructed for remotely sensed data of conifer stands located in the lower mountain regions near McCall, Idaho, and compared to field data. Analyses for each modeling system were made based on multi-spectral scanner (MSS) data alone and MSS plus LiDAR (light detection and ranging) data. The neural network system produced models identifying five of six species with 41% to 88% producer accuracies and greater overall accuracies than the knowledge-based system. The neural network analysis that included a LiDAR-derived elevation variable plus multi-spectral variables gave the best overall accuracy, at 63%.
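As an illustration only (not code from the thesis), a species classifier of this kind can be sketched as a small feed-forward neural network over multi-spectral band values plus a LiDAR-derived elevation feature; the feature layout, species labels, and network size below are assumptions.

```python
# Sketch: neural-network species classification from MSS bands + LiDAR elevation.
# All features, labels, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 600
# Assumed features: four MSS band reflectances and a LiDAR-derived elevation (m).
X = np.column_stack([
    rng.uniform(0.0, 1.0, size=(n, 4)),      # MSS bands 1-4
    rng.uniform(1200.0, 2200.0, size=n),     # LiDAR elevation
])
y = rng.integers(0, 6, size=n)               # six conifer species classes (dummy labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# Per-class recall corresponds to the producer accuracy reported in the abstract.
print(classification_report(y_test, clf.predict(X_test)))
```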
|
73 |
A hybrid knowledge-based lean six sigma maintenance system for sustainable buildings
Al Dairi, Jasim S.S., Khan, M. Khurshid, Munive-Hernandez, J. Eduardo January 2016 (has links)
The complexity of the sustainable building maintenance environment forces organizations to develop a standardized maintenance quality management system that can be applied in all concerned departments. This chapter presents a novel conceptual model of a hybrid Knowledge-Based Lean Six Sigma Sustainable Building Maintenance System (Lean6-SBM). The KB model seeks to apply the Lean Six Sigma philosophy to support implementation of an ideal building maintenance system. The conceptual KB model also integrates a GAP analysis technique to support benchmarking and decision making. The proposed conceptual model is presented to show the fundamental components of the Lean6-SBM. / This work was supported by the Ministry of Defense Engineering Services 403 (MoDES, Sultanate of Oman) and the University of Bradford (United Kingdom).
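As a hedged illustration of the benchmarking idea (not material from the chapter), a GAP analysis step can be sketched as scoring current maintenance practice against a benchmark level per criterion; the criteria, weights, and scores below are invented for the example.

```python
# Sketch of a GAP-analysis step for maintenance benchmarking.
# Criteria, weights, and scores are illustrative assumptions, not from the chapter.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    current: float    # assessed current capability (0-10)
    benchmark: float  # target benchmark level (0-10)
    weight: float     # relative importance

criteria = [
    Criterion("Preventive maintenance planning", current=4, benchmark=8, weight=0.4),
    Criterion("Waste elimination (Lean)",        current=5, benchmark=9, weight=0.3),
    Criterion("Defect rate control (Six Sigma)", current=6, benchmark=9, weight=0.3),
]

# Weighted gap per criterion; larger gaps indicate higher-priority improvement areas.
gaps = {c.name: round(c.weight * (c.benchmark - c.current), 2) for c in criteria}
for name, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{name}: weighted gap = {gap}")
```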
|
74 |
Re-defining the Architectural Design Process Through Building a Decision Support Framework for Design with Reused Building Materials and Components
Ali, Ahmed Kamal 07 December 2012 (has links)
Waste from construction and demolition building activities is increasing every day, and landfills have almost reached their capacity. When thinking about the negative environmental impact of demolition activities, it becomes necessary to think about reusing and recycling building materials in new construction, or perhaps, better still, to rethink how we make use of waste materials. In his book Wasting Away, Kevin Lynch wrote: "Architects must begin to think about holes in the ground and about flows of materials." Studies show that construction and demolition activities are the primary source of solid waste worldwide. For example, construction and demolition wastes constitute about 40% of the total solid waste stream in the United States. The growing interest in materials and resource conservation in the United States is evident in the growth of green building practices. The USGBC identifies six categories in the Materials and Resources (MR) section of LEED. One of these six categories is Resource Reuse (RR). Interestingly enough, a recent study about the cost of green buildings indicated that RR was the credit category least often achieved in most LEED-certified projects. The literature suggests that there are a number of constraints and barriers to resource reuse, primarily due to the complexity of buildings, but perhaps the most important barrier, according to many architects, is the lack of easily accessible information on resource reuse for the design team. Therefore, as we promote the idea of building material reuse to a wider audience of designers and architects, we must not forget that in the Architecture, Engineering and Construction (AEC) industry the terms Reuse and Recycle are used interchangeably, without a clear distinction between them. The use of arbitrary descriptions to distinguish reuse from recycle has caused nothing but more confusion to the public. This study argues that the real distinction between reuse and recycle lies in knowledge and information. This suggests that design with reuse requires a paradigm shift in the required knowledgebase and in the way information flows within the design process. Unfortunately, the structure of this paradigm shift is not known and has not been well defined. Since knowledge forms the core of building a Decision Support System (DSS) for a design team to consider reuse, it is necessary to capture the required knowledge and information from industry experts through a Knowledge Acquisition (KA) process. This knowledge can then be used to 1) identify building material reuse criteria, 2) build a prescriptive decision model, and 3) map the process design of both the current traditional architectural design workflow and the proposed one. The overarching goal of this study is to use the building material reuse knowledgebase to 1) build a Unified Virtual Repository database connected to all available physical repositories and sharing a unified standard of information, and 2) integrate the unified virtual repository with the Building Information Modeling (BIM) database so that the DSS can provide feedback and feed-forward support for architects and designers as they consider building material reuse in new designs and constructions. / Ph. D.
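Purely as a hedged sketch of the kind of decision support described above (not the author's implementation), reusable-material records from a virtual repository could be matched against requirements taken from a BIM model; all field names and criteria here are assumptions.

```python
# Sketch: matching reused-material records against BIM element requirements.
# Record fields, criteria, and example data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SalvagedComponent:
    material: str
    length_mm: int
    condition: str      # e.g. "good", "fair"
    certified: bool     # structural re-certification available

@dataclass
class BimRequirement:
    material: str
    min_length_mm: int
    needs_certification: bool

def reuse_candidates(requirement, repository):
    """Return repository items that satisfy the simple reuse criteria."""
    return [
        c for c in repository
        if c.material == requirement.material
        and c.length_mm >= requirement.min_length_mm
        and (c.certified or not requirement.needs_certification)
    ]

repository = [
    SalvagedComponent("steel beam", 6000, "good", certified=True),
    SalvagedComponent("steel beam", 3500, "fair", certified=False),
    SalvagedComponent("timber joist", 4200, "good", certified=False),
]
req = BimRequirement("steel beam", min_length_mm=4000, needs_certification=True)
print(reuse_candidates(req, repository))
```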
|
75 |
Inconsistency- and Error-Tolerant Reasoning w.r.t. Optimal Repairs of EL⊥ Ontologies
Baader, Franz, Kriegel, Francesco, Nuradiansyah, Adrian 12 February 2024 (has links)
Errors in knowledge bases (KBs) written in a Description Logic (DL) are usually detected when reasoning derives an inconsistency or a consequence that does not hold in the application domain modelled by the KB. Whereas classical repair approaches produce maximal subsets of the KB not implying the inconsistency or unwanted consequence, optimal repairs maximize the consequence sets. In this paper, we extend previous results on how to compute optimal repairs from the DL EL to its extension EL⊥, which in contrast to EL can express inconsistency. The problem of how to deal with inconsistency in the context of optimal repairs was addressed previously, but in a setting where the (fixed) terminological part of the KB must satisfy a restriction on cyclic dependencies. Here, we consider a setting where this restriction is not required. We also show how the notion of optimal repairs obtained this way can be used in inconsistency- and error-tolerant reasoning.
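A minimal illustration of the contrast between classical and optimal repairs (a generic textbook-style example, not taken from the paper):

```latex
% Minimal EL example contrasting classical and optimal repairs (illustrative only).
% TBox: A is subsumed by B and C; ABox: A(a); unwanted consequence: C(a).
\[
\mathcal{T} = \{\, A \sqsubseteq B \sqcap C \,\}, \qquad
\mathcal{A} = \{\, A(a) \,\}, \qquad
\text{unwanted consequence: } C(a).
\]
% The only classical (subset-maximal) repair removes A(a) and thereby also loses B(a):
\[
\mathcal{A}_{\mathrm{classical}} = \emptyset .
\]
% An optimal repair instead replaces A(a) by the weaker assertion B(a),
% which still avoids entailing C(a) but preserves the consequence B(a):
\[
\mathcal{A}_{\mathrm{optimal}} = \{\, B(a) \,\}.
\]
```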
|
76 |
A Study of Knowledge Management within Lockheed Martin Corporation
Nichols, Jonathan 01 January 2007 (has links)
The following thesis is based on my work at Lockheed Martin Corporation over the past year and a half. Though the initial chapters briefly address the theory behind Knowledge Management (KM), the main goal is to explore the practical application of KM within Lockheed Martin. The work focuses strongly on two platforms, the SAP Enterprise Portal and Microsoft SharePoint Portal, as KM within Lockheed Martin is centered on these technologies. The writing takes a brief look at KM theory, delves into the technologies used in KM, and then moves into the practical implementation of KM. It concludes with two case studies on projects that I have led during my employment with the company.
|
77 |
Automatické ladění vah pravidlových bází znalostí / Automated Weight Tuning for Rule-Based Knowledge Bases
Valenta, Jan January 2009 (links)
This dissertation thesis introduces new methods for the automated creation and tuning of knowledge bases in information and expert systems. The thesis is divided into two parts. The first part focuses on the legacy expert system NPS32 developed at the Faculty of Electrical Engineering and Communication, Brno University of Technology. The mathematical basis of the system expresses rule uncertainty using two values, which extends the information capability of the knowledge base with values representing the absence of information and conflict within the knowledge base. The expert system has been supplemented with a learning algorithm that sets the weights of the rules in the knowledge base using a differential evolution algorithm, based on patterns acquired from an expert. This learning algorithm is limited to single-layer knowledge bases. The thesis gives a formal proof that the mathematical basis of the NPS32 expert system cannot be used for gradient tuning of weights in multilayer knowledge bases. The second part focuses on a learning algorithm for multilayer knowledge bases. The knowledge base is built on a specific rule model with uncertainty factors, where the uncertainty factors of a rule represent the impact ratio of its information. To adjust the weights of every rule in the knowledge-base structure, a modified back-propagation algorithm is used, adapted to the given knowledge-base structure and rule model. For testing and verifying the learning algorithm, the expert system RESLA was developed in C#. With this expert system, a knowledge base from the medical domain was created to verify the learning ability on complex knowledge bases; it represents the diagnosis of heart malfunctions based on acquired ECG (electrocardiogram) parameters. For comparison with an existing knowledge base created by an expert and a knowledge engineer, the system was also evaluated against a professionally designed knowledge base from the field of agriculture, which represents a decision-support system for selecting a suitable winter wheat cultivar for planting. The presented algorithms speed up knowledge-base creation while keeping all the advantages that arise from using rules. In contrast to existing solutions based on neural networks, the presented algorithms for tuning knowledge-base weights are faster and simpler, because they do not need rule extraction from another type of knowledge representation.
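As a hedged sketch only (not the NPS32 or RESLA implementation), tuning rule weights with differential evolution can be illustrated by minimising the error between a toy rule base's outputs and expert-provided patterns; the rule model, error function, data, and parameters below are assumptions.

```python
# Sketch: tuning rule weights with differential evolution against expert patterns.
# The rule model (weighted evidence squashed into [0, 1]) and the data are assumptions.
import numpy as np
from scipy.optimize import differential_evolution

# Expert patterns: rows of rule-antecedent truth values and the expected conclusion.
antecedents = np.array([
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
expected = np.array([0.9, 0.7, 0.8, 0.2])

def infer(weights, inputs):
    """Toy single-layer rule base: weighted evidence squashed into [0, 1]."""
    return 1.0 / (1.0 + np.exp(-inputs @ weights))

def error(weights):
    """Mean squared error between inferred and expert-expected conclusions."""
    return float(np.mean((infer(weights, antecedents) - expected) ** 2))

result = differential_evolution(error, bounds=[(-5.0, 5.0)] * 3, seed=1, maxiter=200)
print("tuned rule weights:", np.round(result.x, 3), "error:", round(result.fun, 4))
```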
|
78 |
Can Chatbot technologies answer work email needs? : A case study on work email needs in an accounting firm
Olsen, Linnéa January 2021 (has links)
Work email is one of an organisation's most critical tools today. It has become a standard way to communicate internally and externally, and it can also affect our well-being: email overload has become a well-known issue for many people. Through interviews, follow-up interviews, and a workshop, three people from an accounting firm prioritised pre-defined email needs and identified several other email needs that were added to the priority list. A thematic analysis and a summary of Likert-scale responses were conducted to identify underlying work email needs and work email needs that are not apparent. Three work email needs were selected, and scenario-based methods together with the elements of PACT were used to investigate how the characteristics of a chatbot could help solve the identified email overload issues. The results show that email overload is perceived differently from individual to individual; how email is handled and which email activities are performed indicate how the feeling of email overload is experienced. The results also show a need to get a sense of email content quickly, to quickly collect financial information and information from Swedish authorities, and to deal with repetitive, time-consuming tasks. Suggestions on how this problem can be solved, including using machine learning to help reduce email overload, have been put forward for many years, yet many of these proposed solutions have not been implemented at full scale. One conclusion may be that since email overload is not experienced in the same way, individuals have different needs: one solution does not fit all. With the characteristics of a chatbot, many of these problems can be addressed. A chatbot that learns individuals' email patterns can suggest email tasks to the user and perform tasks to reduce the perception of email overload; it can use keywords for email intents to give a sense of the email content faster and produce quick links to information about the identified subject, and, to work preventively, it can give the user reminders and perform repetitive tasks on specific dates.
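As a hedged sketch of the keyword-for-intent idea mentioned above (not the study's prototype), incoming email text could be mapped to an intent and a quick link; the intents, keyword lists, and URLs below are invented examples.

```python
# Sketch: keyword-based email intent detection with quick links.
# Intents, keywords, and links are illustrative assumptions, not from the case study.
INTENT_KEYWORDS = {
    "vat_declaration": ["vat", "moms", "declaration"],
    "invoice_query":   ["invoice", "faktura", "payment due"],
    "payroll":         ["salary", "payroll", "lön"],
}

# Hypothetical quick links a chatbot might surface for each intent.
QUICK_LINKS = {
    "vat_declaration": "https://example.org/links/vat",
    "invoice_query":   "https://example.org/links/invoicing",
    "payroll":         "https://example.org/links/payroll",
}

def detect_intent(email_text: str) -> str | None:
    """Return the first intent whose keywords appear in the email body."""
    text = email_text.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return None

intent = detect_intent("Reminder: the VAT declaration for Q2 is due next week.")
if intent:
    print(f"Detected intent '{intent}' -> quick link: {QUICK_LINKS[intent]}")
```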
|
79 |
Managing Dependencies in Knowledge-Based Systems: A Graph-Based Approach
Tapankov, Martin January 2009 (has links)
In knowledge-based engineering, the inference engine plays an important part in the behaviour of the system. A flexible and adaptive execution scheme allows the designer to experiment with different modes of operation and to select an appropriate one with respect to the initial data set and the execution goal.

In this project, an extension of an existing research prototype in the field of knowledge-based engineering is developed, with the goal of building a reliable and easy-to-use dependency resolution engine that replaces a less-than-ideal current implementation. A discussion is included of how the knowledge concepts and objects can be represented in an abstract mathematical form, converting at the same time the problem of dependency resolution into one specified more formally in terms of the proposed data abstraction. Some algorithms and methods used to operate on the data set are discussed from both a theoretical and a programming point of view, analysing their complexity and proposing and testing their implementation. Graphical interface controls that can be used to visualize and easily understand the relations in the available knowledge base are also demonstrated.

The testing and verification of the resulting software is presented, comparing its behaviour against reference tools serving similar purposes. Methods for validating the consistency of the knowledge base are also discussed. Finally, the integration of the newly developed code within the context of the prototype is discussed, commenting on the new features and functionality gained.
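To illustrate the kind of graph-based dependency resolution described (a generic sketch, not the prototype's actual engine), knowledge objects can be modelled as nodes of a directed graph and an evaluation order obtained by topological sorting; the object names and dependencies below are invented.

```python
# Sketch: dependency resolution over knowledge objects via topological sort.
# Object names and dependencies are invented for illustration.
from graphlib import TopologicalSorter

# Each knowledge object maps to the set of objects whose values it depends on.
dependencies = {
    "blade_geometry": {"hub_diameter", "tip_speed"},
    "tip_speed": {"rotor_rpm", "hub_diameter"},
    "mass_estimate": {"blade_geometry"},
    "rotor_rpm": set(),
    "hub_diameter": set(),
}

# static_order() yields an evaluation order in which every object appears only
# after all of its dependencies; it raises CycleError on circular definitions.
sorter = TopologicalSorter(dependencies)
print(list(sorter.static_order()))
```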
|
80 |
Conception et développement d'un système d'aide au diagnostic clinique et génétique des rétinopathies pigmentaires / Design and development of a support system for the clinical and genetic diagnosis of retinitis pigmentosa
Hebrard, Maxime 20 December 2012 (has links)
The diagnosis of retinitis pigmentosa raises problems at both the clinical and the molecular level. First, these are rare diseases: the low prevalence of each pathology in the world population makes them difficult to study. Second, the phenotypic characterization of these diseases is delicate because their symptoms are very similar. At the same time, the eye and the visual process are complex and involve the expression products of many genes, so although retinopathies are mainly monogenic and follow Mendelian inheritance, their genetic causes are varied. Based on this twofold observation, we propose two complementary methodological approaches leading to a better understanding of this group of pathologies. The first approach aims to identify the full set of genes involved, using genotyping chips; for this purpose, we study genetic linkage between single-nucleotide variations and the pathologies. The second approach concerns the representation of knowledge associated with clinical phenotypes. An ontological component was built to make explicit the knowledge needed for diagnosis, and the data collected over the long term by experts were labelled with terms organized in a dedicated thesaurus. The clinical profiles of patients and diseases are handled as collections of features and compared using an adapted similarity measure. The goal of this work is to provide a knowledge-based system to support diagnosis.
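As a hedged sketch of the similarity idea described above (not the thesis's actual measure), patient and disease profiles can be compared as sets of phenotype terms with a simple overlap score; the terms and profiles below are invented.

```python
# Sketch: comparing clinical profiles as feature collections with a Jaccard-style score.
# Phenotype terms and profiles are invented; the thesis uses its own adapted measure.
def similarity(profile_a: set[str], profile_b: set[str]) -> float:
    """Jaccard similarity between two sets of phenotype terms."""
    if not profile_a and not profile_b:
        return 1.0
    return len(profile_a & profile_b) / len(profile_a | profile_b)

patient = {"night blindness", "constricted visual field", "attenuated retinal vessels"}
diseases = {
    "retinitis pigmentosa": {"night blindness", "constricted visual field",
                             "attenuated retinal vessels", "bone-spicule pigmentation"},
    "cone-rod dystrophy":   {"photophobia", "reduced visual acuity", "colour vision loss"},
}

# Rank candidate diagnoses by similarity to the patient's profile.
ranking = sorted(diseases, key=lambda d: similarity(patient, diseases[d]), reverse=True)
for disease in ranking:
    print(f"{disease}: {similarity(patient, diseases[disease]):.2f}")
```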
|