241

The monitoring and control of stoker-fired boiler plant by neural networks

Chong, Alex Zyh Siong January 1999 (has links)
This thesis is concerned with the implementation of Artificial Neural Networks (ANNs) to monitor and control chain grate stoker-fired coal boilers with a view to improving the combustion efficiency whilst minimising pollutant emissions. A novel Neural Network Based Controller (NNBC) was developed following a comprehensive set of experiments carried out on a stoker test facility at the Coal Research Establishment (CRE) Ltd., before being evaluated on an industrial chain grate stoker at Her Majesty's Prison Garth, Leyland. The NNBC mimicked the actions of an expert boiler operator by providing 'near optimum' settings of coal feed and air flow, as well as taking into account the correct 'staging' sequence of these parameters during load-following conditions, before subsequently fine-tuning the combustion air under quasi steady-state conditions. Test results from the on-line implementation of the NNBC on both chain grate stoker plants demonstrated that improved transient and steady-state combustion conditions were attained without any adverse effect on the pollutant emissions or the integrity of the appliances. A novel combustion monitoring system was also developed during the course of the work that can be used to infer the stability of combustion on the fire bed, following a pilot study of the 'flame front' movement during boiler load changes on the stoker test facility at CRE. This novel low-cost flame front monitor was rigorously tested on the industrial stoker plant, and long hours of successful on-line operation were achieved. It was also demonstrated, with the use of ANNs, that the data gathered from the novel flame front monitor can be processed to yield evidence concerning movement of the ignition plane over a short period of time (several minutes). The prototype controller and flame front monitor would thus provide both stoker manufacturers and users with a means of meeting future legislative limits on pollutant emissions as indicated by the European Commission, as well as improving the combustion efficiency of this type of coal-firing equipment. Finally, ANNs were also used as a simple means to represent the complex coal combustion process on the bed of the stoker test facility whilst burning a particular type of coal. The resultant 'black-box' models of the combustion derivatives were able to represent the dynamics of the process and delivered accurate one-step-ahead predictions over a wide range of unseen data. The work demonstrated the complex functional mapping capability of ANNs and also addressed the deficiencies in mathematical modelling of the coal combustion process on fixed grates, as indicated in the literature.
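The 'black-box', one-step-ahead modelling described above can be pictured with a small sketch: a NARX-style neural network that predicts the next value of a combustion variable from recent inputs and outputs. The data, regressor choices and network settings below are invented for illustration (using scikit-learn); they are not the thesis's actual models.

```python
# Illustrative sketch only: a one-step-ahead neural-network model of a combustion
# variable, in the spirit of the 'black-box' models described above.
# The surrogate data and regressor layout are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical logged plant data: coal feed u1(k), air flow u2(k), flue-gas O2 y(k).
n = 2000
u1 = rng.uniform(0.2, 1.0, n)          # coal feed rate (normalised)
u2 = rng.uniform(0.3, 1.0, n)          # combustion air flow (normalised)
y = np.zeros(n)
for k in range(1, n):                  # toy surrogate dynamics, not real plant data
    y[k] = 0.7 * y[k - 1] + 0.4 * u2[k - 1] - 0.3 * u1[k - 1] + 0.02 * rng.standard_normal()

# NARX-style regressors: predict y(k+1) from recent outputs and inputs.
X = np.column_stack([y[1:-1], y[:-2], u1[1:-1], u2[1:-1]])
t = y[2:]

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), t)

# One-step-ahead prediction for the last sample.
print("predicted:", model.predict(scaler.transform(X[-1:]))[0], "actual:", t[-1])
```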
242

Design, decisions and dialogue

Blandford, Ann January 1991 (has links)
This thesis presents a design for an Intelligent Educational System to support the teaching of design evaluation in engineering. The design consists of a simple computer-based tool (or 'learning environment') for displaying and manipulating information used in the course of problem solving, with a separate dialogue component capable of discussing aspects of the problem and of the problem solving strategy with the user. Many of the novel features of the design have been incorporated in a prototype system called WOMBAT. The main focus of this research has been on the design of the dialogue component. The design of the dialogue component is based on ideas taken from recent work on rational agency. The dialogue component has expertise in engaging in dialogues which support collaborative problem solving (involving system and user) in domains characterised as justified beliefs. It is capable of negotiating about what to do next and about what beliefs to take into account in problem solving. The system acquires problem-related beliefs by applying a simple plausible reasoning mechanism to a database of possible beliefs. The dialogue proceeds by turn-taking in which the current speaker constructs their chosen utterance (which may consist of several propositions and questions) and explicitly indicates when they have finished. When it is the system's turn to make an utterance, it decides what to say based on its beliefs about the current situation and on the likely utility of the various possible responses which it considers appropriate in the circumstances. Two aspects of the problem solving have been fully implemented. These are the discussion about what criteria a decision should be based on and the discussion about what decision step should be taken next. The system's contributions to the interaction are opportunistic, in the sense that at a dialogue level the system does not try to plan beyond the current utterance, and at a problem solving level it does not plan beyond the next action. The results of a formative evaluation of WOMBAT, in which it was exposed to a number of engineering educators, indicate that it is capable of engaging in a coherent dialogue, and that the dialogue is seen to have a pedagogical purpose. Although the approach of reasoning about the next action opportunistically has not proved adequate at a problem solving level, at a dialogue level it yields good results.
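As a rough illustration of the utterance-selection step described above — scoring each candidate response by its likely utility given the current situation and choosing the best, while planning no further than the current utterance — here is a minimal sketch. The candidate utterances, belief features and weights are invented; the real system's agent-based reasoning is far richer.

```python
# Illustrative sketch of utility-based utterance selection, loosely in the spirit
# of the dialogue component described above. Candidates, features and weights
# are invented for the example.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    pedagogical_value: float   # how much the utterance advances teaching goals
    relevance: float           # fit with the current problem-solving state
    cost: float                # e.g. risk of derailing the dialogue

def utility(c: Candidate, weights=(0.5, 0.4, 0.1)) -> float:
    w_p, w_r, w_c = weights
    return w_p * c.pedagogical_value + w_r * c.relevance - w_c * c.cost

candidates = [
    Candidate("Shall we agree on the decision criteria first?", 0.9, 0.8, 0.1),
    Candidate("I suggest we weight cost more heavily.", 0.6, 0.9, 0.2),
    Candidate("Why do you prefer that option?", 0.8, 0.5, 0.3),
]

# Opportunistic choice: plan only the next utterance, never further ahead.
best = max(candidates, key=utility)
print(best.text)
```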
243

Acrosome reaction and cryopreservation of dog spermatozoa

Uçar, Ömer January 2000 (has links)
The use of AI in dogs has been limited by the lack of effective and reliable means of cryopreservation of semen and by the poor correlation between traditional methods of post-thaw assessment of semen quality and fertility. In order to address these problems, the present study focuses upon methods of cold storage and cryopreservation of dog spermatozoa by undertaking comparative evaluations of post-thaw motility and in vitro induction of the acrosome reaction. Split-ejaculate protocols were used to compare the effect of storage at +4°C and cryopreservation upon (i) the maintenance of spermatozoa motility and (ii) spontaneous or A23187-induced acrosome reactions during incubation at 39°C (in 5% CO2 in humidified air) for 60 or 120 min. Samples were assessed by bright-field, phase contrast (PC) and differential interference contrast (DIC) microscopy, and by scanning (SEM) and transmission (TEM) electron microscopy. The interaction between the process of glycerolisation and the presence of seminal plasma is one of the key limiting factors for success in cryopreservation of dog spermatozoa. Interactions between the effects of removal of seminal plasma (by centrifugation), dilution rate, the temperature at which glycerolisation took place and the concentration of glycerol upon survival of spermatozoa at +4°C were studied in a series of split-ejaculate experiments. Spermatozoa were suspended in Tris-fructose-citric acid extender containing 20% (v/v) egg yolk and 8% (v/v) glycerol at +4°C for 48 h. Survival was assessed as the percentage of spermatozoa displaying progressive motility. Survival of spermatozoa was higher (P < 0.05) after glycerolisation at +4°C than at room temperature. At dilution rates of 1:1 and 1:2 (semen:extender), survival was higher (P < 0.05) in samples that were centrifuged and glycerolised at +4°C than in samples that were neat and glycerolised at room temperature, while at a dilution rate of 1:16 it was higher (P < 0.05) in samples that were neat and glycerolised at +4°C than in all samples glycerolised at room temperature. Concentrations of glycerol > 2% (v/v) resulted in lower (P < 0.05) survival than lower concentrations. Following the initial stage of the investigations, the optimisation and validation of a method for in vitro induction of acrosome reactions were required. Suspensions of spermatozoa in TALP medium were incubated in the presence of a logarithmic series of concentrations of the calcium ionophore A23187. Induction of acrosome reactions was assessed by bright-field (using naphthol yellow S/aniline blue stain, NA) and phase contrast (PC) microscopy. Using these methods, it was determined that incubation in the presence of 1 μM A23187 for a period of 30-45 min was optimal for inducing acrosome reactions in fresh semen. It was also noted that assessments of acrosome reactions using NA staining were highly correlated with PC microscopy; in consequence, the simple procedure of NA staining might be an acceptable alternative to PC microscopy for use in the field. Subsequently, the effect of chilling and glycerolisation upon in vitro induction of acrosome reactions by A23187 was assessed. Acrosome reactions were studied because they have been described in the literature as providing an accurate bioassay of spermatozoal functionality in vivo. Acrosome reactions were assessed by DIC microscopy.
Acrosomal integrity was impaired after chilling, which accelerated the A23187-induced acrosome reaction such that a lower concentration of A23187 (0.1 μM) was also effective in inducing the reaction within 60 min of incubation. However, the presence of 2% glycerol (v/v, final) in standard Tris extender containing 20% egg yolk did not significantly affect the sequence of the acrosome reaction. The optimal freezing regimen (from +4°C to -120°C) was determined by using a programmable biological freezer in a series of experiments in which various cooling rates were combined in a Latin square design. Semen was diluted in standard Tris extender containing 20% egg yolk and 2% glycerol (v/v, final) and packed in 0.25 ml French paillettes (straws). The optimal cooling regimen was -0.5°C/min from +4°C to -9°C, -40°C/min to -20°C and -100°C/min to -120°C, followed by direct immersion of the straws in liquid nitrogen. Changes in temperature within an individual straw were continuously measured, and these data were found to be highly correlated with the eventual post-thaw motility of frozen-thawed spermatozoa. Although freezing and thawing resulted in major acrosomal deterioration, there were no significant differences between freezing regimens on the basis of in vitro induction of acrosome reactions, as assessed by DIC microscopy. Finally, ultrastructural studies using SEM and TEM upon chilled ('ready to freeze') and frozen-thawed spermatozoa subjected to the A23187-induced acrosome reaction demonstrated that freeze-thawing provoked the acrosome reaction such that, under TEM, (i) the plasma membrane was usually damaged or missing, (ii) the acrosomal changes (including loss of acrosomal content, seen as decondensation and swelling), except vesiculation of the acrosomal membranes, extended to the equatorial segment and (iii) further damage occurred to the post-acrosomal region. In summary, these results show that for cryopreservation semen should be (i) centrifuged for dilutions of < 1:8, (ii) diluted at 1:8 in Tris-fructose-citric acid extender containing 20% egg yolk, (iii) glycerolised at +4°C at a final concentration of 2% glycerol (v/v) and (iv) cooled at -0.5°C/min from +4°C to -9°C, at -40°C/min to -20°C and at -100°C/min to -120°C, followed by direct immersion of the straws in liquid nitrogen; and that for in vitro induction of the acrosome reaction semen should be (i) introduced to 1 μM A23187 in TALP and (ii) incubated for at least 30-45 min. The work thereby demonstrates that optimisation of both cryopreservation and in vitro induction of the acrosome reaction of dog spermatozoa is possible.
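For concreteness, the reported optimal freezing regimen can be written out as data and its segment durations computed. The temperatures and rates are those stated above; the code representation itself is only illustrative.

```python
# The reported optimal cooling regimen, expressed as (start °C, end °C, rate °C/min).
# Figures come from the abstract; the representation is illustrative.
regimen = [
    (4.0,   -9.0,   0.5),    # slow cooling from +4°C to -9°C
    (-9.0,  -20.0,  40.0),   # rapid cooling to -20°C
    (-20.0, -120.0, 100.0),  # very rapid cooling to -120°C before LN2 immersion
]

total = 0.0
for start, end, rate in regimen:
    minutes = abs(end - start) / rate
    total += minutes
    print(f"{start:>7.1f} °C -> {end:>7.1f} °C at -{rate} °C/min: {minutes:.2f} min")

print(f"programme length before liquid-nitrogen immersion: {total:.1f} min")
```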
244

Adaptive Goal Oriented Action Planning for RTS Games

Hall, Tobias, Magnusson, Matteus January 2010 (has links)
This thesis describes the architecture of an adaptive goal-oriented AI system that can be used for Real-Time Strategy games. At the end, the system is tested against a single opponent on three different maps of different sizes, to compare the ability of the AI with the 'standard' Finite State Machines and the like used in Real-Time Strategy games. The system consists of a task handler agent that manages all the active and halted tasks. A task is either low-level, used for ordering units, or high-level, capable of forming advanced strategies. The General forms the plans that are most advantageous at the moment. For creating effective units against the opponent, a priority system is used in which the unit priorities are calculated dynamically.
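As a rough sketch of the dynamically calculated unit priorities mentioned above (the abstract gives no formula, so the counter table and scoring below are invented for illustration):

```python
# Illustrative sketch of dynamically calculated unit priorities, in the spirit of
# the priority system described above. The counter table and weighting are
# invented; the thesis does not specify them.
COUNTERS = {
    # unit we could build -> opponent unit types it is effective against
    "archer":  {"infantry"},
    "cavalry": {"archer", "siege"},
    "pikeman": {"cavalry"},
}

def unit_priorities(observed_enemy_counts: dict) -> dict:
    total = sum(observed_enemy_counts.values()) or 1
    scores = {}
    for unit, counters in COUNTERS.items():
        # priority grows with the fraction of the enemy army this unit counters
        scores[unit] = sum(observed_enemy_counts.get(e, 0) for e in counters) / total
    return scores

# Example: the scouted enemy army is cavalry-heavy, so pikemen get the top priority.
print(unit_priorities({"cavalry": 12, "archer": 3, "infantry": 5}))
```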
245

Prestandaförändringar vid estetiska förbättringar av A*algoritmen

Olofsson, Jim January 2012 (has links)
Path-planning algorithms are often used in computer games to navigate computer-controlled units. One of the most common algorithms used for path planning is the A* algorithm, which can efficiently find the shortest path between two positions in a game level. However, the algorithm has no support for producing aesthetically pleasing paths, which can cause game units to move through the level like robots. This work presents and analyses algorithms that can be used in combination with the A* algorithm to make the paths straighter, smoother and more direct. The algorithms are implemented in a program in which their memory usage, time efficiency and path length are measured while running through a game level containing walls and obstacles. The results at the end of the report show that the aesthetic-improvement algorithms can be implemented to greatly improve the aesthetic performance of the A* algorithm, without any major impact on its memory usage, time efficiency or path length. The results from both the product and the evaluation could be used in future game projects.
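One standard way to make A* grid paths straighter and more direct, of the general kind evaluated in the report, is line-of-sight 'string pulling' applied as a post-processing step. The sketch below is illustrative only and is not the report's actual implementation; the grid and path are made up.

```python
# Illustrative post-processing of an A* grid path: skip ahead to the furthest
# waypoint still in line of sight, which straightens the staircase-like path.
# This is a standard smoothing technique, not the report's exact code.

def line_of_sight(grid, a, b):
    """Sample the segment from a to b finely and check it avoids wall cells."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0)) * 4 or 1
    for i in range(steps + 1):
        t = i / steps
        x, y = round(x0 + (x1 - x0) * t), round(y0 + (y1 - y0) * t)
        if grid[y][x] == 1:          # 1 = wall
            return False
    return True

def smooth(grid, path):
    if not path:
        return path
    result, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        # Walk back until a visible waypoint is found (adjacent cells always are).
        while j > i + 1 and not line_of_sight(grid, path[i], path[j]):
            j -= 1
        result.append(path[j])
        i = j
    return result

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
# Hypothetical A* output hugging the obstacle (x, y coordinates):
path = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (3, 2)]
print(smooth(grid, path))   # fewer waypoints, same start and goal
```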
246

Cooperative and intelligent control of multi-robot systems using machine learning

Wang, Ying 05 1900 (has links)
This thesis investigates cooperative and intelligent control of autonomous multi-robot systems in a dynamic, unstructured and unknown environment and makes significant original contributions with regard to self-deterministic learning for robot cooperation, evolutionary optimization of robotic actions, improvement of system robustness, vision-based object tracking, and real-time performance. A distributed multi-robot architecture is developed which will facilitate operation of a cooperative multi-robot system in a dynamic and unknown environment in a self-improving, robust, and real-time manner. It is a fully distributed and hierarchical architecture with three levels. By combining several popular AI, soft computing, and control techniques such as learning, planning, reactive paradigm, optimization, and hybrid control, the developed architecture is expected to facilitate effective autonomous operation of cooperative multi-robot systems in a dynamically changing, unknown, and unstructured environment. A machine learning technique is incorporated into the developed multi-robot system for self-deterministic and self-improving cooperation and coping with uncertainties in the environment. A modified Q-learning algorithm termed Sequential Q-learning with Kalman Filtering (SQKF) is developed in the thesis, which can provide fast multi-robot learning. By arranging the robots to learn according to a predefined sequence, modeling the effect of the actions of other robots in the work environment as Gaussian white noise and estimating this noise online with a Kalman filter, the SQKF algorithm seeks to solve several key problems in multi-robot learning. As a part of low-level sensing and control in the proposed multi-robot architecture, a fast computer vision algorithm for color-blob tracking is developed to track multiple moving objects in the environment. By removing the brightness and saturation information in an image and filtering unrelated information based on statistical features and domain knowledge, the algorithm solves the problems of uneven illumination in the environment and improves real-time performance. / Applied Science, Faculty of / Mechanical Engineering, Department of / Graduate
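A highly simplified sketch of the idea behind SQKF follows: tabular Q-learning in which the disturbance caused by the other robots is modelled as Gaussian noise and tracked online with a scalar Kalman filter. The class, parameters and update rule are illustrative assumptions; the actual SQKF algorithm in the thesis is considerably more involved.

```python
# Highly simplified, illustrative sketch of the SQKF idea: tabular Q-learning in
# which the disturbance from other robots is treated as Gaussian noise on the
# observed return and tracked with a scalar Kalman filter. Names and parameters
# are assumptions, not the thesis's algorithm.
import random
from collections import defaultdict

class SQKFLikeAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Scalar Kalman filter over the mean disturbance caused by other robots.
        self.noise_mean, self.noise_var = 0.0, 1.0
        self.process_var, self.measurement_var = 0.01, 0.5

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # TD error before correcting for the estimated disturbance.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]

        # Kalman update: treat the TD error as a noisy measurement of the
        # disturbance contributed by the other robots.
        pred_var = self.noise_var + self.process_var
        gain = pred_var / (pred_var + self.measurement_var)
        self.noise_mean += gain * (td_error - self.noise_mean)
        self.noise_var = (1 - gain) * pred_var

        # Learn from the disturbance-corrected error.
        self.q[(state, action)] += self.alpha * (td_error - self.noise_mean)

agent = SQKFLikeAgent(actions=["push", "wait", "turn"])
agent.update("at_box", "push", reward=1.0, next_state="box_moved")
```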
247

Machine Learning Models for Biomedical Ontology Integration and Analysis

Smaili, Fatima Z. 13 September 2020 (has links)
Biological knowledge is widely represented in the form of ontologies and ontology-based annotations. Biomedical ontologies describe known phenomena in biology using formal axioms, and the annotations associate an entity (e.g. genes, diseases, chemicals, etc.) with a set of biological concepts. In addition to formally structured axioms, ontologies contain meta-data in the form of annotation properties expressed mostly in natural language which provide valuable pieces of information that characterize ontology concepts. The structure and information contained in ontologies and their annotations make them valuable for use in machine learning, data analysis and knowledge extraction tasks. I develop the first approaches that can exploit all of the information encoded in ontologies, both formal and informal, to learn feature embeddings of biological concepts and biological entities based on their annotations to ontologies. Notably, I develop the first approach to use all the formal content of ontologies in the form of logical axioms and entity annotations to generate feature vectors of biological entities using neural language models. I extend the proposed algorithm by enriching the obtained feature vectors through representing the natural language annotation properties within the ontology meta-data as axioms. Transfer learning is then applied to learn from the biomedical literature and apply it to the formal knowledge of ontologies. To optimize learning that combines the formal content of biomedical ontologies and natural language data such as the literature, I also propose a new approach that uses self-normalization with a deep Siamese neural network that improves learning from both the formal knowledge within ontologies and textual data. I validate the proposed algorithms by applying them to the Gene Ontology to generate feature vectors of proteins based on their functions, and to the PhenomeNet ontology to generate features of genes and diseases based on the phenotypes they are associated with. The generated features are then used to train a variety of machine-learning-based classifiers to perform different prediction tasks including the prediction of protein interactions, gene–disease associations and the toxicological effects of chemicals. I also use the proposed methods to conduct the first quantitative evaluation of the quality of the axioms and meta-data included in ontologies to prove that including axioms as background improves ontology-based prediction. The proposed approaches can be applied to a wide range of other bioinformatics research problems including similarity-based prediction and classification of interaction types using supervised learning, or clustering.
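The general recipe described above — serialising ontology axioms, entity annotations and natural-language annotation properties into a corpus and feeding it to a neural language model — can be sketched minimally as follows. The toy axioms, identifiers and gensim-based setup are assumptions for illustration and do not reproduce the thesis's actual pipeline.

```python
# Minimal, illustrative sketch: flatten ontology axioms and annotations into
# token sequences, then learn embeddings with a neural language model (word2vec
# via gensim). Toy identifiers and settings only.
from gensim.models import Word2Vec

# Toy "corpus": each axiom or annotation becomes one tokenised sentence.
corpus = [
    # formal axioms (e.g. Gene Ontology classes), flattened to token sequences
    ["GO:0006915", "SubClassOf", "GO:0008219"],
    ["GO:0006915", "part_of", "GO:0012501"],
    # entity annotations: hypothetical proteins annotated with GO classes
    ["P12345", "hasFunction", "GO:0006915"],
    ["P67890", "hasFunction", "GO:0008219"],
    # natural-language annotation properties, tokenised alongside the axioms
    ["GO:0006915", "label", "apoptotic", "process"],
]

model = Word2Vec(sentences=corpus, vector_size=32, window=5,
                 min_count=1, sg=1, epochs=200, seed=1)

# The learned vectors can feed downstream classifiers (protein interactions,
# gene-disease associations, toxicology, ...).
print(model.wv["P12345"][:5])
print(model.wv.similarity("P12345", "P67890"))
```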
248

Machine Learning Models for Biomedical Ontology Integration and Analysis

Smaili, Fatima Z. 14 September 2020 (has links)
Biological knowledge is widely represented in the form of ontologies and ontology-based annotations. Biomedical ontologies describe known phenomena in biology using formal axioms, and the annotations associate an entity (e.g. genes, diseases, chemicals, etc.) with a set of biological concepts. In addition to formally structured axioms, ontologies contain meta-data in the form of annotation properties expressed mostly in natural language which provide valuable pieces of information that characterize ontology concepts. The structure and information contained in ontologies and their annotations make them valuable for use in machine learning, data analysis and knowledge extraction tasks. I develop the first approaches that can exploit all of the information encoded in ontologies, both formal and informal, to learn feature embeddings of biological concepts and biological entities based on their annotations to ontologies. Notably, I develop the first approach to use all the formal content of ontologies in the form of logical axioms and entity annotations to generate feature vectors of biological entities using neural language models. I extend the proposed algorithm by enriching the obtained feature vectors through representing the natural language annotation properties within the ontology meta-data as axioms. Transfer learning is then applied to learn from the biomedical literature and apply it to the formal knowledge of ontologies. To optimize learning that combines the formal content of biomedical ontologies and natural language data such as the literature, I also propose a new approach that uses self-normalization with a deep Siamese neural network that improves learning from both the formal knowledge within ontologies and textual data. I validate the proposed algorithms by applying them to the Gene Ontology to generate feature vectors of proteins based on their functions, and to the PhenomeNet ontology to generate features of genes and diseases based on the phenotypes they are associated with. The generated features are then used to train a variety of machine-learning-based classifiers to perform different prediction tasks including the prediction of protein interactions, gene–disease associations and the toxicological effects of chemicals. I also use the proposed methods to conduct the first quantitative evaluation of the quality of the axioms and meta-data included in ontologies to prove that including axioms as background improves ontology-based prediction. The proposed approaches can be applied to a wide range of other bioinformatics research problems including similarity-based prediction and classification of interaction types using supervised learning, or clustering.
249

Requirements Analysis for AI solutions : a study on how requirements analysis is executed when developing AI solutions

Olsson, Anton, Joelsson, Gustaf January 2019 (has links)
Requirements analysis is an essential part of the System Development Life Cycle (SDLC) and is needed to achieve success in a software development project. There are several methods, techniques and frameworks used when expressing, prioritizing and managing requirements in IT projects. It is widely established that it is difficult to determine requirements even for traditional systems, so a question naturally arises as to how requirements analysis is executed when AI solutions (which even fewer individuals can grasp) are being developed. Little research has been done on how the vital requirements phase is executed during the development of AI solutions. This research aims to investigate the requirements analysis phase during the development of AI solutions. To explore this topic, an extensive literature review was conducted, and in order to collect new information, interviews were performed with five suitable organizations (i.e., organizations that develop AI solutions). The research concludes that requirements analysis for AI solutions does not differ fundamentally from that for traditional systems. However, it also showed some deviations that can be deemed particularly unique to the development of AI solutions and that affect the requirements analysis. These are: (1) the need for an iterative and agile systems development process, with an associated iterative and agile requirements analysis, (2) the importance of having a large set of quality data, (3) the relative deprioritization of user involvement, and (4) the difficulty of establishing the timeframe, results/feasibility and the behavior of the AI solution beforehand.
250

/Maybe/Probably/Certainly

Häggström, Frida January 2020 (has links)
This project is an experimentation and examination of contemporary computer vision and machine learning, with an emphasis on machine-generated imagery and text, as well as object identification. In other words, this is a study of how computers and machines are learning to see and recognize the world. Computer vision is a kind of visual communication that we rarely think of as being designed. With an emphasis on written and visual research, this project aims to comprehend what exactly goes into the creation of machine-generated imagery and artificial vision systems. I have spent the last couple of months looking through the lens of cameras, object-identification apps and generative neural networks in order to try and understand how AI perceives reality. This resulted in a mixed-media story about images and vision, told through the perspective of a fictional AI character. Visit www.maybe-probably.com to view the project.
