  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Towards Evolving More Brain-Like Artificial Neural Networks

Risi, Sebastian 01 January 2012 (has links)
An ambitious long-term goal for neuroevolution, which studies how artificial evolutionary processes can be driven to produce brain-like structures, is to evolve neurocontrollers with a high density of neurons and connections that can adapt and learn from past experience. Yet while neuroevolution has produced successful results in a variety of domains, the scale of natural brains remains far beyond reach. In this dissertation two extensions to the recently introduced Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach are presented that are a step towards more brain-like artificial neural networks (ANNs). First, HyperNEAT is extended to evolve plastic ANNs that can learn from past experience. This new approach, called adaptive HyperNEAT, allows not only patterns of weights across the connectivity of an ANN to be generated by a function of its geometry, but also patterns of arbitrary local learning rules. Second, evolvable-substrate HyperNEAT (ES-HyperNEAT) is introduced, which relieves the user from deciding where the hidden nodes should be placed in a geometry that is potentially infinitely dense. This approach not only can evolve the location of every neuron in the network, but also can represent regions of varying density, which means resolution can increase holistically over evolution. The combined approach, adaptive ES-HyperNEAT, unifies for the first time in neuroevolution the abilities to indirectly encode connectivity through geometry, generate patterns of heterogeneous plasticity, and simultaneously encode the density and placement of nodes in space. The dissertation culminates in a major application domain that takes a step towards the general goal of adaptive neurocontrollers for legged locomotion.
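The indirect encoding at the heart of HyperNEAT — a CPPN that maps the geometric coordinates of two substrate neurons to the weight of the connection between them — can be sketched as follows. This is a minimal illustration only: in HyperNEAT the CPPN's topology and weights are themselves evolved by NEAT, whereas the fixed function composition and the expression threshold below are assumptions for the sketch.

```python
import math

def cppn(x1, y1, x2, y2):
    """Stand-in for an evolved CPPN: maps the substrate coordinates of
    a source and a target neuron to a connection weight. This fixed
    composition of functions is illustrative, not an evolved network."""
    d = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
    return math.tanh(x1 * x2 - y1 * y2 + math.sin(3.0 * d))

def build_substrate(inputs, outputs, threshold=0.2):
    """Query the CPPN for every input-output coordinate pair;
    connections whose weight magnitude falls below the threshold
    are not expressed in the resulting network."""
    connections = {}
    for src in inputs:
        for dst in outputs:
            w = cppn(*src, *dst)
            if abs(w) > threshold:
                connections[(src, dst)] = w
    return connections

# neurons laid out on a 2-D substrate: inputs at y = -1, outputs at y = 1
inputs = [(-1.0, -1.0), (0.0, -1.0), (1.0, -1.0)]
outputs = [(-1.0, 1.0), (1.0, 1.0)]
net = build_substrate(inputs, outputs)
```

In adaptive HyperNEAT the same CPPN would additionally output parameters of a local learning rule for each connection, and in ES-HyperNEAT the set of queried hidden-node coordinates would itself be derived from where the CPPN encodes information, rather than being fixed by the user as here.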
22

Développement d'une approche floue multicritère d'aide à la coordination des décideurs pour la résolution des problèmes de sélection dans les chaines logistiques / Multi-criteria group decision making approach for the selection problem

Igoulalene, Idris 02 December 2014 (has links)
In this thesis, our objective is to develop a multi-criteria approach for supporting the coordination of decision makers in solving selection problems in supply chains. We consider the case of k decision makers/experts, denoted ST1,...,STk, who seek to rank a set of m alternatives/choices, denoted A1,...,Am, evaluated against n conflicting criteria, denoted C1,...,Cn. All of the data handled are fuzzy, covering both subjective (qualitative) and objective (quantitative) criteria. Each decision maker expresses his or her preferences for each alternative with respect to each criterion through a so-called preference matrix. Our approach comprises two main phases: a consensus phase, which seeks a global agreement among the decision makers, and a ranking phase, which ranks the alternatives. For the consensus phase, we adapted two consensus mechanisms, one based on the neat OWA operator and the other on the possibility measure, and developed a new consensus mechanism based on goal programming. For the ranking phase, we adapted first the TOPSIS method and then the goal programming model with satisfaction functions. This yields three new approaches, applied to manufacturing strategy, information system, and robot selection problems:
1. Fuzzy consensus-based possibility measure and goal programming approach.
2. Fuzzy consensus-based neat OWA and goal programming approach.
3. Fuzzy consensus-based goal programming and TOPSIS approach.
To illustrate the applicability of the approach, we used various selection problems from supply chains: training system selection, supplier selection, robot selection, and warehouse selection. Finally, a comparison of the three approaches was conducted, yielding recommendations for improving them and providing decision aid that best satisfies the decision makers.
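The ranking phase can be illustrated with a crisp (non-fuzzy) TOPSIS sketch. The thesis works with fuzzy preference matrices and consensus-adjusted weights, so the supplier scores, weights, and criteria below are purely hypothetical:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with crisp TOPSIS.
    matrix[i][j]: score of alternative i on criterion j;
    weights[j]: criterion weight; benefit[j]: True if higher is better.
    Returns closeness coefficients (higher = better)."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the criterion weight
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal and anti-ideal values per criterion
    best = [max(v[i][j] for i in range(m)) if benefit[j]
            else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_best = math.sqrt(sum((v[i][j] - best[j]) ** 2 for j in range(n)))
        d_worst = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# three hypothetical suppliers scored on cost (lower better) and quality (higher better)
scores = topsis([[70, 9], [50, 7], [60, 8]], [0.5, 0.5], [False, True])
```

An alternative closest to the ideal point and farthest from the anti-ideal point receives a closeness coefficient near 1; in the consensus phase the thesis would first reconcile the k individual preference matrices before any such ranking is computed.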
23

Neat drummer : computer-generated drum tracks

Hoover, Amy K. 01 January 2008 (has links)
Computer-generated music composition programs have yet to produce creative, natural-sounding music. To date, most approaches constrain the search space heuristically while ignoring the inherent structure of music over time. To address this problem, this thesis introduces NEAT Drummer, which evolves a special kind of artificial neural network (ANN) called a compositional pattern producing network (CPPN) with the NeuroEvolution of Augmenting Topologies (NEAT) method for evolving increasingly complex structures. CPPNs in NEAT Drummer input existing human compositions and output an accompanying drum track. The existing musical parts form a scaffold, i.e. a support structure, for the drum pattern outputs, thereby exploiting the functional relationship of drums to musical parts (e.g. to lead guitar, bass, etc.). The results are convincing drum patterns that follow the contours of the original song, validating a new approach to computer-generated music composition.
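The scaffolding idea — a network that reads the existing musical parts at each time step and emits drum activations — can be sketched as follows. In NEAT Drummer the network is a CPPN whose topology is evolved with NEAT; the fixed function, the thresholds, and the toy melody below are assumptions for illustration only:

```python
import math

def drum_cppn(guitar, bass, beat):
    """Toy stand-in for an evolved CPPN: maps the song's existing parts
    at one time step (plus the position within the bar) to drum hits.
    Because the inputs come from the human composition, the output
    pattern necessarily follows the song's contours."""
    kick = math.sin(math.pi * beat) + 0.3 * bass
    snare = math.cos(2 * math.pi * beat) + 0.2 * guitar
    return {"kick": kick > 0.5, "snare": snare > 0.5}

# drive the drums from a hypothetical eight-step melody and bass line
melody = [0.2, 0.5, 0.8, 0.5, 0.2, 0.5, 0.8, 0.5]
bass_line = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
track = [drum_cppn(g, b, i / 8.0)
         for i, (g, b) in enumerate(zip(melody, bass_line))]
```

In the actual system, evolution searches over such input-to-drum mappings, so the user selects among candidate tracks rather than hand-tuning a formula like the one above.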
24

A comparison of smoothing methods for the common item nonequivalent groups design

Kim, Han Yi 01 July 2014 (has links)
The purpose of this study was to compare the relative performance of various smoothing methods under the common item nonequivalent groups (CINEG) design. In light of the previous literature on smoothing under the CINEG design, this study aimed to provide general guidelines and practical insights on the selection of smoothing procedures under specific testing conditions. To investigate the smoothing procedures, 100 replications were simulated under various testing conditions by using an item response theory (IRT) framework. A total of 192 conditions (3 sample size × 4 group ability difference × 2 common-item proportion × 2 form difficulty difference × 1 test length × 2 common-item type × 2 common-item difficulty spread) were investigated. Two smoothing methods, polynomial log-linear presmoothing and cubic spline postsmoothing, were considered with four equating methods: frequency estimation (FE), modified frequency estimation (MFE), chained equipercentile equating (CE), and kernel equating (KE). Bias, standard error, and root mean square error were computed to evaluate the performance of the smoothing methods. Results showed that 1) there were always one or more smoothing methods that produced smaller total error than unsmoothed methods; 2) polynomial log-linear presmoothing tended to perform better than cubic spline postsmoothing in terms of systematic and total errors when FE or MFE were used; 3) cubic spline postsmoothing showed a strong tendency to produce the least amount of random error regardless of the equating method used; 4) KE produced more accurate equating relationships under a majority of testing conditions when paired with CE; and 5) log-linear presmoothing produced smaller total error under a majority of testing conditions than did cubic spline postsmoothing. Tables are provided to show the best-performing method for all combinations of testing conditions considered.
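The evaluation criteria computed across replications — bias, standard error, and total (root mean square) error — can be sketched as follows; the replication estimates and the criterion equating relationship below are hypothetical values, not data from the study:

```python
import math

def equating_error(estimates, criterion):
    """Summarise equating error over R replications at one score point:
    bias (systematic error), standard error (random error), and RMSE,
    where RMSE^2 = bias^2 + SE^2 (total error)."""
    r = len(estimates)
    mean = sum(estimates) / r
    bias = mean - criterion
    se = math.sqrt(sum((e - mean) ** 2 for e in estimates) / r)
    rmse = math.sqrt(bias ** 2 + se ** 2)
    return bias, se, rmse

# hypothetical equated scores from 5 replications; true equivalent = 25.0
bias, se, rmse = equating_error([24.8, 25.3, 25.1, 24.9, 25.4], 25.0)
```

A smoothing method that trades a small increase in bias for a larger reduction in standard error can still win on total error, which is why the study evaluates all three quantities rather than any one alone.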
25

Utveckling av artificiell intelligens med genetiska tekniker och artificiella neurala nätverk

Ruuska Boquist, Philip January 2009 (has links)
Using artificial neural networks in computer games is becoming an increasingly popular way to control the computer-controlled agents, since it gives the agents more human-like behaviour and the ability to generalise, meet new situations, and handle them in a way that other types of artificial intelligence cannot always manage. The difficulty with this technique is training the network, which often requires a long learning period and many different training cases. By using genetic algorithms to train the networks, much of this time- and performance-intensive work can be avoided. This report investigates the possibility of using genetic techniques to train artificial neural networks in an environment adapted to, and focused on, games. Using genetic techniques to train artificial neural networks is a good learning technique for problems where a suitable fitness function is easy to create and where other learning techniques can be difficult to apply. It is not, however, a technique that removes the work from the developer entirely; instead, it shifts the work towards designing the fitness function and tuning parameters.
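The approach the abstract describes — training a fixed artificial neural network with a genetic algorithm against a hand-written fitness function instead of gradient-based learning — can be sketched minimally as follows. The 2-2-1 network topology, the XOR task, and all GA parameters are assumptions chosen for illustration:

```python
import random

def forward(w, x1, x2):
    """Tiny fixed 2-2-1 network; w holds its 9 weights (incl. biases)."""
    h1 = max(0.0, w[0] * x1 + w[1] * x2 + w[2])  # ReLU hidden unit 1
    h2 = max(0.0, w[3] * x1 + w[4] * x2 + w[5])  # ReLU hidden unit 2
    return w[6] * h1 + w[7] * h2 + w[8]

CASES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # XOR truth table

def fitness(w):
    """Higher is better: negative squared error over the XOR cases.
    This hand-written fitness function replaces a training signal."""
    return -sum((forward(w, a, b) - t) ** 2 for a, b, t in CASES)

def evolve(pop_size=60, generations=200, sigma=0.4, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]  # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            p = rng.choice(parents)
            children.append([g + rng.gauss(0, sigma) for g in p])  # mutation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

As the report notes, the developer's effort moves into the fitness function and the GA parameters (population size, mutation strength, selection pressure) rather than into collecting supervised training cases.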
27

Neuronové sítě a genetické algoritmy / Neural Networks and Genetic Algorithm

Karásek, Štěpán January 2016 (has links)
This thesis deals with neural networks and genetic algorithms and the possible ways of combining them. The theoretical part describes genetic algorithms and neural networks, and presents possible combinations and existing algorithms. The practical part describes the implementation of the NEAT algorithm and the experiments performed; a combination with differential evolution is proposed and tested. Lastly, NEAT is compared with backpropagation (for feed-forward neural networks) and backpropagation through time (for recurrent neural networks), the standard methods for training neural networks. The comparison focuses on learning speed, the quality of the network's responses, and their dependence on network size.
28

Burnout, NO, Flame Temperature, and Radiant Intensity from Oxygen-Enriched Combustion of a Hardwood Biomass

Thornock, Joshua David 01 December 2013 (has links)
Increasing concern for energy sustainability has created motivation for the combustion of renewable, CO2-neutral fuels. Biomass co-firing with coal provides a means of combining the scaled efficiencies of coal with the lower supply availability of biomass. One of the challenges of co-firing is the burnout of biomass particles, which are typically larger than coal particles but must be oxidized in the same residence time. Larger biomass particles can also increase the length of the radiative region and alter heat flux profiles. As a result, oxygen injection is being investigated as a means of improving biomass combustion performance. An Air Liquide-designed burner was used to investigate the impact of oxygen enrichment on biomass combustion using two size distributions of ground wood pellets (fine grind, 220 µm, and medium grind, 500 µm mass mean diameter). Flame images were obtained with a calibrated RGB digital camera, allowing a calculation of visible radiative heat flux. Ash samples and exhaust NO were collected for a matrix of operating conditions with varying injection strategies. The results showed that oxygen can be either beneficial or detrimental to the flame length, depending on the momentum of the oxygen jet. Oxygen injection was found to improve carbon burnout, particularly for the larger wood particles. Low flow rates of oxygen enrichment (2 to 6 kg/hr) also produced a modest increase in NO formation of up to 30%. The results showed that combustion of the medium grind (~500 µm mass mean diameter) particles could improve loss on ignition (LOI) from 30% to 15% with an oxygen flow rate of 8 kg/hr. Flame images showed that low flow rates of O2 (2 kg/hr) in the center of the burner with the fine particles produced a dual flame: one flame surrounding the center oxygen jet and a second flame between the volatiles and the secondary air. The flame surrounding the center oxygen jet produced a very high intensity and temperature (2100 K).
This center flame can be used to help stabilize the flame, increase devolatilization rates, and potentially improve the trade-off between NO and burnout.
29

Quantifying the effect of exercise on total energy expenditure in obese women

Colley, Rachel Christine January 2007 (has links)
The prevalence of obesity continues to increase despite considerable research and innovation regarding treatment and management strategies. When completed as prescribed, exercise training is associated with numerous health benefits and predictable levels of weight loss. However, under free-living conditions the benefits of exercise are less consistent, suggesting that non-adherence and/or a compensatory response in non-exercise activity thermogenesis (NEAT) may be occurring. The accurate quantification of all components of total energy expenditure (TEE), including TEE itself, was imperative to elucidate the primary research question relating to the impact of exercise on TEE. In addition, the measurement of changes in body composition and the response to prescribed exercise were assessed in methodological and pilot investigations. Following this extensive background, the primary research question relating to the effect of exercise on levels of TEE and the associated implications of such a compensatory response could be more rigorously investigated. The first study investigated the variability in isotopic equilibrium time under field conditions, and the impact of this variability on estimates of total body water (TBW) and body composition when using the deuterium dilution technique. Following the collection of a fasting baseline urine sample, 10 women and 10 men were dosed with deuterium oxide (0.05g/kg body weight). Urine samples were collected every hour for 8 hours. The samples were analysed using isotope ratio mass spectrometry and time to equilibration was determined using three commonly employed data analysis approaches. Isotopic equilibrium was reached by 50, 80 and 100% of participants at 4, 6 and 8 h, respectively. The mean group equilibration times determined using the three different plateau determination methods were 4.8 ± 1.5, 3.8 ± 0.8, and 4.9 ±1.4 h, respectively. 
Isotopic enrichment, TBW, and percent body fat estimates differed between early sampling times (3-5 h), but not later sampling times (5-8 h). Therefore, sampling < 6 hours post dose compared to sampling ≥ 6 hours resulted in greater relative measurement error in TBW and body composition estimates. Although differences in equilibration time were apparent between the three plateau determination approaches, sampling at 6 hours or later may decrease the likelihood of error in body composition estimates resulting from incomplete isotopic equilibration in a small proportion of individuals. In the second study, the aim was to measure the self-paced walking (SPW) speed of adults ranging in body size from normal to obese. The utility of heart rate monitors to estimate the energy cost of walking was also investigated. Twenty-nine participants (12 normal-weight, 17 overweight or obese) completed two outdoor walking tests to determine their SPW speed. A walking treadmill test with stages below, at, and above the SPW speed was completed to compare the energy expenditure estimates of the Polar S610 and WM42 heart rate monitors with that from indirect calorimetry. The average SPW speed was 1.7 ± 0.1 m*sec-1, which was equivalent to an exercise intensity of 48.6 ± 9.4 %VO2max (61.0 ± 7.1 %HRmax). There was no difference in the energy expenditure estimation between indirect calorimetry (4.7 ± 0.7 kcal*kg-1*h-1), the S610 (4.8 ± 1.3 kcal*kg-1*h-1), and the WM42 (4.8 ± 1.6 kcal*kg-1*h-1). It was concluded that the heart rate monitors provided reasonable energy expenditure estimates at the group level. However, considerable error was evident at the individual level, explained in part by exercise heart rate and fitness level, suggesting that an individualised calibration should be performed where possible.
An additional finding from this study was that 145 to 215 minutes of SPW per week, depending on the level of adiposity, is required to meet the current American College of Sports Medicine (ACSM) guideline for health of 1000 kcal*wk-1. The purpose of the third study was to establish the level of adherence to a specific exercise prescription (1500 kcal*wk-1) by objectively quantifying unsupervised exercise energy expenditure (ExEE) in a group of obese women. The 16-wk lifestyle intervention consisted of weekly meetings with research staff, combined with promotion of increased ExEE (1500 kcal*wk-1) and a decreased dietary intake (-500 kcal*d-1). Twenty-nine obese females (Body Mass Index = 36.8 ± 5.0 kg*m-2, Body Fat = 49.6 ± 3.7 %) from a hospital-based lifestyle intervention were included in the analysis. ExEE was estimated and monitored weekly using heart rate monitoring. Body composition was measured before and after the intervention by dual-energy x-ray absorptiometry (DXA). Results indicated that free-living adherence to the exercise prescription was modest and variable, with 14% of participants achieving the 1500 kcal*wk-1 target. The average weekly ExEE (768 kcal*wk-1) represented 51.2% of the total amount prescribed. ExEE was correlated with changes in body weight (r = 0.65, p < 0.001) and fat mass (r = 0.65, p = 0.0002). Achievement of a 5% weight loss target was dependent on an ExEE level of 1000 kcal*wk-1 (p < 0.001). Exercise 'adherers' (> 1000 kcal*wk-1) lost more weight (-9.9 vs. -4.1 kg), more fat mass (-6.8 vs. -3.0 kg), and more waist circumference (-9.8 vs. -5.6 cm) than 'non-adherers' (< 1000 kcal*wk-1). The results suggest that the extent of supervision and monitoring influenced exercise adherence rates. The variability in adherence highlights the importance of objective monitoring of ExEE.
Identification of individuals not complying with program targets may enable intervention staff to provide additional support or make individualised adjustments to the exercise prescription. The fourth study investigated issues relating to the management and interpretation of accelerometry data when the device is to be used to monitor levels of daily physical activity. Given the high between-individual variability in accelerometry output for a given walking speed, the use of a more individualised approach to the data management has been suggested. In addition, accelerometry was used to compare daily physical activity patterns between a supervised and unsupervised exercise prescription of the same dose (1500 kcal*wk-1) in overweight and obese women. Total energy expenditure, activity energy expenditure, and vector magnitude increased significantly during the intervention. Time spent in very low intensity movement decreased from baseline to the intervention (p < 0.01) in both the supervised (-18.6 min*d-1) and unsupervised (-68.5 min*d-1) group, whereas time spent in high and vigorous intensity movement increased significantly from baseline to the intervention (p < 0.05 and p < 0.0001, respectively). The increase in vigorous movement was significantly greater in the supervised group when compared to the unsupervised group (+11.5 vs. +5.4 min*d-1, p < 0.05). Time spent above three different moderate-intensity walking thresholds increased from baseline to the intervention (p < 0.0001). The threshold determination approach significantly affected the resultant outcomes (p < 0.0001) such that the standard threshold was significantly different to both group-specific and individualised approaches. Significant differences were also noted in accelerometer output between treadmill and overground walking (p < 0.0001). 
A positive finding of this study was that two different interventions aimed at increasing physical activity levels in a group of sedentary and obese women were successful in gaining modest increases in overall daily movement. The change observed appears to be a replacement of sedentary movement with more vigorous physical activity. Collectively, the differences observed between threshold determination approaches, as well as between treadmill and overground walking, highlight the need for standardised approaches to accelerometry data management and analysis. In addition, the findings suggest that obese women may benefit from a certain degree of exercise supervision to ensure compliance, however, strategies to encourage these women to continue with the exercise on their own without supervision are essential to making a sustainable long-term change to their lifestyles. The final study aimed to assess whether obese women compensate for structured exercise by decreasing their NEAT and thereby impeding weight loss. Thirteen participants were prescribed 1500 kcal*wk-1 of exercise through a structured walking program (4 week supervised followed by 4 weeks unsupervised). The energy expenditure of the walks was quantified using individually-calibrated Polar F4 heart rate monitors. The DLW technique was used to measure TEE. Accelerometry measures were also collected throughout and represented an alternative method of quantifying changes in total daily movement patterns resultant from an increase in energy expenditure through exercise. Compliance with the exercise program was excellent, with the average compliance being 94% over the 8-week intervention. The adoption of moderate-intensity exercise in this group of obese women resulted in a 12% decrease in TEE (p = 0.01) and a 67% decrease in NEAT (p < 0.05). No significant change was observed in resting metabolic rate from baseline to the postintervention time-point. 
Compensation was significantly correlated with dietary report bias (r = -0.84, p = 0.001), body image (r = 0.75, p < 0.01), and bodily pain (r = -0.65, p < 0.05). A linear regression model including dietary reporting bias and the pain score explained 78% of the variation in ΔTEE. Compensators were therefore less likely to underreport their dietary intake, less likely to be self-aware of their obese state, and more likely to be experiencing pain in their daily life. Self-reported dietary intake decreased significantly during the intervention (p = 0.01), with specific decreases noted in fat and carbohydrate intake. The consequence of compensation was evidenced by a lack of significant change in body weight, body composition, or blood lipids (p > 0.05). However, positive outcomes of the study included improvement in the SF-36 scores of general health (p < 0.05) and maintenance of exercise program adherence into the unsupervised phase of the intervention. Qualitative data collected via interview indicated that 85% of participants experienced increased energy and positive feedback from peers during the intervention. This study confirms that exercise needs to be prescribed with an individualised approach that takes into account the level of adiposity. The goal of exercise prescription for the obese should therefore be to determine the intensity and modality of exercise that does not activate compensatory behaviours, as these may in turn negate the beneficial effects of the additional energy expenditure of exercise. This study also confirms that during the initial phase of an exercise-based weight loss intervention, the majority of obese women compensated for some, if not all, of the energy cost of the exercise sessions by reducing NEAT. Whether this compensatory behaviour continues beyond the first month of an exercise program, particularly after training adaptations in cardiorespiratory fitness are realised, cannot be discerned from the current study. However, these results do provide a rationale for why the magnitude of weight loss achieved is often less than predicted during exercise interventions. Further research is required to examine the temporal pattern of compensation in NEAT, and the relationship between the time courses of NEAT compensation and physical fitness improvements. The results from this thesis support the use of activity monitors such as accelerometers during weight loss interventions to track NEAT and provide objective feedback regarding compensatory behaviours to clinicians and obese individuals.
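NEAT is not measured directly but derived by subtraction from measured total energy expenditure; the decomposition and the compensation calculation can be sketched as follows. The ~10% thermic-effect-of-food approximation and all kcal values below are assumptions for illustration, not data from the thesis:

```python
def neat_component(tee, rmr, exee, tef_fraction=0.1):
    """Derive NEAT by subtraction: NEAT = TEE - RMR - TEF - ExEE,
    where TEE is total energy expenditure (e.g. from doubly labelled
    water), RMR is resting metabolic rate, ExEE is structured exercise
    energy expenditure, and TEF (thermic effect of food) is commonly
    approximated as ~10% of TEE (an assumption here)."""
    return tee - rmr - tef_fraction * tee - exee

def neat_compensation(baseline, intervention):
    """Percent drop in NEAT from baseline to intervention; a positive
    value means structured exercise displaced spontaneous activity
    rather than adding to total daily expenditure."""
    pre = neat_component(*baseline)
    post = neat_component(*intervention)
    return 100.0 * (pre - post) / pre

# hypothetical kcal/day tuples: (TEE, RMR, ExEE)
drop = neat_compensation((2500, 1400, 0), (2450, 1400, 210))
```

In this hypothetical case, adding 210 kcal/day of prescribed exercise coincides with a fall rather than a rise in TEE, which is only possible if NEAT declined; the function quantifies that decline, mirroring how the thesis infers compensation.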
30

Falconet: Force-feedback Approach For Learning From Coaching And Observation Using Natural And Experiential Training

Stein, Gary 01 January 2009 (has links)
Building an intelligent agent model from scratch is a difficult task; it would be preferable to have an automated process perform it. Many manual and automatic techniques exist, but each has issues with obtaining, organizing, or making use of the data. Additionally, it can be difficult to get perfect data or, once the data is obtained, impractical to get a human subject to explain why some action was performed. Because of these problems, machine learning from observation emerged to produce agent models based on observational data. Learning from observation uses unobtrusive and purely observable information to construct an agent that behaves similarly to the observed human. Typically, an observational system builds an agent based only on prerecorded observations. This type of system works well with respect to agent creation, but lacks the ability to be trained and updated on-line. To overcome these deficiencies, the proposed system adds an augmented force-feedback mode of training that senses the agent's intentions haptically. Furthermore, because not all possible situations can be observed or directly trained, a third stage of learning from practice is added so the agent can gain additional knowledge for a particular mission. These stages of learning mimic the natural way a human might learn a task: first watching the task being performed, then being coached to improve, and finally practicing to self-improve. The hypothesis is that a system that is initially trained using human-recorded data (Observational), then tuned and adjusted using force-feedback (Instructional), and then allowed to perform the task in different situations (Experiential) will be better than any individual step or combination of steps.
