71

A hybrid multi-agent architecture and heuristics generation for solving meeting scheduling problem

Alratrout, Serein Abdelmonam January 2009
Agent-based computing has attracted much attention as a promising technique for application domains that are distributed, complex and heterogeneous. Current research on multi-agent systems (MAS) has matured enough for the technology to be applied to an increasingly wide range of complex applications. The main formal architectures used to describe the relationships between agents in MAS are centralised and distributed architectures. In computational complexity theory, problems are classified into the following categories: (i) P problems, (ii) NP problems, (iii) NP-complete problems, and (iv) NP-hard problems. No method is known for computing exact solutions to NP-hard problems in a reasonable time frame with the algorithms and computational power available today, and unfortunately many practical problems belong to this very class. These problems must nevertheless be solved, and the only practical option is approximation. Heuristic techniques are one such alternative: a heuristic is a strategy that is powerful in general, but not guaranteed to provide the best (i.e. optimal) solution, or even to find a solution at all. This motivates the adoption of optimisation techniques such as Evolutionary Algorithms (EA).

This research investigates the feasibility of running computationally intensive algorithms on multi-agent architectures while preserving the ability of small agents to run on small devices, including mobile devices. To achieve this, the present work proposes a new Hybrid Multi-Agent Architecture (HMAA) that generates new heuristics for solving NP-hard problems. The architecture is hybrid because it is "semi-distributed/semi-centralised": variables and constraints are distributed among small agents exactly as in distributed architectures, but when the small agents become stuck, centralised control takes over and the variables are transferred to a super agent. The super agent has a central view of the whole system and much greater computational power, and runs intensive algorithms to generate new heuristics for the small agents, which then find an optimal solution to the specified problem.

The research makes the following contributions: (1) the Hybrid Multi-Agent Architecture (HMAA) itself, which generates new heuristics for solving many NP-hard problems; (2) two implemented HMAA frameworks, for search and for optimisation; (3) a new SMA meeting scheduling heuristic; (4) a new SMA repair strategy for the scheduling process; (5) a Small Agent (SMA) responsible for meeting scheduling; (6) "Local Search Programming" (LSP), a new concept for evolutionary approaches; (7) two types of super agent (LGP_SUA and LSP_SUA) implemented in the HMAA, with two SUAs (local and global optima) for each type; and (8) a prototype of HMAA that employs the proposed meeting scheduling heuristic and repair strategy on SMAs, and the four intensive algorithms on SUAs. The results reveal that this architecture is applicable to many different application domains because of its simplicity and efficiency, and its performance was better than that of many existing meeting scheduling architectures. HMAA can also be adapted to other types of evolutionary approaches.
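The control flow the abstract describes, small agents solving locally and escalating to a super agent when stuck, can be sketched in a few lines. The Python below is a hypothetical illustration only: the class names, the toy value domain, and the random-restart stand-in for the super agent's intensive search are our assumptions, not the thesis's actual algorithms.

```python
import random

class SmallAgent:
    """Holds a subset of variables and constraints, as in a distributed MAS."""
    def __init__(self, variables, constraints):
        self.variables = variables      # e.g. {"slot_a": 0, ...}
        self.constraints = constraints  # callables over the assignment dict

    def local_step(self):
        """Cheap local heuristic: try values until the constraints hold."""
        for var in self.variables:
            for value in range(8):      # toy domain of 8 values
                self.variables[var] = value
                if all(c(self.variables) for c in self.constraints):
                    return True
        return False                    # stuck -> escalate to the super agent

class SuperAgent:
    """Centralised view of the whole system; runs a heavier search."""
    def resolve(self, variables, constraints, tries=1000):
        # Random-restart search stands in here for any computationally
        # intensive method the small devices cannot afford to run.
        for _ in range(tries):
            candidate = {v: random.randrange(8) for v in variables}
            if all(c(candidate) for c in constraints):
                return candidate
        return None

def hmaa_round(agents, super_agent):
    """One hybrid round: distributed first, centralised fallback on failure."""
    for agent in agents:
        if not agent.local_step():
            fixed = super_agent.resolve(agent.variables, agent.constraints)
            if fixed is not None:
                agent.variables.update(fixed)

# Toy usage: two meeting slots that must differ.
agent = SmallAgent({"a": 0, "b": 0}, [lambda v: v["a"] != v["b"]])
hmaa_round([agent], SuperAgent())
print(agent.variables)
```

In the thesis the super agent generates new heuristics for the small agents rather than raw assignments; the escalation pattern, however, is the same.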
72

Enhancing genetic programming for predictive modeling

König, Rikard January 2014
Thesis for the degree of Doctor of Technology in Computer Science, to be publicly defended on Tuesday 11 March 2014 at 13:15 in room M404, University of Borås. Opponent: Associate Professor (docent) Niklas Lavesson, Blekinge Institute of Technology, Karlskrona.
73

Data driven modelling for environmental water management

Syed, Mofazzal January 2007
Management of water quality is generally based on physically-based equations or hypotheses describing the behaviour of water bodies. In recent years, models built on larger amounts of collected data have been gaining popularity; this modelling approach can be called data driven modelling. Observational data represent specific knowledge, whereas a hypothesis represents a generalisation of this knowledge that implies and characterises all such observational data. Traditionally, deterministic numerical models have been used for predicting flow and water quality processes in inland and coastal basins. These models generally take a long time to run and cannot be used as on-line decision support tools, so imminent threats such as public health risks and flooding cannot be predicted in time. Data driven models, in contrast, are data intensive and have their own limitations: the extrapolation capability of data driven methods is a matter of conjecture, and the extensive data required for building a data driven model can be time- and resource-consuming to collect; when the aim is to predict the impact of a future development, the data are unlikely to exist at all.

The main objective of the study was to develop an integrated approach for rapid prediction of bathing water quality in estuarine and coastal waters. Faecal Coliforms (FC) were used as a water quality indicator, and two of the most popular data mining techniques, namely Genetic Programming (GP) and Artificial Neural Networks (ANNs), were used to predict FC levels in a pilot basin. In order to provide enough data for training and testing the neural networks, a calibrated hydrodynamic and water quality model was used to generate input data for the neural networks. A novel non-linear data analysis technique, called the Gamma Test, was used to determine the data noise level and the number of data points required for developing smooth neural network models. Details are given of the data driven models, the numerical models and the Gamma Test, and of a series of experiments undertaken to test data driven model performance for different numbers of input parameters and time lags. The response time of the receiving water quality to the input boundary conditions obtained from the hydrodynamic model proved to be useful knowledge for developing accurate and efficient neural networks. A natural phenomenon such as bacterial decay is affected by a whole host of parameters that cannot be captured accurately using deterministic models alone. The data-driven approach was therefore also investigated using field survey data collected in Cardiff Bay, to examine the relationship between bacterial decay and other parameters. Both the GP and ANN models gave similar, if not better, predictions of the field data in comparison with the deterministic model, with the added benefit of almost instant prediction of bacterial levels for this recreational water body. The models were also investigated using idealised and controlled laboratory data for velocity distributions along compound channel reaches, with idealised rods located on the floodplain to replicate large vegetation (such as mangrove trees).
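Since the Gamma Test carries the data-sizing argument above, a minimal sketch may help. The version below assumes only numpy and follows the usual formulation: pair each point with its k-th nearest neighbour in input space, then regress the half mean squared output difference (gamma) on the mean squared input distance (delta); the intercept approximates the output noise variance. It is an illustration under those assumptions, not the thesis's implementation.

```python
import numpy as np

def gamma_test(X, y, p=10):
    """Estimate the output noise variance of samples (X, y)."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    np.fill_diagonal(d2, np.inf)                         # exclude self-matches
    order = np.argsort(d2, axis=1)                       # neighbour ranks per point
    deltas, gammas = [], []
    for k in range(p):
        nbr = order[:, k]                                # k-th nearest neighbour
        deltas.append(d2[np.arange(n), nbr].mean())
        gammas.append(0.5 * ((y[nbr] - y) ** 2).mean())
    slope, intercept = np.polyfit(deltas, gammas, 1)     # gamma ~ slope*delta + G
    return intercept                                     # G: noise variance estimate

# Toy check: smooth signal with known noise variance 0.01.
rng = np.random.default_rng(0)
X = rng.random((500, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(500)
print(gamma_test(X, y))   # should come out near 0.01
```

Running the estimate on increasing prefixes of the data shows when the noise estimate stabilises, which is how the test informs how many points a smooth model needs.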
74

A teachable semi-automatic web information extraction system based on evolved regular expression patterns

Siau, Nor Zainah January 2014
This thesis explores Web Information Extraction (WIE) and how it has been used in decision making and to support businesses in their daily operations. The research focuses on a WIE system based on Genetic Programming (GP), with an extensible model to enhance the automatic extractor. A human acts as a teacher, identifying and extracting relevant information from semi-structured HTML web pages. Regular expressions, chosen as the pattern matching tool, are automatically generated from the training data to provide an improved grammar and lexicon. This particularly benefits the GP system, which may need to extend its lexicon in the presence of new tokens in the web pages; these tokens allow the GP method to produce new extraction patterns for new requirements.
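As a rough illustration of evolving regular expressions from labelled training snippets, the toy Python below mutates patterns built from a tiny token set and scores them on example matches. The token set, fitness function, and mutation scheme are our simplifications, not the system described in the thesis.

```python
import random, re

TOKENS = [r"\d+", r"[A-Za-z]+", r"\s", r"-", r"/", r":"]

def random_pattern(max_len=4):
    """A random concatenation of 1..max_len safe regex tokens."""
    return "".join(random.choice(TOKENS) for _ in range(random.randint(1, max_len)))

def fitness(pattern, positives, negatives):
    """Reward full matches on positives, penalise matches on negatives."""
    rx = re.compile(pattern)
    hits = sum(bool(rx.fullmatch(s)) for s in positives)
    false = sum(bool(rx.fullmatch(s)) for s in negatives)
    return hits - false

def evolve(positives, negatives, pop=50, gens=40):
    population = [random_pattern() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, positives, negatives), reverse=True)
        parents = population[: pop // 2]
        children = [p + random.choice(TOKENS) for p in parents]  # naive mutation
        population = parents + children
    population.sort(key=lambda p: fitness(p, positives, negatives), reverse=True)
    return population[0]

# e.g. learn a date-like token from snippets a teacher marked in a web page:
print(evolve(["12/03/2014", "01/01/2015"], ["hello", "12.2"]))
```

A real system would use richer operators (crossover, token specialisation) and an extensible lexicon, but the fitness-driven loop over pattern populations is the core idea.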
75

Modelling of process systems with genetic programming

Lotz, Marco 2006
Thesis (MScEng (Process Engineering))--University of Stellenbosch, 2006.

Genetic programming (GP) is a methodology that imitates genetic algorithms, using mutation and replication to produce algorithms or model structures based on Darwinian survival-of-the-fittest principles. Despite its obvious potential in process systems engineering, GP does not appear to have gained large-scale acceptance in process engineering applications. In this thesis, therefore, the following hypothesis was considered: genetic programming offers a competitive approach towards the automatic generation of process models from data. This was tested by comparing three different GP algorithms against classification and regression trees (CART) as a benchmark. Although these models could be assessed against several different criteria, the assessment was limited to the predictive power and interpretability of the models. CART was used as the benchmark because it is well established as a nonlinear approach to modelling and, more importantly, it can generate interpretable models in the form of IF-THEN rules.

Six case studies were considered. Two were based on simulated data (a regression and a classification problem), while the other four were based on real-world data obtained from the process industries (three classification problems and one regression problem). In the two simulated case studies, the CART models outperformed the GP models both in terms of predictive power and interpretability. In the four real-world case studies, two of the GP algorithms and CART performed equally in terms of predictive power. Mixed results were obtained as far as the interpretability of the models was concerned. The CART models always produced sets of IF-THEN rules that were in principle easy to interpret; however, when many rules are needed to represent the system (large trees), the tree models lose their interpretability, as was indeed the case in the majority of the case studies considered. Nonetheless, the CART models produced more interpretable structures in almost all the case studies. The exception was a case study on the classification of hot rolled steel plates (which could have surface defects or not), where one of the GP models produced a singularly simple model with the same predictive power as the classification tree. Although the GP models and their construction were generally more complex than the classification/regression trees and did not appear to afford any particular advantage in predictive power, they could in some cases provide more concise, interpretable models than CART. For this reason, the hypothesis of the thesis should arguably be accepted, especially if a high premium is placed on the development of interpretable models.
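For the CART side of the comparison, the IF-THEN readability the thesis relies on is easy to demonstrate. The sketch below assumes scikit-learn is available and uses a toy dataset in place of the process-industry data.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text flattens the fitted tree into the nested IF-THEN rule form
# that the thesis compares against GP models for interpretability.
print(export_text(tree, feature_names=list(data.feature_names)))
```

The depth cap matters: as the abstract notes, once a tree needs many rules to represent the system, this readable form stops being genuinely interpretable.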
76

Synergistic use of promoter prediction algorithms: a choice of small training dataset?

Oppon, Ekow CruickShank January 2000
Promoter detection, especially in prokaryotes, has always been an uphill task and may remain so, because of the many varieties of sigma factors employed by various organisms in transcription. The situation is made more complex by the fact that any seemingly unimportant sequence segment may be turned into a promoter sequence by an activator or repressor (if the actual promoter sequence is made unavailable). Nevertheless, a computational approach to promoter detection has to be pursued for a number of reasons. The obvious one that comes to mind is the long and tedious process involved in elucidating promoters in the 'wet' laboratories, not to mention the financial cost of such endeavours. Promoter detection/prediction for an organism with few characterised promoters (M. tuberculosis), as envisaged at the beginning of this work, was never going to be easy. Even for the few known Mycobacterial promoters, most of the respective sigma factors associated with their transcription were not known. If that promoter-sigma information had been available, the research would have focused on categorising the promoters according to sigma factors and training the methods on the respective categories, assuming that there would be enough training data for each category. Most promoter detection/prediction studies have been carried out on E. coli because of the availability of a number of experimentally characterised promoters (approximately 310). Even then, no researcher to date has extended the research to the entire E. coli genome.
77

The evolution of complete software systems

Withall, Mark S. January 2003
This thesis tackles a series of problems related to the evolution of complete software systems, both in terms of the underlying Genetic Programming system and the application of that system.

A new representation is presented that addresses some of the issues with other Genetic Program representations while keeping their advantages. This combines the easy reproduction of the linear representation with the inheritable characteristics of the tree representation by using fixed-length blocks of genes representing single program statements. This means that each block of genes will always map to the same statement in the parent and child unless it is mutated, irrespective of changes to the surrounding blocks. This method is compared to the variable-length gene blocks used by other representations, with a clear improvement in the similarity between parent and child.

Traditionally, fitness functions have either been created as a selection of sample inputs with known outputs or as hand-crafted evaluation functions. A new method of creating fitness evaluation functions is introduced that takes the formal specification of the desired function as its basis. This approach ensures that the fitness function is complete and concise. The fitness functions created from formal specifications are compared to simple input/output pairs, and the results show that the functions created from formal specifications perform significantly better.

A set of list evaluation and manipulation functions was evolved as an application of the new Genetic Program components. These functions have the common feature that they all need to be 100% correct to be useful, whereas traditional Genetic Programming problems have mainly been optimisation or approximation problems. The list results are good but do highlight the problem of scalability, in that more complex functions lead to a dramatic increase in the required evolution time.

Finally, the evolution of graphical user interfaces is addressed. The representation for the user interfaces is based on the new representation for programs; in this case each gene block represents a component of the user interface. The fitness of the interface is determined by comparing it to a series of constraints, which specify the layout, style and functionality requirements. A selection of web-based and desktop-based user interfaces were evolved.

With these new approaches to Genetic Programming, the evolution of complete software systems is now a realistic goal.
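The fixed-length gene-block representation is concrete enough to sketch. In the hypothetical Python below, every block of three genes decodes to exactly one statement, and crossover cuts only at block boundaries, so unmutated blocks map to the same statement in parent and child. The decode table is illustrative, not the thesis's actual instruction set.

```python
import random

BLOCK = 3                       # genes per statement
OPS = ["add", "sub", "mul", "copy"]

def decode(block):
    """Map one fixed-length gene block to a single program statement."""
    op = OPS[block[0] % len(OPS)]
    return f"r{block[1] % 4} = {op}(r{block[1] % 4}, r{block[2] % 4})"

def decode_program(genome):
    blocks = [genome[i:i + BLOCK] for i in range(0, len(genome), BLOCK)]
    return [decode(b) for b in blocks]

def crossover(mum, dad):
    """Swap whole blocks only, so statement boundaries are preserved."""
    cut = random.randrange(0, len(mum), BLOCK)
    return mum[:cut] + dad[cut:]

parent = [random.randrange(16) for _ in range(4 * BLOCK)]
other = [random.randrange(16) for _ in range(4 * BLOCK)]
child = crossover(parent, other)
print(decode_program(parent))
print(decode_program(child))    # shares whole statements with the parent
```

Because the cut can never land inside a block, a child statement is either inherited verbatim or deliberately changed, which is the parent-child similarity property the thesis measures.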
79

Reverse Engineering the Human Brain: An Evolutionary Computation Approach to the Analysis of fMRI

Allgaier, Nicholas 01 January 2015
The field of neuroimaging has truly become data rich, and as such, novel analytical methods capable of gleaning meaningful information from large stores of imaging data are in high demand. Those methods that might also be applicable at the level of individual subjects, and thus potentially useful clinically, are of special interest. In this dissertation we introduce just such a method, called nonlinear functional mapping (NFM), and demonstrate its application in the analysis of resting state fMRI (functional Magnetic Resonance Imaging) from a 242-subject subset of the IMAGEN project, a European study of risk-taking behavior in adolescents that includes longitudinal phenotypic, behavioral, genetic, and neuroimaging data. Functional mapping employs a computational technique inspired by biological evolution to discover and mathematically characterize interactions among ROI (regions of interest), without making linear or univariate assumptions. Statistics of the resulting interaction relationships comport with recent independent work, constituting a preliminary cross-validation. Furthermore, nonlinear terms are ubiquitous in the models generated by NFM, suggesting that some of the interactions characterized here are not discoverable by standard linear methods of analysis. One such nonlinear interaction is discussed in the context of a direct comparison with a procedure involving pairwise correlation, designed to be an analogous linear version of functional mapping. Another such interaction suggests a novel distinction in brain function between drinking and non-drinking adolescents: a tighter coupling among drinkers of ROI associated with emotion, reward, and interoceptive processes such as thirst. Finally, we outline many improvements and extensions of the methodology to reduce computational expense, complement other analytical tools like graph-theoretic analysis, and possibly allow for voxel level functional mapping to eliminate the necessity of ROI selection.
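To make the idea of evolutionary search over nonlinear ROI interactions concrete, here is a deliberately tiny sketch: random symbolic expressions over synthetic ROI time series are scored against a target ROI, and nonlinear terms can survive the selection. Everything in it (the data, the primitive set, the selection scheme) is our simplification, not the dissertation's NFM pipeline.

```python
import random
import numpy as np

rois = {f"roi{i}": np.random.randn(200) for i in range(6)}  # fake fMRI signals
target = rois["roi0"]

def random_expr():
    """A random two-ROI expression drawn from a tiny nonlinear grammar."""
    a, b = random.sample([k for k in rois if k != "roi0"], 2)
    template = random.choice(["{a} * {b}", "np.tanh({a}) + {b}", "{a} ** 2 - {b}"])
    return template.format(a=f"rois['{a}']", b=f"rois['{b}']")

def score(expr):
    """Fitness: absolute correlation of the expression with the target ROI."""
    pred = eval(expr, {"rois": rois, "np": np})
    return abs(np.corrcoef(pred, target)[0, 1])

exprs = [random_expr() for _ in range(500)]
best = max(exprs, key=score)
print(best, score(best))   # note which nonlinear terms survive selection
```

On real data, the statistics of which ROI pairs keep appearing in the fittest expressions, and through which nonlinear terms, are what characterize the interactions.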
80

The Effect Of Biodiesel Blends On Particle Number Emissions From A Light Duty Diesel Engine

Feralio, Tyler Samuel 01 January 2015
Numerous studies have shown that respirable particles contribute to adverse human health outcomes including discomfort in irritated airways, increased asthma attacks, irregular heartbeat, non-fatal heart attacks, and even death. Particle emissions from diesel vehicles are a major source of airborne particles in urban areas. In response to energy security and global climate regulations, the use of biodiesel as an alternative fuel to petrodiesel has increased significantly in recent years. Particle emissions from diesel engines are highly dependent on fuel composition and, as such, the increased use of biodiesel in diesel vehicles may change the concentration, size, and composition of particles in respirable air. One indicator used to evaluate the potential health risk of these particles to humans is particle diameter (Dp); ultrafine particles (UFPs, Dp < 100 nm) are of particular concern.

Current research in automotive emissions primarily focuses on particle emissions measured on a total particle mass (PM) basis from heavy-duty diesel vehicles. The nation's light-duty diesel fleet is, however, increasing, and because the mass of a UFP is much less than that of larger particles, the total PM metric is not sufficient for characterization of UFP emissions. As such, this research focuses on light-duty diesel engine transient UFP emissions, measured by particle number (PN), from petrodiesel, biodiesel, and blends thereof. The research objectives were to determine: 1) the difference in UFP emissions between petrodiesel and blends of waste vegetable oil-based biodiesel (WVO), 2) the differences between UFP emissions from blends of WVO and soybean oil-based biodiesel (SOY), and 3) the feasibility of using genetic programming (GP) to select the primary engine operating parameters needed to predict UFP emissions from different blends of biodiesel.

The results of this research are significant in that: 1) total UFP number emission rates (ERs) exhibited a non-monotonically increasing trend relative to the biodiesel content of the fuel for both WVO and SOY, contrary to the majority of prior studies, suggesting that certain intermediate biodiesel blends may produce lower UFP emissions than lower and higher blends; 2) the data collected corroborate reports in the literature that fuel consumption of diesel engines equipped with pump-line-nozzle fuel injection systems can increase with the biodiesel content of the fuel without operational changes; 3) WVO biodiesel blends reduced the overall mean diameter of the particle distribution relative to petrodiesel more so than SOY biodiesel blends; and 4) feature selection using genetic programming (GP) suggests that the primary model inputs needed to predict total UFP emissions are exhaust manifold temperature, intake manifold air temperature, mass air flow, and the percentage of biodiesel in the fuel. These differ from the inputs typically used for emissions modeling, such as engine speed, throttle position, and torque, suggesting that UFP emissions modeling could be improved by using other commonly measured engine operating parameters.
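The feature-selection idea in objective 3 can be sketched compactly. The Python below simplifies GP to random search over small linear models for brevity: sample many candidate models of UFP emissions and count which inputs dominate the fittest ones. The feature names follow the abstract; the data and models are synthetic stand-ins, not the dissertation's setup.

```python
import random
import numpy as np

names = ["exhaust_T", "intake_T", "mass_air_flow", "pct_biodiesel",
         "engine_speed", "throttle", "torque"]
X = np.random.rand(300, len(names))
# Synthetic "emissions" driven by a subset of the inputs plus noise.
y = 2 * X[:, 0] + X[:, 2] ** 2 + 0.5 * X[:, 3] + 0.05 * np.random.randn(300)

def random_model():
    """A candidate model: three randomly chosen features with random weights."""
    feats = random.sample(range(len(names)), k=3)
    weights = np.random.randn(3)
    return feats, weights

def error(model):
    feats, w = model
    return np.mean((X[:, feats] @ w - y) ** 2)

# Keep the 50 fittest of 2000 candidates and tally their feature usage.
models = sorted((random_model() for _ in range(2000)), key=error)[:50]
counts = {n: 0 for n in names}
for feats, _ in models:
    for f in feats:
        counts[names[f]] += 1
print(counts)   # informative inputs should dominate the fittest models
```

Real GP feature selection works the same way at heart: features that recur in the fittest evolved models are taken as the primary predictors.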
