411

Identification of attribute interactions and generation of globally relevant continuous features in machine learning

Letourneau, Sylvain January 2003
Datasets found in real-world applications of machine learning are often characterized by low-level attributes with important interactions among them. Such interactions may increase the complexity of the learning task by limiting the usefulness of the attributes to dispersed regions of the representation space. In such cases, we say that the attributes are locally relevant. To obtain adequate performance with locally relevant attributes, the learning algorithm must be able to analyse the interacting attributes simultaneously and fit a model appropriate to the type of interactions observed. This is a complex task that surpasses the ability of most existing machine learning systems. This research proposes a solution to this problem by extending the initial representation with new globally relevant features. The new features make explicit the important information that was previously hidden by the interactions, thus reducing the complexity of the learning task. The dissertation first presents an idealized study of the potential benefits of globally relevant features, assuming perfect knowledge of the interactions among the initial attributes; this study involves synthetic data and a variety of machine learning systems. Recognizing that not all interactions produce a negative effect on performance, the dissertation then introduces a novel technique named relevance graphs to identify the interactions that negatively affect the performance of existing learning systems. Interactive relevance graphs address another important need by letting the user participate in the construction of a new representation that cancels the effects of the negative attribute interactions. The dissertation extends this concept with a series of algorithms for the automatic discovery of appropriate transformations, and uses the name GLOREF (GLObally RElevant Features) to designate the approach that integrates these algorithms. The dissertation fully describes the GLOREF approach along with an extensive empirical evaluation on both synthetic and UCI datasets, which shows that the features GLOREF produces significantly improve accuracy on both kinds of data.
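To make the distinction between locally and globally relevant attributes concrete, here is a minimal sketch of our own (plain logistic regression, not the dissertation's code; the constructed XOR feature is hand-supplied purely for illustration, whereas GLOREF discovers such transformations automatically):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 2))   # two low-level binary attributes
y = X[:, 0] ^ X[:, 1]                    # XOR interaction between them

# With the raw attributes, a linear learner does no better than chance:
# each attribute is informative only in dispersed regions of the space.
base = LogisticRegression().fit(X, y)
print(base.score(X, y))                  # roughly 0.5

# One constructed feature that makes the interaction explicit turns the
# hidden information into a globally relevant input.
X_ext = np.column_stack([X, X[:, 0] ^ X[:, 1]])
ext = LogisticRegression().fit(X_ext, y)
print(ext.score(X_ext, y))               # 1.0
```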
412

Towards obstacle reconstruction through wide baseline set of images

Elias, Rimon January 2004
In this thesis, we handle the problem of extracting 3D information from multiple images of a robotic work site in the context of teleoperation. A human operator determines the virtual path of a robotic vehicle, and our task is to provide the sequence of images that would be seen by the teleoperated robot moving along this path. The environment in which the robotic vehicle moves has a planar ground surface, and a set of wide baseline images is available for the work site, which implies that only a small number of points may be visible in more than two views. Moreover, the camera parameters are known only approximately: according to the sensor error margins, each parameter reading lies within some range. Obstacles of different shapes are present in such an environment. To generate the image sequence, both the ground plane and the obstacles must be represented. The perspective image of the ground plane can be obtained through a homography matrix, computed from the virtual camera parameters and the overhead view of the work site. To represent obstacles, we suggest two kinds of methods: volumetric and planar. Our algorithm for representing obstacles starts by detecting junctions, using a new fast junction detection operator we propose, which provides the location of each junction as well as the orientations of the edges surrounding it. Junctions belonging to the obstacles are distinguished from those belonging to the ground plane by calculating the inter-image homography matrices. Fundamental matrices relating the images can be estimated roughly from the available camera parameters, and strips surrounding the epipolar lines are used as a search range for detecting possible matches. We introduce a novel homographic correlation method, applied among candidates by reconstructing the planes of the junctions in space; two versions are proposed, based on SAD and VNC, and both achieve matching results that outperform non-homographic correlation. The match set is then turned into a set of 3D points through triangulation, and we propose a hierarchical structure to cluster these points in space, yielding bounding boxes that contain the obstacles. A more accurate volumetric representation of each obstacle can be achieved through a voxelization approach. Alternatively, obstacles can be represented as planar patches, obtained by mapping between the original and synthesized images. Finally, the steps of the different algorithms presented throughout the thesis are supported by examples that demonstrate the usefulness of our approaches.
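As an illustration of the epipolar-strip search, here is a small numpy sketch of our own (names are hypothetical; the fundamental matrix F is assumed to come from the approximately known camera parameters, which is exactly why a strip rather than a line is searched):

```python
import numpy as np

def point_line_distance(line, pt):
    """Distance from a 2D point to a homogeneous line (a, b, c)."""
    return abs(line @ np.append(pt, 1.0)) / np.hypot(line[0], line[1])

def candidate_matches(F, x1, junctions2, half_width=3.0):
    """Junctions in image 2 lying inside a strip around the epipolar line
    of x1. The strip width absorbs the error margins of the roughly
    estimated fundamental matrix."""
    line = F @ np.append(x1, 1.0)          # epipolar line of x1 in image 2
    return [x2 for x2 in junctions2
            if point_line_distance(line, x2) <= half_width]
```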
413

Vision-based localization, map building and obstacle reconstruction in ground plane environments

Hajjdiab, Hassan January 2004
The work described in this thesis develops the theory of the 3D obstacle reconstruction and map building problems in the context of a robot, or a team of robots, equipped with one camera mounted on board. The study comprises several problems corresponding to the different phases of the robots' operation. The thesis first studies the problem of image matching for wide baseline images taken by moving robots: the ground plane is detected, the inter-image homography induced by the ground plane is calculated, and a novel technique for ground plane matching is introduced using the overhead view transformation. The thesis then studies the simultaneous localization and map building (SLAM) problem for a team of robots collaborating in the same work site, and introduces a vision-based technique to solve it. The third problem studied is the 3D reconstruction of the obstacles lying on the ground surface, for which a geometric/variational level set method is proposed.
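A hedged sketch of the overhead-view transformation using OpenCV; the point correspondences and filename below are invented for illustration, not taken from the thesis:

```python
import cv2
import numpy as np

# Hypothetical correspondences between four ground-plane points seen in a
# robot's image and their coordinates in the overhead (bird's-eye) frame.
image_pts = np.float32([[412, 574], [861, 568], [703, 342], [538, 345]])
plane_pts = np.float32([[0, 300], [200, 300], [200, 0], [0, 0]])

H = cv2.getPerspectiveTransform(image_pts, plane_pts)
view = cv2.imread("robot_view.png")               # placeholder filename
overhead = cv2.warpPerspective(view, H, (200, 300))

# Warping both robots' images into the same overhead frame removes the
# perspective distortion, so ground-plane regions can be matched directly
# even across a wide baseline.
```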
414

Dynamic Bayesian networks

Horsch, Michael C. January 1990
Given the complexity of the domains for which we would like to use computers as reasoning engines, an automated reasoning process will often be required to perform under some state of uncertainty. Probability provides a normative theory with which uncertainty can be modelled. Without assumptions of independence from the domain, naive computations of probability are intractable. If probability theory is to be used effectively in AI applications, the independence assumptions from the domain should be represented explicitly and used to the greatest possible advantage. One such representation is a class of mathematical structures called Bayesian networks. This thesis presents a framework for dynamically constructing and evaluating Bayesian networks. In particular, it investigates the representation of probabilistic knowledge that has been abstracted from the particular individuals to which it may apply, resulting in a simple representation language that makes the independence assumptions for a domain explicit. A simple procedure is provided for building networks from knowledge expressed in this language. The mapping between the knowledge base and the network created is precisely defined, so that the network always represents a consistent probability distribution. Finally, the thesis investigates the issue of modifying the network after some evaluation has taken place, and derives several techniques for correcting the state of the resulting model.
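The following toy sketch (our own, far simpler than the representation language in the thesis) shows the flavour of the approach: abstract probabilistic knowledge is grounded for a particular individual to build a network fragment on demand, and the chain-rule factorisation the fragment defines is a consistent distribution by construction:

```python
# Abstract knowledge over any individual X: P(infected(X)) and
# P(fever(X) | infected(X)), stored as conditional probability tables.
KB = {
    "infected": {(): 0.01},
    "fever":    {(True,): 0.90, (False,): 0.05},
}

def instantiate(individual):
    """Ground the abstract knowledge base for one individual."""
    return {(rel, individual): cpt for rel, cpt in KB.items()}

def joint(net, individual, infected, fever):
    """Evaluate the joint probability via the network's factorisation."""
    p_i = net[("infected", individual)][()]
    p_f = net[("fever", individual)][(infected,)]
    return (p_i if infected else 1 - p_i) * (p_f if fever else 1 - p_f)

net = instantiate("patient_7")
print(joint(net, "patient_7", infected=True, fever=True))   # 0.01 * 0.9 = 0.009
```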
415

A cerebellum-like learning machine

Klett, Robert Duncan January 1979
This thesis derives a new learning system which is presented both as an improved cerebellar model and as a general purpose learning machine. It is based on a summary of recent publications concerning the operating characteristics and structure of the mammalian cerebellum and on standard interpolating and surface fitting techniques for functions of one and several variables. The system approximates functions as weighted sums of continuous basis functions. Learning, which takes place in an iterative manner, is accomplished by presenting the system with arbitrary training points (function input variables) and associated function values. The system is shown to be capable of minimizing the estimation error in the mean-square-error sense. The system is also shown to minimize the expectation of the interference, which results from learning at a single point, on all other points in the input space. In this sense, the system maximizes the rate at which arbitrary functions are learned.
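A minimal sketch of the core idea, assuming Gaussian basis functions and a normalised LMS update (the thesis derives its own bases and update rule from the cerebellar model; everything here is a stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 20)          # fixed continuous basis functions
width = 0.08
weights = np.zeros(20)

def phi(x):
    """Overlapping Gaussian bases: each input activates only nearby units."""
    return np.exp(-0.5 * ((x - centers) / width) ** 2)

target = lambda x: np.sin(2 * np.pi * x)     # unknown function to be learned

# Iterative training on arbitrary points: each step corrects the estimate
# at x while disturbing other inputs little, since only the locally
# active bases receive significant weight changes (limited interference).
for _ in range(5000):
    x = rng.random()
    f = phi(x)
    error = target(x) - weights @ f
    weights += 0.5 * error * f / (f @ f)     # normalised LMS step

print(abs(target(0.33) - weights @ phi(0.33)))   # small residual error
```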
416

Artificial intelligence for the control of hybrid power filters

Van Schoor, George 15 August 2012
D.Ing.
Training data are developed for an ANN controlling a laboratory-scale HPC. Special attention is given to the development of a cost function to determine the optimal state of the HPC for a particular input state. The cost function uses the reactive power compensation efficiency (η_Q), the distortion compensation efficiency (η_d) and the losses in the HPC (P_LHPC) as optimisation parameters. After a process of optimisation, the ANN is trained with a randomised training set of size 2000. A 5:10:6 ANN topology, representing 5 input layer neurons, 10 hidden layer neurons and 6 output layer neurons, is used. The optimisation results in shorter training times as well as more effective training. A laboratory-scale experiment, which practically proves that an ANN can make meaningful choices in terms of HPC control, is conducted. The adaptive behaviour of the ANN controller for the HPC is evaluated by means of the interactive integrated state-space model. It is found that the ANN controller can sensibly adapt its output under conditions of line impedance change as well as load changes of users sharing the same point of coupling as the consumer being compensated. The conclusion from this research is that it is viable to apply AI in the control of an HPC. A non-linear, time-varying system such as this ideally lends itself to the application of ANN control. The total cost of the HPC is expected to be minimised while minimum standards in terms of compensation are still maintained. The performance of such an ANN controller is, however, strongly dependent on the integrity of the training data. Using an actual system to set up the training data would be the ideal way to refine the ANN model. Devising a strategy to continually update the training of the ANN, to ensure relevancy with respect to the dynamic range of the ANN, is recommended as an area for further research.
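For concreteness, a numpy sketch of a 5:10:6 network trained by backpropagation on squared error; the training pairs below are random placeholders standing in for the cost-function-labelled HPC states described above, and the tanh activations and learning rate are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# 5:10:6 topology: 5 input neurons, 10 hidden, 6 output (as in the thesis).
W1, b1 = rng.normal(0, 0.5, (10, 5)), np.zeros(10)
W2, b2 = rng.normal(0, 0.5, (6, 10)), np.zeros(6)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ h + b2), h

def train_step(x, y, lr=0.05):
    """One backpropagation step on a single (input state, optimal HPC
    setting) pair, using the squared-error gradient through tanh."""
    global W1, b1, W2, b2
    out, h = forward(x)
    d_out = (out - y) * (1 - out ** 2)
    d_h = (W2.T @ d_out) * (1 - h ** 2)
    W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
    W1 -= lr * np.outer(d_h, x);   b1 -= lr * d_h

# A randomised training set of size 2000, mirroring the setup above;
# real targets would come from the optimisation of the cost function.
for x, y in zip(rng.normal(size=(2000, 5)), rng.uniform(-1, 1, (2000, 6))):
    train_step(x, y)
```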
417

Applying Evolutionary Computation and Ensemble Approaches to Protein Contact Map and Protein Function Determination

Chapman, Samuel D. 13 January 2017
Proteins are important biological molecules that perform many different functions in an organism. They are composed of sequences of amino acids that play a large part in determining both their structure and function. In turn, the structures of proteins are related to their functions. Using computational methods for protein study is a popular approach, offering the possibility of being faster and cheaper than experimental methods. These software-based methods are able to take information such as the protein sequence and other empirical data and output predictions such as protein structure or function.

In this work, we have developed a set of computational methods that are used in the application of protein structure prediction and protein function prediction. For protein structure prediction, we use the evolution of logic circuits to produce logic circuit classifiers that predict the protein contact map of a protein based on high-dimensional feature data. The diversity of the evolved logic circuits allows for the creation of ensembles of classifiers, and the answers from these ensembles are combined to produce more-accurate answers. We also apply a number of ensemble algorithms to our results.

Our protein function prediction work is based on the use of six existing computational protein function prediction methods, of which four were optimized for use on a benchmark dataset, along with two others developed by collaborators. We used a similar ensemble framework, combining the answers from the six methods into an ensemble using an algorithm, CONS, that we helped develop.

Our contact map prediction study demonstrated that it was possible to evolve logic circuits for this purpose, and that ensembles of the classifiers improved performance. The results fell short of state-of-the-art methods, and additional ensemble algorithms failed to improve the performance. However, the method was also able to work as a feature detector, discovering salient features from the high-dimensional input data, a computationally intractable problem. In our protein function prediction work, the combination of methods similarly led to a robust ensemble. The CONS ensemble, while not performing as well as the best individual classifier in absolute terms, was nevertheless very close in terms of performance. More intriguingly, there were many specific cases where it performed better than any single method, indicating that this ensemble provided valuable information not captured by any single method.

To our knowledge, this is the first time the evolution of logic circuits has been used in any Bioinformatics problem, and it is expected that as the method becomes more developed, results will improve. It is also expected that the feature-detection aspect of this method can be used in other studies. The function prediction study also marks, to our knowledge, the most comprehensive ensemble classification for protein function prediction. Finally, we expect that the ensemble classification methods used and developed in our protein structure and function work here will pave the way towards stronger ensemble predictors in the future.
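A toy sketch of the two ingredients, using random rather than evolved feed-forward NAND circuits as classifiers and combining them by majority vote; everything here is illustrative, not the dissertation's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_circuit(n_inputs, n_gates=8):
    """A random feed-forward logic circuit: each gate is the NAND of two
    earlier signals; the final gate is the contact/no-contact output."""
    gates = [(rng.integers(0, n_inputs + i), rng.integers(0, n_inputs + i))
             for i in range(n_gates)]
    def evaluate(bits):
        signals = list(bits)
        for a, b in gates:
            signals.append(1 - (signals[a] & signals[b]))   # NAND
        return signals[-1]
    return evaluate

# An ensemble of independently generated (in the dissertation, evolved)
# circuits; majority vote combines their answers.
circuits = [make_circuit(16) for _ in range(25)]
features = rng.integers(0, 2, 16)        # binarised feature vector
votes = sum(c(features) for c in circuits)
prediction = int(votes > len(circuits) / 2)
```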
418

Local search algorithms for geometric object recognition: Optimal correspondence and pose

Beveridge, J. Ross 01 January 1993
Recognizing an object by its shape is a fundamental problem in computer vision, and typically involves finding a discrete correspondence between object model and image features as well as the pose (position and orientation) of the camera relative to the object. This thesis presents new algorithms for finding the optimal correspondence and pose of a rigid 3D object. They utilize new techniques for evaluating geometric matches and for searching the combinatorial space of possible matches. An efficient closed-form technique for computing pose under weak perspective (a four-parameter 2D affine model) is presented, and an iterative non-linear 3D pose algorithm is used to support matching under full 3D perspective. A match error ranks matches by summing a fit error, which measures the quality of the spatial fit between corresponding line segments forming an object model and line segments extracted from an image, and an omission error, which penalizes matches that leave portions of the model unmatched. Including the omission term is crucial to success when matching corrupted and partial image data. The new optimal matching algorithms use a form of combinatorial optimization called local search, which relies on iterative improvement and random sampling to probabilistically find globally optimal matches. A novel variant, subset-convergent local search, finds optimal matches with high probability on problems known to be difficult for other techniques; specifically, it does well on a test suite of highly fragmented and cluttered data, symmetric object models, and multiple model instances. The problem search space grows exponentially in the number of potentially paired features n, yet empirical performance suggests that computation is bounded by n^2. Using the 3D pose algorithm during matching, local search solves problems involving significant amounts of 3D perspective; no previous work on geometric matching has generalized in this way. Our hybrid algorithm combines the closed-form weak-perspective pose and iterative 3D pose algorithms to efficiently solve matching problems involving perspective. For robot navigation, this algorithm recognizes 3D landmarks, and thereby permits a mobile robot to successfully update its estimated pose relative to these landmarks.
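A stripped-down sketch of local search over correspondences under a fit-plus-omission match error. In the thesis the fit error depends on the pose recomputed for each candidate match; here the per-pair fit errors are held fixed for brevity, and the random-restart hill climbing stands in for the subset-convergent variant:

```python
import numpy as np

def match_error(pairing, fit, omission_penalty):
    """Fit error summed over the included model-data pairs, plus a
    penalty for every model feature the pairing leaves unmatched."""
    return fit[pairing == 1].sum() + omission_penalty * (pairing == 0).sum()

def local_search(fit, omission_penalty, restarts=20, seed=0):
    """Iterative improvement from random starting matches; the basic
    scheme that the subset-convergent variant builds on."""
    rng = np.random.default_rng(seed)
    best_state, best_err = None, np.inf
    for _ in range(restarts):
        state = rng.integers(0, 2, len(fit))       # random initial correspondence
        err = match_error(state, fit, omission_penalty)
        improved = True
        while improved:
            improved = False
            for i in range(len(fit)):
                state[i] ^= 1                      # toggle one pairing
                new_err = match_error(state, fit, omission_penalty)
                if new_err < err:
                    err, improved = new_err, True  # keep the improving move
                else:
                    state[i] ^= 1                  # revert it
        if err < best_err:
            best_state, best_err = state.copy(), err
    return best_state, best_err

fit = np.array([0.2, 0.1, 2.5, 0.3, 1.8, 0.05])    # hypothetical per-pair fit errors
print(local_search(fit, omission_penalty=1.0))
```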
419

Information extraction as a basis for portable text classification systems

Riloff, Ellen Michele 01 January 1994
Knowledge-based natural language processing systems have achieved good success with many tasks, but they often require many person-months of effort to build an appropriate knowledge base. As a result, they are not portable across domains. This knowledge-engineering bottleneck must be addressed before knowledge-based systems will be practical for real-world applications. This dissertation addresses the knowledge-engineering bottleneck for a natural language processing task called "information extraction". A system called AutoSlog is presented which automatically constructs dictionaries for information extraction, given an appropriate training corpus. In the domain of terrorism, AutoSlog created a dictionary using a training corpus and five person-hours of effort that achieved 98% of the performance of a hand-crafted dictionary that took approximately 1500 person-hours to build. This dissertation also describes three algorithms that use information extraction to support high-precision text classification. As more information becomes available on-line, intelligent information retrieval will be crucial in order to navigate the information highway efficiently and effectively. The approach presented here represents a compromise between keyword-based techniques and in-depth natural language processing. The text classification algorithms classify texts with high accuracy by using an underlying information extraction system to represent linguistic phrases and contexts. Experiments in the terrorism domain suggest that increasing the amount of linguistic context can improve performance. Both AutoSlog and the text classification algorithms are evaluated in three domains: terrorism, joint ventures, and microelectronics. An important aspect of this dissertation is that AutoSlog and the text classification systems can be easily ported across domains.
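A hedged sketch of the dictionary idea: each entry pairs a linguistic trigger pattern with the slot its extracted filler should occupy. The regex patterns below are invented illustrations in the terrorism domain, not AutoSlog's actual heuristics, which operate over parsed sentences rather than raw strings:

```python
import re

# Illustrative extraction dictionary: (trigger pattern, slot name) pairs.
DICTIONARY = [
    (re.compile(r"(\w[\w ]*) was bombed"), "target"),
    (re.compile(r"assassination of (\w[\w ]*)"), "victim"),
    (re.compile(r"(\w[\w ]*) claimed responsibility"), "perpetrator"),
]

def extract(text):
    """Return (slot, filler) facts for every pattern that fires."""
    facts = []
    for pattern, slot in DICTIONARY:
        for match in pattern.finditer(text):
            facts.append((slot, match.group(1).strip()))
    return facts

print(extract("The embassy was bombed; the FMLN claimed responsibility."))
# [('target', 'The embassy'), ('perpetrator', 'the FMLN')]
```

Counting which slots fire, and in what linguistic context, is the kind of signal the text classification algorithms build on.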
420

Large-scale dynamic optimization using teams of reinforcement learning agents

Crites, Robert Harry 01 January 1996
Recent algorithmic and theoretical advances in reinforcement learning (RL) are attracting widespread interest. RL algorithms have appeared that approximate dynamic programming (DP) on an incremental basis. Unlike traditional DP algorithms, these algorithms do not require knowledge of the state transition probabilities or reward structure of a system. This allows them to be trained using real or simulated experiences, focusing their computations on the areas of state space that are actually visited during control, making them computationally tractable on very large problems. RL algorithms can be used as components of multi-agent algorithms. If each member of a team of agents employs one of these algorithms, a new collective learning algorithm emerges for the team as a whole. In this dissertation we demonstrate that such collective RL algorithms can be powerful heuristic methods for addressing large-scale control problems. Elevator group control serves as our primary testbed. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their states are not fully observable, and they are non-stationary due to changing passenger arrival rates. As a way of streamlining the search through policy space, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals, and the incomplete observation of the state. In spite of these complications, we show results that in simulation surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large-scale stochastic dynamic optimization problem of practical utility.
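A minimal tabular sketch of the collective scheme, with one Q-learning agent per elevator car and every agent receiving the same global reward. The thesis uses neural-network Q-functions in continuous time; the discrete table, state coding, and parameters here are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_STATES, N_ACTIONS = 4, 100, 2     # e.g. one agent per elevator car
Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.99, 0.1

def act(i, s):
    """Epsilon-greedy choice by agent i from its own Q-function."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(Q[i, s].argmax())

def update(states, actions, global_reward, next_states):
    """Every agent learns from the same global reinforcement signal, which
    looks noisy to each one because it also reflects the other agents'
    actions and the random passenger arrivals."""
    for i in range(N_AGENTS):
        s, a, s2 = states[i], actions[i], next_states[i]
        td_target = global_reward + gamma * Q[i, s2].max()
        Q[i, s, a] += alpha * (td_target - Q[i, s, a])

# One simulated step with placeholder dynamics and reward.
states = rng.integers(N_STATES, size=N_AGENTS)
actions = [act(i, states[i]) for i in range(N_AGENTS)]
next_states = rng.integers(N_STATES, size=N_AGENTS)
update(states, actions, global_reward=-1.0, next_states=next_states)
```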
