  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
481

Hierarchical design approach to texture analysis by spatial grey level dependence

Wood, Andrew John January 1994 (has links)
No description available.
482

A study of measured texture in images of natural scenes under varying illumination conditions

Khondkar, B. K. January 1995 (has links)
No description available.
483

Extending AdaBoost: Varying the Base Learners and Modifying the Weight Calculation

Neves de Souza, Erico 27 May 2014 (has links)
AdaBoost has been considered one of the best classifiers ever developed, but two important problems have not yet been addressed. The first is the dependency on the "weak" learner, and the second is the failure to maintain the performance of learners with small error rates (i.e. "strong" learners). To solve the first problem, this work proposes using a different learner in each iteration, an approach known as AdaBoost Dynamic (AD), thereby ensuring that the performance of the algorithm is almost equal to that of the best "weak" learner executed with AdaBoost.M1.

The work then further modifies the procedure to vary the learner in each iteration so as to locate the learner with the smallest error rate on its training data, using the same weight calculation as the original AdaBoost; this version is known as AdaBoost Dynamic with Exponential Loss (AB-EL). The results were poor, because AdaBoost does not perform well with strong learners; in this sense, the work confirmed the results of previous studies.

To improve performance, the weight calculation is modified to use the sigmoid function, with the algorithm output being the derivative of the same sigmoid function, rather than the logistic regression weight calculation originally used by AdaBoost; this version is known as AdaBoost Dynamic with Logistic Loss (AB-DL). This work presents a convergence proof that the binomial weight calculation works, and shows both theoretically and empirically that this approach improves the results for the strong learner. AB-DL also has some disadvantages: it requires a search for the "best" classifier, and this search reduces the diversity among the classifiers.

To attack these issues, another algorithm is proposed that combines AD's "weak" learner execution policy with a small modification of AB-DL's weight calculation, called AdaBoost Dynamic with Added Cost (AD-AC). AD-AC also has a theoretical upper bound on its error, and the algorithm offers a small accuracy improvement over AB-DL and traditional AdaBoost approaches. Lastly, this work adapts AD-AC's weight calculation approach to the data stream setting, where classifiers must deal with very large data sets (on the order of millions of instances) and limited memory availability.
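The AD selection rule, choosing at each boosting round whichever pool member has the smallest weighted training error, can be sketched as follows. This is a minimal toy illustration with hand-made decision stumps, not the thesis's implementation; the data, the learner pool, and the `stump` helper are all invented for the example.

```python
import math

def stump(feature, threshold):
    """A trivial 'weak' learner: +1 if x[feature] > threshold, else -1."""
    return lambda x: 1 if x[feature] > threshold else -1

def adaboost_dynamic(X, y, pool, rounds=10):
    """AdaBoost.M1-style boosting, except that each round picks whichever
    learner in `pool` has the smallest weighted training error (the AD idea)."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best, best_err = None, float("inf")
        for h in pool:
            err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            if err < best_err:
                best, best_err = h, err
        if best_err <= 0.0 or best_err >= 0.5:
            break                      # perfect, or no better than chance
        alpha = 0.5 * math.log((1.0 - best_err) / best_err)
        ensemble.append((alpha, best))
        # exponential-loss reweighting, then renormalize
        w = [wi * math.exp(-alpha * yi * best(xi))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# toy data: label +1 iff both coordinates exceed 0.5
X = [(0.2, 0.1), (0.9, 0.8), (0.7, 0.9), (0.1, 0.6), (0.8, 0.2), (0.6, 0.7)]
y = [-1, 1, 1, -1, -1, 1]
pool = [stump(0, 0.5), stump(1, 0.5), stump(0, 0.75), stump(1, 0.4)]
clf = adaboost_dynamic(X, y, pool)
accuracy = sum(clf(xi) == yi for xi, yi in zip(X, y)) / len(X)
```

With genuinely weak stumps in the pool this behaves like ordinary AdaBoost.M1; swapping strong learners into `pool` is the regime where, per the abstract, plain exponential-loss reweighting breaks down and the logistic-loss variants (AB-DL, AD-AC) are needed.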
484

Fault diagnosis and condition monitoring for NC/CNC machine tools

Harris, C. G. January 1987 (has links)
No description available.
485

Artificial evolution of fuzzy and temporal rule based systems

Carse, Brian January 1997 (has links)
No description available.
486

Reinforcement learning and knowledge transformation in mobile robotics

Pipe, Anthony Graham January 1997 (has links)
No description available.
487

Distributed boosting algorithms

Thompson, Simon Giles January 1999 (has links)
No description available.
488

The generation of knowledge based systems for interactive nonlinear constrained optimisation

Lynch, Paul Kieran January 1997 (has links)
No description available.
489

Design of a Permanent Magnet Synchronous Machine for a Flywheel Energy Storage System within a Hybrid Electric Vehicle

Jiang, Ming 06 1900 (has links)
As an energy storage device, the flywheel has significant advantages over conventional chemical batteries, including higher energy density, higher efficiency, longer lifetime, and less pollution to the environment. The effectiveness of a flywheel system depends largely on the design of its motor/generator (M/G). This thesis describes research on the design of a permanent magnet synchronous machine (PMSM) as an M/G suitable for integration in a flywheel energy storage system within a large hybrid electric vehicle (HEV). The operating requirements of the application include wide power and speed ranges combined with high total system efficiency. Along with presenting the design, essential issues in PMSM design, including cogging torque, iron losses, and total harmonic distortion (THD), are investigated. An iterative approach combining lumped-parameter analysis with 2D Finite Element Analysis (FEA) was used, and the final design is presented showing excellent performance. / Power Engineering and Power Electronics
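For a sense of scale, the energy a flywheel rotor stores follows directly from its moment of inertia and speed, E = ½Iω². The sketch below uses purely illustrative rotor numbers; the mass, radius, and speed range are guesses for the example, not figures from the thesis.

```python
import math

def flywheel_energy_j(mass_kg, radius_m, rpm):
    """Kinetic energy E = 1/2 * I * w^2 for a solid-disc rotor (I = 1/2 m r^2)."""
    inertia = 0.5 * mass_kg * radius_m ** 2          # kg*m^2
    omega = rpm * 2.0 * math.pi / 60.0               # rad/s
    return 0.5 * inertia * omega ** 2                # joules

# Flywheels are cycled between a minimum and maximum speed rather than
# discharged to standstill, so the usable energy is the difference.
e_max = flywheel_energy_j(10.0, 0.15, 30000)   # at full speed
e_min = flywheel_energy_j(10.0, 0.15, 15000)   # at half speed
usable_wh = (e_max - e_min) / 3600.0
```

Because stored energy scales with the square of speed, running down to only half speed already extracts 75% of the stored energy, which is why M/G designs for this application must hold high efficiency over a wide speed range.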
490

Supervised machine learning for email thread summarization

Ulrich, Jan 11 1900 (has links)
Email has become a part of most people's lives, and the ever-increasing number of messages people receive can lead to email overload. We attempt to mitigate this problem using email thread summarization. Summaries can be used for more than just replacing an incoming email message: in the business world they can serve as a form of corporate memory, or give a new team member an easy way to catch up on an ongoing conversation. Email threads are of particular interest for summarization because their conversational nature produces much structural redundancy.

Our email thread summarization approach uses machine learning to pick which sentences from the email thread to use in the summary. A machine learning summarizer must be trained on previously labeled data, i.e. manually created summaries. After being trained, our summarization algorithm can generate summaries that on average contain over 70% of the same sentences as human annotators. We show that labeling some key features, such as speech acts, meta sentences, and subjectivity, can improve performance to over 80% weighted recall.

To create such email summarization software, an email dataset is needed for training and evaluation. Since email communication is a private matter, it is hard to get access to real emails for research, and these emails must furthermore be annotated with human-generated summaries. As such annotated datasets are rare, we have created one and made it publicly available: the BC3 corpus contains annotations for 40 email threads, including extractive summaries, abstractive summaries with links, and labeled speech acts, meta sentences, and subjective sentences.

While previous research has shown that machine learning algorithms are a promising approach to email summarization, there has not been a study on the impact of the choice of algorithm. We explore new techniques in email thread summarization using several different kinds of regression, and the results show that the choice of classifier is critical. We also present a novel feature set for email summarization and perform analysis on two email corpora: the BC3 corpus and the Enron corpus.
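The sentence-classification framing described above can be illustrated with a toy extractive summarizer: featurize each sentence, train a logistic-regression scorer on labeled examples, and keep the top-scoring sentences. Everything here (the features, the tiny thread, the labels) is an invented stand-in for illustration, not the thesis's feature set or the BC3 data.

```python
import math

def features(sentence):
    return [1.0,                                  # bias term
            len(sentence.split()) / 20.0,         # normalized sentence length
            1.0 if "?" in sentence else 0.0]      # crude speech-act cue (question)

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain stochastic gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wi * f for wi, f in zip(w, xi))))
            w = [wi + lr * (yi - p) * f for wi, f in zip(w, xi)]
    return w

def summarize(thread, w, k=2):
    """Score every sentence and keep the top k, in original thread order."""
    scored = sorted(((sum(wi * f for wi, f in zip(w, features(s))), i, s)
                     for i, s in enumerate(thread)), reverse=True)
    return [s for _, _, s in sorted(scored[:k], key=lambda t: t[1])]

thread = ["Can we move the launch to Friday?",
          "I think so.",
          "lol",
          "Friday works for everyone on my team, so let us lock it in.",
          "ok"]
labels = [1, 0, 0, 1, 0]   # 1 = sentence belongs in the extractive summary
w = train_logreg([features(s) for s in thread], labels)
summary = summarize(thread, w)
```

A real system would train on annotated threads like those in BC3 rather than on the thread being summarized, and would use far richer features; the point here is only the pipeline shape: label, featurize, train, score, extract.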
