61 | Spammer Detection on Online Social Networks. Amlesahwaram, Amit Anand, 14 March 2013.
Twitter, with its rising popularity as a micro-blogging website, has inevitably attracted the attention of spammers. Spammers use a myriad of techniques to lure victims into clicking malicious URLs. In this thesis, we present several novel features capable of distinguishing spam accounts from legitimate accounts in real time. The features exploit the behavioral and content entropy, bait techniques, community orientation, and profile characteristics of spammers. We then use supervised learning algorithms to generate models from the proposed features and show that our tool, spAmbush, can detect spammers in real time. Our analysis reveals detection of more than 90% of spammers with fewer than five tweets, and more than half with only a single tweet. Our feature computation has low latency and modest resource requirements. Our results show a 96% detection rate with only a 0.01% false positive rate. We further cluster the unknown spammers to identify and understand the prevalent spam campaigns on Twitter.
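The abstract describes spAmbush only at this summary level. As a rough, hedged illustration of the kind of pipeline involved, the sketch below computes a content-entropy feature over an account's tweets and feeds it, with two other simple per-account features, to an off-the-shelf supervised classifier. The feature set, the scikit-learn dependency, and the random-forest model are illustrative assumptions, not the thesis's implementation.

```python
# Illustrative sketch only: not the spAmbush implementation described in the thesis.
# Assumes scikit-learn is available; feature names and model choice are assumptions.
import math
from collections import Counter

from sklearn.ensemble import RandomForestClassifier

def content_entropy(tweets):
    """Shannon entropy of the word distribution across an account's tweets.

    Spam accounts that repeat near-identical bait messages tend to have
    lower entropy than legitimate accounts."""
    words = [w.lower() for t in tweets for w in t.split()]
    if not words:
        return 0.0
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def account_features(tweets, followers, friends):
    """Toy per-account feature vector: content entropy, URL ratio, follower/friend ratio."""
    url_ratio = sum("http" in t for t in tweets) / max(len(tweets), 1)
    ff_ratio = followers / max(friends, 1)
    return [content_entropy(tweets), url_ratio, ff_ratio]

# Tiny hand-made training set (1 = spammer, 0 = legitimate), purely for illustration.
X = [
    account_features(["win a free phone http://x.co"] * 5, followers=3, friends=900),
    account_features(["great talk today", "lunch with the team", "paper accepted!"],
                     followers=250, friends=180),
]
y = [1, 0]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([account_features(["click here http://spam.example"] * 3, 2, 1500)]))
```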
62 | Failure-driven learning as model-based self-redesign. Stroulia, Eleni, January 1994.
No description available.
63 | Introspective multistrategy learning: constructing a learning strategy under reasoning failure. Cox, Michael Thomas, 05 1900.
No description available.
64 | EBKAT: an explanation-based knowledge acquisition tool. Wusteman, Judith, January 1990.
No description available.
65 | Learning to co-operate in multi-agent systems. Kostiadis, Kostas, January 2003.
No description available.
66 | Modelling of learning in design. Sim, Siang Kok, January 2000.
No description available.
67 | Feature based adaptive motion model for better localization. Bhargava, Rohan, 10 April 2014.
In the 21st century, we are moving toward making robots a ubiquitous part of everyday life. For that, a robot must interact with its environment, and interacting with the world requires a sense of its own pose. Humans are clearly very good at maintaining a sense of their location in the world around them. The same task is very difficult for robots due to uncertainty in their movement, limitations in sensing the environment, and the complexity of the environment itself. When we close our eyes and walk, we still have a good estimate of our location, but the same cannot be said for robots: without the help of external sensors, the localization problem becomes difficult. Humans use their vestibular system to generate cues about their movement and update their position estimate. Robots can do the same by feeding acceleration, velocity, or odometry as cues to a motion model.
The motion model can be represented as a probability distribution to account for uncertainty in the environment. In typical implementations, the parameters of this model are static throughout the experiment. Previous work has shown that calibrating the model online improves localization; that work provided a framework for building adaptive motion models, but it targeted land-based robots and sensors.
The work presented here builds on the same framework to adapt motion models for an Autonomous Underwater Vehicle (AUV). We present detailed results of the framework in a simulator. The work also proposes a method for motion estimation using side-sonar images, which is used as feedback to the motion model. We validate the motion estimation approach on real-world datasets.
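The thesis framework itself is not reproduced in this abstract. As a minimal sketch of the underlying idea, the snippet below represents a one-dimensional odometry motion model as a Gaussian over displacement and adapts its noise parameter online from the discrepancy between odometry and an external motion estimate (in the thesis, that feedback comes from side-sonar motion estimation). The class, the parameter names, and the exponential-smoothing update rule are all assumptions.

```python
# Minimal sketch, not the thesis framework: a 1-D odometry motion model whose
# noise parameter is adapted online from an external motion estimate
# (the thesis uses side-sonar-derived motion estimates as this feedback).
import numpy as np

class AdaptiveMotionModel:
    def __init__(self, sigma=0.1, alpha=0.05):
        self.sigma = sigma    # std-dev of displacement noise (adapted online)
        self.alpha = alpha    # smoothing factor for the online update (assumption)

    def sample(self, x, odom_delta, rng):
        """Propagate state x by the odometry displacement plus sampled noise."""
        return x + odom_delta + rng.normal(0.0, self.sigma)

    def update(self, odom_delta, measured_delta):
        """Adapt the noise level from the discrepancy between odometry and an
        external motion estimate (e.g., sonar-based), via exponential smoothing."""
        error = abs(measured_delta - odom_delta)
        self.sigma = (1 - self.alpha) * self.sigma + self.alpha * error

rng = np.random.default_rng(0)
model = AdaptiveMotionModel()
x = 0.0
for step in range(100):
    odom = 1.0                              # commanded forward motion per step
    true_delta = 1.0 + rng.normal(0, 0.3)   # actual motion with unmodelled drift
    x = model.sample(x, odom, rng)
    model.update(odom, true_delta)          # feedback from an external estimator
print(f"adapted sigma after 100 steps: {model.sigma:.3f}")
```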
68 | Efficient Machine Learning with High Order and Combinatorial Structures. Tarlow, Daniel, 13 August 2013.
The overarching goal of this thesis is to develop the representational frameworks, inference algorithms, and learning methods necessary for accurately modeling domains that exhibit complex and non-local dependency structures. There are three parts to this thesis. In the first part, we develop a toolbox of high-order potentials (HOPs) that are useful for defining interactions and constraints that would be inefficient or otherwise difficult to express within the standard graphical modeling framework. For each potential, we develop associated algorithms so that the type of interaction can be used efficiently in a variety of settings. We further show that this HOP toolbox is useful not only for defining models, but also for defining loss functions.
In the second part, we look at the similarities and differences between special-purpose and general-purpose inference algorithms, with the aim of learning from the special-purpose algorithms so that we can build better general-purpose algorithms. Specifically, we show how to cast a popular special-purpose algorithm (graph cuts) in terms of the degrees of freedom available to a popular general-purpose algorithm (max-product belief propagation). We then use the lessons learned to build a better general-purpose algorithm.
Finally, we develop a class of models that allows the discrete optimization algorithms studied in the previous parts (as well as other discrete optimization algorithms) to be used as the centerpiece of probabilistic models. This lets us build probabilistic models with fast exact inference procedures in domains where the standard probabilistic formulation would be intractable.
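The abstract only names the HOP toolbox. As a hedged illustration of why structured high-order terms admit efficient inference, the sketch below computes the exact MAP assignment of a binary model with arbitrary unary potentials plus a single cardinality potential by sorting rather than by enumerating all 2^n states; cardinality potentials are a standard member of the family the thesis studies, but this code is an assumption-laden example, not taken from the thesis.

```python
# Hedged illustration, not code from the thesis: exact MAP inference in a binary
# model with unary potentials plus one high-order cardinality potential f(sum_i x_i).
# Because the high-order term depends only on the count of "on" variables, MAP
# reduces to sorting the unary gains and scanning n+1 candidate counts: O(n log n).
import numpy as np

def map_with_cardinality_potential(unary_on, unary_off, cardinality_f):
    """unary_on[i], unary_off[i]: log-potentials for x_i = 1 / x_i = 0.
    cardinality_f[k]: log-potential for exactly k variables set to 1."""
    n = len(unary_on)
    gains = unary_on - unary_off                  # benefit of switching x_i on
    order = np.argsort(-gains)                    # best candidates first
    base = unary_off.sum()
    best_score, best_k = -np.inf, 0
    prefix = 0.0                                  # sum of the top-k gains
    for k in range(n + 1):
        score = base + prefix + cardinality_f[k]  # best score with exactly k ones
        if score > best_score:
            best_score, best_k = score, k
        if k < n:
            prefix += gains[order[k]]
    x = np.zeros(n, dtype=int)
    x[order[:best_k]] = 1
    return x, best_score

# Toy example: unaries prefer turning everything on, but the cardinality
# potential strongly prefers exactly two active variables.
rng = np.random.default_rng(1)
unary_on, unary_off = rng.normal(1.0, 0.5, size=5), np.zeros(5)
card = np.full(6, -10.0)
card[2] = 0.0                                     # only k = 2 is cheap
x_map, score = map_with_cardinality_potential(unary_on, unary_off, card)
print(x_map, round(float(score), 3))
```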
70 | Incorporating prior domain knowledge into inductive machine learning: its implementation in contemporary capital markets. Yu, Ting, January 2007.
An ideal inductive machine learning algorithm produces a model that best approximates an underlying target function at reasonable computational cost. This requires the resultant model to be consistent with the training data and to generalize well to unseen data. Regular inductive machine learning algorithms rely heavily on numerical data as well as general-purpose inductive bias. Certain environments, however, contain rich domain knowledge prior to the learning task, and it is not easy for regular inductive learning algorithms to exploit it. This thesis discusses and analyzes various methods of incorporating prior domain knowledge into inductive machine learning through three key issues: consistency, generalization, and convergence. Additionally, three new methods are proposed and tested on data sets collected from capital markets. These methods utilize financial knowledge collected from various sources, such as experts and research papers, to facilitate the learning process of kernel methods (emerging inductive learning algorithms). The test results are encouraging and demonstrate that prior domain knowledge is valuable to inductive learning machines.
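The thesis's three methods are only summarized above. As a hedged sketch of the general idea of injecting prior domain knowledge into a kernel method, the example below reshapes an RBF kernel with expert-assigned feature weights and passes it as a callable kernel to a support vector classifier; the weights, the toy data, and the model choice are made-up assumptions, not the thesis's methods.

```python
# Hedged illustration, not the thesis's method: inject prior domain knowledge by
# reshaping the kernel itself, here weighting input dimensions by expert-assigned
# importance before applying an RBF similarity. Weights and data are assumptions.
import numpy as np
from sklearn.svm import SVC

# Expert prior (assumption): the first two indicators matter far more than the rest.
feature_weights = np.array([3.0, 2.0, 0.5, 0.5])

def knowledge_weighted_rbf(X, Y, gamma=0.5):
    """RBF kernel on expert-weighted features; scikit-learn accepts a callable
    kernel that returns the Gram matrix between X and Y."""
    Xw, Yw = X * feature_weights, Y * feature_weights
    sq_dists = ((Xw[:, None, :] - Yw[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

# Toy 'market' data: 4 indicators per instrument, label = 1 if it outperformed.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

clf = SVC(kernel=knowledge_weighted_rbf).fit(X_train, y_train)
X_test = rng.normal(size=(5, 4))
print(clf.predict(X_test))
```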