121

Automatic identification and removal of low quality online information

Webb, Steve. January 2008 (has links)
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2009. / Committee Chair: Pu, Calton; Committee Member: Ahamad, Mustaque; Committee Member: Feamster, Nick; Committee Member: Liu, Ling; Committee Member: Wu, Shyhtsun Felix. Part of the SMARTech Electronic Thesis and Dissertation Collection.
122

Transactional behaviour based spam detection

Choi, Thomas, January 1900 (has links)
Thesis (M.App.Sc.) - Carleton University, 2007. / Includes bibliographical references (p. 119-126). Also available in electronic format on the Internet.
123

Visuelle Analyse von E-mail-Verkehr (Visual Analysis of E-mail Traffic)

Mansmann, Florian. January 2003 (has links)
Diploma thesis (Diplomarbeit), University of Konstanz, 2003.
124

Phishing Warden: enhancing content-triggered trust negotiation to prevent phishing attacks

Henshaw, James Presley, January 2005 (has links) (PDF)
Thesis (M.S.)--Brigham Young University. Dept. of Computer Science, 2005. / Includes bibliographical references (p. 47-50).
125

E-mail spam filtering solution for the Western Interstate Commission for Higher Education (WICHE)

Worley, Jerry A. January 2005 (has links) (PDF)
Thesis (M.S.C.I.T.)--Regis University, Denver, Colo., 2005. / Title from PDF title page (viewed Nov. 23, 2005). Includes bibliographical references.
126

Transfer Learning for BioImaging and Bilingual Applications

January 2015 (has links)
abstract: Discriminative learning when training and test data belong to different distributions is a challenging and complex task. Often we have very few or no labeled data from the test (target) distribution, but plenty of labeled data from one or multiple related sources with different distributions. Due to its capability of migrating knowledge from related domains, transfer learning has been shown to be effective for cross-domain learning problems. In this dissertation, I carry out research along this direction with a particular focus on designing efficient and effective algorithms for BioImaging and Bilingual applications. Specifically, I propose deep transfer learning algorithms which combine transfer learning and deep learning to improve image annotation performance. First, I propose to generate deep features for the Drosophila embryo images via pretrained deep models and to build linear classifiers on top of these deep features. Second, I propose to fine-tune the pretrained model with a small amount of labeled images. The time complexity and performance of the deep transfer learning methodologies are investigated, and promising results demonstrate the knowledge transfer ability of the proposed deep transfer algorithms. Moreover, I propose a novel Robust Principal Component Analysis (RPCA) approach to process the noisy images in advance. In addition, I present a two-stage re-weighting framework for general domain adaptation problems: the distribution of the source domain is mapped towards the target domain in the first stage, and an adaptive learning model is proposed in the second stage to incorporate label information from the target domain if it is available. The proposed model is then applied to tackle the cross-lingual spam detection problem at LinkedIn’s website. Our experimental results on real data demonstrate the efficiency and effectiveness of the proposed algorithms. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2015
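The first stage described in this abstract (a pretrained network used as a fixed feature extractor, with a linear classifier trained on the extracted features) can be illustrated with a minimal sketch. This is not the dissertation's actual pipeline; it assumes a torchvision ResNet-18 backbone, scikit-learn's LogisticRegression as the linear classifier, and random tensors standing in for the Drosophila embryo images.

```python
# Hypothetical sketch: pretrained CNN as a frozen feature extractor,
# with a linear classifier trained on top of the extracted "deep features".
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Load a pretrained network and drop its classification head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 512-d pooled features
backbone.eval()                     # no fine-tuning in this first stage

@torch.no_grad()
def deep_features(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of (N, 3, 224, 224) images to (N, 512) feature vectors."""
    return backbone(images)

# Illustrative data only: replace with real, preprocessed image tensors and labels.
train_images = torch.randn(64, 3, 224, 224)
train_labels = torch.randint(0, 2, (64,))

X = deep_features(train_images).numpy()
clf = LogisticRegression(max_iter=1000).fit(X, train_labels.numpy())
```

The second stage mentioned in the abstract (fine-tuning the pretrained model on a small labeled set) would instead unfreeze some or all backbone weights and continue training end to end.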
127

Detecting Fake Reviews with Machine Learning

Ferreira Uchoa, Marina January 2018 (has links)
Many individuals and businesses make decisions based on freely and easily accessible online reviews. This provides incentives for the dissemination of fake reviews, which aim to deceive the reader into having undeserved positive or negative opinions about an establishment or service. With that in mind, this work proposes machine learning applications to detect fake online reviews from the hotel, restaurant and doctor domains. In order to filter these deceptive reviews, Neural Networks and Support Vector Machines are used. Both algorithms’ parameters are optimized during training, and the parameters that result in the highest accuracy for each data and feature set combination are selected for testing. As input features for both machine learning applications, unigrams, bigrams and the combination of both are used. The advantage of the proposed approach is that the models are simple yet yield results comparable with those found in the literature using more complex models. The highest accuracy was achieved with a Support Vector Machine using the Laplacian kernel, which obtained 82.92% for hotel, 80.83% for restaurant and 73.33% for doctor reviews.
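As a rough illustration of the kind of model the abstract describes (an SVM with a Laplacian kernel over unigram and bigram counts), the sketch below uses scikit-learn. The toy reviews, labels and default hyperparameters are invented for the example and are not the thesis's data or settings.

```python
# Hypothetical sketch: Laplacian-kernel SVM over unigram+bigram counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import laplacian_kernel
from sklearn.svm import SVC

# Toy corpus standing in for labelled truthful (1) / deceptive (0) reviews.
reviews = [
    "The room was clean and the staff were genuinely helpful.",
    "Best hotel ever, amazing, perfect, wonderful, must visit!!!",
    "Dinner was fine although the service felt a little slow.",
    "Absolutely the greatest restaurant in the world, unbelievable!!!",
]
labels = [1, 0, 1, 0]

# Unigrams + bigrams as input features.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(reviews).toarray()

# SVC accepts a callable kernel; laplacian_kernel(X, Y) returns the Gram matrix.
clf = SVC(kernel=laplacian_kernel).fit(X, labels)
print(clf.predict(X))
```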
128

To deceive the receiver : A genre analysis of the electronic variety of Nigerian scam letters

Bredesjö Budge, Susanne January 2006 (has links)
This essay analyses fifty electronic Nigerian scam letters, or spam, in order to find out whether they can be considered a genre of their own according to Swales’ (1990) definition. It compares the Nigerian scam letters to sales promotion letters as presented by Bhatia (1993). The functional moves Bhatia (1993) found in the sales promotion letters were applied to the Nigerian scam letters, and three functional moves unique to the scam letters were established. These scam-letter-specific functional moves, together with the letters’ compatibility with Swales’ (1990) definition of genre, support this essay’s argument that the Nigerian scam letters constitute a genre of their own.
129

Exploring Privacy Risks in Information Networks / Att utforska risker mot personlig integritet i informationsnätverk

Jacobsson, Andreas January 2004 (has links)
Exploring privacy risks in information networks means analysing the dangers and hazards related to personal information about users of a network. It is about investigating the dynamics and complexities of a setting where humans are served by technology in order to exploit the network for their own good. In the information network, malicious activities are motivated by commercial factors, in that attacks on privacy happen not in the name of national security but in the name of the free market together with technological advancements. Based on the assumption of Machiavellian Intelligence, we have modelled our analyses by way of concepts such as the Arms Race, the Tragedy of the Commons, and the Red Queen effect.

In a number of experiments on spam, adware, and spyware, we have found that they match the characteristics of privacy-invasive software, i.e., software that ignores users’ right to decide what, how and when information about themselves is disseminated by others. Spam messages and adware programs suggest a hazard in that they exploit the lives of millions of users with unsolicited commercial and/or political content. In reality, though, spam and adware are rather benign forms of privacy risk, since they, e.g., do not collect and/or transmit user data to third parties. Spyware programs are more serious forms of privacy risk. These programs are usually bundled with, e.g., file-sharing tools that allow a spyware to secretly infiltrate computers in order to collect and distribute, e.g., personal information and data about the computer to profit-driven third parties on the Internet. In return, adware and spam displaying customised advertisements and offers may be distributed to vast numbers of users. Spyware programs also have the capability of retrieving malicious code, which can make the spyware act like a virus when the file-sharing tools are distributed between the users of a network.

In conclusion, spam, spyware and virulent programs invade user privacy. However, our experiments also indicate that privacy-invasive software impairs the security, stability and capacity of computerised systems and networks. Furthermore, we propose a description of the risk environment in information networks, where network contaminants (such as spam, spyware and virulent programs) are put in a context (an information ecosystem) and dynamically modelled by their characteristics both individually and as a group. We show that network contamination may be a serious threat to the future prosperity of an information ecosystem. It is therefore strongly recommended that network owners and designers respect the privacy rights of individuals. Privacy risks have the potential to overthrow the positive aspects of belonging to an information network. In a sound information network, the flow of personal information is balanced with the advantages of belonging to the network. With an understanding of the privacy risk environment, there is a good starting point for recognising and preventing intrusions into matters of a personal nature. In effect, mitigating privacy risks contributes to a secure and efficient use of information networks.
130

Bayesisk filtrering i syfte att motverka spam: En studie om bayesisk filtrering i olika programvaror (Bayesian filtering to counter spam: A study of Bayesian filtering in different software products)

Bengtsson, Andreas, Kindstrand, Johan, Persson, Stefan January 2013 (has links)
A constant problem with e-mail is the amount of spam sent every day, which creates uncertainty among home users and imposes large costs on companies. Being able to protect oneself and filter out spam is therefore of great importance. But what exactly is spam? Anti-spam software uses several methods to address the problem. This thesis examines one of these methods and how effectively it is used in different software products. The method in focus is Bayesian filtering and each product's ability to make use of it. The study analyses whether SpamAssassin and GFI MailEssentials make effective use of Bayesian filtering. Tests are run under identical conditions on the two products, that is, with all filters and protections disabled except Bayesian filtering. The test results are then analysed to show how effective the filter is.
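For readers unfamiliar with the technique under test, the sketch below shows the core idea of Bayesian (naive Bayes) spam filtering: per-word spam probabilities learned from labelled mail, combined under an independence assumption. The training messages, smoothing constant and scoring function are invented for illustration and do not reflect SpamAssassin's or GFI MailEssentials' actual implementations.

```python
# Hypothetical sketch of naive Bayes spam scoring with Laplace smoothing.
import math
from collections import Counter

spam_msgs = ["cheap pills buy now", "win money now click here"]
ham_msgs = ["meeting notes attached", "lunch tomorrow at noon"]

spam_counts = Counter(w for m in spam_msgs for w in m.split())
ham_counts = Counter(w for m in ham_msgs for w in m.split())
n_spam, n_ham = len(spam_msgs), len(ham_msgs)
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message: str, alpha: float = 1.0) -> float:
    """Posterior P(spam | words), computed in log space with Laplace smoothing."""
    log_spam = math.log(n_spam / (n_spam + n_ham))
    log_ham = math.log(n_ham / (n_spam + n_ham))
    for w in message.split():
        log_spam += math.log((spam_counts[w] + alpha) /
                             (sum(spam_counts.values()) + alpha * len(vocab)))
        log_ham += math.log((ham_counts[w] + alpha) /
                            (sum(ham_counts.values()) + alpha * len(vocab)))
    # Convert the two joint log-probabilities back to a posterior.
    return 1.0 / (1.0 + math.exp(log_ham - log_spam))

print(spam_score("buy cheap pills"))   # close to 1 -> likely spam
print(spam_score("meeting at noon"))   # close to 0 -> likely ham
```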
