About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Ontology-based context management for mobile devices

De, Suparna January 2009
No description available.
142

Content and service adaption management

Attou, Abdelhak January 2009
No description available.
143

End-to-end verifiable voting with Prêt à Voter

Bismark, David January 2010
No description available.
144

Type projections over self-describing data

Simeoni, Fabio January 2011
No description available.
145

Replication and a multi-method approach to empirical software engineering research

Daly, John William January 1996
No description available.
146

Multi-objective planning using linear programming

Mohamed Radzi, Nor Haizan January 2010
No description available.
147

Speculative parallelisation with dynamic data structures

Rybin, Pavel January 2010
No description available.
148

Large-scale connectionist natural language parsing using lexical semantic and syntactic knowledge

Nkantah, Dianabasi Edet January 2007
Syntactic parsing plays a pivotal role in most automatic natural language processing systems. The research project presented in this dissertation focused on two main characteristics of connectionist models for natural language processing: their adaptability to different tagging conventions, and their ability to use multiple linguistic constraints in parallel during sentence processing. In focusing on these key characteristics, an existing hybrid connectionist, shift-reduce, corpus-based parsing model was modified. This parser, which had earlier been trained to acquire linguistic knowledge from the Lancaster Parsed Corpus, was adapted to learn linguistic knowledge from the Wall Street Journal Corpus. This adaptation is a novel demonstration that this connectionist parser, and by extension other similar connectionist models, can adapt to more than one syntactic tagging convention, which implies an ability to adapt to the underlying linguistic theories used to annotate these corpora. The parser was also adapted to integrate shallow lexical semantic information with syntactic information for full syntactic parsing; this approach was used to investigate the effect of shallow lexical semantic information on full syntactic parsing. In pursuing these aims, a novel algorithm for semantic tagging of nouns in the Wall Street Journal Corpus was developed. The lexical semantic information used in this semantic annotation algorithm was extracted from WordNet, an online lexical resource. Using only syntactic information in making parsing decisions, the parsing model was tested on sets of sentences that were not used during training. The parser generalised to parse these test sentences with F-measures of 72.5% and 59.5% on sentences from the Lancaster Parsed Corpus and the Wall Street Journal Corpus, respectively. With shallow lexical semantic information integrated with syntactic information in its input representation, the parser generalised to parse test sentences from the Wall Street Journal Corpus with an F-measure of 56.75%. Although this integration did not appear to improve the parser's overall training and generalisation performance in its present configuration, it did appear to improve the parser's decision making concerning prepositional phrase attachment.
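The shallow semantic tagging of nouns described in this abstract can be illustrated with a short sketch. The thesis's actual algorithm is not reproduced in this listing, so the following is only an assumed, simplified stand-in: it assigns each noun the WordNet "lexicographer file" name (a coarse supersense such as noun.person or noun.artifact) of its most frequent sense, read through NLTK's WordNet interface. The function name shallow_semantic_tag and the fallback tag are our own conventions.

```python
# Minimal sketch, not the thesis's algorithm: tag nouns with a coarse
# WordNet supersense via NLTK. Requires: pip install nltk, then
# nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def shallow_semantic_tag(noun: str) -> str:
    """Return a coarse WordNet supersense for a noun, e.g. 'noun.person'."""
    synsets = wn.synsets(noun, pos=wn.NOUN)
    if not synsets:
        return "noun.unknown"  # out-of-vocabulary fallback (our convention)
    # NLTK orders synsets by sense frequency; take the most frequent sense
    # and report its lexicographer file name as the shallow semantic tag.
    return synsets[0].lexname()

if __name__ == "__main__":
    for word in ["banker", "profit", "Tuesday", "frobnicate"]:
        print(word, "->", shallow_semantic_tag(word))
```

A tagger of this kind yields exactly the sort of shallow lexical semantic features (person, time, artifact, ...) that could be combined with syntactic categories in a parser's input representation.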
149

Design, evaluation and analysis of combinatorial optimization heuristic algorithms

Karapetyan, Daniil January 2010
No description available.
150

A new technique for intelligent web personal recommendation

Embarak, Ossama Hashem Khamis January 2011
Personal recommendation systems are now very important in web applications because of the huge volume of information available on the World Wide Web and the need to save users' time and deliver the desired information, knowledge, items, etc. The most popular recommendation systems are collaborative filtering systems, which suffer from problems such as cold start, privacy, user identification, and scalability. In this thesis, we propose a new method to solve the cold start problem while taking the privacy issue into consideration. The method is shown to perform very well in comparison with alternative methods, while having better properties regarding user privacy. The cold start problem covers the situation where a recommendation system has insufficient information about a new user's preferences (the user cold start problem), as well as the case of items newly added to the system (the item cold start problem), for which the system cannot yet provide recommendations. Some systems use users' demographic data as a basis for generating recommendations in such cases (e.g. the Triadic Aspect method), but this solves only the user cold start problem and compromises user privacy. Some systems use user 'stereotypes' to generate recommendations, but stereotypes often do not reflect the actual preferences of individual users. Other systems use 'filterbots', injecting pseudo-users or bots into the system and treating them as real users, but this leads to poor accuracy. We propose the active node method, which uses users' previous and recent browsing targets and browsing patterns to infer preferences and generate recommendations: node recommendations, in which a single suggestion is given, and batch recommendations, in which a set of possible target nodes is shown to the user at once. We compare the active node method with three alternative methods (the Triadic Aspect method, the Naïve Filterbots method, and the MediaScout Stereotype method), using a dataset collected from online web news to generate recommendations with our method and with each alternative. We measured novelty, coverage, and precision in these experiments, and found that our method achieves higher novelty in batch recommendation and higher coverage and precision in node recommendation compared to the alternative methods. Further, we develop a variant of the active node method that incorporates semantic structure elements. A further experimental evaluation with real data and users showed that semantic node recommendation with the active node method achieved higher novelty than non-semantic node recommendation, and semantic batch recommendation achieved higher coverage and precision than non-semantic batch recommendation.
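The node/batch distinction can be made concrete with a deliberately simplified sketch. This is not the thesis's active node method, whose details are not given in this abstract; it is an assumed stand-in that models browsing as first-order page-to-page transitions and derives either the single most likely next node (node recommendation) or the top-k candidates (batch recommendation). The class and method names are hypothetical.

```python
# Hedged sketch only: a first-order transition model over browsing trails,
# standing in for the (unpublished here) active node method.
from collections import Counter, defaultdict

class BrowsingModel:
    def __init__(self):
        # page -> Counter of pages visited immediately afterwards
        self.transitions = defaultdict(Counter)

    def observe(self, trail):
        """Record one user's browsing trail, e.g. ['home', 'sports', ...]."""
        for cur, nxt in zip(trail, trail[1:]):
            self.transitions[cur][nxt] += 1

    def recommend_node(self, current):
        """Node recommendation: the single most likely next page."""
        nxt = self.transitions.get(current)
        return nxt.most_common(1)[0][0] if nxt else None

    def recommend_batch(self, current, k=3):
        """Batch recommendation: the k most likely next pages."""
        nxt = self.transitions.get(current)
        return [page for page, _ in nxt.most_common(k)] if nxt else []

model = BrowsingModel()
model.observe(["home", "politics", "economy", "politics"])
model.observe(["home", "politics", "economy"])
model.observe(["home", "sports"])
print(model.recommend_node("home"))      # -> 'politics'
print(model.recommend_batch("home", 2))  # -> ['politics', 'sports']
```

Because a model of this shape needs only the user's current browsing position rather than an accumulated rating history, it can serve a brand-new user immediately, which is the essence of addressing the user cold start problem without collecting identifying profile data.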
