  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

DSP-based active power filter

Othman, Mohd. Ridzal January 1998 (has links)
Harmonics in power systems are conventionally suppressed using passive tuned filters, which have practical limitations in overall cost, size and performance, and are particularly unsatisfactory when a large number of harmonics is involved. Active power filtering is an alternative approach in which the filter injects suitable compensation currents to cancel the harmonic currents, usually through the use of power electronic converters. This type of filter does not exhibit the drawbacks normally associated with its passive counterpart, and a large number of harmonics can be compensated by a single unit without additional cost or performance degradation. This thesis investigates an active power filter configuration that uses instantaneous reactive power theory to calculate the compensation currents. Since the original equations for determining the reference compensation currents are defined in two imaginary phases, considerable computation time is needed to transform them from the real three-phase values. The novel approach described in the thesis minimises the required computation time by calculating the equations directly in terms of the phase values, i.e. the three-phase currents and voltages. Furthermore, by using a sufficiently fast digital signal processor (DSP) to perform the calculation, real-time compensation can be achieved with greater accuracy. The results show that the proposed approach leads to further harmonic suppression in both the current and voltage waveforms compared with the original approach, owing to the considerable reduction in the computation time of the reference compensation currents.
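The instantaneous reactive power (p-q) theory mentioned in this abstract can be sketched numerically. The following is a minimal illustration only: it uses the classic Clarke (αβ) formulation that the thesis's direct phase-quantity method is designed to avoid, and all function names, the power-invariant scaling, and the sign convention for q are assumptions rather than the thesis's implementation.

```python
import numpy as np

def clarke(a, b, c):
    """Power-invariant Clarke transform of three-phase quantities."""
    alpha = np.sqrt(2 / 3) * (a - 0.5 * b - 0.5 * c)
    beta = np.sqrt(2 / 3) * (np.sqrt(3) / 2) * (b - c)
    return alpha, beta

def pq_powers(v3, i3):
    """Instantaneous real (p) and imaginary (q) power per p-q theory."""
    va, vb = clarke(*v3)
    ia, ib = clarke(*i3)
    p = va * ia + vb * ib   # instantaneous real power
    q = vb * ia - va * ib   # instantaneous imaginary power (one sign convention)
    return p, q

def compensation_currents(v3, p_osc, q):
    """Reference compensation currents that cancel the oscillating part of p
    and all of q, returned in phase (a, b, c) quantities."""
    va, vb = clarke(*v3)
    den = va ** 2 + vb ** 2
    ic_a = (va * p_osc + vb * q) / den
    ic_b = (vb * p_osc - va * q) / den
    # inverse power-invariant Clarke transform back to phase values
    a = np.sqrt(2 / 3) * ic_a
    b = np.sqrt(2 / 3) * (-0.5 * ic_a + np.sqrt(3) / 2 * ic_b)
    c = np.sqrt(2 / 3) * (-0.5 * ic_a - np.sqrt(3) / 2 * ic_b)
    return a, b, c
```

For a balanced sinusoidal supply feeding a purely resistive load, p comes out constant and q identically zero, which is the sanity check usually applied to such an implementation.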
2

UM MODELO DE SISTEMA DE FILTRAGEM HÍBRIDA PARA UM AMBIENTE COLABORATIVO DE ENSINO APRENDIZAGEM / A MODEL SYSTEM FOR A HYBRID COLLABORATIVE FILTERING LEARNING ENVIRONMENT OF EDUCATION

SANTOS, André Luis Silva dos 15 February 2008 (has links)
The World Wide Web is an excellent source of information, but open issues remain: it is difficult to obtain relevant information quickly, and retrieval is often imprecise. Search engines such as Google, AltaVista and Cadê can retrieve a huge amount of information, much of which may not be relevant. Information filtering systems arose to aid users in the search for relevant information. This work proposes a hybrid information filtering model that combines content-based filtering and collaborative filtering. The model was applied to a collaborative learning environment named NetClass and was developed using the PASSI methodology. A case study conducted with students from CEFET-MA is also presented.
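As a rough illustration of the hybrid filtering idea described above, a weighted combination of a collaborative score and a content-based score might look like the sketch below. The weighting scheme and every name here are assumptions for illustration, not the NetClass implementation.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity, returning 0 for zero vectors."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return 0.0 if nu == 0 or nv == 0 else float(u @ v) / (nu * nv)

def collaborative_score(ratings, user, item):
    """Predict a rating from similar users' ratings of the item (user-based CF)."""
    num = den = 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        sim = cosine(ratings[user], ratings[other])
        num += sim * ratings[other, item]
        den += abs(sim)
    return num / den if den else 0.0

def content_score(item_features, user_profile, item):
    """Match the item's feature vector against the user's content profile."""
    return cosine(item_features[item], user_profile)

def hybrid_score(ratings, item_features, user_profile, user, item, alpha=0.5):
    """Weighted blend of collaborative and content-based evidence."""
    return (alpha * collaborative_score(ratings, user, item)
            + (1 - alpha) * content_score(item_features, user_profile, item))
```

The blend parameter `alpha` trades off the two sources of evidence; a content-heavy setting helps when the ratings matrix is sparse, which is the usual motivation for hybridisation.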
3

Busca na web e agrupamento de textos usando computação inspirada na biologia / Search in the web and text clustering using computing inspired by biology

Pereira, Andre Luiz Vizine 18 December 2007 (has links)
Advisors: Ricardo Ribeiro Gudwin, Leandro Nunes de Castro Silva. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. The Internet has become one of the main sources of information and means of communication, reducing costs and providing resources and information to people all over the world. This dissertation develops and applies two biologically inspired computing approaches, a genetic algorithm and an ant-clustering algorithm, to the problems of optimizing web information search and retrieval and of text clustering. The final goal of the project is to develop part of the toolset that will form the core of an adaptive academic virtual community. The results show that the genetic algorithm can be feasibly applied to optimizing information search and retrieval, while the ant-clustering algorithm still has limitations and needs further investigation before it can be applied efficiently to text clustering.
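The ant-clustering behaviour referred to above is often formulated with Lumer–Faieta-style pick/drop probabilities driven by a local density estimate. The sketch below shows one common variant under assumed constants `k1`, `k2` and `alpha`; it is not the dissertation's modified algorithm, and a full run would additionally move ants and items around a toroidal grid.

```python
import numpy as np

def local_density(grid, pos, item, items, s=1, alpha=0.5):
    """Average similarity between `item` and the items in the s-neighbourhood
    of grid cell `pos` (toroidal grid; empty cells contribute nothing)."""
    r, c = pos
    n = len(grid)
    total = 0.0
    for dr in range(-s, s + 1):
        for dc in range(-s, s + 1):
            if dr == 0 and dc == 0:
                continue
            j = grid[(r + dr) % n][(c + dc) % n]
            if j is not None:
                d = np.linalg.norm(items[item] - items[j])
                total += 1 - d / alpha
    f = total / ((2 * s + 1) ** 2 - 1)
    return max(f, 0.0)

def p_pick(f, k1=0.1):
    """Probability an unladen ant picks up an item in a region of density f."""
    return (k1 / (k1 + f)) ** 2

def p_drop(f, k2=0.15):
    """Probability a laden ant drops its item in a region of density f."""
    return 2 * f if f < k2 else 1.0
```

Items in dissimilar neighbourhoods are picked up with probability near 1 and dropped with probability near 0, so similar items gradually accumulate into spatial clusters.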
4

PROPOST: UMA FERRAMENTA BASEADA EM CONHECIMENTO PARA GESTÃO DE PORTIFÓLIO DE PROJETOS. / PROPOST: A KNOWLEDGE-BASED TOOL FOR PROJECT PORTFOLIO MANAGEMENT.

VIEIRA, Eduardo Newton Oliveira 12 February 2007 (has links)
This work introduces PROPOST (Project Portfolio Support Tool), a knowledge-based software tool for supporting Project Portfolio Management, a management model of growing adoption. The tool focuses on the project definition process and was modeled using the MAAEM methodology and the ONTORMAS ontology-driven tool, as well as by reusing the ONTOINFO and ONTOWUM ontologies, which describe software product families for the development of Information Retrieval and Information Filtering applications, respectively. PROPOST aims to optimize resources by supporting the reuse of existing information systems and by avoiding duplicate project definitions in the composition of the organization's software portfolio. The tool was conceived both as a contribution to solving a current problem of redundancy in portfolio composition and as support for several other portfolio management activities (selection, prioritization and evaluation). The development of PROPOST also provides a reference on how ontology-based development can help the software development process, and serves as a case study for evaluating the MAAEM methodology and the ONTORMAS ontology used in the modeling process, having yielded several suggestions for their improvement.
5

A Comparison Of Different Recommendation Techniques For A Hybrid Mobile Game Recommender System

Cabir, Hassane Natu Hassane 01 November 2012 (has links) (PDF)
As information continues to grow at a very fast pace, our ability to access it effectively does not, and we often realize how much harder it is getting to locate an object quickly and easily. So-called personalization technology is one of the best answers to this information overload problem: by automatically learning the user profile, personalized information services can offer a more proactive and intelligent form of information access designed to assist us in finding interesting objects. Recommender systems, which emerged as a solution to the information overload problem, provide recommendations of content suited to our needs. In order to provide recommendations as close as possible to a user's taste, personalized recommender systems require accurate user models of characteristics, preferences and needs. Collaborative filtering is a widely accepted technique for producing recommendations from the ratings of similar users, but it suffers from issues such as data sparsity and cold start. In one-class collaborative filtering (OCCF), a special class of collaborative filtering methods that deals with datasets lacking counter-examples, the challenge is even greater, since these datasets are even sparser. This thesis presents a series of experiments conducted on a real-life customer purchase database from a major Turkish e-commerce site. The sparsity problem is handled by a content-based technique combined with TF-IDF weights, by memory-based collaborative filtering combined with different similarity measures and hybrid approaches, and by model-based collaborative filtering using Singular Value Decomposition (SVD). Our study showed that the binary similarity measure and SVD outperform conventional measures on this OCCF dataset.
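The model-based OCCF approach mentioned above can be illustrated with a low-rank SVD reconstruction of a binary purchase matrix: unpurchased items with high reconstructed scores are recommended. This is a generic sketch with assumed names, shapes and parameters, not the thesis's experimental setup.

```python
import numpy as np

def svd_scores(R, k=2):
    """Rank-k reconstruction of a binary user-item purchase matrix.
    Higher reconstructed values suggest likelier purchases."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def recommend(R, user, k=2, n=3):
    """Top-n unpurchased items for `user`, ranked by reconstructed score."""
    scores = svd_scores(R, k)[user]
    unseen = [i for i in range(R.shape[1]) if R[user, i] == 0]
    return sorted(unseen, key=lambda i: -scores[i])[:n]
```

Because OCCF data has no explicit negatives, the zeros here stand for "unknown" rather than "disliked"; the low-rank factorisation fills them in from co-purchase structure.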
6

A Content Boosted Collaborative Filtering Approach For Movie Recommendation Based On Local & Global Similarity And Missing Data Prediction

Ozbal, Gozde 01 September 2009 (has links) (PDF)
Recently, it has become more and more difficult for existing web-based systems to locate or retrieve relevant information, due to the rapid growth of the World Wide Web (WWW) in terms of both the information space and the number of users in that space. However, many systems and approaches now make it possible to guide users with recommendations about new items such as articles, news, books, music, and movies. Many traditional recommender systems fail when the data used throughout the recommendation process is sparse; in other words, when there is an inadequate number of items or users in the system, unsuccessful recommendations are produced. This thesis presents ReMovender, a web-based movie recommendation system that uses a content-boosted collaborative filtering approach. ReMovender combines local/global similarity and missing-data prediction techniques in order to handle the previously mentioned sparseness problem effectively. In addition, by taking the content information of the movies into consideration during the item similarity calculations, it achieves more successful and realistic predictions.
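One loose way to read "similarity and missing data prediction" is to blend a user-side and an item-side estimate of a missing rating. The sketch below is an assumed interpretation for illustration only, not ReMovender's actual formulation of local/global similarity.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity, returning 0 for zero vectors."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return 0.0 if nu == 0 or nv == 0 else float(u @ v) / (nu * nv)

def user_based(R, u, i):
    """Estimate R[u, i] from other users who rated item i."""
    num = den = 0.0
    for v in range(R.shape[0]):
        if v == u or R[v, i] == 0:
            continue
        s = cosine(R[u], R[v])
        num += s * R[v, i]
        den += abs(s)
    return num / den if den else 0.0

def item_based(R, u, i):
    """Estimate R[u, i] from the other items user u has rated."""
    num = den = 0.0
    for j in range(R.shape[1]):
        if j == i or R[u, j] == 0:
            continue
        s = cosine(R[:, i], R[:, j])
        num += s * R[u, j]
        den += abs(s)
    return num / den if den else 0.0

def predict(R, u, i, lam=0.5):
    """Blend user-side and item-side evidence for a missing rating."""
    return lam * user_based(R, u, i) + (1 - lam) * item_based(R, u, i)
```

Blending the two estimates hedges against sparsity on either axis: when a user has rated few items, the user-side term still draws on neighbours, and vice versa.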
7

Rough set-based reasoning and pattern mining for information filtering

Zhou, Xujuan January 2008 (has links)
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined on the basis of users' needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems dealing with polysemy and synonymy, which often lead to information overload. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches must also deal with low-frequency patterns. The measures used by data mining techniques (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering and can lead to a mismatch problem. This thesis uses rough set-based reasoning (term-based) and pattern mining in a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage minimizes information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model for threshold setting were developed using rough set decision theory.
The second stage (pattern mining) aims to solve the information mismatch problem and is precision-oriented. A new document-ranking function was derived by exploiting the patterns in the pattern taxonomy; the most likely relevant documents are assigned higher scores by this function. Because relatively few documents remain after the first stage, the computational cost is markedly reduced, and at the same time pattern discovery yields more accurate results, so the overall performance of the system improves significantly. The new two-stage information filtering model was evaluated by extensive experiments based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, the Reuters Corpus Volume 1 (RCV1). Its performance was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed system significantly outperforms the other IF systems, including the traditional Rocchio model, state-of-the-art term-based models such as BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
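The two-stage pipeline described above can be caricatured as a threshold-based topic filter followed by pattern-weighted ranking. The rough set decision-theoretic threshold model and the PTM mining step are simplified away here, so this is only a structural sketch with assumed data and names.

```python
def stage1_topic_filter(docs, profile_terms, threshold=0.2):
    """Stage 1: keep documents whose term overlap with the user profile
    clears a threshold, discarding the most likely irrelevant documents."""
    kept = []
    for doc in docs:
        words = set(doc.lower().split())
        overlap = len(words & profile_terms) / len(profile_terms)
        if overlap >= threshold:
            kept.append(doc)
    return kept

def stage2_pattern_rank(docs, patterns):
    """Stage 2: rank the surviving documents by weighted pattern matches.
    `patterns` maps a frozenset of terms to a weight mined from training data."""
    def score(doc):
        words = set(doc.lower().split())
        return sum(w for pat, w in patterns.items() if pat <= words)
    return sorted(docs, key=score, reverse=True)
```

The point of the structure mirrors the thesis's argument: the cheap first stage shrinks the stream, so the more expensive pattern-based ranking only runs on a small candidate set.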
