  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
371

Metodologia Híbrida de Previsão de Preços de Eletricidade e Potência Eólica

Jorge Nuno Dias Lopes Gonçalves 21 July 2016 (has links)
Development of a new methodology for forecasting electricity prices and wind power by combining different existing systems and elements, and comparison with previously created methodologies in order to verify its reliability and the possible improvement in the quality of both forecasts, namely the reduction of the respective errors.
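As an illustration of combining existing forecasting systems into a single hybrid prediction, the following is a minimal weighted-ensemble sketch; the component models, error figures and data are hypothetical and are not taken from the thesis.

```python
import numpy as np

def combine_forecasts(forecasts, errors):
    """Blend several price/wind-power forecasts.

    forecasts: array of shape (n_models, horizon) with each model's prediction.
    errors:    recent mean absolute error of each model (lower error = more weight).
    """
    weights = 1.0 / (np.asarray(errors) + 1e-9)   # inverse-error weighting
    weights /= weights.sum()                      # normalise weights to sum to 1
    return weights @ np.asarray(forecasts)        # weighted average per time step

# Hypothetical example: three models forecasting 24 hourly prices (EUR/MWh).
model_a = np.full(24, 52.0)
model_b = np.full(24, 48.0)
model_c = np.full(24, 50.0)
combined = combine_forecasts([model_a, model_b, model_c], errors=[4.1, 2.7, 3.3])
print(combined[:3])
```

The inverse-error weighting shown is only one of many possible combination rules; the thesis compares its methodology against previously developed ones rather than prescribing this particular blend.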
372

Framework for Monte Carlo Tree Search-related strategies in Competitive Card Based Games

Pedro Ricardo Oliveira Fernandes 27 September 2016 (has links)
In recent years, Monte Carlo Tree Search (MCTS) has been successfully applied as a new artificial intelligence strategy in game playing, with excellent results in the popular board game Go, real-time strategy games and card games. The MCTS algorithm was developed as an alternative to established adversarial search algorithms, i.e., Minimax (MM) and knowledge-based approaches. MCTS can achieve good results with nothing more than information about the game rules, and can achieve breakthroughs in domains of high complexity, whereas in traditional AI approaches developers might struggle to find heuristics through expertise in each specific game. Every algorithm has its caveats, and MCTS is no exception, as stated by Browne et al.: "Although basic implementations of MCTS provide effective play for some domains, results can be weak if the basic algorithm is not enhanced. (...) There is currently no better way than a manual, empirical study of the effect of enhancements to obtain acceptable performance in a particular domain." Thus, the first objective of this dissertation is to research various state-of-the-art MCTS enhancements in the context of card games and then to apply, experiment with and fine-tune them in order to achieve a highly competitive implementation, validated and tested against other algorithms such as MM. By analysing trick-taking card games such as Sueca and Bisca, where players take turns placing cards face up on the table, there are similarities that allow the development of an MCTS-based implementation featuring effective enhancements for multiple game variations, since they are non-deterministic, imperfect-information problems. Good results have already been achieved with the algorithm in this domain, in games such as Spades and Hearts. The end result aims toward a framework that offers a competitive AI implementation for at least three different card games (validated through analysis against other approaches), allowing developers to integrate their own card games and benefit from a working AI, and also serving as a testing ground to rank different agent implementations.
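As a rough illustration of the core MCTS loop referred to above (selection, expansion, simulation, backpropagation), here is a generic, simplified sketch; it is not the framework developed in the dissertation, and the game interface it assumes (`legal_moves`, `play`, `is_over`, `result`) is a hypothetical placeholder. The playout result is propagated from a single player's perspective, which is a simplification.

```python
import math
import random

class Node:
    """One node of the search tree: a game state reached by `move` from `parent`."""
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def ucb1(self, c=1.41):
        # Selection policy: balance exploitation (win rate) and exploration.
        if self.visits == 0:
            return float("inf")
        return self.wins / self.visits + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the current node is fully expanded.
        while node.children and len(node.children) == len(node.state.legal_moves()):
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one child for an untried move.
        tried = {child.move for child in node.children}
        untried = [m for m in node.state.legal_moves() if m not in tried]
        if untried and not node.state.is_over():
            move = random.choice(untried)
            child = Node(node.state.play(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout until a terminal state.
        state = node.state
        while not state.is_over():
            state = state.play(random.choice(state.legal_moves()))
        # 4. Backpropagation: update statistics along the path back to the root.
        reward = state.result()
        while node is not None:
            node.visits += 1
            node.wins += reward
            node = node.parent
    return max(root.children, key=lambda child: child.visits).move
```

Enhancements such as determinization for imperfect-information card games would be layered on top of this basic loop.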
373

Rethinking Email Information Visualization: a case study

Pedro Daniel Cardoso dos Santos 16 July 2015 (has links)
How many times is the expression "An image is worth a thousand words" used in everyday life? A graphical representation of an object is assimilated by the human brain more easily than its textual representation, since vision is the sense that carries the most information to the brain, both in quantity and in speed. This principle can be applied to any field, notably in science, where ever larger amounts of data must be analysed and relevant conclusions drawn from them. Until recently these datasets were visualised rather poorly, which is when the field of Information Visualization emerged. By applying a set of techniques and principles, the process of acquiring knowledge can be facilitated so that the user gains more information and better insight over the data while the cognitive effort required for the task decreases significantly. One area where Information Visualization can have a tremendous impact is email. As is widely known, email is a massively used means of communication across many contexts. Even so, email has largely stood still in time, still resting on its original structure of 40 years ago. This points to a need to update the system: first by analysing which actions users perform and what their needs are, and then by adapting the system to the modern user. This dissertation has three distinct parts. The first details this still little-known discipline. The second investigates the email system, identifying its main gaps and the new uses that users associate with email that were not planned in its original specification. Finally, the third part combines the two previous topics, using Information Visualization techniques to address a subset of email's problems suited to the dissertation's timeframe, aiming at a solution that pushes email forward so that using it becomes a pleasant experience again. An email client that is currently under development serves as the case study into which the prototypes developed here are integrated, so that they can be validated by real users, whose feedback is then taken into account to improve the prototypes.
374

Automatic Recognition of Emotion for Music Recommendation

António Miguel Antunes de Oliveira 27 July 2016 (has links)
Music is widely associated with emotions. The automatic recognition of emotions from audio is very challenging because important factors such as personal experience and cultural background are not captured by the musical sounds. Currently, there are challenges associated with most steps of music emotion recognition (MER) systems, namely feature selection, the model of emotions, annotation methods, and the machine learning techniques used. This project uses different machine learning techniques to automatically associate musical features calculated from audio with annotations of emotions made by human listeners. The mapping between the feature space and the emotion model learned by the system can then be used to estimate the emotions associated with music to which the system has not previously been exposed. Consequently, the system has the potential to recommend music to listeners based on emotional content.
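A minimal sketch of the feature-to-emotion mapping described above, assuming audio features extracted with librosa and human valence/arousal annotations; the feature set, file names and choice of regressor are illustrative and not necessarily those used in the project.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestRegressor

def audio_features(path):
    """Summarise a clip with a few common timbral and rhythmic descriptors."""
    y, sr = librosa.load(path, duration=30.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    return np.hstack([mfcc.mean(axis=1), mfcc.std(axis=1), centroid.mean(), tempo])

# Hypothetical training set: (file, valence, arousal) annotated by human listeners.
train = [("clip_001.wav", 0.8, 0.6), ("clip_002.wav", 0.2, 0.3)]  # ... more clips
X = np.array([audio_features(f) for f, _, _ in train])
y = np.array([[v, a] for _, v, a in train])

model = RandomForestRegressor(n_estimators=200).fit(X, y)

# Estimate emotions for music the system has not been exposed to before.
valence, arousal = model.predict([audio_features("unseen_clip.wav")])[0]
print(f"estimated valence={valence:.2f}, arousal={arousal:.2f}")
```

The same mapping could then rank a music library by distance to a target emotion in order to drive recommendation.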
375

Orquestração e aprovisionamento de um estúdio televisivo baseado na tecnologia IP na Cloud

Vasco Fernandes Gonçalves 21 July 2017 (has links)
With the evolution of data transmission speeds on computer networks in recent years, it has become possible to move the production of television content, traditionally produced on dedicated and costly devices, to computer systems interconnected over IP networks. This evolution has been accompanied by new trends in the production of television content, such as higher image resolutions, the transmission of more audio channels (surround, multiple languages) and the transmission of new content formats such as 3D, 360° or virtual reality. The high cost of traditional equipment, particularly for live television production, can hinder the entry of new channels into the market or the adoption of new types of media by existing channels. Advances in the performance of computer hardware also make it possible to virtualise these environments, so as to better exploit the available resources and even to allocate them dynamically as demand varies. This work therefore proposes to reduce the aforementioned costs by creating a platform that allows a user to deploy, in an intuitive way, a virtualised system in a cloud that performs the same functions as the traditional equipment, and that also provides metrics about its operation.
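To illustrate the kind of deploy-and-monitor workflow the platform aims at, here is a minimal sketch using the Docker SDK for Python; the container image, its role as a software video mixer and the metric handling are hypothetical placeholders and are not part of the thesis.

```python
import docker

client = docker.from_env()

# Launch a hypothetical software video mixer as a container, standing in for a
# traditional studio device; the image name and container name are placeholders.
mixer = client.containers.run(
    "example/video-mixer:latest",
    detach=True,
    name="studio-mixer",
)

# Collect one snapshot of basic operating metrics from the container stats API.
stats = mixer.stats(stream=False)
cpu_ns = stats["cpu_stats"]["cpu_usage"]["total_usage"]
mem_bytes = stats["memory_stats"].get("usage", 0)
print(f"CPU time used: {cpu_ns} ns, memory: {mem_bytes / 1e6:.1f} MB")

mixer.stop()
mixer.remove()
```

A full orchestration layer would replace this single-host example when the studio components need to be allocated dynamically across a cloud.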
376

Behavioral Analytics for Medical Decision Support: Supporting dementia diagnosis through outlier detection

João Miguel Pinto Ferreira 29 July 2013 (has links)
No description available.
377

Music-Based Procedural Content Generation for Games

Nuno Filipe Bernardino Oliveira 28 July 2015 (has links)
Procedural content generation, understood as the automatic creation of content through algorithms, is still a relatively recent topic in academia. There are several reasons to develop this technique, but the main one is the reduction of memory consumption: procedural generation algorithms can produce large amounts of content while occupying orders of magnitude less disk space. The procedure is normally used in games to generate levels, maps, vegetation and missions, and less commonly to generate or alter the game engine or the behaviour of NPCs (Non-Player Characters). Although most games feature music, this element usually only supports the game and helps create the intended atmosphere. Games that use music as a source of information to create playable content are still rare, and even in those the generated content is often produced in advance and static. New games have started to differentiate themselves in this respect, with the chosen music generating varied content automatically. The goal of this dissertation is to develop a game generated entirely procedurally from music segments, so that the levels created can be meaningfully differentiated and conclusions can be drawn about the use of music as a procedural content generator. The game consists of stealth missions in which the player must cross an entire level with the resources found along the way, without being seen or caught by the enemies. The game receives a song or music segment as input and, through an individual analysis, extracts a set of important features that distinguish it from others. Each level is then created according to these features, allowing diversity in each mission, mainly by conditioning the way it will be played.
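As a toy illustration of deriving level parameters from a music segment, the sketch below maps a few librosa descriptors onto hypothetical stealth-level settings (guard count, patrol speed, level length); the mapping is invented for illustration and is not the one used in the dissertation.

```python
import librosa

def level_parameters(audio_path):
    """Derive hypothetical stealth-level settings from a music segment."""
    y, sr = librosa.load(audio_path, duration=60.0)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    rms = float(librosa.feature.rms(y=y).mean())         # loudness proxy
    onsets = librosa.onset.onset_detect(y=y, sr=sr)       # rhythmic density

    return {
        # Faster music -> faster guard patrols (illustrative mapping only).
        "patrol_speed": 1.0 + float(tempo) / 200.0,
        # Louder, rhythmically denser music -> more guards.
        "guard_count": int(3 + 10 * rms + len(onsets) / 100),
        # Longer segments -> longer levels (roughly four tiles per second).
        "level_length": int(len(y) / sr * 4),
    }

print(level_parameters("segment.wav"))
```

Two segments with different tempo, loudness and rhythmic density would therefore yield noticeably different missions, which is the property the dissertation sets out to evaluate.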
378

Embedded Scheduler for Dynamically Reconfigurable Accelerators

Carlos Jorge Matos Carneiro de Sousa 27 July 2016 (has links)
Nowadays, the volume of data and the complexity of applications are growing sharply. Simultaneously, embedded systems are becoming more widely used. To keep up with the increasing need for computational power, faster and more power-efficient circuits must be designed. However, as feature sizes decrease and circuit density increases, present-day solutions are steadily approaching their limits. With this in mind, it is necessary to develop new computer architectures. One approach is to have a dynamically reconfigurable system. The existence of configurable hardware would allow the execution of small parts of an application to be optimised, leading to an overall speedup of the system. The main goal of this project is to implement an embedded scheduler capable of generating, at runtime, the specification and the operation schedule of a Reconfigurable Processing Unit that accelerates the execution of application hot spots identified from binary execution traces.
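A rough sketch of the kind of operation scheduling involved: a simple greedy list scheduler that assigns the dataflow operations of a hot spot to a fixed number of functional units while respecting dependencies. The data structures and resource model are illustrative assumptions, not the scheduler developed in this project.

```python
from collections import namedtuple

# A hot-spot operation: a name, the operations it depends on, and a latency in cycles.
Op = namedtuple("Op", "name deps latency")

def list_schedule(ops, num_units=2):
    """Greedily schedule a dependency graph onto `num_units` functional units."""
    finish = {}                     # op name -> cycle in which its result is ready
    unit_free = [0] * num_units     # next free cycle of each functional unit
    schedule = []
    remaining = list(ops)
    while remaining:
        # Pick any operation whose dependencies have already been scheduled.
        op = next(o for o in remaining if all(d in finish for d in o.deps))
        ready = max((finish[d] for d in op.deps), default=0)
        unit = min(range(num_units), key=lambda u: unit_free[u])
        start = max(ready, unit_free[unit])
        finish[op.name] = start + op.latency
        unit_free[unit] = start + op.latency
        schedule.append((op.name, unit, start))
        remaining.remove(op)
    return schedule

# Hypothetical hot spot: (a * b) + (c * d) with 3-cycle multiplies and a 1-cycle add.
ops = [Op("m1", [], 3), Op("m2", [], 3), Op("add", ["m1", "m2"], 1)]
for name, unit, start in list_schedule(ops):
    print(f"{name}: unit {unit}, starts at cycle {start}")
```

A runtime scheduler for a Reconfigurable Processing Unit would additionally have to derive the operation graph from binary execution traces and stay within a tight time and memory budget.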
379

Computer-Vision-based surveillance of Intelligent Transportation Systems

João Francisco Carvalho Neto 25 September 2017 (has links)
Operational control and traffic management rooms rely on information from sensors installed throughout a city, including video surveillance cameras. In the control room of the Porto City Council (Câmara Municipal do Porto) information is currently extracted manually, a slow and unreliable process. This work therefore arises from a collaboration between the company Armis Group and the Porto City Council to introduce automation for analysing the video streams in the control room, using state-of-the-art algorithms to detect events of interest automatically. The results will be integrated into a framework being developed at LIACC that processes the video from capture to storage; since vehicle and pedestrian detection are active research topics, the framework must also allow the algorithms to be updated in the future. Automating this analysis presents a series of challenges: the cameras have different output formats and resolutions, are not static and are moved and zoomed according to the operators' commands, and most are exposed to the weather, which introduces image noise and varying lighting throughout the day and degrades the performance of detection algorithms. This rules out a single one-size-fits-all methodology and requires extra operations, such as noise, rain and shadow removal, to mitigate these issues. If these difficulties are overcome with robust solutions, the productivity of the control room will increase. The objective is thus to implement algorithms for detecting moving objects and extracting information of interest to the control room, such as traffic volumes, people counts and road accidents. Methods already explored include background extraction and feature tracking, which are the starting point for the literature review. With this data, control room operators will be able to make better-informed decisions to improve the road network of the city of Porto in the short and long term.
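The background-extraction starting point mentioned above can be sketched with OpenCV's built-in MOG2 background subtractor; the video source and the blob-area threshold are illustrative assumptions.

```python
import cv2

# MOG2 models each pixel's background distribution and flags deviating pixels
# as foreground, i.e. moving objects such as vehicles and pedestrians.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

cap = cv2.VideoCapture("camera_stream.mp4")   # placeholder for a surveillance stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Drop shadow pixels (marked as 127 by MOG2) and binarise the mask.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving = [c for c in contours if cv2.contourArea(c) > 500]   # ignore small noise blobs
    print(f"moving objects in frame: {len(moving)}")
cap.release()
```

Rain, shadow and camera-motion compensation, as discussed above, would have to be added before counts like these become reliable in an outdoor deployment.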
380

Continuous Maintenance System for optimal scheduling based on real-time machine monitoring

Francisco José Oliveira Costa 07 March 2018 (has links)
Nowadays, maintenance activities are among those that most draw companies' attention, because sudden machine stops increase costs and, consequently, halt production processes. These stops are mostly caused by wear of components that leads to machine breakdown, so manufacturing processes need to be monitored closely. Based on this, and in order to increase production-line efficiency, there is a need to continuously monitor the machines' performance and, together with all the historical maintenance data, create strategies that minimise maintenance phases and costs. These strategies may lie in predicting suitable time periods for maintenance operations and, based on that, grouping sets of machines so that maintenance activities are performed between day-off and day-on shifts. The main difficulty is the increased complexity of scheduling and planning the activities of a production line, since the impact of maintenance activities based on failure prediction must be minimised within the already existing plan.
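A toy sketch of grouping predicted maintenance into off-shift windows, assuming each machine already has an estimated remaining useful life (RUL) in hours from the monitoring layer; the figures and the grouping rule are illustrative and are not the method proposed in the thesis.

```python
# Hypothetical monitoring output: estimated remaining useful life per machine (hours).
rul_hours = {"press_01": 30, "press_02": 8, "cnc_03": 44, "robot_04": 10}

OFF_SHIFT_EVERY = 24          # assume one off-shift maintenance window per day
HORIZON_WINDOWS = 3           # plan only the next three windows

def plan_maintenance(rul):
    """Assign each machine to the latest off-shift window before its predicted failure."""
    plan = {w: [] for w in range(HORIZON_WINDOWS)}
    for machine, hours in sorted(rul.items(), key=lambda kv: kv[1]):
        window = min(int(hours // OFF_SHIFT_EVERY), HORIZON_WINDOWS - 1)
        plan[window].append(machine)
    return plan

for window, machines in plan_maintenance(rul_hours).items():
    print(f"off-shift window {window}: {machines or 'no maintenance scheduled'}")
```

Grouping machines that fall into the same window is what reduces the number of production stops; a real scheduler would also weigh the cost of maintaining a machine earlier than strictly necessary.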
