371

A Web Service for Protein Refinement and Refinement of Membrane Proteins

Pothakanoori, Kapil 17 December 2010
The structures obtained from homology modeling methods are of intermediate resolution, 1–3 Å from the true structure. Energy minimization methods allow us to refine these models and obtain native-like structures. Previous work showed that some of these methods performed well on soluble proteins, so we extended this work to membrane proteins. Prediction of membrane protein structures is particularly important because they are major biological drug targets, and because the number of experimentally determined membrane protein structures is vanishingly small as a result of the inherent difficulties in working with these molecules experimentally. Hence there is a pressing need for alternative computational protein structure prediction methods. This work tests the ability of common molecular mechanics potential functions (AMBER99/03) and a hybrid knowledge-based potential function (KB_0.1) to refine near-native structures of membrane proteins in vacuo. A web-based utility for protein refinement has been developed and deployed, based on the KB_0.1 potential.
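Although the thesis's AMBER99/03 and KB_0.1 potentials are far richer, the refinement loop itself is iterative energy minimization. The sketch below illustrates that loop with a toy Lennard-Jones potential and steepest descent; the potential, step size, and random starting coordinates are illustrative assumptions, not the thesis's method.

```python
# A minimal sketch of energy-minimization refinement with a toy
# Lennard-Jones potential (an assumption; not AMBER99/03 or KB_0.1).
import numpy as np

def lj_energy_and_forces(coords, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy and per-atom forces for an N x 3 array."""
    n = len(coords)
    energy = 0.0
    forces = np.zeros_like(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = coords[i] - coords[j]
            r = np.linalg.norm(r_vec)
            sr6 = (sigma / r) ** 6
            energy += 4 * epsilon * (sr6 ** 2 - sr6)
            # dE/dr, projected onto the interatomic direction
            dE_dr = 4 * epsilon * (-12 * sr6 ** 2 + 6 * sr6) / r
            f = -dE_dr * r_vec / r
            forces[i] += f
            forces[j] -= f
    return energy, forces

def minimize(coords, step=1e-3, n_steps=500):
    """Steepest descent: repeatedly move atoms a small step along the forces."""
    for _ in range(n_steps):
        _, forces = lj_energy_and_forces(coords)
        coords = coords + step * forces
    return coords

rng = np.random.default_rng(0)
decoy = rng.normal(scale=2.0, size=(8, 3))  # stands in for a near-native model
refined = minimize(decoy)
```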
372

Development of an Interactive, Hands-on Learning Experience of the Google Maps API

Kale, Rushikesh Digambar 14 May 2010
The project is to design and implement a Web application that realizes an innovative, hands-on, interactive learning experience for the Google Maps API. This learning environment was developed on the basis of a real-world Geographic Information System (GIS), the Gulf of Mexico Coastal Geospatial Information Support System. Significant effort was invested not only in developing this GIS but also in the design and implementation that turns the production system into a learning environment. The Web development aspect attracts computer science students, the opportunity to learn GIS concepts interactively attracts students from the geography department, and the opportunity to learn the Google Maps API proves interesting to regular Internet users. The Web learning system was given to a focus group whose feedback was collected through a survey. The survey results reveal a favorable response to the interactive, hands-on learning model and the Web implementation.
373

Business Intelligence in MS Dynamics AX 2009

Hubáček, Filip January 2010
The subject of this diploma thesis, "Business Intelligence in Microsoft Dynamics AX", is to analyze the functionality of the ERP system Microsoft Dynamics AX 2009 in the areas of Business Intelligence and reporting, with reflection on the company's current market position. The goals are to establish a basic definition of the relationship between ERP and Business Intelligence systems, to define the BI capabilities of MS Dynamics AX in terms of their practical use, and to describe the fundamental technological aspects. The thesis also evaluates and defines the particular steps of an implementation based on the MS Sure Step 2010 methodology, together with a description of the deployment process. The solution from the company Circon Circle Consulting is presented as a response to the insufficient coverage of some of these areas. A BI proposal for the AX cost accounting module is also realized, including the design of a data mart, ETL, and reports, with attention to specifics such as parent-child hierarchies and many-to-many relationships between fact tables and dimensions. The contribution of this work is most evident for consultants of the system, who are given insight into this important and, for users, attractive functionality, together with a possible implementation process. Technically oriented readers may appreciate the Cost Accounting solution and the potential approaches to the data mart concept or other areas where the above-mentioned aspects arise.
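As a rough illustration of the many-to-many modeling issue the thesis addresses, the sketch below resolves a fact-to-dimension many-to-many relationship through a weighted bridge table, a common data-mart pattern; all table and column names are hypothetical, not drawn from the AX cost accounting design.

```python
# A minimal sketch of a bridge table between a fact table and a
# dimension; the schema and allocation weights are hypothetical.
import pandas as pd

fact = pd.DataFrame({"cost_id": [1, 2], "amount": [100.0, 50.0]})
bridge = pd.DataFrame({
    "cost_id": [1, 1, 2],
    "dept_id": [10, 20, 10],
    "weight":  [0.6, 0.4, 1.0],  # allocation factors sum to 1 per fact row
})
dim_dept = pd.DataFrame({"dept_id": [10, 20], "dept": ["Sales", "R&D"]})

# Allocate each cost across departments, then aggregate per department.
allocated = (fact.merge(bridge, on="cost_id")
                 .merge(dim_dept, on="dept_id")
                 .assign(allocated=lambda d: d["amount"] * d["weight"]))
print(allocated.groupby("dept")["allocated"].sum())  # Sales 110.0, R&D 40.0
```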
374

IPTV Use in an Academic Network

Jireš, Kamil January 2009
The purpose of this diploma thesis is to introduce readers to the deployment of an IPTV solution using free software. The thesis familiarizes the reader with the theory of IPTV, including the necessary parts of the system, IPTV-related protocols, and errors that can occur when running an IPTV service. The work is divided into four parts. The first part covers the theory: it defines the basic concepts, describes the essential elements of the system and the protocols used in conjunction with IPTV, and discusses errors that may occur during operation of IPTV services. The second part outlines the architecture of the solution that was implemented in experimental operation. The third part focuses on the choice of components (hardware and software) used in the test system, including the variants from which the components were selected. The fourth part contains the actual implementation and describes each component. At the end of the fourth part, configuration files are included that can be reused if the solution is implemented again.
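For flavor, the sketch below shows the receiving side of the multicast delivery that IPTV systems typically rely on: joining a group triggers an IGMP membership report, after which the host receives the UDP/RTP stream. The group address and port are assumptions, not the thesis's configuration.

```python
# A minimal sketch of joining a multicast group to receive an IPTV
# stream; the group address and port below are hypothetical.
import socket
import struct

GROUP = "239.1.1.1"   # assumed multicast group
PORT = 5004           # assumed RTP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel to send an IGMP join for the group on any interface.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Blocks until one UDP/RTP datagram of the stream arrives.
packet, addr = sock.recvfrom(2048)
print(f"received {len(packet)} bytes from {addr}")
```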
375

The Development of Spotalike

Bygdeson, Mattias January 2019
The goal of this assignment has been to study the product Spotalike and develop a new version that makes the product more attractive. The study of the product was done with the help of user data, such as how Spotalike is being used, what target audience it has, and why it is being used. The new version of Spotalike was planned by making design sketches and prototypes, created as a first step in order to get a better picture of what the result would be. The new version is not available to the public, but it is fully functional and works locally. The solution arrived at was to develop a music player built on the founding principles of the old Spotalike. The music player is developed with React and is powered by Spotify. Besides the old functions, new functions have been implemented, and the interface has been redesigned. There is currently no new user data available to determine the result of the development, since the new version of Spotalike has not been made public yet.
376

Machine learning to detect anomalies in datacenter

Lindh, Filip January 2019
This thesis investigates the possibility of using anomaly detection on performance data of virtual servers in a datacenter to detect malfunctioning servers. Using anomaly detection can potentially reduce the time a server is malfunctioning, as the server can be detected and checked before the error has a significant impact. Several approaches and methods were applied and evaluated on one virtual server: the K-nearest neighbor algorithm, the support-vector machine, the K-means clustering algorithm, self-organizing maps, a CPU-memory usage ratio using a Gaussian model, and time series analysis using a neural network and linear regression. The evaluation and comparison of the methods were mainly based on errors reported during the time period in which they were tested: the better the detected anomalies matched the reported errors, the higher the score a method received. It turned out that anomalies in performance data could be linked to real errors in the server to some extent, which makes anomaly detection on performance data a viable way to detect malfunctioning servers. The simplest method, looking at the ratio between memory usage and CPU usage, was the most successful one, detecting the most errors; however, the anomalies were often detected only just after the error had been reported. The support-vector machine was more successful at detecting anomalies before they were reported. The proportion of anomalies played a big role, however, and the K-nearest neighbor algorithm received a higher score when given a higher proportion of anomalies.
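A minimal sketch of the ratio-based method the thesis found most successful: fit a Gaussian to the historical memory/CPU usage ratio and flag samples that deviate too far from it. The z-score threshold and the synthetic data are illustrative assumptions.

```python
# A minimal sketch of Gaussian anomaly detection on the memory/CPU
# usage ratio; thresholds and data are assumptions, not the thesis's.
import numpy as np

def fit_gaussian(ratios):
    """Fit mean and standard deviation of the historical usage ratio."""
    return np.mean(ratios), np.std(ratios)

def is_anomaly(ratio, mean, std, z_threshold=3.0):
    """Flag samples more than z_threshold standard deviations from the mean."""
    return abs(ratio - mean) > z_threshold * std

# Memory/CPU ratios from a healthy period (synthetic stand-in data).
rng = np.random.default_rng(1)
history = rng.normal(loc=1.5, scale=0.2, size=1000)
mean, std = fit_gaussian(history)

print(is_anomaly(1.6, mean, std))  # False: within normal range
print(is_anomaly(3.5, mean, std))  # True: likely a malfunctioning server
```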
377

Metadata Utilization in the Management of Access to Video Servers

Goularte, Rudinei 26 February 1998
The experience of authoring multimedia material for educational purposes reveals a major problem: how can multimedia objects be handled in an easy and efficient way, such that non-expert users (namely school teachers) are able to design and build their own presentations? The creation of these presentations involves factors like storage, delivery, search, and presentation of multimedia material (video in particular). A basic infrastructure that stores and efficiently delivers the video data is needed; however, another important point is organizing the data stored on the server so as to facilitate users' access to it. In the system that is the subject of this work, this is achieved through an interactive information management and retrieval system designed to facilitate access to items (or parts of items) stored on the server. The main characteristic of the system is the use of a metadata base containing attributes of the videos stored on the server. Searches can be made by title, subject, length, author, content, or, most important in the case of didactic multimedia material, by a specific scene or frame. The system was built with the JAVA programming language in a client/server fashion. Communication between clients and servers is realized through Visibroker 3.0, a Distributed Objects programming tool following the CORBA standard. Access to the metadata base uses a PostgreSQL driver that follows the JDBC API. For evaluation purposes, a playback tool was built using the Java Media Framework (JMF). An analysis was carried out to verify the impact of the CORBA and JDBC technologies on the system; it was found that JDBC imposes a much more significant delay than CORBA. Another conclusion is that the use of metadata provides better interactivity in searches, makes the editing process faster, and saves storage space through the sharing of objects like videos, scenes, and frames.
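A minimal sketch of the access pattern the metadata base enables: scenes and frames are looked up through an indexed metadata table rather than by scanning video content. The schema is hypothetical, and SQLite stands in here for the PostgreSQL/JDBC/CORBA stack of the original system.

```python
# A minimal sketch of metadata-based video lookup; the schema and the
# sample rows are hypothetical, not taken from the thesis.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE video_metadata (
        video_id INTEGER, title TEXT, author TEXT,
        subject TEXT, scene INTEGER, start_frame INTEGER
    )
""")
conn.executemany(
    "INSERT INTO video_metadata VALUES (?, ?, ?, ?, ?, ?)",
    [(1, "Cell Division", "Prof. Silva", "biology", 1, 0),
     (1, "Cell Division", "Prof. Silva", "biology", 2, 1800)],
)

# Retrieve a specific scene instead of the whole video -- the access
# pattern that speeds up editing and lets scenes be shared.
row = conn.execute(
    "SELECT video_id, start_frame FROM video_metadata "
    "WHERE subject = ? AND scene = ?", ("biology", 2),
).fetchone()
print(row)  # (1, 1800)
```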
378

HTTP Session Recognition in a Web Server Model with Service Differentiation

Mourão, Hima Carla Belloni 15 December 2006
This MSc dissertation addresses the introduction of HTTP session recognition into a web server model with differentiated services (SWDS). Techniques were developed aiming to provide service differentiation together with guarantees that new sessions accepted into the system can be completed. These aims constitute essential requirements for the current Internet, especially for modern web applications. A new scheme for session admission control was developed and introduced into the SWDS model, considering two mechanisms for accepting new sessions with a guarantee of their finalization. The mechanism that estimates the system's capacity to accept a new session, based on a session model built dynamically from system workload information, is highlighted. The overall proposal of this work also considers a session-based request admission control, whose new attendance policies keep the system free from overloads and offer differentiated attendance to sessions. The negotiation policies developed for the request admission control played an important role in this work, contributing to the prioritization of session attendance. The results show that the proposed controls constitute fundamental structures for system performance stability, and that the mechanisms developed are of great importance in serving sessions, and therefore their clients, through a differentiation-based approach.
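A minimal sketch of the admission idea described above: a new session is admitted only if the projected load of all active sessions stays within the estimated capacity. The fixed capacity and the simple per-session load estimate stand in for the thesis's dynamically built session model.

```python
# A minimal sketch of session-based admission control; the capacity
# model and numbers are assumptions, not the thesis's mechanism.
class SessionAdmissionControl:
    def __init__(self, capacity_rps, est_rps_per_session):
        self.capacity_rps = capacity_rps            # server capacity, req/s
        self.est_rps_per_session = est_rps_per_session
        self.active_sessions = 0

    def update_session_model(self, observed_rps_per_session):
        """Re-estimate the per-session load from the observed workload."""
        self.est_rps_per_session = observed_rps_per_session

    def admit(self):
        """Accept a new session only if projected load stays within capacity."""
        projected = (self.active_sessions + 1) * self.est_rps_per_session
        if projected <= self.capacity_rps:
            self.active_sessions += 1
            return True
        return False  # reject up front rather than abort admitted sessions

ctrl = SessionAdmissionControl(capacity_rps=100, est_rps_per_session=2.0)
print(ctrl.admit())  # True while projected load remains within capacity
```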
379

Workload Characterization to Test Web Server Models

Silva, Luis Henrique Castilho da 11 August 2006
The World Wide Web is a medium in constant growth, adding components and services at an accelerated pace. New kinds of sites, such as e-commerce, news/information (Web publishing), and video on demand, demand ever more server resources. In this context, in order to adapt performance evaluation to the new Web environment, the present work characterizes several Apache web server traces, collecting important data that define how users and servers interact. With these data, four categories of sites were analyzed: Default (composed of the average of all analyzed traces), Academic, Web-publishing, and Traditional. The analysis evaluates four aspects: the arrival interval, the status code, the object type, and the object size; finally, mathematical models are proposed to represent these characteristics. Furthermore, a synthetic workload generator, W4Gen (World Wide Web Workload Generator), was developed. With a friendly graphical interface, it allows users to generate new workloads based on the mathematical models, and it also allows the four essential aspects presented above to be modified in order to simulate new types of workloads. To validate the results, the web server model with service differentiation (SWDS) was used, verifying performance in overload situations.
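A minimal sketch of a W4Gen-style generator: each request is drawn from distributions for inter-arrival time, object type, and object size. The exponential/lognormal choices and the type frequencies are illustrative assumptions, not the models fitted in the thesis.

```python
# A minimal sketch of synthetic web workload generation; all
# distributions and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def generate_requests(n, mean_interarrival=0.5):
    t = 0.0
    types = ["html", "image", "css", "other"]
    type_probs = [0.25, 0.55, 0.10, 0.10]           # assumed frequencies
    for _ in range(n):
        t += rng.exponential(mean_interarrival)      # Poisson-like arrivals
        obj_type = rng.choice(types, p=type_probs)   # object type mix
        size = int(rng.lognormal(mean=8.0, sigma=1.5))  # heavy-tailed sizes
        yield {"time": round(t, 3), "type": obj_type, "bytes": size}

for req in generate_requests(3):
    print(req)
```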
380

Evaluation of Congestion Control Algorithms Used as Admission Control in a Model of Web Servers with Service Differentiation

Figueiredo, Ricardo Nogueira de 11 March 2011
This MSc dissertation presents the construction of a distributed web server prototype based on the web server model with service differentiation (SWDS), together with the implementation and evaluation of selection algorithms that adopt the concept of congestion control for HTTP requests. Thus, besides implementing a test platform, this work also evaluates the behavior of two congestion control algorithms: Drop Tail and RED (Random Early Detection), both widely discussed in the scientific literature and widely applied in computer networks. The results obtained show that, despite the particularities of each algorithm, there is a strong relation between response time and the number of requests accepted by the server.
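A minimal sketch of RED used as admission control, the setting studied in the dissertation: while Drop Tail only rejects when the queue is full, RED rejects probabilistically as the averaged queue length grows between two thresholds. Parameter values are illustrative, and the count-based drop spacing of full RED is omitted.

```python
# A minimal sketch of RED applied to HTTP request admission; all
# parameter values are illustrative assumptions.
import random

class REDAdmission:
    def __init__(self, min_th=50, max_th=150, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight = weight  # EWMA weight for the average queue length
        self.avg = 0.0

    def accept(self, current_queue_len):
        # Exponentially weighted moving average of the queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * current_queue_len
        if self.avg < self.min_th:
            return True                  # no congestion: accept everything
        if self.avg >= self.max_th:
            return False                 # heavy congestion: reject everything
        # Drop probability grows linearly between the two thresholds.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() >= p

red = REDAdmission()
print(red.accept(current_queue_len=40))  # True: average queue below min_th
```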
