About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
21

A framework for responsive content adaptation in electronic display networks

West, Philip January 2006 (has links)
Recent trends show an increase in the availability and functionality of handheld devices, wireless network technology, and electronic display networks. We propose the novel integration of these technologies to provide wireless access to content delivered to large-screen display systems. Content adaptation is used as a method of reformatting web pages to display more appropriately on handheld devices and to remove unwanted content. A framework is presented that facilitates content adaptation, implemented as an adaptation layer, which is extended to provide personalization of adaptation settings and response to network conditions. The framework is implemented as a proxy server for a wireless network and handles HTML and XML documents. Once a document has been requested by a user, the HTML/XML is retrieved and parsed, creating a Document Object Model tree representation. It is then altered according to the user's personal settings or predefined settings, based on current network usage and the network resources available. Three adaptation techniques were implemented: spatial representation, which generates an image map of the document; text summarization, which creates a tree view representation of a document; and tag extraction, which replaces specific tags with links. Three proof-of-concept systems were developed in order to test the robustness of the framework: a system for use with digital slide shows, a digital signage system, and a generalized system for use with the Internet. Testing was performed by accessing sample web pages through the content adaptation proxy server. Tag extraction works correctly for all HTML and XML document structures, whereas spatial representation and text summarization are limited to a controlled subset. Results indicate that the adaptive system can reduce average bandwidth usage by decreasing the amount of data on the network, thereby allowing a greater number of users to access content. This suggests that responsive content adaptation has a positive influence on network performance metrics.
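As a rough illustration of the tag-extraction technique described in this abstract, the sketch below rewrites selected tags into plain links using Python and BeautifulSoup. The library, the tag set, and the link text are assumptions made for the example; the thesis's own proxy implementation is not reproduced here.

    # Minimal sketch of the "tag extraction" adaptation step: heavyweight
    # elements (here, <img> and <embed> tags) are replaced with plain links
    # so the handheld client can fetch them on demand instead of receiving
    # them automatically. Assumes BeautifulSoup is installed; illustrative only.
    from bs4 import BeautifulSoup


    def extract_tags(html: str, tags=("img", "embed")) -> str:
        soup = BeautifulSoup(html, "html.parser")
        for name in tags:
            for element in soup.find_all(name):
                src = element.get("src", "")
                link = soup.new_tag("a", href=src)
                link.string = f"[{name}: {src or 'no source'}]"
                element.replace_with(link)
        return str(soup)


    if __name__ == "__main__":
        page = '<p>Chart: <img src="chart.png" alt="Usage chart"></p>'
        print(extract_tags(page))
        # -> <p>Chart: <a href="chart.png">[img: chart.png]</a></p>
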
22

An introduction to computer programming for complete beginners using HTML, JavaScript, and C#

Parker, Rembert N. January 2008 (has links)
Low student success rates in introductory computer programming classes result in low student retention rates in computer science programs. Some sections of the course followed a traditional approach, using C# in the .NET development environment from the outset. An experimental redesign was prepared for one section that began with a study of HTML and JavaScript and focused on having students build web pages for several weeks; after that, the experimental section used C# and the .NET development environment, covering all the material covered in the traditional sections. Students were more successful in the experimental section, with a higher percentage of students passing the course and a higher percentage continuing on to take at least one additional computer science course. / Department of Computer Science
23

World Wide Graphics

Timmons, Alysha Marie 01 January 2001 (has links)
This project describes World Wide Graphics (WWG), a software package that provides instructors with the tools needed to present a web-based presentation to a group of students while being able to enhance the prepared HTML slides with user-drawn graphics and highlighting.
24

Web Texturizer: Exploring intra web document dependencies

Tandon, Seema Amit 01 January 2004 (has links)
The goal of this project is to create a customized web browser that facilitates the skimming of documents by offsetting each document with relevant information. The project adds information-retrieval and learning techniques to the Web Texturizer to automate the web browsing experience. The script runs on the Web Texturizer website and allows users to quickly navigate through the web page.
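The abstract does not spell out which information-retrieval techniques the Web Texturizer uses, so the sketch below is only a simplified stand-in: it scores terms by frequency after stopword removal and returns the top few, the kind of "relevant information" a browser could offset alongside a page to support skimming. The scoring method and stopword list are assumptions for illustration.

    # Simplified stand-in for an information-retrieval step that surfaces
    # salient terms from a page so they can be shown alongside the document
    # to support skimming. Plain term frequency with a small stopword list;
    # purely illustrative, not the project's actual technique.
    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
                 "for", "on", "with", "this", "that", "by", "be", "as", "are"}


    def salient_terms(text: str, k: int = 5) -> list[str]:
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
        return [term for term, _ in counts.most_common(k)]


    if __name__ == "__main__":
        sample = ("The proxy parses each requested page, scores its terms, "
                  "and offsets the page with the highest-scoring terms so "
                  "readers can skim the page quickly.")
        print(salient_terms(sample))  # e.g. ['page', 'terms', 'proxy', ...]
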
25

Classificação de sites a partir das análises estrutural e textual

Ribas, Oeslei Taborda 28 August 2013 (has links)
With the wide use of the web nowadays and its constant growth, the task of automatically classifying websites has gained increasing importance, since on many occasions it is necessary to block access to specific sites, for example adult-content sites in elementary and secondary schools. Different studies in the literature have proposed new methods for website classification with the goal of increasing the rate of correctly categorized pages. This work aims to contribute to current classification methods by comparing four aspects involved in the classification process: classification algorithms, dimensionality (the number of attributes considered), attribute evaluation metrics, and the selection of textual and structural attributes present in web pages. The vector space model is used to represent the texts, together with a classical machine learning approach to the classification task. Several metrics are used to select the most relevant terms, and classification algorithms from different paradigms are compared: probabilistic (Naïve Bayes), decision trees (C4.5), instance-based learning (KNN, k-nearest neighbors), and support vector machines (SVM). The experiments were performed on a dataset containing sites in two languages, Portuguese and English. The results show that it is possible to obtain a classifier with good accuracy using only the anchor text of hyperlinks; in the experiments, the classifier based on this information achieved an F-measure of 99.59%.
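For readers who want a concrete sense of the pipeline this abstract compares, the sketch below builds a vector-space (TF-IDF) representation of hyperlink anchor text and trains one of the compared classifiers, Naïve Bayes, with scikit-learn. The anchor-text examples and labels are invented placeholders, not the thesis's dataset, and the 99.59% result above is not reproduced by this toy.

    # Toy sketch of the setup the abstract describes: represent sites in the
    # vector space model and train one of the compared classifiers (Naive
    # Bayes) on hyperlink anchor text only. Data and labels are placeholders;
    # assumes scikit-learn is installed.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Anchor text collected from each site's hyperlinks (placeholder examples).
    anchor_texts = [
        "course syllabus lecture notes exam schedule library",
        "buy now discount checkout free shipping deals",
        "research papers faculty publications seminar",
        "sale coupon order cart payment offers",
    ]
    labels = ["education", "commerce", "education", "commerce"]

    classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
    classifier.fit(anchor_texts, labels)

    print(classifier.predict(["department lecture exam library notes"]))
    # expected to print ['education'] on this toy data
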
27

Intranet concept for small business

Lenaburg, Allen Gregg 01 January 2004 (has links)
The purpose of this project is to build a working intranet containing core applications that create the framework for a small business intranet. Small businesses may benefit from an intranet because of its ability to effectively streamline the processes for retrieving and distributing information. Intranets are internal networks using TCP/IP protocols, Web server software, and browser client software to share information created in HTML within an organization, and to access company databases.
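A minimal sketch of the intranet pattern described here, assuming only Python's standard library: an HTTP server on the internal network renders an HTML page from a small company database, which employees reach with an ordinary browser. The table schema, demo data, and port are assumptions for the example, not the project's actual applications.

    # Minimal intranet sketch: a web server on the internal TCP/IP network
    # renders HTML from a company database for browser clients. The table,
    # demo rows, and port are assumptions for illustration only.
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DB = "company.db"


    def setup_demo_db():
        # Create a tiny placeholder database so the sketch is self-contained.
        with sqlite3.connect(DB) as conn:
            conn.execute("CREATE TABLE IF NOT EXISTS contacts "
                         "(name TEXT, extension TEXT)")
            conn.execute("DELETE FROM contacts")
            conn.executemany("INSERT INTO contacts VALUES (?, ?)",
                             [("Front Desk", "100"), ("Warehouse", "214")])


    class IntranetHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            with sqlite3.connect(DB) as conn:
                rows = conn.execute("SELECT name, extension FROM contacts").fetchall()
            items = "".join(f"<li>{name}: ext. {ext}</li>" for name, ext in rows)
            body = f"<html><body><h1>Phone list</h1><ul>{items}</ul></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body.encode("utf-8"))


    if __name__ == "__main__":
        setup_demo_db()
        HTTPServer(("0.0.0.0", 8080), IntranetHandler).serve_forever()
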
28

Web-based geotemporal visualization of healthcare data

Bloomquist, Samuel W. 09 October 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Healthcare data visualization presents challenges due to its non-standard organizational structure and disparate record formats. Epidemiologists and clinicians currently lack the tools to discern patterns in large-scale data that would reveal valuable healthcare information at the granular level of individual patients and populations. Integrating geospatial and temporal healthcare data within a common visual context provides a twofold benefit: it allows clinicians to synthesize large-scale healthcare data to provide a context for local patient care decisions, and it better informs epidemiologists in making public health recommendations. Advanced implementations of the Scalable Vector Graphics (SVG), HyperText Markup Language version 5 (HTML5), and Cascading Style Sheets version 3 (CSS3) specifications in the latest versions of most major Web browsers brought hardware-accelerated graphics to the Web and opened the door for more intricate and interactive visualization techniques than have previously been possible. We developed a series of new geotemporal visualization techniques under a general healthcare data visualization framework in order to provide a real-time dashboard for analysis and exploration of complex healthcare data. This visualization framework, HealthTerrain, is a concept space constructed using text and data mining techniques, extracted concepts, and attributes associated with geographical locations. HealthTerrain's association graph serves two purposes. First, it is a powerful interactive visualization of the relationships among concept terms, allowing users to explore the concept space, discover correlations, and generate novel hypotheses. Second, it functions as a user interface, allowing selection of concept terms for further visual analysis. In addition to the association graph, concept terms can be compared across time and location using several new visualization techniques. A spatial-temporal choropleth map projection embeds rich textures to generate an integrated, two-dimensional visualization. Its key feature is a new offset contour method to visualize multidimensional and time-series data associated with different geographical regions. Additionally, a ring graph reveals patterns at the fine granularity of patient occurrences using a new radial coordinate-based time-series visualization technique.
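As an illustration of the radial time-series encoding behind the ring graph described above, the sketch below plots monthly counts on polar axes with matplotlib. The counts are invented placeholders, and this is not HealthTerrain's SVG/HTML5/CSS3 implementation, only a compact way to show how mapping time to angle and magnitude to radius makes periodic patterns visible.

    # Radial time-series illustration: months map to angles around a circle
    # and counts map to radius, so seasonal patterns stand out. Placeholder
    # data; not HealthTerrain's browser-based implementation.
    import numpy as np
    import matplotlib.pyplot as plt

    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
    counts = np.array([42, 38, 30, 22, 15, 9, 7, 8, 14, 21, 33, 40])  # placeholder

    # One angle per month; repeat the first point to close the ring.
    theta = np.linspace(0, 2 * np.pi, len(months), endpoint=False)
    theta_closed = np.append(theta, theta[0])
    counts_closed = np.append(counts, counts[0])

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    ax.plot(theta_closed, counts_closed, marker="o")
    ax.fill(theta_closed, counts_closed, alpha=0.25)
    ax.set_xticks(theta)
    ax.set_xticklabels(months)
    ax.set_title("Monthly case counts as a radial time series (placeholder data)")
    plt.show()
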
