  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Konzeption und Implementierung eines Anzeigenannahme-, Service- und Verwaltungssystems für das Publizieren und Verwalten von gestalteten Inseraten für Anzeigen-Online-Dienste im World Wide Web / Design and implementation of an ad submission, service, and administration system for publishing and managing designed advertisements for online classified-ad services on the World Wide Web

Blümel, Christian 20 October 2017 (has links)
Starting from the task of developing an ad submission, service, and administration system (ASV system) for placing and managing designed advertisements on the WWW, one that demonstrates a possible way around the limitations and problems of current online classified-ad services, the main requirements for the ASV system were formulated. The system should enable customers of the online ad service to place designed advertisements, in the form of HTML pages with embedded graphics, images, Java applets, videos, and so on, using nothing but a standard WWW browser, and to modify them throughout the entire run of the advertisement. The staff (administrators) of the online ad service should likewise be able to process and activate incoming orders using only a standard WWW browser. The resulting design was then to be realized as an example ASV system. The development followed established software engineering practice: the user requirements document, the functional specification, the product model, and the user interface design were produced step by step. Particular attention was paid to the product model, which was developed with the method of Structured Analysis using the entity-relationship model as its basic concept; it consists of an ER model, a data flow diagram, a data dictionary, and a detailed mini-specification in the form of pseudocode. The database schema of the ASV system was derived in the design phase from the previously developed ER model. Developing the ASV system as a WWW application required tackling a number of problems for which traditional software engineering methods offered no adequate support, so the analysis of the requirements on the ASV system as a client/server application, the selection of suitable development tools, and the analysis of the relevant security aspects had to be carried out separately. The example ASV system realized on the basis of this design meets the stated main requirements and, given certain extensions such as user management, could serve as the foundation for a wide range of online services (classified-ad services, job and real-estate exchanges, etc.).
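To make the ER-model-to-database-schema step concrete, here is a minimal sketch (Python with SQLite) of how entities such as customer and advertisement, and the submit/approve workflow described above, might map to relational tables. All table, column, and value names are illustrative assumptions; the abstract does not spell out the actual schema.

```python
# Hypothetical sketch of a schema derived from an ER model like the one the
# thesis describes. Entity and attribute names are assumptions for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT NOT NULL
);
-- One customer places many ads (a 1:n relationship in the ER model).
CREATE TABLE ad (
    ad_id       INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    html_page   TEXT NOT NULL,             -- the designed ad as an HTML page
    run_from    TEXT NOT NULL,             -- start of the booked run
    run_until   TEXT NOT NULL,             -- end of the booked run
    approved    INTEGER NOT NULL DEFAULT 0 -- set by an administrator
);
""")
conn.execute("INSERT INTO customer VALUES (1, 'Example Co.', 'ads@example.com')")
conn.execute("INSERT INTO ad VALUES (1, 1, '<html>...</html>', '1997-01-01', '1997-02-01', 0)")
# An administrator approving (activating) a submitted ad:
conn.execute("UPDATE ad SET approved = 1 WHERE ad_id = 1")
print(conn.execute("SELECT name, approved FROM ad JOIN customer USING (customer_id)").fetchall())
```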
202

Linux - aktiv im Netz / Linux, active on the network

Schreiber, Alexander 14 June 2000 (has links)
This talk gives a brief overview of the ways Linux can be used on the network, both as a client and as a server.
203

WWW Privacy - P3P: Platform for Privacy Preferences

Foerster, Marian 10 July 2000 (has links)
Joint workshop of the University Computing Center and the Chair of Computer Networks and Distributed Systems (Faculty of Computer Science) at TU Chemnitz. Workshop topic: infrastructure of the "Digital University". The talk gives an insight into P3P, the W3C protocol still under development at the time. It presents the basic principle of P3P, several options for its technical realization, and a demo shopping system.
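A small sketch of the client side of the protocol: P3P user agents look for a site's policy reference file at the well-known location /w3c/p3p.xml defined by the specification. The site URL below is an assumption for illustration, and note that P3P has since been obsoleted, so most sites no longer publish such files.

```python
# Minimal sketch: fetch a site's P3P policy reference file from the
# well-known location (/w3c/p3p.xml) defined by the P3P specification.
from urllib.request import urlopen
from urllib.error import URLError
import xml.etree.ElementTree as ET

def fetch_p3p_policy_reference(site: str) -> str | None:
    """Return the raw policy reference XML, or None if the site has none."""
    try:
        with urlopen(f"{site}/w3c/p3p.xml", timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except URLError:
        return None

xml_text = fetch_p3p_policy_reference("https://example.com")  # hypothetical site
if xml_text:
    root = ET.fromstring(xml_text)
    print(root.tag)  # the policy reference's root element
else:
    print("No P3P policy reference published.")
```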
204

Evaluation, Analysis and adaptation of web prefetching techniques in current web

Doménech i de Soria, Josep 06 May 2008 (has links)
This dissertation is focused on the study of the prefetching technique applied to the World Wide Web. This technique consists of processing (e.g., downloading) a Web request before the user actually makes it; by doing so, the waiting time perceived by the user can be reduced, which is the main goal of Web prefetching techniques. The study of the state of the art in Web prefetching revealed how heterogeneous its performance evaluation is. This heterogeneity centers on four issues: i) there was no open framework to simulate and evaluate the previously proposed prefetching techniques; ii) there was no uniform selection, or even definition, of the performance indexes to be maximized; iii) there were no comparative studies of prediction algorithms that took the costs and benefits of web prefetching into account at the same time; and iv) techniques were evaluated under very different, or insufficiently significant, workloads. During the research work, we contributed to homogenizing the evaluation of prefetching performance by developing an open simulation framework that reproduces in detail all the aspects that impact prefetching performance. In addition, prefetching performance metrics were analyzed in order to clarify their definitions and to identify those most meaningful from the user's point of view. We also proposed an evaluation methodology that considers the cost and the benefit of prefetching at the same time. Finally, the importance of using current workloads to evaluate prefetching techniques was highlighted, since otherwise wrong conclusions could be reached. The potential benefits of each web prefetching architecture were analyzed, finding that collaborative predictors could remove almost all the latency perceived by users. The first step toward a collaborative predictor is to make predictions at the server, so this thesis focuses on an architecture with a server-located predictor. The environment conditions that can be found in the web are als… / Doménech i de Soria, J. (2007). Evaluation, Analysis and adaptation of web prefetching techniques in current web [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1841
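To illustrate the kind of prediction algorithm whose costs and benefits such an evaluation framework must weigh, here is a minimal sketch of a first-order Markov predictor with a confidence threshold, one common family in the web-prefetching literature. It is an illustrative assumption, not the specific predictor or framework developed in the dissertation.

```python
# Sketch of a simple first-order Markov predictor for web prefetching.
# Pages whose estimated transition probability exceeds a confidence
# threshold become prefetch hints; the threshold trades benefit
# (latency reduction) against cost (wasted bandwidth).
from collections import defaultdict

class MarkovPredictor:
    def __init__(self, confidence_threshold: float = 0.3):
        self.counts = defaultdict(lambda: defaultdict(int))  # page -> next page -> count
        self.threshold = confidence_threshold

    def train(self, session: list[str]) -> None:
        """Update transition counts from one user session (ordered page views)."""
        for current_page, next_page in zip(session, session[1:]):
            self.counts[current_page][next_page] += 1

    def predict(self, current_page: str) -> list[str]:
        """Return pages whose estimated transition probability exceeds the threshold."""
        followers = self.counts[current_page]
        total = sum(followers.values())
        if total == 0:
            return []
        return [page for page, n in followers.items() if n / total >= self.threshold]

predictor = MarkovPredictor()
predictor.train(["/index", "/news", "/article1"])
predictor.train(["/index", "/news", "/article2"])
print(predictor.predict("/index"))  # ['/news']: prefetch candidate for this user
```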
205

Publication of the Bibliographies on the World Wide Web

Moral, Ibrahim Utku 28 January 1997 (has links)
Every scientific research project begins with a literature review that includes an extensive bibliographic search. Such searches are known to be difficult and time-consuming because of the vast amount of topical material in today's ever-changing technology base: keeping up to date with the related literature and staying aware of the most recent publications require extensive time and effort. The need for a WWW-based software tool for collecting and providing access to this scientific body of knowledge is undeniable. The study explained herein addresses this problem by developing an efficient, advanced, easy-to-use tool, WebBiblio, that provides a globally accessible WWW environment enabling the collection and dissemination of searchable bibliographies comprising abstracts and keywords. This thesis describes the design, structure, and features of WebBiblio, and explains the ideas and approaches used in its development. The developed system is not a prototype but a production system that exploits the capabilities of the WWW. Currently, it is used to publish three VV&T bibliographies at the WWW site http://manta.cs.vt.edu/biblio. With its rich set of features and ergonomically engineered interface, WebBiblio offers a comprehensive solution to the problem of globally collecting and providing access to a diverse set of bibliographies. / Master of Science
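The core service such a tool provides, keyword search over bibliography entries made up of abstracts and keywords, can be sketched in a few lines. The entry fields and sample data below are assumptions for illustration, not WebBiblio's actual data model.

```python
# Minimal sketch of keyword search over searchable bibliography entries.
from dataclasses import dataclass

@dataclass
class BibEntry:
    title: str
    abstract: str
    keywords: list[str]

def search(entries: list[BibEntry], query: str) -> list[BibEntry]:
    """Return entries whose title, abstract, or keywords contain the query term."""
    q = query.lower()
    return [
        e for e in entries
        if q in e.title.lower()
        or q in e.abstract.lower()
        or any(q in k.lower() for k in e.keywords)
    ]

bib = [
    BibEntry("Simulation model validation", "Techniques for VV&T ...", ["validation", "VV&T"]),
    BibEntry("Web interfaces", "Design of WWW front ends ...", ["WWW"]),
]
print([e.title for e in search(bib, "vv&t")])  # ['Simulation model validation']
```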
206

Modelos Proativos para Hipermídia Adaptativa / Proactive models for adaptive hypermedia

Palazzo, Luiz Antonio Moro January 2000 (has links)
Hypermedia Systems (HS) are becoming more and more popular in several application areas, such as education, marketing, e-commerce, personal information, and intelligent interface services. Currently, one of the main branches of HS research is Adaptive Hypermedia Systems (AHS) [BRU 96] [ESP 97], together with the related areas of User Modeling (UM) and Intelligent Interfaces (II). One of the most critical features of an AHS is the user model, a representation of the goals, knowledge, preferences, needs, and desires of its users. The underlying idea is that users with different profiles or models will be interested in different pieces of the information presented on a hypermedia page and may also want to navigate through different links. The adaptive action of an AHS is oriented toward offering its users hypermedia information and navigation tailored to their models. Adaptation is generally viewed in a retroactive way, where presentation and navigational structures are produced as simple reactions to the past evolution of the user model and to opportunities offered by the environment. Proactive adaptation [PAL 98] adopts the idea of actively selecting, or even generating, hyperdocuments that will probably interest a particular user. Using proactive models for personalized information gathering allows the anticipation of the user's needs and requests. This is achieved by applying some kind of inference over the available hypermedia objects, constrained by the knowledge present in the user model. This work proposes a methodology for AHS construction based on the integration of two different proactive models. The first is connectionist in character and oriented toward adaptive navigation: it emphasizes a behavioral representation of the links in the network, based on the frequency with which they are traversed. The modeling process is governed by laws of transitivity and reflexivity, which allow the hyperspace to be represented proactively simply by quantifying its links, abstracting from the contents of its nodes. The second model addresses the semantic aspects of information processing through the theory of situations, which offers a formal framework for representing, composing, and inferring the relevance relation between nodes of an adaptive hypermedia system. Its starting points are the concepts of infon, document, and descriptor, as well as the semantics of the "aboutness" relation that may hold between documents. The two models are integrated by superposing their representations on a shared domain. A generic agent-oriented architecture for the development of proactive AHS is presented, centered on the interfacing, modeling, and adaptation processes. The work concludes with the design and development of an online educational system with proactive adaptation for the World Wide Web. Future work is proposed in the areas of education, personal information systems, and collaborative teamwork.
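As a concrete reading of the first, navigation-oriented model, here is a minimal sketch in which link strengths are derived from traversal frequencies and a transitivity rule infers indirect strengths. The update and inference rules are illustrative assumptions, not the thesis's actual laws.

```python
# Illustrative sketch of a link-frequency model with a transitivity rule,
# in the spirit of the first proactive model described above.
from collections import defaultdict

traversals = defaultdict(int)   # (from_node, to_node) -> times followed

def follow(a: str, b: str) -> None:
    traversals[(a, b)] += 1

def strength(a: str, b: str) -> float:
    """Direct link strength from a to b, normalized over a's outgoing traffic."""
    out = sum(n for (x, _), n in traversals.items() if x == a)
    return traversals[(a, b)] / out if out else 0.0

def inferred_strength(a: str, c: str, via: str) -> float:
    """Transitivity: an indirect a->c link is as strong as its weakest hop (assumed rule)."""
    return min(strength(a, via), strength(via, c))

for _ in range(3):
    follow("home", "courses")
follow("home", "contact")
follow("courses", "lesson1")

print(strength("home", "courses"))                      # 0.75
print(inferred_strength("home", "lesson1", "courses"))  # min(0.75, 1.0) = 0.75
```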
207

SDIP: um ambiente inteligente para a localização de informações na internet / SDIP: an intelligent system to discover information on the internet

Fernandez, Luis Fernando Nunes January 1995 (has links)
The purpose of the work described in detail in this thesis is to implement an intelligent system capable of helping its users locate and retrieve information on the Internet. To reach this goal, we built a system that offers its users two distinct but integrated types of interface: natural language and graphical (based on menus, windows, etc.). In addition, searches are carried out in an intelligent way, based on the knowledge managed by the system, which is built and structured dynamically by the user. In general lines, the work is logically structured in four parts: 1. An introductory survey of the most widespread systems for searching and retrieving information on the Internet at the time. With the growth of the network, the quantity and variety of the information it holds and makes available to its users increased enormously, as did the systems providing access to this information, distributed across hundreds of servers around the world. To situate and inform the reader, the systems Archie, Gopher, WAIS, and WWW are discussed in detail. 2. An introductory study of Discourse Representation Theory (DRT). In general lines, DRT is a formalism for representing discourse that uses models for the semantic evaluation of the structures it generates. As an introductory study, only the aspects of discourse representation proposed by the theory are discussed, with emphasis on the representation of simple sentences, notably those of interest to the system. 3. A detailed study of the implementation, describing each of the processes that make up the system: the Archie process (the facilities that let the system interact with Archie servers); the FTP process (lets SDIP retrieve remote files using the Internet's standard FTP protocol); the SABI front end and interface (bibliographic queries to the SABI system installed at the Universidade Federal do Rio Grande do Sul); the electronic mail server (an alternative interface to the system through e-mail messages); the graphical interface (a graphical environment for interacting with the system); and the intelligent process (the module implementing the intelligent part of the system, for instance the facilities for interpreting sentences in Portuguese). 4. Finally, the epilogue shows examples illustrating the facilities offered by SDIP's graphical environment. Briefly, users' commands and queries can be formulated in two distinct ways. In the first, the system serves only as an intermediary for access to the Archie and SABI servers, offering users a graphical environment for interacting with these two systems. In the second, users formulate their queries or commands as natural-language sentences; for a query, the system then uses its knowledge base to refine the user's question and thereby locate the information that best meets the user's needs.
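Of the modules listed above, the FTP process is the most self-contained; a minimal sketch of its job, retrieving one remote file over FTP, follows. Host and path names are illustrative assumptions.

```python
# Minimal sketch of what the FTP process of a system like SDIP does:
# retrieve a remote file over the Internet's standard FTP protocol.
from ftplib import FTP

def retrieve(host: str, remote_path: str, local_name: str) -> None:
    """Download one file from an anonymous FTP server."""
    with FTP(host) as ftp:
        ftp.login()  # anonymous login
        with open(local_name, "wb") as f:
            ftp.retrbinary(f"RETR {remote_path}", f.write)

# Example usage (assumes the server and file exist):
# retrieve("ftp.example.org", "/pub/docs/readme.txt", "readme.txt")
```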
208

企業網站使用者介面與互動模式之研究 / A study of user interfaces and interaction models of corporate websites

徐安良, Hsu, An-Liang Unknown Date (has links)
This study examines the design and use of user interfaces and interaction models on corporate websites. It is an empirical study: a web-page survey form and a questionnaire framework were developed from the literature, and the sample was drawn from the corporate-website portion of a digital weekly's survey of Taiwan's top 500 websites. For the website survey, the top ten corporate sites in each of five industries (finance, telecommunications, electronics, software, and services) were examined, 50 sites in all. The survey found that (1) page layouts tend toward flat, static design; (2) color is used richly; (3) the sites provide too little lively, dynamic content; and (4) interaction between pages and users still needs strengthening. For the questionnaire survey, 152 questionnaires were sent by e-mail and 31 valid responses were returned, a reported valid return rate of 24.03%. Analysis of the data showed that (1) users of different types of corporate websites agree on the importance of page design elements, regardless of website type; (2) users' satisfaction with their own company's website design ranked, from highest to lowest: finance, electronics, software, services, telecommunications; (3) users of different types of corporate websites likewise agree on the importance of page interaction models; (4) women rated the importance of certain page elements and interaction features higher than men did; (5) users with different educational backgrounds differed significantly on certain page-element and interaction items; and (6) users of different age groups showed no significant differences in the perceived importance of page elements and interaction models. With the rapid growth of the Internet, a corporate website has clearly become an important tool for tracking the market and strengthening competitive advantage. A website is a low-cost, high-return investment: besides making the page interface more attractive and approachable and improving interaction with users, a corporate website should provide richer and more practical content to reinforce its competitive edge.
209

全球資訊網論述表現初探-以反國民卡行動聯盟網站為例 / A preliminary study of discourse on the World Wide Web: the Anti-National-ID-Card Action Alliance website as a case

林智惟 Unknown Date (has links)
This paper approaches Internet communication research from the perspective of social context, arguing that focusing on a single online communication phenomenon or event helps narrow the research question. It therefore starts from one important form of communication on the Internet, the World Wide Web, and asks how its particular textual structure and communicative characteristics shape its discursive performance in the communication around a social issue (or in the production and consumption of its content), and what influence and meaning that discursive performance has for the issue. The author chose the National ID Card controversy, once a major social issue that became entangled with the Web, because the founding of the "Anti-National-ID-Card" website turned the movement into a social event tied to online communication. Methodologically, the author applies discourse analysis to World Wide Web sites and attempts to develop a systematic method for analyzing them. The study finds, first, that discursive activity articulates cyberspace with social space, so discourse can serve as a starting point for studying online communication. Second, the openness of Web hypertext gives members of society more opportunities to operate communication media and provides a pluralistic discursive environment, letting them combine around their own interests to claim the power of interpretation; the significance of Web discourse for social issues lies precisely in its capacity to serve as a "medium of groups", allowing members of society to unite against mainstream discourse. Finally, discourse analysis of websites can start from the "textual" form of a site and then consider the social contexts behind the text, enriching the interpretation of the case.
210

Implementierung eines Algorithmus zur Partitionierung von Graphen / Implementation of an algorithm for graph partitioning

Riediger, Steffen 05 July 2007 (has links) (PDF)
Graph partitioning is very difficult in general: no algorithms are currently available that solve the general partitioning problem efficiently, so heuristic approaches are pursued instead. To analyze these heuristics one is currently forced to use random graphs, since data on real graphs are either very hard to collect (e.g., the Internet graph) or inaccessible for legal or commercial reasons (e.g., social networks). The heuristics studied so far deliver results only under certain conditions: some work only on a restricted class of graphs, while others require an average vertex degree that grows with the number of vertices in order to detect a partition, e.g., [DHM04]. The algorithm from [CGL07a], implemented for the first time in this thesis, needs only a constant average degree to recover a partition of the graph whenever one exists; in particular, the average degree need not grow with the number of vertices. After the implementation, the algorithm was tested on random graphs drawn from the Gnp model with a planted partition. The clustering problems examined were large cuts, small cuts, and independent sets; the average degree, which depends on the type of clustering problem, was determined during the tests.
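The planted-partition test setup can be sketched briefly: generate a G(n,p)-style graph with an embedded bisection, then try to recover it. The spectral heuristic below is a common baseline chosen here for illustration; it is not the algorithm from [CGL07a], which works down to constant average degree.

```python
# Sketch: a G(n,p)-style random graph with a planted bisection, plus a basic
# spectral heuristic that recovers it in the dense regime shown here.
import numpy as np

rng = np.random.default_rng(0)

def planted_bisection(n: int, p_in: float, p_out: float) -> np.ndarray:
    """Adjacency matrix: vertices 0..n/2-1 vs n/2..n-1, denser inside blocks."""
    half = n // 2
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            same = (i < half) == (j < half)
            if rng.random() < (p_in if same else p_out):
                A[i, j] = A[j, i] = 1.0
    return A

A = planted_bisection(100, 0.5, 0.1)
# The eigenvector of the second-largest adjacency eigenvalue separates the blocks.
eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
v2 = eigvecs[:, -2]
recovered = v2 > 0
print(recovered[:50].mean(), recovered[50:].mean())  # near 1.0/0.0 or 0.0/1.0
```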