381
Originální a převzatý zdroj v síti internet / Original and Assumed Sources on World Wide Web. Svatý, Michal. January 2011.
The thesis focuses on the problem of authorship of Internet content. It considers whether an artwork can be analyzed purely as a closed structure of symbols, or whether it must be treated as part of a broader structure. The first part introduces the Internet in general and its specifics as a medium. The second part is devoted to the problem of authorship. It explains the traditional view of Nelson Goodman, who examined the authenticity of an artwork with regard to the internal arrangement of the elements of its structure. The third part presents later theories inconsistent with Goodman's views: contemporary authors who describe how aspects of authorship change in an environment of constant development and rapid sharing of ideas and resources. These newer theories suggest looking beyond the domain of the artwork itself and applying a broader focus. The conclusion provides a synthesis of these views.
382
Web-based geotemporal visualization of healthcare data. Bloomquist, Samuel W. 09 October 2014.
Indiana University-Purdue University Indianapolis (IUPUI) / Healthcare data visualization presents challenges due to its non-standard organizational structure and disparate record formats. Epidemiologists and clinicians currently lack the tools to discern patterns in large-scale data that would reveal valuable healthcare information at the granular level of individual patients and populations. Integrating geospatial and temporal healthcare data within a common visual context provides a twofold benefit: it allows clinicians to synthesize large-scale healthcare data to provide a context for local patient care decisions, and it better informs epidemiologists in making public health recommendations.
Advanced implementations of the Scalable Vector Graphics (SVG), HyperText Markup Language version 5 (HTML5), and Cascading Style Sheets version 3 (CSS3) specifications in the latest versions of most major Web browsers brought hardware-accelerated graphics to the Web and opened the door for more intricate and interactive visualization techniques than were previously possible. We developed a series of new geotemporal visualization techniques under a general healthcare data visualization framework in order to provide a real-time dashboard for analysis and exploration of complex healthcare data. This visualization framework, HealthTerrain, is a concept space constructed using text and data mining techniques, extracted concepts, and attributes associated with geographical locations.
HealthTerrain's association graph serves two purposes. First, it is a powerful interactive visualization of the relationships among concept terms, allowing users to explore the concept space, discover correlations, and generate novel hypotheses. Second, it functions as a user interface, allowing selection of concept terms for further visual analysis.
In addition to the association graph, concept terms can be compared across time and location using several new visualization techniques. A spatial-temporal choropleth map projection embeds rich textures to generate an integrated, two-dimensional visualization. Its key feature is a new offset contour method to visualize multidimensional and time-series data associated with different geographical regions. Additionally, a ring graph reveals patterns at the fine granularity of patient occurrences using a new radial coordinate-based time-series visualization technique.
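The ring graph is described here only at this level of detail. Below is a rough sketch of the kind of radial time-series mapping it suggests, assuming that angle encodes position in time and radius encodes which ring a series occupies; both assumptions and the helper `radial_layout` are illustrative, not code or parameters from the thesis.

```python
import math

def radial_layout(series, ring_index, inner_radius=50.0, ring_spacing=20.0):
    """Map a time series onto a ring: angle encodes position in time,
    radius encodes which ring (series) the points belong to."""
    n = len(series)
    points = []
    for i, value in enumerate(series):
        angle = 2.0 * math.pi * i / n              # time -> angular position
        radius = inner_radius + ring_index * ring_spacing
        x = radius * math.cos(angle)
        y = radius * math.sin(angle)
        points.append((x, y, value))               # value can drive color or size
    return points

# Hypothetical usage: 52 weekly patient-occurrence counts on the innermost ring.
weekly_counts = [3, 5, 2, 8] * 13
print(radial_layout(weekly_counts, ring_index=0)[:2])
```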
383
Systém pro podporu e-learningu / System for E-learning Support. Drahoš, Michal. Unknown Date.
This project deals with the creation of a system for e-learning support. It describes the problematics of e-learning, explains the meaning of the term, and outlines its advantages and disadvantages. The objective was to create an application suitable for electronic learning that could illustrate the specification of constraints on system objects, i.e., an application explaining OCL in a format that reflects the structure and form of applications used for e-learning. I familiarized myself with the OCL and UML 2.0 languages, with the Rational Rose CASE environment, and with the e-learning domain; this is the knowledge I used while creating my e-learning application.
384
Modulární informační systém pro publikování / Modular Information System for Publishing. Levinský, Stanislav. January 2007.
This master's thesis gives a brief introduction to the languages used for creating web information systems and web pages. It describes existing modular systems for publishing (also known as content management systems, CMS) and their advantages and disadvantages. Requirements for the content management system are specified and a general scheme (ERD) for such a system is proposed. The presented system is implemented in PHP on top of the MySQL database system. The implemented user-friendly application running on the server communicates with the developed system, which provides the content management functions.
385
Systém pro podporu výuky dynamických datových struktur / System for Support of Dynamic Data Structures Learning. Trávníček, Jiří. Unknown Date.
The main objective of this work is to design and implement an application that can be used as an aid in teaching the essentials of programming. In particular, the attention focuses on the domain of dynamic data structures. The target application is implemented with web technologies so that it can run in an ordinary WWW browser. First of all, a brief introduction recapitulates the data structures to be covered. The work then summarizes the usable technologies available within web browsers, with a focus on the particular technology (DHTML) that becomes the target platform. The most significant part of this work discusses the design of the final application. This rather theoretical part is then followed by a description of the practical implementation. A short user manual is also included.
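The abstract does not list the specific structures covered; as an illustration of the kind of dynamic data structure such a teaching aid typically visualizes, here is a minimal singly linked list in Python (illustrative only, not code from the thesis):

```python
class Node:
    """One cell of a singly linked list: a value plus a reference onward."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class LinkedList:
    """Grows and shrinks at runtime, which is what makes it 'dynamic'."""
    def __init__(self):
        self.head = None

    def push_front(self, value):
        self.head = Node(value, self.head)   # new node points at the old head

    def pop_front(self):
        if self.head is None:
            raise IndexError("pop from empty list")
        value, self.head = self.head.value, self.head.next
        return value

lst = LinkedList()
lst.push_front(1)
lst.push_front(2)
print(lst.pop_front())  # prints 2
```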
386
Studijní statistiky na portálu / Study Statistics for Portal. Gruzová, Michaela. Unknown Date.
The master's thesis discusses the design and implementation of study statistics for the Portal of the Brno University of Technology. It analyses the structure of the BUT portal and the technologies used to create it: the server-side scripting language PHP, the query language SQL, and CSS cascading style sheets. It describes the Oracle database technology and the st01 database scheme, a large scheme containing data used by the information systems. It analyses the initial state of the web applications and the situation at individual faculties. The thesis compares different solutions for certain parts of the application and selects the most appropriate ones. Finally, it describes the implementation of the study statistics and their integration into the central information system.
387
Improving Retrieval Accuracy in Main Content Extraction from HTML Web Documents. Mohammadzadeh, Hadi. 17 December 2013.
The rapid growth of text-based information on the World Wide Web and the various applications making use of this data motivate the need for efficient and effective methods to identify and separate the “main content” from additional content items, such as navigation menus, advertisements, design elements or legal disclaimers.
Firstly, in this thesis, we study, develop, and evaluate R2L, DANA, DANAg, and AdDANAg, a family of novel algorithms for extracting the main content of web documents. The main concept behind R2L, which also provided the initial idea and motivation for the other three algorithms, is to exploit particularities of Right-to-Left languages for obtaining the main content of web pages. As the English character set and the Right-to-Left character sets are encoded in different intervals of the Unicode character set, we can efficiently distinguish the Right-to-Left characters from the English ones in an HTML file. This enables the R2L approach to recognize areas of the HTML file with a high density of Right-to-Left characters and a low density of characters from the English character set. Having recognized these areas, R2L can successfully separate out only the Right-to-Left characters. The first extension of R2L, DANA, improves the effectiveness of the baseline algorithm by employing an HTML parser in a post-processing phase of R2L for extracting the main content from areas with a high density of Right-to-Left characters. DANAg is the second extension of R2L and generalizes the idea of R2L to render it language-independent. AdDANAg, the third extension of R2L, integrates a new preprocessing step to normalize the hyperlink tags. The presented approaches are analyzed under the aspects of efficiency and effectiveness. We compare them to several established main content extraction algorithms and show that we extend the state of the art in terms of both efficiency and effectiveness.
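The paragraph above gives the core mechanism: classify characters by their Unicode interval and keep regions where Right-to-Left characters dominate. Here is a minimal line-based sketch of that density idea; the segmentation granularity, the Hebrew/Arabic blocks used for the test, and the 0.5 threshold are all choices of this illustration, not parameters reported in the thesis.

```python
def is_rtl(ch):
    """Rough test: Hebrew (U+0590-U+05FF) and Arabic (U+0600-U+06FF) blocks."""
    return "\u0590" <= ch <= "\u05FF" or "\u0600" <= ch <= "\u06FF"

def rtl_dense_lines(html_text, threshold=0.5):
    """Keep lines whose letters are predominantly Right-to-Left, then
    strip everything except the Right-to-Left characters themselves."""
    kept = []
    for line in html_text.splitlines():
        letters = [c for c in line if c.isalpha()]
        if not letters:
            continue
        density = sum(is_rtl(c) for c in letters) / len(letters)
        if density >= threshold:
            kept.append("".join(c for c in line if is_rtl(c) or c.isspace()))
    return kept

page = "<div>שלום עולם</div>\n<div>menu | login | imprint</div>"
print(rtl_dense_lines(page))  # the navigation line is discarded
```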
Secondly, automatically extracting the headline of web articles has many applications. We develop and evaluate a content-based and language-independent approach, TitleFinder, for unsupervised extraction of the headline of web articles. The proposed method achieves high performance in terms of effectiveness and efficiency and outperforms approaches operating on structural and visual features. / Das rasante Wachstum von textbasierten Informationen im World Wide Web und die Vielfalt der Anwendungen, die diese Daten nutzen, machen es notwendig, effiziente und effektive Methoden zu entwickeln, die den Hauptinhalt identifizieren und von den zusätzlichen Inhaltsobjekten wie
z.B. Navigations-Menüs, Anzeigen, Design-Elementen oder Haftungsausschlüssen trennen.
Zunächst untersuchen, entwickeln und evaluieren wir in dieser Arbeit R2L, DANA, DANAg und AdDANAg, eine Familie von neuartigen Algorithmen zum Extrahieren des Inhalts von Web-Dokumenten. Das grundlegende Konzept hinter R2L, das auch zur Entwicklung der drei weiteren Algorithmen führte, nutzt die Besonderheiten der Rechts-nach-links-Sprachen aus, um den Hauptinhalt von Webseiten zu extrahieren.
Da der lateinische Zeichensatz und die Rechts-nach-links-Zeichensätze durch verschiedene Abschnitte des Unicode-Zeichensatzes kodiert werden, lassen sich die Rechts-nach-links-Zeichen leicht von den lateinischen Zeichen in einer HTML-Datei unterscheiden. Das erlaubt dem R2L-Ansatz, Bereiche mit einer hohen Dichte von Rechts-nach-links-Zeichen und wenigen lateinischen Zeichen aus einer HTML-Datei zu erkennen. Aus diesen Bereichen kann dann R2L die Rechts-nach-links-Zeichen extrahieren. Die erste Erweiterung, DANA, verbessert die Wirksamkeit des Baseline-Algorithmus durch die Verwendung eines HTML-Parsers in der Nachbearbeitungsphase des R2L-Algorithmus, um den Inhalt aus Bereichen mit einer hohen Dichte von Rechts-nach-links-Zeichen zu extrahieren. DANAg erweitert den Ansatz des R2L-Algorithmus, so dass eine Sprachunabhängigkeit erreicht wird. Die dritte Erweiterung, AdDANAg, integriert einen neuen Vorverarbeitungsschritt, um u.a. die Weblinks zu normalisieren. Die vorgestellten Ansätze werden in Bezug auf Effizienz und Effektivität analysiert. Im Vergleich mit mehreren etablierten Hauptinhalt-Extraktions-Algorithmen zeigen wir, dass sie in diesen Punkten überlegen sind.
Darüber hinaus findet die Extraktion der Überschriften aus Web-Artikeln vielfältige Anwendungen. Hierzu entwickeln wir mit TitleFinder einen sich nur auf den Textinhalt beziehenden und sprachunabhängigen Ansatz. Das vorgestellte Verfahren ist in Bezug auf Effektivität und Effizienz besser als bekannte Ansätze, die auf strukturellen und visuellen Eigenschaften der HTML-Datei beruhen.
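The abstract states only that TitleFinder is content-based and language-independent. One plausible reading, scoring candidate strings by word overlap with the already-extracted main content, is sketched below; the scoring function and the candidate source are assumptions of this illustration, not the thesis's actual method.

```python
def overlap_score(candidate, content_vocab):
    """Fraction of the candidate's words that also occur in the main content."""
    words = candidate.lower().split()
    if not words:
        return 0.0
    return sum(w in content_vocab for w in words) / len(words)

def pick_headline(candidates, main_content):
    """Choose the candidate string that best overlaps the extracted main content."""
    vocab = set(main_content.lower().split())
    return max(candidates, key=lambda c: overlap_score(c, vocab))

# Hypothetical usage: candidates might come from <title>, <h1>, <h2>, ... tags.
print(pick_headline(["About this site", "R2L extracts the main content"],
                    "R2L extracts the main content of web pages efficiently"))
```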
388
WORKSHOP "MOBILITÄT"Anders, Jörg 12 June 2001 (has links)
Joint workshop of the University Computing Centre (Universitätsrechenzentrum) and the Chair "Rechnernetze und verteilte Systeme" (Computer Networks and Distributed Systems) of the Faculty of Computer Science at TU Chemnitz.
Workshop topic: Mobility
389
En studie av hur en webbapplikation för annonsering av konsultuppdrag till studenter kan implementeras för att uppfattas som användbar / A study of how a web application for advertising consulting jobs to students can be implemented to be perceived as useful. Åström, Adam; Öberg, Albin; Elkjaer, Alice; Olsson, Fredrik; Bengtsson Malmborg, Hannes; Jacobson, Madeleine; Schwartz-Blicke, Oscar; Storsved, Viktor. January 2021.
Studentkonsultprojekt gör det möjligt för studenter att applicera sin kunskap i näringslivet samtidigt som det blir mindre kostsamt för företagen att anlita konsulter. Då jobbsökande via internet blir allt vanligare finns det ett behov av en webbapplikation för konsultuppdrag som kopplar samman studenter och företag. En av de viktigaste aspekterna för att skapa en konkurrenskraftig webbapplikation är användbarheten. Således är intentionen med denna studie att undersöka Hur kan en webbapplikation för konsultuppdrag mellan företag och studenter implementeras för att uppfattas som användbar av studenter? För att besvara frågeställningen har en webbapplikation för förmedling av konsulttjänster mellan företag och studenter utvecklats. Webbapplikationen baseras på en teoretisk grund där olika dimensioner av begreppet användbarhet analyserats. De dimensioner som lyfts är effektivitet, ändamålsenlighet och tillfredsställelse. I tillägg till detta har vikten av att specificera användare och de estetiska aspekternas påverkan på användbarhet behandlats. För att utvärdera om webbapplikationen upplevs som användbar testas den på tre testgrupper i tre olika skeden för att undersöka deras upplevelse av webbapplikationen. Testerna utgår från metoden thinking aloud tillsammans med enkäterna System Usability Scale (SUS) och Visual Aesthetics of Websites Inventory Short (VisAWI-S). SUS- och VisAWI-S-enkäterna gav indikationer på en starkt användbar applikation genom hela utvecklingsprocessen. Detta utifrån implementation av en design som främst utgick från principerna enkelhet och färgrikedom samt fokusområdena Synlighet av systemstatus, Igenkänning istället för återkallande, Flexibilitet och effektiv användning och Estetisk och minimalistisk design. Genom att analysera resultaten från thinking aloud-testerna kunde en tydlig minskning av negativa kommentarer identifieras mellan användartest 1-3. Utifrån dessa testresultat, dras slutsatserna att genom återkoppling relaterad till utförda aktioner, implementation av markörer och färgval med hänsyn till kontraster kan en webbapplikation för konsultjobb implementeras för att uppfattas som användbar av studenter. / Student consulting makes it possible for students to apply their knowledge in business cases. In addition, it reduces the cost for enterprises of hiring consultants. As job hunting via the internet becomes more common, there is a need for a web application that connects company projects with students who are interested in consulting. One of the most prominent aspects of a competitive web application is usability. The intention of this study is therefore to examine: How can a web application for consulting jobs between enterprises and students be implemented so as to be perceived as useful by students? To answer this question, a web application for the intermediation of consulting jobs between enterprises and students has been developed. The web application is built on a theoretical basis in which different dimensions of the term usability have been analysed. These dimensions are efficiency, effectiveness and satisfaction. In addition, the importance of specifying users and the effect of aesthetic aspects on usability have been discussed. To evaluate whether the web application is perceived as useful, it is tested on three groups of people on three different occasions to examine their perception of the web application.
The tests are based on the thinking-aloud method together with the questionnaires System Usability Scale (SUS) and Visual Aesthetics of Websites Inventory Short (VisAWI-S). The SUS and VisAWI-S questionnaires indicated that the web application had a high level of usability throughout the development process. This was achieved by implementing a design based on simplicity and colorfulness as well as on the principles Visibility of system status, Recognition rather than recall, Flexibility and efficiency of use, and Aesthetic and minimalist design. By analysing the results from the thinking-aloud tests, a clear reduction in negative comments between tests 1 and 3 could be identified. From these test results, the conclusion is that through feedback related to completed actions, implementation of markers, and colour choices with regard to contrast, a web application for consulting jobs between enterprises and students can be implemented so as to be perceived as useful by students.
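For context, SUS scores follow a fixed formula: each odd-numbered item contributes (response - 1), each even-numbered item contributes (5 - response), and the sum is scaled by 2.5 onto a 0-100 range. A minimal sketch (the responses shown are made up, not data from the study):

```python
def sus_score(responses):
    """System Usability Scale: ten 1-5 Likert responses -> a 0-100 score."""
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items positive, even negated
    return total * 2.5  # scores around 68 are commonly cited as average

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # hypothetical answers -> 85.0
```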
390
Improving Retrieval Accuracy in Main Content Extraction from HTML Web Documents. Mohammadzadeh, Hadi. 27 November 2013.