About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Unicode - Herausforderungen

Heide, Gerd 03 May 2004 (has links)
Talk at the workshop "Netz- und Service-Infrastrukturen". Unicode makes it possible for the first time to represent every character in use worldwide unambiguously in a computer. The talk discusses problems that can arise during migration, such as processing already existing data, working in parallel with different encoding systems, and remotely accessing machines that use a different encoding.
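The legacy-data problem mentioned in the talk can be illustrated with a short sketch: decode bytes with the old encoding, re-encode as UTF-8. The sample string and the Latin-1 source encoding are illustrative assumptions, not examples from the talk.

```python
# Transcode legacy Latin-1 data to UTF-8, the usual first step when
# migrating existing data to Unicode.
legacy_bytes = "Größe: 5 µm".encode("latin-1")  # stand-in for data from an old system

# Decode with the legacy encoding, then re-encode as UTF-8.
text = legacy_bytes.decode("latin-1")
utf8_bytes = text.encode("utf-8")

# In Latin-1 every character is one byte; in UTF-8 the non-ASCII
# characters (ö, ß, µ) take two bytes each...
assert len(legacy_bytes) == len(text)
assert len(utf8_bytes) > len(text)

# ...but the round trip is lossless.
assert utf8_bytes.decode("utf-8") == text
```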
2

An intuitive unicode input method for ancient Egyptian hieroglyphic writing

Miyagawa, So 20 April 2016 (has links) (PDF)
In this study, I extended input methods for the Japanese language to Egyptian hieroglyphs. Several systems are capable of producing Egyptian hieroglyphic writing, but they do not allow users to input hieroglyphs directly into, for instance, MS Word. The new Egyptian hieroglyphic input system reported here, built on technology developed for inputting Japanese, is quite unique in allowing exactly that. Ancient Egyptian hieroglyphs and the Japanese writing system (with its mixture of hiragana, katakana, and kanji) share basic graphemic characteristics: Egyptian hieroglyphic logograms are functionally similar to Japanese kanji logograms (Chinese characters), while Egyptian hieroglyphic phonograms are functionally similar to the hiragana and katakana syllabic phonograms. Japanese input technology handles a mixture of logograms, phonograms, and phonetic complements, and is a well-organized, handy tool used by over 100 million people. I applied this technology to Ancient Egyptian and created a new, intuitive hieroglyphic input system based on Google Japanese Input. With this method, anyone can write Egyptian hieroglyphs directly in software such as MS Word: when the transliteration of an Ancient Egyptian word is entered, the system generates the corresponding hieroglyphs, and if a phonemic combination can be written with several different combinations of phonetic complements or determinatives, a dropdown window listing the alternatives appears and the user can choose the desired spelling.
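The IME-style candidate mechanism described above can be sketched as a lookup from transliteration to a list of candidate glyph strings. The dictionary entries below are hypothetical placeholders, not Miyagawa's actual mapping; the only hard fact used is that Unicode encodes Egyptian hieroglyphs in the block U+13000–U+1342F.

```python
# Toy IME-style candidate table: transliteration -> candidate hieroglyph
# spellings, in the spirit of the system described above. The glyphs are
# simply the first code points of the Unicode Egyptian Hieroglyphs block
# (U+13000-U+1342F) and do not claim to be the correct signs for these
# transliterations.
CANDIDATES = {
    "nfr": [chr(0x13000), chr(0x13000) + chr(0x13001)],  # placeholder spellings
    "pr":  [chr(0x13002)],
}

def lookup(translit: str) -> list[str]:
    """Return candidate glyph strings, like the entries of an IME dropdown."""
    return CANDIDATES.get(translit, [])

# The first candidate would be the default; the rest populate the dropdown.
assert len(lookup("nfr")) == 2
assert lookup("unknown") == []
```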
5

Episode 3.09 – UTF-8 Encoding and Unicode Code Points

Tarnoff, David 01 January 2020 (has links)
ASCII was developed when every computer was an island and over 35 years before the first emoji appeared. In this episode, we will take a look at how Unicode and UTF-8 expanded ASCII for ubiquitous use while maintaining backwards compatibility.
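The backwards compatibility discussed in the episode follows directly from UTF-8's byte layout: code points below 128 are emitted as single bytes identical to ASCII, while higher code points use two to four bytes. A minimal encoder for a single code point (no surrogate or range validation) makes this concrete:

```python
def utf8_encode(cp: int) -> bytes:
    """Encode one Unicode code point as UTF-8 (sketch; no surrogate checks)."""
    if cp < 0x80:                      # ASCII: one byte, unchanged
        return bytes([cp])
    if cp < 0x800:                     # 2 bytes: 110xxxxx 10xxxxxx
        return bytes([0xC0 | cp >> 6, 0x80 | cp & 0x3F])
    if cp < 0x10000:                   # 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
        return bytes([0xE0 | cp >> 12, 0x80 | cp >> 6 & 0x3F, 0x80 | cp & 0x3F])
    # 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx (this range covers emoji)
    return bytes([0xF0 | cp >> 18, 0x80 | cp >> 12 & 0x3F,
                  0x80 | cp >> 6 & 0x3F, 0x80 | cp & 0x3F])

# ASCII text is already valid UTF-8, byte for byte.
assert utf8_encode(ord("A")) == b"A"
# An emoji such as U+1F600 needs four bytes.
assert utf8_encode(0x1F600) == "😀".encode("utf-8")
```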
6

Developing a concept for handling IT security with secured and trusted electronic connections

Hockmann, Volker January 2014 (has links)
In this day and age, the Internet provides the biggest linkage of information, personal data, social contact facilities, entertainment, and electronic repositories for all kinds of material, including software downloads and tools, online books and technical descriptions, music and movies - both legal and illegal [Clarke, 1994]. With the worldwide increase in bandwidth over the last few years, it has become possible to access the so-called "Triple-Play" solutions - Voice over IP, high-speed Internet, and Video on Demand. More than 100 million subscribers signed on across Asia, Europe, and the Americas in 2007, and growth is likely to continue steadily in all three regions. As broadband moves into the mainstream, it is reshaping the telecommunications, cable, and Internet access industries [Beardsley, Scott and Doman, Andrew, and EdinMC Kinsey, Par, 2003]. Cisco [Cisco, 2012], one of the biggest network companies, expects more than 966 exabytes (nearly 1 zettabyte) per year, or 80.5 exabytes per month, in 2015: "Global IP traffic has increased eightfold over the past 5 years, and will increase fourfold over the next 5 years. Overall, IP traffic will grow at a compound annual growth rate (CAGR) of 32 percent from 2010 to 2015". More and more types of sensitive data flow between different recipients. News from around the world is transferred within seconds from one end of the world to the other, affects the financial markets and stock exchanges [Reuters, 2012], and can even bring down whole governments. For instance, worldwide turmoil might ensue if a hacker broke into the web server of an international newspaper or news channel like N-TV in Germany or the BBC in England and displayed messages of a political revolution in Dubai or the death of the CEO of Microsoft or IBM.
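The growth figures quoted from Cisco are internally consistent, which a two-line calculation confirms: an n-fold increase over y years corresponds to a compound annual growth rate of n^(1/y) - 1.

```python
# Check the quoted traffic figures: fourfold growth over five years
# should match the stated 32 percent CAGR.
def cagr(factor: float, years: int) -> float:
    """Compound annual growth rate for an overall growth factor over some years."""
    return factor ** (1 / years) - 1

# Fourfold over the next 5 years -> the ~32 percent CAGR Cisco quotes.
assert round(cagr(4, 5) * 100) == 32
# Eightfold over the previous 5 years implies roughly 52 percent per year.
assert round(cagr(8, 5) * 100) == 52
```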
7

Mitteilungen des URZ 2/2004

Heide, Richter, Riedel, Schier, Kratzert, Ziegler 10 May 2004 (has links) (PDF)
Announcements from the University Computing Center (Universitätsrechenzentrum)
8

Mitteilungen des URZ 2/2004

Heide, Richter, Riedel, Schier, Kratzert, Ziegler 10 May 2004 (has links)
Announcements from the University Computing Center (Universitätsrechenzentrum): use of the computer pools; Unicode - a new kind of character encoding; secure programming with PHP (part 2); NIDS in the campus network; MONARCH; beware of mail worms; brief notices.
9

A Framework to Understand Emoji Meaning: Similarity and Sense Disambiguation of Emoji using EmojiNet

Wijeratne, Sanjaya January 2018 (has links)
No description available.
10

Improving Retrieval Accuracy in Main Content Extraction from HTML Web Documents

Mohammadzadeh, Hadi 17 December 2013 (has links) (PDF)
The rapid growth of text-based information on the World Wide Web, and the various applications making use of this data, motivates the need for efficient and effective methods to identify and separate the "main content" from additional content items such as navigation menus, advertisements, design elements, or legal disclaimers. Firstly, in this thesis, we study, develop, and evaluate R2L, DANA, DANAg, and AdDANAg, a family of novel algorithms for extracting the main content of web documents. The main concept behind R2L, which also provided the initial idea and motivation for the other three algorithms, is to exploit the particularities of Right-to-Left languages to obtain the main content of web pages. As the English character set and the Right-to-Left character sets are encoded in different intervals of the Unicode character set, we can efficiently distinguish Right-to-Left characters from English ones in an HTML file. This enables the R2L approach to recognize areas of the HTML file with a high density of Right-to-Left characters and a low density of characters from the English character set. Having recognized these areas, R2L can extract the Right-to-Left characters from them. The first extension of R2L, DANA, improves the effectiveness of the baseline algorithm by employing an HTML parser in a post-processing phase of R2L to extract the main content from areas with a high density of Right-to-Left characters. DANAg, the second extension of R2L, generalizes the idea of R2L to render it language-independent. AdDANAg, the third extension of R2L, integrates a new preprocessing step to normalize the hyperlink tags. The presented approaches are analyzed under the aspects of efficiency and effectiveness. We compare them to several established main content extraction algorithms and show that we extend the state of the art in terms of both efficiency and effectiveness.
Secondly, automatically extracting the headline of web articles has many applications. We develop and evaluate TitleFinder, a content-based and language-independent approach for unsupervised extraction of the headlines of web articles. The proposed method achieves high performance in terms of effectiveness and efficiency and outperforms approaches operating on structural and visual features.
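The density test at the heart of R2L can be sketched in a few lines: count characters falling into Right-to-Left Unicode blocks versus other letters, segment by segment. The block ranges below come from the Unicode standard; the 0.5 density threshold is an illustrative assumption, not the tuning used in the thesis.

```python
# Sketch of the R2L density idea: Right-to-Left scripts occupy known
# intervals of the Unicode code space, so per-segment character counts
# suffice to separate RTL-dense main content from Latin boilerplate.
RTL_RANGES = [(0x0590, 0x05FF),   # Hebrew
              (0x0600, 0x06FF),   # Arabic
              (0x0750, 0x077F)]   # Arabic Supplement

def is_rtl(ch: str) -> bool:
    return any(lo <= ord(ch) <= hi for lo, hi in RTL_RANGES)

def rtl_density(segment: str) -> float:
    """Fraction of a segment's letters that belong to an RTL block."""
    letters = [c for c in segment if c.isalpha()]
    if not letters:
        return 0.0
    return sum(is_rtl(c) for c in letters) / len(letters)

def looks_like_main_content(segment: str, threshold: float = 0.5) -> bool:
    # Threshold is an assumption for illustration only.
    return rtl_density(segment) >= threshold

assert looks_like_main_content("سلام دنیا")          # mostly Arabic script
assert not looks_like_main_content("Home | Login")   # Latin boilerplate
```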
