  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Mechanisms and techniques for managing, processing, analyzing, categorizing, summarizing, and personalizing frequently updated World Wide Web data for presentation on desktop and mobile devices

Πουλόπουλος, Βασίλειος 01 November 2010 (has links)
We live in an era of rapid technological progress in which the Internet has become one of the principal vehicles of new technology trends. Its structure and mode of operation, however, are extremely heterogeneous, and users frequently reach a dead end when trying to locate information; the existence of millions of domains makes searching all the harder. Our research focuses on websites that serve as information sources, specifically news portals and blogs. A simple search revealed more than 40 websites of large press agencies in America alone, which means that someone trying to follow a news story in all its aspects would have to visit most, if not all, of these sites.
Several solutions have been proposed for this problem, or at least for this tedious task: RSS feeds, the personalized microsites that large news agencies offer, and the agencies' own search facilities. Each has significant drawbacks that again lead the user to a dead end. RSS feeds do not filter information, flooding users' RSS readers with items that are irrelevant or even annoying to them; as an example, subscribing to just two feeds from large Greek news portals delivered more than 1,000 news items per day. Microsites, on the other hand, require users to visit (and maintain an account on) every website of interest. Search engines are an alternative, but with the expansion of the Web even the largest of them often return millions of results for simple queries, or results that are out of date. Finally, news agencies' websites were not built to offer extensive news-search services: they often provide no search facility at all, or one that cannot return structured results and disorients users instead of helping them find what they are looking for, frequently delegating search to a large general-purpose engine (e.g. Google).
Our research therefore concentrates on techniques and mechanisms that address the everyday problem of staying informed and forming a well-rounded view of a story. The idea is simple and rests on the Internet's own weakness: instead of leaving users to search for the news that meets their needs, we collect all the information and present directly to each user only the items that match their profile. This sounds straightforward, but the implementation must satisfy several constraints: Internet users speak different languages and want to read news in their mother tongue, and they want access to the information from everywhere. We therefore need a mechanism that collects news articles from many, if not all, news agencies worldwide, and the collected articles must be analyzed before they are presented. In parallel we apply text pre-processing, categorization, and automatic summarization so that articles can be presented in a personalized manner; finally, the mechanism builds and maintains a user profile and presents only the articles that match it, rather than everything the system has collected.
This is clearly not a simple procedure. It is, in essence, a multilevel modular mechanism that implements and uses advanced algorithms at every level. Eight subsystems lead to the desired result:
1. Retrieval of news and articles from the Internet (the advaRSS system).
2. HTML page analysis and useful text extraction (the CUTER system).
3. Pre-processing and natural language processing to extract keywords.
4. A categorization subsystem that builds ontologies and assigns texts to categories.
5. An article-grouping mechanism (web application level).
6. Automatic text summarization.
7. A web-based user personalization mechanism.
8. An application-based user personalization mechanism.
The subsystems and the overall architecture are presented in Figure 1. Fetching articles and news from the WWW involves algorithms that retrieve data from the vast database that is the Internet: the mechanism supports instant retrieval of articles and also fetches the HTML pages that contain news articles. Given those HTML pages, efficient useful-text extraction follows: a page contains the body of the article alongside material unrelated to it, such as advertisements, and our algorithms extract the original body of the text while omitting any irrelevant information. As a further step of the same mechanism, we also try to extract multimedia related to the article. These mechanisms communicate directly with the Internet.
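The useful-text-extraction step (the CUTER subsystem) separates an article's body from advertisements and other unrelated page content. A minimal, illustrative sketch of the underlying idea, keeping the longest text run found outside script/style-like tags, could look like this; the class and heuristic are our own simplification for illustration, not the thesis's actual algorithm:

```python
from html.parser import HTMLParser

class UsefulTextExtractor(HTMLParser):
    """Toy 'useful text extraction': keep the longest text run found
    outside script/style/navigation-like tags. The real CUTER system is
    far more sophisticated; this only illustrates the idea."""
    SKIP = {"script", "style", "nav", "aside", "footer"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0  # >0 while inside a tag we want to ignore
        self.best = ""

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        text = data.strip()
        if not self._skip_depth and len(text) > len(self.best):
            self.best = text

def extract_article_body(html: str) -> str:
    parser = UsefulTextExtractor()
    parser.feed(html)
    return parser.best
```

In practice a "longest run" heuristic breaks on articles split across many paragraphs, which is one reason a dedicated system like CUTER is needed; the sketch only conveys the separation of body text from page chrome.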
2

DEPENDABLE CLOUD RESOURCES FOR BIG-DATA BATCH PROCESSING & STREAMING FRAMEWORKS

Bara M Abusalah (10692924) 07 May 2021 (has links)
Anyone examining cloud computing systems over the last few years will observe a trend: new Big Data frameworks emerge every single year. Since Hadoop was developed in 2007, new frameworks have followed it, such as Spark, Storm, Heron, Apex, Flink, Samza, and Kafka. Each framework is developed to target and achieve certain objectives better than other frameworks do, yet a few functionalities and concerns are shared among them. One vital aspect all these frameworks strive for is better reliability and faster recovery time in case of failures. Despite all the advances in making datacenters dependable, failures still happen. This is particularly onerous for long-running "big data" applications, where partial failures can lead to significant losses and lengthy recomputations, and it is crucial for streaming systems, where events are processed and monitored online in real time and any delay in data delivery causes a major inconvenience to users.
Another observation is that some reliability implementations are redundant across frameworks. Big data processing frameworks like Hadoop MapReduce include fault tolerance mechanisms, but these are commonly targeted at specific system/failure models and are often duplicated from one framework to the next. Encapsulating these implementations in one layer shared between different applications would benefit more than one framework, without the burden of re-implementing the same reliability approach in each one.
These observations motivated us to solve the problem by presenting two systems: Guardian, tailored towards batch-processing big data systems, and Warden, targeted towards stream-processing systems. Both are robust, RMS-based, generic, multi-framework, flexible, customizable, low-overhead systems that allow their users to run applications with individually configurable fault tolerance granularity and degree, with only minor changes to their implementation.
Most reliability approaches apply one rigid fault tolerance technique to one system at a time. It is more challenging to provide a reliability approach that is pluggable into multiple Big Data frameworks at once and achieves low overheads comparable to single-framework approaches, yet remains flexible and customizable enough for users to tailor it to their objectives. Genericity is attained by providing an interface that can be used in different applications from different frameworks, in any part of the application code. Low overhead is achieved by providing faster application finish times both with and without failures. Customizability is fulfilled by giving users the choice between two fault tolerance guarantees (crash failures / Byzantine failures), combined, in the case of streaming systems, with two delivery semantics (exactly once / at most once).
In other words, this thesis proposes the paradigm of dependable resources: big data processing frameworks are typically built on top of resource management systems (RMSs), and providing fault tolerance support at the level of such an RMS yields generic fault tolerance mechanisms that can be offered with low overhead by leveraging constraints on resources. To the best of our knowledge, such an approach has never been tried on multiple big data batch-processing and streaming frameworks before.
We demonstrate the benefits of Guardian by evaluating batch-processing frameworks such as Hadoop, Tez, Spark, and Pig on a prototype of Guardian running on Amazon EC2, improving completion time by around 68% in the presence of failures while maintaining around 6% overhead. We have also built a prototype of Warden on the Flink and Samza (with Kafka) streaming frameworks. Our evaluations of Warden highlight the effectiveness of our approach, both with and without failures, compared to other fault tolerance techniques such as checkpointing.
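The choice between crash and Byzantine fault tolerance guarantees determines how many replicas are needed to tolerate f failures (f + 1 versus 2f + 1 in a common formulation) and how a final result is agreed upon. A rough sketch of that arithmetic and of a majority-vote agreement step follows; every name here is hypothetical and unrelated to Guardian's or Warden's real interfaces:

```python
from collections import Counter
from enum import Enum
from typing import Any, Callable

class Guarantee(Enum):
    CRASH = "crash"          # tolerate f crash failures with f + 1 replicas
    BYZANTINE = "byzantine"  # tolerate f Byzantine failures with 2f + 1 replicas

def replicas_needed(guarantee: Guarantee, f: int) -> int:
    """Replica count required to tolerate f failures under each model."""
    return f + 1 if guarantee is Guarantee.CRASH else 2 * f + 1

def run_replicated(task: Callable[[], Any], guarantee: Guarantee, f: int) -> Any:
    """Run a task on the required number of (simulated, sequential) replicas
    and agree on a result: any surviving answer suffices for crash faults,
    while Byzantine faults require a majority vote over all answers."""
    results = [task() for _ in range(replicas_needed(guarantee, f))]
    if guarantee is Guarantee.CRASH:
        return results[0]
    value, _count = Counter(results).most_common(1)[0]
    return value
```

The point of pushing this logic into the RMS layer, as the thesis argues, is that the replica-count arithmetic and the agreement step are identical regardless of which framework (Hadoop, Flink, Samza, and so on) produced the tasks.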
3

Performance and Cost Optimization for Distributed Cloud-native Systems

Ashraf Y Mahgoub (13169517) 28 July 2022 (has links)
<p> First, NoSQL datastores provide a set of features demanded by high-performance computing (HPC) applications, such as scalability, availability, and schema flexibility. HPC applications, such as metagenomics and other big data systems, need to store and analyze huge volumes of semi-structured data, and they often rely on NoSQL-based datastores. Optimizing these databases is a challenging endeavor, with over 50 configuration parameters in Cassandra alone. As the application executes, database workloads can change rapidly over time (e.g. from read-heavy to write-heavy), and a system tuned for one phase of the workload becomes suboptimal when the workload changes. </p>
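The phase-change problem described above (a read-heavy window turning write-heavy) suggests a natural first step for any tuner: classify each workload window by its read fraction and switch parameter profiles when the label changes. A rough sketch under assumed thresholds; the profile contents are illustrative placeholders, not verified Cassandra configuration names:

```python
def classify_workload(reads: int, writes: int, threshold: float = 0.7) -> str:
    """Label a workload window by its read fraction. A tuner could switch
    configuration profiles whenever the label changes between windows."""
    total = reads + writes
    if total == 0:
        return "idle"
    read_frac = reads / total
    if read_frac >= threshold:
        return "read-heavy"
    if read_frac <= 1 - threshold:
        return "write-heavy"
    return "mixed"

# Hypothetical per-phase parameter profiles. The knob names and values are
# invented for illustration and are not real Cassandra settings, though
# leveled vs. size-tiered compaction is a genuine Cassandra trade-off.
PROFILES = {
    "read-heavy":  {"cache_size_mb": 2048, "compaction": "leveled"},
    "write-heavy": {"cache_size_mb": 512,  "compaction": "size-tiered"},
    "mixed":       {"cache_size_mb": 1024, "compaction": "size-tiered"},
}
```

A real tuner must also weigh the cost of reconfiguration itself, since applying a new profile (e.g. changing compaction strategy) is not free and can be counterproductive if phases alternate faster than the system can adapt.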
4

Web accessibility and usability evaluation: a support focusing on older users

Rodrigues, Sandra Souza 26 October 2016 (has links)
The constant evolution of the Web is a worldwide phenomenon that must quickly respond to the various segments of today's society, with websites and applications for shopping, government, banking, entertainment, and more. In this context, Web content must provide access to the most diverse user profiles, regardless of disabilities or special needs. Another worldwide phenomenon is population aging. Senescent users (people in the aging process, which brings gradual physical and mental decline and usually affects individuals from about 60 years of age) have some of their capacities reduced and naturally face barriers when interacting with the services and content available on the Web. This population has shown a much higher demographic growth rate in this century than in past generations. Despite the demands of specific legislation, and despite recommendations and guidelines that assist the development of accessible and usable content, many accessibility and usability problems remain to be solved, given the rapid technological advances in current Web resources. In particular, little attention is paid to the difficulties of senescent users, since most websites are designed for a target audience of younger, specialized people, often trained to interact with websites. Support for evaluating Web pages against the needs of senescent users is therefore relevant. The purpose of this project was to develop such support for the evaluation of Web accessibility and usability, considering the profile of senescent users, in order to provide objective feedback to developers and specialists. This support took the form of a Checklist, developed on the basis of scientific procedures, together with analyses of its application.
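A checklist-based evaluation of this kind can be reduced to weighted pass/fail items that produce an objective score for developers. The items and weights below are invented for illustration only; the thesis's actual Checklist was derived through scientific procedures and is not reproduced here:

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str
    weight: int = 1  # heavier weight = larger impact on senescent users

# Illustrative items, loosely inspired by common aging-related barriers.
ITEMS = [
    ChecklistItem("Is body text resizable without loss of content?", 3),
    ChecklistItem("Do interactive targets have generous click areas?", 2),
    ChecklistItem("Is color contrast sufficient for low vision?", 3),
    ChecklistItem("Are error messages explicit and recoverable?", 2),
]

def score(answers: list[bool]) -> float:
    """Weighted fraction of checklist items satisfied, from 0.0 to 1.0.
    `answers[i]` records whether the page passed ITEMS[i]."""
    total = sum(item.weight for item in ITEMS)
    passed = sum(item.weight for item, ok in zip(ITEMS, answers) if ok)
    return passed / total
```

Reporting a per-item pass/fail list alongside the aggregate score is what makes the feedback actionable for developers, since the score alone does not say which barrier to fix first.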
