About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Information Retrieval Strategies of Millennial Undergraduate Students in Web and Library Database Searches

Porter, Brandi 01 January 2009 (has links)
Millennial students make up a large portion of undergraduates attending colleges and universities, and they have a variety of online resources available to complete academically related information searches, primarily Web-based and library-based online information retrieval systems. The two systems differ in content, ease of use, and required search techniques. Students often prefer searching the Web, but in doing so frequently miss higher-quality materials that may be available only through their library. Furthermore, each system uses different information retrieval algorithms for producing results, so proficiency in one search system may not transfer to another. Web-based information retrieval systems are unable to search and retrieve many resources available in libraries and other proprietary information retrieval systems, often referred to as the Invisible Web. These resources are not available to the general public and are password protected (from anyone not considered an affiliated user of the particular organization). They are often licensed to libraries by third-party vendors or publishers and include fee-based access to content. Consequently, millennial students who rely only on Web-based information retrieval systems may miss many scholarly resources available to them. This study investigated how millennial students approach searches for the same topic in both systems. The goal was to build on theory about why students search using various techniques, why they often choose the Web for their searches, and what can be done to improve library online information retrieval systems. Mixed qualitative methods of data gathering were used to elicit this information. The investigation showed that millennial undergraduate students lacked detailed search strategies and often used the same search techniques regardless of system or subject.
Students displayed greater familiarity and ease of use with Web-based IR systems than with library online IR systems. The results suggested design enhancements for library online information retrieval systems, such as better natural language searching and easier linking to full-text articles. Design enhancements based on millennial search strategies should encourage students to use library-based information retrieval systems more often.
2

The Role of Tasks in the Internet Health Information Searching of Chinese Graduate Students

Pan, Xuequn 05 1900 (has links)
The purpose of the study was to examine the relationships between types of health information tasks and the Internet information search processes of Chinese graduate students at the University of North Texas. The participants' Internet information search processes were examined by looking at the source used to start the search, language selection, use of online translation tools, and time spent. In a computer classroom, 45 Chinese graduate students searched the Internet and completed three health information search tasks: a factual task, an interpretative task, and an exploratory task. Data on the Chinese graduate students' health information search processes were gathered from Web browser history files, answer sheets, and questionnaires. Parametric and non-parametric statistical analyses were conducted to test the relationships between the types of tasks and variables identified in the search process. Results showed that task type had a statistically significant impact only on the time spent. For the three tasks, the majority of Chinese graduate students used search engines as the major starting point, used English as the primary language, and did not use online translation tools. The participants also reported difficulties in locating relevant answers and recommended ways they could be assisted in future Internet searches for health information. The study provides an understanding of Chinese graduate students' health information seeking behavior, with an aim to enrich health information user studies. The results contribute to the areas of academic library services, multilingual health information system design, and task-based health information searching.
3

Personalisation of web information search: an agent based approach

Gopinathan-Leela, Ligon, n/a January 2005 (has links)
The main purpose of this research is to find an effective way to personalise information searching on the Internet using middleware search agents, namely Personalised Search Agents (PSA). The PSA acts between users and search engines, and applies new and existing techniques to mine and exploit relevant, personalised information for users. Much research has already been done on personalising filters: middleware that acts between user and search engines to deliver more personalised results. These filters apply one or more of the popular techniques for search result personalisation, such as the category concept, learning from user actions, and using metasearch engines. In developing the PSA, these techniques were investigated and incorporated to create an effective middleware agent for web search personalisation. In this thesis, a conceptual model for the Personalised Search Agent is developed, implemented in a prototype, and the prototype is benchmarked against existing web search practices. A system development methodology with flexible and iterative procedures that switch between conceptual design and prototype development was adopted as the research methodology. In the conceptual model of the PSA, a multi-layer client-server architecture is used, applying generalisation-specialisation features. The client and the server are structurally the same but differ in the level of generalisation and interface. The client handles personalising information for one user, whereas the server combines the personalising information of all the clients (i.e. its users) to generate a global profile. Both client and server apply the category concept, in which user-selected URLs are mapped against categories. The PSA learns the user-relevant URLs both by requesting explicit feedback and by implicitly capturing user actions (for instance, the active time spent by the user on a URL).
The PSA also employs a keyword-generating algorithm that tries different combinations of words in a user search string, combining them with the relevant category values. The core functionalities of the conceptual model were implemented in a prototype and used to test the ideas in the real world. The results were benchmarked against those of existing search engines to determine the efficiency of the PSA over conventional searching. A comparison of the test results revealed that the PSA is more effective and efficient in finding relevant, personalised results for individual users, possessing a sense of the individual user rather than the general user sense of traditional search engines. The PSA is a novel architecture and contributes to the body of knowledge on web information searching by delivering new ideas such as active-time-based user relevancy calculations, automatic generation of sensible search keyword combinations, and the implementation of a multi-layer agent architecture. Moreover, the PSA has high potential for future extensions: because it captures highly personalised data, data mining techniques that employ case-based reasoning could make the PSA an even more responsive, accurate, and effective tool for personalised information searching.
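The active-time relevancy idea described in this abstract can be sketched in a few lines. This is a minimal illustration only, not the PSA's actual algorithm: the class name, method names, normalisation, and the equal 0.5/0.5 weighting of implicit and explicit signals are all assumptions introduced for the example.

```python
from collections import defaultdict


class RelevanceTracker:
    """Toy sketch: combine explicit feedback with implicitly captured
    active time per URL into a single relevancy score."""

    def __init__(self, time_weight=0.5, feedback_weight=0.5):
        self.time_weight = time_weight
        self.feedback_weight = feedback_weight
        self.active_time = defaultdict(float)  # seconds spent per URL
        self.feedback = defaultdict(float)     # explicit rating in [0, 1]

    def record_visit(self, url, seconds):
        # Implicit signal: accumulate the active time spent on a URL.
        self.active_time[url] += seconds

    def record_feedback(self, url, rating):
        # Explicit signal: a user-supplied relevance rating.
        self.feedback[url] = rating

    def score(self, url):
        # Normalise active time against the most-visited URL, then
        # blend the two signals with the configured weights.
        max_time = max(self.active_time.values(), default=0.0)
        time_score = self.active_time[url] / max_time if max_time else 0.0
        return (self.time_weight * time_score
                + self.feedback_weight * self.feedback[url])
```

A URL the user dwells on and rates highly then outscores one merely glanced at, which is the behaviour the PSA's relevancy calculation aims for.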
4

Exploratory Research into the Use of Web Resources of Students Enrolled in an Introductory University-level Medical Translation Course

January 2015 (has links)
This study explored the Web resources used by four students enrolled in an introductory university-level Medical Translation course over a period of one semester. The research examined the students' use of time, their information needs and searches, and whether user attributes (translation experience and training, specialization and familiarity with the text, previous Web search training, and effort) or task-related factors (perceived task difficulty) had a relationship with the Web searching behavior of the participants. The study also investigated how this behavior might be reflected in the quality of the product. The study focused on two translation tasks, extracted from medical texts selected by the instructor, that had to be translated from English into Spanish. Data was gathered by means of various instruments: translated texts, think-aloud protocols, computer screen recordings, and questionnaires. The results from these instruments were triangulated in an effort to find relationships between the translation process and the translation product, and were analyzed both qualitatively and quantitatively. The findings revealed that the students spent a considerable amount of time looking for information on the Web during their translation assignments, and that they exhibited an inclination toward bilingual Web sources. An analysis of user attributes suggested that translation experience might have had a relationship with the resources used and the frequency of their use. The data showed that the more experienced students in the translation program received higher scores on their translations. It was also found that the higher the level of familiarity with the topic, the fewer the total number of searches. In addition, previous Web search training appeared to have a relationship with where and how information was sought.
It was observed that in one of the two translation tasks, the more effort the students declared, the more Web searches they carried out. In one of the two tasks, perceived task difficulty had an impact on the number of Web searches, which in turn seemed to influence the time spent on the translation process and the translation scores. / Dissertation/Thesis / Masters Thesis Spanish 2015
5

A new integrated model for multitasking during web searching

Alexopoulou, Peggy (Pagona) January 2016 (has links)
Investigating multitasking information behaviour, particularly while using the web, has become an increasingly important research area. People's reliance on the web to seek and find information has encouraged a number of researchers to investigate the characteristics of information seeking behaviour and the web seeking strategies used. The current research set out to explore multitasking information behaviour while using the web in relation to people's personal characteristics, working memory, and flow (a state in which people feel in control and immersed in the task). Also investigated were the effects of pre-determined knowledge about search tasks and of artefact characteristics. In addition, the study investigated cognitive states (interactions between the user and the system) and cognitive coordination shifts (the way people change their actions to search effectively) while multitasking on the web. The research was exploratory, using a mixed-method approach. Thirty university students participated: 10 psychologists, 10 accountants, and 10 mechanical engineers. The data collection tools used were pre- and post-questionnaires, pre-interviews, a working memory test, a flow state scale test, audio-visual data, web search logs, think-aloud data, observation, and the critical decision method. Based on the working memory test, the participants were divided into two groups: those with high scores and those with lower scores. Similarly, participants were divided into two groups based on their flow state scale tests. All participants searched the web for information on four topics: two for which they had prior knowledge and two without prior knowledge. The results revealed that working memory capacity affects multitasking information behaviour during web searching.
For example, the participants in the high working memory and high flow groups had a significantly greater number of cognitive coordination and state shifts than those in the low working memory and low flow groups. Further, the perception of task complexity was related to working memory capacity: those with low memory capacity thought task complexity increased towards the end of tasks for which they had no prior knowledge, compared with tasks for which they had prior knowledge. The results also showed that all participants, regardless of working memory capacity and flow level, shared the same most frequent cognitive coordination and cognitive state sequence: from strategy to topic. In respect of disciplinary differences, accountants rated task complexity at the end of the web seeking procedure significantly lower for information tasks with prior knowledge than did participants from the other disciplines. Moreover, multitasking information behaviour characteristics such as the number of queries, web search sessions, and opened tabs/windows during searches were affected by discipline. The findings of the research enabled an exploratory integrated model to be created, which illustrates the nature of multitasking information behaviour when using the web. A further contribution of this research was the development of new, more specific, and closely grounded definitions of task complexity and artefact characteristics. This research may inform the creation of more effective web search systems by placing more emphasis on our understanding of the complex cognitive mechanisms of multitasking information behaviour when using the web.
6

Supervised Identification of the User Intent of Web Search Queries

González-Caro, Cristina 27 September 2011 (has links)
As the Web continues to increase in both size and complexity, Web search is a ubiquitous service that allows users to find all kinds of information, resources, and activities. However, as the Web evolves, so do the needs of its users. Nowadays, users have more complex interests that go beyond traditional informational queries. Thus, it is important for Web search engines not only to continue answering informational and navigational queries effectively, but also to identify and provide accurate results for new types of queries. This Ph.D. thesis aims to analyze the impact of query intent on the search behavior of users. To achieve this, we first study the behavior of users with different types of query intent on search engine result pages (SERPs), using eye tracking techniques. Our study shows that the query intent of the user affects the whole decision process on the SERP. Users with different query intent prefer different types of search results (organic, sponsored), attend to different main areas of interest (title, snippet, URL, image), and focus on search results at different ranking positions. Accurately identifying the intent of the user query is an important issue for search engines, as it provides elements that allow them to adapt their results to changing user behaviors and needs. Therefore, in this thesis we propose a method to automatically identify the intent behind user queries. Our hypothesis is that the performance of single-faceted classification of queries can be improved by introducing information from multi-faceted training samples into the learning process. Hence, we study a wide set of facets that can be considered for characterizing the query intent of the user, and we investigate whether combining multiple facets can improve the predictability of these facets. Our experimental results show that this idea can significantly improve the quality of the classification.
Since most previous work on query intent classification is oriented to the study of single facets, these results are a first step toward an integrated query intent classification model.
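To make the facet idea concrete, here is a toy sketch of multi-faceted query classification. It is purely illustrative: the facets, keyword lists, and the cross-facet adjustment at the end are assumptions for the example, not the thesis's classifier (which is learned from multi-faceted training samples rather than hand-written rules).

```python
def classify_facets(query):
    """Toy multi-faceted classification of a query string.

    Returns a dict of facet values; the final rule shows how evidence
    from one facet can change the prediction for another.
    """
    tokens = query.lower().split()

    facets = {}

    # Facet 1: navigational vs informational (hypothetical heuristic:
    # site-like tokens suggest the user wants a specific destination).
    navigational_cues = {"homepage", "login", "website"}
    facets["intent"] = (
        "navigational"
        if any(t.endswith(".com") or t in navigational_cues for t in tokens)
        else "informational"
    )

    # Facet 2: transactional cue (hypothetical keyword list).
    facets["transactional"] = any(t in {"buy", "download", "order"}
                                  for t in tokens)

    # Cross-facet signal: a transactional cue overrides a purely
    # informational reading -- the multi-faceted idea in miniature.
    if facets["transactional"] and facets["intent"] == "informational":
        facets["intent"] = "transactional"

    return facets
```

For instance, "buy cheap laptops" is read as transactional via the cross-facet rule, while "facebook login" remains navigational.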
7

Finding, extracting and exploiting structure in text and hypertext / Att finna, extrahera och utnyttja strukturer i text och hypertext

Ågren, Ola January 2009 (has links)
Data mining is a fast-developing field of study that uses computation to either predict or describe large amounts of data. The amount of data produced each year increases in step, requiring ever more efficient algorithms to find interesting information within a given time. In this thesis, we study methods for extracting information from semi-structured data, for finding structure within large sets of discrete data, and for efficiently ranking web pages in a topic-sensitive way. The information extraction research focuses on support for keeping documentation and source code up to date at the same time. Our approach to this problem is to embed parts of the documentation within strategic comments of the source code and then extract them using a specific tool. The structures that our structure mining algorithms find among crisp data (such as keywords) are in the form of subsumptions, i.e. one keyword is a more general form of the other. Since subsumptions are transitive, we can use them to build larger structures in the form of hierarchies or lattices. Our tool has been used mainly to provide input to data mining systems and to visualise data-sets. The main part of the research has been on ranking web pages in such a way that both the link structure between pages and the content of each page matter. We have created a number of algorithms and compared them to other algorithms in use today. Our focus in these comparisons has been on convergence rate, algorithm stability, and how relevant the answer sets from the algorithms are according to real-world users. The research has focused on the development of efficient algorithms for gathering and handling large data-sets of discrete and textual data. A proposed system of tools is described, all operating on a common database containing "fingerprints" and meta-data about items.
This data can be searched by various algorithms to increase its usefulness or to find the real data more efficiently. All of the methods described handle data in a crisp manner, i.e. a word or a hyperlink either is or is not part of a record or web page. This means that we can model their existence in a very efficient way, and the methods and algorithms we describe all make use of this fact. / AlgExt, CHiC, ProT
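The transitivity of subsumptions mentioned in this abstract is what lets pairwise relations grow into hierarchies or lattices. A minimal sketch, illustrative only (the function name and the naive fixed-point loop are assumptions, not the thesis's algorithms):

```python
def transitive_closure(subsumptions):
    """Given (general, specific) keyword pairs, add every pair implied
    by transitivity, so larger hierarchies can be assembled.

    Naive fixed-point iteration; fine for small keyword sets.
    """
    closure = set(subsumptions)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                # a subsumes b, and b subsumes d  =>  a subsumes d
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure
```

From the pairs ("animal", "dog") and ("dog", "poodle"), the closure also contains ("animal", "poodle"), giving the edges of a three-level hierarchy.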
8

Αποδοτικοί αλγόριθμοι και προσαρμοστικές τεχνικές διαχείρισης δικτυακών πληροφοριακών συστημάτων και εφαρμογών παγκόσμιου ιστού / Efficient algorithms and adaptive techniques for net-centric information systems and web applications management

Σακκόπουλος, Ευάγγελος 25 June 2007 (has links)
In our Ph.D. dissertation we dealt with performance issues in network-centric (netcentric) information systems and web information systems. The netcentric approach reflects the growing tendency to use network communication in information systems and web applications in order to provide, publish, distribute, and communicate online services and information. The key aims of the thesis are a) quality assurance in service provision, b) reduction of discovery time, and c) personalization of services and information in network information systems and applications based on web engineering technologies. Initially, we studied, designed, and implemented efficient algorithms concerning Web Services technologies, which are designed to facilitate interoperable service integration over network infrastructure. The Web Services Architecture has been standardized by the W3 Consortium (http://www.w3.org) as the technological framework and has received wide support from the information technology research community as well as from IT professionals and industry worldwide.
In the first section we introduce a new categorization and comparative presentation of the available algorithmic solutions for service management and discovery. We then introduce a series of new efficient algorithms that ensure quality of service provision and improve time complexity in service discovery. Overall, in the first part of the thesis we present: - Efficient algorithms for dynamic Web Service selection that take into account non-functional specifications (Quality of Web Service, QoWS) and performance issues during a Web Service (WS) consumption attempt (i.e. QoWS-enabled WS discovery). - Efficient algorithms for service management and discovery in network-centric information systems, based on decentralized network approaches specifically designed for WS discovery. In the sequel, we propose efficient adaptive methods for personalized web searching. In this way we improve performance both for the internal management and discovery functionality of web-based net-centric information systems and for the systems' output, that is, the information presented to the end user. In particular, in the second section, we introduce a series of three new algorithms for personalized searching, mainly based on link-metric techniques. Their main advantage is that they allow, with a fairly simple methodology, personalization of search results with minimal overhead in terms of storage volume and computation time. We achieve personalized search by applying link analysis not to the whole web graph but, for the first time, to much smaller personalized graphs shaped from available semantic taxonomies. Summarizing, the novel research results of this second section are the following: - Efficient algorithms for personalized web information searching. - An adaptive presentation mechanism for search results using multiple levels of categorization.
- An extension of the algorithms to focused web crawling mechanisms, which constitute an alternative approach to personalized information searching. Finally, in the third and last section of the thesis, we present a series of applications, architectures, and frameworks for different web-based net-centric information environments in which we apply our techniques for service management and personalized information discovery. The main objective of this presentation is to show that the efficient algorithms presented in the previous sections apply to multiple problems from different research and technological areas that use web-based net-centric information systems and web applications. The cases presented include network management information systems, e-learning approaches, semantic mining and multimedia retrieval systems, web content and structure maintenance solutions, and agricultural information systems.
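Link-metric ranking over a small personalized graph, as described in this abstract, can be illustrated with a basic power-iteration PageRank. This is a generic sketch under simplifying assumptions (uniform teleport vector, toy graph, fixed iteration count); the thesis's own link metrics over semantic taxonomies differ in detail.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a small directed graph.

    links: dict mapping each node to a list of nodes it links to.
    Returns a dict of node -> rank; ranks sum to 1.
    """
    nodes = sorted(set(links) | {d for ds in links.values() for d in ds})
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}

    for _ in range(iters):
        # Teleport mass shared uniformly by every node.
        new = {node: (1 - damping) / n for node in nodes}
        for src, dests in links.items():
            if dests:
                # Distribute the damped rank evenly over outlinks.
                share = damping * rank[src] / len(dests)
                for d in dests:
                    new[d] += share
            else:
                # Dangling node: spread its mass over all nodes.
                for node in nodes:
                    new[node] += damping * rank[src] / n
        rank = new
    return rank
```

On a personalized graph built from a taxonomy, the same iteration runs over far fewer nodes than the web graph, which is the efficiency argument made above.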
9

Web Searching for Translation: An Exploratory and Multiple-Case Study

Enríquez Raído, Vanesa 14 April 2011 (has links)
This multiple-case study explores the Web search behaviors of a total of six participants. These include a naturally occurring sample of four postgraduate translation trainees (in their first year of studies) who enrolled in an introductory course on technical and scientific translation, and two additional subjects (a PhD student of translation with three years of casual professional translation experience and a translation teacher with over 15 years of experience in the discipline) who participated in a pilot study conducted prior to the main study. Given that the need to seek, retrieve, use, and generate translation information depends on the type of users and the translation tasks performed, the study focuses on two specific tasks dealing with the translation of two popular-science texts from Spanish into English. In particular, the study examines the online search behaviors of all participants in relation to a number of translation task attributes (text type and translation brief) as well as user attributes (translation expertise, Web search expertise, and domain knowledge). While for the first task data was obtained from all six research participants, the second task was only carried out by the four translation trainees.
