121

Cluster-based relevance feedback techniques for web searches

Deng, Ziqiang 01 January 1998
No description available.
122

Detecting Internet visual plagiarism in higher education photography with Google™ Search by Image : proposed upload methods and system evaluation

Van Heerden, Leanri. January 2014
Thesis (M. Tech. (Design and Studio Art)) - Central University of Technology, Free State, 2014 / The Information Age has presented those in the discipline of photography with many advantages. Digital photographers enjoy all the perquisites of convenience while still producing high-quality images. Lecturers find themselves the authorities of increasingly archaic knowledge in a perpetual race to keep up with technology. When inspiration becomes imitation and visual plagiarism occurs, lecturers may find themselves at a loss for taking action, as content-based image retrieval systems like Google™ Search by Image (SBI) have not yet been systematically tested for the detection of visual plagiarism. Currently, no efficacious method for detecting visual plagiarism is available to photography lecturers in higher education. As such, the aim of this study is to ascertain the most effective upload methods and the precision of the Google™ SBI system, which lecturers can use to establish a systematic workflow to combat visual plagiarism in photography programmes. Images were selected from the Google™ Images database by means of random sampling and uploaded to Google™ SBI to determine whether the system can match the images to their Internet source. Each image received a black-and-white conversion, a contrast adjustment and a hue shift to ascertain whether the system can also match altered images. Composite images were compiled to establish whether the system can detect the source images from their salient features. Results were recorded and precision values calculated to determine the system's success rate and accuracy. The results were favourable: 93.25% of the adjusted images retrieved results, with a precision value of 0.96. The composite images had a success rate of 80% when uploaded intact with no dissections, and a perfect precision value of 1.00. Google™ SBI can successfully be used by the photography lecturer as a functional visual plagiarism detection system to match images unethically appropriated by students from the Internet.
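The precision values reported above follow the standard information-retrieval definition: the fraction of retrieved results that actually point to the uploaded image's true source. A minimal scoring sketch under that reading (the function names and example judgements are illustrative, not the thesis's data):

    # Score a Search-by-Image evaluation run: each uploaded image yields a
    # list of results, each judged True (correct source) or False.
    def precision(judgements):
        """Fraction of retrieved results that match the image's source."""
        return sum(judgements) / len(judgements) if judgements else 0.0

    def success_rate(runs):
        """Fraction of uploaded images that retrieved at least one match."""
        return sum(1 for j in runs if any(j)) / len(runs)

    # Example: three uploaded images and their per-result judgements.
    runs = [[True, True, False], [True], []]
    print(success_rate(runs))              # 2/3 of the images matched
    hits = [precision(j) for j in runs if any(j)]
    print(sum(hits) / len(hits))           # mean precision over matched images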
123

Web search engines as teaching and research resources : a perceptions survey of IT and CS staff from selected universities of the KwaZulu-Natal and Eastern Cape provinces of South Africa

Tamba, Paul A. Tamba January 2011
A dissertation submitted in fulfillment of the requirements for the degree of Master in Technology: Information Technology, Durban University of Technology, 2011. / This study examines the perceived effect of the following factors on the web searching ability of academic staff in the computing discipline: demographic attributes such as gender, age group, position held, and highest qualification; lecturing experience; research experience; English language proficiency; and web searching experience. The research objectives are achieved using a Likert-scale questionnaire administered to 61 academic staff from Information Technology and Computer Science departments at four universities in the KwaZulu-Natal and Eastern Cape provinces of South Africa. Descriptive and inferential statistics were computed from the questionnaire data after performing reliability and validity tests using factor analysis and Cronbach's coefficient methods in PASW Statistics 18.0 (SPSS). Descriptive statistics revealed a majority of staff from IT compared to CS, and a majority of underqualified, middle-aged male staff in junior positions with considerable years of lecturing experience but little research experience. Inferential statistics show an association between web searching ability and demographic attributes such as academic qualifications, positions, and years of research experience, and also reveal relationships between web searching ability and lecturing experience, and between web searching ability and English language ability. The association between position, English language ability, and searching ability was found to be the strongest of all. The novel finding of this study is the effect of lecturing experience on web searching ability, which has not been claimed by the existing research reviewed. Ideas for future research include mentoring of academic staff by more experienced staff, training of novice web searchers, designing and using semantic search systems in both English and local languages, publishing more web content in local languages, and triangulating various research strategies in analyzing the usability of web search engines.
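Cronbach's coefficient (alpha), used above for the reliability test, is computed from the per-item and total-score variances of the questionnaire. A minimal sketch, assuming a respondents-by-items matrix of Likert scores rather than the thesis's actual PASW/SPSS workflow:

    import numpy as np

    def cronbach_alpha(scores):
        """scores: respondents x items matrix of Likert ratings."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                          # number of items
        item_var = scores.var(axis=0, ddof=1).sum()  # sum of item variances
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
        return (k / (k - 1)) * (1 - item_var / total_var)

    # Example: 4 respondents rating 3 questionnaire items on a 5-point scale.
    print(cronbach_alpha([[5, 4, 4], [3, 3, 2], [4, 4, 5], [2, 2, 3]]))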
124

Algorithms and techniques for personalized search in web environments using underlying semantics

Πλέγας, Ιωάννης 06 December 2013
The tremendous growth of the Web in recent decades has made searching for information one of the most important issues in computing research. Today, modern search engines respond quite well to user queries, but the top results returned are not always relevant to the data the user is looking for. Search engines therefore make significant efforts to rank the results most relevant to the user at the top of the result list. This thesis mainly deals with that problem: ranking the results most relevant to the user in the highest positions, especially for queries whose terms have multiple meanings. In the context of this research, algorithms and techniques based on relevance feedback were constructed to improve the results returned by a search engine. The main source of feedback is the results the user selects while navigating: the user extends the original search information (keywords) with new information derived from the selected results. Given this new set of information about the user's preferences, its semantic content is compared with the remaining results (those returned before the selection was made) and the order of the results is changed, promoting and suggesting the results most relevant to the new information.

Another problem that must be addressed when users submit queries to a search engine is that queries are usually short and ambiguous, so there must be ways to disambiguate the different senses of the search terms and to find the sense that interests the user. Disambiguation of search terms has been studied in the literature in several different ways; this thesis proposes new strategies for disambiguating the senses of search terms and explores their efficiency in search engines. Their innovation lies in using PageRank as an indicator of the importance of a sense for a query term. It is also widely known that the Web contains documents with identical or nearly identical information. Despite search engines' algorithmic efforts to detect texts with overlapping information, there are still cases where the retrieved results contain repeated information. This thesis presents efficient techniques for finding and removing overlapping information from search engine results using the semantic information of those results: results that contain the same information are identified and removed, while results that contain overlapping information are merged into new texts (SuperTexts) that carry the information of the initial results without repetition. Another way to improve search is text annotation, a technique that attaches extra information to the words of a text, such as the sense assigned to each word based on the semantic content of the text. Adding semantic information to a text helps search engines retrieve, and users locate, the information they are looking for. This thesis analyzes efficient techniques for automatically annotating texts with the entities contained in Wikipedia, a process referred to in the literature as Wikification; in this way users can explore additional information about the entities contained in the returned text.

Another part of this thesis exploits the semantics of search engine results using tools of the Semantic Web, whose goal is to make Web resources understandable to both humans and machines. In its first steps the Semantic Web functioned as a detailed description of the body of Web documents, and the development of tools for searching the Semantic Web is still in its infancy; with few exceptions, current search techniques are not adapted to indexing and retrieving semantic information. In our research, efficient techniques and tools were created: in particular, an algorithm was constructed that converts a text into an ontology, integrating its semantic and syntactic information, so that users receive answers to natural-language questions. The thesis also analyzes techniques for filtering XML documents using semantic information, presenting an efficient distributed system for semantic filtering of XML documents that gives better results than existing approaches. Finally, this thesis includes additional research that improves search engine performance from a different angle: techniques are presented for pruning the inverted lists of inverted files, and the proposed techniques are combined with existing inverted-file compression techniques, leading to better compression than existing methods.
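The relevance-feedback re-ranking above is described only at a high level; the classic instantiation of the idea is Rocchio-style query expansion followed by cosine re-ranking. A minimal sketch under that assumption (the TF-IDF representation and the alpha/beta weights are illustrative, not the thesis's exact method):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    def rerank(query, clicked, remaining, alpha=1.0, beta=0.75):
        """Promote results similar to the clicked one (Rocchio expansion)."""
        vec = TfidfVectorizer()
        docs = vec.fit_transform([query, clicked] + remaining).toarray()
        q, c, rest = docs[0], docs[1], docs[2:]
        q_new = alpha * q + beta * c             # expanded query vector
        sims = rest @ q_new / (np.linalg.norm(rest, axis=1)
                               * np.linalg.norm(q_new) + 1e-12)
        order = np.argsort(-sims)                # most similar first
        return [remaining[i] for i in order]

    results = ["java island coffee beans", "learn java programming basics"]
    print(rerank("java", "java programming tutorial", results))
    # -> the programming page is promoted ahead of the coffee page

Promoting by cosine similarity to the expanded vector is what pushes results sharing the clicked result's sense above those sharing only the ambiguous keyword.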
125

Evaluating User Feedback Systems

Menard, Jr., Kevin Joseph 04 May 2006
The increasing reliance of people on computers for daily tasks has resulted in a vast number of digital documents. Search engines were once luxury tools for quickly scanning a set of documents but are now becoming the only practical way to navigate this sea of information. Traditionally, search engine results are based upon a mathematical formula of document relevance to a search phrase. Often, however, what a user deems relevant and what a search engine computes as relevant are not the same. User feedback regarding the utility of a search result can be collected in order to refine query results. Additionally, user feedback can be used to identify queries that lack high-quality search results; a content author can then further develop existing content or create new content to improve those search results. The most straightforward way of collecting user feedback is to add a graphical user interface component to the search interface that asks the user how much he or she liked the search result. However, if the feedback mechanism requires the user to provide feedback before he or she can progress further with the search, the user may become annoyed and provide incorrect feedback values out of spite. Conversely, if the feedback mechanism does not require the user to provide feedback at all, then the overall amount of collected feedback will be diminished, as many users will not expend the effort required to give feedback. This research focused on the collection of explicit user feedback in both mandatory (a user must give feedback) and voluntary (a user may give feedback) scenarios. The collected data was used to train a set of decision tree classifiers that provided user satisfaction values as a function of implicit user behavior and a set of search terms. The results of our study indicate that a more accurate classifier can be built from explicit data collected in a voluntary scenario. Given a limited search domain, the classification accuracy can be further improved.
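A sketch of the classifier described above: a decision tree trained to map implicit behavior onto explicit satisfaction labels. The features (dwell time, scroll depth, whether a result link was followed) are hypothetical stand-ins for whatever implicit signals the study actually logged:

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical implicit-behavior features per search-result view:
    # [dwell_time_seconds, scroll_depth_fraction, followed_result_link]
    X = [[120, 0.9, 1], [3, 0.1, 0], [45, 0.5, 1],
         [5, 0.05, 0], [90, 0.8, 1], [2, 0.2, 0]]
    y = [1, 0, 1, 0, 1, 0]   # explicit feedback: 1 = satisfied, 0 = not

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(clf.predict([[60, 0.7, 1]]))   # infer satisfaction for a new view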
126

Coverage of an information need through the World Wide Web: How to examine an information need, search for information on the WWW and evaluate search engines

Ekeroth, Patrik, Sverker, Jakob January 1996
This thesis was built on practical work for LM Ericsson Data AB and deals with how to examine the information need of a limited department within the corporation using a qualitative interview method. It also deals with how to search for requested information on the Internet and the World Wide Web (WWW). It contains a model and a method for evaluating search engines on the World Wide Web, and 16 search engines that primarily index the World Wide Web are evaluated.
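The abstract does not reveal the evaluation model itself; a common approach for comparing engines is mean precision over the first n results of a set of hand-judged test queries. A minimal sketch under that assumption (engine names and judgements are invented for illustration):

    def precision_at_n(judgements, n=10):
        """judgements: relevance (True/False) of an engine's ranked results."""
        top = judgements[:n]
        return sum(top) / len(top) if top else 0.0

    # Hypothetical hand-judged results for two engines on the same queries.
    engines = {
        "engine_a": [[True, True, False], [True, False, False]],
        "engine_b": [[False, True, False], [True, True, True]],
    }
    for name, per_query in engines.items():
        scores = [precision_at_n(j, n=3) for j in per_query]
        print(name, sum(scores) / len(scores))   # mean precision@3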
127

Surfing for knowledge : how undergraduate students use the internet for research and study purposes.

Phillips, Genevieve. January 2013
The developments in technology and concomitant access to the Internet have reshaped the way people research in their personal and academic lives. The ever-expanding amount of information on the Internet is creating an environment where users are able to find what they seek, add to the body of knowledge, or both. Researching, especially for academic purposes, has been greatly affected by the Internet's rapid growth and expansion. This project stemmed from a desire to understand how students' research methods have evolved in view of their busy schedules and needs. The availability and accessibility of the Internet have increased its use considerably as a straightforward medium from which users obtain desired information. This thesis sought to ascertain in what manner senior undergraduate students at the University of KwaZulu-Natal, Pietermaritzburg campus, use the Internet for academic research purposes, which is largely determined by the individual's personal preference and access to the Internet. The literature review raised pertinent questions that required answers. Students were interviewed to determine when, why and how they began using the Internet, how this usage contributes to their academic work, and whether it aids or inhibits their research. Through the collection and analysis of data, evidence emerged that students followed contemporary research methods, making extensive use of the Internet, while a few used both online and print resources, mainly when compelled by lecturers' assignment requirements. In a secondary phase, informed by the results received from the students, lecturers were interviewed. Differing levels of restrictions placed on students were evident, even though the lecturers themselves use the Internet for academic research purposes. Lecturers were convinced they had the understanding and experience to discern what was relevant and factual. Referring to the Internet for research is becoming more popular, and this should continue to increase as students' lives become more complex. This research project offers a suggestion to academic staff: equip students from their early university years with the standards they should follow in order to research correctly, as opposed to limiting their use of the Internet, which leads in part to students committing plagiarism through being unaware of the wealth of reputable resources available for their use and benefit on the Internet. / Thesis (M.A.)-University of KwaZulu-Natal, Pietermaritzburg, 2013.
128

Building a search engine for music and audio on the World Wide Web

Knopke, Ian January 2005
The main contribution of this dissertation is a system for locating and indexing audio files on the World Wide Web. The idea behind this system is that the use of both web page and audio file analysis techniques can produce more relevant information for locating audio files on the web than is used in full-text search engines. / The most important part of this system is a web crawler that finds materials by following hyperlinks between web pages. The crawler is distributed and operates using multiple computers across a network, storing results to a database. There are two main components: a set of retrievers that retrieve pages and audio files from the web, and a central crawl manager that coordinates the retrievers and handles data storage tasks. / The crawler is designed to locate three types of audio files: AIFF, WAVE, and MPEG-1 (MP3), but other types can be easily added to the system. Once audio files are located, analyses are performed of both the audio files and the associated web pages that link to these files. Information extracted by the crawler can be used to build search indexes for resolving user queries. A set of results demonstrating aspects of the performance of the crawler are presented, as well as some statistics and points of interest regarding the nature of audio files on the web.
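A toy, single-process version of the crawl loop described above: fetch a page, follow its hyperlinks, and record links whose extensions mark them as AIFF, WAVE, or MP3 files. This stands in for the distributed retriever/crawl-manager design and omits the database storage and audio analysis (stdlib only; the seed URL is a placeholder):

    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    AUDIO_EXTS = (".aif", ".aiff", ".wav", ".mp3")

    class LinkParser(HTMLParser):
        """Collects href values from anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, max_pages=10):
        queue, seen, audio = [seed], set(), []
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=5).read().decode(errors="replace")
            except OSError:
                continue                          # unreachable page; skip
            parser = LinkParser()
            parser.feed(html)
            for link in parser.links:
                full = urljoin(url, link)
                if full.lower().endswith(AUDIO_EXTS):
                    audio.append(full)            # candidate audio file
                else:
                    queue.append(full)            # page to visit later
        return audio

    print(crawl("https://example.com/"))          # placeholder seed URL

A production crawler would also need politeness delays, robots.txt handling, and MIME-type checks rather than extension matching alone.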
129

Supervised Identification of the User Intent of Web Search Queries

González-Caro, Cristina 27 September 2011
As the Web continues to increase in both size and complexity, Web search is a ubiquitous service that allows users to find all kinds of information, resources, and activities. However, as the Web evolves, so do the needs of its users. Nowadays, users have more complex interests that go beyond traditional informational queries. Thus, it is important for Web search engines not only to continue answering informational and navigational queries effectively, but also to identify and provide accurate results for new types of queries. This Ph.D. thesis aims to analyze the impact of query intent on the search behavior of users. To achieve this, we first study the behavior of users with different types of query intent on search engine result pages (SERPs), using eye-tracking techniques. Our study shows that the query intent of the user affects the entire decision process on the SERP. Users with different query intent prefer different types of search results (organic, sponsored), attend to different main areas of interest (title, snippet, URL, image), and focus on search results at different ranking positions. Accurately identifying the intent of the user query is an important issue for search engines, as it provides useful elements that allow them to adapt their results to changing user behaviors and needs. Therefore, in this thesis we propose a method to automatically identify the intent behind user queries. Our hypothesis is that the performance of single-faceted classification of queries can be improved by introducing information from multi-faceted training samples into the learning process. Hence, we study a wide set of facets that can be considered for characterizing the query intent of the user and investigate whether combining multiple facets can improve their predictability. Our experimental results show that this idea can significantly improve the quality of the classification. Since most previous work on query intent classification is oriented to the study of single facets, these results are a first step toward an integrated query intent classification model.
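The multi-faceted hypothesis above can be illustrated with a simple classifier chain: an auxiliary facet is predicted first and its probabilities are fed to the main intent classifier as extra features. The facet names, example queries, and the chaining scheme itself are illustrative assumptions, not the thesis's actual taxonomy or model:

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    queries = ["buy cheap flights", "wikipedia alan turing",
               "facebook login", "how does tls work",
               "order pizza online", "youtube homepage"]
    genre = ["commerce", "reference", "social", "reference",
             "commerce", "social"]                    # auxiliary facet
    intent = ["transactional", "informational", "navigational",
              "informational", "transactional", "navigational"]  # target

    X_text = TfidfVectorizer().fit_transform(queries).toarray()

    # Chain the facets: predict the auxiliary facet first, then append its
    # probabilistic output to the main intent classifier's features.
    aux = LogisticRegression(max_iter=1000).fit(X_text, genre)
    X_plus = np.hstack([X_text, aux.predict_proba(X_text)])
    main = LogisticRegression(max_iter=1000).fit(X_plus, intent)
    print(main.predict(X_plus[:1]))   # intent of "buy cheap flights"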
130

Towards automatic understanding and integration of web databases for developing large-scale unified access systems

He, Hai. January 2006
Thesis (Ph. D.)--State University of New York at Binghamton, Computer Science Department, 2006. / Includes bibliographical references.
