51

Further Developing Preload Lists for the Tor Network / Vidareutveckling av preloadlistor för Tor-nätverket

Bahmiary, Daniel January 2023 (has links)
A recently proposed defense for the anonymity network Tor uses preload lists of domains to determine what should be cached in the Domain Name System (DNS) caches of Tor relays. The defense protects against attacks that infer what is cached in Tor relays: by keeping domains continuously cached (preloaded), the cache becomes independent of which websites have been visited. The current preload lists contain useless domains and have room for improvement. The objective of this project is to answer the question "How can we generate better preload lists?" and to provide improved methods for generating them, with the ultimate goal of producing preload lists that the Tor Project can benefit from. We further developed existing tools to use web crawling to find more useful domains, and implemented filtering to remove useless domains from the preload lists. The results were promising: useless domains decreased by an average of around 57%, and more useful domains were found.
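The filtering step described in the abstract can be sketched roughly as follows. This is a minimal illustration, assuming "useless" means malformed or duplicate entries; the abstract does not state the thesis's actual filtering criteria, so `filter_preload_list` and its heuristics are hypothetical.

```python
# Hypothetical sketch: filter "useless" domains from a preload list.
# Assumption: a domain is useless here if it is empty, malformed
# (no dot, e.g. a bare hostname), or a duplicate after normalization.

def filter_preload_list(domains):
    """Normalize, deduplicate, and drop malformed preload-list entries."""
    seen = set()
    kept = []
    for d in domains:
        d = d.strip().lower().rstrip(".")  # normalize case and trailing dot
        if not d or "." not in d or d in seen:
            continue  # skip empty, malformed, or duplicate entries
        seen.add(d)
        kept.append(d)
    return kept

raw = ["Example.com", "example.com.", "localhost", "news.example.org", ""]
print(filter_preload_list(raw))  # → ['example.com', 'news.example.org']
```

Real filtering for a DNS preload list would likely also check resolvability and popularity, which this sketch omits.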
52

Information Diffusion on Twitter

Zhou, Li 03 June 2015 (has links)
No description available.
53

Intelligent Event Focused Crawling

Farag, Mohamed Magdy Gharib 23 September 2016 (has links)
There is a need for an integrated, event-focused crawling system to collect Web data about key events. When an event occurs, many users try to locate the most up-to-date information about it, yet there is little systematic collecting and archiving of information about events anywhere. We propose intelligent event-focused crawling for automatic event tracking and archiving, as well as effective access. We extend traditional focused (topical) crawling techniques in two directions: modeling and representing events, and modeling webpage source importance. We developed an event model that captures key event information (topical, spatial, and temporal) and incorporated it into the focused-crawler algorithm. For the focused crawler to leverage the event model in predicting a webpage's relevance, we developed a function that measures the similarity between two event representations based on their textual content. Although textual content provides a rich set of features, we proposed an additional source of evidence that allows the focused crawler to better estimate the importance of a webpage by considering its website. We estimated webpage source importance as the ratio of relevant to non-relevant webpages found while crawling a website, and combined the textual content information and source importance into a single relevance score. For the focused crawler to work well, it needs a diverse set of high-quality seed URLs (URLs of relevant webpages that link to other relevant webpages). Although manual curation of seed URLs guarantees quality, it requires exhaustive manual labor. We therefore proposed an automated approach for curating seed URLs from social media content, leveraging the richness of social media posts about events to extract URLs that can serve as seeds for further focused crawling.
We evaluated our system through four series of experiments, using recent events: the Orlando shooting, Ecuador earthquake, Panama Papers, California shooting, Brussels attack, Paris attack, and Oregon shooting. In the first series, our proposed event-model representation, used to predict webpage relevance, outperformed the topic-only approach, showing better precision, recall, and F1-score. In the second series, using harvest ratio to measure the ability to collect relevant webpages, our event-model-based focused crawler outperformed the state-of-the-art focused crawler (best-first search). The third series evaluated the effectiveness of our proposed webpage source importance for collecting more relevant webpages: the focused crawler with source importance collected roughly the same number of relevant webpages as the one without, but from a smaller set of sources. The fourth series provides guidance to archivists on the effectiveness of curating seed URLs from social media content (tweets) using different methods of selection. / Ph. D.
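The source-importance idea above (the ratio of relevant to non-relevant pages seen per website, combined with a page's textual score) can be sketched as below. The add-one smoothing and the mixing weight `alpha` are assumptions for illustration; the abstract does not give the exact combination formula.

```python
# Sketch of per-website source importance combined with a textual
# relevance score. `alpha` and the smoothing constants are assumed,
# not taken from the thesis.

from collections import defaultdict
from urllib.parse import urlparse

# per-host counts of relevant / non-relevant pages seen during crawling
counts = defaultdict(lambda: {"rel": 0, "nonrel": 0})

def record(url, relevant):
    """Record a crawled page's relevance judgment under its host."""
    host = urlparse(url).netloc
    counts[host]["rel" if relevant else "nonrel"] += 1

def source_importance(url):
    c = counts[urlparse(url).netloc]
    # add-one smoothing so hosts with no history get a neutral 0.5
    return (c["rel"] + 1) / (c["rel"] + c["nonrel"] + 2)

def combined_score(text_sim, url, alpha=0.7):
    """Blend textual similarity with the page's source importance."""
    return alpha * text_sim + (1 - alpha) * source_importance(url)

record("http://a.org/1", True)
record("http://a.org/2", True)
record("http://a.org/3", False)
print(round(combined_score(0.8, "http://a.org/next"), 3))  # → 0.74
```

Smoothing matters here: without it, a single non-relevant page would zero out a host's score before the crawler has seen enough evidence.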
54

Collecte orientée sur le Web pour la recherche d’information spécialisée / Focused document gathering on the Web for domain-specific information retrieval

De Groc, Clément 05 June 2013 (has links)
Vertical search engines, which focus on a specific segment of the Web, are becoming more and more present in the Internet landscape. Topical search engines, notably, can obtain a significant performance boost by limiting their index to a specific topic. By doing so, language ambiguities are reduced, and both the algorithms and the user interface can take advantage of domain knowledge, such as domain objects or characteristics, to satisfy user information needs. In this thesis, we tackle the first, inevitable step of any topical search engine: focused document gathering from the Web. A thorough study of the state of the art leads us to consider two strategies for gathering topical documents from the Web: either relying on an existing search engine index (focused search) or directly crawling the Web (focused crawling).
The first part of our research is dedicated to focused search. In this context, a standard approach consists in combining domain-specific terms into queries, submitting those queries to a search engine, and downloading the top-ranked documents. After empirically evaluating this approach over 340 topics drawn from the Open Directory, we propose to enhance it in two ways. Upstream of the search engine, we aim at formulating more relevant queries in order to increase the precision of the top retrieved documents. To do so, we define a metric based on a co-occurrence graph and a random walk algorithm that aims at predicting the topical relevance of a query. Downstream of the search engine, we filter the retrieved documents in order to improve the quality of the resulting collection. We do so by modeling our gathering process as a tripartite graph and applying a random-walk-with-restart algorithm so as to simultaneously order by relevance the documents and the terms appearing in them. In the second part of this thesis, we turn to focused crawling. We describe our focused crawler implementation, which was designed to scale horizontally. Then, we consider the problem of crawl frontier ordering, which is at the very heart of a focused crawler. This ordering strategy allows the crawler to prioritize its fetches, maximizing the number of in-domain documents retrieved while minimizing the number of non-relevant ones. We propose to apply learning-to-rank algorithms to efficiently order the crawl frontier, and define a method to learn a topic-independent ranking function from existing, automatically annotated crawls.
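The random-walk-with-restart ranking described above can be illustrated on a toy graph. The tiny document/term graph, the restart probability, and the uniform restart vector below are assumptions for illustration; the thesis's actual tripartite construction is richer than this sketch.

```python
# Illustrative random walk with restart (personalized PageRank with a
# uniform restart vector) used to jointly score nodes of a graph,
# here a toy graph of documents and the terms they contain.

def random_walk_with_restart(adj, restart=0.15, iters=100):
    """Power iteration: each step, mass either restarts uniformly
    (with prob. `restart`) or follows an outgoing edge."""
    nodes = sorted(adj)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: restart / len(nodes) for n in nodes}
        for n in nodes:
            out = adj[n]
            if not out:
                continue  # dangling node: its mass is dropped in this sketch
            share = (1 - restart) * score[n] / len(out)
            for m in out:
                new[m] += share
        score = new
    return score

# documents link to the terms they contain, and terms back to documents
graph = {
    "doc1": ["crawl", "web"],
    "doc2": ["crawl"],
    "crawl": ["doc1", "doc2"],
    "web": ["doc1"],
}
scores = random_walk_with_restart(graph)
print(scores["crawl"] > scores["doc2"])  # the better-connected node ranks higher
```

On an undirected graph like this, the stationary scores roughly track node degree, which is why the shared term "crawl" outranks the single-term document "doc2".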
55

Stimulace zón používaných při reflexní lokomoci pomocí proudu TENS / Stimulation of the zones used during reflex locomotion by TENS

Vodňanská, Markéta January 2011 (has links)
Thesis title: Stimulation of the zones used during reflex locomotion by TENS. Name: Markéta Vodňanská. The aim of the thesis: To determine whether the appropriate locomotor pattern of Vojta reflex locomotion (reflexive crawling) is activated when TENS is used to stimulate the trigger zones, as it is during manual stimulation of those zones. Method: The essence of this study is the stimulation of the trigger zones used in reflex locomotion, both manually and by transcutaneous electrical nerve stimulation (TENS) at frequencies of 30 Hz and 182 Hz. Six probands participated in the experiment. During manual and TENS stimulation, electrical activity in selected muscles was recorded by surface electromyography. First, the order of activation of the selected muscles was evaluated using the "standard timing" analysis in the MyoResearch XP Master program. Second, the reflexive-crawling locomotion pattern was evaluated visually. Results: It was confirmed that the reflexive-crawling locomotion pattern provoked by manual stimulation of the trigger zones is also provoked by TENS stimulation of the same trigger zones. It follows that the vector of direction and pressure applied during manual stimulation of the trigger zones is not necessary for recall of the...
56

Srovnávací kineziologická analýza jízdy na vozíku a plazení / The comparative kinesiologic analysis of forward stroke on wheelchair and crawling

Vatěrová, Hana January 2011 (has links)
Title: The comparative kinesiologic analysis of the forward stroke on a wheelchair and crawling. Objectives of the thesis: The aim of the thesis is to compare the activity of selected muscles in the shoulder girdle during the forward stroke on a wheelchair and during crawling. Method: Surface electromyography combined with kinematographic analysis using a synchronized video recording. Results and conclusions: As the research shows, there is a difference in muscle activity (timing) between the forward stroke on a wheelchair and crawling. It was shown that the forward stroke on a wheelchair does not have a natural locomotive character. Keywords: forward stroke on wheelchair, crawling, shoulder girdle, surface electromyography, kinematic analysis.
57

Komparativní analýza vybraných koordinačních ukazatelů plavecké techniky kraul a spontánního plazení / Comparative analysis of selected coordinate indicators in front crawl swimming technique and crawling

Vodička, Radek January 2011 (has links)
ABSTRACT Title: Comparative analysis of selected coordination indicators in the front crawl swimming technique and crawling. Purpose: The first aim of the thesis is to compare coordination indicators of an average swimming cycle and an average crawling cycle. Methods: Surface electromyography of muscular activity combined with cinematographic analysis using a synchronized video recording; intraindividual comparative analysis and subsequent interindividual comparison of the timing of muscular activation in one average swimming and crawling cycle. Results: The timing of muscular activity of m. pectoralis major and m. latissimus dorsi during the swimming cycle was identical for all probands. This phenomenon was not found in crawling. Key words: swimming technique, front crawl, crawling, EMG, muscle, locomotion, shoulder girdle
58

Využití elektroléčebných proudů v reflexní lokomoci / The use of therapeutic currents in reflex locomotion

Rotterová, Jitka January 2013 (has links)
Thesis title: The use of therapeutic currents in reflex locomotion. Name: Jitka Rotterová. The aim of the thesis: To determine whether the appropriate locomotor pattern of Vojta reflex locomotion (reflexive crawling) is activated when Russian stimulation is used to stimulate the trigger zones, as it is during manual stimulation of those zones, and whether the electrical potential spreads to distant locations on the body. Method: A pilot study of an experimental, descriptive character. The essence of this study is the stimulation of the heel zone and the zone on the medial epicondyle of the femur used in reflex locomotion, both manually and by electrical current. Four probands participated in the experiment. During manual and electrical stimulation, activity in selected muscles was recorded by surface electromyography. First, the order of activation of the selected muscles was evaluated using the "standard timing" analysis in the MyoResearch XP Master program. Second, the frequency spectrum was evaluated in the same program. Results: The experiment shows that stimulation of the trigger zones of Vojta reflex locomotion with Russian current can evoke a motor response that corresponds to the locomotor pattern of reflexive crawling. The timing of the activity of the monitored muscles...
59

Automated Discovery, Binding, and Integration Of GIS Web Services

Shulman, Lev 18 May 2007 (has links)
The last decade has demonstrated steady growth in the utilization of Web Service technology. While Web Services have become significant in a number of IT domains, such as eCommerce, digital libraries, data feeds, and geographical information systems, common portals or registries of Web Services require manual publishing for indexing. Manually compiled registries of Web Services have proven useful, but often fail to include a considerable number of the Web Services published and available on the Web. We propose a system capable of finding, binding, and integrating Web Services into an index in an automated manner. Using a combination of guided search and web crawling techniques, the system finds a large number of Web Service providers, which are then bound and aggregated into a single portal available for public use. Results show that this approach succeeds in discovering a considerable number of Web Services in the GIS (Geographical Information Systems) domain and demonstrates improvements over existing methods of Web Service discovery.
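A crawler of this kind might recognize candidate service endpoints from URL conventions alone. The patterns below (a trailing `?wsdl` for SOAP services, `service=WMS/WFS/WCS` for OGC GIS services) are common conventions, not the thesis's actual detection rules, and `classify_endpoint` is a hypothetical helper.

```python
# Hedged sketch: a tiny URL-pattern classifier for candidate Web Service
# endpoints of the kind such a crawler might bind. The regexes encode
# common conventions, not the thesis's actual discovery logic.

import re

WSDL = re.compile(r"\?wsdl$", re.IGNORECASE)               # SOAP service descriptor
OGC = re.compile(r"[?&]service=(wms|wfs|wcs)\b", re.IGNORECASE)  # OGC GIS services

def classify_endpoint(url):
    """Classify a crawled URL as a SOAP endpoint, an OGC endpoint, or unknown."""
    if WSDL.search(url):
        return "soap"
    if OGC.search(url):
        return "ogc"
    return "unknown"

url = "http://gis.example.org/geoserver/ows?service=WMS&request=GetCapabilities"
print(classify_endpoint(url))  # → ogc
```

In practice a discovery system would confirm a candidate by fetching it and parsing the WSDL or GetCapabilities response, which this sketch leaves out.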
