1. Web manifestations of knowledge-based innovation systems in the UK. Stuart, David. January 2008.
Innovation is widely recognised as essential to the modern economy. The term knowledge-based innovation system refers to innovation systems that recognise the importance of an economy's knowledge base and of efficient interactions between important actors from the different sectors of society; such interactions are thought to enable greater innovation by the system as a whole. Whilst it may not be possible to fully understand all the complex relationships within knowledge-based innovation systems, bibliometric methodologies from the field of informetrics allow us to analyse some of the relationships that contribute to the innovation process. However, due to the limitations of traditional bibliometric sources it is important to investigate potential new sources of information, and the web is one such source. This thesis documents an investigation into the potential of the web to provide information about knowledge-based innovation systems in the United Kingdom. The link analysis methodologies previously applied successfully to the academic community (Thelwall, 2004a) are applied here to organisations from different sections of society to determine whether link analysis of the web can provide a new source of information about knowledge-based innovation systems in the UK. This study makes the case that data may be collected ethically to provide information about the interconnections between web sites of various sizes and from different sectors of society; that there are significant differences in the linking practices of web sites in different sectors; and that reciprocal links provide a better indication of collaboration than uni-directional web links. Most importantly, the study shows that the web provides new information about the relationships between organisations, rather than a repetition of the same information from an alternative source. Whilst the study has shown that the web has considerable potential as a source of information on knowledge-based innovation systems, the same richness that makes it so potentially useful makes large-scale studies very labour intensive.
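The reciprocal-link measure highlighted above is easy to illustrate. Below is a minimal sketch (invented site names, not the thesis's data or tooling) that separates reciprocal from uni-directional inter-site links:

```python
# Sketch: separate reciprocal from uni-directional inter-site links.
# The edge list below is illustrative, not data from the thesis.
links = {
    ("university-a.ac.uk", "firm-b.co.uk"),
    ("firm-b.co.uk", "university-a.ac.uk"),
    ("university-a.ac.uk", "gov-c.gov.uk"),
}

# A pair is reciprocal when links exist in both directions.
reciprocal = {frozenset(e) for e in links if (e[1], e[0]) in links}
unidirectional = {e for e in links if (e[1], e[0]) not in links}

print(sorted(tuple(sorted(p)) for p in reciprocal))
# [('firm-b.co.uk', 'university-a.ac.uk')]
print(sorted(unidirectional))
# [('university-a.ac.uk', 'gov-c.gov.uk')]
```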
2. Métricas de análise de links e qualidade de conteúdo: um estudo de caso na Wikipédia / Link analysis metrics and content quality: a case of study in Wikipedia. Hanada, Raíza Tamae Sarkis. 26 February 2013.
Many links between Web pages can be viewed as indicators of the quality and importance of the pages they point to. Accordingly, several studies have proposed metrics based on link structure to infer web page content quality. However, as far as we know, the only work that has examined the correlation between such metrics and content quality was a limited study that left many open questions. Although these metrics are very successful in ranking pages returned as answers to search engine queries, it is not possible to determine the specific contribution of factors such as quality, popularity, and importance to the results. This difficulty is partly due to the fact that such information is hard to obtain for Web pages in general. Unlike ordinary Web pages, Wikipedia articles have their quality and importance evaluated by human experts, while popularity can be estimated from article page views. This makes it feasible to verify the relation between these factors and link analysis metrics, which is our goal in this work. To accomplish this, we implemented several link analysis algorithms and compared their rankings with those derived from the human evaluations made on Wikipedia with respect to quality, popularity and importance. We found that link analysis metrics are more correlated with quality and popularity than with importance, and that the correlation is moderate.
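The comparison described, link-analysis rankings against rankings from human judgments, can be sketched as follows. This assumes the networkx and scipy libraries and uses invented ratings; it illustrates the approach rather than the author's implementation:

```python
# Sketch: correlate a link-based ranking (PageRank) with human quality
# ratings via Spearman's rank correlation. All data here is invented.
import networkx as nx
from scipy.stats import spearmanr

G = nx.DiGraph([("A", "B"), ("C", "B"), ("B", "D"), ("A", "D")])
pagerank = nx.pagerank(G)                   # link-analysis score per article
quality = {"A": 2, "B": 5, "C": 1, "D": 4}  # hypothetical expert ratings

articles = sorted(G.nodes)
rho, p = spearmanr([pagerank[a] for a in articles],
                   [quality[a] for a in articles])
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```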
3. Missing Link Discovery in Wikipedia: A Comparative Study. Sunercan, Omer. 01 February 2010.
The fast-growing online encyclopedia concept presents original and innovative features by taking advantage of information technologies, and the links connecting the articles are one of the most important of these features. In this thesis, we present our work on discovering missing links in Wikipedia articles. This task is important for both readers and authors of Wikipedia: readers benefit from the increased article quality and better navigation support, while the system can be employed to support authors during editing.
This study combines the strengths of different approaches previously applied to the task, and proposes its own techniques to reach satisfactory results. Because of the subjective nature of the task, automatic evaluation is hard to apply, and comparing approaches seems to be the best method to evaluate new techniques; we offer a semi-automated method for evaluating the results. Recall is calculated automatically using existing links in Wikipedia, while precision is calculated from manual evaluations by human assessors. Comparative results for different techniques are presented, showing the success of our improvements.
Our system employs the Turkish Wikipedia (Vikipedi) and, to our knowledge, is the first study on it. We aim to exploit the Turkish Wikipedia as a semantic resource and to examine whether it is scalable enough for such purposes.
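The automatic recall computation described above reduces to set overlap with the links already present in an article; a sketch with invented link sets:

```python
# Sketch: recall of suggested links measured against the links that
# already exist in the article (both sets below are illustrative).
existing = {"Ankara", "Atatürk", "Anadolu"}          # links already in article
suggested = {"Ankara", "Anadolu", "Bozkurt", "Ege"}  # system suggestions

recall = len(suggested & existing) / len(existing)
print(f"recall = {recall:.2f}")  # 2 of 3 existing links recovered -> 0.67
```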
4. EASTERN RANGE TITAN IV/CENTAUR-TDRSS OPERATIONAL COMPATIBILITY TESTING. Bocchino, Chris; Hamilton, William. October 1996.
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California

The future of range operations in the area of expendable launch vehicle (ELV) support is unquestionably headed in the direction of space-based rather than land- or air-based assets for such functions as metric tracking and telemetry data collection. To this end, an effort was recently completed by the Air Force's Eastern Range (ER) to certify NASA's Tracking and Data Relay Satellite System (TDRSS) as a viable and operational asset for telemetry coverage during future Titan IV/Centaur launches. The test plan developed to demonstrate this capability consisted of three parts: 1) a bit error rate test; 2) a bit-by-bit compare of data recorded via conventional means versus the TDRSS network while the vehicle was radiating in a fixed position on the pad; and 3) an in-flight demonstration to ensure a positive radio frequency (RF) link and usable data during critical periods of telemetry collection. The Air Force's subsequent approval of this approach gives future launch vehicle contractors a relatively inexpensive and reliable means of telemetry data collection even when launch trajectories are out of sight of land-based assets, or when land- or aircraft-based assets are not available for support.
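The bit-by-bit compare in part 2 of the test plan amounts to counting mismatches between the two recorded streams. A minimal sketch with synthetic stand-in bit arrays (not flight data):

```python
# Sketch: bit error rate from a bit-by-bit compare of two recordings.
# The streams below are synthetic stand-ins for the conventional and
# TDRSS-relayed telemetry records.
conventional = [1, 0, 1, 1, 0, 0, 1, 0]
tdrss        = [1, 0, 1, 0, 0, 0, 1, 0]

errors = sum(a != b for a, b in zip(conventional, tdrss))
ber = errors / len(conventional)
print(f"{errors} mismatched bit(s) in {len(conventional)} -> BER = {ber:.3f}")
```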
5. ANTENNA PATTERN EVALUATION FOR LINK ANALYSIS. Pedroza, Moises. October 1996.
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California

The use of high bit rates in the missile testing environment requires that the receiving telemetry system(s) have the correct signal margin for no PCM bit errors. This requirement, plus the fact that "redundant systems" are no longer considered optimum support scenarios, has made it necessary to select the minimum number of tracking sites that will gather the data with the required signal margin. A very basic link analysis can be made by using the maximum and minimum gain values from the transmitting antenna pattern. Another way of evaluating the transmitting antenna gain is to base the gain on the highest-percentile occurrence of the highest gain value.

This paper discusses the mathematical analysis the WSMR Telemetry Branch uses to determine the signal margin resulting from a radiating source along a nominal trajectory. The analysis calculates the missile aspect angles (theta, phi, and alpha) to the telemetry tracking system, which yield the transmitting antenna gain; the gain is obtained from the Antenna Radiation Distribution Table (ARDT) stored in a computer file. An entire trajectory can thus be evaluated for signal margin before an actual flight, and the expected signal strength level can be compared to the actual signal strength level from the flight. This information can be used to evaluate any plume effects.
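The signal-margin calculation described can be illustrated with a textbook free-space link budget. The sketch below uses standard formulas with invented parameter values; the actual WSMR analysis obtains the transmit gain from the ARDT at the computed aspect angles rather than assuming a fixed figure:

```python
# Sketch: free-space link budget for one trajectory point.
# All parameter values are illustrative, not WSMR data.
import math

def fspl_db(range_km, freq_mhz):
    """Free-space path loss in dB (standard formula)."""
    return 32.45 + 20 * math.log10(range_km) + 20 * math.log10(freq_mhz)

pt_dbm = 30.0     # transmitter power (1 W)
gt_dbi = -2.0     # transmit gain at this aspect angle (ARDT lookup)
gr_dbi = 35.0     # tracking-antenna receive gain
sens_dbm = -95.0  # receiver sensitivity for the required PCM BER

received = pt_dbm + gt_dbi + gr_dbi - fspl_db(range_km=100, freq_mhz=2250.0)
print(f"received = {received:.1f} dBm, margin = {received - sens_dbm:.1f} dB")
```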
6. Segmentação dos usuários de cartão de crédito por meio da análise de cesto de compras / Segmentation of credit card clients by market basket analysis. Tavares, Pedro Daniel. 17 January 2012.
This dissertation develops a segmentation model based on clients' demonstrated consumption behaviour, using Link Analysis and Market Basket Analysis techniques applied to the data in clients' credit card statements. From the proposed model, the predictability of clients' next transactions was tested on a validation sample. The motivation for this research rests on three pillars: the scientific, technological and market contexts. In the scientific context, although articles have been published associating credit card use with client segmentation profiles, no published studies use data from card usage itself as a source of information about the client; the most likely reason is the difficulty of obtaining the data fundamental to this kind of research. With the support of a large financial institution this work became viable, on the condition that the analysis be restricted to anonymised client bases and disclose no strategic information of the institution. In the technological context, with information technology in constant development, credit card operations are processed online in real time, with information exchanged between the merchant and the card issuer at the moment the charge is submitted and accepted by the consumer; this enables promotional actions throughout the credit card value chain, generating more value for clients and companies. In the market context, Brazil has shown high growth rates in the credit card market over recent decades, with cards replacing older means of payment and credit; in Brazil in particular, credit card purchases are commonly paid in instalments, with or without interest, which contributes to the replacement of other forms of credit. As a benefit of this work, it is concluded that, from knowledge of a client's consumption, market basket analysis can be applied to predict the client's next transactions, in order to segment clients and stimulate them to accept a given offer.
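The market-basket step can be illustrated with plain support/confidence counting over transactions; the purchase categories below are invented, not the institution's data:

```python
# Sketch: support and confidence for one association rule drawn from
# credit card purchase categories (transactions are invented).
transactions = [
    {"supermarket", "fuel"},
    {"supermarket", "fuel", "restaurant"},
    {"supermarket", "restaurant"},
    {"fuel", "pharmacy"},
]

def support(itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Rule: clients who buy at supermarkets and fuel stations also dine out.
antecedent, consequent = {"supermarket", "fuel"}, {"restaurant"}
conf = support(antecedent | consequent) / support(antecedent)
print(f"support = {support(antecedent | consequent):.2f}, "
      f"confidence = {conf:.2f}")
```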
7. High-Speed Link Modeling: Analog/Digital Equalization and Modulation Techniques. Lee, Keytaek. May 2012.
High-speed serial input-output (I/O) links require advanced equalization and modulation techniques to mitigate the inter-symbol interference (ISI) caused by multi-Gb/s signaling over band-limited channels. Growing pressure on transceiver power and area has driven ongoing interest in analog-to-digital converter (ADC) based links, which allow robust equalization and flexible adaptation to advanced signaling. With diverse options for ISI control, link performance analysis for complicated transceiver architectures is very important. This work presents advanced statistical modeling for ADC-based links, a performance comparison of existing modulation and equalization techniques, and a proposed hybrid ADC-based receiver that achieves further power savings in digital equalization.

Statistical analysis precisely estimates high-speed link margins under given implementation constraints and at low target bit-error rates (BER), typically ranging from 1e-12 to 1e-15, by applying a proper statistical bound on noise and distortion. The proposed statistical ADC-based link model uses the bounded probability density function (PDF) of the limited quantization distortion (4-6 bits) through digital feed-forward and decision-feedback equalizers (FFE-DFE) to improve low target BER estimation. Based on this statistical modeling, this work surveys the impact of insufficient equalization, jitter and crosstalk on modulation selection among two- and four-level pulse amplitude modulation (PAM-2 and PAM-4, respectively) and duobinary, and the ADC resolution reduction achieved by a partial analog equalizer (PAE).

While channel loss at the effective Nyquist frequency and signaling constellation loss initially guide modulation selection, the statistical analysis results show that PAM-4 best tolerates jitter and crosstalk, while duobinary requires the least equalization complexity. Meanwhile, despite robust digital equalization, high-speed ADC complexity and power consumption remain a critical bottleneck, so a PAE is needed to reduce the ADC resolution requirement. The statistical analysis shows that up to 8-bit resolution is required for 12.5 Gb/s data communication over a channel with 46 dB of loss without PAE, while a 5-bit ADC is enough with a 3-tap FFE PAE. For optimal ADC resolution reduction by PAE, digital equalizer complexity also increases to provide enough margin to tolerate significant quantization distortion. The proposed hybrid receiver defines unreliable signal thresholds by statistical analysis and selectively applies additional digital equalization, saving the otherwise increasing dynamic power consumption in the digital domain. Simulation results report that the hybrid receiver saves at least 64% of digital equalization power with a 3-tap FFE PAE at 12.5 Gb/s over channels with up to 46 dB of loss. Finally, this work shows that the use of an embedded-DFE ADC in the hybrid receiver is limited by error propagation.
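The PAM-2 versus PAM-4 trade-off evaluated here can be illustrated with the textbook symbol-error-rate formula for M-PAM in additive white Gaussian noise. This sketch reflects only that idealised AWGN case, not the dissertation's full statistical model (which also bounds ISI, jitter, crosstalk and quantization distortion):

```python
# Sketch: symbol error rate of M-PAM in AWGN, textbook formula:
#   SER = 2*(1 - 1/M) * Q( sqrt( 3*SNR / (M^2 - 1) ) )
# where SNR is average signal power over noise power at the slicer
# (linear). Ignores ISI, jitter, crosstalk and quantization terms.
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pam_ser(snr_db, m):
    snr = 10 ** (snr_db / 10)
    return 2 * (1 - 1 / m) * q_func(math.sqrt(3 * snr / (m * m - 1)))

for m in (2, 4):
    print(f"PAM-{m}: SER = {pam_ser(snr_db=20, m=m):.2e} at 20 dB SNR")
```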
8. A longitudinal study of academic web links: identifying and explaining change. Payne, Nigel. January 2007.
A problem common to all current web link analyses is that, as the web is continuously evolving, any web-based study may be out of date by the time it is published in the academic literature. It is therefore important to know how web link analysis results vary over time, since a low rate of variation lengthens the period over which the loss in quality remains tolerable. Moreover, given the lack of research on how academic web spaces change over time, from an information science perspective it would be interesting to see what patterns and trends could be identified by longitudinal research, and the study of university web links provides a convenient means to do so. The aim of this research is to identify and track changes in three academic webs (UK, Australia and New Zealand) over time, following various aspects including site size and overall linking characteristics, and to provide theoretical explanations of the changes found. This should provide some insight into the stability of previous and future webometric analyses.

Alternative Document Models (ADMs), created to reduce the extent to which anomalies occur in counts of web links at the page level, have been used extensively within webometrics as an alternative to the web page as the basic unit of analysis. This research carries out a longitudinal study of ADMs to ascertain which model gives the most consistent results when applied to the UK, Australia and New Zealand academic web spaces over the last six years. The results show that the domain ADM gives the most consistent results, with the directory ADM also giving more reliable results than the standard page model. Aggregating at the site (or university) level appears to give less consistent results than using the page as the standard unit of measure, and this finding holds over all three academic webs and for each time period examined.

Whether university web sites publish the same kinds of information and use the same kinds of hyperlink year on year is important for interpreting the results of academic link analyses, because changes in link types over time would force interpretations of link analyses to change over time. This research uses a link classification exercise to identify temporal changes in the distribution of different types of academic web link, using the three academic web spaces in the years 2000 and 2006. Significant increases in 'research oriented', 'social/leisure' and 'superficial' links were identified, as well as notable decreases in 'technical' and 'personal' links. Some of these changes may be explained by general changes in the management of university web sites, and some by wider Internet trends, e.g. dynamic pages, blogs and social networking. The increase in the proportion of research-oriented links is particularly hopeful for future link analysis research. Identifying quantitative trends in the UK, Australian and New Zealand academic webs from 2000 to 2005 revealed that the number of static pages and links in each of the three academic webs appears to have stabilised as far back as 2001; this stabilisation may be partly due to an increase in dynamic pages, which are normally excluded from webometric analyses.

In response to the problem the constantly changing nature of the Internet poses for webometricians, the results presented here are encouraging evidence that webometrics for academic spaces may have a longer-term validity than previously assumed. The relationship between university inlinks and research activity indicators over time was examined, as well as the reasons for individual universities experiencing significant increases and decreases in inlinks over the last six years. The findings indicate that between 66% and 70% of outlinks remain the same year on year for all three academic web spaces, although this stability conceals large individual differences. Moreover, there is evidence of stability over time for university site inlinks when measured against research. Surprisingly, however, inlink counts can vary significantly from year to year for individual universities, for reasons unrelated to research, underlining that webometric results should be interpreted cautiously at the level of individual universities. Therefore, on average since 2001 the university web sites of the UK, Australia and New Zealand have been relatively stable in terms of size and linking patterns, although this hides a constant renewal of old pages and areas of the sites. In addition, the proportion of research-related links seems to be slightly increasing. The former suggests that webometric results are likely to have a surprisingly long shelf-life, perhaps closer to five years than one year, while the latter suggests that webometrics will be increasingly useful as a tool to track research online.

While there have already been many studies involving academic web spaces, and much work has been carried out on the web from a longitudinal perspective, this thesis fills a critical gap in current webometric research by combining the two in a longitudinal study of academic webs. In comparison with previous web-related longitudinal studies, this thesis makes a number of novel contributions. Some stem from extending established webometric results, either by introducing a longitudinal aspect (looking at how various academic web metrics, such as research activity indicators, site size or inlinks, change over time) or by applying them to other countries. Others come from combining traditional webometric methods (e.g. combining topical link classification with longitudinal study) or from identifying and examining new areas for research (for example, dynamic pages and non-HTML documents). No previous web-based longitudinal studies have focused on academic links, so the main findings (that for the UK, Australian and New Zealand academic webs between 2000 and 2006 certain academic link types exhibit changing patterns over time, that approximately two-thirds of outlinks remain the same year on year, and that the number of static pages and links appears to have stabilised) are both significant and novel.
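The ADM idea, counting links between directories, domains or whole sites instead of individual pages, reduces to mapping each URL to a coarser unit before counting. A minimal sketch (the URL is hypothetical and the site-level rule is a crude heuristic for .ac.uk-style names):

```python
# Sketch: map a page URL to its unit under each Alternative Document
# Model (ADM). URL and site heuristic are illustrative only.
from urllib.parse import urlparse

def adm_unit(url, model):
    p = urlparse(url)
    if model == "page":
        return url
    if model == "directory":
        return p.netloc + p.path.rsplit("/", 1)[0]
    if model == "domain":
        return p.netloc
    if model == "site":  # university-level aggregation, crude heuristic
        return ".".join(p.netloc.split(".")[-3:])  # e.g. wlv.ac.uk
    raise ValueError(model)

url = "http://www.scit.wlv.ac.uk/staff/page.html"
for m in ("page", "directory", "domain", "site"):
    print(f"{m:9s} -> {adm_unit(url, m)}")
```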
9. Effective web service discovery using a combination of a semantic model and a data mining technique. Bose, Aishwarya. January 2008.
With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Given the large number of available Web services, finding an appropriate service for a user's requirement is a challenge, which warrants an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery in matching the best service. The discovery process typically suggests many individual services that partially fulfil the user's interest; considering the semantic relationships of the words used in describing the services, as well as their input and output parameters, can lead to more accurate discovery, and appropriate linking of individually matched services can then fully satisfy the user's requirements. This research integrates a semantic model and a data mining technique to enhance the accuracy of Web service discovery, proposing a novel three-phase discovery methodology.

The first phase performs match-making to find semantically similar Web services for a user query. To perform semantic analysis on the content of Web service description language documents, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large collection of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel built from a large number of terms helps to find hidden meanings of the query terms that could not otherwise be found. Sometimes a single Web service cannot fully satisfy the user's requirement; in such cases a composition of multiple inter-related Web services is presented to the user. The second phase checks the feasibility of linking multiple Web services; once feasibility is established, the objective is to provide the user with the best composition. In this link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum traversal path at minimum cost. The third phase, system integration, combines the results of the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system integration phase, makes the final recommendations of individual and composite Web services to the user.

To evaluate the performance of the proposed method, extensive experimentation was performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method; the proposed method outperforms both. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase one for linking. Empirical results further show that the fusion engine boosts the accuracy of Web service discovery by systematically combining the inputs from the semantic analysis (phase one) and the link analysis (phase two). Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
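The link-analysis phase models candidate services as graph nodes and runs an all-pairs shortest-path search for the cheapest composition. Below is a sketch using the Floyd-Warshall algorithm over an invented cost matrix (the thesis does not specify which all-pairs algorithm is used):

```python
# Sketch: Floyd-Warshall all-pairs shortest paths over a small service
# graph; edge weights are invented invocation costs.
INF = float("inf")
services = ["S1", "S2", "S3", "S4"]
cost = [  # cost[i][j]: cost of chaining service i's output into j
    [0,   1,   INF, 7],
    [INF, 0,   2,   INF],
    [INF, INF, 0,   1],
    [INF, INF, INF, 0],
]

n = len(services)
for k in range(n):           # allow k as an intermediate service
    for i in range(n):
        for j in range(n):
            if cost[i][k] + cost[k][j] < cost[i][j]:
                cost[i][j] = cost[i][k] + cost[k][j]

print(f"cheapest S1 -> S4 composition costs {cost[0][3]}")  # 1+2+1 = 4
```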