151 |
Uncovering and Managing the Impact of Methodological Choices for the Computational Construction of Socio-Technical Networks from Texts / Diesner, Jana, 01 September 2012 (has links)
This thesis is motivated by the need for scalable and reliable methods and technologies that support the construction of network data based on information from text data. Ultimately, the resulting data can be used for answering substantive and graph-theoretical questions about socio-technical networks.
One main limitation of constructing network data from text data is that validating the resulting network data can range from hard to infeasible, e.g. in the case of covert, historical and large-scale networks. This thesis addresses this problem by identifying how the coding choices that must be made when extracting network data from text data affect the structure of the resulting networks and the outcomes of network analysis. My findings suggest that conducting reference resolution on text data can alter the identity and weight of 76% of the nodes and 23% of the links, and can cause major changes in the value of commonly used network metrics. Also, performing reference resolution prior to relation extraction leads to the retrieval of completely different sets of key entities than when this pre-processing technique is not applied. Based on the outcome of the presented experiments, I recommend strategies for avoiding or mitigating the identified issues in practical applications.
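To make the effect concrete, here is a minimal sketch (with made-up names and weights, not the thesis corpus) of how merging co-referent aliases during reference resolution changes both the node set and node weights of an extracted network:

```python
# Sketch (assumption: illustrative edge list, not the thesis data): merge
# co-referent node aliases and compare a simple metric before and after
# reference resolution.
from collections import defaultdict

def degrees(edges):
    """Weighted degree per node for a list of (src, dst, weight) links."""
    deg = defaultdict(int)
    for a, b, w in edges:
        deg[a] += w
        deg[b] += w
    return dict(deg)

def resolve(edges, aliases):
    """Map alias names to canonical entities and re-aggregate link weights."""
    merged = defaultdict(int)
    for a, b, w in edges:
        a, b = aliases.get(a, a), aliases.get(b, b)
        if a != b:  # drop self-loops created by merging
            merged[tuple(sorted((a, b)))] += w
    return [(a, b, w) for (a, b), w in merged.items()]

edges = [("J. Smith", "ACME", 2), ("John Smith", "ACME", 3), ("John Smith", "Lee", 1)]
aliases = {"J. Smith": "John Smith"}

before = degrees(edges)
after = degrees(resolve(edges, aliases))
print(before["ACME"], after["ACME"])   # ACME's weight is unchanged: 5 5
print(len(before), len(after))         # but the node set shrinks: 4 3
```

Even in this tiny example, resolving one alias changes a quarter of the nodes; on real corpora such merges ripple through centrality and other metrics.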
When extracting socio-technical networks from texts, the set of relevant node classes might go beyond the classes that are typically supported by tools for named entity extraction. I address this lack of technology by developing an entity extractor that combines a supervised machine learning technique based on probabilistic graphical models with an ontology for socio-technical networks that originates from the social sciences, is theoretically grounded, and has been empirically validated in prior work. This thesis does not stop at showing that the resulting prediction models achieve state-of-the-art accuracy: I also describe the process of integrating these models into an existing, publicly available end-user product, so that users can conveniently apply them to new text data.
While a plethora of methods exists for building network data from information explicitly or implicitly contained in text data, there is a lack of research on how the resulting networks compare with respect to their structure and properties. This also applies to networks extracted by using the aforementioned entity extractor as part of the relation extraction process. I address this knowledge gap by comparing the networks extracted with this process to network data built with three alternative methods: text coding based on thesauri that associate text terms with node classes, constructing network data from meta-data on texts (such as keywords and index terms), and building network data in collaboration with subject matter experts. The outcomes of these comparative analyses suggest that thesauri generated with the entity extractor developed for this thesis need adjustments with respect to particular categories and types of errors; I provide tools and strategies to assist with these refinements. My results also show that, once these changes have been made, and in contrast to manually constructed thesauri, the prediction models generalize with acceptable accuracy to other domains (news wire data, scientific writing, emails) and writing styles (formal, casual). The comparisons of networks constructed with different methods show that ground-truth data built by subject matter experts are only poorly approximated by any automated method that analyzes text bodies, and even less so by methods that exploit existing meta-data from text corpora. Thus, attempts to reconstruct social networks from text data lead to largely incomplete networks. Synthesizing the findings from this work, I outline which types of information on socio-technical networks are best captured by which network data construction method, and how best to combine these methods in order to gain a more comprehensive view of a network.
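The kind of comparison described above can be sketched as an overlap measure between edge sets produced by different construction methods (the edge sets below are invented for illustration, not the thesis data):

```python
# Sketch: compare networks built by different construction methods via the
# Jaccard overlap of their undirected edge sets.
def jaccard(a, b):
    """Jaccard similarity of two undirected edge sets."""
    a = {frozenset(e) for e in a}
    b = {frozenset(e) for e in b}
    return len(a & b) / len(a | b) if a | b else 1.0

ground_truth = [("ann", "bob"), ("bob", "cat"), ("ann", "dan")]  # expert-built
from_text    = [("ann", "bob"), ("bob", "cat")]                  # text mining
from_meta    = [("ann", "bob")]                                  # meta-data only

print(round(jaccard(ground_truth, from_text), 2))  # 0.67
print(round(jaccard(ground_truth, from_meta), 2))  # 0.33
```

Consistent with the findings above, the meta-data network overlaps the expert ground truth less than the text-mined one does.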
When both text data and relational data are available as a source of information on a network, people have previously integrated these data by enhancing social networks with content nodes that represent salient terms from the text data. I present a methodological advancement of this technique and test its performance on the datasets used for the previously mentioned evaluation studies. With this approach, multiple types of behavioral data, namely interactions between people as well as their language use, can be taken into account. I conclude that extracting content nodes from groups of structurally equivalent agents can be an appropriate strategy for enabling the comparison of the content that people produce, perceive or disseminate; these equivalence classes can represent a variety of social roles and social positions that network members occupy. At the same time, extracting content nodes from groups of structurally coherent agents can be suitable for enhancing social networks with content nodes. The results from applying the latter approach to text data include a comparison of the outcome of topic modeling, an efficient and unsupervised information extraction technique, to the outcomes of alternative methods, including entity extraction based on supervised machine learning. My findings suggest that key entities from meta-data knowledge networks might serve as proper labels for unlabeled topics. Also, unsupervised and supervised learning retrieve similar entities, as highly likely members of highly likely topics and as key nodes in text-based knowledge networks, respectively.
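Structural equivalence, in its strictest form, groups agents with identical neighbourhoods; a minimal sketch (toy network, hypothetical names) of the grouping step that precedes pooling those agents' texts:

```python
# Sketch: partition agents that are structurally equivalent (identical
# neighbour sets) so that their content can be pooled and compared.
from collections import defaultdict

def equivalence_classes(adj):
    """Group nodes whose neighbour sets are identical."""
    classes = defaultdict(list)
    for node, nbrs in adj.items():
        classes[frozenset(nbrs)].append(node)
    return sorted(sorted(c) for c in classes.values())

adj = {
    "alice": {"carol", "dave"},
    "bob":   {"carol", "dave"},   # same contacts as alice -> same class
    "carol": {"alice", "bob"},
    "dave":  {"alice", "bob"},
}
print(equivalence_classes(adj))  # [['alice', 'bob'], ['carol', 'dave']]
```

In practice the thesis works with looser notions of equivalence and coherence, but the principle of grouping agents by their position before attaching content nodes is the same.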
In summary, the contributions made with this thesis help people to collect, manage and analyze rich network data at any scale. This is a precondition for asking substantive and graph-theoretical questions, testing hypotheses, and advancing theories about networks. This thesis uses an interdisciplinary and computationally rigorous approach to work towards this goal; thereby advancing the intersection of network analysis, natural language processing and computing.
152 |
Improving the robustness and effectiveness of rural telecommunication infrastructures in Dwesa, South Africa / Ranga, Memory Munashe, January 2011 (has links)
In recent years, immense effort has been channelled towards the information and technological development of rural areas. To support this development, telecommunication networks have been deployed. The availability of these networks is expected to improve the way people share ideas and communicate locally and globally, reducing limiting factors like distance through the use of the Internet. The major problem for these networks is that very few of them have managed to stay in operation over long periods of time. One of the major causes of this failure is the lack of proper monitoring and management as, in some cases, administrators are located far away from the network site. Other factors that contribute to the frequent failure of these networks are the lack of proper infrastructure, the lack of a constant power supply and other environmental issues. A telecommunication network was deployed for the people of Dwesa by the Siyakhula Living Lab project. During this research project, frequent visits were made to the site and network users were informally interviewed in order to gain insight into the network challenges. Based on these challenges, different network monitoring systems and other solutions were deployed on the network. This thesis analyses the problems encountered and presents possible and affordable solutions that were implemented on the network. This was done to improve the network's reliability, availability and manageability whilst exploring practical ways in which the connectivity of the deployed telecommunication network can be maintained. As part of these solutions, a GPRS redundant link, the Nagios and Cacti monitoring systems, as well as simple backup systems were deployed.
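The monitoring-and-failover logic behind such a setup can be sketched in a few lines (the probe data and threshold below are hypothetical; a real deployment would feed in reachability probes from a tool like Nagios):

```python
# Sketch: a minimal availability monitor in the spirit of the redundant-link
# deployment above -- record probe results for the primary link and fail over
# to the GPRS backup after a run of missed probes.
def availability(probes):
    """Fraction of successful probes, as a Nagios-style availability figure."""
    return sum(probes) / len(probes) if probes else 0.0

def route(primary_probes, fail_threshold=3):
    """Fail over to the backup once the primary misses N probes in a row."""
    recent = primary_probes[-fail_threshold:]
    if len(recent) == fail_threshold and not any(recent):
        return "gprs-backup"
    return "primary"

probes = [True, True, False, False, False]   # last three probes failed
print(round(availability(probes), 2))        # 0.4
print(route(probes))                         # gprs-backup
```

Requiring several consecutive failures before switching avoids flapping between links on a single dropped probe, which matters on lossy rural wireless backhauls.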
153 |
Investigating wireless network deployment configurations for marginalized areas / Ndlovu, Nkanyiso, January 2011 (has links)
154 |
Extrator de conhecimento coletivo: uma ferramenta para democracia participativa / Collective Knowledge Extractor: a tool for participatory democracy / Angelo, Tiago Novaes, 1983-, 26 August 2018 (has links)
Advisors: Ricardo Ribeiro Gudwin, Cesar José Bonjuani Pagan / Dissertation (Master's), Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Previous issue date: 2014 / Abstract: The emergence of Information and Communication Technologies has brought a new perspective to the strengthening of democracy in modern societies. Representative democracy, the prevalent model in today's societies, is undergoing a credibility crisis whose main consequence is citizens' withdrawal from political participation, weakening democratic ideals. In this context, technology emerges as a possible basis for a new model of popular participation that restores a more active citizenship, inaugurating what is called digital democracy.
The objective of this research was to develop and implement a tool called the "Collective Knowledge Extractor", whose purpose is to discover what a collective thinks about its own reality from short reports written by its participants, giving voice to the people in a process of participatory democracy. The theoretical foundations are based on methods from data mining, extractive summarization and complex networks. The tool was implemented and tested using a database of customer reviews about their stays at a hotel, and the results were satisfactory. For future work, the proposal is that the Collective Knowledge Extractor become the data-processing core of a virtual space where people can express themselves and actively exercise their citizenship / Master's / Computer Engineering / Master in Electrical Engineering
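The extractive-summarization idea mentioned above can be sketched with a simple frequency-based sentence scorer (the toy reviews and scoring rule are stand-ins, not the tool's actual pipeline):

```python
# Sketch: a minimal extractive summarizer that ranks sentences by the
# corpus frequency of the words they contain and keeps the top n,
# preserving their original order.
import re
from collections import Counter

def summarize(text, n=1):
    """Return the n highest-scoring sentences, in their original order."""
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    words = re.findall(r"\w+", text.lower())
    freq = Counter(words)
    scored = sorted(
        sentences,
        key=lambda s: -sum(freq[w] for w in re.findall(r"\w+", s.lower())),
    )
    keep = set(scored[:n])
    return [s for s in sentences if s in keep]

reviews = ("The room was clean. The staff was friendly and the room was quiet. "
           "Parking was hard to find.")
print(summarize(reviews))  # ['The staff was friendly and the room was quiet']
```

Frequency scoring favours sentences that reuse the collective's most common vocabulary, which is roughly what "what the collective thinks" means in this setting; the thesis complements this with complex-network structure over the extracted terms.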
155 |
Managing resource sharing in selected Seventh-day Adventist tertiary institutions in Sub-Saharan Africa: problems and prospects / Adeogun, Margaret Olufunke, 30 November 2004 (has links)
Universities in the new millennium find themselves in a knowledge-driven economy that challenges them to produce a qualified and adaptable workforce if they are to contribute to societal development. Owing to the structural change in the economy, entrepreneurs require high-level scientists, professionals and technicians who not only have the capability to create and support innovations by adapting knowledge to local use, but who also possess managerial and lifelong-learning skills. It is they who can accelerate change and make organizations more productive and efficient in the services they render. Consequently, universities in Sub-Saharan Africa are challenged to transform learning so as to produce graduates who have both knowledge and competencies. Such a system will create a balance between university education and the changing labour market. Satisfying these new educational demands is only possible through research and unhindered access to global information resources. Paradoxically, some private university libraries, because of limited funding, find themselves fiscally constrained in providing unhindered access to global stores of information, particularly at a time of exponential growth in both the number and the cost of information resources. This has led libraries to re-examine resource sharing as a viable option for meeting the new demands placed on universities.
It is for the reasons above that this study examines the practice, problems and prospects of resource sharing in selected Seventh-day Adventist university libraries in Sub-Saharan Africa. It examines scientifically the causes of the poor sharing practices that are unique to each library, as well as the situational and environmental factors that can enhance resource sharing. It also provides research-based information that will help to determine the best ways by which each library can gain greater access to information resources. There are proposals for resolving the problems, and recommendations for dealing with the matter on a more permanent basis. The study advances a resource-sharing model called the Consortium of Adventist University Libraries in Africa (CAULA) as a resource-sharing network for Seventh-day Adventist libraries in Africa. The organizational structure for CAULA is outlined and discussed. The proposed cooperation is not only sustainable but also structured to provide efficiency and greater regional cooperation among SDA libraries in Sub-Saharan Africa. / Information Science / DLITT ET PHIL (INF SCIENCE)
156 |
Authority control in an academic library consortium using a union catalogue maintained by a central office for authority control / Marais, Hester, 1961-, 31 March 2004 (has links)
Authority control is the backbone of the library catalogue and therefore a critical library activity. Experienced staff create authority records to assist users in their quest for information. The focus of this study is on authority control as a means of co-operation in academic library consortia using a union catalogue maintained by a Central Office for Authority Control.
Literature studies were conducted on three sub-problems: the development of academic library consortia in South Africa, together with the various forms, characteristics and functions of academic library consortia in general; the characteristics, principles and objectives of authority control; and the functions of union catalogues, with special reference to the role of Z39.50 within virtual union catalogues. The conclusion was that existing and new authority records should be made available as widely as possible within consortia through a union catalogue. This is, however, only a partial solution, because not all libraries within a consortium have the expertise to create new authority records.
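The virtual-union-catalogue idea can be sketched as a broadcast search that is merged and deduplicated across member catalogues (the in-memory dictionaries below are hypothetical stand-ins for Z39.50 targets; real systems query remote servers over the protocol):

```python
# Sketch: a virtual union catalogue broadcasts one search to every member
# library's catalogue and merges the hits, deduplicating by heading, so that
# a shared authority record surfaces only once.
def union_search(catalogues, query):
    """Query each member catalogue; return (heading, first contributing library)."""
    seen, merged = set(), []
    for library, records in catalogues.items():
        for heading in records:
            if query.lower() in heading.lower() and heading not in seen:
                seen.add(heading)
                merged.append((heading, library))
    return merged

catalogues = {
    "Lib A": ["Mandela, Nelson, 1918-2013", "Tutu, Desmond"],
    "Lib B": ["Mandela, Nelson, 1918-2013", "Mandela, Winnie"],
}
print(union_search(catalogues, "mandela"))
# [('Mandela, Nelson, 1918-2013', 'Lib A'), ('Mandela, Winnie', 'Lib B')]
```

Deduplicating on the authority heading is exactly where shared, standards-conformant authority records pay off: two libraries holding the same heading contribute one merged entry instead of two conflicting ones.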
Two empirical studies were conducted. A cost analysis was done to determine the cost of creating and changing authority records within academic library consortia in South Africa, in order to choose a system within which authority control can be performed effectively and speedily.
Secondly, a questionnaire was sent to libraries in the United States to gather information on their experiences with regard to authority control, library co-operation in general, and virtual union catalogues. The United States was the natural choice because it can be regarded as the birthplace of modern library consortia. Inferences drawn from the information received were used to develop the structure and functions of a Central Office for Authority Control for academic library consortia in South Africa.
It was found that authority control within an academic library consortium using a union catalogue could be conducted most cost-effectively and timeously through such a Central Office for Authority Control. The purpose of the Central Office would be to co-ordinate authority control within the consortium. Pooling available resources within the consortium would keep the cost of authority control as low as possible. Libraries with the required infrastructure and expertise would have the opportunity to create authority records on behalf of other libraries and be compensated for their services. Through such a Central Office more authority records created according to mutually accepted standards would be available for sharing within the consortium. / Information Science / D.Litt. et Phil. (Information Science)
157 |
Library automation as a prerequisite for 21st century library service provision for Lesotho library consortium libraries / Monyane, Mamoeletsi Cecilia, 07 1900 (has links)
Library automation is approaching its 90th birthday (deduced from Pace, 2009:1), and many librarians no longer remember the inefficiencies of the manual systems that were previously in place. For some, however, automation has not gone nearly far enough. In this second decade of the new millennium some libraries in Lesotho face multiple challenges in automating their services while libraries internationally are staying relevant by rapidly adapting their services to address the needs and demands of the clients.
It was anticipated that full library automation is a prerequisite for delivering 21st-century library services, and the researcher embarked on a process to establish whether libraries belonging to the Lesotho Library Consortium (LELICO) have automated to the extent where they are able to provide the services that are currently in demand. The purpose of this study was to analyse whether full library automation is indeed a prerequisite for libraries to offer the services required in the current millennium. The study focused on LELICO member libraries. Benchmarking was done with selected South African academic libraries. Data were collected by means of interviews with all respondents, namely LELICO member libraries, librarians from South African libraries, and international system vendors operating from South Africa.
The study found that LELICO member libraries are indeed lagging behind in terms of service provision. LELICO member libraries do not appear to understand which library services become possible when state-of-the-art technology is fully implemented. The study found furthermore that this laggard status is caused by factors such as a lack of funding, too few professional staff and ineffective support from management. These and other findings helped formulate recommendations that would underpin a renewal strategy for LELICO. The proposed recommendations include that LELICO should deliver a more meaningful service to its current members, that LELICO member libraries should use technology more effectively in their operations, and that a good relationship between a system vendor and its clients should be seen as an asset to be maintained. LELICO should be playing a key role in making change a reality. / Information Science / M.A. (Information Science)
158 |
A measuring tool for integrated internal communication: a case study of the University of South Africa Library / Mandiwana, Awelani Reineth, 01 1900 (has links)
Text in English, abstract in English, Afrikaans and Venda / This study developed and tested an integrated internal communication audit (IICA) tool to evaluate the communication strengths and weaknesses of the Unisa Library. The existing communication audit instruments were explored: the Communication Satisfaction Questionnaire (CSQ) and the International Communication Association (ICA) audit were adapted and complemented by the Organisational Culture Survey (OCS) and the Critical Incident Technique (CIT). Current trends, including those in South Africa, were also explored.
A sequential mixed-methods design, consisting of semi-structured qualitative interviews and quantitative surveys, was used to collect data. The ATLAS.ti and Statistical Package for the Social Sciences (SPSS) software packages were used to analyse the qualitative and quantitative data.
The results revealed the IICA to be an appropriate tool for measuring the integrated internal communication of the Unisa Library. The IICA identified the communication needs of employees; the active and preferred communication channels; and the positive and negative communication experiences of employees. / Communication Science / M. Comm (Communication Science)
159 |
Algorithmes de dissémination épidémiques dans les réseaux à grande échelle : comparaison et adaptation aux topologies / Epidemic dissemination algorithms in large-scale networks: comparison and adaptation to topologies / Hu, Ruijing, 02 December 2013 (has links) (PDF)
Information dissemination (broadcast) is essential for many distributed applications. It must be efficient, i.e. limit message redundancy, while ensuring high reliability and low latency. We consider distributed algorithms that exploit the properties of the underlying topologies. However, these properties, and the parameters of the algorithms, are heterogeneous, so we need a way to compare the algorithms fairly. First, we study probabilistic information dissemination (gossip) protocols executed over three random graphs that represent typical large-scale network topologies: the Bernoulli graph, the random geometric graph and the scale-free graph. To compare their performance fairly, we propose a new generic parameter: the effective fanout. For a given topology and algorithm, the effective fanout characterizes the mean dissemination power of infected sites; it also simplifies the theoretical comparison of different algorithms on a topology. Building on this understanding of how topologies and algorithms affect performance, we propose a reliable and efficient dissemination algorithm for the scale-free topology.
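A rough sketch of the kind of simulation such comparisons rest on (assumptions: an Erdos-Renyi graph standing in for the Bernoulli topology, a fixed per-node fanout, and a seeded RNG for reproducibility; the effective-fanout computation itself is not reproduced here):

```python
# Sketch: push-based gossip dissemination on a G(n, p) random graph.
# Each newly infected site forwards the message to `fanout` random
# neighbours once; we report the fraction of sites reached (coverage).
import random

def gossip(n, p, fanout, seed=1):
    """Simulate push gossip on an Erdos-Renyi G(n, p) graph; return coverage."""
    rng = random.Random(seed)
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    infected, frontier = {0}, [0]
    while frontier:
        nxt = []
        for node in frontier:
            targets = rng.sample(sorted(nbrs[node]), min(fanout, len(nbrs[node])))
            for t in targets:
                if t not in infected:
                    infected.add(t)
                    nxt.append(t)
        frontier = nxt
    return len(infected) / n

coverage = gossip(n=200, p=0.05, fanout=3)
print(round(coverage, 2))  # most sites are typically reached with these parameters
```

Running the same simulation over geometric and scale-free topologies, and normalizing by effective fanout rather than nominal fanout, is the kind of fair cross-topology comparison the thesis argues for.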
160 |
Analyse exploratoire de flots de liens pour la détection d'événements / Exploratory analysis of link streams for event detection / Heymann, Sébastien, 03 December 2013 (has links) (PDF)
A link stream is a trace of the activity of a complex system over time, in which a link appears whenever two entities of the system interact; the set of entities and links forms a graph. In recent years such traces have become strategic datasets for analyzing the activity of large-scale complex systems involving millions of entities: mobile phone networks, social networks, or the Internet. This thesis addresses the exploratory analysis of link streams, in particular the characterization of their dynamics and the identification of anomalies over time (events). We propose an exploratory framework that makes no hypothesis about the data, relying on statistical analysis and visualization. The detected events are statistically significant, and we propose a method for validating their relevance. Finally, we illustrate our methodology on the evolution of the online social network GitHub, where hundreds of thousands of developers collaborate on software projects.
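A minimal sketch of event detection on a link stream (the synthetic counts and the simple z-score rule below stand in for the thesis's statistical tests, which are more careful about significance):

```python
# Sketch: flag time windows whose link count deviates sharply from the rest
# of the stream, as candidate events in a link stream.
def event_windows(counts, k=2.0):
    """Indices of windows whose link count deviates > k std devs from the mean."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    std = var ** 0.5
    return [i for i, c in enumerate(counts) if std and abs(c - mean) > k * std]

# Links observed per hour; hour 3 shows a burst of interactions.
links_per_hour = [4, 5, 3, 40, 4, 6, 5, 4]
print(event_windows(links_per_hour))  # [3]
```

On a trace like the GitHub one, the counted quantity would be interactions (e.g. commits or comments) per window, and a flagged window becomes a candidate event to validate against external information.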