11 | Softbridge: a socially aware framework for communication bridges over digital divides
Tucker, William D. (01 May 2009)
Computer scientists must align social and technical factors for communication technologies in developing regions, yet lack a framework to do so. The novel Softbridge framework comprises several components to address this gap. The Softbridge stack abstraction supplements the established Open Systems Interconnection (OSI) model with a collection of technical layers clustered around 'people' issues, aligning the technological design of communication systems with awareness of the social factors characteristic of developing regions. In a similar fashion, a new evaluation abstraction called Quality of Communication augments traditional Quality of Service by considering socio-cultural factors in a user's perception of system performance. The conceptualisation of these new abstractions was driven by long-term experimental interventions within two South African digital divides. One field study concerned communication bridges for socio-economically disadvantaged Deaf users; the second concerned a wireless telehealth system linking rural nurses and doctors. The application domains were quite different yet yielded similarities that informed the Softbridge and Quality of Communication abstractions. The third Softbridge component is an iterative, socially aware software engineering method that includes action research. This method guided cyclical interventions with target communities to solve community problems with communication technologies. The Softbridge framework components are recursive products of this iterative approach, emerging via critical reflection on the design, evaluation and methodological processes of the respective field studies. Quantitative and qualitative data were triangulated across a series of communication prototypes for each field study, using usage metrics, semi-structured interviews, focus groups and observation in the field. Action research journals documented the overall process to achieve post-positivist recoverability rather than positivist replicability. Analysis of the results from both field studies was iteratively synthesised to develop the Softbridge framework and consider its implications. The most significant finding is that awareness of social issues helps explain why users might not accept a technically sound communication system. It was also found that, when facilitated effectively by intermediaries, the Softbridge framework enables unintended uses of experimental artefacts that empower users to appropriate communication technologies on their own. The Softbridge framework thus helps to align technical and socio-cultural factors.

12 | Model Driven Communication Protocol Engineering and Simulation based Performance Analysis using UML 2.0
de Wet, Nico (01 January 2005)
The automated functional and performance analysis of communication systems specified with a Formal Description Technique (FDT) has long been a goal of telecommunication engineers. In the past, SDL and Petri nets have been the most popular FDTs for this purpose. With the growth in popularity of UML, the obvious question is whether one or more UML diagrams describing a system can be translated into a performance model. Until the advent of UML 2.0 this was impossible, since the semantics were not clear. Although the UML semantics are still not fully clear for this purpose, with UML 2.0 now released, and drawing on ITU-T Recommendation Z.109, this dissertation describes a methodology and tool called proSPEX (protocol Software Performance Engineering using XMI) for the design and performance analysis of communication protocols specified in UML.
Our first consideration in developing the methodology was to identify the roles of UML 2.0 diagrams in the performance modelling process. We also considered how non-functional duration constraints, or temporal aspects, should be specified. We developed a semantic time model that addresses the language's lack of means for specifying communication delay and processing times. Environmental characteristics such as channel bandwidth and buffer space can be specified, and realistic assumptions are made regarding time and signal transfer.
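As a rough illustration of such timing semantics, consider a store-and-forward channel in which a signal's delay is its serialisation time plus a fixed processing time at the receiver. The function and values below are assumptions for illustration, not proSPEX's actual time model:

```python
def channel_delay(message_bits: int, bandwidth_bps: float,
                  processing_time_s: float = 0.001) -> float:
    """Total delay for one message over a bandwidth-limited channel:
    serialisation time plus a fixed receiver processing time.
    All names and values are illustrative assumptions."""
    transmission_s = message_bits / bandwidth_bps  # serialisation delay
    return transmission_s + processing_time_s

# e.g. a 12 kbit signal on a 64 kbit/s channel:
print(channel_delay(12_000, 64_000))  # 0.1885 (0.1875 s + 1 ms processing)
```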
With proSPEX we aimed to integrate a commercial UML 2.0 model editing tool with a discrete-event simulation library. Such an approach has been advocated as necessary for a closer integration of performance engineering with formal design and implementation methodologies. To realize the integration we first identified a suitable simulation library and then extended it with features required to represent high-level SDL abstractions, such as extended finite state machines (EFSMs) and signal addressing. In implementing proSPEX we filtered the XML output of our editor and used text templates for code generation. Filtering the XML output and extending the simulation library with EFSM abstractions proved to be significant implementation challenges.
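The abstract does not name the simulation library; the sketch below uses SimPy purely to illustrate how an EFSM with signal addressing might be layered on a discrete-event core. The process names, signals and transition table are invented for illustration:

```python
import simpy

def efsm(env, name, registry):
    """A minimal extended finite state machine on a discrete-event core;
    signals are addressed through a registry of named mailboxes."""
    state = "idle"                               # EFSM control state
    inbox = registry[name]
    while True:
        sender, signal = yield inbox.get()       # wait for the next signal
        # transition table: (state, signal) -> action, next state
        if state == "idle" and signal == "REQ":
            yield env.timeout(0.01)              # modelled processing time
            registry[sender].put((name, "ACK"))  # addressed reply
            state = "busy"
        elif state == "busy" and signal == "REL":
            state = "idle"

def client(env, registry):
    registry["server"].put(("client", "REQ"))
    sender, signal = yield registry["client"].get()
    print(f"t={env.now:.3f}s: client received {signal} from {sender}")

env = simpy.Environment()
registry = {n: simpy.Store(env) for n in ("server", "client")}
env.process(efsm(env, "server", registry))
env.process(client(env, registry))
env.run(until=1.0)  # prints: t=0.010s: client received ACK from server
```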
Lastly, to illustrate the utility of proSPEX, we conducted a performance analysis case study in which the Efficient Short Remote Operations (ESRO) protocol is used in a wireless e-commerce scenario.

13 | Optimising Information Retrieval from the Web in Low-bandwidth Environments
Balluck, Ashwinkoomarsing (01 June 2007)
The Internet has the potential to deliver information to Web users who have no other way of reaching those resources. However, information on the Web is scattered without proper semantics for classifying it, which makes information discovery difficult. To ease the querying of this vast body of information, developers have built tools such as search engines and Web directories. For these tools to give optimal results, however, two factors need due attention: the users' ability to use the tools, and the bandwidth available in their environment.
Unfortunately, an initial study showed that neither factor was adequately present in Mauritius, where low bandwidth prevails. This study therefore set out to understand better how users use search tools. To achieve this, we designed a survey in which Web users were asked about their skills in using search tools. A jump page combining the search boxes of different search engines was then developed to provide directed guidance for effective searching in low-bandwidth environments. We then conducted a further evaluation with a sample of users to see whether the way they accessed the search tools had changed.
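As an illustration of what a jump page does, the sketch below builds one direct query URL per engine from a single search box, so users reach results without first loading each engine's heavyweight front page. The engines and parameter names are common public formats, not necessarily those used in the study:

```python
from urllib.parse import urlencode

# Query-URL templates for a jump page; the engines and parameter names
# here are common public formats, not necessarily those in the study.
ENGINES = {
    "Google": ("https://www.google.com/search", "q"),
    "Yahoo":  ("https://search.yahoo.com/search", "p"),
}

def jump_links(query: str) -> dict:
    """Build one direct search URL per engine from a single query box."""
    return {name: f"{base}?{urlencode({param: query})}"
            for name, (base, param) in ENGINES.items()}

for name, url in jump_links("sugar cane yields Mauritius").items():
    print(f"{name}: {url}")
```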
Examination of the results showed that users were initially unaware of the specific strengths of the different search tools, which prevented efficient use. During the survey they were educated in how to use those tools, and this proved fruitful in the follow-up evaluation. The more efficient use of the search tools helped reduce traffic in the low-bandwidth environment.

14 | A lightweight interface to local Grid scheduling systems
Parker, Christopher P (01 May 2009)
Many complex research problems require an immense amount of computational power to solve, and the concept of the computational Grid was conceived to provide it. Although Grid technology is hailed as the next great enabling technology in computer science, the last being the inception of the World Wide Web, some concerns have to be addressed if this technology is to be successful.
The main difference between the Web and the Grid, in terms of adoption, is usability. The Web was designed with both functionality and end-users in mind, whereas the Grid has been designed solely with functionality in mind. Although large Grid installations are operational around the globe, their use is restricted to those who have in-depth knowledge of their complex architecture and functionality. Because of this sheer complexity, the technology is out of reach for the very scientists who need its resources. The Grid is likely to succeed as a tool for some large-scale problem solving, as there is no alternative on a similar scale. However, to integrate such systems into our daily lives, just as the Web has been, they need to be accessible to "novice" users. Without such accessibility, their use and growth will remain constrained.
This dissertation details one possible way of making the Grid more accessible: providing high-level access to the scheduling systems on which Grids rely. Since "the Grid" is a mechanism for transferring control of user-submitted jobs to third-party scheduling systems, high-level access to the schedulers themselves was deemed a natural place to begin usability-enhancing efforts.
In order to design a highly usable and intuitive interface to a Grid scheduling system, a series of interviews with scientists was conducted to gain insight into how supercomputing systems are utilised. Once this data was gathered, a paper-based prototype was developed and evaluated by a group of test subjects, who critiqued the interface and suggested improvements. Based on this new data, the final prototype was developed, first on paper and then in software. The implementation makes use of lightweight Web 2.0 technologies: designing lightweight software allows one to exploit the dynamic properties of Web technologies and thereby create more usable interfaces that are also visually appealing. Finally, the system was once again evaluated by another group of test subjects. In addition to user evaluations, performance experiments and real-world case studies were carried out on the interface.
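To give a flavour of such a lightweight interface, the sketch below submits a job and polls its status against a hypothetical JSON API. The endpoint, field names and job states are invented for illustration and are not the dissertation's actual interface:

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical JSON endpoints for a scheduler front-end; the URL scheme,
# field names and job states below are invented for illustration.
BASE = "http://gridportal.example.org/api"

def submit_job(script_text: str, cpus: int = 1) -> str:
    """Submit a job description and return the scheduler's job id."""
    r = requests.post(f"{BASE}/jobs",
                      json={"script": script_text, "cpus": cpus})
    r.raise_for_status()
    return r.json()["job_id"]

def job_state(job_id: str) -> str:
    """Poll the job's state; the small JSON response, rather than a full
    HTML page per poll, is what keeps bandwidth and response time low."""
    r = requests.get(f"{BASE}/jobs/{job_id}")
    r.raise_for_status()
    return r.json()["state"]  # e.g. "queued", "running", "done"
```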
This research concluded that a dynamic, Web 2.0-inspired interface appeals to a large group of users and allows for greater flexibility in how data, in this case technical data, is presented. In terms of usability, the focal point of this research, it was found that it is possible to build an interface to a Grid scheduling system that can be used by people with no technical Grid knowledge. This is a significant outcome, as users were able to submit jobs to a Grid without fully comprehending the underlying complexity, while still understanding the task they were required to perform. Finally, the lightweight approach was found to be superior to the traditional HTML-only approach in terms of bandwidth usage and response time. In this particular implementation of the interface, the benefits of the lightweight approach are realised approximately halfway through a typical Grid job submission cycle.

15 | Link prediction and link detection in sequences of large social networks using temporal and local metrics
Cooke, Richard J. E. (01 November 2006)
This dissertation builds upon the ideas introduced by Liben-Nowell and Kleinberg in "The Link Prediction Problem for Social Networks" [42]. Link prediction is the problem of predicting between which unconnected nodes in a graph a link will form next, based on the current structure of the graph.
The following research contributions are made:
• Highlighting the difference between the link prediction and link detection problems, which have been implicitly treated as identical in existing research. Although hidden links and forming links have highly significantly different metric values, in an initial experiment a machine learning system using traditional metrics could not distinguish them. They could, however, be distinguished in a "simple" network (one where traditional metrics can be used successfully for prediction) using a combination of new graph analysis approaches.
• Defining temporal metric statistics by combining traditional statistical measures with measures commonly employed in financial analysis and traditional social network analysis. These metrics are calculated over time for a sequence of sociograms. Some of the temporal extensions of traditional metrics are shown to increase the accuracy of link prediction.
• Defining traditional metrics at radii other than those at which they are normally calculated. It is shown that this approach can increase the individual prediction accuracy of certain metrics, marginally increase the accuracy of a group of metrics, and, by computing metrics at smaller radii, greatly increase computation speed without sacrificing information content. The approach also addresses the "distance-three task" (common-neighbour metrics cannot predict links between nodes at a distance greater than two); a sketch of a local and a temporal metric follows this list.
• Showing that the combination of local and temporal approaches to link prediction can lead to very high prediction accuracies. Furthermore, in "complex" networks (ones where traditional metrics cannot be used successfully for prediction), local and temporal metrics become even more useful.
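As an illustration of the local and temporal ideas above (not the dissertation's exact metrics), the sketch below scores a node pair by damped walk counts of lengths two and three, so that distance-three pairs receive a nonzero score, and then takes the least-squares trend of that score over a sequence of sociograms:

```python
import networkx as nx
import numpy as np

def local_path_score(G, u, v, beta=0.1):
    """Walks of length 2 plus beta-damped walks of length 3 between u and
    v; the length-3 term gives distance-three pairs a nonzero score."""
    nodes = list(G)
    i, j = nodes.index(u), nodes.index(v)
    A = nx.to_numpy_array(G, nodelist=nodes)
    A2 = A @ A                 # (A^2)[i, j] counts walks of length 2
    return A2[i, j] + beta * (A2 @ A)[i, j]

def temporal_trend(graphs, u, v, beta=0.1):
    """Least-squares slope of the score across a sequence of sociograms;
    a rising trend suggests a forming link, in the spirit of the trend
    statistics used in financial analysis."""
    scores = [local_path_score(G, u, v, beta) for G in graphs]
    return np.polyfit(np.arange(len(scores)), scores, 1)[0]

# toy sequence: the neighbourhood of the pair (1, 4) densifies over time
snapshots = [
    nx.Graph([(1, 2), (2, 3), (3, 4)]),          # 1 and 4 at distance three
    nx.Graph([(1, 2), (2, 3), (3, 4), (2, 4)]),  # a common neighbour appears
]
print(local_path_score(snapshots[0], 1, 4))  # 0.1 (no common neighbour yet)
print(temporal_trend(snapshots, 1, 4))       # 1.0 (score rising over time)
```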