21

Model Driven Communication Protocol Engineering and Simulation based Performance Analysis using UML 2.0

de Wet, Nico 01 January 2005
The automated functional and performance analysis of communication systems specified with some Formal Description Technique (FDT) has long been a goal of telecommunication engineers. In the past, SDL and Petri nets have been the most popular FDTs for this purpose. With the growth in popularity of UML, the obvious question is whether one or more UML diagrams describing a system can be translated into a performance model. Until the advent of UML 2.0 this was not feasible, since the semantics were not clear. Even though the UML semantics are still not fully defined for this purpose, with UML 2.0 now released, and using ITU recommendation Z.109, this dissertation describes a methodology and tool called proSPEX (protocol Software Performance Engineering using XMI) for the design and performance analysis of communication protocols specified with UML. Our first consideration in developing the methodology was to identify the roles of UML 2.0 diagrams in the performance modelling process. We also considered questions regarding the specification of non-functional duration constraints, or temporal aspects, and developed a semantic time model that addresses the language's lack of means for specifying communication delay and processing times. Environmental characteristics such as channel bandwidth and buffer space can be specified, and realistic assumptions are made regarding time and signal transfer. With proSPEX we aimed to integrate a commercial UML 2.0 model editing tool and a discrete-event simulation library; such an approach has been advocated as necessary for a closer integration of performance engineering with formal design and implementation methodologies. To realize the integration we first identified a suitable simulation library and then extended it with the features required to represent high-level SDL abstractions, such as extended finite state machines (EFSM) and signal addressing. In implementing proSPEX we filtered the XML output of our editor and used text templates for code generation. Filtering the XML output and extending our simulation library with EFSM abstractions proved to be significant implementation challenges. Lastly, to illustrate the utility of proSPEX, we conducted a performance analysis case study in which the efficient short remote operations (ESRO) protocol is used in a wireless e-commerce scenario.
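To give a flavour of the EFSM and signal-addressing abstractions mentioned above, the minimal sketch below shows how an extended finite state machine with per-process signal queues might sit on top of a tiny discrete-event core. It is illustrative only, not the proSPEX implementation: the class names, the 50 ms delay, and the sender/receiver example are assumptions made for this sketch.

```python
import heapq
from collections import deque

class Signal:
    """A signal carries a name, a destination address, and an optional payload."""
    def __init__(self, name, dest, payload=None):
        self.name, self.dest, self.payload = name, dest, payload

class EFSMProcess:
    """Minimal extended finite state machine: state + context variables + input queue.

    Transitions map (state, signal name) -> handler; handlers may update
    context variables, emit new signals, and return the next state.
    """
    def __init__(self, address):
        self.address = address
        self.state = "idle"
        self.vars = {}            # extended state variables
        self.queue = deque()      # per-process signal queue (SDL-style)
        self.transitions = {}     # (state, signal_name) -> handler

    def on(self, state, signal_name, handler):
        self.transitions[(state, signal_name)] = handler

    def step(self, sim):
        if not self.queue:
            return
        sig = self.queue.popleft()
        handler = self.transitions.get((self.state, sig.name))
        if handler:
            self.state = handler(self, sig, sim) or self.state

class Simulator:
    """Tiny discrete-event core: a time-ordered event list of signal deliveries."""
    def __init__(self):
        self.now = 0.0
        self.events = []          # heap of (delivery time, sequence number, signal)
        self.processes = {}       # address -> EFSMProcess
        self._seq = 0

    def add(self, proc):
        self.processes[proc.address] = proc

    def send(self, signal, delay=0.0):
        heapq.heappush(self.events, (self.now + delay, self._seq, signal))
        self._seq += 1

    def run(self, until=float("inf")):
        while self.events and self.events[0][0] <= until:
            self.now, _, sig = heapq.heappop(self.events)
            proc = self.processes[sig.dest]      # signal addressing
            proc.queue.append(sig)
            proc.step(self)

# Example: a sender waits for an ACK that arrives after a simulated 50 ms delay.
sim = Simulator()
sender, receiver = EFSMProcess("sender"), EFSMProcess("receiver")

def on_data(proc, sig, sim):
    sim.send(Signal("ack", dest="sender"), delay=0.05)   # processing + channel delay
    return "idle"

def on_ack(proc, sig, sim):
    proc.vars["acked"] = proc.vars.get("acked", 0) + 1
    return "done"

receiver.on("idle", "data", on_data)
sender.on("waiting", "ack", on_ack)
sender.state = "waiting"
sim.add(sender); sim.add(receiver)
sim.send(Signal("data", dest="receiver"))
sim.run()
print(sim.now, sender.state, sender.vars)   # 0.05 done {'acked': 1}
```

A performance model in this style accumulates simulated time as signals traverse channels and processes, which is what allows delay and throughput figures to be read off the simulation.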
22

Optimising Information Retrieval from the Web in Low-bandwidth Environments

Balluck, Ashwinkoomarsing 01 June 2007
The Internet has the potential to deliver information to Web users who have no other way of reaching those resources. However, information on the Web is scattered, without proper semantics for classifying it, which makes information discovery difficult. To ease querying of this huge body of information, developers have built tools, among them search engines and Web directories. For these tools to give optimal results, however, two factors need due attention: the users' ability to use the tools, and the bandwidth available in the environment. An initial study showed that neither factor was favourable in Mauritius, where low bandwidth prevails. This study therefore set out to gain a better idea of how users use search tools. To achieve this, we designed a survey in which Web users were asked about their skills in using search tools. A jump page combining the search boxes of different search engines was then developed to provide directed guidance for effective searching in low-bandwidth environments. We then conducted a further evaluation with a sample of users to see whether there were any changes in the way they accessed the search tools, and examined the results. We found that users were initially unaware of the specificities of the different search tools, which prevented efficient use. During the survey they were educated in how to use those tools, and this proved fruitful in the follow-up evaluation. The more efficient use of the search tools in turn helped reduce traffic in low-bandwidth environments.
23

A lightweight interface to local Grid scheduling systems

Parker, Christopher P 01 May 2009
Many complex research problems require an immense amount of computational power to solve. To solve such problems, the concept of the computational Grid was conceived. Although Grid technology is hailed as the next great enabling technology in Computer Science, the last being the inception of the World Wide Web, some concerns have to be addressed if it is to succeed. The main difference between the Web and the Grid in terms of adoption is usability: the Web was designed with both functionality and end-users in mind, whereas the Grid has been designed solely with functionality in mind. Although large Grid installations are operational around the globe, their use is restricted to those with in-depth knowledge of their complex architecture and functionality. The technology is therefore out of reach for the very scientists who need these resources, because of its sheer complexity. The Grid is likely to succeed as a tool for some large-scale problem solving, as there is no alternative on a similar scale. However, to integrate such systems into our daily lives, as the Web has been, they need to be accessible to "novice" users; without such accessibility, their use and growth will remain constrained. This dissertation details one possible way of making the Grid more accessible: providing high-level access to the scheduling systems on which Grids rely. Since "the Grid" is a mechanism for transferring control of user-submitted jobs to third-party scheduling systems, high-level access to the schedulers themselves was deemed a natural place to begin usability-enhancing efforts. To design a highly usable and intuitive interface to a Grid scheduling system, a series of interviews with scientists was conducted to gain insight into the way supercomputing systems are used. Once this data was gathered, a paper-based prototype system was developed and evaluated by a group of test subjects who set out to criticise the interface and suggest where it could be improved. Based on this new data, the final prototype was developed, first on paper and then in software. The implementation makes use of lightweight Web 2.0 technologies; designing lightweight software allows one to exploit the dynamic properties of Web technologies and thereby create more usable interfaces that are also visually appealing. Finally, the system was once again evaluated by another group of test subjects, and performance experiments and real-world case studies were carried out on the interface. This research concluded that a dynamic, Web 2.0-inspired interface appeals to a large group of users and allows greater flexibility in the way data, in this case technical data, is presented. In terms of usability, the focal point of this research, it was found that it is possible to build an interface to a Grid scheduling system that can be used by users with no technical Grid knowledge. This is a significant outcome: users were able to submit jobs to a Grid without fully comprehending the complexities involved, while still understanding the task they were required to perform. Finally, in terms of bandwidth usage and response time, the lightweight approach was found to be superior to the traditional HTML-only approach.
In this particular implementation of the interface, the benefits of using a lightweight approach are realised approximately halfway through a typical Grid job submission cycle.
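As a rough sketch of what high-level access to a scheduler can mean in practice (this is not the dissertation's interface; the function name, the PBS directives, and the example job are assumptions made for illustration), a thin wrapper might translate a simple job description into a batch script and submit it on the user's behalf:

```python
import subprocess
import tempfile

def submit_job(name, command, nodes=1, walltime="01:00:00"):
    """Translate a high-level job description into a PBS-style batch script
    and hand it to the local scheduler.  The #PBS directives and the `qsub`
    call illustrate one common scheduler; a real portal would target whatever
    system the site actually runs (PBS, SGE, LSF, ...).
    """
    script = "\n".join([
        "#!/bin/sh",
        f"#PBS -N {name}",
        f"#PBS -l nodes={nodes}",
        f"#PBS -l walltime={walltime}",
        command,
        "",
    ])
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script)
        path = f.name
    # qsub prints the new job identifier on success
    result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()

# A lightweight browser front end would call an endpoint wrapping this function
# asynchronously and poll for status, so the user never sees a scheduler script
# and only small responses cross the network rather than full page reloads.
# job_id = submit_job("render-frames", "./render --frames 1-100", nodes=4)
```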
24

An evaluation of remote communities services telecentres in five rural Newfoundland communities

Dwyer, Patricia, January 2004
Thesis (M.Sc.)--Memorial University of Newfoundland, 2004. / Restricted until May 2005. Bibliography: leaves 76-92.
25

Processing and Extending Flow-Based Network Traffic Measurements / Verarbeitung und Erweiterung der Flow-basierten Messungen von Netzwerkverkehr

Anderson, Sven 20 April 2009
No description available.
26

Disaster medicine - performance indicators, information support and documentation: a study of an evaluation tool

Rüter, Anders, January 2006
Dissertation (summary), Linköping: Linköpings universitet, 2006. / Accompanied by 5 papers.
27

On monitoring and fault management of next generation networks

Shi, Lei 04 November 2010
No description available.
28

Firewall Traversal in Mobile IPv6 Networks

Steinleitner, Niklas 09 October 2008
No description available.
29

Link prediction and link detection in sequences of large social networks using temporal and local metrics

Cooke, Richard J. E. 01 November 2006
This dissertation builds upon the ideas introduced by Liben-Nowell and Kleinberg in "The Link Prediction Problem for Social Networks" [42]. Link prediction is the problem of predicting between which unconnected nodes in a graph a link will form next, based on the current structure of the graph. The following research contributions are made (a small illustrative sketch of the metrics involved follows this list):
• Highlighting the difference between the link prediction and link detection problems, which have implicitly been regarded as identical in current research. Although hidden links and forming links have highly significantly different metric values, an initial experiment showed that a machine learning system using traditional metrics could not distinguish them. They could, however, be distinguished from each other in a "simple" network (one where traditional metrics can be used for prediction successfully) using a combination of new graph analysis approaches.
• Defining temporal metric statistics by combining traditional statistical measures with measures commonly employed in financial analysis and traditional social network analysis. These metrics are calculated over time for a sequence of sociograms, and some of the temporal extensions of traditional metrics are shown to increase the accuracy of link prediction.
• Defining traditional metrics using different radii from those at which they are normally calculated. This approach can increase the individual prediction accuracy of certain metrics, marginally increase the accuracy of a group of metrics, and, by using smaller radii, greatly speed up metric computation without sacrificing information content. It also solves the "distance-three task" (common-neighbour metrics cannot predict links between nodes at a distance greater than three).
• Showing that the combination of local and temporal approaches to link prediction can lead to very high prediction accuracies, and that in "complex" networks (ones where traditional metrics cannot be used for prediction successfully) local and temporal metrics become even more useful.
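To make the flavour of these local and temporal metrics concrete, the sketch below scores candidate links in a small sequence of network snapshots using a traditional local metric (common neighbours) and a simple temporal statistic of it. It is not the dissertation's code; the snapshots, the choice of metric, and the current-value-plus-trend score are assumptions chosen for illustration.

```python
import networkx as nx

# Three hypothetical snapshots of the same social network at t = 0, 1, 2.
snapshots = [
    nx.Graph([("a", "b"), ("b", "c")]),
    nx.Graph([("a", "b"), ("b", "c"), ("c", "d")]),
    nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("b", "d")]),
]

def common_neighbour_series(snaps, u, v):
    """Number of common neighbours of (u, v) in each snapshot."""
    return [
        len(list(nx.common_neighbors(g, u, v))) if u in g and v in g else 0
        for g in snaps
    ]

def slope(series):
    """Least-squares slope of the series against time: a simple temporal statistic."""
    n = len(series)
    t_mean = (n - 1) / 2
    x_mean = sum(series) / n
    num = sum((t - t_mean) * (x - x_mean) for t, x in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# Score every unconnected pair in the latest snapshot: current value plus trend.
latest = snapshots[-1]
for u, v in nx.non_edges(latest):
    series = common_neighbour_series(snapshots, u, v)
    print(u, v, series, "score:", series[-1] + slope(series))
```

A rising series boosts a pair's score even when its current metric value is modest, which is the intuition behind combining temporal statistics with traditional local metrics.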
30

Visual simultaneous localization and mapping in a noisy static environment

Makhubela, J. K. 03 1900
M. Tech. (Department of Information and Communication Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology / Simultaneous Localization and Mapping (SLAM) has seen tremendous interest in the research community in recent years due to its ability to make a robot truly independent in navigation. Visual Simultaneous Localization and Mapping (VSLAM) is when an autonomous mobile robot is equipped with a vision sensor, such as a monocular, stereo vision, omnidirectional or Red Green Blue Depth (RGBD) camera, to localize and map an unknown environment. The purpose of this research is to address the problem of environmental noise, such as light intensity in a static environment, which renders a VSLAM system ineffective. In this study we introduce a Light Filtering Algorithm into the VSLAM method to reduce the amount of noise and improve the robustness of the system in a static environment, together with the Extended Kalman Filter (EKF) algorithm for localization and mapping and the A* algorithm for navigation. Simulation is used for the experimental evaluation. Experimental results show detection of 60% of the landmarks or land features within the simulation environment, and a root mean square error (RMSE) of 0.13 m, which is small compared with other SLAM systems in the literature. The inclusion of the Light Filtering Algorithm enabled the VSLAM system to navigate in an obscure environment.
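The abstract names A* as the navigation component. The planner itself is not given, but a generic grid-based A* search of the kind commonly used for such navigation can be sketched as follows; the occupancy grid, start, and goal below are made-up values, not taken from the thesis.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = occupied)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]        # priority queue of (f = g + h, cell)
    g_cost = {start: 0}
    parent = {start: None}
    closed = set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                   # walk the parent chain back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in closed:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    parent[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None                           # goal unreachable

# Hypothetical 4x4 occupancy grid: plan from the top-left to the bottom-right corner.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3), (3, 3)]
```

In a VSLAM setting the occupancy grid would be built from the estimated map, with the robot's estimated pose as the start cell.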
