  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
781

Assessing program code through static structural similarity

Naude, Kevin Alexander January 2007 (has links)
Learning to write software requires much practice and frequent assessment. Consequently, the use of computers to assist in the assessment of computer programs has been important in supporting large classes at universities. The main approaches to the problem are dynamic analysis (testing student programs for expected output) and static analysis (direct analysis of the program code). The former is very sensitive to all kinds of errors in student programs, while the latter has traditionally only been used to assess quality, and not correctness. This research focusses on the application of static analysis, particularly structural similarity, to marking student programs. Existing traditional measures of similarity are limited in that they are usually only effective on tree structures. In this regard they do not easily support dependencies in program code. Contemporary measures of structural similarity, such as similarity flooding, usually rely on an internal normalisation of scores. The effect is that the scores only have relative meaning, and cannot be interpreted in isolation, i.e. they are not meaningful for assessment. The SimRank measure is shown to have the same problem, but not because of normalisation. The problem with the SimRank measure arises from the fact that its scores depend on all possible mappings between the children of vertices being compared. The main contribution of this research is a novel graph similarity measure, the Weighted Assignment Similarity measure. It is related to SimRank, but derives propagation scores from only the locally optimal mapping between child vertices. The resulting similarity scores may be regarded as the percentage of mutual coverage between graphs. The measure is proven to converge for all directed acyclic graphs, and an efficient implementation is outlined for this case. Attributes on graph vertices and edges are often used to capture domain specific information which is not structural in nature. 
It has been suggested that these should influence the similarity propagation, but no clear method for doing this has been reported. The second important contribution of this research is a general method for incorporating these local attribute similarities into the larger similarity propagation method. An example of attributes in program graphs is identifier names. The choice of identifiers in programs is arbitrary as they are purely symbolic. A problem facing any comparison between programs is that they are unlikely to use the same set of identifiers. This problem indicates that a mapping between the identifier sets is required. The third contribution of this research is a method for applying the structural similarity measure in a two step process to find an optimal identifier mapping. This approach is both novel and valuable as it cleverly reuses the similarity measure as an existing resource. In general, programming assignments allow a large variety of solutions. Assessing student programs through structural similarity is only feasible if the diversity in the solution space can be addressed. This study narrows program diversity through a set of semantics-preserving program transformations that convert programs into a normal form. The application of the Weighted Assignment Similarity measure to marking student programs is investigated, and strong correlations are found with the human marker. It is shown that the most accurate assessment requires that programs not only be compared with a set of good solutions, but rather with a mixed set of programs of varying levels of correctness. This research represents the first documented successful application of structural similarity to the marking of student programs.
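The locally optimal child-mapping idea described in this abstract can be sketched roughly as follows. This is a toy illustration only, not the thesis's actual Weighted Assignment Similarity measure: the function name and graph encoding are my own, and it brute-forces the optimal child assignment where a real implementation would use the Hungarian algorithm.

```python
from itertools import permutations

def sim(g1, g2, a, b, memo=None):
    """Structural similarity of vertex a in DAG g1 and vertex b in DAG g2.

    Graphs are adjacency dicts: vertex -> list of child vertices.
    The score of a pair is propagated from the best one-to-one
    (locally optimal) mapping between their child lists, so it can be
    read as a fraction of mutual coverage; two leaves score 1.0.
    Memoised recursion terminates because the graphs are acyclic.
    """
    if memo is None:
        memo = {}
    if (a, b) not in memo:
        ca, cb = g1.get(a, []), g2.get(b, [])
        if not ca and not cb:
            memo[(a, b)] = 1.0
        else:
            small, big = (ca, cb) if len(ca) <= len(cb) else (cb, ca)
            swap = len(ca) > len(cb)  # True when `small` holds g2's children
            best = 0.0
            # Brute-force the optimal injective child mapping; a real
            # implementation would use the Hungarian algorithm instead.
            for perm in permutations(big, len(small)):
                total = 0.0
                for u, v in zip(small, perm):
                    x, y = (v, u) if swap else (u, v)  # x in g1, y in g2
                    total += sim(g1, g2, x, y, memo)
                best = max(best, total)
            memo[(a, b)] = best / max(len(ca), len(cb))
    return memo[(a, b)]
```

Under this toy scoring, a tree compared with itself scores 1.0, and a two-child tree compared with a copy missing one child scores 0.5, which is the "percentage of mutual coverage" reading the abstract describes.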
782

Performance evaluation of the movable-slot TDM protocol and its application in metropolitan area networks

Hon, Lenny Kwok-Ming January 1987 (has links)
Movable-slot time-division multiplexing (MSTDM) is a medium access control protocol for the integration of voice and data in local area networks. In this thesis, the performance of this protocol is evaluated through mathematical analysis and simulation. Its application in metropolitan area networks is also studied. For the performance evaluation, a non-pre-emptive priority queuing model is first proposed for analysing the mean data delay characteristic of the slotted non-persistent carrier-sense multiple access with collision detection (CSMA/CD) protocol. Then this analytical approach is extended to the slotted MSTDM protocol with non-persistent data packet transmission, and its mean data delay performance is obtained. Numerical results from the analysis are shown and discussed. Moreover, simulation study of the MSTDM protocol is performed. Through the simulation results, the effects of this protocol on the general delay performances of voice and data are discussed. It is found that if first voice packets, which are generated at the beginning of talkspurts, are given a shorter retransmission delay than data packets, the channel-acquisition delay for voice sources can be reduced without sacrificing the data delay performance significantly. The simulation results are also used to verify the analytical results. As the comparisons show, the accuracy of the analysis is high although it is based on a simple approximate model. For the application of MSTDM in metropolitan area networks, a scheme which alleviates the distance and transmission rate constraints associated with this protocol is described. The approach is to divide the stations in a large area into regional groups, each operating in a different frequency band. Each group forms a sub-network which is part of the metropolitan area network. An access protocol is proposed for interconnecting these sub-networks. Also an analysis which finds the optimum number of sub-networks for interconnection is presented. 
The criterion is to minimize the mean data delay for communications in a sub-network. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
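For context, mean-delay analyses of the kind described above typically build on the classical result for the non-preemptive priority M/G/1 queue (Cobham's formula); the thesis's slotted CSMA/CD model is necessarily more elaborate, but the general shape is the same. With class 1 (e.g. first voice packets) having the highest priority among classes $1,\dots,K$:

```latex
% Non-preemptive priority M/G/1 (Cobham): mean waiting time of class k
\rho_i = \lambda_i\,\mathbb{E}[S_i], \qquad
\sigma_k = \sum_{i=1}^{k}\rho_i, \qquad
W_0 = \sum_{i=1}^{K}\frac{\lambda_i\,\mathbb{E}[S_i^2]}{2}
\qquad\Longrightarrow\qquad
W_k = \frac{W_0}{\bigl(1-\sigma_{k-1}\bigr)\bigl(1-\sigma_k\bigr)}
```

Here $W_0$ is the mean residual service seen by an arrival. The formula makes the abstract's simulation finding plausible: moving first voice packets into a higher-priority class (a shorter retransmission delay) lowers their $\sigma_{k-1}$ term while only mildly inflating the denominator for data.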
783

Design and implementation of a token bus protocol for a power line local area network

Gu, Hua January 1988 (has links)
This thesis presents the development and implementation of a token bus protocol for a Power Line Local Area Network (PLLAN) which utilizes the intra-building power distribution circuit as the physical transmission medium. This medium provides a low cost means for data communications with a high degree of portability. Due to the characteristics of the power line and the prototype modem, the network would be easily saturated with data and would have a high collision probability. The IEEE 802.4 token bus standard is modified to fit the PLLAN and to improve its performance. A comparative performance evaluation of the original protocol and the modified version shows that the latter provides an improvement in network throughput of up to 15 percent and a reduction in the network join-ring delay of up to 20 percent for a wide workload range. The performance figures of the modified version in a power line network of three SUN 3/50 workstations¹ transmitting at 9.6 kilobits per second are also presented and analyzed. ¹Sun workstation is a trademark of Sun Microsystems. / Science, Faculty of / Computer Science, Department of / Graduate
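As a rough illustration of the token bus mechanism the thesis modifies: IEEE 802.4 maintains a logical ring ordered by descending station address, with each station passing the token to its successor. A minimal sketch follows; the addresses and function names are hypothetical, and none of the PLLAN-specific modifications are shown.

```python
def logical_ring(addresses):
    """Token-passing successor map for a token bus: IEEE 802.4 orders
    the logical ring by descending station address, and the lowest
    address wraps back around to the highest."""
    ring = sorted(addresses, reverse=True)
    return {ring[i]: ring[(i + 1) % len(ring)] for i in range(len(ring))}

def token_path(addresses, start, rounds=1):
    """Stations visited by the token over a number of full rotations."""
    succ = logical_ring(addresses)
    path, station = [start], start
    for _ in range(rounds * len(addresses)):
        station = succ[station]
        path.append(station)
    return path
```

The join-ring delay the abstract measures is the time a newly powered station waits before being spliced into this successor map, which is one of the areas the modified protocol improves.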
784

Authorisation as audit risk in an information technology environment

Kruger, Willem Jacobus 05 February 2014 (has links)
M.Comm. / Please refer to full text to view abstract
785

'n Bestuurs- en metodologiese benadering tot gebeurlikheidsbeplanning vir die gerekenariseerde stelsels van 'n organisasie / A management and methodological approach to contingency planning for the computerised systems of an organisation

Nel, Yvette 28 July 2014 (has links)
M.Com. (Informatics) / The utilization of information technology is essential for an organization, not only to handle daily business activities but also to facilitate management decisions. The greater the dependence of the organization upon information technology, the greater the risk the organization is exposed to in case of an information systems interruption. Computer disasters, such as fires, floods, storms, sabotage and human error, constitute a security threat which could prejudice the survival of an organization. Disaster recovery planning is a realistic and imperative activity for each organization, whether large or small. In the light of the potential economic and legal implications of a disaster, it is no longer acceptable today not to be prepared for such an occurrence. A well designed and tested disaster recovery plan, as part of the total information security strategy of the organization, is therefore essential not only in terms of the recovery of business functions, but for the SURVIVAL of the organization. In view of the above, it could be expected that disaster recovery planning would be standard practice for all organizations. However, that is not the case. The literature study undertaken, as well as exposure in practice, indicates clearly that disaster recovery planning enjoys low priority in most organizations. The majority of existing plans are superficial, unstructured and insufficient, and will not be successful when a real disaster strikes. The most important single cause for the failure of an organization's disaster recovery plan is that too much emphasis is placed on the technical aspects rather than on the management or organizational aspects. The solution is an integrated approach combining the strategies and the multiple technologies which are available today. These strategies and technologies should be combined to meet the specific needs of the individual organization. 
The purpose of this dissertation was firstly to identify the most critical problems related to disaster recovery planning and secondly to provide a methodology for the development and implementation of a disaster recovery plan which addresses these problems. This methodology constitutes an enhancement of an existing information security methodology in order to establish a total information security strategy for a large organization, with disaster recovery as an essential aspect of this strategy. The final disaster recovery planning methodology as proposed in this dissertation was developed as a result of an extensive literature study undertaken, as well as involvement during the development of a disaster recovery system by the company which initiated this study.
786

Internal communication media selection in the University of Pretoria with emphasis on computer-mediated communication media

Jordaan, Leonore Leatishia Truter 21 July 2006 (has links)
The selection of one medium of communication above another may appear to be a matter of personal choice, and of little research consequence. Yet, insight into media preference when it comes to receiving internal communication messages may mean the difference between effective communication and lack of communication within an organisation. A number of theoretical perspectives have been advanced to explain communication media choice decisions. For the purpose of this study, the Media Richness Theory (MRT) and the Symbolic Interactionism Theory (SIT) were used to explore media selection at the University of Pretoria (UP). The MRT is concerned with identifying the most appropriate medium in terms of "medium richness" for communication situations characterised by equivocality and uncertainty. The SIT concurs with the MRT, but goes further and predicts that situational determinants such as distance and time and the symbolic cues provided by a medium, also influence media choice. The hypotheses were tested with data obtained from 174 employees (academic and non-academic) based on the main campus of the UP. A mail questionnaire was used to gather data. The questionnaire was developed to test MRT and SIT predictions with regard to media selection. The gathered data were analysed to reach general findings from the descriptive statistics and to test the hypotheses by using inferential statistics such as (a) chi-square tests, (b) analysis of variance (ANOVA) and (c) factor analysis. Research findings indicate that employees at UP tend to select face-to-face media for highly equivocal messages and written media for clear, objective messages. The results also indicate that where situational constraints such as distance and time pressure are present, people tend to choose "leaner" media, such as telephone and computer-mediated communication media, irrespective of the contents of the message. 
When symbolic meaning is intended, however, such as a desire for teamwork and trust, a "rich" medium is preferred. These findings are in support of MRT and SIT predictions. The results from the factor analysis indicate that organisational culture in UP plays a more significant role than the communicator or recipient where media selection is concerned. Based on this research, it can be accepted (at a 95% confidence level) that (a) media selection is determined by message equivocality, message uncertainty, situational constraints and symbolic meaning; (b) there is no significant dependence between years of service and media selection; and (c) there is a tendency to use computer-mediated communication media as much as or more than conventional media where messages of a non-personal nature are concerned; this is, however, not true for messages of a personal nature. In conclusion, although the findings of this study are only of an exploratory nature and based on a small section of the employees at UP, the results indicate the existence of a significant relationship between message contents, situational factors and media selection. Thus, effective internal communication may mean selecting the right medium to fit message contents and the situation in order to achieve mutual understanding and success. / Dissertation (MA)--University of Pretoria, 2006. / Communication Management / MA / Unrestricted
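The chi-square tests mentioned in this abstract check whether media choice is independent of a factor such as message type. A minimal sketch of the Pearson statistic on an observed contingency table follows; the table values and function name are invented for illustration, not the study's data.

```python
def chi_square(table):
    """Pearson chi-square statistic for an observed contingency table
    (e.g. rows: media choices, columns: message types). The statistic
    sums (observed - expected)^2 / expected over all cells, where the
    expected count assumes row and column factors are independent."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n  # independence model
            chi2 += (obs - exp) ** 2 / exp
    return chi2
```

A perfectly balanced table yields a statistic of zero (no evidence against independence); strong association between media choice and message type drives the statistic up, which is then compared against the chi-square distribution at the study's 95% confidence level.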
787

Proposta e validação de nova arquitetura de redes de data center / Proposal and Validation of New Architecture for Data Center Networks

Macapuna, Carlos Alberto Bráz 18 August 2018 (has links)
Orientadores: Mauricio Ferreira Magalhães; Christian Esteve Rothenberg / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Made available in DSpace on 2018-08-18 (GMT). Previous issue date: 2011 / Resumo: Assim como as grades computacionais, os centros de dados em nuvem são estruturas de processamento de informações com requisitos de rede bastante exigentes. Esta dissertação contribui para os esforços em redesenhar a arquitetura de centro de dados de próxima geração, propondo um serviço eficaz de encaminhamento de pacotes, que explora a disponibilidade de switches programáveis com base na API OpenFlow. Desta forma, a dissertação descreve e avalia experimentalmente uma nova arquitetura de redes de centro de dados que implementa dois serviços distribuídos e resilientes a falhas que fornecem as informações de diretório e topologia necessárias para codificar aleatoriamente rotas na origem usando filtros de Bloom no cabeçalho dos pacotes. Ao implantar um exército de gerenciadores de Rack atuando como controladores OpenFlow, a arquitetura proposta denominada Switching with in-packet Bloom filters (SiBF) promete escalabilidade, desempenho e tolerância a falhas. O trabalho ainda defende a ideia que o encaminhamento de pacotes pode tornar-se um serviço interno na nuvem e que a sua implementação pode aproveitar as melhores práticas das aplicações em nuvem como, por exemplo, os sistemas de armazenamento distribuído do tipo par <chave,valor>. Além disso, contrapõe-se ao argumento de que o modelo de controle centralizado de redes (OpenFlow) está vinculado a um único ponto de falhas. 
Isto é obtido através da proposta de uma arquitetura de controle fisicamente distribuída, mas baseada em uma visão centralizada da rede resultando, desta forma, em uma abordagem de controle de rede intermediária, entre totalmente distribuída e centralizada / Abstract: Cloud data centers, like computational Grids, are information processing fabrics with very demanding networking requirements. This work contributes to the efforts in re-architecting next generation data centers by proposing an effective packet forwarding service that exploits the availability of programmable switches based on the OpenFlow API. Thus, the dissertation describes and experimentally evaluates a new architecture for data center networks that implements two distributed and fault-tolerant services that provide the directory and topology information required to encode randomized source routes with in-packet Bloom filters. By deploying an army of Rack Managers acting as OpenFlow controllers, the proposed architecture called Switching with in-packet Bloom filters (SiBF) promises scalability, performance and fault-tolerance. The work also shows that packet forwarding itself may become a cloud internal service implemented by leveraging cloud application best practices such as distributed key-value storage systems. Moreover, the work contributes to demystify the argument that the centralized controller model of OpenFlow networks is prone to a single point of failure and shows that direct network controllers can be physically distributed, yielding thereby an intermediate approach to networking between fully distributed and centralized / Mestrado / Engenharia de Computação / Mestre em Engenharia Elétrica
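The in-packet Bloom filter idea at the core of the SiBF architecture can be sketched as follows: the source ORs a few hash-derived bits for every link of the chosen route into one fixed-size filter carried in the packet header, and each switch forwards on a link only when all of that link's bits are present. This is an illustrative toy; the hash choice, filter size, and link naming are my own, not SiBF's actual encoding.

```python
import hashlib

M, K = 256, 4  # filter size in bits and number of hash functions (illustrative)

def link_bits(link, m=M, k=K):
    """Bit positions for a directed link identifier such as 'S1->S2'."""
    digest = hashlib.sha256(link.encode()).digest()
    return {int.from_bytes(digest[2 * i:2 * i + 2], 'big') % m for i in range(k)}

def encode_route(links):
    """OR the bits of every link on the path into one in-packet filter."""
    bf = 0
    for link in links:
        for b in link_bits(link):
            bf |= 1 << b
    return bf

def forward_over(bf, link):
    """A switch forwards on a link iff all of its bits are set.
    False positives (forwarding on an extra link) are possible; that
    is the usual Bloom-filter trade-off, controlled by filter size."""
    return all(bf >> b & 1 for b in link_bits(link))
```

The appeal for data centers is that the header stays fixed-size regardless of path length, and switches need no per-flow state: the directory and topology services resolve the route, the source encodes it, and forwarding is a pure membership test.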
788

Development of a standards based open environment for the worldwide military command and control system

Laska, William David 30 March 2010 (has links)
The Worldwide Military Command and Control System (WWMCCS) is an operational multi-service/agency program that supports command and control functions for the National Command Authority and the Commanders of major unified and specified commands. These functions provide data processing capabilities including status of forces reporting, support requirements and contingency planning used in national security decision making. / Master of Science
789

Extending OWns to include protection functionality

Chittenden, Albert-Bruce 05 April 2007 (has links)
The objective of this dissertation is to enhance the functionality of an existing simulation package that is used to simulate fiber optic networks. These enhancements include the capability to simulate protection mechanisms following link failure, which is a necessity in real-world optical networks to ensure the continued flow of information following a failure in a part of the network. The capability for network traffic to choose from additional paths is also an addition to the software. Both the enhanced and the original simulation software are open source: this allows anyone to freely modify and improve the source code to suit his or her requirements. This dissertation will focus on mesh-based optical network topologies, which are commonly found in regional optical backbone networks, but which are also increasingly found in metropolitan areas. The regional networks all make use of wavelength division multiplexing (WDM), which consists of putting multiple different wavelengths of light on the same physical fiber. A single fiber breakage will therefore disrupt multiple fiber-optic connections. A fiber-optic network designer has to satisfy various conflicting requirements when designing a network: it must satisfy current and predicted future traffic requirements, it must be immune to equipment failure, but it must also be as inexpensive as possible. The network designer therefore has to evaluate different topologies and scenarios, and a good network simulator will provide invaluable assistance in finding an optimal solution. Protection and restoration need to be considered in conjunction with routing and wavelength assignment (RWA), to ensure that resources in a network are used at maximum efficiency. Connection restoration time is also examined: this should be minimised to ensure minimal network downtime and ensuing loss of revenue. 
The chosen alternate connection path should also be as short as possible to minimise use of resources and maximise the carrying capacity of the network. Blocking probability (the inability to establish a connection due to a congested network) is a crucial factor and is also investigated. The topologies investigated in this dissertation consist of various mesh based real-world regional WDM fiber-optic networks. The impact of various link failures, the addition of additional alternate paths, as well as the effect of a protection mechanism on these topologies are also investigated. The proposed goals were all successfully achieved. The capability of simulating single as well as multiple link failures was introduced to the simulation package. The blocking probability of various network topologies was compared to each other in the presence of link failures. Success was also achieved in the introduction of a third alternate path to the simulation package. / Dissertation (MEng(Electronic))--University of Pretoria, 2005. / Electrical, Electronic and Computer Engineering / unrestricted
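The link-disjoint protection idea underlying these simulations can be sketched as: compute a working path, ban its links, and search again for a backup. Below is a minimal hop-count version on an unweighted mesh; it is a sketch of the concept only (function names and the example topology are my own), and it ignores the wavelength-assignment dimension that OWns actually handles.

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Shortest path by hop count, avoiding the (undirected) links in `banned`."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and (u, v) not in banned and (v, u) not in banned:
                prev[v] = u
                q.append(v)
    return None  # destination unreachable: a blocked connection

def working_and_protection(adj, src, dst):
    """Primary shortest path plus a link-disjoint backup, if one exists."""
    primary = bfs_path(adj, src, dst)
    if primary is None:
        return None, None
    used = {(a, b) for a, b in zip(primary, primary[1:])}
    return primary, bfs_path(adj, src, dst, banned=used)
```

Because the backup shares no links with the working path, any single fiber cut leaves at least one of the two paths intact; a `None` return corresponds to the blocking events whose probability the dissertation measures.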
790

L'espace transnational et la localité : le réseautage et la sédimentation du passage / Transnational space and locality: networking and the sedimentation of passage

Roberge, Claire. January 2007 (has links)
No description available.
