211

Decision support framework for the adoption of software development methodologies.

Simelane, Lynette January 2019 (has links)
M. Tech. (Department of Information and Communication Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology. / There are many software development methodologies that are used to control the process of developing a software system. However, no definitive system exists to help software engineers select the best software development methodology (SDM). The increasing complexity of software development has made the management of software systems correspondingly complex, adding to the challenge professionals face in selecting the most appropriate SDM for a project. The choice matters because the wrong methodology is costly for the organisation: it may affect deliveries, maintenance costs, project budgets and reliability. In this study we propose a decision support framework to assist professionals in selecting software development methodologies appropriate to each organisation and project setting. The study implemented the case-based reasoning (CBR) methodology, a problem-solving approach centred on the reuse of past experiences, using the SQL programming language. We tested the precision of the decision support framework by comparing the recommended methodology to the methodology actually adopted for each project; the framework recorded 80% precision. The findings also contribute to reducing the software crisis faced by today's professionals. The framework can therefore be adopted as a reliable tool for methodology selection in software development projects.
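The abstract does not reproduce the SQL implementation, but the retrieval step of CBR is easy to illustrate. The sketch below is a minimal, hypothetical Python rendering of retrieve-and-reuse; the project attributes, similarity measure and case base are invented for illustration and are not drawn from the study.

```python
# Minimal case-based reasoning retrieval sketch (hypothetical attributes).
# Each past case records project characteristics and the SDM that was adopted.
CASE_BASE = [
    {"team_size": 5,  "requirements_stability": 0.2, "criticality": 0.3, "sdm": "Scrum"},
    {"team_size": 40, "requirements_stability": 0.9, "criticality": 0.8, "sdm": "Waterfall"},
    {"team_size": 12, "requirements_stability": 0.5, "criticality": 0.6, "sdm": "RUP"},
]

def similarity(case, query):
    """Inverse normalised distance over the numeric attributes."""
    dist = (
        abs(case["team_size"] - query["team_size"]) / 100.0
        + abs(case["requirements_stability"] - query["requirements_stability"])
        + abs(case["criticality"] - query["criticality"])
    )
    return 1.0 / (1.0 + dist)

def recommend(query):
    """Retrieve the most similar past case and reuse its methodology."""
    best = max(CASE_BASE, key=lambda case: similarity(case, query))
    return best["sdm"]

new_project = {"team_size": 8, "requirements_stability": 0.3, "criticality": 0.4}
print(recommend(new_project))  # -> "Scrum" for this toy case base
```

Precision as reported in the study then corresponds to the fraction of projects for which the recommended SDM matches the methodology that was actually adopted.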
212

Learning to Edit Code : Towards Building General Purpose Models for Source Code Editing

Chakraborty, Saikat January 2022 (has links)
The way software developers edit code day-to-day tends to be repetitive, often reusing existing code elements. Many researchers have tried to automate repetitive code editing by mining specific change templates. However, such templates typically have to be implemented manually before they can be applied automatically, which makes template-based code editing tedious to build. In addition, template-based code editing is often narrowly scoped and tolerates little noise. Machine learning, especially deep learning-based techniques, could help solve these problems because of its capacity for generalization and noise tolerance. The advancement of deep neural networks and the availability of vast open-source evolutionary data open up the possibility of automatically learning such templates from the wild and applying them in the appropriate context. However, deep neural modeling of code changes, and of code in general, introduces specific problems that need specific attention from the research community. For instance, source code exhibits strictly defined syntax and semantics inherited from the properties of the Programming Language (PL). In addition, the source code vocabulary (the possible number of tokens) can be arbitrarily large. This dissertation formulates automated code editing as a multi-modal translation problem: given a piece of code, its context, and some guidance, the objective is to generate the edited code. In particular, we divide the problem into two sub-problems: source code understanding and generation. We empirically show that deep neural networks (models in general) for these problems should be aware of PL properties (i.e., syntax and semantics). This dissertation investigates two primary directions for endowing models with knowledge of PL properties: (i) explicit encoding, where we design models catering to a specific property, and (ii) implicit encoding, where we train a very large model to learn these properties from a very large corpus of source code in an unsupervised way. With explicit encoding, we custom-design the model to cater to the property in question. As an example of such models, we developed CODIT, a tree-based neural model for syntactic correctness. We design CODIT around the Context-Free Grammar (CFG) of the programming language. Instead of generating source code directly, CODIT first generates the tree structure by sampling production rules from the CFG. Such a mechanism rules out infeasible production-rule selections. In a later stage, CODIT generates the edited code conditioned on the tree generated earlier. Such conditioning makes the edited code syntactically correct. CODIT showed promise in learning code edit patterns in the wild and proved effective in automatic program repair. In another empirical study, we showed that a graph-based model is better suited to source code understanding tasks such as vulnerability detection. On the other hand, with implicit encoding, we use a very large (several hundred million parameters) yet generic model. We pre-train these models on a super-large (usually hundreds of gigabytes) collection of source code and code metadata. We empirically show that, if sufficiently pre-trained, such models are capable of learning PL properties such as syntax and semantics. In this dissertation, we developed two such pre-trained models, with two different learning objectives.
First, we developed PLBART, the first pre-trained encoder-decoder model for source code, and show that such pre-training enables the model to generate syntactically and semantically correct code. We further present an in-depth empirical study on using PLBART for automated code editing. Finally, we developed another pre-trained model, NatGen, to encode into the model the natural coding conventions that developers follow. To design NatGen, we first deliberately modify developer-written code while preserving its original semantics. We call these 'de-naturalizing' transformations. Following previous studies on induced unnaturalness in code, we defined several such transformations and applied them to developer-written code. We pre-train NatGen to reverse the effect of these transformations. In this way, NatGen learns to generate code similar to what developers write by undoing the unnaturalness our 'de-naturalizing' transformations induce. NatGen has performed well in code editing and other source code generation tasks. The models and empirical studies produced for this dissertation go beyond automated code editing and are applicable to other software engineering automation problems such as code translation, code summarization, code generation, vulnerability detection, clone detection, etc. We therefore believe this dissertation will influence and contribute to the advancement of AI4SE and PLP.
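To make the 'de-naturalizing' idea concrete, here is a minimal sketch of one semantics-preserving transformation of the kind the abstract describes: renaming identifiers to opaque placeholders. This is an illustrative toy written against Python's ast module, not NatGen's actual implementation; a real transformer would also have to exempt builtins, imports and attribute names.

```python
import ast

class RenameVariables(ast.NodeTransformer):
    """Rename variable names to opaque placeholders (var_0, var_1, ...).

    Semantics are preserved within a closed snippet; a production
    'de-naturalizing' pass must additionally avoid renaming builtins
    and imported names.
    """
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if node.id not in self.mapping:
            self.mapping[node.id] = f"var_{len(self.mapping)}"
        node.id = self.mapping[node.id]
        return node

source = "total = 0\nfor price in prices:\n    total = total + price\n"
denatured = RenameVariables().visit(ast.parse(source))
print(ast.unparse(denatured))
# total/price/prices become var_0/var_1/var_2; NatGen-style pre-training
# would then learn to map the unnatural version back to the original.
```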
213

Rulemaking as Play: A Transdisciplinary Inquiry about Virtual Worldmaking

Qi, Zhenzhen January 2023 (has links)
In the age of computing, we rely on software to manage our days, from the moment we wake up until we go to sleep. Software predicts the future based on actualized data from the past. It produces procedures instead of experiences and solutions instead of care. Software systems tend to perpetuate a normalized state of equilibrium. Their application in social media, predictive policing, and social profiling is increasingly erasing diversity in culture and identity. Our immediate reality is narrowing towards cultural conventions shared among the powerful few, whose voices directly influence contemporary digital culture. On the other hand, computational collective intelligence can sometimes generate emergent forces that counter this tendency and force software systems to open up. Historically, artists from different artistic movements have adopted collaborative making to redefine the boundaries of creative expression. Video gaming, especially open-world simulation games, is rapidly being adopted as an emerging form of communication, expression, and self-organization. How can gaming conventions such as Narrative Emergence, Hacking, and Modding help us understand collective play as a counterforce against the systemic tendency toward normalization? How can people from diverse backgrounds come together to contemplate, make, and simulate rules and conditions for an alternative virtual world? What does it mean to design and virtually inhabit a world where rules are rewritten continuously by everyone, and no one is in control?
214

Integrating digital images into computer-based instruction: adapting an instructional design model to reflect new media development guidelines and strategies

Purcell, Steven L. 06 June 2008 (has links)
By and large, contemporary design models do little more than acknowledge the art and science of media development and instead place inordinate emphasis on media selection. While many texts on instructional design discuss, in general terms, the circumstances under which media need to be developed, their primary focus is on the selection and customization (e.g., repurposing videodiscs) of extant materials that support previously adopted goals, objectives, and instructional strategies. Although contemporary instructional design models do acknowledge computer-assisted instruction in general terms as part of the media selection and development processes, they fail to address specifically the development issues confronted when digital video is selected as an integral component of computer-based applications. Practitioners wishing to develop their own instructional materials (particularly those that incorporate digital video) are given few specific details for creating those products in the context of a systems approach to instructional development. This study examined the essential design tasks involved in incorporating digital video into computer-based applications. The strategy adopted for this study consisted of the following: 1) the author produced a computer-based application for The Museum of Natural History at Virginia Tech that integrated both digital motion-video sequences and still-image graphics; 2) each development step taken by the author was preserved through a set of design notes as well as videotaped records of designer and participant comments; 3) the design notes and videotaped records were subjected to qualitative analyses borrowed from standard ethnographic research procedures; 4) considerations for integrating digital video into computer-based applications were then abstracted from the analyses and presented as practical guidelines for practitioner-developers pursuing media development. A “traditional” model of instructional design was also modified to reflect state-of-the-art media development strategies. The model illustrates the general procedure of media development and places it in the context of a larger, systems approach to instructional design. The development steps include defining the product, conducting research, brainstorming ideas, generating design solutions, developing the prototype, testing the prototype, and developing the end product. The model also illustrated (by way of example) the creation of the computer-based application developed for The Virginia Museum of Natural History at Virginia Tech. / Ph. D.
215

A multi-agent system for administering the prescription of anti-retroviral and anti-TB drugs

Kuyler, Wilhelmina Johanna January 2007 (has links)
Thesis (M. Tech.) -- Central University of Technology, Free State, 2007 / Multi-agent systems (MAS) consist of a number of autonomous agents that communicate among themselves to coordinate their activities in order to solve collectively a complex problem that no agent can tackle individually. Such systems are appropriate in many domains where complex, distributed and heterogeneous problems require communication and coordination between separate autonomous agents, which may be running on different machines distributed over the Internet and located in many different places. In the health care domain, MAS have been used for distributed patient scheduling, organ and tissue transplant management, community care, decision support, training and so on. Another promising area of application is the prescription of anti-retroviral and anti-TB drugs. The drugs used to treat the two diseases have many, and similar, side effects that complicate the prescription process, and these side effects have to be considered when prescribing medication to a person co-infected with HIV and tuberculosis. This is usually done manually using drug recommendation tables, which are complicated to use and require a great deal of decision-making. The design and implementation of a multi-agent system that assists health care staff in carrying out the complex task of combining anti-retroviral and anti-TB drugs in an efficient way is described. The system consists of a number of collaborating agents requiring the communication of complex and diverse forms of information between a variety of clinical and other settings, as well as coordination between groups of health care professionals (doctors, nurses, counsellors, etc.) with very different skills and roles. The agents in the system include patient agents, nurse agents, lab agents, medication agents and physician agents. The agents may be hosted on different machines, located in many different places distributed over the Internet. The system saves time, minimises decision errors and increases the standard of health care provided to patients.
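The abstract does not specify the agents' protocols, so the sketch below only illustrates the coordination pattern it describes: a physician agent consulting a medication agent before confirming a prescription. The agent classes, the interaction table and the drug names are all hypothetical.

```python
# Toy multi-agent coordination sketch (hypothetical agents and drug data).
# A physician agent asks a medication agent to vet a proposed combination
# before a prescription is confirmed.

# Hypothetical interaction table; a real system would draw on the clinical
# drug recommendation tables the abstract mentions.
KNOWN_INTERACTIONS = {
    frozenset({"drug_A", "drug_B"}): "overlapping hepatotoxicity",
}

class MedicationAgent:
    def check_combination(self, drugs):
        """Return a list of warnings for the proposed drug combination."""
        warnings = []
        for pair, reason in KNOWN_INTERACTIONS.items():
            if pair <= set(drugs):
                warnings.append(f"{' + '.join(sorted(pair))}: {reason}")
        return warnings

class PhysicianAgent:
    def __init__(self, medication_agent):
        self.medication_agent = medication_agent

    def prescribe(self, patient_id, drugs):
        """Confirm a prescription only if the medication agent raises no warnings."""
        warnings = self.medication_agent.check_combination(drugs)
        if warnings:
            return {"patient": patient_id, "status": "review", "warnings": warnings}
        return {"patient": patient_id, "status": "confirmed", "drugs": drugs}

physician = PhysicianAgent(MedicationAgent())
print(physician.prescribe("p-001", ["drug_A", "drug_B", "drug_C"]))
# -> flagged for review because drug_A + drug_B interact in the toy table
```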
216

A study on creating a custom South Sotho spellchecking and correcting software desktop application

Grobbelaar, Leon A. January 2007 (has links)
Thesis (B. Tech.) - Central University of Technology, Free State, 2007
217

Exception handling in object-oriented analysis and design

Van Rensburg, Annelise Janse 01 January 2002 (has links)
This dissertation investigates current trends concerning exceptions. Exceptions influence the reliability of software systems. In order to develop software systems that are most robust, thus delivering higher availability at a lower development and operating cost, the occurence of exceptions needs to be reduced and the effects of the exceptions controlled. In order to do this, issues such as detection, identification, classification, propagation, handling, language implementation, software testing and reporting of exceptions must be attended to. Although some of these areas are well researched there are remaining problems. The quest is to establish if a unified exception-handling framework is possible and viable, which can address the issues and problems throughout the software development life cycle, and if so, the requirements for such a framework. / Computing / M.Sc. (Information Systems)
218

Researching the effects of culture on usability

Ford, Gabrielle 31 January 2005 (has links)
An experiment was conducted to determine the effects of subjective culture on the usability of computerized systems. The results of the experiment did not provide sufficient evidence to conclude that any of the tested cultural dimensions affected the usability of the product. Analysis of the results indicated that the differences in scores could have been attributable to variables other than those tested and controlled for. This indicated a need to build a more detailed conceptual model of usability before empirical research of this nature can be effectively conducted. Consequently, further work needed to be done to identify the variables that influence usability, and the strategies for controlling for these variables under experimental conditions. Through a literature investigation, the validity of some of the proposed variables was established, and some additional variables were identified. The valid variables were then incorporated into a conceptual model of usability for use in future research endeavors. / Information systems / M. Sc.
219

A computer software model for the assessment of commercial property loans

Wright, John Beric 03 1900 (has links)
Thesis (MBA)--Stellenbosch University, 2001. / ENGLISH ABSTRACT: The development of computer software is a complex and laborious task, further complicated by the fact that copyright legislation is vague, at best. If the software is being developed for commercial exploitation then speed to market is essential and, even then, there is little to prevent skilled competitors from copying or even cloning the model. During the course of the year 2000 a team of developers, comprising Phillip Munday, Chris Vietri and the writer, not only managed to develop and prototype a complex loan evaluation software model, but carried it through to the initial stages of a phased implementation; the team is presently involved in negotiations to sell the intellectual property rights (IPR) to a firm which specialises in the marketing of software to the banking industry internationally. It is virtually impossible for a single person to develop a model of this nature, as it requires a comprehensive set of skills, including broad-based financial knowledge and specialised banking skills as well as a sound knowledge of information systems architecture, not to mention software programming skills. The implementation and subsequent sale of the model further required comprehensive project management skills, as well as the human resources understanding demanded by the substantial change management involved. Each of these three parties brought not only their particular expertise to the table, but also a holistic view of the final shape and form of the model. As is the case with projects of this magnitude, numerous difficulties were encountered. These were, however, all overcome via a series of iterations, and the model was introduced to the business on schedule. The implementation itself was fraught with difficulty, but the combination of a phased approach, together with comprehensive training and support, has led to the acceptance of the model by business users. Some technical difficulties remain to be resolved, particularly the disappointing performance of the model over a wide area network and its integration with existing systems, but the model itself has exceeded expectations. It is simple to use, allows for a comprehensive and focused loan assessment and offers the ability to perform sophisticated sensitivity analysis in a fraction of a second. The model is now in its final shape and has been formally named Version 1.0, yet a great deal of work remains. We, as a bank, are not ideally suited to become purveyors of software and need to expedite the transfer of the IPR to a neutral party, to avoid local banks who might wish to purchase it from viewing our involvement with suspicion. Once this has been done, and the final phase of implementation concluded in March 2001, we will be able to move on to the exciting task of creating derivatives of the model, aimed at meeting the needs of other elements of the industry. / AFRIKAANSE OPSOMMING: The development of computer software is a long and intensive process, further complicated by inadequate and untested copyright legislation. When the aim of software development is profit, speed of delivery to users is of the utmost importance, since many competitors have the ability to imitate and improve on a model. During the past year a development team consisting of Phillip Munday, Chris Vietri and the writer developed a working model of a sophisticated credit-evaluation software system. Not only has this model been carried through to a phased internal implementation; it is now far enough advanced for the intellectual property rights to be sold to a group that specialises in marketing banking software worldwide. It is almost impossible for one person to develop such a model, owing to the comprehensive financial and banking knowledge required, along with a thorough knowledge of software architecture and programming. Implementing and selling the program also demands broad expertise in project management and change management, given the potential structural changes at a new user. Each of the three parties contributed not only his own expertise but also an overarching view of the final model. As with every project of this scope, there were major obstacles. The challenges were overcome through many iterations, and the model was introduced to the business on time. Implementation was more difficult than expected, but a phased process together with comprehensive training and support ensured acceptance by users. A few unresolved technical problems remain, such as poor performance over a wide area network and difficult integration with existing systems. Nevertheless, the model has exceeded most expectations. It is easy to use, it ensures thorough credit evaluation, and it creates the opportunity to perform multiple sensitivity analyses simultaneously. The model is now in its final version, known as "Version 1.0", but it still requires considerable refinement. As a bank we are not suited to supplying software, and the sale of the intellectual property rights to an intermediary must therefore be expedited. This will prevent our bank's involvement from being viewed with suspicion by local banks. Once this has been accomplished and the final implementation phase completed by March 2001, we can move on to the exciting task of developing derivative models that will meet the needs of wider sectors.
220

The development and testing of a computer aided instructional resource for the teaching of physical science

Van Zyl, Kevin Clive 12 1900 (has links)
Thesis (PhD)--University of Stellenbosch, 2004. / ENGLISH ABSTRACT: This study set out to develop and test a Computer Aided Instructional Resource for Physical Science in Grades 11 and 12. The software was tested in the context of Newtonian mechanics. This study differed from most other studies in that it did not develop or test tutoring-type software that the learner uses on a one-to-one basis in a computer laboratory. It developed and tested, instead, software to be used by the teacher in the classroom while teaching. A theoretical framework is presented, built on experience-based as well as literature-based theory. In this framework, the effects of computer interventions on the teaching and learning situation, as reported in the literature, are viewed within the South African context. In the light of what is reported in the literature, the education authorities' attempts to disseminate the curriculum with the use of technology are questioned. Reasons for not doing a quantitative assessment of learner understanding of concepts are presented, with reference to criticism in the literature against such assessments. The dissertation reports on the type of questions that, according to the literature, need to be asked. This discussion then leads to research questions that describe a process for developing and testing a resource that could assist teachers in teaching Physical Science. Development methods as well as ways of assessing had to be researched to determine the best way in which such a resource could be developed and tested. During this research it was found that the implementation of Information and Communication Technology (ICT) to deliver the curriculum had focused more on the development of tutoring-type software, and that the use of computers for actual classroom instruction had not received as much attention. It was, however, possible to identify developmental and assessment principles common to previous research and the project reported in this dissertation. The Computer Aided Instructional Resource (CAIR) was developed by the researcher in the form of a presentations package that the teacher could use in the classroom while teaching. It was tested in a Prototyping Stage in the researcher's classroom before being tested in eight project schools during the Piloting Stage. This was done by connecting personal computers to 74 cm televisions and then displaying the CAIR on the TV while teaching. This was made possible by TRAC South Africa, which funded the project. It also provided an opportunity to assess the use of the TRAC system in the same schools. After assessment criteria had been identified, assessment instruments were developed to assess the project in different ways. There were questionnaires for each stage, to be completed by learners and teachers, as well as an observation instrument used by the researcher during classroom visits. These assessment instruments made it possible to assess the CAIR with respect to didactical, visual and technical considerations. Results of the empirical study are presented under the assessment criteria that had been identified and are discussed with reference to the original research questions. The results of the assessment were very positive for both the CAIR and TRAC systems. The study has, however, tried to focus on negative rather than positive outcomes, to present as unbiased a picture as possible of the assessment results.
It was also necessary to focus on the negative to determine how and where the CAIR could be improved and to make recommendations regarding the implementation of the TRAC system. Recommendations are also made for immediate action and further investigation. / AFRIKAANSE OPSOMMING: This study set out to develop and test a computer-aided teaching resource. The software was developed and tested in the context of the teaching of mechanics. The study differs from most other studies in that the software was not developed for use by learners in a one-to-one situation in a computer laboratory. Rather, the software was developed to be used by the teacher while teaching in the classroom. A theoretical framework built on experience and on literature research is presented. In this framework, the effect of computer interventions on the teaching-learning situation, as reported in the literature, is placed within the South African context. The education authorities' attempts to disseminate the curriculum by means of technology are questioned on the basis of information obtained from the literature. Reasons why a quantitative evaluation of learner understanding of concepts was not done are presented, with reference to criticism of such evaluations in the literature. Questions that, according to the literature, should indeed be asked are reported. This discussion leads to the research questions, which describe a process for the development and testing of a resource that can be of use to teachers in the teaching of Physical Science. Development methods as well as qualitative evaluation were researched to determine the best methods for development and testing. It was found that the implementation of Information and Communication Technology to deliver the curriculum had focused more on tutorial-type software; the use of computers for classroom instruction had not received as much attention in the literature. It was, however, possible to identify principles for development and testing that had been used in other studies and that could also be applied here. The resource was developed in the form of a presentation package that the teacher can use in the classroom while teaching. The prototype was tested in the researcher's classroom before being tested in eight project schools in a pilot programme. This was done by connecting a personal computer in each classroom to a 74 cm television. It was made possible by TRAC South Africa, which provided funding for the project. It also provided an opportunity to do a qualitative evaluation of the TRAC system in the same schools. After evaluation criteria had been identified, measuring instruments were developed to test the project in different ways. Questionnaires had to be completed by learners and teachers in each phase. There was also an instrument for use by the researcher during class visits. The resource could thus be tested in terms of didactic, visual and technical aspects. The results of the empirical study are presented under the evaluation criteria and discussed with reference to the original research questions. The results were very positive for both the teaching resource and the TRAC system. The study attempted to present the results as neutrally as possible by concentrating on the negative rather than the positive. It was also necessary to concentrate on the negative in order to determine how the resource could be improved and to make recommendations regarding the implementation of the TRAC system. Recommendations are also made about immediate action that can be taken, as well as about possible further investigation.
