  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
351

Investment justification of information systems : a focus on the evaluation of MRPII

Irani, Zahir January 1998
A review of the normative literature in the field of Information Technology (IT)/Information Systems (IS) justification examines how organisations evaluate their investments in Manufacturing Resource Planning (MRPII). This is achieved by investigating the issues surrounding capital budgeting, with a particular focus on investment appraisal. A novel taxonomy of generic appraisal techniques is proposed. This taxonomy identifies a number of methods for appraising MRPII investments, and through describing these techniques, a classification is offered that identifies their respective characteristics and limitations. It becomes clear that although many of the benefits and savings resulting from MRPII are suitable for inclusion within traditional accountancy frameworks, it is their intangible and non-financial nature, together with a range of indirect project costs, that confuses the justification process. These factors, together with a range of human and organisational implications that further complicate the decision-making process, are also identified. Hence, it appears from a critical review of the literature that many companies are unable to assess the implications of their MRPII investments, amounting to a myopic appraisal process that focuses on the analysis of those benefits and costs that are financially quantifiable. In acknowledging the limitations of traditional appraisal techniques, a conceptual model for IT/IS investment evaluation is proposed, underpinned by research hypotheses. To test the validity of the proposed hypotheses, a robust, novel research methodology is then developed, adopting an interpretivist stance that favours the use of qualitative research methods during a multiple case enquiry. During the empirical research it soon emerged that the hypotheses represented significant factors for consideration within the presented model.
As a result, these constructs now establish themselves as integral parts of a structured evaluation process. However, during the empirical research complementary evaluation criteria also emerged, which resulted in modifications being made to the previously presented conceptual model. This culminated in the development of descriptive MRPII evaluation criteria and a model that provides investment decision makers with novel frames of reference during the evaluation of MRPII investment proposals.
352

Enhancing Information Security in Cloud Computing Services using SLA based metrics

Mganga, Ramadianti Putri Nia; Charles, Medard January 2011
Context: Cloud computing is a prospering technology that most organizations are considering for adoption as a cost-effective strategy for managing IT. However, organizations still consider the technology to be associated with many business risks that are not yet resolved. Such issues include security, privacy, and legal and regulatory risks. As an initiative to address such risks, organizations can develop and implement an SLA to establish common expectations and goals between the cloud provider and the customer. Organizations can then use the SLA to measure the achievement of the outsourced service. However, many SLAs tend to focus on cloud computing performance whilst neglecting information security issues. Objective: We identify threats and security attributes applicable in cloud computing. We also select a framework suitable for identifying information security metrics. Moreover, we identify SLA-based information security metrics in the cloud in line with the COBIT framework. Methods: We conducted a systematic literature review (SLR) to identify studies focusing on information security threats in cloud computing. We also used the SLR to select frameworks available for the identification of security metrics. We used the Engineering Village and Scopus online citation databases as primary sources of data for the SLR. Studies were selected based on the inclusion/exclusion criteria we defined, and a suitable framework was selected based on defined framework selection criteria. Based on the selected framework and a conceptual review of the COBIT framework, we identified SLA-based information security metrics in the cloud. Results: Based on the SLR we identified security threats and attributes in the cloud. The Goal Question Metric (GQM) framework was selected as suitable for the identification of security metrics. Following the GQM approach and the COBIT framework, we identified ten areas that are essential and related to information security in cloud computing.
In addition, covering the ten essential areas, we identified 41 SLA-based information security metrics that are relevant for measuring and monitoring the security performance of cloud computing services. Conclusions: Cloud computing faces similar threats to traditional computing. Depending on the service and deployment model adopted, addressing security risks in the cloud may become a more challenging and complex undertaking. This situation therefore places on cloud providers the need to execute their key responsibility of creating not only a cost-effective but also a secure cloud computing service. In this study, we assist both cloud providers and customers with the security issues that are to be considered for inclusion in their SLA. We have identified 41 SLA-based information security metrics to help both cloud providers and customers establish common security performance expectations and goals. We anticipate that adoption of these metrics can help cloud providers enhance security in the cloud environment. The metrics will also assist cloud customers in evaluating the security performance of the cloud for improvements.
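The goal-to-question-to-metric derivation the abstract describes can be sketched as a small data structure. The goal, questions, and metric names below are illustrative assumptions, not the actual areas or 41 metrics identified in the thesis:

```python
# Hedged sketch of a Goal-Question-Metric (GQM) hierarchy for one invented
# security area; flattening it yields candidate SLA metrics.
from dataclasses import dataclass, field

@dataclass
class Goal:
    purpose: str
    # question text -> list of metric names that answer it
    questions: dict = field(default_factory=dict)

availability_goal = Goal(
    purpose="Assess availability of the cloud service from the customer's view",
    questions={
        "How often is the service reachable?": [
            "monthly uptime percentage",
            "number of outages per month",
        ],
        "How quickly is service restored after an outage?": [
            "mean time to recovery (hours)",
        ],
    },
)

# Flatten the hierarchy into the list of candidate SLA metrics.
sla_metrics = [m for metrics in availability_goal.questions.values() for m in metrics]
print(sla_metrics)
```

Repeating this derivation per area is one plausible way to arrive at a metric catalogue of the kind the study reports.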
353

Non-Functional Requirement Modeling in the Early-Phase Software Product Life Cycle : A Systematic Literature Review and a Meta Model

Nanduru, Pavan Kumar January 2017
Context. Non-functional requirements (NFRs) are important aspects which directly or indirectly determine whether a product is a success or a failure. It becomes essential to incorporate and understand them before the software product enters the development phase. Despite the increasing emphasis put into NFR studies (models, frameworks, etc.) over the past few years, most industries prefer not to use these techniques, or to deal with NFRs later in simpler manners. This could limit the efficiency of the development process. Integration of the existing NFR models/frameworks into the earlier phases of the product life cycle can provide a systematic approach to plan and anticipate NFRs for any software product. Objectives. This study aims to provide a generic meta model which acts as a compilation of the best NFR models/frameworks integrated into the early phases of the software product life cycle. This study also provides a real-world example which applies the conceptual meta model. Lastly, the meta model undergoes some limited validation to determine its relevance to what is being used and the extent of its practical use. Methods. Initially, a systematic literature review (snowballing) was conducted to identify the different types of NFR models/frameworks. A comparative pro-con analysis was performed on the results of the SLR, which formed the basis of the inclusion criteria for the meta model. The conceptual meta model was developed based on the International Software Product Management Association’s (ISPMA) definition of a product life cycle. Each phase of this meta model was embedded with an NFR model/framework associated with the purpose of that phase and the results from the SLR. The application of the meta model was then demonstrated using a mobile phone example. Finally, the meta model was given a limited validation via an exploratory survey and the results were analyzed. Results.
The meta model introduced can be used for the constructive inclusion of NFRs from product inception to product development. All phases required for the fulfillment of an NFR are included. Overall positive feedback on the meta model stands at 67%. Validations and assessments by practitioners helped determine, to some extent, that some industries are open to using the approach. Given that most of the available models on NFRs have not been validated, the NFR works used in this research have gone through some preliminary validation in this study. Conclusions. The study promotes the use of NFR models in the early phases of the software product life cycle. Some of the best modeling techniques were included based on the results of the literature analysis and their capability to fit into each phase. This study also analyzed the various insights of practitioners and researchers, justifying the significance of modeling and the proposed technique. Possible extensions to this research are also mentioned.
354

Assessment of Agile Maturity Models : A Survey

Deekonda, Rahul, Sirigudi, Prithvi Raj January 2016
Context. In recent years Agile has gained a great deal of importance in the field of software development. Many organizations and software practitioners have already adopted agile practices due to their flexible nature; hence, agile development methodologies have replaced traditional development methods. Agile is a family of several methodologies, namely Scrum, eXtreme Programming (XP) and several others. These methods are embedded with different sets of agile practices for organizations to adopt and implement in their development process. But there is still a need for empirical research to understand the benefits of implementing the agile practices which contribute to the overall success of a software project. Several agile maturity models have been published over a decade, but not all of them have been empirically validated. Hence, additional research in the context of agile maturity is essential and needed. Objectives. This study focuses on providing comprehensive knowledge on agile maturity models, which help guide organizations in the implementation of agile practices. There are several maturity models published with different sets of agile practices that are recommended to industry. The primary aim is to compare the agile maturity models and to investigate how agile practices are implemented in industry. Later, the benefits and limitations faced by software practitioners due to the implementation of agile practices are identified. Methods. For this research an industrial survey was conducted to identify the agile practices that are implemented in industry. In addition, this survey aims at identifying the benefits and limitations of implementing the agile practices. A literature review was conducted to identify the order of agile practices recommended in the literature on agile maturity models. Results.
From the available literature nine maturity models were extracted with their sets of recommended agile practices. The results from the survey and the literature were then compared and analyzed to see whether there exist any commonalities or differences regarding the implementation of agile practices in a certain order. From the results of the survey, the benefits and limitations of implementing the agile practices in a particular order were identified and reported. Conclusions. The findings from the literature review and the survey result in an evaluation of the agile maturity models regarding the implementation of agile practices.
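The comparison of recommended practice orderings described above might be sketched as follows. The practice lists and the simple rank-gap measure are invented for illustration; they are not the thesis's data or its analysis method:

```python
# Hedged sketch: comparing the order in which two sources (e.g. a maturity
# model from the literature vs. an industrial survey) recommend adopting
# agile practices. A small rank gap suggests the sources agree on ordering.

literature_order = ["daily stand-up", "iterative delivery", "retrospectives",
                    "pair programming", "TDD"]
survey_order = ["iterative delivery", "daily stand-up", "TDD",
                "retrospectives", "pair programming"]

# Compare only practices recommended by both sources.
common = [p for p in literature_order if p in survey_order]

# Absolute difference in rank per practice: a rough agreement signal.
rank_gap = {p: abs(literature_order.index(p) - survey_order.index(p))
            for p in common}
print(rank_gap)
```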
355

Managing Data Location in Cloud Computing (Evaluating Data localization frameworks in Amazon Web Services)

Kokkonda, Vijay, Taduri, Krishna Sandeep January 2012
Context: Cloud computing is an emerging technology towards which present IT is trending. Many enterprises and users are still uncomfortable adopting cloud computing, as it exposes a number of potential and critical threats, of which security is the foremost concern. As data is migrated between data centers dispersed globally, data security is a very important issue. In a cloud environment, the cloud user should be aware of the physical location of their data, to ensure that it resides within a certain jurisdiction, as data privacy laws differ. Evaluating different data localization frameworks in Amazon Web Services by deploying a web application in Amazon availability zones (US, Europe and Asia) is the main context of this study. Objectives: In this study we investigate which strategic data localization frameworks have been proposed that can be used to identify the data location of a web application resource deployed in the cloud, and we validate three of those frameworks by conducting an experiment in a controlled environment. Methods: A literature review was performed using search strings in databases such as Compendex, IEEE, Inspec, ACM Digital Library, Science Direct and SpringerLink to identify the data location frameworks. These frameworks were then evaluated by conducting a controlled experiment, performed following the guidelines proposed by Wohlin, C [66]. Results: Three data localization frameworks, out of the ten obtained from the literature study, were considered for the evaluation. The evaluation of these frameworks in Amazon Web Services showed that replication of the three data localization studies is possible, that the location of data in US, Europe and Asia can be predicted accurately enough, and that the factors considered from the frameworks are valid.
Conclusions: From the ten identified frameworks, three data location frameworks were considered for evaluation: one framework verifies the location of data by trusting the information provided by the cloud provider; the second verifies the location of cloud resources without the need to trust the cloud provider; and the third identifies replicated files in the cloud, likewise without trusting the cloud provider. These frameworks address the data location problem in different ways. The three identified frameworks were validated by performing a controlled experiment. The activities performed from the three frameworks in the experiment setup allow identifying the data location of a web application deployed in US, Europe and Asia. The evaluation of these frameworks in the Amazon cloud environment allowed the authors to propose a checklist that should be considered when managing a web application deployed in the cloud with regard to data location. This checklist is based on the activities performed in the experiment. Moreover, the authors conclude that there is a need for further validation of whether the proposed checklist is applicable for real cloud users who deploy and manage cloud resources.
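One way location-verification frameworks of the "no trust in the provider" kind reason about location is from network latency, since round-trip time bounds how far away a host can physically be. A minimal sketch of that geometric argument, with invented landmark names and RTT values; the speed-in-fibre constant is a common rough approximation, not a figure from the thesis:

```python
# Hedged sketch: bounding a cloud host's possible location from measured RTTs.
# Signals in fibre travel at roughly 200,000 km/s, i.e. about 200 km per
# millisecond of one-way delay, so an RTT gives a hard distance upper bound.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # approximate one-way km per ms of delay

def max_distance_km(rtt_ms: float) -> float:
    """Upper bound on distance to the host, given a measured round-trip time."""
    one_way_ms = rtt_ms / 2.0
    return one_way_ms * SPEED_IN_FIBRE_KM_PER_MS

# Invented RTTs (ms) from landmark probes in known locations to the target.
measured = {"frankfurt": 12.0, "virginia": 95.0, "singapore": 180.0}

bounds = {city: max_distance_km(rtt) for city, rtt in measured.items()}
print(bounds)
```

Here a 12 ms RTT from Frankfurt bounds the host to within about 1,200 km of it, consistent with an EU region and inconsistent with a US or Asian one; congestion and routing detours only make real hosts look farther away, so the bound stays safe.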
356

On Applying a Method for Developing Context Dependent CASE-tool Evaluation Frameworks

Rehbinder, Adam January 2000
This dissertation concerns the application of a method for developing context dependent CASE-tool evaluation frameworks. Evaluation of CASE-tools prior to adoption is an important but complex issue; there are a number of reports in the literature of the unsuccessful adoption of CASE-tools. The reason for this is that the tools have often failed to meet contextual expectations. The genuine interest and willingness among organisational stakeholders to participate in the study indicate that evaluation of CASE-tools is indeed a relevant problem for which method support is scarce. To overcome these problems, a systematic approach to pre-evaluation has been suggested, in which contextual demands and expectations are elucidated before evaluating technology support. The proposed method has been successfully applied in a field study. This dissertation contains a report and reflections on its use in a specific organisational context. The application process rendered an evaluation framework which accounts for demands and expectations covering the entire information systems development life cycle relevant to the given context. The method user found that method transfer was indeed feasible, both from method description to the analyst and further from the analyst to the organisational context. Also, since the span of the evaluation framework and the organisation to which the method was applied are considered to be large, this indicates that the method scales appropriately for large organisations.
357

Automated Recognition of Algorithmic Patterns in DSP Programs

Shafiee Sarvestani, Amin January 2011
We introduce an extensible knowledge based tool for idiom (pattern) recognition in DSP (digital signal processing) programs. Our tool utilizes functionality provided by the Cetus compiler infrastructure for detecting certain computation patterns that frequently occur in DSP code. We focus on recognizing patterns for for-loops and statements in their bodies, as these often are the performance critical constructs in DSP applications for which replacement by highly optimized, target-specific parallel algorithms will be most profitable. For better structuring and efficiency of pattern recognition, we classify patterns by different levels of complexity such that patterns in higher levels are defined in terms of lower level patterns. The tool works statically on the intermediate representation (IR). It traverses the abstract syntax tree IR in post-order and applies bottom-up pattern matching, at each IR node utilizing information about the patterns already matched for its children or siblings. For better extensibility and abstraction, most of the structural part of the recognition rules is specified in XML form to separate the tool implementation from the pattern specifications. Information about detected patterns will later be used for optimized code generation by local algorithm replacement, e.g. for the low-power high-throughput multicore DSP architecture ePUMA.
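The post-order, bottom-up matching described above can be sketched as follows. The node shapes, pattern names, and matching rules are invented for illustration; the actual tool works on the Cetus IR and reads its rules from XML:

```python
# Hedged sketch of bottom-up pattern matching over an AST: children are
# labelled first (post-order), so a node's rule can refer to the pattern
# labels already attached below it.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    kind: str                                # e.g. "for", "add", "mul", "load"
    children: list = field(default_factory=list)
    pattern: Optional[str] = None            # label attached during matching

def match(node: Node) -> Optional[str]:
    for child in node.children:              # post-order: label children first
        match(child)
    labels = [c.pattern for c in node.children]
    # Higher-level patterns are defined in terms of lower-level ones.
    if node.kind == "mul":
        node.pattern = "mul"
    elif node.kind == "add" and labels == ["mul", None]:
        node.pattern = "mac"                 # multiply-accumulate idiom
    elif node.kind == "for" and "mac" in labels:
        node.pattern = "dot-product-loop"
    return node.pattern

# A for-loop whose body accumulates a product: mul -> mac -> dot-product-loop.
loop = Node("for", [Node("add", [Node("mul"), Node("load")])])
print(match(loop))
```

The `mul` leaf is labelled first, which lets the `add` above it match as a multiply-accumulate, which in turn lets the enclosing loop match as a dot-product idiom — the same layered scheme the abstract describes.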
358

Investigating the Single Crystal to Single Crystal Transformations of Highly Porous Metal-Organic Frameworks Through the Crystalline Sponge Method

Brunet, Gabriel January 2016
The development of a new technique capable of analyzing compounds crystallographically, without first needing to crystallize them, has recently been described. The present thesis aims to demonstrate the potential of such a technique, which utilizes crystalline sponges to regularly order guest compounds in a porous medium. The structural stability of the molecular sponges, which are highly porous metal-organic frameworks (MOFs), is first investigated, revealing that the Co-based MOF, 1, undergoes two remarkable transformations. This thesis also demonstrates how the technique can be employed to visualize the motion and occupancy of gaseous guests in a MOF. The Zn-based MOF, 4, was found to physisorb and chemisorb molecular iodine, leading to the formation of a variety of polyiodide species. The flexible nature of the host was determined to be an essential component of the exceptionally large iodine uptake capacity of the MOF. These results illustrate that the crystalline sponge method can be an effective strategy for directly visualizing guest molecules and obtaining vital information on the interactions formed between host and guest.
359

A study of how competency 5 of the teaching competency framework is taken into account during its adaptation and adoption in the BEPEP program at the Université de Montréal

Wagne, Ramatoulaye 04 1900
No description available.
360

Android GUI Testing : A comparative study of open source Android GUI testing frameworks

Esbjörnsson, Linus January 2015
Android is one of the most popular mobile operating systems on the market today, holding a vast majority of the market share when it comes to mobile devices. Graphical user interfaces (GUIs) feature in most applications on these devices. Testing of these GUIs is important, since they often make up half of the source code of the application and they are used to interact with the application. Automating these tests is very useful since it saves a lot of time, but it can be difficult. The tools that are available for automating the tests are often not suitable for developers’ needs because of a lack of functionality. Therefore, a characterization of the frameworks is needed, so that developers can more easily fit a framework to their needs. In this study, four open source frameworks for Android GUI testing have been selected for evaluation: Robotium, Selendroid, UI Automator and Espresso. Criteria used in the evaluation have been identified with the help of a literature analysis. The results show that two of the frameworks, Robotium and Espresso, lack the ability to fully test activities, which are the main component of Android application GUIs. Furthermore, the study resulted in characterizations of the frameworks.
