  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

An investigation of the normal tax consequences for non-resident cloud computing service providers in South Africa

Steenkamp, Shene
Thesis (MAcc)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: Cloud computing is a universal occurrence, and South Africa is no exception. The technology of cloud computing has been the focus of extensive research, but its tax consequences have not been investigated in such research. The nature of cloud computing activities, which are conducted via the internet, raises many difficulties for taxation. The main taxation-related problems arise from the composition of these activities: the service provider makes the cloud available via the internet, and the consumer then uses it from any location worldwide. This composition makes both the classification of such transactions and the subsequent determination of their tax source problematic. Yet, from a South African perspective, there is little guidance on these problems. As a result, significant income may escape South African taxation liabilities. The aim of this study was to investigate the South African taxation consequences for non-resident cloud service providers who conduct activities with residents via the internet. The focus of the study was twofold: first, to identify factors which indicate the classification of cloud computing activities as either a lease, a royalty (or the closely related know-how) or a service; and second, to determine the tax source of each of these classifications. Hence, this study sought to determine whether non-resident cloud service providers could possibly be liable for South African taxation and to identify related challenges that need to be addressed to ensure the collection of such taxes.
462

AUTOMATION OF A CLOUD HOSTED APPLICATION : Performance, Automated Testing, Cloud Computing

CHAVALI, SRIKAVYA January 2016
Context: Software testing is the process of assessing the quality of a software product to determine whether it meets the customer's existing requirements. Software testing is one of the "Verification and Validation", or V&V, software practices. The two basic techniques of software testing are black-box testing and white-box testing. Black-box testing focuses solely on the outputs generated in response to the supplied inputs, neglecting the internal components of the software, whereas white-box testing focuses on the internal mechanism of the software. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed to reduce manual effort and to perform testing continuously, thereby increasing the quality of the product.   Objectives: In this research, a cloud-hosted application is automated using the TestComplete tool. The objective of this thesis is to verify the functionality of the cloud application, known as the Test Data Library or Test Report Analyzer, through automation, and to measure the impact of the automation on the organization's release cycles.   Methods: Automation is implemented using Scrum, an agile software development methodology in which working software is delivered to customers incrementally and empirically, with functionality updated in each increment. The Test Report Analyzer functionality of the cloud application is verified by running the automated tests on a testing device, after which the passed and failed test cases are analyzed.   Results: The Test Report Analyzer functionality of the cloud-hosted application is automated using TestComplete, and the length of release cycles is reduced. With automation, a change of nearly 24% in release cycles is observed, reducing manual effort and increasing the quality of delivery.   Conclusion: Automation of a cloud-hosted application removes manual effort, so that time can be used effectively and the application can be tested continuously, increasing its efficiency and quality.
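The black-box/white-box distinction drawn in the abstract above can be sketched with a toy example (the `discount` function and its test cases are invented for illustration; TestComplete itself drives GUI-level tests, which this pure-Python sketch does not attempt to reproduce):

```python
# Hypothetical system under test: a discount calculator.
def discount(total: float, is_member: bool) -> float:
    """Return the payable amount after any discounts."""
    rate = 0.10 if is_member else 0.0
    if total > 100:            # bulk-purchase discount branch
        rate += 0.05
    return round(total * (1 - rate), 2)

# Black-box test: checks only input -> output pairs, with no
# knowledge of the branches inside the implementation.
def test_black_box():
    assert discount(50, False) == 50.0
    assert discount(50, True) == 45.0

# White-box test: written specifically to exercise an internal
# branch (the total > 100 path) identified by reading the code.
def test_white_box():
    assert discount(200, True) == 170.0   # both discount branches taken
```

An automation framework would run both kinds of test continuously on every increment, which is what makes the release-cycle impact measurable.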
463

A comparative study of cloud computing environments and the development of a framework for the automatic deployment of scalable cloud-based applications

Mlawanda, Joyce
Thesis (MScEng)--Stellenbosch University, 2012 / ENGLISH ABSTRACT: Modern-day online applications are required to deal with an ever-increasing number of users without decreasing in performance. This implies that the applications should be scalable. Applications hosted on static servers are inflexible in terms of scalability. Cloud computing is an alternative to the traditional paradigm of static application hosting and offers an illusion of infinite compute and storage resources. It is a way of computing whereby computing resources are provided by a large pool of virtualised servers hosted on the Internet. By virtually removing scalability, infrastructure and installation constraints, cloud computing provides a very attractive platform for hosting online applications. This thesis compares the cloud computing infrastructures Google App Engine and Amazon Web Services for hosting web applications and assesses their scalability performance compared to traditionally hosted servers. After the comparison of the three application hosting solutions, a proof-of-concept software framework for the provisioning and deployment of automatically scaling applications is built on Amazon Web Services, which is shown to be best suited for the development of such a framework.
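At the core of any framework for automatically scaling applications sits a scaling decision rule. A minimal sketch of one such threshold-based rule (the function, its name, and the threshold values are illustrative assumptions, not the thesis framework):

```python
def desired_instances(current: int, cpu_util: float,
                      scale_up_at: float = 0.75,
                      scale_down_at: float = 0.25,
                      max_instances: int = 10) -> int:
    """Threshold-based horizontal scaling decision: add an
    instance under heavy load, retire one when load is light,
    and otherwise leave the fleet unchanged."""
    if cpu_util > scale_up_at and current < max_instances:
        return current + 1
    if cpu_util < scale_down_at and current > 1:
        return current - 1
    return current
```

A deployment framework would evaluate such a rule against monitoring data on a fixed interval and translate the result into provisioning calls to the cloud provider's API.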
464

Database Forensics in the Service of Information Accountability

Pavlou, Kyriacos Eleftheriou January 2012
Regulations and societal expectations have recently emphasized the need to mediate access to valuable databases, even by insiders. At one end of a spectrum is the approach of restricting access to information; at the other is information accountability. The focus of this work is on effecting information accountability of data stored in relational databases. One way to ensure appropriate use and thus end-to-end accountability of such information is through continuous assurance technology, via tamper detection in databases built upon cryptographic hashing. We show how to achieve information accountability by developing and refining the necessary approaches and ideas to support accountability in high-performance databases. These concepts include the design of a reference architecture for information accountability and several of its variants, the development of a sequence of successively more sophisticated forensic analysis algorithms and their forensic cost model, and a systematic formulation of forensic analysis for determining when the tampering occurred and what data were tampered with. We derive a lower bound for the forensic cost and prove that some of the algorithms are optimal under certain circumstances. We introduce a comprehensive taxonomy of the types of possible corruption events, along with an associated forensic analysis protocol that consolidates all extant forensic algorithms and the corresponding type(s) of corruption events they detect. Finally, we show how our information accountability solution can be used for databases residing in the cloud. In order to evaluate our ideas we design and implement an integrated tamper detection and forensic analysis system named DRAGOON. This work shows that information accountability is a viable alternative to information restriction for ensuring the correct storage, use, and maintenance of high-performance relational databases.
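Tamper detection built on cryptographic hashing, as described above, can be sketched as a hash chain over database rows; a production system such as DRAGOON additionally notarizes the chain with a trusted external party, which this toy omits (the function names and row format are invented for illustration):

```python
import hashlib

def chain_hashes(rows):
    """Build a cumulative hash chain over rows: each link
    covers the current row plus every link before it, so a
    change to any row invalidates all later links."""
    h = b""
    links = []
    for row in rows:
        h = hashlib.sha256(h + repr(row).encode()).digest()
        links.append(h)
    return links

def find_tampering(rows, saved_links):
    """Recompute the chain and return the index of the first
    row whose link disagrees with the previously saved
    (notarized) chain, or None if nothing was altered."""
    for i, link in enumerate(chain_hashes(rows)):
        if link != saved_links[i]:
            return i
    return None
```

Forensic analysis then works backward from the earliest mismatching link to bound when the tampering occurred and which data it touched.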
465

Opinion and Practice in a Tech-Successful Elementary School: The 21st Century Classroom

Bauland, David January 2012
Web-based connectivity technologies have changed the very nature of learning and inquiry. Technology-integrated 21st century classrooms require that teachers adopt new roles, shifting toward critical thinking, collaboration, creativity and communication. This study examined teachers' opinions regarding the integration of web-based teaching tools into K-5 classrooms. Data were gathered through teacher interviews, collecting examples of websites used by teachers, recognizing common themes in successful technology integration, identifying benefits of technology integration for students, and clarifying the professional development that teachers considered most beneficial. The sample was drawn from a Tucson, Arizona elementary school. Teachers' comments reflected a culture of strong support for technology integration with 21st century learning goals, and the need for more effective tools designed to help them search for and integrate web-based resources and share their successes or challenges with other teachers through digital learning communities or collaborative online professional development platforms.
466

Planificación dinámica sobre entornos grid

Bertogna, Mario Leandro 04 September 2013
The objective of this thesis is the analysis of the efficient management of virtual environments. To this end, the scheduling middleware was dynamically optimized over Grid computing environments, the goal being the optimal assignment and use of resources for the coordinated execution of tasks. In particular, the interaction between Grid services and the problem of task distribution in meta-organizations with non-trivial quality-of-service requirements were investigated, establishing a relationship between task distribution and the local needs of the member virtual organizations. The idea originated in the study of virtual and remote laboratories for the creation of virtual spaces. Many public and research organizations possess a large quantity of resources, but these are not always accessible, whether because of geographical distance or because there is no capacity to interconnect them toward a common end. The concept of a virtual space introduces an abstraction layer over these resources, achieving location independence and interactivity between heterogeneous devices, and thereby making efficient use of the available means. In the course of this work, an environment for the generation of virtual spaces was implemented and evaluated: the infrastructure was defined, two types of laboratories were implemented, and an optimization was proposed to achieve maximum utilization in an environment for parallel applications. These concepts have since evolved, and some of the published ideas have been implemented in functional prototypes for commercial infrastructures, although scheduling across compute centers with thousands of machines remains under investigation.
467

Virtualization in the cloud (Virtualizacijos technologijų pritaikymas debesyje)

Mardosas, Jonas 09 July 2011
This thesis describes the technologies used in cloud computing platforms and provides a full analysis of Eucalyptus, a free and open cloud platform. On this platform, a web page hosting service is built in the cloud as a PaaS offering that many users could share. The work also lays out a plan for how similar services could be migrated to cloud infrastructures. After examining which software is needed to provide such a service, example installation scripts are prepared, and diagrams show how the service could operate, what functions it offers, and what benefits the end user gains from it. A system is designed to manage and monitor the service automatically, and code examples from this automated system are presented.
468

Scheduling with Space-Time Soft Constraints In Heterogeneous Cloud Datacenters

Tumanov, Alexey 01 August 2016
Heterogeneity in modern datacenters is on the rise, in hardware resource characteristics, in workload characteristics, and in dynamic characteristics (e.g., a memory-resident copy of input data). As a result, which machines are assigned to a given job can have a significant impact. For example, a job may run faster on the same machine as its input data or with a given hardware accelerator, while still being runnable on other machines, albeit less efficiently. Heterogeneity takes on more complex forms as sets of resources differ in the level of performance they deliver, even if they consist of identical individual units, such as with rack-level locality. We refer to this as combinatorial heterogeneity. Mixes of jobs with strict SLOs on completion time and increasingly available runtime estimates in production datacenters deepen the challenge of matching the right resources to the right workloads at the right time. In this dissertation, we hypothesize that it is possible and beneficial to simultaneously leverage all of this information in the form of declaratively specified space-time soft constraints. To accomplish this, we first design and develop our principal building block—a novel Space-Time Request Language (STRL). It enables the expression of jobs' preferences and flexibility in a general, extensible way by using a declarative, composable, intuitive algebraic expression structure. Second, building on the generality of STRL, we propose an equally general STRL Compiler that automatically compiles STRL expressions into Mixed Integer Linear Programming (MILP) problems that can be aggregated and solved to maximize the overall value of shared cluster resources. These theoretical contributions form the foundation for the system we architect, called TetriSched, that instantiates our conceptual contributions: (a) declarative soft constraints, (b) space-time soft constraints, (c) combinatorial constraints, (d) orderless global scheduling, and (e) in situ preemption.
We also propose a set of mechanisms that extend the scope and the practicality of TetriSched’s deployment by analyzing and improving on its scalability, enabling and studying the efficacy of preemption, and featuring a set of runtime mis-estimation handling mechanisms to address runtime prediction inaccuracy. In collaboration with Microsoft, we adapt some of these ideas as we design and implement a heterogeneity-aware resource reservation system called Aramid with support for ordinal placement preferences targeting deployment in production clusters at Microsoft scale. A combination of simulation and real cluster experiments with synthetic and production-derived workloads, a range of workload intensities, degrees of burstiness, preference strengths, and input inaccuracies support our hypothesis that leveraging space-time soft constraints (a) significantly improves scheduling quality and (b) is possible to achieve in a practical deployment.
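The idea of declarative, composable request expressions can be illustrated with a toy algebra (the `Leaf`/`Any` classes and their greedy valuation are a loose, invented analogy to STRL, not its actual syntax; real STRL expressions are compiled to MILP problems and solved globally rather than evaluated per request):

```python
from dataclasses import dataclass

# A leaf names a required set of machines and the value
# ("utility") the job derives if that set can be granted.
@dataclass
class Leaf:
    machines: frozenset
    value: float
    def best(self, free: frozenset) -> float:
        return self.value if self.machines <= free else 0.0

# `Any` composes alternatives: the request is satisfied by
# whichever satisfiable option yields the highest value.
@dataclass
class Any:
    options: list
    def best(self, free: frozenset) -> float:
        return max(o.best(free) for o in self.options)

# "Prefer the GPU machine (value 10); otherwise accept two
# specific CPU nodes at a lower value (4)."
request = Any([Leaf(frozenset({"gpu1"}), 10.0),
               Leaf(frozenset({"cpu1", "cpu2"}), 4.0)])
```

Encoding preferences as values over alternatives, rather than hard requirements, is what lets a scheduler trade placement quality against availability across all pending jobs at once.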
469

High-contrast imaging in the cloud with klipReduce and Findr

Haug-Baltzell, Asher, Males, Jared R., Morzinski, Katie M., Wu, Ya-Lin, Merchant, Nirav, Lyons, Eric, Close, Laird M. 08 August 2016
Astronomical data sets are growing ever larger, and the area of high contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and improved speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loeve image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible-wavelength high contrast data set of a hydrogen-accreting brown dwarf companion.
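The parameter exploration described above amounts to a parallel sweep over KLIP parameter combinations, scoring each reduction and keeping the best. A minimal sketch, with an invented stand-in scoring function in place of a real KLIP reduction (the parameter names and grid are illustrative, not the klipReduce/Findr interface):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def reduce_one(params):
    """Stand-in for a single KLIP reduction; a real pipeline
    would run the algorithm on the image cube and score the
    result, e.g. by the companion's signal-to-noise ratio.
    Here the score is an arbitrary function peaking at (20, 6)."""
    n_modes, annuli = params
    return {"n_modes": n_modes, "annuli": annuli,
            "score": 1.0 / (1 + abs(n_modes - 20) + abs(annuli - 6))}

def sweep(n_modes_grid, annuli_grid, workers=4):
    """Explore the full parameter grid in parallel and return
    the best-scoring parameter set."""
    combos = list(product(n_modes_grid, annuli_grid))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(reduce_one, combos))
    return max(results, key=lambda r: r["score"])
```

In a cloud deployment the map step fans out across many workers, which is what makes exploring tens of thousands of parameter sets within a single observing run practical.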
470

Cost- and Performance-Aware Resource Management in Cloud Infrastructures

Nasim, Robayet January 2017
High availability, cost effectiveness and ease of application deployment have accelerated the adoption rate of cloud computing. This fast proliferation of cloud computing promotes the rapid development of large-scale infrastructures. However, large cloud datacenters (DCs) pose infrastructure, design, deployment, scalability and reliability challenges and need better management techniques to achieve sustainable design benefits. Resources inside cloud infrastructures often operate at low utilization, rarely exceeding 20-30%, which increases the operational cost significantly, especially due to energy consumption. To reduce operational cost without affecting quality of service (QoS) requirements, cloud applications should be allocated just enough resources to minimize their completion time or to maximize utilization.  The focus of this thesis is to enable resource-efficient and performance-aware cloud infrastructures by addressing the above-mentioned cost- and performance-related challenges. In particular, we propose algorithms, techniques, and deployment strategies for improving the dynamic allocation of virtual machines (VMs) onto physical machines (PMs).  For minimizing the operational cost, we mainly focus on optimizing the energy consumption of PMs by applying dynamic VM consolidation methods. To make VM consolidation techniques more efficient, we propose to utilize multiple paths to spread traffic and to deploy recent queue management schemes, which can maximize network resource utilization and reduce both the downtime and the migration time of live migration techniques. In addition, a dynamic resource allocation scheme is presented to distribute workloads among geographically dispersed DCs, considering their location-based, time-varying costs due to e.g. carbon emission or bandwidth provision.
For optimizing performance-level objectives, we focus on interference among applications contending for shared resources and propose a novel VM consolidation scheme that considers the sensitivity of the VMs to their demanded resources. Further, to investigate the impact of uncertain parameters, such as unpredictable variations in demand, on cloud resource allocation and applications' QoS, we develop an optimization model based on the theory of robust optimization. Furthermore, to handle the scalability issues of large-scale infrastructures, a robust and fast Tabu Search algorithm is designed and evaluated.
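Dynamic VM consolidation is, at its core, a bin-packing problem: fit the VMs' demands onto as few physical machines as possible. A minimal first-fit-decreasing sketch over a single resource dimension (the function name and one-dimensional model are simplifying assumptions; the thesis additionally considers live migration costs, interference sensitivity, and multiple resources):

```python
def consolidate(vm_demands, pm_capacity):
    """First-fit-decreasing placement: sort VM demands in
    descending order and pack each onto the first physical
    machine (PM) with room, opening a new PM only when none
    fits. Returns the resulting per-PM loads."""
    pms = []
    for demand in sorted(vm_demands, reverse=True):
        for i, load in enumerate(pms):
            if load + demand <= pm_capacity:
                pms[i] += demand
                break
        else:
            pms.append(demand)   # no existing PM fits: power on a new one
    return pms
```

Fewer, fuller PMs directly translate into energy savings, which is why consolidation heuristics like this are the usual starting point for cost-aware placement.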
