21

Web Based Resource Management for Multi-Tiered Web Applications

Ott, Bryce Daniel 04 December 2007
The currently emerging trend of building more complex web applications to solve increasingly involved software problems has led to the need for a more automated and practical means of deploying the resources required by these advanced web applications. As web-based applications become more complex and involve more developers, greater system redundancy, and a larger number of components, traditional means of resource deployment become painfully inadequate because they fail to scale sufficiently. The purpose of this research is to provide evidence that a sounder and more scalable test and deployment process can be employed, and that many components of this improved process can be automated and/or delegated to various system actors to provide a more usable, reliable, stable, and efficient deployment process. The deployable resources included for their commonality in web-based applications are versioned resources (both ASCII-based and binary files), database resources, cron files, and scripting commands. In order to achieve an improved test and deployment process and test its effectiveness, a web-based code deployment tool was developed and deployed in a production environment where its effects could be accurately measured. This deployment tool heavily leverages Subversion to manage versioned resources because of its extensive ability to manage the creation and merging of branches.
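
To make the branch-centric deployment workflow concrete, here is a minimal sketch that drives the stock svn client from Python to cut a release branch and point a production working copy at it. The repository URL, layout, and paths are invented for illustration; this is not the thesis's actual tool.

```python
# Minimal sketch of branch-based deployment with Subversion, assuming a
# hypothetical repository layout (trunk/ and branches/) and an installed
# `svn` command-line client. URL and paths are illustrative.
import subprocess

REPO = "https://svn.example.org/webapp"  # hypothetical repository URL

def run(*args):
    """Run an svn command and fail loudly if it errors."""
    subprocess.run(["svn", *args], check=True)

def create_release_branch(version):
    # `svn copy` creates a cheap server-side branch.
    run("copy", f"{REPO}/trunk", f"{REPO}/branches/release-{version}",
        "-m", f"Create release branch {version}")

def deploy(version, working_copy):
    # Deployment here is just re-pointing the production working copy
    # at the release branch and updating it.
    run("switch", f"{REPO}/branches/release-{version}", working_copy)
    run("update", working_copy)

if __name__ == "__main__":
    create_release_branch("1.2.0")
    deploy("1.2.0", "/var/www/webapp")
```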
22

Supporting learning by tracing personal knowledge formation

Thaul, Witold January 2014
Internet-based and mobile technologies enable new ways of learning. They offer us new possibilities to access an enormous amount of knowledge at any time and everywhere. Alongside their many advantages, these adaptations require a rethinking of our previous learning behaviour patterns and processes. The challenge for students is no longer to get access to information and knowledge, but to select the right information and to deal with information and knowledge overflow. The aim of this research is to define, design and validate an advanced concept to support contemporary learning processes. To that end, the requirements for a new approach have been assessed, the available solutions from the related area of (personal) knowledge management have been investigated, and their weaknesses in the context of learning identified. The identified issues have been substantiated by university students via a quantitative survey. Besides several smaller aspects, knowledge fragmentation and the nescience of the knowledge formation process have been classified as the most critical. To overcome these problems, a methodological concept has been developed and a corresponding technological design created. The chosen approach is an intelligent, independent intermediate layer that traces the different steps our knowledge entities go through. Based on personal and individual configurations, the system provides a comprehensive observation of nearly all our knowledge work activities. It supports building and accessing the knowledge formation paths of every important knowledge unit, the later combination of paths, and access to automatically generated versions of our work. Moreover, it helps users not only to remember what they did, but also gives them strong indications of why they did it. This is achieved by combining different knowledge actions and looking at the influences they have on each other. The suggested concept has been critically examined and confirmed via a qualitative expert analysis, backed up by a quantitative survey among university students.
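
As a rough illustration of what tracing knowledge formation could look like, here is a toy data structure that logs every action performed on a knowledge unit and replays the resulting formation path. All class and field names are hypothetical and not taken from the thesis.

```python
# Conceptual sketch of tracing a knowledge formation path, assuming a
# simplified model in which every action on a knowledge unit is logged.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class KnowledgeAction:
    verb: str          # e.g. "read", "excerpt", "annotate", "cite"
    source: str        # where the knowledge came from
    timestamp: datetime = field(default_factory=datetime.now)

@dataclass
class KnowledgeUnit:
    title: str
    history: list = field(default_factory=list)

    def trace(self, verb, source):
        """Record one step of the unit's formation."""
        self.history.append(KnowledgeAction(verb, source))

    def formation_path(self):
        """Reconstruct the ordered path this unit went through."""
        return [(a.timestamp, a.verb, a.source) for a in self.history]

unit = KnowledgeUnit("Essay: versioning in learning")
unit.trace("read", "lecture-notes.pdf")
unit.trace("excerpt", "journal-article.pdf")
unit.trace("annotate", "personal-wiki")
for step in unit.formation_path():
    print(step)
```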
23

Attribute-Level Versioning: A Relational Mechanism for Version Storage and Retrieval

Bell, Charles Andrew 01 January 2005
Data analysts today have at their disposal a seemingly endless supply of data repositories and, hence, datasets from which to draw. New datasets become available daily, making the choice of which dataset to use difficult. Furthermore, traditional data analysis has been conducted using structured data repositories such as relational database management systems (RDBMS). These systems, by their nature and design, prohibit duplication in indexed collections, forcing analysts to choose one value for each of the available attributes for an item in the collection. Analysts often discover two or more datasets with information about the same entity. When combining this data and transforming it into a form that is usable in an RDBMS, analysts are forced to deconflict the collisions and choose a single value for each duplicated attribute containing differing values. This deconfliction is the source of a considerable amount of guesswork and speculation on the part of the analyst in the absence of professional intuition. One must consider what is lost by discarding the alternative values. Are there relationships between the conflicting datasets that have meaning? Is each dataset presenting a different and valid view of the entity, or are the alternate values erroneous? If so, which values are erroneous? Is there historical significance in the variances? The analysis of modern datasets requires the use of specialized algorithms and storage and retrieval mechanisms to identify, deconflict, and assimilate variances of attributes for each entity encountered. These variances, or versions of attribute values, contribute meaning to the evolution and analysis of the entity and its relationship to other entities. A new, distinct storage and retrieval mechanism will enable analysts to efficiently store, analyze, and retrieve attribute versions without unnecessary complexity or additional alterations of the original or derived dataset schemas. This work presents technologies and innovations that assist data analysts in discovering meaning within their data while preserving all of the original data for every entity in the RDBMS.
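
A minimal sketch of what an attribute-level versioning schema could look like in a relational store follows, assuming a simple model in which each attribute value is kept as a numbered version rather than overwritten. Table and column names are invented, not the dissertation's actual mechanism.

```python
# Illustrative relational schema for attribute-level versioning: conflicting
# values from different source datasets are all retained as versions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE attribute_version (
    entity_id INTEGER NOT NULL REFERENCES entity(id),
    attribute TEXT    NOT NULL,
    version   INTEGER NOT NULL,          -- 1, 2, 3, ... per attribute
    value     TEXT,
    source    TEXT,                      -- which dataset supplied this value
    PRIMARY KEY (entity_id, attribute, version)
);
""")

conn.execute("INSERT INTO entity (id, name) VALUES (1, 'ACME Corp')")
# Two conflicting addresses from two datasets are both kept as versions
# instead of being deconflicted into a single guessed value.
conn.executemany(
    "INSERT INTO attribute_version VALUES (1, 'address', ?, ?, ?)",
    [(1, "12 Main St", "dataset_a"), (2, "14 Main St", "dataset_b")],
)

# Retrieve every recorded version of the attribute rather than one value.
for row in conn.execute(
        "SELECT version, value, source FROM attribute_version "
        "WHERE entity_id = 1 AND attribute = 'address' ORDER BY version"):
    print(row)
```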
24

Managing the challenges of event sourcing : Versioning and incorrect states

Karlsson, Andreas, Pettersson, Nils, Malmquist, Peter January 2019
Event sourcing has caught the interest of many developers due to desirable features such as an implicit audit log and a simplified database design. This thesis presents a case study focused on managing the challenges of versioning and of correcting incorrect states. The techniques of upcasting and supporting multiple versions are investigated for handling versioning within event sourcing. Partial and full reversal techniques are applied to investigate the correction of incorrect states. The techniques are implemented within an event-sourcing prototype written in F# to demonstrate how they behave in practice, which can be of use to developers who want to venture into event sourcing projects. The results of the study show that all investigated techniques can handle their associated challenges. The comparison shows the advantages and disadvantages of each technique when implemented in the prototype.
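
As a sketch of the upcasting technique named above, the following assumes events stored as dictionaries with a schema version that are upgraded to the newest schema when loaded; the event names and fields are invented for illustration, and the thesis prototype itself is in F#.

```python
# Minimal sketch of event upcasting: stored events at old schema versions
# are transformed step by step into the newest schema on load.

def upcast_v1_to_v2(event):
    # Hypothetical schema change: v2 split "name" into first/last name.
    first, _, last = event["name"].partition(" ")
    return {"type": event["type"], "version": 2,
            "first_name": first, "last_name": last}

# Registry of upcasters keyed by (event type, schema version).
UPCASTERS = {("CustomerRegistered", 1): upcast_v1_to_v2}

def load_event(stored):
    """Apply upcasters until the event reaches the newest schema version."""
    event = dict(stored)
    while (event["type"], event["version"]) in UPCASTERS:
        event = UPCASTERS[(event["type"], event["version"])](event)
    return event

old_event = {"type": "CustomerRegistered", "version": 1,
             "name": "Ada Lovelace"}
print(load_event(old_event))
# {'type': 'CustomerRegistered', 'version': 2,
#  'first_name': 'Ada', 'last_name': 'Lovelace'}
```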
25

ReserveTM: Optimizing for Eager Software Transactional Memory

Jain, Gaurav January 2013
Software Transactional Memory (STM) helps programmers write correct concurrent code by allowing them to identify atomic sections rather than focusing on the mechanics of concurrency control. Given code with atomic sections, the compiler and STM runtime can work together to ensure properly controlled access to shared memory. STM runtimes use either lazy or eager version management. Lazy versioning buffers transaction updates, whereas eager versioning applies updates in place. The current set of primitives suits lazy versioning, since memory needs to be accessed through the runtime. We present a new set of runtime primitives that better suit eager-versioned STM. We propose a novel extension to the compiler/runtime interface, consisting of memory reservations and memory releases. These extensions enable optimizations specific to eager-versioned runtimes. A memory reservation allows a transaction to perform instrumentation-free accesses on a memory address. A release allows a read-only address to be modified by another transaction. Together, these reduce the instrumentation overhead required to support STM and improve concurrency between readers and writers. We have implemented these primitives and evaluated their performance on the STAMP benchmarks. Our results show strong performance and scalability improvements for eager-versioned algorithms.
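
A toy model of the reservation/release idea may help: a transaction that successfully reserves an address may then access it without per-access instrumentation, and a release hands the address back for other writers. This is a deliberate simplification for illustration, not the thesis's runtime API.

```python
# Conceptual model of memory reservations for an eager-versioned STM.
# Addresses and transaction ids are illustrative placeholders.
import threading

class ReservationTable:
    def __init__(self):
        self._lock = threading.Lock()
        self._owner = {}  # address -> owning transaction id

    def reserve(self, tx, addr):
        """Grant tx exclusive, instrumentation-free access to addr."""
        with self._lock:
            holder = self._owner.get(addr)
            if holder is not None and holder != tx:
                return False        # conflict: caller must wait or abort
            self._owner[addr] = tx
            return True

    def release(self, tx, addr):
        """Give up a reservation so another transaction can write addr."""
        with self._lock:
            if self._owner.get(addr) == tx:
                del self._owner[addr]

table = ReservationTable()
assert table.reserve("tx1", 0x1000)      # tx1 may access 0x1000 directly
assert not table.reserve("tx2", 0x1000)  # tx2 conflicts and must back off
table.release("tx1", 0x1000)
assert table.reserve("tx2", 0x1000)      # after release, tx2 can proceed
```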
26

To free, or not to free : the impact of free versions, average user ratings, and App characteristics on the adoption speed of paid mobile Apps

Arora, Sandeep 25 June 2014
The mobile application (App) industry has grown tremendously over the past five years, primarily fueled by small App development businesses. Lacking advertising budgets, these relatively unknown, small businesses often offer free versions of their paid Apps to reduce customer uncertainty about App quality and get noticed in the crowded App industry. In this research I investigate the implications of offering free versions on the adoption speed of paid Apps by building on the existing marketing and information systems literature on sampling and versioning. Using a unique dataset of 2.82 million observations from 4,180 Apps and accounting for endogeneity, I find that while the strategy of offering free versions of paid Apps is popular, it impacts the adoption speed of paid Apps negatively. I also find that the presence of free versions has a larger negative impact on the adoption speed of Apps bought for fun and pleasure (hedonic Apps) and in the later life stages of paid Apps. I expect that the results of my study will enable App developers to make informed decisions about offering free versions of paid Apps and prompt academicians to produce more work focusing on this industry.
28

Built-in recovery support for explorative programming : preserving immediate access to static and dynamic information of intermediate development states

Steinert, Bastian January 2014
This work introduces concepts and corresponding tool support to enable a complementary approach to dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. When the need arises suddenly and unexpectedly, however, recovery often involves expensive and tedious work. To avoid such work, the literature recommends warding off unexpected recovery demands by following a structured and disciplined approach, which consists of applying various best practices: working on only one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying the recommended practices selectively, which saves time, can hardly prevent the need for recovery. In addition, the constant need for foresight and self-control has unfavorable implications: it is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools was accompanied by regular performance and usability tests. In addition, this work investigates whether the proposed tools affect programmers' performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated-measurement setup, the study examined the effect of providing CoExist on programming performance. The results of analyzing 88 hours of programming suggest that built-in recovery support, as provided by CoExist, has a positive effect on programming performance in explorative programming tasks.
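
As a rough sketch of the implicit-versioning idea behind such recovery support, here is a toy history that snapshots the source on every change so earlier states stay reachable without explicit commits. The names are invented and this is not CoExist's actual API (the real tool is implemented in Squeak/Smalltalk).

```python
# Illustrative model of implicit versioning for explorative programming:
# every edit is preserved automatically, so recovery needs no foresight.

class ImplicitHistory:
    def __init__(self, initial_source):
        self._versions = [initial_source]   # every state is preserved

    def edit(self, new_source):
        """Apply a change; a snapshot is taken automatically."""
        self._versions.append(new_source)

    def recover(self, steps_back):
        """Return an earlier state without losing later ones."""
        return self._versions[-1 - steps_back]

history = ImplicitHistory("def f():\n    return 1\n")
history.edit("def f():\n    return 2\n")
history.edit("def f():\n    return 3\n")   # this change turns out to be bad
print(history.recover(1))                  # the state before the last edit
```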
29

Scientific Computing on Multicore Architectures

Tillenius, Martin January 2014
Computer simulations are an indispensable tool for scientists to gain new insights about nature. Simulations of natural phenomena are usually large and limited by the available computer resources. By using the computer resources more efficiently, larger and more detailed simulations can be performed, and more information can be extracted to help advance human knowledge. The topic of this thesis is how to make the best use of modern computers for scientific computations. The challenge here is the high level of parallelism that is required to fully utilize the multicore processors in these systems. Starting from the basics, the primitives for synchronizing between threads are investigated. Hardware transactional memory is a new construct for this; it is evaluated for a new use of importance for scientific software: atomic updates of floating-point values. The evaluation includes experiments on real hardware and comparisons against standard methods. Higher-level programming models for shared-memory parallelism are then considered. The state of the art for efficient use of multicore systems is dynamically scheduled task-based systems, where tasks can depend on data. In such systems, the software is divided into many small tasks that are scheduled asynchronously according to their data dependencies. This enables a high level of parallelism and avoids global barriers. A new system for managing task dependencies is developed in this thesis, based on data versioning. The system is implemented as a reusable software library and shown to be as efficient as or more efficient than other shared-memory task-based systems in experimental comparisons. The developed runtime system is then extended to distributed-memory machines and used to implement a parallel version of a software package for global climate simulations. By running the optimized and parallelized version on eight servers, an equally sized problem can be solved over 100 times faster than with the original sequential version. The parallel version also allowed significantly larger problems to be solved, previously unreachable due to memory constraints.
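
The data-versioning approach to task dependencies can be sketched as follows: each piece of data carries version counters, a read depends on the versions promised by previously submitted writes, and each completed write publishes a new version. This sequential toy illustrates only the ordering idea (read-after-write dependencies), not the thesis's parallel runtime; all names are invented.

```python
# Simplified sketch of task dependencies via data versioning.

class VersionedData:
    def __init__(self, name):
        self.name = name
        self.scheduled = 0   # versions promised by submitted writers
        self.completed = 0   # versions actually produced so far

class Task:
    def __init__(self, name, reads=(), writes=()):
        self.name = name
        self.reads = list(reads)
        self.writes = list(writes)
        # A read depends on all writes submitted before this task.
        self.required = {d.name: d.scheduled for d in self.reads}
        for d in self.writes:
            d.scheduled += 1  # promise a new version of the data

    def ready(self):
        return all(d.completed >= self.required[d.name] for d in self.reads)

    def run(self):
        print(f"running {self.name}")
        for d in self.writes:
            d.completed += 1  # publish the promised version

def run_all(tasks):
    # Repeatedly run any task whose data dependencies are satisfied.
    pending = list(tasks)
    while pending:
        for t in list(pending):
            if t.ready():
                t.run()
                pending.remove(t)

x = VersionedData("x")
t1 = Task("write_x", writes=[x])
t2 = Task("read_x", reads=[x])   # depends on the version write_x produces
run_all([t2, t1])                # write_x runs first despite list order
```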
30

Service versioning and compatibility at feature level

Yamashita, Marcelo Correa January 2013
Service evolution requires sound strategies to appropriately manage the versions resulting from changes during the service lifecycle. Typically, a service version is exposed as a description document that describes the service functionality, guiding client developers on the details of accessing the service. However, there is no standard for handling the versioning of service descriptions, which leads to difficulties in identifying and tracing changes, as well as in measuring their impact, particularly from a finer-grained perspective. Compatibility addresses the graceful evolution of services by considering the effects of changes on client applications. It defines a set of permissible change cases that do not disrupt the service's external integration. However, providers cannot always guarantee that the necessary changes yield compatible service descriptions. Moreover, the concept of compatibility is often applied to the entire service description, which may not be representative of the actual use of the service by a particular client application. It is thus the client developers' responsibility to assess the extent of a change and its impact on their particular usage scenario, which can be hard and error-prone without proper change identification mechanisms. This work addresses service evolution at a finer grain, which we refer to as the feature level. Hence, we propose a versioning model and a compatibility algorithm at the feature level, which allow the identification and qualification of change impact points and their ripple effects, as well as the assessment of change compatibility at this finer grain. This work also reports on an experiment based on a real service, which explores the versioning model to assess the scope of implicit and explicit changes and their compatibility.
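
A hedged sketch of a feature-level compatibility check follows, assuming a service description reduced to a map from feature name to signature and a client profile listing the features that client actually uses. The rules and names are illustrative, not the thesis's actual model.

```python
# Toy feature-level compatibility check: a change breaks a client only if
# it removes or alters a feature that client actually uses.

def compatible_for_client(old, new, client_features):
    """Return (ok, reasons): is `new` compatible for this client?"""
    reasons = []
    for feature in client_features:
        if feature not in new:
            reasons.append(f"removed feature used by client: {feature}")
        elif old.get(feature) != new.get(feature):
            reasons.append(f"changed signature of used feature: {feature}")
    # Added features, or changes to features the client never calls,
    # do not break this particular client.
    return (not reasons, reasons)

old_desc = {"getOrder": "(id: int) -> Order",
            "listOrders": "() -> [Order]"}
new_desc = {"getOrder": "(id: int) -> Order",
            "listOrders": "(page: int) -> [Order]",   # changed
            "cancelOrder": "(id: int) -> bool"}       # added

ok, why = compatible_for_client(old_desc, new_desc, {"getOrder"})
print(ok, why)   # True []: this client only uses getOrder
ok, why = compatible_for_client(old_desc, new_desc, {"listOrders"})
print(ok, why)   # False: the feature this client uses changed
```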
