91

Seniors with Diabetes: Investigation of the Impact of Semantic Auditory Distractions on the Usability of a Blood Glucose Tracking Mobile Application

Rivera Rodriguez, Jose A. 01 January 2015 (has links)
Diabetes is the seventh leading cause of death in the United States. With the population rapidly aging, it is expected that 1 out of 3 Americans will have diabetes by 2050. Mobile devices and mobile applications have the potential to contribute to diabetes self-care by allowing users to manage their diabetes by keeping track of their blood glucose levels. Usability is important for systems that help people self-manage conditions such as diabetes. Age- and diabetes-related cognitive decline might intensify the impact of usability issues for the users who need these mobile applications the most. As highlighted by usability researchers, the context of use (i.e., environment, user, task, and technology) has a significant impact on usability. The environment (lighting, temperature, audio and visual distractions, etc.) is of special interest to mobile usability because, in the case of mobile devices, it is always changing. This dissertation aims to support the claim that context, and more specifically environmental distractions such as semantic auditory distractions, impacts the usability of mobile applications. In doing so, it attempts to answer the following research questions: 1) Do semantic auditory distractions reduce the effectiveness of a blood glucose tracking mobile application? 2) Do semantic auditory distractions reduce the efficiency of a blood glucose tracking mobile application? 3) Do semantic auditory distractions reduce user satisfaction with a blood glucose tracking mobile application? To answer these research questions, a true experiment was performed involving 30 adults with type 2 diabetes. Participants were paired based on their age and experience with smartphones and randomly assigned to the control (no semantic auditory distractions) or experimental (semantic auditory distractions) group. The research questions were tested using the general linear model. The results of this study confirmed that semantic auditory distractions have a significant effect on efficiency and effectiveness, and hence need to be taken into account when evaluating mobile usability. The study also showed that semantic auditory distractions have no significant effect on user satisfaction. This dissertation enhances current knowledge about the impact of semantic auditory distractions on the usability of mobile applications within the senior diabetic population.
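[Editor's note] The between-groups analysis described above can be sketched with an ordinary general linear model. A minimal illustration, assuming hypothetical column names and synthetic data; the study's actual variables and measurements are not reproduced here:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 15  # participants per group, matching the study's 30-adult design

# Synthetic efficiency data (task time in seconds); column names are invented.
df = pd.DataFrame({
    "group": ["control"] * n + ["distraction"] * n,
    "task_time": np.concatenate([rng.normal(60, 8, n), rng.normal(72, 8, n)]),
})

# One-factor general linear model: does group membership predict task time?
model = smf.ols("task_time ~ group", data=df).fit()
print(model.summary())  # the group coefficient's p-value tests the effect
```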
92

Automated Knowledge Extraction from Archival Documents

Malki, Khalil 31 July 2019 (has links)
Traditional archival media such as paper, film, and photographs contain a vast store of knowledge. Much of this knowledge is applicable to current business and scientific problems and offers solutions; consequently, there is value in extracting this information. While it is possible to extract the content manually, this technique is not feasible for large knowledge repositories due to cost and time. In this thesis, we develop a system that can extract such knowledge automatically from large repositories. A graphical user interface that permits users to indicate the location of the knowledge components (indexes) is developed, and software features that permit automatic extraction of indexes from similar documents are presented. The indexes and the documents are stored in a persistent data store. The system is tested on a University Registrar’s legacy paper-based transcript repository. The study shows that the system provides a good solution for large-scale extraction of knowledge from archived paper and other media.
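[Editor's note] The reuse step the abstract describes (mark index regions once through the GUI, then extract the same regions from similarly laid-out documents) can be pictured as template-driven cropping. A hypothetical sketch; the field names and pixel boxes are invented, and a real system would pass each crop to an OCR engine:

```python
from PIL import Image

# Hypothetical index template captured via the GUI:
# field name -> (left, upper, right, lower) pixel box.
TEMPLATE = {
    "student_name": (100, 50, 500, 90),
    "course_grade": (520, 300, 700, 340),
}

def extract_indexes(page_path, template=TEMPLATE):
    """Crop each index region from a scanned page laid out like the template."""
    page = Image.open(page_path)
    return {field: page.crop(box) for field, box in template.items()}

# Usage: regions = extract_indexes("transcript_0001.png")
# Each region image would then be OCR'd and stored with the document.
```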
93

A noisy-channel based model to recognize words in eye typing systems / Um modelo baseado em canal de ruído para reconhecer palavras digitadas com os olhos

Hanada, Raíza Tamae Sarkis 04 April 2018 (has links)
An important issue with eye-based typing is the correct identification of both when the user selects a key and which key is selected. Traditional solutions are based on a predefined gaze fixation time, known as dwell-time methods. In an attempt to improve accuracy, long dwell times are adopted, which in turn lead to fatigue and longer response times. These problems motivate the proposal of methods free of dwell time, or with very short ones, which rely on more robust recognition techniques to reduce the uncertainty about the user's actions. These techniques are especially important when users have disabilities that affect their eye movements or use inexpensive eye trackers. One approach to the recognition problem is to treat it as a spelling correction task. A usual strategy for spelling correction is to model the problem as the transmission of a word through a noisy channel, such that it is necessary to determine which known word of a lexicon is the received string. A feasible application of this method requires reducing the set of candidate words by choosing only the ones that can be transformed into the input by applying up to k character edit operations. This idea works well in traditional typing because the number of errors per word is very small. However, this is not the case for eye-based typing systems, which are much noisier. In such a scenario, spelling correction strategies do not scale well, as they grow exponentially with k and the lexicon size. Moreover, the error distribution in eye typing is different, with many more insertion errors due to specific sources of noise such as the eye tracker device, particular user behaviors, and intrinsic characteristics of eye movements. Also, the lack of a large corpus of errors makes it hard to adopt probabilistic approaches based on information extracted from real-world data. To address all these problems, we propose an effective recognition approach that combines estimates extracted from general error corpora with domain-specific knowledge about eye-based input. The technique is able to calculate edit distances effectively by using a Mor-Fraenkel index, searchable using minimal perfect hashing. The method allows the early processing of the most promising candidates, such that fast pruned searches present negligible loss in word ranking quality. We also propose a linear heuristic for estimating edit-based distances that takes advantage of information already provided by the index. Finally, we extend our recognition model to include the variability of eye movements as a source of errors, provide a comprehensive study of the importance of the noise model when combined with a language model, and determine how it affects user behaviour during typing. As a result, we obtain a method that is very effective at recognizing words and fast enough to be used in real eye typing systems. In a transcription experiment with 8 users, they achieved 17.46 words per minute using the proposed model, a gain of 11.3% over a state-of-the-art eye-typing system. The method was particularly useful in noisier situations, such as first-use sessions. Despite significant gains in typing speed and word recognition ability, we were not able to find statistically significant differences in the participants' perception of their experience with the two methods. This indicates that an improved suggestion ranking may not be clearly perceptible to users even when it enhances their performance.
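[Editor's note] The noisy-channel formulation at the heart of the abstract, which picks the lexicon word w maximizing P(w) * P(observed | w) among candidates within k edit operations, can be sketched as follows. This is an illustrative reconstruction, not the dissertation's Mor-Fraenkel index; the toy lexicon, unigram counts, and per-edit probability are invented:

```python
import math

LEXICON = {"hello": 120, "help": 80, "held": 30, "hell": 25}  # word -> toy count
TOTAL = sum(LEXICON.values())

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]

def rank(observed, k=3, p_edit=0.1):
    """Noisy-channel score: log P(w) + d * log(p_edit) for words within k edits."""
    scored = []
    for word, count in LEXICON.items():
        d = edit_distance(observed, word)
        if d <= k:  # candidate filter; eye typing needs a larger k than usual
            scored.append((math.log(count / TOTAL) + d * math.log(p_edit), word))
    return [word for _, word in sorted(scored, reverse=True)]

# All four words are one edit from "helo", so the language model breaks the
# tie and the most frequent word wins.
print(rank("helo"))  # ['hello', 'help', 'held', 'hell']
```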
95

REAL-TIME CAPTURE AND RENDERING OF PHYSICAL SCENE WITH AN EFFICIENTLY CALIBRATED RGB-D CAMERA NETWORK

Su, Po-Chang 01 January 2017 (has links)
From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. With the recent explosive growth of Augmented Reality (AR) and Virtual Reality (VR) platforms, utilizing RGB-D camera networks to capture and render dynamic physical spaces can enhance immersive experiences for users. To maximize coverage and minimize costs, practical applications often use a small number of RGB-D cameras, sparsely placed around the environment for data capture. While sparse color camera networks have been studied for decades, the problems of extrinsic calibration of, and rendering with, sparse RGB-D camera networks are less well understood. Extrinsic calibration is difficult because of inappropriate RGB-D camera models and a lack of shared scene features. Due to significant camera noise and sparse coverage of the scene, the quality of rendered 3D point clouds is much lower than that of synthetic models. Adding virtual objects whose rendering depends on the physical environment, such as those with reflective surfaces, further complicates the rendering pipeline. In this dissertation, I propose novel solutions to these challenges faced by RGB-D camera systems. First, I propose a novel extrinsic calibration algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Second, I propose a novel rendering pipeline that can capture and render, in real time, dynamic scenes in the presence of arbitrarily shaped reflective virtual objects. Third, I demonstrate a teleportation application that uses the proposed system to merge two geographically separated 3D captured scenes into the same reconstructed environment. To provide fast and robust calibration for a sparse RGB-D camera network, first, the correspondences between different camera views are established using a spherical calibration object. We show that this approach outperforms techniques based on planar calibration objects. Second, instead of modeling camera extrinsics using a rigid transformation, which is optimal only for pinhole cameras, different view transformation functions, including rigid transformation, polynomial transformation, and manifold regression, are systematically tested to determine the most robust mapping that generalizes well to unseen data. Third, the celebrated bundle adjustment procedure is reformulated to minimize the global 3D projection error and fine-tune the initial estimates. To achieve realistic mirror rendering, a robust eye detector identifies the viewer's 3D location and the reflective scene is rendered accordingly. The limited field of view of a single camera is overcome by our calibrated RGB-D camera network system, which is scalable to capture an arbitrarily large environment. Rendering is accomplished by tracing light rays from the viewpoint to the scene as reflected by the virtual curved surface. To the best of our knowledge, the proposed system is the first to render reflective dynamic scenes from real 3D data in large environments. Our scalable client-server architecture is computationally efficient: the calibration of a camera network system, including data capture, can be done in minutes using only commodity PCs.
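[Editor's note] The first stage of the calibration pipeline, recovering a rigid transform between two cameras from matched 3D positions of the spherical calibration object, is the classic least-squares alignment problem with a closed-form SVD solution. A sketch under that assumption, with synthetic correspondences standing in for detected sphere centers:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with dst ≈ R @ src + t (Kabsch algorithm, no scale)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

# Synthetic sphere-center positions as seen by two cameras.
rng = np.random.default_rng(1)
src = rng.uniform(-1.0, 1.0, (10, 3))
t_true = np.array([0.5, 0.0, 2.0])        # pure translation, for simplicity
dst = src + t_true
R, t = rigid_transform(src, dst)
print(np.allclose(R @ src.T + t[:, None], dst.T))  # True
```

As the abstract notes, the dissertation goes beyond this rigid pinhole model, also testing polynomial and manifold-regression mappings for cases where a rigid transform under-fits the sensors.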
96

Functional Reactive Musical Performers

Phillips, Justin M 01 December 2010 (has links)
Computers have been assisting in recording, sound synthesis, and other fields of music production for quite some time. The actual performance of music continues to be an area in which human players are chosen over computer performers. Musical performance is an area in which personalization is more important than consistency. Human players play with each other, reacting to phrases and ideas created by the players they are playing with. Computer performers lack the ability to react to the changes in a performance that humans perceive naturally, giving human players an advantage. This thesis creates a framework for describing unique musical performers that can play along in real time with human players. FrTime, a reactive programming language, is used to constantly create new musical phrases. Musical phrases are constructed by unique user-programmed performers and by chord changes that the framework provides. The reactive language creates multiple musical phrases for each point in time, and a simple module chooses which phrase to perform at performance time.
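[Editor's note] The framework's final stage, a module that picks one of the phrases the performers propose at each point in time, can be pictured as a simple chooser over candidate phrases. A hypothetical Python analogue; the thesis itself uses FrTime, and all names here are invented:

```python
import random

# Hypothetical performers: each proposes a phrase (a list of MIDI note
# numbers) in response to the current chord change.
def performer_arpeggio(chord):
    return sorted(chord)                   # rising arpeggio over the chord

def performer_echo(chord):
    return list(reversed(sorted(chord)))   # falling answer to the chord

PERFORMERS = [performer_arpeggio, performer_echo]

def choose_phrase(chord, rng=random):
    """Collect each performer's proposal for this chord change, pick one."""
    proposals = [p(chord) for p in PERFORMERS]
    return rng.choice(proposals)

# Usage: for each chord change emitted by the framework...
for chord in [[60, 64, 67], [62, 65, 69]]:  # C major, D minor triads
    print(choose_phrase(chord))
```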
97

Experimental Studies of Android APP Development for Smart Chess Board System

Gopu, Srujan 01 August 2013 (has links)
Playing chess on a smartphone has gained popularity in the last few years, offering the convenience of correspondence play, automatic recording of a game, and more. Although a good number of players love playing chess on a tablet or smartphone, it doesn't come close to the experience of playing over a traditional board. The feel and pleasure are more real when playing face to face, with the opponent sitting across the board, than when playing on a mobile device. This is especially true during chess tournaments. It would be ideal to enhance the experience of playing chess on a board with the features of chess playing on smartphones. Based on the design of a rollable smart chess board, an Android app has been implemented to interact with the board. It reads signals from the smart chess board and maps the movements of the chess pieces to the phone. The recorded play can then be used as input for game analysis. The design and implementation of a server for playing and reviewing a game online have also been studied in this thesis.
98

VISUAL SEMANTIC SEGMENTATION AND ITS APPLICATIONS

Gao, Jizhou 01 January 2013 (has links)
This dissertation addresses the difficulties of semantic segmentation when dealing with extensive collections of images and 3D point clouds. Due to the ubiquity of digital cameras that help capture the world around us, as well as advanced scanning techniques able to record 3D replicas of real cities, the sheer amount of visual data available presents many opportunities for both academic research and industrial applications. But the mere quantity of data also poses a tremendous challenge. In particular, the problem of distilling useful information from such a large repository of visual data has attracted ongoing interest in the fields of computer vision and data mining. Structural semantics are fundamental to understanding both natural and man-made objects. Buildings, for example, are like languages in that they are made up of repeated structures or patterns that can be captured in images. In order to find these recurring patterns in images, I present an unsupervised frequent visual pattern mining approach that goes beyond co-location to identify spatially coherent visual patterns, regardless of their shape, size, location, and orientation. My approach first groups scale-invariant image primitives with similar appearance into visual items, then uses a suite of polynomial-time algorithms designed to identify consistent structural associations among visual items, representing frequent visual patterns. After detecting repetitive image patterns, I use unsupervised and automatic segmentation of the identified patterns to generate more semantically meaningful representations. The underlying assumption is that pixels capturing the same portion of an image pattern are visually consistent, while pixels that come from different backdrops are usually inconsistent. I further extend this approach to perform automatic segmentation of foreground objects from an Internet photo collection of landmark locations. New scanning technologies have successfully advanced the digital acquisition of large-scale urban landscapes. To address semantic segmentation and reconstruction of such data, using LiDAR point clouds and geo-registered images of large-scale residential areas, I develop a complete system that first identifies different object categories through combined classification and segmentation, and then applies category-specific reconstruction techniques to create visually pleasing and complete scene models.
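[Editor's note] The mining step that goes beyond co-location can be pictured, in much-reduced form, as frequent-pair counting over spatial neighborhoods of quantized features. A toy sketch; the visual-word labels and the neighborhood criterion are simplified stand-ins for the dissertation's polynomial-time algorithms:

```python
from collections import Counter
from itertools import combinations

# Toy "visual items": (x, y, word) tuples, as if SIFT-like primitives had
# already been quantized into visual words a..d.
items = [(0, 0, "a"), (1, 0, "b"), (5, 5, "a"), (6, 5, "b"),
         (9, 9, "c"), (2, 9, "d"), (6, 6, "c")]

def frequent_pairs(items, radius=2.0, min_support=2):
    """Count visual-word pairs co-occurring within `radius`; keep frequent ones."""
    counts = Counter()
    for (x1, y1, w1), (x2, y2, w2) in combinations(items, 2):
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= radius ** 2:
            counts[tuple(sorted((w1, w2)))] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

print(frequent_pairs(items))  # {('a', 'b'): 2} -> a spatially coherent pattern
```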
99

A ROBUST RGB-D SLAM SYSTEM FOR 3D ENVIRONMENT WITH PLANAR SURFACES

Su, Po-Chang 01 January 2013 (has links)
Simultaneous localization and mapping (SLAM) is a technique for constructing a 3D map of an unknown environment. With the increasing popularity of RGB-depth (RGB-D) sensors such as the Microsoft Kinect, there has been much research on capturing and reconstructing 3D environments using a movable RGB-D sensor. The key process behind these SLAM systems is the iterative closest point (ICP) algorithm, which iteratively estimates the rigid movement of the camera from the captured 3D point clouds. While ICP is a well-studied algorithm, it is problematic when used to scan large planar regions such as wall surfaces in a room. The lack of depth variation on planar surfaces makes the global alignment an ill-conditioned problem. In this thesis, we present a novel approach for registering 3D point clouds by combining both color and depth information. Instead of directly searching for point correspondences among 3D data, the proposed method first extracts features from the RGB images, and then back-projects the features to 3D space to identify more reliable correspondences. These color correspondences form the initial input to the ICP procedure, which then proceeds to refine the alignment. Experimental results show that our proposed approach achieves better accuracy than existing SLAM systems in reconstructing indoor environments with large planar surfaces.
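[Editor's note] The key step the abstract proposes, lifting 2D RGB feature matches into 3D with the registered depth map so that ICP starts from reliable correspondences even on flat walls, is a pinhole back-projection. A minimal sketch with invented, roughly Kinect-like intrinsics:

```python
import numpy as np

# Illustrative pinhole intrinsics (fx, fy, cx, cy); values are assumptions.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def back_project(u, v, depth):
    """Lift a pixel (u, v) with depth z (meters) into camera-frame 3D."""
    z = depth[v, u]
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

# Matched feature pixels in two RGB frames (e.g., from SIFT/ORB matching),
# plus each frame's registered depth map, yield 3D-3D correspondences that
# seed ICP far more reliably than nearest-neighbor search on flat walls.
depth_a = np.full((480, 640), 2.0)   # synthetic 2 m planar depth
p3d = back_project(320, 240, depth_a)
print(p3d)   # ~[0.0019, 0.0019, 2.0]: a point straight ahead at 2 m
```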
100

Eliciting User Requirements Using Appreciative Inquiry

Gonzales, Carol Kernitzki 01 January 2010 (has links)
Many software development projects fail because they do not meet the needs of users, run over budget, or are abandoned. To address this problem, the user requirements elicitation process was modified based on principles of Appreciative Inquiry. Appreciative Inquiry, commonly used in organizational development, aims to build organizations, processes, or systems based on success stories and a hopeful vision for an ideal future. Spanning five studies, Appreciative Inquiry was evaluated for its effectiveness in eliciting user requirements. In the first two cases, it was compared with traditional approaches with end-users and proxy-users. The third study was a quasi-experiment comparing the use of Appreciative Inquiry in different phases of the software development cycle. The final two case studies combined all lessons learned about using Appreciative Inquiry to gain additional understanding of the requirements gathered during various project phases. Each study evaluated the requirements gathered, developer and user attitudes, and the Appreciative Inquiry process itself. Requirements were evaluated for quantity and type, regardless of whether they were implemented. Attitudes were evaluated through process feedback as well as requirements and project commitment. The Appreciative Inquiry process was evaluated with differing groups, projects, and project phases to determine how and when it is best applied. Potentially interceding factors were also evaluated, including team effectiveness, emotional intelligence, perceived stress, the experience of the facilitator, and the development project type itself. Appreciative Inquiry produced positive results for the participants, the requirements obtained, and the requirements-elicitation process in general. It benefited the requirements gathered by increasing the number of unique requirements and by identifying more quality-based (non-functional) and forward-looking requirements. It worked well with well-defined projects; when participants had time to reflect on the thought-provoking questions; when structured questions and extra time facilitated the extraction and translation of requirements; and with a knowledgeable interviewer. Participants (end-users and developers) expressed improved vision and confidence. End-users participated consistently, with immediate buy-in and enthusiasm, especially those who were technically inhibited. Development teams expressed improved confidence and better communication with and understanding of users.
