
Organisational social media platforms: exploring user participation behaviours in software and technology firms

Demetriou, Georgia January 2012 (has links)
The aim of this research was to explore user participation behaviours in the emerging structure of organisational social media platforms, a term coined and defined in this thesis. This emerging community structure originates from technical discussion forums and knowledge repository systems, and is concerned with solving user problems, generating professional and technical content, and facilitating interaction in the external organisational domain. This research explored three such platforms in the software and technology sector: the SAP Community Network, the Oracle Community site, and Microsoft's professional platforms, MSDN and TechNet. Qualitative open-ended interviews were conducted and analysed under the interpretive paradigm to generate a theoretically grounded account of the use of social media tools in this context, the benefits and value outcomes gained, the underlying reasons and motivations that drive participation, and the emerging impact of active contribution as external users gain recognition. It was found that organisational social media platforms enable the development of rich technical content, personalised experience and thought leadership, creating an environment for problem solving, professional development and expert recognition. The voluntary participation observed is underpinned by a combination of altruistic attitudes (e.g. satisfaction, enjoyment and a pro-sharing attitude), reciprocal helping behaviours (e.g. paying it forward, and sharing knowledge and experience) and personal gain expectations (e.g. visibility, recognition and career advancement). Individual platform users appear to acquire participation roles based on their technical expertise (newbie, knowledgeable and expert) and on the level of engagement they wish to undertake (lurker, contributor, community influencer and recognised user).
In this way, a group of highly active users forms at the top tier of participation, establishing channels for professional credibility, product feedback and external advocacy through a close relationship with organisational members. These findings suggest that organisational social media platforms can constitute a new interface with the external environment and a potential business model, under which flexible forms of communication and interaction reshape the support infrastructure, changing the way in which customer service can be delivered, product and sales advocacy can be established, and innovation and product development can be achieved, thereby complementing internal processes with external activity.

The management of multiple submissions in parallel systems: the fair scheduling approach / La gestion de plusieurs soumissions dans les systèmes parallèles : l'approche d'ordonnancement équitable

Vinicius Gama Pinheiro 14 February 2014 (has links)
The High Performance Computing community is constantly facing new challenges due to the ever-growing demand for processing power from scientific applications that represent diverse areas of human knowledge. Parallel and distributed systems are the key to speed up the execution of these applications as many jobs can be executed concurrently. These systems are shared by many users who submit their jobs over time and expect a fair treatment by the scheduler.
The work done in this thesis lies in this context: to analyze and develop fair and efficient algorithms for managing computing resources shared among multiple users. We analyze scenarios with many submissions issued by multiple users over time. Each submission contains one or more jobs, and the set of submissions is organized in successive campaigns. In what we define as the Campaign Scheduling model, the jobs of a campaign do not start until all the jobs from the previous campaign are completed. Each user is interested in minimizing the flow times of their own campaigns. This is motivated by user submission behavior, whereby the execution of a new campaign can be tuned based on the results of the previous one. In the first part of this work, we define a theoretical model for Campaign Scheduling under restrictive assumptions and show that, in the general case, it is NP-hard. For the single-user case, we show that an approximation algorithm for the (classic) parallel job scheduling problem delivers the same approximation ratio for the Campaign Scheduling problem. For the general case with multiple users, we establish a fairness criterion inspired by an idealized time-sharing of the resources. We then propose a scheduling algorithm called FairCamp, which uses campaign deadlines to achieve fairness among users between consecutive campaigns. The second part of this work explores a more relaxed and realistic Campaign Scheduling model with dynamic features. To handle this setting, we propose a new algorithm called OStrich, whose principle is to maintain a virtual time-sharing schedule in which the same share of processors is assigned to each user. The completion times in the virtual schedule determine the execution order on the physical processors; the campaigns are thus interleaved in a fair way. For independent sequential jobs, we show that OStrich guarantees the stretch of a campaign to be proportional to the campaign's size and to the total number of users.
The stretch measures by what factor a workload is slowed down relative to the time it would take on an unloaded system. Finally, the third part of this work extends the capabilities of OStrich to handle rigid parallel jobs. This new version executes campaigns using a greedy approach and uses an event-based resizing mechanism to shape the virtual time-sharing schedule according to the system utilization ratio.
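The virtual fair-share principle described in this abstract can be sketched in a few lines. The following is a simplified fluid model, not the thesis's implementation: each of the n users owns an equal share of the m processors in a virtual schedule, a sequential job advances its owner's virtual clock accordingly, and jobs run on the physical machine in order of their virtual completion times. All names here (`VirtualFairShare`, `submit`, `execution_order`) are illustrative assumptions.

```python
class VirtualFairShare:
    """Toy sketch of a virtual time-sharing schedule (in the spirit of OStrich)."""

    def __init__(self, num_processors, users):
        self.m = num_processors
        self.users = list(users)
        # Per-user virtual clock: when that user's last job completes
        # in the virtual schedule.
        self.virtual_clock = {u: 0.0 for u in self.users}
        self.jobs = []  # (virtual_completion_time, user, job_id)

    def submit(self, user, job_id, length):
        # Each of the n users owns m/n processors in the virtual schedule,
        # so (in this fluid approximation) work of size `length` advances
        # that user's virtual clock by length / (m / n) = length * n / m.
        n = len(self.users)
        self.virtual_clock[user] += length * n / self.m
        self.jobs.append((self.virtual_clock[user], user, job_id))

    def execution_order(self):
        # Jobs are dispatched to the physical processors in order of their
        # virtual completion times, interleaving the users' campaigns fairly.
        return [job_id for _, _, job_id in sorted(self.jobs)]
```

With two users on four processors, a heavy user's backlog no longer starves a light user: the light user's jobs complete early in the virtual schedule and are therefore interleaved ahead of the heavy user's later jobs.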

Forms of Interaction in Mixed Reality Media Performances - a study of the artistic event DESERT RAIN

Rinman, Marie-Louise January 2002 (has links)

Comparative Analysis of Interface Usability for Cybersecurity Applications

Andrews, Wyly West January 2021 (has links)
In cybersecurity, understanding the technologies and the best ways to interface with them is paramount for staying ahead of growing cyberthreats. Developers of cybersecurity software stand to benefit greatly from a better understanding of how users prefer to interact with cybersecurity technology. Two primary interface methods are in use today: the command-line interface (CLI) and the graphical user interface (GUI). This study surveys the benefits and drawbacks of each method in the hands of users who do not have a comprehensive background in cybersecurity. Untrained individuals showed proficiency when working with GUI systems, suggesting that building modern cybersecurity systems with GUIs would improve ease of use for such individuals. The CLI, by contrast, was favored for more complex operations but was difficult for users not accustomed to it.

Issues and challenges in the provision and utilisation of public library services in Nigeria

Salman, Abdulsalam Abiodun January 2017 (has links)
A dissertation submitted to the Faculty of Arts in fulfilment of the requirements for the Degree of Doctor of Philosophy (Library and Information Studies) in the Department of Information Studies at the University of Zululand, 2017. / This study set out to investigate the provision and use of public library services in Nigeria with a view to determining users' level of satisfaction with the services offered. Additionally, the study aimed to develop a framework to address the issues and challenges identified in providing public library services to the Nigerian population. Providing access to information through an institution such as a public library presupposes a well-governed and efficiently managed system. Lacking these, service delivery may be compromised, resulting in a population dissatisfied with the services delivered. The study is centred on the IFLA Public Library Service Guidelines, together with theoretical models such as the Traditional Public Administration Model (TPAM) and the New Public Management (NPM). An interpretivist approach to research was adopted, involving mainly qualitative methods, with a quantitative paradigm used as a supplementary method. A case study design was used, conducting in-depth interviews with three permanent secretaries, six public library directors, and six heads of rural community libraries, cutting across the six geo-political zones of Nigeria. An informal interview was held with children using the public library services in order to gauge their opinion of the services provided in the children's section of the library. A questionnaire was administered to public library users to capture their awareness, accessibility, use and satisfaction with the services provided by public libraries. Observation was used to validate the responses from the interviews and questionnaire.
In all, fifteen interviews were conducted with the administrators/managers of public libraries in Nigeria. Multiple instruments (interview, observation and questionnaire) were used to triangulate the responses and identify areas of divergence and convergence during data analysis. The interview responses were thematically analysed using content analysis, while the data collected through the survey questionnaire were analysed using the Statistical Package for the Social Sciences (SPSS) to produce summary and descriptive statistics. Instrument reliability was assessed in two ways: (1) expert opinion, for which a content validity index (CVI) was computed, and (2) Cronbach's Alpha, which was most useful where continuous and non-dichotomous data were included in the analysis. It was concluded that the instrument was internally consistent and reliable. Ethical considerations were taken into account through informed consent forms, the seeking of approval and permission, and confidentiality. The findings of this study showed that variables such as relevant academic qualifications, years of experience, and the designation of public library administrators affected service delivery. The study also revealed that the pattern of administration of public libraries in Nigeria still conforms to the Traditional Public Administration Model (TPAM), which has been criticised for its top-down and inefficient administrative approach. Very little community participation in the administration of the public libraries was identified, and it was established that there is still a heavy dependence on parent bodies for decision-making and funding.
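The two reliability checks this abstract names are standard and easy to state. The sketch below illustrates how a content validity index (share of expert ratings marking an item relevant) and Cronbach's alpha (internal consistency) are commonly computed; the function names and toy data are assumptions for illustration, not the study's actual instrument or data.

```python
def content_validity_index(ratings):
    """CVI for one item: proportion of expert ratings of 3 or 4
    on a 4-point relevance scale (a common convention)."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

def cronbach_alpha(items):
    """Cronbach's alpha.  `items` is a list of per-item score lists,
    each with one entry per respondent:
        alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))
```

For instance, two items whose scores move in lockstep across respondents yield an alpha of 1.0, the "perfectly internally consistent" extreme.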
The study additionally found the following challenges to be impediments to the expected service delivery: inadequate funding; insufficient staff; irregular electricity supply; outdated library materials; lack of functional library resources and facilities; and inappropriate public library legislation. Digital resources were lacking in most of the libraries, as was physical infrastructure, especially in rural areas. Awareness of the services provided by the public libraries was found to be low, and it is mainly traditional services that are currently offered; this subsequently resulted in a low level of satisfaction with the use of the services. The study recommended that laws specific to public libraries be promulgated to regulate the governance and administration of this public unit; that more relevant and adequate services be provided; that alternative forms of funding be explored to alleviate the dependence on governmental budgets; and that training and retraining of public library staff be actively pursued, with special emphasis on attaining IT skills.

Improving Discoverability of New Functionality : Evaluating User Onboarding Elements and Embedded User Assistance for Highlighting New Features in a PACS

Eriksson, Rebecca January 2023 (has links)
The aim of this study was to explore whether users of a Picture Archiving and Communication System (PACS) could benefit from user onboarding elements and Embedded User Assistance (EUA) to discover and learn about new functionality within the system. Five sub-questions, relating to perceived intrusiveness, intuitiveness, helpfulness and persuasion, as well as the need to guide users to nested functionality, were also explored to help answer the research question. Through a combined Research through Design and Case Study approach, the study followed a design process with four phases: Exploration, Concepting, Prototyping and Evaluation. After initial exploration and assessments, one concept was implemented in an interactive Figma prototype of the PACS. This prototype was tested and evaluated by users as well as by an expert group using the Cognitive Walkthrough method. Overall, positive feedback was received about the usefulness and intuitiveness of the suggested design proposal. The results suggest that implementing onboarding elements, such as feature highlighting and interactive tooltips, could be a good approach to offering EUA in a PACS to help users discover and learn about new functionality. Further elaboration of the results, important considerations, and suggestions regarding future implementation in a live system are discussed in greater detail in the report.

Management, Analysis and Display of Low Speed Data for Long Term Bridge Monitoring by Constructing Reconfigurable and Customizable Graphical User Interfaces

Salgaonkar, Vasant Anil 04 April 2007 (has links)
No description available.

Real-Time Processing and Visualization of 3D Time-Variant Datasets

Elshahali, Mai Hassan Ahmed Ali 14 September 2015 (has links)
Scientific visualization is primarily concerned with the visual presentation of three-dimensional phenomena in domains like medicine, meteorology, astrophysics, etc. The emphasis in scientific visualization research has been on the efficient rendering of measured or simulated data points, surfaces, volumes, and a time component to convey the dynamic nature of the studied phenomena. With the explosive growth in the size of the data, interactive visualization of scientific data becomes a real challenge. In recent years, the graphics community has witnessed tremendous improvements in the performance capabilities of graphics processing units (GPUs), and advances in GPU-accelerated rendering have enabled data exploration at interactive rates. Nevertheless, the majority of techniques rely on the assumption that a true three-dimensional geometric model capturing the physical phenomena of interest is available and ready for visualization. Unfortunately, this assumption does not hold true in many scientific domains, in which measurements are obtained from a given scanning modality at sparsely located intervals in both space and time. This calls for the fusion of data collected from multiple sources in order to fill the gaps and tell the story behind the data. For years, data fusion has relied on machine learning techniques to combine data from multiple modalities, reconstruct missing information, and track features of interest through time. However, these techniques fall short in solving the problem for datasets with large spatio-temporal gaps. This realization has led researchers in the data fusion domain to acknowledge the importance of human-in-the-loop methods, where human expertise plays a major role in data reconstruction. This PhD research focuses on developing visualization and interaction techniques aimed at addressing some of the challenges that experts are faced with when analyzing the spatio-temporal behavior of physical phenomena.
Given a number of datasets obtained from different measurement modalities and from simulation, we propose a generalized framework that can guide research in the field of multi-sensor data fusion and visualization. We advocate the use of GPU parallelism in our developed techniques in order to emphasize interaction as a key component in the successful exploration and analysis of multi-sourced data sets. The goal is to allow the user to create a mental model that captures their understanding of the spatio-temporal behavior of features of interest; one which they can test against real data measurements to verify their model. This model creation and verification is an iterative process in which the user interacts with the visualization, explores and builds an understanding of what occurred in the data, then tests this understanding against real-world measurements and improves it. We developed a system as a reference implementation of the proposed framework. Reconstructed data is rendered in a way that completes the users' cognitive model, which encodes their understanding of the phenomena in question with a high degree of accuracy. We tested the usability of the system and evaluated its support for this cognitive model construction process. Once an acceptable model is constructed, it is fed back to the system in the form of a reference dataset, which our framework uses to guide the real-time tracking of measurement data. Our results show that interactive exploration tasks enable the construction of this cognitive model and reference set, and that real-time interaction is achievable during the exploration, reconstruction, and enhancement of multi-modal time-variant three-dimensional data, by designing and implementing advanced GPU-based visualization techniques. / Ph. D.

Towards Understanding Systems Through User Interactions

Smestad, Doran 30 April 2015 (has links)
Modern computer systems are complex. Even in the best of conditions, it can be difficult to understand the behavior of a system and identify why certain actions are occurring. Existing systems attempt to provide insight by reviewing the effects of actions on the system and estimating their cause. As computer systems are strongly driven by user actions, we propose an approach that identifies processes which have interacted with the user and provides data on which system behaviors were caused by the user. We implement three sensors within the graphical user interface capable of extracting the information needed to identify these processes. We show that our instrumentation is effective in characterizing applications with an on-screen presence, and provides data towards the determination of user intentions. We further show that our method for obtaining this information from the user interface can be run efficiently, with minimal overhead.

User Workshops: A Procedure For Eliciting User Needs And User Defined Problems

Tore, Gulsen 01 September 2006 (has links) (PDF)
The designer is not in every case knowledgeable about the potential user. Users can be consulted in order to obtain the knowledge required for the design process. However, such a consultation process can be problematic, since users may have difficulty expressing their needs and problems, or may not be aware of them. The study originates from the idea that if appropriate tools are provided, users can express their needs and design-related problems. The thesis involves a literature review on the necessity of user knowledge as an input to the design process, and on the methods, techniques and tools that provide this knowledge. Based on the findings from the literature review, three fictional case studies were planned and performed employing two techniques, namely mood boards, and drawing and shaping ideal products. These two techniques were developed step by step into a procedure through the case studies. The thesis proposes guidelines for a "user workshops" procedure as a way to elicit users' tangible and intangible needs, and user-defined problems, by directing them to imagine and express a usage context and to conceptualize solutions to their design-related problems through a concept development activity and additional creative activities.
