1.
Helping job seekers prepare for technical interviews by enabling context-rich interview feedback. Lu, Yi, 11 June 2024
Technical interviews have become a popular method for recruiters in the tech industry to assess job candidates' proficiency in both soft skills and technical skills as programmers. However, these interviews can be stressful and frustrating for interviewees. One significant cause of this negative experience is the lack of feedback, which makes it difficult for job seekers to improve their performance progressively by participating in technical interviews. Although open platforms like LeetCode allow job seekers to practice their technical proficiency, resources for conducting mock interviews to practice soft skills like communication are limited and costly for interviewees. To address this, we ran mock interviews between software engineers and job seekers to investigate how professional interviewers provide feedback when conducting a mock interview and the difficulties they face when interviewing job seekers. With the insights from these formative studies, we developed a new system for technical interviews that aims to help interviewers conduct interviews with less cognitive load and provide context-rich feedback. An evaluation study of the usability of our system for conducting technical interviews further revealed the cognitive loads that remain unresolved for interviewers, underscoring the need for further improvement to facilitate easier interview processes and enable peer-to-peer interview practice. / Master of Science / The technical interview is a common method used by tech companies to evaluate job candidates. During these interviews, candidates are asked to solve algorithm problems and explain their thought processes while coding. By running these interviews, recruiters can assess a candidate's ability to write code and solve problems in a limited time; at the same time, the requirement that interviewees think aloud helps interviewers evaluate their communication and collaboration skills. Although technical interviews enable employers to assess job applicants from multiple perspectives, they also expose interviewees to stress and anxiety. Among the many complaints about technical interviews, one significant difficulty is the lack of feedback from interviewers. As a result, it is difficult for interviewees to improve progressively by participating in technical interviews repeatedly. Although there are platforms on which interviewees can practice writing code, resources such as mock interviews with actual interviewers for practicing communication skills are costly and rare. Our study investigated how professional programmers run mock technical interviews and provide feedback when asked to. The mock interview observations helped us understand the standard procedure and common practices practitioners follow when running these interviews, and allowed us to identify potential causes of cognitive load and difficulty for interviewers. To address these difficulties, we developed a new system that enables interviewers to conduct technical interviews with less cognitive load and provide enriched feedback. After rerunning mock interviews with our system, we noted that while some of its features made the interview process easier, other cognitive loads remained unresolved. Looking into these difficulties, we suggest several directions for future work to improve our design, ease the interview process for interviewers, and support interview rehearsal between job seekers.
2.
Préservation des intentions et maintien de la cohérence des données répliquées en temps réel / Intentions Preservation and Consistency Maintenance for Real Time Replicated Data. André, Luc, 13 May 2016
Real-time collaborative editors, such as GoogleDocs or Etherpad, allow several users to edit the same document simultaneously. These applications replicate the edited data at each user's site so that editing remains fast and responsive at all times. Editing conflicts occur frequently and must be handled automatically by the application, which has to converge all replicas towards a common document containing every modification issued by every user. Most current real-time collaborative editing algorithms were designed for simple data structures, such as linear text, and minimal editing operations, such as inserting or removing a single character. When the document is more complex (an XML document, structured text) or can be edited with a richer set of operations (moving text, styling text), these algorithms fail to resolve editing conflicts in a way that satisfies users: the copies converge, but users' intentions are not preserved. The goal of this thesis is to design real-time collaborative editing algorithms that preserve users' intentions better than current ones. To that end, we study two approaches from the literature, Operational Transformation (OT) and Commutative Replicated Data Types (CRDT), and show how more semantics can be attached to the basic operations of collaborative editing algorithms, letting users express their intentions more fully and letting the algorithms resolve conflicts more effectively. The first contribution of this thesis is an OT-based algorithm designed to handle rich-text documents (with styled text and paragraphs) and to preserve the intentions of a set of high-level editing operations (add style, merge paragraphs, and so on). The second contribution is a CRDT-based algorithm that preserves update intentions while improving the overall performance of the approach when dealing with large blocks of data.
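For readers unfamiliar with the operational transformation (OT) technique that this and several later entries build on, the minimal character-level sketch below illustrates the core idea: concurrent operations are rewritten against one another so that every replica converges to the same text. It is a textbook-style illustration only, not the rich-text algorithm contributed by the thesis, and all names in it are assumptions.

```python
# Minimal sketch of character-wise operational transformation (OT).
# Illustrative only: this is the classic textbook idea, not the thesis's algorithm.

from dataclasses import dataclass

@dataclass
class Insert:
    pos: int   # index at which the character is inserted
    char: str  # single character being inserted
    site: int  # identifier of the replica that issued the operation (tie-breaker)

def transform(op: Insert, against: Insert) -> Insert:
    """Rewrite `op` so it can be applied after `against` has already been applied."""
    if against.pos < op.pos or (against.pos == op.pos and against.site < op.site):
        # The concurrent insertion happened to the left: shift our position right.
        return Insert(op.pos + 1, op.char, op.site)
    return op

def apply(text: str, op: Insert) -> str:
    return text[:op.pos] + op.char + text[op.pos:]

# Two replicas start from "ab" and issue concurrent insertions.
a = Insert(pos=1, char="X", site=1)   # replica 1 inserts "X" after "a"
b = Insert(pos=2, char="Y", site=2)   # replica 2 appends "Y"

# Each replica applies its own operation, then the transformed remote one.
replica1 = apply(apply("ab", a), transform(b, a))
replica2 = apply(apply("ab", b), transform(a, b))
assert replica1 == replica2 == "aXbY"  # both copies converge
```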
3.
Towards federated social infrastructures for plug-based decentralized social networks / Vers des infrastructures sociales fédérées pour des réseaux sociaux décentralisés à base d'ordinateurs contraints. Ariyattu, Resmi, 05 July 2017
In this thesis, we address two issues in the area of decentralized distributed systems: network-aware overlays and collaborative editing. Although overlay networks have been studied extensively, most existing solutions either ignore the underlying physical network topology or take it into account poorly, even though the performance of an overlay strongly depends on how its logical topology exploits the locality present in the physical network on which it runs. To address this problem, we propose Fluidify, a decentralized mechanism for deploying an overlay network on top of a physical infrastructure while maximizing network locality. Fluidify uses a dual strategy that exploits both the logical links of an overlay and the physical topology of its underlying network to progressively align one with the other. The resulting protocol is generic, efficient, scalable and can substantially reduce network overhead and latency in overlay-based systems. The second issue we address concerns collaborative editing platforms. Distributed collaborative editors allow several remote users to contribute concurrently to the same document, but currently deployed editors can only support a limited number of concurrent users. A number of peer-to-peer solutions have therefore been proposed to remove this limitation and allow a large number of users to collaborate on the same document without any central coordination. These decentralized solutions assume, however, that all users in a system are editing the same set of documents, which is unlikely to be the case. To open the path towards more flexible decentralized collaborative editors, we present Filament, a decentralized cohort-construction protocol adapted to the needs of large-scale collaborative editors. Filament eliminates the need for any intermediate distributed hash table (DHT) and allows users working on the same document to find each other in a rapid, efficient and robust manner by generating an adaptive routing field around themselves. Filament's architecture hinges on a set of self-organizing overlays that exploit the similarities between the document sets edited by users. The resulting protocol is efficient, scalable and provides beneficial load-balancing properties over the involved peers.
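The abstract describes Fluidify only at a high level. The toy sketch below illustrates the general idea of locality-aware overlay placement, namely swapping which physical machine hosts which overlay node whenever the swap reduces the physical distance between overlay neighbours. The ring overlay, cost function and centralized greedy loop are assumptions made purely for illustration; the actual Fluidify protocol is decentralized and works differently.

```python
# Toy illustration of locality-aware overlay placement (NOT the Fluidify protocol):
# greedily swap node placements when a swap reduces the total physical distance
# between logically adjacent overlay nodes.

import random

def overlay_cost(placement, phys_dist):
    """Sum of physical distances between logically adjacent overlay nodes (ring overlay)."""
    n = len(placement)
    return sum(phys_dist[placement[i]][placement[(i + 1) % n]] for i in range(n))

def align(placement, phys_dist, rounds=1000, seed=0):
    rng = random.Random(seed)
    placement = list(placement)
    cost = overlay_cost(placement, phys_dist)
    for _ in range(rounds):
        i, j = rng.randrange(len(placement)), rng.randrange(len(placement))
        placement[i], placement[j] = placement[j], placement[i]       # tentative swap
        new_cost = overlay_cost(placement, phys_dist)
        if new_cost <= cost:
            cost = new_cost                                           # keep: locality improved
        else:
            placement[i], placement[j] = placement[j], placement[i]   # undo the swap
    return placement, cost

# Example: 4 overlay nodes placed on 4 machines with an assumed distance matrix.
dist = [[0, 1, 4, 4],
        [1, 0, 4, 4],
        [4, 4, 0, 1],
        [4, 4, 1, 0]]
placement, cost = align([0, 2, 1, 3], dist, rounds=200)
print(placement, cost)
```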
4.
Collaborative Text Editing in a Portal. Korčák, Ján, January 2012
In this text we focus on the popular concept of collaborative document authoring. We introduce the idea of using this mechanism in various areas of decision making and describe the concept and the principle of its operation. We then introduce and analyse portals and portlet technology, their advantages and their uses. The goal of the thesis is to implement a collaborative editor, using a library for handling changes in documents, with persistence and application logic on the JEE platform, and to create a simple portlet for this service.
5.
Encrypted Collaborative Editing Software. Tran, Augustin, 05 1900
Cloud-based collaborative editors enable real-time document processing via remote connections. Their common application is to allow Internet users to work collaboratively on documents stored in the cloud, even if these users are physically a world apart. However, this convenience comes at a cost in terms of user privacy. Hence, as cloud computing applications grow in popularity, cloud security grows in importance. A major concern with the cloud is who has access to user data. To address this issue, various third-party services offer encryption mechanisms to protect user data against insider attacks or data leakage. However, these services often encrypt only data at rest, leaving the data being processed potentially vulnerable. The purpose of this study is to propose a prototype software system that encrypts collaboratively edited data in real time while preserving a user experience similar to that of, e.g., Google Docs.
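As a rough illustration of the approach described, the sketch below encrypts each edit operation on the client so that the collaboration server only ever relays ciphertext. It is a minimal sketch assuming a pre-shared symmetric key and the third-party Python cryptography package; the operation format and key handling are assumptions, not the prototype built in the thesis.

```python
# Sketch of client-side encryption of collaborative edits (illustrative assumptions,
# not the thesis prototype). Requires: pip install cryptography

import json
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # in practice, shared with collaborators out of band
cipher = Fernet(shared_key)

def encrypt_edit(pos: int, text: str) -> bytes:
    """Encrypt a single edit operation before sending it to the collaboration server."""
    payload = json.dumps({"pos": pos, "insert": text}).encode()
    return cipher.encrypt(payload)   # the server only ever sees this opaque token

def decrypt_edit(token: bytes) -> dict:
    """Decrypt an edit received from the server on another collaborator's client."""
    return json.loads(cipher.decrypt(token))

token = encrypt_edit(4, "hello")
assert decrypt_edit(token) == {"pos": 4, "insert": "hello"}
```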
6.
Contribution au renforcement de la protection de la vie privée : application à l'édition collaborative et anonyme des documents. Vallet, Laurent, 18 September 2012
This thesis aims to strengthen the protection of the privacy of authors in the collaborative editing of electronic documents. The first part of our work consisted in designing a secure model for a collaborative document editor, in which the confidentiality of documents is ensured through a decentralized access control mechanism. The second part was to protect the identity of communicating users by ensuring their anonymity during collaborative document editing. Our concept of a collaborative editor is based on the use of sets of modifications made to a document, which we call deltas; deltas define versions of the document and preserve the history of successive modifications. When a new version of a document appears, only the new delta that results from it must be transmitted and stored: documents are thus distributed across a set of storage sites (servers). We show how this allows us, first, to reduce the load on the document version management system and, second, to improve the overall security of the collaborative system. Decentralized retrieval of a new version requires only the deltas missing with respect to a previous version, and is therefore cheaper than retrieving the entire document. The information about the links between deltas is stored directly with the deltas, rather than in a table or a metadata object, which improves their security. The version management system maintains consistency between versions. Once a user has retrieved versions, she holds them locally, and the history can be accessed at any time without connecting to the central server. Deltas can be identified, indexed, and have their integrity maintained through the use of hash functions.
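As an illustration of the delta-based version model described above, the sketch below shows one possible way to identify deltas, link each one to its parent version, and check its integrity with hash functions. The field names and the verification scheme are assumptions made for illustration, not the data model defined in the thesis.

```python
# Sketch of hash-identified, hash-linked deltas (illustrative assumptions only).

import hashlib
import json
from dataclasses import dataclass, field

def digest(parent_id: str, changes: list) -> str:
    return hashlib.sha256(json.dumps([parent_id, changes]).encode()).hexdigest()

@dataclass
class Delta:
    parent_id: str          # identifier (hash) of the delta this version extends
    changes: list           # the set of modifications making up this version
    delta_id: str = field(init=False)

    def __post_init__(self):
        self.delta_id = digest(self.parent_id, self.changes)

    def verify(self) -> bool:
        # The link to the parent is stored with the delta itself, so tampering with
        # either the changes or the lineage invalidates the identifier.
        return self.delta_id == digest(self.parent_id, self.changes)

root = Delta(parent_id="", changes=[["insert", 0, "Hello"]])
v2 = Delta(parent_id=root.delta_id, changes=[["insert", 5, ", world"]])
assert root.verify() and v2.verify()
```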
7.
Rethinking Consistency Management in Real-time Collaborative Editing Systems. Preston, Jon Anderson, 28 June 2007
Networked computer systems offer much to support collaborative editing of shared documents among users. The goal of real-time collaborative editing systems (RTCES) is to increase concurrent access to shared documents by allowing multiple users to contribute to and/or track changes to these documents; yet in existing systems concurrent access is either limited by exclusive locking, or enabled by concurrency control algorithms such as operational transformation (OT). Unfortunately, such OT-based schemes are costly with respect to communication and computation. Further, existing systems are often specialized in their functionality and require users to adopt new, unfamiliar software to enable collaboration. This research discusses our work in improving consistency management in RTCES. We have developed a set of deadlock-free, multi-granular dynamic locking algorithms and data structures that maximize concurrent access to shared documents while minimizing communication cost. These algorithms provide a high level of service for concurrent access to the shared document and integrate merge-based or OT-based consistency maintenance policies locally among a subset of the users within a subsection of the document, thus reducing the communication costs of maintaining consistency. Additionally, we have developed client-server and P2P implementations of our hierarchical document management algorithms. Simulation results indicate that our approach achieves significant communication and computation cost savings. We have also developed a hierarchical reduction algorithm that can minimize the space required by an RTCES, and this algorithm may be pipelined through our document tree. Further, we have developed an architecture that allows a heterogeneous set of client editing software to connect with a heterogeneous set of server document repositories via Web services. This architecture supports our algorithms and does not require client or server technologies to be modified, so it can accommodate existing, favored editing and repository tools. Finally, we have developed a prototype benchmark system of our architecture that is responsive to users' actions and minimizes communication costs.
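The sketch below illustrates the general notion of multi-granular locking over a document tree: a section can be locked only if neither it, nor any ancestor, nor any descendant is already locked by another user, so users working on disjoint subsections can edit concurrently. It is a deliberately simplified toy, not the deadlock-free dynamic algorithms contributed by the thesis, and all names in it are assumptions.

```python
# Toy sketch of multi-granular locking on a document tree (illustrative only).

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Section:
    name: str
    parent: Optional["Section"] = None
    children: List["Section"] = field(default_factory=list)
    locked_by: Optional[str] = None

    def add(self, name: str) -> "Section":
        child = Section(name, parent=self)
        self.children.append(child)
        return child

    def _ancestors_free(self, user: str) -> bool:
        node = self
        while node is not None:
            if node.locked_by not in (None, user):
                return False
            node = node.parent
        return True

    def _descendants_free(self, user: str) -> bool:
        return all(c.locked_by in (None, user) and c._descendants_free(user)
                   for c in self.children)

    def try_lock(self, user: str) -> bool:
        # Grant the lock only if no conflicting lock exists above or below this node.
        if self._ancestors_free(user) and self._descendants_free(user):
            self.locked_by = user
            return True
        return False

doc = Section("document")
intro, body = doc.add("introduction"), doc.add("body")
assert intro.try_lock("alice")       # Alice locks the introduction
assert body.try_lock("bob")          # Bob can still lock a disjoint section
assert not doc.try_lock("carol")     # ...but nobody can now lock the whole document
```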
8.
Byzantine Fault Tolerant Collaborative Editing. Babi, Mamdouh O., 24 May 2017
No description available.
9.
Une approche du patching audio collaboratif : enjeux et développement du collecticiel Kiwi / An approach of collaborative audio patching: challenges and development of the Kiwi groupware. Paris, Eliott, 05 December 2018
Traditional audio patching software, such as Max or Pure Data, are environments for designing and running sound processing in real time. These programs are single-user, yet in many cases users need to work closely together to build or perform the same sound processing; this is particularly true in teaching contexts and in collective musical creation. Solutions exist, but they do not suit every use. We therefore confronted this problem in a concrete way by developing a new collaborative audio patching solution, named Kiwi, which allows the same sound-processing patch to be built by several hands in a distributed manner. Through a critical study of existing software solutions, we provide keys to understanding the design of a multi-user system of this kind. We state the main obstacles we had to overcome to make this practice viable and present the software solution. We describe the possibilities offered by the application and the technical and ergonomic implementation choices we made to allow several people to coordinate their activities within a shared workspace. We then examine several uses of this groupware in teaching and musical creation contexts in order to evaluate the proposed solution. Finally, we present more recent developments and open up the future perspectives that this application allows us to envisage.
10.
Design and implementation of a mobile wiki: mobile RikWik. Huang, Wei-Che (Darren), January 2007
Wikis are a popular collaboration technology. They support the collaborative editing of web pages through a simple mark-up language. Mobile devices are becoming ubiquitous, powerful and affordable, so it is advantageous for people to get the benefits of wikis in a mobile setting. However, mobile computing brings its own challenges, such as limited screen size, bandwidth, memory and battery life; mobile devices also have intermittent connectivity owing to their mobility and to network coverage. I investigate how wikis can be made mobile; that is, how wiki-style collaborative editing can be achieved through mobile devices such as PDAs and smartphones. A prototype mobile wiki has been created using .NET that addresses these issues and enables simple collaborative working, both online and offline, through smart mobile devices. A cut-down wiki runs on the mobile device and communicates with a main central wiki to cache pages for offline use. When the device sits in a powered cradle, eager downloading and synchronization of pages is supported. During mobile operation, pages are cached lazily on demand to minimize power use and to save the limited and expensive bandwidth. On re-connection, pages edited offline as well as new pages are synchronized with the main wiki server. Finally, a pluggable page-rendering engine enables pages to be rendered in different ways to suit different screen sizes.
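The sketch below illustrates the offline-editing pattern the abstract describes: pages are cached lazily on demand, edits made while disconnected are queued locally, and queued edits are pushed back to the main wiki on re-connection. The class and method names are assumptions made for illustration, not the RikWik implementation.

```python
# Sketch of lazy caching plus offline-edit synchronization (illustrative only).

class InMemoryWiki:
    """Stand-in for the main central wiki server."""
    def __init__(self):
        self.pages = {}
    def fetch(self, title):
        return self.pages.get(title, "")
    def store(self, title, text):
        self.pages[title] = text

class MobileWikiCache:
    def __init__(self, server):
        self.server = server          # any object exposing fetch(title) / store(title, text)
        self.pages = {}               # locally cached page text, keyed by title
        self.dirty = set()            # titles edited while offline, awaiting upload
        self.online = True

    def read(self, title: str) -> str:
        # Lazy caching: only download a page the first time it is actually needed.
        if title not in self.pages and self.online:
            self.pages[title] = self.server.fetch(title)
        return self.pages.get(title, "")

    def edit(self, title: str, text: str) -> None:
        self.pages[title] = text
        if self.online:
            self.server.store(title, text)
        else:
            self.dirty.add(title)     # remember the page for later synchronization

    def reconnect(self) -> None:
        # On re-connection, push offline edits and new pages to the main wiki.
        self.online = True
        for title in sorted(self.dirty):
            self.server.store(title, self.pages[title])
        self.dirty.clear()

wiki = MobileWikiCache(InMemoryWiki())
wiki.online = False
wiki.edit("HomePage", "edited offline")
wiki.reconnect()                      # pushes the offline edit back to the server
assert wiki.server.fetch("HomePage") == "edited offline"
```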