11 |
Improving Centruflow using semantic web technologies : a thesis presented in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at Massey University, Palmerston North, New Zealand. Giles, Jonathan Andrew, January 2007 (has links)
Centruflow is an application for visualising structured data. It draws graphs that let users explore information relationships that might otherwise remain invisible or poorly understood, helping users gain a better understanding of their organisation and communicate more effectively. In earlier versions of Centruflow, developing new functionality was difficult because the application was built on a largely unsupported, proprietary visualisation toolkit, and there were major issues surrounding information currency and trust. Addressing these problems formed a sub-project of this thesis. The main purpose of the thesis, however, was to research and develop a set of mathematical algorithms to infer implicit relationships in Centruflow data sources. Once these implicit relationships were found, we could make them explicit by showing them within Centruflow. To enable this, relationships were calculated from metadata that users attach by 'tagging' resources. We believed that by using this tagging metadata, Centruflow could offer users far more insight into their own data. Implementing this was not straightforward: it required considerable research and development to understand the technologies that could help us reach our goal. Our focus was primarily on technologies and approaches common to the semantic web and 'Web 2.0'. By adopting semantic web technologies, we made Centruflow considerably more standards-compliant than it was previously. By the end of the development period, Centruflow had been substantially retrofitted, with all proprietary technologies replaced by equivalent semantic web technologies.
As a result, Centruflow now sits at the forefront of the semantic web wave, allowing far more comprehensive and rapid visualisation of a far larger set of readily available data than was previously possible. Having implemented all necessary functionality, we validated our approach and found that our improvements produced a considerably more intelligent and useful Centruflow application than was previously available. This functionality now ships as part of 'Centruflow 3.0', to be publicly released in March 2008. Finally, we conclude the thesis with a discussion of future work to improve on the current release.
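The thesis does not spell out its inference algorithms in this abstract, so the following is only an illustrative sketch of the tagging idea: treating two resources as implicitly related when their user-assigned tag sets overlap strongly (Jaccard similarity over a hypothetical threshold).

```python
# Sketch only: the thesis's actual inference algorithms are not detailed in
# this abstract. This illustrates one plausible approach -- inferring an
# implicit relationship between two resources when their user-assigned tag
# sets are sufficiently similar. The threshold is an assumption.

def jaccard(a, b):
    """Jaccard similarity of two tag sets: |A & B| / |A | B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def infer_relationships(tags_by_resource, threshold=0.5):
    """Return pairs of resources whose tag overlap meets the threshold."""
    resources = sorted(tags_by_resource)
    pairs = []
    for i, r1 in enumerate(resources):
        for r2 in resources[i + 1:]:
            if jaccard(tags_by_resource[r1], tags_by_resource[r2]) >= threshold:
                pairs.append((r1, r2))
    return pairs

tags = {
    "report.pdf":  {"finance", "q3", "audit"},
    "budget.xls":  {"finance", "q3", "forecast"},
    "minutes.doc": {"meeting", "hr"},
}
print(infer_relationships(tags))  # [('budget.xls', 'report.pdf')]
```

Making such inferred pairs explicit as graph edges is the kind of "implicit made explicit" relationship the abstract describes.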
|
14 |
Αποτελεσματικοί αλγόριθμοι και δομές δεδομένων με εφαρμογές στην ανάκτηση πληροφορίας και στις τεχνολογίες διαδικτύου / Efficient algorithms and data structures with applications in information retrieval and web technologies. Αντωνίου, Δημήτρης (Antoniou, Dimitris), 23 May 2011 (has links)
Αντικείμενο της παρούσας διδακτορικής διατριβής είναι η μελέτη και τροποποίηση βασικών δομών δεδομένων με σκοπό τη δημιουργία νέων και την τροποποίηση υπαρχουσών λύσεων, με εφαρμογές στην Ανάκτηση Πληροφορίας, τη Βιοπληροφορική και το Διαδίκτυο.
Αρχικά, δίνεται έμφαση στην ανάπτυξη και πειραματική επιβεβαίωση αλγοριθμικών τεχνικών για τη σχεδίαση αυτοοργανώμενων δομών δεδομένων (self-organizing data structures). Μέχρι σήμερα, ο μόνος πιθανός υποψήφιος αλγόριθμος αναζήτησης σε δένδρο που μπορεί να είναι Ο(1)-ανταγωνιστικός είναι το splay δένδρο (splay tree) που παρουσιάστηκε από τους Sleator και Tarjan [1]. Επιπρόσθετα, μελετώνται διάφορες εναλλακτικές τεχνικές αυτοοργάνωσης ([2],[3],[4],[5],[6]) και γίνεται επιβεβαίωση των πάνω ορίων που ισχύουν για την απόδοση των splay trees και για αυτές. Η ανάπτυξη των διάφορων αλγοριθμικών αυτών τεχνικών βρίσκει εφαρμογές πάνω στη συμπίεση δεδομένων. Οι αλγόριθμοι συμπίεσης δεδομένων μπορούν να βελτιώσουν την αποδοτικότητα με την οποία τα δεδομένα αποθηκεύονται ή μεταφέρονται, μέσω της μείωσης του ποσού της πλεονάζουσας πληροφορίας. Η χρήση αυτών των αλγορίθμων τόσο στην κρυπτογράφηση όσο και στην επεξεργασία εικόνας είναι αποδοτική και έχει μεγάλο ερευνητικό ενδιαφέρον. Γενικότερα, οι αυτοοργανώμενες δομές δεδομένων χρήζουν ιδιαίτερης προσοχής στους on-line αλγόριθμους. Αναλυτικότερα, στην παρούσα διατριβή, εφαρμόζεται συμπίεση σε βιολογικά δεδομένα αλλά και σε κείμενα τόσο με χρήση του κλασικού splay δέντρου [10] όσο και της log log n ανταγωνιστικής παραλλαγής του. Επιπλέον, παρουσιάζονται τυχαιοποιημένες εκδόσεις των παραπάνω δομών και εφαρμόζονται και αυτές στη συμπίεση δεδομένων. Οι log log n ανταγωνιστικές δομές έχουν καλύτερη απόδοση όσον αφορά την πολυπλοκότητά τους σε σχέση με την κλασική splay δομή. Το γεγονός αυτό επιβεβαιώνεται πειραματικά, όπου η επιτυγχανόμενη συμπίεση είναι στις περισσότερες των περιπτώσεων καλύτερη από την αντίστοιχη της κλασικής δομής.
Επιπλέον, ιδιαίτερο ερευνητικό ενδιαφέρον βρίσκει η εφαρμογή βασικών δομών δεδομένων στο διαδίκτυο. Επιδιώκουμε την ανάπτυξη και θεωρητική επιβεβαίωση αλγορίθμων για προβλήματα όπως η ανάθεση «καυτών συνδέσμων» (hot links [7]), η αναδιοργάνωση ιστοσελίδων και η ανάκτηση πληροφορίας ([8],[9]). Σε πρώτο στάδιο, προτείνονται ευριστικοί αλγόριθμοι με σκοπό την ανάθεση «καυτών συνδέσμων» (hotlinks) και τη βελτίωση της τοπολογίας ενός ιστότοπου ([12],[13],[14]). Σκοπός του αλγορίθμου είναι η προώθηση των δημοφιλών ιστοσελίδων ενός ιστότοπου, μέσω της ανάθεσης συνδέσμων προς αυτές, από ιστοσελίδες οι οποίες είναι σχετικές με αυτές ως προς το περιεχόμενο αλλά και ταυτόχρονα συντελούν στη μείωση της απόστασής τους από την αρχική σελίδα. Παρουσιάζεται το μοντέλο του αλγορίθμου, καθώς και μετρικές οι οποίες χρησιμοποιούνται για την ποσοτική αξιολόγηση της αποδοτικότητας του αλγορίθμου σε σχέση με ειδικά χαρακτηριστικά ενός ιστότοπου, όπως η εντροπία του.
Σε δεύτερο στάδιο, γίνεται μελέτη τεχνικών προσωποποίησης ιστοσελίδων [11]. Συγκεκριμένα, σκοπός είναι η υλοποίηση ενός αλγορίθμου, ο οποίος θα ανακαλύπτει την αυξημένη ζήτηση μίας κατηγορίας ιστοσελίδων Α από έναν χρήστη και αξιοποιώντας την καταγεγραμμένη συμπεριφορά άλλων χρηστών, θα προτείνει κατηγορίες σελίδων οι οποίες προτιμήθηκαν από χρήστες οι οποίοι ομοίως παρουσίασαν αυξημένο ενδιαφέρον προς την κατηγορία αυτή. Αναλύεται το φαινόμενο της έξαρσης επισκεψιμότητας (burst) και η αξιοποίηση του στο πεδίο της εξατομίκευσης ιστοσελίδων. Ο αλγόριθμος υλοποιείται με τη χρήση δύο δομών δεδομένων, των Binary heaps και των Splay δέντρων, και αναλύεται η χρονική και χωρική πολυπλοκότητά του. Επιπρόσθετα, γίνεται πειραματική επιβεβαίωση της ορθής και αποδοτικής εκτέλεσης του αλγορίθμου. Αξίζει να σημειωθεί πως ο προτεινόμενος αλγόριθμος λόγω της φύσης του, χρησιμοποιεί χώρο, ο οποίος επιτρέπει τη χρησιμοποίηση του στη RAM. Τέλος, ο προτεινόμενος αλγόριθμος δύναται να βρει εφαρμογή σε εξατομίκευση σελίδων με βάση το σημασιολογικό τους περιεχόμενο σε αντιστοιχία με το διαχωρισμό τους σε κατηγορίες.
Σε τρίτο στάδιο, γίνεται παρουσίαση πρωτότυπης τεχνικής σύστασης ιστοσελίδων [15] με χρήση Splay δέντρων. Σε αυτή την περίπτωση, δίνεται ιδιαίτερο βάρος στην εύρεση των σελίδων που παρουσιάζουν έξαρση επισκεψιμότητας και στη σύστασή τους στους χρήστες ενός ιστότοπου. Αρχικά, τεκμηριώνεται η αξία της εύρεσης μιας σελίδας, η οποία δέχεται ένα burst επισκέψεων. H έξαρση επισκεψιμότητας (burst) ορίζεται σε σχέση τόσο με τον αριθμό των επισκέψεων, όσο και με το χρονικό διάστημα επιτέλεσής τους. Η εύρεση των σελίδων επιτυγχάνεται με τη μοντελοποίηση ενός ιστότοπου μέσω ενός splay δέντρου. Με την τροποποίηση του δέντρου μέσω της χρήσης χρονοσφραγίδων (timestamps), ο αλγόριθμος είναι σε θέση να επιστρέφει σε κάθε χρονική στιγμή την ιστοσελίδα που έχει δεχθεί το πιο πρόσφατο burst επισκέψεων. Ο αλγόριθμος αναλύεται όσον αφορά τη χωρική και χρονική του πολυπλοκότητα και συγκρίνεται με εναλλακτικές λύσεις. Μείζονος σημασίας είναι η δυνατότητα εφαρμογής του αλγορίθμου και σε άλλα φαινόμενα της καθημερινότητας μέσω της ανάλογης μοντελοποίησης. Παραδείγματος χάρη, στην περίπτωση της απεικόνισης ενός συγκοινωνιακού δικτύου μέσω ενός γράφου, ο αλγόριθμος σύστασης δύναται να επιστρέφει σε κάθε περίπτωση τον κυκλοφοριακό κόμβο ο οποίος παρουσιάζει την πιο πρόσφατη συμφόρηση.
Τέλος, όσον αφορά το πεδίο της ανάκτησης πληροφορίας, η διατριβή επικεντρώνεται σε μία πρωτότυπη και ολοκληρωμένη μεθοδολογία με σκοπό την αξιολόγηση της ποιότητας ενός συστήματος λογισμικού βάσει του Προτύπου Ποιότητας ISO/IEC-9126.
Το κύριο χαρακτηριστικό της είναι ότι ολοκληρώνει την αξιολόγηση ενός συστήματος λογισμικού ενσωματώνοντας την αποτίμηση όχι μόνο των χαρακτηριστικών που είναι προσανατολισμένα στο χρήστη, αλλά και εκείνων που είναι πιο τεχνικά και αφορούν τους μηχανικούς λογισμικού ενός συστήματος. Σε αυτή τη διατριβή δίνεται βάρος στην εφαρμογή μεθόδων εξόρυξης δεδομένων πάνω στα αποτελέσματα της μέτρησης μετρικών οι οποίες συνθέτουν τα χαρακτηριστικά του πηγαίου κώδικα, όπως αυτά ορίζονται από το Προτύπο Ποιότητας ISO/IEC-9126 [16][17]. Ειδικότερα εφαρμόζονται αλγόριθμοι συσταδοποίησης με σκοπό την εύρεση τμημάτων κώδικα με ιδιαίτερα χαρακτηριστικά, που χρήζουν προσοχής. / In this dissertation we take an in-depth look at the use of effective and efficient data structures and algorithms in the fields of data mining and web technologies. The main goal is to develop algorithms based on appropriate data structures, in order to improve the performance at all levels of web applications.
In the first chapter the reader is introduced to the main issues studied in this dissertation. In the second chapter, we propose novel randomized versions of splay trees. We evaluate the practical performance of these structures, in comparison with the original splay trees and with their log log n-competitive variants, in the application field of compression. Moreover, we show that the Chain Splay tree achieves O(log n) worst-case cost per query. To evaluate performance, we use plain splay trees, the log log n-competitive variants, and the proposed randomized version with the Chain Splay technique to compress data. It is observed experimentally that the compression achieved by the log log n-competitive technique is, as expected, better than that of the plain splay trees.
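The splay operation underlying all of these structures can be sketched as follows. This is the classic recursive formulation after Sleator and Tarjan; the Chain Splay and randomized variants the chapter proposes modify this scheme and are not reproduced here.

```python
# Minimal recursive splay tree (classic formulation, not the thesis's
# Chain Splay or randomized variants). Accessing a key moves it to the
# root via zig / zig-zig / zig-zag rotations.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Move the node with `key` (or the last node visited) to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                      # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                    # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right:
                root.left = rotate_left(root.left)
        return rotate_right(root) if root.left else root
    else:
        if key > root.right.key if root.right else False:
            pass
        if root.right is None:
            return root
        if key > root.right.key:                     # zag-zag
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                   # zag-zig
            root.right.left = splay(root.right.left, key)
            if root.right.left:
                root.right = rotate_right(root.right)
        return rotate_left(root) if root.right else root

def insert(root, key):
    """Splay-tree insert: splay on the key, then hang the old root off a new node."""
    if root is None:
        return Node(key)
    root = splay(root, key)
    if root.key == key:
        return root
    node = Node(key)
    if key < root.key:
        node.right, node.left, root.left = root, root.left, None
    else:
        node.left, node.right, root.right = root, root.right, None
    return node

root = None
for k in [5, 2, 8, 1]:
    root = insert(root, k)
root = splay(root, 2)
print(root.key)  # 2 -- the accessed key is now at the root
```

Because frequently accessed keys migrate toward the root, feeding symbol accesses through such a tree yields the adaptive behaviour that the compression experiments exploit.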
The third chapter focuses on hotlinks assignment techniques. Enhancing the web browsing experience is an open issue frequently dealt with by assigning hotlinks between webpages, i.e. shortcuts from one node to another. Our aim is to provide a novel, more efficient approach to minimizing the expected number of steps needed to reach expected pages when browsing a website. We present a randomized algorithm which combines the popularity of the webpages, the website structure and, for the first time to the best of the authors' knowledge, the similarity of content between pages in order to suggest the placement of suitable hotlinks. We verify experimentally that users need fewer page transitions to reach expected information pages when browsing a website enhanced by the proposed algorithm.
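A toy version of the scoring idea can be sketched as follows. The chapter's actual randomized algorithm also factors in content similarity; this sketch keeps only popularity and the clicks a shortcut from the home page would save, and all names and numbers are illustrative.

```python
# Toy hotlink-assignment heuristic (illustrative only; the chapter's
# randomized algorithm additionally uses content similarity). A hotlink
# from the home page to a page at depth d saves (d - 1) clicks, so each
# candidate is scored by popularity * clicks saved.

def assign_hotlinks(depth, popularity, budget=1):
    """Pick up to `budget` pages to link directly from the home page."""
    scores = {p: popularity[p] * (depth[p] - 1) for p in depth if depth[p] > 1}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:budget]

depth      = {"home": 0, "news": 1, "archive/2007": 3, "contact": 2}
popularity = {"home": 1.0, "news": 0.3, "archive/2007": 0.4, "contact": 0.1}
print(assign_hotlinks(depth, popularity, budget=1))  # ['archive/2007']
```

The popular deep page wins because its score (0.4 × 2 clicks saved) beats the shallower candidates.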
In the fourth chapter we investigate the problem of web personalization. The explosive growth in the size and use of the World Wide Web continuously creates new challenges and needs. The need to predict users' preferences, in order to expedite and improve browsing through a site, can be met by personalizing websites. Recommendation and personalization algorithms aim at suggesting webpages to users based on their current visit and on past users' navigational patterns. The problem that we address is the case where a few webpages become very popular for short periods of time and are accessed very frequently within a limited temporal space. Our aim is to deal with these bursts of visits and suggest these highly accessed pages to future users with common interests. Hence, we propose a new web personalization technique based on advanced data structures, namely splay trees and binary heaps. We describe the architecture of the technique, analyze its time and space complexity and prove its performance. In addition, we compare the proposed technique, both theoretically and experimentally, to another approach to verify its efficiency. Our solution achieves O(P^2) space complexity and runs in O(k log P) time, where k is the number of pages and P the number of categories of webpages.
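The category-level recommendation idea can be sketched with a P × P co-occurrence count between page categories, which is where an O(P^2) space bound naturally arises. This is a simplified illustration, not the thesis's splay-tree/heap implementation, and the sessions below are invented.

```python
# Sketch of category-based recommendation: count how often pairs of page
# categories are visited together, then recommend the categories most
# often co-visited with the one the current user favours. Illustrative
# stand-in for the thesis's splay-tree + binary-heap technique.

import heapq
from collections import defaultdict
from itertools import combinations

cooccur = defaultdict(int)

def record_session(categories):
    """Update pairwise co-occurrence counts from one user's session."""
    for a, b in combinations(sorted(set(categories)), 2):
        cooccur[(a, b)] += 1

def recommend(category, k=2):
    """Top-k categories co-visited with `category` by previous users."""
    scores = {}
    for (a, b), n in cooccur.items():
        if a == category:
            scores[b] = scores.get(b, 0) + n
        elif b == category:
            scores[a] = scores.get(a, 0) + n
    return heapq.nlargest(k, scores, key=scores.get)

record_session(["sports", "news", "weather"])
record_session(["sports", "news"])
record_session(["sports", "finance"])
print(recommend("sports", k=2))  # 'news' ranks first (co-visited twice)
```

Replacing the linear scan with a heap keyed on counts is what brings the per-query cost down toward the logarithmic bound quoted above.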
Extending this algorithm, we propose an algorithm which efficiently detects bursts of visits to webpages. As an increasing number of web sites consist of multiple pages, it is more difficult for visitors to rapidly reach their target. This results in an urgent need for intelligent systems that effectively support users' navigation to high-demand web content. In many cases, due to specific conditions, web pages become very popular and receive an excessively large number of hits. There is therefore a high probability that these web pages will be of interest to the majority of visitors at a given time. The data structure used for the purposes of the recommendation algorithm is the splay tree. We describe the architecture of the technique, analyze its time and space complexity and show its performance.
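The burst notion, defined by both a visit count and the time interval in which the visits occur, can be sketched as follows. The thesis embeds timestamps in a modified splay tree; a per-page deque conveys the same idea with less machinery, and the window and threshold below are illustrative.

```python
# Sketch of timestamp-based burst detection: a page is "bursting" when it
# receives at least `min_hits` visits within `window` seconds. Stand-in
# for the thesis's timestamp-augmented splay tree; parameters are assumed.

from collections import defaultdict, deque

class BurstDetector:
    def __init__(self, window=60.0, min_hits=3):
        self.window, self.min_hits = window, min_hits
        self.hits = defaultdict(deque)       # page -> recent visit times
        self.last_burst = None               # (time, page) of latest burst

    def visit(self, page, t):
        q = self.hits[page]
        q.append(t)
        while q and q[0] < t - self.window:  # drop visits outside the window
            q.popleft()
        if len(q) >= self.min_hits:
            self.last_burst = (t, page)

    def most_recent_burst(self):
        """Page with the most recent burst, mirroring the splay-tree query."""
        return self.last_burst[1] if self.last_burst else None

d = BurstDetector(window=60, min_hits=3)
for t, p in [(0, "a"), (10, "a"), (20, "a"), (30, "b"), (200, "b"), (210, "b")]:
    d.visit(p, t)
print(d.most_recent_burst())  # 'a' -- page b never gets 3 hits in one window
```

The same modeling carries over to the traffic-congestion example: nodes of a transport graph take the place of pages, and congestion events take the place of visits.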
The dissertation's last chapter elaborates on how to use clustering to evaluate a software system's maintainability according to the ISO/IEC-9126 quality standard. More specifically, it proposes a methodology that combines clustering and multicriteria decision-aid techniques for knowledge acquisition, integrating groups of data from source code with the expertise of a software system's evaluators. A process for extracting elements from source code and an Analytic Hierarchy Process for assigning weights to these data are provided; the k-Attractors clustering algorithm is then applied to these data in order to produce system overviews and deductions. The methodology is evaluated on Apache Geronimo, a large open-source application server; results are discussed and conclusions are presented together with directions for future work.
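The weighting step can be sketched with the standard geometric-mean approximation of AHP's principal-eigenvector weights. The pairwise comparison matrix below is purely illustrative, not taken from the thesis.

```python
# Sketch of the Analytic Hierarchy Process step used to weight source-code
# metrics. Given a pairwise comparison matrix (a[i][j] = how much more
# important metric i is than j), the geometric mean of each row,
# normalized, approximates the principal-eigenvector weights.
# The example matrix and metric names are assumptions.

import math

def ahp_weights(matrix):
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical comparisons for three metrics: complexity is judged 3x as
# important as size and 5x as important as comment ratio, etc.
m = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
w = ahp_weights(m)
print([round(x, 3) for x in w])  # weights sum to 1, ordered by importance
```

These weights would then scale the extracted metric values before the clustering step groups code segments that need attention.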
|
15 |
The development of a sports statistics web application : Sports Analytics and Data Models for a sports data web application. Alvarsson, Andreas, January 2017 (has links)
Sports and technology have always worked together to deliver better and more specific sports statistics. Both the collection of sports game data and the ability to generate valuable statistics from it are growing. This thesis investigates the development of a sports statistics application that collects sports game data, structures the data according to suitable data models and presents statistics in an appropriate way. The application was to be a web application built with modern web technologies, which led to a comparison of different software stack solutions and web frameworks. A theoretical study of sports analytics was also conducted, providing a foundation for how sports data could be stored and how valuable statistics could be generated from it. The resulting prototype of the sports statistics application was evaluated: interviews with people working in sports contexts found it user-friendly and functional, and confirmed that it fulfilled its purpose of generating valuable statistics during sports games.
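The abstract does not detail the data models; a hypothetical sketch shows the general shape such a model might take, with statistics derived from raw game events rather than stored precomputed. All class and field names below are illustrative assumptions.

```python
# Hypothetical data model sketch (not the thesis's actual schema): store
# raw game events and derive statistics on demand, so new statistics can
# be added without re-collecting data.

from dataclasses import dataclass, field

@dataclass
class Event:
    minute: int
    kind: str        # e.g. "goal", "shot", "foul" -- assumed event kinds
    player: str

@dataclass
class Game:
    home: str
    away: str
    events: list = field(default_factory=list)

    def stat(self, kind, player=None):
        """Count events of a kind, optionally restricted to one player."""
        return sum(1 for e in self.events
                   if e.kind == kind and (player is None or e.player == player))

g = Game("Lions", "Hawks")
g.events += [Event(12, "goal", "Ada"), Event(40, "shot", "Ada"),
             Event(77, "goal", "Max")]
print(g.stat("goal"))         # 2
print(g.stat("goal", "Ada"))  # 1
```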
|
16 |
AJAX förhämtning baserad på besöksinformation / AJAX prefetching based on visitor information. Dervisevic, Denis, January 2012 (has links)
Det finns idag fortfarande ett behov av att minska den uppfattade responstiden för användare. Som ett sätt att förbättra prestandan föreslås förhämtning. Förhämtning kan ytterligare förbättra prestandan av AJAX. Med prestanda menas primärt responstiden men bandbredden är också en viktig faktor. Problemet handlar om hur prestandan påverkas av AJAX förhämtning baserad på historiska hints jämfört mot vanlig AJAX. Metoden är experiment och prestandan av 2 versioner av en webbplats för kursinformation jämförs. Under genomförandet byggdes siterna, förhämtningsversionen har en initial förhämtning baserad på historik samt pågående förhämtning baserad på sökningar och interaktion. Tester visar på att responstiden förbättras med förhämtningen, mellan 63 % och 29 % beroende på träffbilden av förhämtningen under en session. Bandbredden ökade dock som ett resultat mellan 61 % och 33 % på de olika sessionerna. / There is still a need today to reduce the perceived response time for users. Prefetching is proposed as a way to improve performance, and it can further improve the performance of AJAX. Performance here refers primarily to response time, but bandwidth is also an important factor. The problem concerns how performance is affected by AJAX prefetching based on historical hints, compared with plain AJAX. The method is an experiment comparing the performance of two versions of a course-information website. Both sites were built during the project; the prefetching version performs an initial prefetch based on history, as well as ongoing prefetching based on searches and interaction. Tests show that prefetching improves response times by between 63% and 29%, depending on the hit rate of the prefetching during a session. Bandwidth usage, however, increased as a result, by between 61% and 33% across the sessions.
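The "historical hints" idea can be sketched as follows: from logged page transitions, prefetch the most likely next resource only when its empirical transition probability clears a threshold, which limits the bandwidth cost the experiment observed. The counts, page names and threshold are illustrative, not the thesis's.

```python
# Sketch of hint-based prefetching: decide what (if anything) to prefetch
# from a page using historical transition frequencies. Threshold and data
# are illustrative assumptions.

from collections import defaultdict

transitions = defaultdict(lambda: defaultdict(int))

def log_transition(src, dst):
    transitions[src][dst] += 1

def prefetch_candidate(src, threshold=0.5):
    """Next page to prefetch from `src`, or None if no page is likely enough."""
    nxt = transitions[src]
    total = sum(nxt.values())
    if not total:
        return None
    page, hits = max(nxt.items(), key=lambda kv: kv[1])
    return page if hits / total >= threshold else None

for dst in ["course/a", "course/a", "course/a", "course/b"]:
    log_transition("index", dst)
print(prefetch_candidate("index"))       # 'course/a' (3/4 = 0.75)
print(prefetch_candidate("index", 0.9))  # None -- not confident enough
```

Raising the threshold trades response-time gains for bandwidth, mirroring the trade-off the measurements show.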
|
17 |
Využitie technológie Flash v e-commerce aplikáciách / Utilization of Flash technology in e-commerce applications. Nagy, František, January 2008 (has links)
The growing intensity of competition between traders makes competitive advantage harder to acquire. Businesses and companies therefore try to gain this advantage from e-commerce solutions, which serve as both a presentation tool and a channel of communication with customers, reduce transaction costs and facilitate the marketing of products and services. As more firms adopt such solutions, each needs to make its own as attractive and technologically advanced as possible, and so firms employ increasingly sophisticated technologies to this end. This work is dedicated to one of these technologies: Adobe Flash. It aims to familiarize the reader with the technology, compare it with competing technologies and demonstrate its use in the commercial sphere on the World Wide Web. It also points out possible problems in application development and possible solutions to these issues. Using a realistic example, it demonstrates the continuing importance of this technology despite growing competition. Readers who intend to create an e-commerce application should, after reading this work, have enough information to do so (the work does not cover ActionScript programming in detail), or to choose a more suitable technology for their application. The work also includes several recommendations for optimizing Flash elements and applications to provide the most user-friendly experience.
|
18 |
Behavioral Monitoring on Smartphones for Intrusion Detection in Web Systems : A Study of Limitations and Applications of Touchscreen Biometrics / Bevakning av användarbeteende på mobila enheter för identifiering av intrång i webbsystem. Lövmar, Anton, January 2015 (has links)
Touchscreen biometrics is the process of measuring user behaviour on a touchscreen and using this information for authentication. This thesis uses SVM and k-NN classifiers to test the applicability of touchscreen biometrics in a web environment for smartphones. Two new concepts are introduced: model training using the Local Outlier Factor (LOF), and building custom models for touch behaviour in the context of individual UI components instead of the whole screen. The lowest error rate achieved was 5.6% using the k-NN classifier, with a standard deviation of 2.29%. No real benefit of using the LOF algorithm in the way presented in this thesis could be found. The method of using contextual models is found to yield better performance than looking at the entire screen. Lastly, ideas for using touchscreen biometrics as an intrusion detection system are presented. / Pekskärmsbiometri innebär att mäta beteendet hos en användare som använder en pekskärm och känna igen användaren baserat på denna information. I detta examensarbete används SVM- och k-NN-klassificerare för att testa tillämpligheten av denna typ av biometri i en webbmiljö för smarttelefoner. Två nya koncept introduceras: modellträning med "Local Outlier Factor" samt att bygga modeller för användarinteraktioner med enskilda gränssnittselement istället för skärmen i sin helhet. De bästa resultaten för klassificerarna hade en felfrekvens på 5,6 % med en standardavvikelse på 2,29 %. Ingen fördel med användning av LOF för träning framför slumpmässig träning kunde hittas. Däremot förbättrades resultaten genom att använda kontextuella modeller. Avslutningsvis presenteras idéer för hur ett sådant system kan användas för att upptäcka intrång i webbsystem.
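A k-NN acceptance decision of the kind the thesis evaluates can be sketched as follows: a new touch feature vector is accepted as the genuine user if its mean distance to the k nearest enrolled samples falls under a threshold. The feature layout (duration, pressure, swipe length) and the threshold are illustrative assumptions, not the thesis's actual feature set.

```python
# Sketch of a k-NN decision for touch biometrics. Feature vectors and the
# acceptance threshold are illustrative; the thesis's classifiers and
# per-UI-component models are more elaborate.

import math

def accept(sample, enrolled, k=3, threshold=1.0):
    """True if `sample` is close enough to the user's enrolled touches."""
    dists = sorted(math.dist(sample, e) for e in enrolled)
    return sum(dists[:k]) / min(k, len(dists)) < threshold

# Assumed features per touch: (duration s, pressure, swipe length mm)
enrolled = [(0.30, 0.52, 11.0), (0.28, 0.47, 10.5), (0.33, 0.50, 11.2),
            (0.29, 0.55, 10.8)]
print(accept((0.31, 0.51, 11.0), enrolled))  # True  (looks like the user)
print(accept((0.90, 0.10, 25.0), enrolled))  # False (an outlier touch)
```

Training one such model per UI component, rather than one for the whole screen, is the "contextual models" idea the abstract reports as the more effective variant.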
|
19 |
Using Semantic Web Technologies for Classification Analysis in Social Networks. Opuszko, Marek, January 2011 (has links)
The Semantic Web enables people and computers to interact and exchange information. Different machine learning applications have been designed on the basis of Semantic Web technologies. Particularly noteworthy is the possibility of creating complex metadata descriptions for any problem domain, based on predefined ontologies. In this paper we evaluate the use of a semantic similarity measure, based on predefined ontologies, as input to a classification analysis. A link prediction between actors of a social network is performed, which could serve as a recommendation system. We measure the prediction performance using both ontology-based metadata modeling and feature-vector modeling. The findings demonstrate that prediction accuracy based on ontology-based metadata is comparable to traditional approaches, and show that data mining using ontology-based metadata is a very promising approach.
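One common ontology-based similarity usable as a classifier input is Wu-Palmer similarity over a concept taxonomy, sim(a, b) = 2·depth(lcs) / (depth(a) + depth(b)). The paper does not name its measure, so this is an illustrative stand-in, and the toy taxonomy is invented.

```python
# Sketch of an ontology-based similarity as a link-prediction feature:
# Wu-Palmer similarity over a small, hypothetical concept taxonomy.
# The taxonomy and the choice of measure are assumptions.

parent = {                       # child -> parent; None marks the root
    "person": None, "artist": "person", "scientist": "person",
    "painter": "artist", "musician": "artist", "physicist": "scientist",
}

def ancestors(c):
    path = []
    while c is not None:
        path.append(c)
        c = parent[c]
    return path                  # [c, ..., root]

def depth(c):
    return len(ancestors(c))     # the root has depth 1

def wu_palmer(a, b):
    anc_b = set(ancestors(b))
    lcs = next(x for x in ancestors(a) if x in anc_b)  # lowest common subsumer
    return 2.0 * depth(lcs) / (depth(a) + depth(b))

print(wu_palmer("painter", "musician"))   # ~0.667 (share 'artist')
print(wu_palmer("painter", "physicist"))  # ~0.333 (share only 'person')
```

Pairwise similarities like these, computed between two actors' metadata concepts, form the feature values a link-prediction classifier consumes.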
|
20 |
Utveckling av användargränssnitt med användbarhet i fokus / Development of user interfaces with usability in focus. Pasic, Moris, January 2016 (has links)
Vi lever i spännande tider, där vi har tillgång till olika användargränssnitt som hjälper oss att kommunicera med andra människor i realtid, oavsett var i världen de befinner sig. Ricoh är ett globalt IT-företag som har utvecklat ett kompakt videokonferenssystem för dessa ändamål som heter ”P3500M”. Utveckling av mjukvara för denna typ av teknologi kan medföra olika tekniska utmaningar. Samtidigt håller organisationer viktiga möten via videokonferens och ställer ofta höga krav på kvaliteten. Att skapa ett användbart gränssnitt som beaktar alla dessa aspekter kan bli en utmanande uppgift. Denna studie syftar till att utveckla ett nytt konceptgränssnitt som effektiviserar utveckling och användning av videokonferenssystem, med P3500M som utgångspunkt. Genom att utnyttja framväxande webbaserade teknologier och riktlinjer från tidigare studier inom produktutveckling med användbarhet i fokus har denna studie resulterat i en designlösning som heter ”Cloud Vision”. Studien föreslår ett nytt sätt att utveckla användbara gränssnitt för videokonferenssystem, genom utveckling av en central webbapplikation som tillhandahåller gränssnittet. Med gränssnitt som kan appliceras på olika videokonferenssystem som en separat modul, oberoende av plattform, kan det bli lättare att underhålla utvecklingen och hålla fokus på användbarhetsperspektiven. / We live in exciting times, with access to user interfaces that help us communicate with other people in real time, regardless of where in the world they are. Ricoh is a global IT company that has developed a compact videoconferencing system for these purposes, called ”P3500M”. Developing software for this type of technology can pose various technical challenges. At the same time, organizations hold important meetings through videoconferencing and often make high demands on quality. Creating a usable interface that takes all these aspects into account can be a challenging task. This study aims to develop a new concept interface that streamlines the development and use of videoconferencing systems, with P3500M as a starting point. By making use of emerging web technologies and guidelines from previous studies in product development with usability in mind, the study results in a new design called ”Cloud Vision”. The study proposes a new way to develop usable interfaces for videoconferencing systems, through a central web application that provides the interface. With interfaces that can be applied to various videoconferencing systems as a separate module, regardless of platform, it can become easier to maintain development and keep the focus on usability perspectives.
|