681 |
A prototype to discover and penetrate access restricted web pages in an Extranet
Van Jaarsveld, Rudi, 13 October 2014 (has links)
M.Sc. (Information Technology) / The internet grew exponentially over the last decade. With more information available on the web, search engines, with the help of web crawlers (also known as web bots), gather information on the web and index billions of web pages. This indexed information helps users find relevant information on the internet. An extranet is a subset of the internet: it controls access to a specific resource for a selected audience, and such sites are also referred to as restricted web sites. Various industries use extranets for different purposes and store different types of information on them. Some of this information can be confidential, so it is important that it is adequately secured and not accessible to web bots. Web bots can accidentally stumble onto poorly secured pages in an extranet and add the restricted web pages to their indexed search results. Search engines like Google, which are designed to filter through large amounts of data, can accidentally crawl onto access-restricted web pages if such pages are not secured properly. Researchers have found that it is possible for web crawlers of well-known search engines to access poorly secured web pages in access-restricted web sites. The risk is that not all web bots have good intentions; some have a more malicious intent. These malicious web bots search for vulnerabilities in extranets and use them to access confidential information. The main objective of this dissertation is to develop a prototype web bot called Ferret that crawls through a web site built by a developer. Ferret tries to discover and access restricted web pages that are poorly secured in the extranet and reports the weaknesses. From the information and findings of this research, a best-practice guideline is drafted to help developers ensure that access-restricted web pages are secured and invisible to web bots.
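The Ferret prototype itself is not included in the record above, so the sketch below is only a minimal illustration of the kind of check it describes, under stated assumptions: given a list of URLs that are supposed to be access restricted, request each one without credentials and flag any that is served anyway. The URL list and output format are hypothetical, and a real check would also inspect the response body, since a 200 status may still be a login page.

```typescript
// Minimal sketch, not the Ferret prototype: request supposedly restricted URLs
// without credentials and report any that are served anyway. The URL list is
// hypothetical; requires Node.js 18+ for the built-in fetch.

const restrictedUrls: string[] = [
  "https://extranet.example.com/reports/confidential",
  "https://extranet.example.com/admin/users",
];

async function probe(url: string): Promise<void> {
  try {
    // Follow redirects: a well-secured page should end in a 401/403 response
    // or a login page, never the protected content itself.
    const res = await fetch(url, { redirect: "follow" });
    if (res.ok) {
      console.log(`possibly exposed: ${url} (HTTP ${res.status})`);
    } else {
      console.log(`blocked:          ${url} (HTTP ${res.status})`);
    }
  } catch (err) {
    console.log(`request failed:   ${url} (${(err as Error).message})`);
  }
}

async function main(): Promise<void> {
  // A 200 status may still be a login form, so a real crawler would also
  // inspect the response body before reporting a weakness.
  for (const url of restrictedUrls) {
    await probe(url);
  }
}

main();
```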
|
682 |
Mejoramiento de una metodología para la identificación de website keyobject mediante la aplicación de tecnologías eye tracking, análisis de dilatación pupilar y algoritmos de web mining
Martínez Azocar, Gustavo Adolfo, January 2013 (has links)
Ingeniero Civil Industrial / The rapid growth of the internet has driven a sustained increase in websites for all kinds of companies, organisations and individuals, resulting in an immensely large supply. These sites are increasingly becoming an important channel both for direct communication with customers and for sales, so it becomes necessary to devise strategies that attract more users to a site and keep current users coming back. This raises the question of what kind of information is useful to the end user and how that information can be identified.
Previous work has addressed this problem by applying web mining techniques to the content, structure and usability of a website in order to find patterns that generate information and knowledge from these data. These, in turn, would support better decisions about the structure and content of websites.
However, these techniques combined objective data (web logs) with subjective data (mainly surveys and focus groups), which exhibit high variability both within and between individuals. As a result, the subsequent analysis of the data may contain errors, leading to worse decisions.
To address this to some extent, this thesis project developed web mining algorithms that incorporate visual exploration analysis and neurodata. Since both are objective data sources, part of the variability in the subsequent results is removed, with a corresponding improvement in the decisions to be made.
The main results of this project are web mining algorithms and user behaviour models that incorporate visual exploration analysis and data obtained through neuroscience techniques. A list of website keyobjects found on the test page for this project is also included.
A general review of the main topics on which the project is based is also included: the web and the internet, the KDD process, web mining, eye tracking systems and website keyobjects. The scope of the thesis project, both technical and research-related, is also specified.
It is concluded that the work was successful, even though the algorithms produced results similar to those of the previous methodology. Nevertheless, a new avenue for site analysis is opened, given the relationships found between pupillary behaviour and site analysis. Some considerations and recommendations for continuing and improving this work are included.
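The thesis's actual web mining algorithms are not given in the abstract above; the following is only a hypothetical illustration of the general idea of combining eye-tracking fixations with pupil-dilation readings into a ranking of candidate website keyobjects. The data fields, the normalisation and the mixing weight are all assumptions, not the methodology described in the thesis.

```typescript
// Hypothetical illustration, not the thesis's algorithm: rank candidate website
// objects by combining total fixation time with mean pupil dilation.

interface GazeSample {
  objectId: string;        // page object the fixation falls on (assumed identifier)
  fixationMs: number;      // fixation duration in milliseconds
  pupilDiameterMm: number; // pupil diameter measured during the fixation
}

function rankKeyobjects(samples: GazeSample[], dilationWeight = 0.5): [string, number][] {
  const byObject = new Map<string, { totalMs: number; dilationSum: number; n: number }>();
  for (const s of samples) {
    const acc = byObject.get(s.objectId) ?? { totalMs: 0, dilationSum: 0, n: 0 };
    acc.totalMs += s.fixationMs;
    acc.dilationSum += s.pupilDiameterMm;
    acc.n += 1;
    byObject.set(s.objectId, acc);
  }
  // Normalise both signals to [0, 1] before mixing them with an assumed weight.
  const entries = [...byObject.entries()];
  const maxMs = Math.max(...entries.map(([, a]) => a.totalMs));
  const maxDilation = Math.max(...entries.map(([, a]) => a.dilationSum / a.n));
  return entries
    .map(([id, a]): [string, number] => {
      const attention = a.totalMs / maxMs;                 // how long the object was looked at
      const arousal = (a.dilationSum / a.n) / maxDilation; // proxy for pupil response
      return [id, (1 - dilationWeight) * attention + dilationWeight * arousal];
    })
    .sort((a, b) => b[1] - a[1]);
}

// Made-up samples, only to show the expected input/output shape:
console.log(rankKeyobjects([
  { objectId: "main-banner", fixationMs: 1200, pupilDiameterMm: 3.9 },
  { objectId: "price-table", fixationMs: 2600, pupilDiameterMm: 4.4 },
  { objectId: "footer-links", fixationMs: 300, pupilDiameterMm: 3.5 },
]));
```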
|
683 |
Analysis of low-level interaction events as a proxy for familiarity
Apaolaza, Aitor, January 2016 (has links)
This thesis provides insight into long-term factors of user behaviour with a Web site or application, using low-level interaction events (such as mouse movements and scroll actions) as a proxy. Current laboratory studies employ scenarios where confounding variables can be controlled. Unfortunately, these scenarios are not naturalistic or ecologically valid. Existing remote alternatives fail to provide either the required granularity or the necessary naturalistic aspect. Without appropriate longitudinal approaches, the effects of long-term factors can only be analysed via cross-sectional studies, ignoring within-subject variability. Using a naturalistic remote interaction data capturing tool represents a key improvement and supports the analysis of longitudinal user interaction in the wild. Naturalistic low-level fine-grained Web interaction data (from URLs visited to keystrokes and mouse movements) has been captured in the wild from publicly available working live sites for over 16 months. Different combinations of low-level indicators are characterised as micro behaviours to enable the analysis of interaction captured over extended periods of time. The extraction of micro behaviours provides an extensible technique to obtain meaning from long-term low-level interaction data. Eighteen thousand recurring users have been extracted and 53 million events have been analysed. A relation between users' interaction time with the site and their degree of familiarity has been found via a remote survey. This relation enables the use of users' active time with the site as a proxy for their degree of familiarity. Analysing the evolution of extracted micro behaviours enables an understanding of how users' interaction behaviour changes over time. The results demonstrate that monitoring micro behaviours offers a simple and easily extensible post hoc approach to understanding how Web-based behaviour changes over time. The analysis has identified key aspects of micro behaviours that are strongly correlated with users' degree of familiarity. In the case of users scrolling continuously for short periods of time, it has been found that scroll speed increases as users become more familiar with the Web site. Users have also been found to spend more time on the Web site without interacting with the mouse. Understanding long-term interaction factors such as familiarity supports the design of interfaces that accommodate the evolution of users' interaction. Combining the key aspects found enables a prediction of a user's degree of familiarity without the need for continuous observation. The presented approach also allows for the validation of hypotheses about longitudinal user interaction behaviour factors.
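As a hedged illustration of one micro behaviour named above (continuous scrolling for short periods of time), the sketch below groups timestamped scroll events into bursts separated by pauses and computes the average speed of each burst. The event fields and the pause threshold are assumptions rather than the thesis's definitions.

```typescript
// Hypothetical sketch of one micro behaviour: group scroll events into short
// bursts separated by pauses and compute each burst's average scroll speed.
// The event fields and the 500 ms pause threshold are assumptions.

interface ScrollEvent {
  timestampMs: number; // when the event was captured
  scrollY: number;     // vertical scroll offset in pixels
}

interface ScrollBurst {
  durationMs: number;
  distancePx: number;
  speedPxPerSec: number;
}

function extractScrollBursts(events: ScrollEvent[], pauseMs = 500): ScrollBurst[] {
  const bursts: ScrollBurst[] = [];
  let start = 0;
  const flush = (from: number, to: number) => {
    if (to - from < 1) return; // need at least two events to measure a burst
    const durationMs = events[to].timestampMs - events[from].timestampMs;
    const distancePx = Math.abs(events[to].scrollY - events[from].scrollY);
    if (durationMs > 0) {
      bursts.push({ durationMs, distancePx, speedPxPerSec: (distancePx / durationMs) * 1000 });
    }
  };
  for (let i = 1; i < events.length; i++) {
    // A gap longer than the threshold closes the current burst.
    if (events[i].timestampMs - events[i - 1].timestampMs > pauseMs) {
      flush(start, i - 1);
      start = i;
    }
  }
  flush(start, events.length - 1);
  return bursts;
}

// Tracking how the mean burst speed evolves per user over weeks would be one
// way to relate this micro behaviour to growing familiarity.
```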
|
684 |
Identifying the benefits of social media within large financial institutions in South Africa
Van der Ross, Robert, January 2015 (has links)
Magister Commercii (Information Management) - MCom(IM) / In recent years, the information systems / information technology industry has been one of the fastest-growing industries. Existing technologies are regularly upgraded and new technologies are introduced within the industry. For these reasons, business institutions have to stay abreast of market trends and understand what the market is doing. Since the inception of social media, a relatively new phenomenon within the industry, institutions have had to get on board with these technologies simply because of what their customers are doing. The adoption of social media applications within business has proved valuable in the sense that institutions are capitalising on what their customers are really saying. Social media applications take many forms, and in this particular paper the benefits of social media within large financial institutions are analysed. The main aim is to identify the benefits of social media platforms and how large financial institutions are benefiting from these revolutionary communication mediums. In order to fully conceptualise the nature of this research study, it takes the form of a literature review at first, followed by empirical field research. Thereafter the research study uses a case study methodology in which interviews and survey questionnaires were used to make an in-depth analysis of the benefits related to the financial companies. The outcomes of the study showed that there are many benefits of social media within financial institutions. The findings suggest that social media has the ability to enhance the brand, increase customer satisfaction and boost business services through innovation. Apart from adding to the existing body of knowledge, this study could create awareness of the benefits (if any) for the financial industry and other industries as well, and could therefore be advantageous. In essence, the study outcome could contribute to the improvement of current businesses.
|
685 |
A grounded theory analysis of networking capabilities in virtual organizing
Koekemoer, Johannes Frederik, 10 November 2008 (has links)
The use of the Internet by web-based organizations impacts all aspects of their business activities. The continuous evolution of e-commerce technologies enables web-based businesses (consisting of virtual supply chain partners) to integrate their manufacturing operations and to gain competitive advantage through entire virtual supply chains. Although the interplay of e-commerce and virtual supply chain cooperation is not clear when considering supply chain forecasting, planning, scheduling, execution and after-service, the potential for virtual coordination of business activities by means of e-commerce technologies is growing in importance. In this regard, networking capabilities that enable virtual organizing activities in the virtual value chain network are of particular importance to web-based organizations. The research investigated this using a grounded theory approach. The grounded theory analysis consisted of three phases. First, following a comprehensive review of the relevant literature, a set of particularly relevant articles was identified to provide the basic data from which to develop a first, preliminary framework or theory. This framework was subsequently refined to produce a concluding framework, using data collected during interviews with representatives of six different web-based businesses. Finally, the concluding framework or theory was validated by applying it to a particular case. The concluding framework contains twelve networking capabilities, adding three to the nine identified in the preliminary framework. The conceptual framework, with its theoretical description of the relationships between the identified networking capabilities, clarifies how networking capabilities are used in virtual organizing within a virtual value network of organizations. An interpretation of the concluding framework, based on Actor-Network Theory, shows how the entrepreneur can leverage the inter-relationships between the networking capabilities to enable more effective and efficient virtual organizing. In particular, it shows how the entrepreneur can utilize knowledge and skills related to the identified networking capabilities to build and maintain a stable and eventually institutionalized network of partners. Finally, using the results of this interpretation of the grounded theory, the entrepreneurial process was defined, describing the role of information technology as well as the role of the entrepreneur in establishing and maintaining the virtual value network. / Thesis (PhD)--University of Pretoria, 2008. / Informatics / unrestricted
|
686 |
Semantic Analysis of Web Pages for Task-based Personal Web Interactions
Manjunath, Geetha, January 2013 (has links) (PDF)
Mobile widgets now form a new paradigm of the simplified web. Probably the best experience of the Web is when a user has a widget for every frequently executed task and can execute it anytime, anywhere, on any device. However, the current method of programmatically creating personally relevant mobile widgets for every user does not scale. Creating these mobile web widgets requires application programming as well as knowledge of web-related protocols. Furthermore, these mobile widgets are limited to smart phones with data connectivity, and such smart phones form only about 15% of the mobile phones in India. How do we make the web accessible on devices that most people can afford? How does one create simple, relevant tasks for the numerous diverse needs of every person? In this thesis, we attempt to address these issues and propose a new method of web simplification that enables an end user to create simple single-click widgets for a complex personal task, without any programming. The proposed solution enables even low-end phones to access personal web tasks over SMS and voice. We propose a system that enables end users to create personal widgets via programming-by-browsing.
A new concept called Tasklets, representing a user’s personal interaction, and a notion of programming over websites using a Web Virtual Machine are presented. Ensuring correct execution of these end-user widgets posed interesting problems in web data mining and required us to investigate new methods to characterize and semantically model browser-based interactions. In particular, an instruction set for programming over web sites, new domain-specific similarity measures using ontologies, algorithms for frequent-pattern mining of web interactions, and change detection with a proof of its NP-completeness are presented. A quantitative metric to measure the interaction complexity of web browsing and a method of classifying relational data using semantics hidden in the schema are introduced as well. This new web architecture, enabling multi-device access to a user's personal tasks over low-end phones, was piloted with real users as a solution named SiteOnMobile and has received a very positive response.
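The abstract does not spell out the Tasklet instruction set, so the following is only an assumed, simplified representation of the idea: a Tasklet as a recorded sequence of browsing steps that a server-side runner could replay on behalf of a low-end phone and answer with an SMS-sized reply. The step vocabulary and the example task are hypothetical.

```typescript
// Assumed, simplified representation of a Tasklet, not the instruction set from
// the thesis: a recorded sequence of browsing steps plus what to extract at the
// end. A server-side runner could replay it and reply by SMS or voice.

type TaskletStep =
  | { kind: "open"; url: string }
  | { kind: "fill"; field: string; valueFrom: string } // value taken from the user's message
  | { kind: "click"; element: string }
  | { kind: "extract"; element: string; label: string };

interface Tasklet {
  name: string;
  steps: TaskletStep[];
}

// Hypothetical single-click task captured once by browsing ("programming-by-browsing").
const checkTickets: Tasklet = {
  name: "ticket-availability",
  steps: [
    { kind: "open", url: "https://railways.example.org/search" },
    { kind: "fill", field: "from", valueFrom: "origin" },
    { kind: "fill", field: "to", valueFrom: "destination" },
    { kind: "click", element: "#search-button" },
    { kind: "extract", element: ".seats-left", label: "Seats left" },
  ],
};

// Only the extracted labels would be sent back to the phone, which is what
// makes access over SMS or voice possible on low-end devices.
console.log(checkTickets.steps.map((s) => s.kind).join(" -> "));
```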
|
687 |
M-crawler: Crawling Rich Internet Applications Using Menu Meta-model
Choudhary, Suryakant, January 2012 (has links)
Web applications have come a long way, both in terms of adoption to provide information and services and in terms of the technologies used to develop them. With the emergence of richer and more advanced technologies such as Ajax, web applications have become more interactive, responsive and user friendly. These applications, often called Rich Internet Applications (RIAs), changed traditional web applications in two primary ways: dynamic manipulation of client-side state and asynchronous communication with the server. At the same time, such techniques also introduce new challenges. Among these challenges, an important one is the difficulty of automatically crawling these new applications. Crawling is not only important for indexing content but also critical to web application assessment, such as testing for security vulnerabilities or accessibility. Traditional crawlers are no longer sufficient for these newer technologies, and crawling support for RIAs is either non-existent or far from perfect. There is a need for an efficient crawler for web applications developed using these new technologies. Further, as more and more enterprises use these new technologies to provide their services, the requirement for a better crawler becomes inevitable. This thesis studies the problems associated with crawling RIAs. Crawling RIAs is fundamentally more difficult than crawling traditional multi-page web applications. The thesis also presents an efficient RIA crawling strategy and compares it with existing methods.
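The menu meta-model strategy itself is not detailed in the abstract, so the sketch below only shows the general shape of RIA crawling that such strategies optimise: client-side states as graph nodes, user events as edges, and a breadth-first exploration of unseen states. The in-memory transition map stands in for a real browser/DOM driver and is entirely hypothetical.

```typescript
// General shape of RIA crawling, not the M-crawler strategy itself: client-side
// states are nodes, user events are edges, and the crawler explores states it
// has not seen before. A real crawler drives a browser and diffs the DOM; this
// in-memory transition map is a hypothetical stand-in.

type StateId = string;

// event name -> state reached by firing that event from the given state
const app: Record<StateId, Record<string, StateId>> = {
  home: { openMenu: "menu", search: "results" },
  menu: { aboutLink: "about", closeMenu: "home" },
  results: { firstItem: "detail", back: "home" },
  about: { closeMenu: "home" },
  detail: { back: "results" },
};

function crawl(start: StateId): StateId[] {
  const visited = new Set<StateId>([start]);
  const queue: StateId[] = [start];
  while (queue.length > 0) {
    const state = queue.shift()!;
    for (const [event, next] of Object.entries(app[state] ?? {})) {
      if (!visited.has(next)) {
        console.log(`fire "${event}" in "${state}" -> discover "${next}"`);
        visited.add(next);
        queue.push(next);
      }
    }
  }
  return [...visited];
}

console.log("states discovered:", crawl("home"));
```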
|
688 |
Využití serveru www.o2extra.cz k propojení marketingových a komunikačních aktivit společnosti Telefónica O2 Czech Republic / Server www.o2extra.cz and its usage as a tool connecting the communication and sponsorship activities of Telefónica O2 Czech Republic, a.s.
Havlena, Lukáš, January 2008 (has links)
This thesis focuses on evaluating the effectiveness of the server www.o2extra.cz in the context of today's Internet phenomenon, Web 2.0. O2 Extra is a community website that serves as a home for the company's sponsorship and loyalty programme. Users can do a variety of things: read news about sponsorship projects, take part in contests, discuss with others and view photos. The theoretical part introduces Web 2.0 to the reader, with Google, Myspace, Facebook and Last.fm as examples; their functions and goals are also described, as it is crucial to understand how community servers work. A section is also devoted to sponsorship activities, since O2 Extra mainly serves as a hub for them. The main part of the work deals with O2 Extra itself: its functions and visual design are described and presented, followed by the effectiveness evaluation. For this purpose, Google Analytics was used as the evaluation tool. Conclusions and interpretations are given right after the presented statistics, such as visits, bounce rate and pageviews. The work can therefore be used as a manual for the owners of O2 Extra to improve its effectiveness and grow its user base.
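As a hedged aside on the metrics mentioned above (visits, bounce rate, pageviews), the snippet below shows one common way such figures are derived from raw session records; it illustrates the metrics only and is not Google Analytics' own computation.

```typescript
// Illustration of the reported metrics only, not Google Analytics' own
// computation: visits, pageviews and bounce rate from hypothetical sessions.

interface Session {
  visitorId: string;
  pagesViewed: number;
}

function summarise(sessions: Session[]) {
  const visits = sessions.length;
  const pageviews = sessions.reduce((sum, s) => sum + s.pagesViewed, 0);
  const bounces = sessions.filter((s) => s.pagesViewed === 1).length;
  return {
    visits,
    pageviews,
    bounceRate: visits === 0 ? 0 : bounces / visits, // share of single-page visits
  };
}

console.log(summarise([
  { visitorId: "a", pagesViewed: 1 },
  { visitorId: "b", pagesViewed: 5 },
  { visitorId: "c", pagesViewed: 1 },
])); // { visits: 3, pageviews: 7, bounceRate: 0.666... }
```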
|
689 |
Ingénierie des applications Web : réduire la complexité sans diminuer le contrôle / Web applications engineering: reduce the complexity without losing control
Richard-Foy, Julien, 09 December 2014 (has links)
Thanks to information technologies, some tasks and information processes can be automated, saving a significant amount of effort and money. The Web is a platform well suited to hosting such digital tools: they run on web servers that centralise information and coordinate users, and users access them from several kinds of client devices (desktop computer, phone, tablet, etc.) through a web browser, with no installation step. Nevertheless, developing such web applications is challenging. The difficulty mainly comes from the distance between the client and server machines. First, the physical (hardware) distance between these machines requires a network connection between them for the application to work correctly. This raises several issues: how to manage latency in the exchange of information? How to provide a good quality of service even when the network connection is interrupted? How to choose which part of the application runs on the client and which part runs on the server? How to free developers from having to solve these problems without hiding the distributed nature of web applications and risking the loss of the advantages of these architectures? Second, the execution environment differs between clients and servers, producing a software distance. On the client side, the program runs in a web browser whose programming interface (API) provides means of reacting to user actions and updating the displayed document. On the server side, a web server processes client requests according to the HTTP protocol. Some aspects of a web application can be shared between the client and server sides, for example building fragments of web pages, validating form input, navigation, or even some business computations. However, since the client and server APIs differ, how can these aspects be shared while keeping the same execution performance as with the native APIs? Likewise, how can the ability to exploit the specific features of each environment be preserved? This work aims at shortening this distance, both in software and in hardware terms, while preserving the ability to take advantage of it, that is, while still giving developers just as much control.
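The thesis's own solution is not reproduced here; the sketch below merely illustrates the sharing problem raised above, in a TypeScript setting (one common case where a single language can target both sides, which is not necessarily the approach taken in the thesis): a validation rule written once, free of browser and server APIs, so that each side can wrap it with its native API. All names are hypothetical.

```typescript
// Illustration of the sharing problem discussed above, not the thesis's own
// solution: a validation rule written once, with no dependency on browser or
// server APIs, so either environment can wrap it. All names are hypothetical.

// Shared logic: pure function, usable on both client and server.
function validateEmail(value: string): string | null {
  if (value.trim() === "") return "Email is required";
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)) return "Email looks invalid";
  return null;
}

// Server-side wrapper: reject invalid submissions even if the client was
// bypassed, reusing exactly the same rule.
function handleSignup(body: { email: string }): { status: number; error?: string } {
  const error = validateEmail(body.email);
  return error ? { status: 400, error } : { status: 200 };
}

// Client-side wrapper (browser API), shown as a comment since it needs a DOM:
// emailInput.addEventListener("input", () => {
//   errorLabel.textContent = validateEmail(emailInput.value) ?? "";
// });

console.log(handleSignup({ email: "not-an-email" })); // { status: 400, error: "Email looks invalid" }
```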
|
690 |
Problematika obsahového webu / The Issue of Content-Based Website
Sova, Martin, January 2012 (has links)
This thesis deals with content-based websites. It defines the concept using a layered model of how a content-based website functions and analyses its operation from the perspective of systems theory, based on the identified major transformation functions bound to the operation of web content. The reader is acquainted with the model of content distribution on the website and the possibilities of financing its operation. The formulated hypotheses concern the possibility of a return on investment through specific forms of advertising; their validity is then tested on data collected during the operation of specific content sites. The processes involved in creating web content are then analysed further. A practical example follows of selecting and implementing an information system built to support content creation on a particular website: by analysing the operating processes, it describes how appropriate resources are selected and deployed. The goal of this thesis is to help answer whether the operation of a content-based website can be financed by the placement of advertising elements, to identify which processes are involved in running a content-creation site, and to show how to select and implement an information system to support them.
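As a purely illustrative, back-of-the-envelope version of the financing question the thesis tests, the snippet below compares estimated advertising revenue (pageviews, slots, CPM, fill rate) with monthly running costs. Every figure is invented for the example and none is taken from the thesis.

```typescript
// Purely illustrative back-of-the-envelope check of the financing question;
// every figure below is invented for the example, none is taken from the thesis.

const monthlyPageviews = 150_000;
const adSlotsPerPage = 2;
const cpmCzk = 40;             // assumed price per 1000 ad impressions
const fillRate = 0.7;          // assumed share of impressions actually sold
const monthlyCostsCzk = 9_000; // assumed hosting, content creation, maintenance

const impressions = monthlyPageviews * adSlotsPerPage * fillRate;
const revenueCzk = (impressions / 1000) * cpmCzk;

console.log(`revenue approx. ${revenueCzk.toFixed(0)} CZK vs costs ${monthlyCostsCzk} CZK`);
console.log(revenueCzk >= monthlyCostsCzk ? "advertising covers the costs" : "advertising does not cover the costs");
// With these assumed figures: revenue approx. 8400 CZK, so costs are not covered.
```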
|