  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Analýza a vylepšování aplikací pro prohlížeče na základě trendů užívání. / Browser extensions analysis based on usage trends and their improvements

Marek, Lukáš January 2013 (has links)
This master thesis deals with browser extensions, their ecosystem, and their analysis. The goal is to describe the extension ecosystem and the online web stores that offer extensions, and to show best practices for analysing and optimising extensions and their assets. The thesis contains a detailed analysis of the online web stores for Google Chrome extensions and Mozilla Firefox add-ons, and draws conclusions that reflect the specific characteristics of these two browsers. The thesis consists of a theoretical and a practical part. The theoretical part describes the browser extension ecosystem and introduces the specific characteristics of the online web stores and of browser extensions. The practical part addresses the objectives set by the thesis: it presents the results of the web store analysis and describes a universal Google Analytics solution that helps developers analyse their extensions. The thesis contributes mainly a detailed description of the browser web stores and the extension ecosystem, best practices and recommendations, and the universal Google Analytics solution for developers.
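The "universal Google Analytics solution" the abstract refers to is not reproduced here. As a rough, hypothetical illustration of the mechanism such extension tracking relies on, the Python sketch below sends a single event hit to the public Google Analytics Measurement Protocol (v1) endpoint; the tracking ID and event names are placeholders, not values from the thesis.

    import uuid
    import requests

    # Hypothetical sketch of a Google Analytics Measurement Protocol (v1) event hit.
    # The tracking ID and event labels are placeholders, not the thesis's actual values.
    GA_ENDPOINT = "https://www.google-analytics.com/collect"

    def send_event(tracking_id: str, category: str, action: str, label: str = "") -> int:
        """Send one event hit and return the HTTP status code."""
        payload = {
            "v": "1",                  # protocol version
            "tid": tracking_id,        # property ID, e.g. "UA-XXXXXXX-Y" (placeholder)
            "cid": str(uuid.uuid4()),  # anonymous client ID
            "t": "event",              # hit type
            "ec": category,            # event category
            "ea": action,              # event action
            "el": label,               # event label (optional)
        }
        response = requests.post(GA_ENDPOINT, data=payload, timeout=5)
        return response.status_code

    if __name__ == "__main__":
        # Example: record that a (hypothetical) extension feature was used.
        print(send_event("UA-00000000-1", "extension", "feature_used", "popup_opened"))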
12

Mitteilungen des URZ 3/2002

Brose, Clauß, Grunewald, Heide, Heik, Richter, Riedel, Schmidt, Wegener, 03 December 2002 (has links) (PDF)
Mitteilungen des URZ 3/2002
13

Mitteilungen des URZ 4/2002

Becher, Fischer, Grunewald, Junghänel, Müller, Richter, Riedel, 17 December 2002 (has links)
Mitteilungen des URZ 4/2002
14

Mitteilungen des URZ 1/2003

Ziegler, Richter, Riedel, Hille, 10 March 2003 (has links)
Mitteilungen des URZ 1/2003
15

Analýza realitního trhu pomocí informací na Internetu / Analysis of real estate market using information on Internet

Bulín, Martin January 2017 (has links)
This thesis was created as a supporting tool for estimating property prices by the comparative method, which requires a database of properties; this database is built by a program. The main aim of the thesis is to create an application in Python that crawls the websites of a chosen real estate portal in the Czech Republic. A further aim is to create a program that searches the collected real estate listings according to chosen criteria, which are used to compare properties and estimate their prices. The theoretical part describes data mining and the architecture of data mining. The practical part designs the application and then implements it. The application crawls the websites and obtains data about real estate offices, their advertisements, and the advertisements' photos. To simplify work with the output file containing the advertisements, I created an application that searches individual advertisements based on the user's specification; it allows the requested data to be retrieved quickly from the output file. These data can then be analysed further to produce interesting statistics and maps.
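The crawling step the abstract describes can be illustrated with a minimal Python sketch using requests and BeautifulSoup. The portal URL, CSS selectors, and field names below are hypothetical placeholders, not the thesis's actual code.

    import requests
    from bs4 import BeautifulSoup

    # Minimal sketch of scraping listing data from a (hypothetical) real estate portal.
    # The URL and CSS selectors are placeholders; a real portal needs its own selectors
    # and should be crawled in line with its terms of service and robots.txt.
    BASE_URL = "https://example-reality.example/listings?page={page}"

    def scrape_page(page: int) -> list[dict]:
        html = requests.get(BASE_URL.format(page=page), timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        listings = []
        for item in soup.select("div.listing"):          # hypothetical listing container
            listings.append({
                "title": item.select_one("h2").get_text(strip=True),
                "price": item.select_one(".price").get_text(strip=True),
                "agency": item.select_one(".agency").get_text(strip=True),
                "photo_urls": [img["src"] for img in item.select("img")],
            })
        return listings

    if __name__ == "__main__":
        # Crawl the first three result pages and print how many listings were found.
        records = [r for page in range(1, 4) for r in scrape_page(page)]
        print(f"collected {len(records)} listings")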
16

Mitteilungen des URZ 3/2007

Ehrig, Matthias, Heide, Gerd, Richter, Frank, Riedel, Wolfgang, Trapp, Holger, Worm, Stefan 07 August 2007 (has links)
Information from the University Computing Centre (URZ): software installed in the teaching pools for the winter semester 2007/08; new standard software for WWW and e-mail; using Mozilla Firefox 1.5.X; using Mozilla Thunderbird 1.5.X; printing at the URZ; the SyS-C project: connecting Chemnitz schools to the Internet; short notes: news about the TUC WLAN, IT news from the university library (UB), pilot operation of Windows Vista and Scientific Linux 5, decommissioning of hardware; software news: new software manuals.
17

Investigating the Impact of Personal, Temporal and Participation Factors on Code Review Quality

Cao, Yaxin 07 1900 (has links)
Code review is an essential element of any mature software development project; it aims at evaluating code contributions submitted by developers. In principle, code review should improve the quality of code changes (patches) before they are committed to the project's master repository. In practice, the execution of this process can still allow bugs to get in unnoticed. In this thesis, we present an empirical study investigating code review in a large open source project. We explore the relationship between reviewers' code inspections and the personal, temporal and participation factors that might affect the quality of such inspections. We first report a quantitative study in which we applied the SZZ algorithm to detect bug-inducing changes that were then linked to the code review information extracted from the issue tracking system. We found that the reasons why reviewers miss bugs are related both to their personal characteristics and to the technical properties of the patches under review. We then report a qualitative study that solicits opinions from Mozilla developers on their perception of the attributes associated with a well-done code review. The results of our survey suggest that developers find both technical factors (patch size, number of chunks, and module) and personal factors (reviewer's experience and review queue) to be strong contributors to review quality.
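The SZZ step mentioned above links bug-fixing commits back to the changes that likely introduced the bugs by blaming the lines removed in the fix. A minimal Python sketch of that core idea follows; the way fix commits are identified (a "Bug" keyword in the commit message) and the lack of filtering for cosmetic changes are simplifying assumptions, not the study's exact setup.

    import re
    import subprocess

    # Minimal sketch of the core SZZ idea: for each bug-fixing commit, blame the lines
    # it removed to find the commits that last touched them (candidate bug-inducing
    # changes). Identifying fixes by a "Bug" keyword is a simplifying assumption.

    def git(*args: str, repo: str = ".") -> str:
        return subprocess.run(["git", "-C", repo, *args],
                              capture_output=True, text=True, check=True).stdout

    def fix_commits(repo: str = ".") -> list[str]:
        return git("log", "--grep=Bug", "--pretty=%H", repo=repo).split()

    def bug_inducing_candidates(fix: str, repo: str = ".") -> set[str]:
        candidates: set[str] = set()
        files = git("diff", "--name-only", f"{fix}^", fix, repo=repo).split()
        for path in files:
            diff = git("diff", "-U0", f"{fix}^", fix, "--", path, repo=repo)
            # Hunk headers look like "@@ -12,3 +15,4 @@": "-12,3" are the removed lines.
            for start, count in re.findall(r"^@@ -(\d+)(?:,(\d+))? ", diff, re.MULTILINE):
                n = int(count or "1")
                if n == 0:          # pure addition, nothing to blame
                    continue
                end = int(start) + n - 1
                blame = git("blame", "-l", "-L", f"{start},{end}", f"{fix}^", "--", path,
                            repo=repo)
                candidates.update(line.split()[0].lstrip("^")
                                  for line in blame.splitlines() if line)
        return candidates

    if __name__ == "__main__":
        for fix in fix_commits("."):
            print(fix, "<-", sorted(bug_inducing_candidates(fix)))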
18

Kampen fortsätter : En studie om kompatibilitetsproblem mellan moderna webbläsare / The fight continues : A study of compatibility problems of modern web browsers

Trenkler, Silja January 2006 (has links)
During the 1990s, the two leading web browsers, Internet Explorer and Netscape Navigator, fought each other in a battle for market share, the so-called browser war. This war caused almost complete incompatibility between the web browsers. Since then, there has been a continual development of common standards for the web. Today the conditions for compatibility are much better than ten years ago, but the problem is not completely solved. The modern web browsers Internet Explorer 6, Firefox 1.5, Opera 8.5 and Safari can display the exact same web page differently despite common standards. The aim of this essay is to investigate the technical causes of the problem and to develop suggested solutions for creating a web page that is fully compatible with modern browsers. The essay contains an extensive literature study covering definitions, history and problems. The theory was complemented with three field interviews with professional web designers. The investigations show that compatibility problems depend on several factors and that it is impossible to create one exhaustive solution that encompasses all problems. However, by combining different techniques one can create a method that covers a large part of both general and specific compatibility problems without colliding with recommended standards.
20

Automatizovaná rekonstrukce webových stránek / Automatic Webpage Reconstruction

Serečun, Viliam January 2018 (has links)
Many legal institutions require proof regarding web content. This thesis deals with problems connected to web reconstruction and archiving. The primary goal is to provide an open source solution that satisfies the requirements of legal institutions. The work presents two main products. The first is a framework that serves as a fundamental building block for developing web scraping and web archiving applications. The second is a web application prototype that demonstrates how the framework is used. The application's output is a MAFF archive file comprising the reconstructed web page, a screenshot of the page, and a table of meta-information. This table records information about the collected data, server details such as the IP addresses and ports of the device hosting the original web page, and a timestamp.
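The archiving output described above (page content plus provenance metadata in one container) can be sketched in Python as follows. MAFF files are ZIP-based, but this sketch writes a plain ZIP with a simple JSON metadata record rather than a proper MAFF index, omits the screenshot step entirely, and uses a placeholder URL.

    import json
    import socket
    import zipfile
    from datetime import datetime, timezone
    from urllib.parse import urlparse

    import requests

    # Minimal sketch of archiving a web page together with provenance metadata.
    # This writes a plain ZIP with a JSON metadata record, not a full MAFF archive,
    # and it does not capture a screenshot; the URL below is a placeholder.
    def archive_page(url: str, out_path: str) -> None:
        response = requests.get(url, timeout=10)
        parsed = urlparse(url)
        host = parsed.hostname or ""
        port = parsed.port or (443 if parsed.scheme == "https" else 80)
        metadata = {
            "url": url,
            "server_ip": socket.gethostbyname(host),
            "server_port": port,
            "http_status": response.status_code,
            "archived_at": datetime.now(timezone.utc).isoformat(),
        }
        with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as archive:
            archive.writestr("index.html", response.text)
            archive.writestr("metadata.json", json.dumps(metadata, indent=2))

    if __name__ == "__main__":
        archive_page("https://example.com/", "example_archive.zip")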
