  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Performance Analysis of JavaScript

Smedberg, Fredrik January 2010
In the last decade, web browsers have seen a remarkable increase in performance, especially in their JavaScript engines. Over the years, JavaScript has gone from being a slow and rather limited language to being feature-rich and fast. Its speed can reach roughly the same as, or half of, comparable code written in C++, but this speed depends directly on the choice of web browser, and the best performance is seen in browsers using JIT compilation techniques.

Even though the language has seen a dramatic increase in performance, there are still major problems regarding memory usage. JavaScript applications typically consume 3-4 times more memory than similar applications written in C++. Many browser vendors, like Opera Software, acknowledge this and are currently trying to optimize their memory usage, so this issue will hopefully disappear in the near future.

Because the majority of scientific papers about JavaScript compare performance only on the industry benchmarks SunSpider and V8, this thesis widens the scope. Those benchmarks give no information about how JavaScript compares to C#, C++ and other popular languages. To enable that comparison, I have implemented a GIF decoder, an XML parser and various elementary tests in both JavaScript and C++ to compare how far apart the languages are in terms of speed, memory usage and responsiveness.
82

Um analisador sintático neural multilíngue baseado em transições / A multilingual neural transition-based parser

Costa, Pablo Botton da 24 January 2017
Funded by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). / Dependency parsing consists in inducing a model capable of extracting the correct dependency tree from an input natural-language sentence. Multilingual techniques are increasingly used in Natural Language Processing (NLP) (BROWN et al., 1995; COHEN; DAS; SMITH, 2011), especially for dependency parsing. Intuitively, a multilingual parser can be seen as a vector of parsers, each trained individually on one language. However, this approach is very costly in terms of processing time and resources. As an alternative, several parsing techniques have been developed to address this problem (MCDONALD; PETROV; HALL, 2011; TACKSTROM; MCDONALD; USZKOREIT, 2012; TITOV; HENDERSON, 2007), but all of them depend on word alignment (TACKSTROM; MCDONALD; USZKOREIT, 2012) or word clustering, which increases complexity, since it is difficult to induce alignments between words and syntactic resources (TSARFATY et al., 2013; BOHNET et al., 2013a). A simple solution proposed recently (NIVRE et al., 2016a) uses a universally annotated corpus to reduce the complexity associated with building a multilingual parser. In this context, this work presents a universal model for dependency parsing: the NNParser. Our model is a modification of Chen and Manning (2014), with a greedier and more accurate approach to capturing distributional representations (MIKOLOV et al., 2011). The NNParser reached 93.08% UAS on the English Penn Treebank (WSJ) and better results than the state-of-the-art Stack LSTM parser for Portuguese (87.93% vs. 86.2% LAS) and Spanish (86.95% vs. 85.7% LAS) on the Universal Dependencies corpus.
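As a rough illustration of the transition-based approach the NNParser builds on, the Python sketch below implements a greedy arc-standard parser and the UAS metric quoted in the abstract. The neural scorer, features and training of Chen and Manning (2014) are replaced by a placeholder `score` function, so this is a structural sketch rather than the thesis's model.

```python
# Greedy arc-standard transition-based dependency parsing (sketch).
SHIFT, LEFT_ARC, RIGHT_ARC = "shift", "left_arc", "right_arc"

def legal_transitions(stack, buffer):
    moves = []
    if buffer:
        moves.append(SHIFT)
    if len(stack) >= 2:
        moves.append(RIGHT_ARC)
        if stack[-2] != 0:              # never give the artificial ROOT (index 0) a head
            moves.append(LEFT_ARC)
    return moves

def parse(words, score):
    """Return a dict of head indices for `words`; index 0 is an artificial ROOT."""
    stack, buffer = [0], list(range(1, len(words) + 1))
    heads = {}
    while buffer or len(stack) > 1:
        moves = legal_transitions(stack, buffer)
        move = max(moves, key=lambda m: score(stack, buffer, m))
        if move == SHIFT:
            stack.append(buffer.pop(0))
        elif move == LEFT_ARC:          # second-to-top gets top as head
            dep = stack.pop(-2)
            heads[dep] = stack[-1]
        else:                            # RIGHT_ARC: top gets second-to-top as head
            dep = stack.pop()
            heads[dep] = stack[-1]
    return heads

def uas(predicted, gold):
    """Unlabeled attachment score: fraction of tokens with the correct head."""
    return sum(predicted[i] == gold[i] for i in gold) / len(gold)

# Trivial stand-in scorer; NNParser instead scores transitions with a neural network
# over features of the stack and buffer.
def score(stack, buffer, move):
    return {LEFT_ARC: 2, RIGHT_ARC: 1, SHIFT: 0}[move]

print(parse(["she", "saw", "dogs"], score))
```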
84

Nativní XML rozhraní pro relační databázi / Native XML Interface for a Relational Database

Piwko, Karel January 2010
XML has emerged as a leading document format for exchanging data. Because of the vast number of XML documents available and transferred, there is a strong need to store and query the information in these documents. However, most companies still use an RDBMS for their data warehouses, and it is often necessary to combine legacy data with data in XML format, so it is useful to consider storage options for XML documents in a relational database. In this thesis we focus on structured and semi-structured data-based XML documents, because they are the most common when exchanging data and they can easily be validated against an XML schema. We propose a slightly modified Hybrid algorithm to shred documents into relations using an XSD schema, and we allow redundancy to make queries faster. Our goal was not to provide an academic solution, but a fully working system supporting the latest standards that outperforms native XML databases in both performance and vertical scalability.
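To make the shredding idea concrete, here is a small Python sketch that flattens a structured XML document into relational tables with sqlite3. The table layout, element names and sample document are illustrative assumptions, not the schema produced by the modified Hybrid algorithm in the thesis.

```python
import sqlite3
import xml.etree.ElementTree as ET

doc = """<orders>
  <order id="1" customer="Alice"><item sku="A1" qty="2"/><item sku="B7" qty="1"/></order>
  <order id="2" customer="Bob"><item sku="A1" qty="5"/></order>
</orders>"""

conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE orders(id INTEGER PRIMARY KEY, customer TEXT);
  CREATE TABLE items(order_id INTEGER REFERENCES orders(id), sku TEXT, qty INTEGER);
""")

root = ET.fromstring(doc)
for order in root.findall("order"):
    oid = int(order.get("id"))
    conn.execute("INSERT INTO orders VALUES (?, ?)", (oid, order.get("customer")))
    for item in order.findall("item"):
        # The parent key is propagated into each child row, so queries on items
        # never need to traverse the original XML tree.
        conn.execute("INSERT INTO items VALUES (?, ?, ?)",
                     (oid, item.get("sku"), int(item.get("qty"))))

print(conn.execute("SELECT customer, SUM(qty) FROM orders JOIN items "
                   "ON items.order_id = orders.id GROUP BY customer").fetchall())
```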
85

Syntaktický analyzátor pro český jazyk / Syntactic Analyzer for Czech Language

Beneš, Vojtěch January 2014
This master's thesis describes the theoretical basics, solution design and implementation of a constituency (phrasal) parser for the Czech language, based on associating parts of speech into phrases. The created program works with a manually built and annotated Czech sample corpus to learn a probabilistic context-free grammar at runtime. The parser implementation, based on an extended CKY algorithm, then decides whether an input Czech sentence can be generated by the learned grammar and, if so, constructs the most probable derivation tree. This result is then compared with the expected parse to evaluate the parser's success rate.
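The probabilistic CKY idea can be sketched in a few lines of Python. The toy grammar in Chomsky normal form and the example sentence below are assumptions made for illustration; the thesis learns its grammar from an annotated Czech corpus instead.

```python
from collections import defaultdict

# Binary rules (A, B, C, prob) for A -> B C, and lexical rules (A, word, prob).
binary = [("S", "NP", "VP", 1.0), ("VP", "V", "NP", 1.0), ("NP", "DET", "N", 0.6)]
lexical = [("NP", "she", 0.4), ("V", "saw", 1.0), ("DET", "the", 1.0), ("N", "dog", 1.0)]

def cky(words):
    n = len(words)
    best = defaultdict(float)   # (i, j, A) -> best probability of A spanning words[i:j]
    back = {}                   # back-pointers for rebuilding the derivation tree
    for i, w in enumerate(words):
        for a, word, p in lexical:
            if word == w and p > best[(i, i + 1, a)]:
                best[(i, i + 1, a)] = p
                back[(i, i + 1, a)] = w
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for a, b, c, p in binary:
                    q = p * best[(i, k, b)] * best[(k, j, c)]
                    if q > best[(i, j, a)]:
                        best[(i, j, a)] = q
                        back[(i, j, a)] = (k, b, c)
    return best, back

def tree(back, i, j, a):
    entry = back[(i, j, a)]
    if isinstance(entry, str):          # lexical entry
        return (a, entry)
    k, b, c = entry
    return (a, tree(back, i, k, b), tree(back, k, j, c))

best, back = cky(["she", "saw", "the", "dog"])
print(best[(0, 4, "S")], tree(back, 0, 4, "S"))
```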
86

Webový portál pro správu a klasifikaci informací z distribuovaných zdrojů / Web Application for Managing and Classifying Information from Distributed Sources

Vrána, Pavel January 2011
This master's thesis deals with data-mining techniques and the classification of data into specified categories. The goal of the thesis is to implement a web portal for the administration and classification of data from distributed sources. To achieve this goal, different methods are tested to find the one most appropriate for classifying web articles. Based on the results obtained, an automated application is developed for downloading and classifying data from different sources, ultimately able to substitute for a user who would otherwise process all the tasks manually.
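As an example of the kind of classification method such a portal might test, the sketch below builds a TF-IDF plus logistic-regression pipeline with scikit-learn. The articles, categories and choice of library are assumptions; the thesis does not specify which toolkit it evaluates.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training articles and their categories (real data would be crawled).
articles = [
    "The central bank raised interest rates again this quarter.",
    "The striker scored twice in the final minutes of the match.",
    "New graphics cards double the frame rate in recent games.",
    "Quarterly earnings beat analyst expectations across the sector.",
]
labels = ["economy", "sport", "technology", "economy"]

# TF-IDF features over unigrams and bigrams, fed into a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(articles, labels)

print(model.predict(["Shares fell after the rate decision."]))
```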
87

Dynamische Dokumenterstellung mit dem Webbrowser / Dynamic document generation with the web browser

Knauf, Robert, Schröder, Daniel 31 January 2009
How can print materials conforming to a corporate design be produced without access to design tools? The talk introduces the structured data format XML, the transformation language XSLT, the formatting language XSL-FO and the FO processor Apache FOP. Using the TU Chemnitz poster generator as a practical example, it explains how the formatting process works. It also presents the generator's software architecture, which uses predefined XML templates to automatically and dynamically generate the user input form in the web browser.
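A minimal sketch of the formatting pipeline described in the talk, assuming lxml for the XSLT step and the standard Apache FOP command line for PDF rendering; the stylesheet, file names and poster content are illustrative only, not the TU Chemnitz templates.

```python
import subprocess
from lxml import etree

# Structured input data (XML) and a stylesheet that maps it to XSL-FO.
data = etree.XML("<poster><title>Tag der offenen Tür</title></poster>")
stylesheet = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <xsl:template match="/poster">
    <fo:root>
      <fo:layout-master-set>
        <fo:simple-page-master master-name="A4" page-height="29.7cm" page-width="21cm">
          <fo:region-body/>
        </fo:simple-page-master>
      </fo:layout-master-set>
      <fo:page-sequence master-reference="A4">
        <fo:flow flow-name="xsl-region-body">
          <fo:block font-size="36pt"><xsl:value-of select="title"/></fo:block>
        </fo:flow>
      </fo:page-sequence>
    </fo:root>
  </xsl:template>
</xsl:stylesheet>""")

fo_tree = etree.XSLT(stylesheet)(data)          # XML + XSLT -> XSL-FO
with open("poster.fo", "wb") as f:
    f.write(etree.tostring(fo_tree, pretty_print=True))

# Apache FOP renders the FO document to PDF (assumes `fop` is on the PATH).
subprocess.run(["fop", "-fo", "poster.fo", "-pdf", "poster.pdf"], check=True)
```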
88

Konverze ASP do ASP.NET / Translation of ASP into ASP.NET

Vilímek, Jan January 2007
The goal of this dissertation is to implement an application for ASP to ASPX conversion. The ASP pages are assumed to be written in VBScript; the target language for the ASPX pages is C#. The application is developed on the .NET platform. The conversion process is fully automatic, so a programmer should not need to alter the converted files. The first part of the dissertation introduces the problem domain and surveys existing solutions. The next part covers the analysis and design of the application itself. The main part deals with the conversion of the VBScript grammar, the problems encountered during conversion, and how they were solved.
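As a toy illustration of this kind of source-to-source translation, the sketch below rewrites two trivial VBScript constructs into C# with regular expressions. A real converter, like the one in the dissertation, parses the full VBScript grammar rather than pattern-matching on text.

```python
import re

def convert_fragment(vbscript: str) -> str:
    """Translate a few simple VBScript statements into C# (illustrative only)."""
    cs = vbscript
    cs = re.sub(r"^\s*Dim\s+(\w+)\s*$", r"object \1;", cs, flags=re.M)       # Dim x -> typed declaration
    cs = re.sub(r"Response\.Write\s*\((.*?)\)", r"Response.Write(\1);", cs)  # add statement terminator
    cs = re.sub(r"'(.*)$", r"//\1", cs, flags=re.M)                          # VBScript comments -> C#
    return cs

print(convert_fragment("Dim total\n' running sum\nResponse.Write(total)"))
```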
89

One Compiler to Rule Them All: Extending the Storm Programming Language Platform with a Java Frontend

Ahrenstedt, Simon, Huber, Daniel January 2023
The thesis aims to develop a method for extending the language platform Storm with a Java frontend. The project was conducted using an Action Research methodology and highlights triumphs and challenges. Despite the significant overhead related to note generation and problem statement formulation, this methodology proved beneficial in identifying problems and providing the framework to solve them. The first research question (RQ.1) evaluates to what extent the language platform Storm is suitable for implementing the object-oriented language Java. Using Storm, only a BNF grammar and a specification for three-address code instructions are needed. Despite difficulties encountered during the implementation, the platform offers tools that allow comprehensive customization of the new language's intended behavior and functionality. The second research question (RQ.2) explores a suitable method for implementing a new language in Storm. It is suggested to first implement a foundational structure comprising statements, blocks, scope handling and variable declarations. From this foundation, new functionality can be gradually introduced and tested by connecting it to the appropriate location in the structure. When all functionality has been added and tested, a refactoring step can take place to modify the BNF if needed.
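To illustrate what a three-address-code target amounts to, the following sketch lowers a small expression AST to three-address instructions. The AST shape and instruction format are assumptions for illustration and do not reflect Storm's actual intermediate representation.

```python
import itertools

counter = itertools.count()

def lower(node, code):
    """Return the name of the temporary (or operand) holding the value of `node`."""
    if isinstance(node, (int, str)):              # literal or variable name
        return str(node)
    op, left, right = node                        # e.g. ("+", ("*", "a", 2), "b")
    l, r = lower(left, code), lower(right, code)
    tmp = f"t{next(counter)}"
    code.append(f"{tmp} = {l} {op} {r}")          # one operator per instruction
    return tmp

code = []
result = lower(("+", ("*", "a", 2), "b"), code)
print("\n".join(code), "\nresult in", result)
```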
90

Additive manufacturing: Optimization of process parameters for fused filament fabrication

Hayagrivan, Vishal January 2018
An obstacle to the widespread use of additive manufacturing (AM) is the difficulty of estimating the effects of process parameters on the mechanical properties of the manufactured part. The complex relationship between geometry, parameters and mechanical properties makes it impractical to derive an analytical relationship and calls for the use of a numerical model. An approach to formulating such a numerical model is developed in this thesis. The AM technique the thesis focuses on is fused filament fabrication (FFF). A numerical model is developed by recreating the FFF build process in a simulation environment: the machine instructions generated by a slicer to build a part are used to create the model. The model serves as a basis for determining the effects of process parameters on the stiffness and strength of a part. The stiffness is determined by calculating the model's response to a uniformly distributed load. The strength of the part depends on its thermal history; the developed numerical model can serve as a basis for implementing models that describe the relation between thermal history and strength. The developed model is well suited to optimizing FFF parameters, as it encompasses the effects of all of them. A genetic algorithm is used to optimize the FFF parameters for minimum weight under a minimum-stiffness constraint.
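The optimization loop can be sketched as follows in Python, with simple surrogate functions standing in for the FE model. The parameter bounds, surrogates and GA settings are assumptions chosen only to show the minimum-weight, minimum-stiffness structure of the problem.

```python
import random

BOUNDS = {"infill": (0.1, 1.0), "layer_height": (0.1, 0.4), "walls": (1, 5)}
MIN_STIFFNESS = 800.0

def weight(p):      # surrogate: heavier with more infill and walls
    return 100 * p["infill"] + 15 * p["walls"]

def stiffness(p):   # surrogate: stiffer with more infill/walls, thinner layers
    return 900 * p["infill"] + 120 * p["walls"] - 300 * p["layer_height"]

def fitness(p):     # minimize weight; infeasible designs get a penalty
    penalty = max(0.0, MIN_STIFFNESS - stiffness(p)) * 10
    return weight(p) + penalty

def random_individual():
    return {"infill": random.uniform(*BOUNDS["infill"]),
            "layer_height": random.uniform(*BOUNDS["layer_height"]),
            "walls": random.randint(*BOUNDS["walls"])}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in a}

def mutate(p):
    k = random.choice(list(p))
    lo, hi = BOUNDS[k]
    p[k] = random.randint(lo, hi) if k == "walls" else random.uniform(lo, hi)
    return p

population = [random_individual() for _ in range(40)]
for generation in range(100):
    population.sort(key=fitness)
    parents = population[:10]                      # simple truncation selection
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(30)]

best = min(population, key=fitness)
print(best, "weight:", round(weight(best), 1), "stiffness:", round(stiffness(best), 1))
```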
