111

Supported Programming for Beginning Developers

Gilbert, Andrew 01 March 2019 (has links)
Testing code is important, but writing test cases can be time consuming, particularly for beginning programmers who are already struggling to write an implementation. We present TestBuilder, a system for test case generation which uses an SMT solver to generate inputs to reach specified lines in a function, and asks the user what the expected outputs would be for those inputs. The resulting test cases check the correctness of the output, rather than merely ensuring the code does not crash. Further, by querying the user for expectations, TestBuilder encourages the programmer to think about what their code ought to do, rather than assuming that whatever it does is correct. We demonstrate, using mutation testing of student projects, that tests generated by TestBuilder perform better than merely compiling the code using Python’s built-in compile function, although they underperform the tests students write when required to achieve 100% test coverage.
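As an illustration of the idea (not TestBuilder's actual code), the sketch below substitutes a brute-force search over a small input range for the SMT solver, and hard-codes the answer the user would give when asked for the expected output. All function names here are hypothetical.

```python
# Illustrative sketch of the TestBuilder workflow: find an input that reaches
# a chosen branch, then pair it with a user-supplied expected output to form
# a test case that checks correctness, not just absence of crashes.
# A real implementation would use an SMT solver instead of brute force.

def classify(n):
    # Function under test: the `n < 0` branch is the target line.
    if n < 0:
        return "negative"
    return "non-negative"

def find_input_reaching_branch(predicate, search_space):
    """Stand-in for SMT-based input generation: return the first
    candidate input that satisfies the branch condition."""
    for candidate in search_space:
        if predicate(candidate):
            return candidate
    return None

def build_test(func, reaching_input, expected_output):
    """Build a test that checks the output value itself."""
    def test():
        assert func(reaching_input) == expected_output
    return test

# Find an input that reaches the `n < 0` line.
inp = find_input_reaching_branch(lambda n: n < 0, range(-5, 6))
# In TestBuilder the expected output comes from asking the user;
# here we hard-code the answer they would give.
test_case = build_test(classify, inp, "negative")
test_case()  # passes silently
```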
112

Webová aplikace pro odevzdávání studentských prací / Web interface for submitting student projects

Hill, Tomáš January 2011 (has links)
This master's thesis describes the Web2py web framework. Web frameworks are designed to facilitate the development of web applications; the chosen framework, Web2py, is written in the Python programming language. The main purpose of this thesis is to create a web application able to manage student projects as well as student attendance. The resulting application is optimised for the specific needs of laboratory PA126.
113

Testování odolnosti sítí a ochrana před útoky odepření služeb / Network protection testing and DoS attacks protection

Hanzal, Jan January 2014 (has links)
The aim of this master's thesis is to test a Cisco ASA 5510 firewall under Denial of Service attacks. The thesis includes a theoretical description of some of the attacks as well as practical tests. The practical part covers basic testing of the Cisco ASA with a Spirent Avalanche 3100B: the number of TCP connections per second and the firewall's throughput on the 7th layer of the ISO/OSI model were measured, along with the effect of Denial of Service attacks on throughput. The next part describes one possible way to generate Denial of Service attacks against the firewall from a Linux server. Python scripts were used to generate the DoS packets; with these scripts it is possible to generate five types of attacks.
114

Překladač podmnožiny jazyka Python / A Compiler of Language Python Subset

Falhar, Radek January 2014 (has links)
Python is a dynamically typed, interpreted programming language. Because of its dynamic type system, it is difficult to compile Python into statically typed source code, i.e. code in which it is specified exactly which types exist and what their structure is. Multiple approaches exist to achieve this, and one of the primary ones is type inference, which attempts to infer the type structure from the source code. For Python, this approach is difficult because the resulting type system is quite complex and the language itself is not designed with type inference in mind. In this work, I have focused on identifying a subset of the language for which type inference is possible while keeping the natural way the language is used. I then implemented a compiler that compiles this subset into a statically typed language, which can in turn be translated into native code.
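To make the idea concrete, here is a toy type-inference pass over a tiny Python subset (literals, names, and binary `+`), illustrating the kind of analysis the abstract describes. The rules and the supported subset are illustrative, not taken from the thesis.

```python
# Toy type inference over Python ASTs: literals carry their own type,
# names are looked up in a type environment, and `+` requires both
# operands to have the same type.
import ast

def infer(node, env):
    if isinstance(node, ast.Constant):
        return type(node.value).__name__           # e.g. 'int', 'str'
    if isinstance(node, ast.Name):
        return env[node.id]                        # look up a declared type
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        left, right = infer(node.left, env), infer(node.right, env)
        if left == right:
            return left                            # int+int -> int, str+str -> str
        raise TypeError(f"cannot add {left} and {right}")
    raise NotImplementedError("outside the supported subset")

expr = ast.parse("x + 1", mode="eval").body
print(infer(expr, {"x": "int"}))   # -> int
```

A compiler along the thesis's lines would run a (much richer) pass like this and then emit statically typed code from the inferred types.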
115

Pokročilý simulátor mikrokontrolérů rodiny MSP430 / Advanced Simulator of MSP430 Microcontrollers

Kaluža, Jan January 2014 (has links)
The goal of this master's thesis is to provide an introduction to MSP430 microcontrollers and to design a simulator for them, focusing on easy implementation of extensions in the form of peripherals. After a short introduction, the MSP430 microcontrollers are briefly described, including their internal peripherals and the formats used to store binary executable code. The thesis continues with a description of discrete event simulation using the DEVS formalism. Based on the previous chapters, the new simulator (consisting of a simulation core, a graphical user interface, and a library for MSP430 microcontroller simulation) is designed and implemented. The implementation is tested by comparison with a real microcontroller, and the thesis closes with a summary and evaluation of the implemented simulator.
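A minimal sketch of a discrete-event simulation core in the spirit of the approach described above: events are queued as (time, action) pairs and processed in time order. This is an illustrative toy, not the thesis's actual simulator design, and the "timer peripheral" is made up.

```python
# Minimal discrete-event simulation core: a priority queue of timestamped
# actions, popped and executed in chronological order.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0          # tie-breaker so equal-time events stay ordered

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action(self)

log = []
sim = Simulator()

# A toy "timer peripheral": fires every 10 time units, three times in total.
def timer_tick(sim, remaining=[3]):
    log.append(sim.now)
    remaining[0] -= 1
    if remaining[0] > 0:
        sim.schedule(10, timer_tick)

sim.schedule(10, timer_tick)
sim.run()
print(log)   # -> [10.0, 20.0, 30.0]
```

A peripheral model plugs in by scheduling its own future events, which is what makes this style of core easy to extend.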
116

Centrální stanice chytré domácnosti - SmartFlat / SmartFlat central station

Chudík, Vladimír January 2016 (has links)
This diploma thesis describes the design and realization of the SmartFlat smart home system. The first part analyses the problem and surveys the most common kinds of home automation systems, with examples. The thesis then describes the components of the SmartFlat system and the hardware design of the main unit, and explains the system's core software functions. The final part discusses the results of debugging and testing with real peripheral devices, and the conclusion gives a brief summary of the thesis and comments on the achieved results.
117

Monitorování přenosových parametrů sítě Internet / Monitoring of communication properties in Internet

Iľko, Pavol January 2016 (has links)
This thesis deals with measuring transmission parameters of the Internet, in particular ping latency, SSH protocol latency, and bandwidth. The thesis is divided into a theoretical and a practical part. The theoretical part describes the PlanetLab network, its brief history, and current projects, and also covers tools for mining data from web pages. The information from the theoretical part is then used to create a list of PlanetLab nodes and to program applications that measure the network's transmission parameters. The applications, the node list, and the collected data are included on the attached DVD.
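The web-scraping step for building a node list can be sketched with only the standard library, as below. The HTML snippet and the `class="node"` markup are made up for illustration; a real PlanetLab listing page would need its own parsing rules.

```python
# Extracting node hostnames from an HTML listing page with the stdlib
# html.parser module: collect the text of every <td class="node"> cell.
from html.parser import HTMLParser

class NodeListParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_node = False
        self.nodes = []

    def handle_starttag(self, tag, attrs):
        if tag == "td" and ("class", "node") in attrs:
            self.in_node = True

    def handle_data(self, data):
        if self.in_node and data.strip():
            self.nodes.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_node = False

page = """
<table>
  <tr><td class="node">planetlab1.example.edu</td></tr>
  <tr><td class="node">planetlab2.example.org</td></tr>
</table>
"""
parser = NodeListParser()
parser.feed(page)
print(parser.nodes)   # -> ['planetlab1.example.edu', 'planetlab2.example.org']
```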
118

Theoretical Analysis of Drug Analogues and VOC Pollutants

Garibay, Luis K. 08 1900 (has links)
While computational chemistry methods have a wide range of applications within the traditional physical sciences, very little is being done to expand their usage into other areas of science where these methods can help clarify research questions. One such promising field is forensic science, where detailed, rapidly acquired sets of chemical data can help in decision-making at a crime scene. As part of an effort to create a database that fits these characteristics, the present work uses computational chemistry methods to increase the information readily available to the forensic scientist for the rapid identification and scheduling of drugs. Ab initio geometry optimizations, vibrational spectra calculations, and ESI-MS fragmentation predictions for a group of common psychedelics are presented here. In addition, we describe a graphical user interface, currently under development, for performing ab initio calculations with the GAMESS software package in a more accessible manner. Results show that the set of theoretical techniques utilized here closely approximates experimental data. Another aspect covered in this work is the implementation of a boiling point estimation method based on group contributions, used to generate chemical dispersion areas with the ALOHA software package. Once again, theoretical results agreed with experimental boiling point values. A computer program written to facilitate the execution of the boiling point estimation method is also presented.
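A group-contribution boiling point estimate of the kind mentioned above can be sketched in a few lines. The sketch below uses the Joback method as a representative scheme (the thesis's exact method and group set may differ); the group values are the published Joback Tb contributions for three common groups, and the decomposition of the molecule into groups is supplied by hand.

```python
# Group-contribution boiling point estimate (Joback-style):
# Tb = 198.2 K + sum of per-group contributions.
JOBACK_TB = {          # Tb group contributions in kelvin (Joback & Reid, 1987)
    "-CH3": 23.58,
    "-CH2-": 22.88,
    "-OH": 92.88,      # alcohol hydroxyl
}

def estimate_tb(groups):
    """Estimate normal boiling point from a {group: count} decomposition."""
    return 198.2 + sum(JOBACK_TB[g] * count for g, count in groups.items())

# Ethanol = CH3-CH2-OH
tb_ethanol = estimate_tb({"-CH3": 1, "-CH2-": 1, "-OH": 1})
print(round(tb_ethanol, 1), "K")   # ~337.5 K vs. experimental 351.4 K
```

The gap between the estimate and the experimental value for ethanol illustrates why such methods are screening tools rather than replacements for measurement.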
119

Parallelization of Dataset Transformation with Processing Order Constraints in Python / Parallelisering av datamängdstransformation med ordningsbegränsningar i Python

Gramfors, Dexter January 2016 (has links)
Financial data is often represented with rows of values, contained in a dataset. This data needs to be transformed into a common format in order for comparison and matching to be made, which can take a long time for larger datasets. The main goal of this master’s thesis is speeding up these transformations through parallelization using Python multiprocessing. The datasets in question consist of several rows representing trades, and are transformed into a common format using rules known as filters. In order to devise a parallelization strategy, the filters were analyzed to find ordering constraints, and the Python profiler cProfile was used to find bottlenecks and potential parallelization points. This analysis resulted in a task-based approach for the implementation, in which the transformation was divided into an initial sequential pre-processing step, a parallel step where chunks of several trade rows were distributed among workers, and a sequential post-processing step. The implementation was tested by transforming four datasets of differing sizes using up to 16 workers, and execution time and memory consumption were measured. The results for the tiny, small, medium, and large datasets showed speedups of 0.5, 2.1, 3.8, and 4.81 respectively, along with linearly increasing memory consumption for all datasets. The test transformations were also profiled in order to understand the parallel program’s behaviour for the different datasets. The experiments led to the conclusion that dataset size heavily influences the speedup, partly because the sequential parts become less significant. In addition, the large memory increase for larger numbers of workers is noted as a major downside of multiprocessing when using caching mechanisms, as data is duplicated instead of shared. This thesis shows that it is possible to speed up the dataset transformations using chunks of rows as tasks, though the speedup is relatively low.
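The three-step strategy described above can be sketched as follows. The "filter" applied here (uppercasing each row) is a made-up stand-in for the real transformation rules, and the sketch uses the thread-based `multiprocessing.dummy.Pool` (same API as `multiprocessing.Pool`) to keep the example self-contained; the thesis uses real worker processes.

```python
# Task-based dataset transformation: sequential pre-processing (chunking),
# a parallel map over chunks, and sequential post-processing (flattening).
from multiprocessing.dummy import Pool  # thread-based Pool, same API as multiprocessing.Pool

def transform_chunk(rows):
    # Worker task: apply the transformation "filters" to one chunk of rows.
    return [row.upper() for row in rows]

def transform(rows, workers=4, chunk_size=2):
    # Sequential pre-processing: split the dataset into chunks of rows.
    chunks = [rows[i:i + chunk_size] for i in range(0, len(rows), chunk_size)]
    # Parallel step: distribute the chunks among the workers; map preserves
    # chunk order, which respects the processing-order constraint.
    with Pool(workers) as pool:
        results = pool.map(transform_chunk, chunks)
    # Sequential post-processing: flatten back into one dataset, in order.
    return [row for chunk in results for row in chunk]

print(transform(["eur/usd", "gbp/sek", "usd/jpy"]))
# -> ['EUR/USD', 'GBP/SEK', 'USD/JPY']
```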
120

Benchmarking Python Interpreters: Measuring Performance of CPython, Cython, Jython and PyPy / Jämförelse av Pythoninterpreterarna CPython, Cython, Jython och PyPy

Roghult, Alexander January 2016 (has links)
For the Python programming language there are several different interpreters and implementations. In this thesis project, performance with respect to execution time is evaluated for four of these: CPython, Cython, Jython, and PyPy. Performance was measured with a test suite, created during the project, comprising tests of Python dictionaries, lists, tuples, generators, and objects. Each test was run with both integers and objects as test data, with varying problem sizes, and was implemented in Python code. For Cython and Jython, separate versions of the tests were also implemented containing syntax and data types specific to that interpreter. The results showed that Jython and PyPy were fastest for a majority of the tests when running code with only Python syntax and data types. Cython uses the Python C API and is therefore dependent on CPython, so its performance was often similar to CPython's. Cython did perform better on some of the tests when using Cython syntax and data types, as it could thereby decrease its reliance on CPython. Each interpreter was fastest on at least one test, showing that no single interpreter is best for all problems.
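A minimal version of the kind of timing harness such a suite is built on is sketched below: time the same operation on several data structures and report the best-of-N wall time, then run the identical script under different interpreters to compare them. The specific micro-benchmarks are illustrative (and the f-string syntax would need adjusting for an older interpreter such as Jython).

```python
# Tiny benchmark harness: best-of-N timing for a few container operations,
# using timeit so the same script can be run under CPython, PyPy, etc.
import timeit

def bench_list(n):
    return [i * i for i in range(n)]

def bench_dict(n):
    return {i: i * i for i in range(n)}

def bench_tuple(n):
    return tuple(i * i for i in range(n))

def run_suite(n=10_000, repeats=5):
    results = {}
    for name, fn in [("list", bench_list), ("dict", bench_dict), ("tuple", bench_tuple)]:
        # Take the minimum of several runs, as the timeit docs recommend,
        # to reduce noise from the OS scheduler and warm-up effects.
        results[name] = min(timeit.repeat(lambda: fn(n), number=1, repeat=repeats))
    return results

for name, seconds in run_suite().items():
    print(f"{name}: {seconds * 1e3:.3f} ms")
```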
