281
Test av OCR-verktyg för Linux / OCR software tests for Linux. Nilsson, Elin. January 2010.
This report is about selecting an OCR tool for digitizing paper documents. The requirements were, among others, that the tool is compatible with Linux, can be driven from the command line, and can handle Scandinavian characters. Twelve OCR tools were reviewed and three were selected: Ocrad, Tesseract and OCR Shop XTR. To test them, two documents were scanned and digitized with each tool. The results show that Tesseract is the most accurate tool and Ocrad the fastest, while OCR Shop XTR performs worst both in timing and in the number of correctly recognized words.
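As an illustration of the kind of command-line comparison described above, here is a minimal Python sketch that runs Tesseract and Ocrad on a scanned page and times each run. It assumes both tools are installed; the file names and the use of the Swedish language pack (swe) for Scandinavian characters are illustrative assumptions, not details taken from the thesis.

```python
import subprocess
import time

# Hypothetical input: one scanned page converted to the formats each tool expects.
PAGE_TIFF = "scan01.tif"   # Tesseract reads TIFF/PNG
PAGE_PBM = "scan01.pbm"    # Ocrad reads PBM/PGM/PPM

def run_timed(cmd):
    """Run an external OCR command and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Tesseract: writes scan01_tess.txt; "-l swe" selects Swedish language data
# so Scandinavian characters (å, ä, ö) are recognized.
t_tesseract = run_timed(["tesseract", PAGE_TIFF, "scan01_tess", "-l", "swe"])

# Ocrad: "-o" names the output text file.
t_ocrad = run_timed(["ocrad", "-o", "scan01_ocrad.txt", PAGE_PBM])

print(f"Tesseract: {t_tesseract:.2f} s, Ocrad: {t_ocrad:.2f} s")
```

Accuracy would then be judged separately by comparing the two output text files against a manually corrected reference transcription.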
282
IT-Forensisk undersökning av flyktigt minne : På Linux och Android enheter / Forensic examination of volatile memory under Linux and Android. Hedlund, Niklas. January 2013.
The ability to conduct an efficient examination of volatile memory is becoming increasingly important in IT forensic investigations, both for Linux- and Windows-based PC systems and for mobile devices running Android or other mobile operating systems. Android uses a modified Linux kernel, where the modifications adapt the kernel to the special requirements of a mobile operating system; they include inter-process message passing as well as changes to how memory is handled and monitored. Because the two kernels are so closely related, the same basic principles can be used to dump and examine memory. The dump is produced by a kernel module, in this report the software LiME, which supports both kernels. Analysing the memory requires that the tools understand the memory layout in question and, depending on the method used, may also require information about various symbols. The tool used in this thesis is Volatility, which on paper is capable of extracting all the information needed for a correct examination. The aim of the work was to further develop existing methods for analysing volatile memory on Linux-based machines (PC) and embedded systems (Android). Problems arose when examining volatile memory on Android, and the stated goals could not be fully achieved. It turned out that memory analysis targeting the PC platform is both simpler and more straightforward than analysis targeting Android.
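To make the two-step workflow concrete, the sketch below wraps memory acquisition with the LiME kernel module and a process listing with Volatility 2 in a small Python helper. The module path, output location and profile name are hypothetical placeholders; a Linux or Android profile matching the target kernel has to be built separately, and on Android the module is normally pushed and loaded over adb.

```python
import subprocess

def dump_memory(lime_ko="lime.ko", out_path="/tmp/ram.lime"):
    """Load the LiME kernel module so it writes a memory image to out_path.
    Requires root privileges on the target machine."""
    subprocess.run(["insmod", lime_ko, f"path={out_path}", "format=lime"],
                   check=True)
    return out_path

def list_processes(image, profile="LinuxUbuntu1204x64"):
    """Run Volatility 2's linux_pslist plugin against the acquired image.
    The profile name here is hypothetical; it must be built from the target
    kernel's System.map and debug symbols."""
    cmd = ["vol.py", "-f", image, f"--profile={profile}", "linux_pslist"]
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    image = dump_memory()
    print(list_processes(image))
```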
283
A comparative study of the Linux and Windows device driver architecture with a focus on IEEE1394 (high speed serial bus) drivers. Tsegaye, Melekam Asrat. January 2004.
New hardware devices are continually being released to the public by hardware manufacturers around the world. For these new devices to be usable under a PC operating system, device drivers that extend the functionality of the target operating system have to be constructed. This work examines and compares the device driver architectures currently in use by two of the most widely used operating systems, Microsoft's Windows and Linux. The IEEE1394 (high speed serial bus) device driver stacks on each operating system are examined and compared as an example of a major device driver stack implementation, including driver requirements for the upcoming IEEE1394.1 bridging standard.
284
Desenvolvimento e automatização de um método teórico para a avaliação quantitativa da seletividade de proteínas do MHC por diferentes antígenos / Development and automation of a theoretical method for the quantitative evaluation of the selectivity of MHC proteins for different antigens. Jackson Gois da Silva. 25 June 2004.
In this work we automated the new MHCalc methodology, previously developed in our laboratory, resulting in a program of the same name written in C for the GNU/Linux environment, which quantitatively evaluates the selectivity of a given MHC protein. This quantitative evaluation allows a preference scale of the naturally occurring residues to be established for each interaction pocket of the MHC presentation groove; by permutation of these residues, composition rules can be derived for peptides recognizable by each studied allele, even in the absence of experimental data. The methodology developed in our laboratory is based on evaluating the selectivity of each pocket independently, through its interaction energy with each naturally occurring amino acid. The MHCalc program uses the molecular modeling package THORMM [Moret, 1998] for geometry optimization of the structures and takes as its only input a PDB-format file containing the coordinates of an MHC/peptide complex, producing as output a data table with the naturally occurring residues in order of preference for each pocket, together with the data used to rank them. We tested MHCalc on the complex formed by the HLA-DR1 molecule (DRA DRB1*0101) and the hemagglutinin peptide, obtained from the Brookhaven Protein Data Bank under entry 1DLH; this system was chosen because it is extensively studied and offers abundant experimental and theoretical data for comparison. So far we have obtained the data for pocket 1, which are in full agreement with the literature.
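The central idea, ranking the twenty naturally occurring residues by interaction energy independently for each pocket, can be sketched as follows. This is an illustrative assumption about the control flow only: in MHCalc the energy comes from geometry optimization with THORMM, which the placeholder function below does not implement.

```python
# One-letter codes of the 20 naturally occurring amino acids.
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")

def interaction_energy(pocket, residue):
    """Placeholder: in MHCalc this value would come from geometry-optimizing
    the pocket/residue pair and evaluating the interaction energy
    (lower = more favorable)."""
    raise NotImplementedError("supply an energy model")

def pocket_preference(pocket, energy_fn=interaction_energy):
    """Return the residues ranked from most to least favorable for one pocket,
    i.e. the per-pocket preference scale described in the abstract."""
    energies = {aa: energy_fn(pocket, aa) for aa in AMINO_ACIDS}
    return sorted(energies, key=energies.get)

# Composition rules for a recognizable peptide then follow by combining the
# top-ranked residues of each pocket, even without experimental binding data.
```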
285
Multiplatformní aplikace pro správu síťových prvků Mikrotik / Multiplatform application for Mikrotik network devices management. Bárdossy, Adrián. January 2018.
This diploma thesis describes the development of an application for managing network devices based on MikroTik hardware. The introduction describes the libraries used as well as the API. The next part covers the application backend: it describes the individual directories of the project, which was written in PyCharm, where each directory is documented in its own file together with a UML diagram and a table of the methods in the corresponding class. The following part describes the graphical part of the application, illustrated on one group of implemented buttons, and includes screenshots taken from the application. The last section provides a tutorial for installing the modules needed to run the application and covers the manual testing of the application.
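Purely as an illustration of scripted management of a RouterOS device, and not the approach used in the thesis (which talks to the MikroTik API service), the following sketch sends a console command over SSH with the paramiko library. The host address, credentials and command are placeholder assumptions.

```python
import paramiko

def routeros_command(host, username, password, command):
    """Open an SSH session to a RouterOS device and return its command output.
    Illustrative only; the thesis backend uses the MikroTik API instead."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password,
                   look_for_keys=False)
    try:
        _, stdout, stderr = client.exec_command(command)
        return stdout.read().decode(), stderr.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    out, err = routeros_command("192.0.2.1", "admin", "password",
                                "/system identity print")
    print(out or err)
```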
286
Implementace CDN a clusteringu v prostředí GNU/Linux s testy výkonnosti. / CDN and clustering in GNU/Linux with performance testing. Mikulka, Pavel. January 2008.
Fault tolerance is essential in a production-grade service delivery network, and one solution is to build a clustered environment that keeps system failures to a minimum. This thesis examines high-availability and load-balancing services built with open-source tools on GNU/Linux. It discusses general technologies for high-availability computing such as virtualization, synchronization and mirroring. For building relatively cheap high-availability clusters, DRBD, a tool for creating synchronized Linux block devices, is a suitable choice. The thesis also examines the Linux-HA project, Red Hat Cluster Suite, LVS and related tools. Content Delivery Networks (CDNs) replicate content over several mirrored web servers strategically placed at various locations in order to deal with flash crowds; a CDN combines a request-routing mechanism with a replication mechanism, and thus offers fast and reliable applications and services by distributing content to cache servers located close to end users. This work examines the open-source CDNs Globule and CoralCDN and tests their performance in a global deployment.
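As a toy illustration of the request-routing half of a CDN mentioned above, the sketch below steers each request to the replica with the fewest active connections. The server names and load figures are invented; real CDNs such as Globule or CoralCDN use much richer DNS- or HTTP-based redirection and replication policies.

```python
# Hypothetical replica servers with a current-load metric (open connections).
REPLICAS = {
    "mirror-eu.example.org": 12,
    "mirror-us.example.org": 4,
    "mirror-asia.example.org": 9,
}

def route_request(replicas):
    """Least-load request routing: choose the replica currently handling the
    fewest requests, so a flash crowd is spread across spare capacity."""
    return min(replicas, key=replicas.get)

def handle_request(replicas):
    target = route_request(replicas)
    replicas[target] += 1  # account for the newly redirected connection
    return target

if __name__ == "__main__":
    for _ in range(5):
        print("redirect client to", handle_request(REPLICAS))
```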
287
Framework pro sestavení a testování bezpečnostního síťového řešení / Framework for Building and Testing Network Security Solution. Suška, Jiří. January 2011.
This master's thesis deals with the automated building and testing of the network security solution AVG for Linux/FreeBSD on the GNU/Linux and FreeBSD platforms. The thesis introduces AVG for Linux/FreeBSD and its usage. Compilation and linking are discussed, from the source code to the distribution packages that users can install on their computers. The concept of a package repository is also introduced, including how repositories are created and used. The part on testing AVG for Linux/FreeBSD discusses proposals for automated testing suitable for this product and the implementation of the best of them. In the practical part, a testing tool was developed, and AVG for Linux/FreeBSD was tested using this tool and the implemented test set.
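A minimal sketch of the kind of automated build-and-test driver the thesis describes might look as follows: it builds the sources, installs the resulting package and runs a smoke-test suite, recording pass or fail per step. All commands, paths and package names are hypothetical placeholders, not the actual AVG for Linux/FreeBSD tooling.

```python
import subprocess

# Hypothetical pipeline: each step is an external command the framework drives.
STEPS = [
    ("build",   ["make", "-C", "src"]),
    ("package", ["make", "-C", "src", "package"]),
    ("install", ["sudo", "dpkg", "-i", "dist/avg-linux.deb"]),  # .rpm/.txz elsewhere
    ("smoke",   ["./tests/run_smoke_tests.sh"]),
]

def run_pipeline(steps):
    """Run each step in order, stop at the first failure, and return a report
    mapping step name to True/False so results can be collected automatically."""
    report = {}
    for name, cmd in steps:
        result = subprocess.run(cmd)
        report[name] = (result.returncode == 0)
        if result.returncode != 0:
            break
    return report

if __name__ == "__main__":
    for step, ok in run_pipeline(STEPS).items():
        print(f"{step}: {'PASS' if ok else 'FAIL'}")
```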
288
DVB-T - Tux sieht fern. Heik, Andreas. 19 November 2007.
The talk gives an overview of DVB-T. It covers transmission techniques, suitable hardware, and configuration, using Fedora Linux as an example. Selected applications round off the presentation.
289
Managing VMware Virtual Infrastructure Environments. Heik, Andreas. 27 January 2011.
The talk gives a technical view of the status, development, and implementation of the virtualization infrastructure at the university computing center of TU Chemnitz.
290
Chemnitzer Linux-Tage 2013: Tagungsband – 16. und 17. März 2013. Meier, Wilhelm; Berger, Uwe; Heik, Andreas; König, Harald; Kölbel, Cornelius; Loschwitz, Martin Gerhard; Wachtler, Axel; Findeisen, Ralph; Kubieziel, Jens; Seidel, Philipp; Luithardt, Wolfram; Gachet, Daniel; Schuler, Jean-Roland. 04 April 2013.
The Chemnitzer Linux-Tage is an event centered on the topic of open source. In 2013, 106 talks and workshops were held. This volume contains full papers for 11 of the main talks as well as summaries of 95 further talks.