301 |
Shorův algoritmus v kvantové kryptografii / Shor's algorithm in Quantum Cryptography
Nwaokocha, Martyns January 2021 (has links)
Cryptography is a very important aspect of our daily lives, as it provides the theoretical foundation of information security. Quantum computing and quantum information are also becoming a very important field of science, with many areas of application including cryptology and, more specifically, public-key cryptography. The difficulty of factoring integers into their prime factors underlies several important public-key cryptosystems, chief among them the RSA cryptosystem. Shor's quantum factoring algorithm exploits the quantum-interference effect of quantum computation to factor semiprime numbers in polynomial time on a quantum computer. Although the capacity of today's quantum computers to execute Shor's algorithm is very limited, there is a large body of fundamental research on techniques for optimizing the algorithm with respect to factors such as the number of qubits, the circuit depth and the gate count. In this thesis, various variants of Shor's factoring algorithm and its quantum circuits are discussed, analysed and compared. Some variants of Shor's algorithm are also simulated, and actually executed, on simulators and quantum computers of the IBM Quantum Experience platform. The simulation results are compared in terms of their complexity and success rate. The thesis is organized as follows: Chapter 1 discusses some key historical results of quantum cryptography, states the problem addressed in this thesis and presents the goals to be achieved. Chapter 2 summarizes the mathematical background of quantum computing and public-key cryptography and describes the notation used throughout the thesis. It also explains how a feasible order-finding or factoring algorithm can be used to break the RSA cryptosystem. Chapter 3 presents the building blocks of Shor's algorithm, including the quantum Fourier transform, quantum phase estimation, modular exponentiation and Shor's algorithm itself. Various circuit-optimization variants are also presented and compared here.
Chapter 4 presents the simulation results of the different versions of Shor's algorithm. Chapter 5 discusses the achievement of the thesis goals, summarizes the research results and outlines directions for future research.
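The classical skeleton of Shor's algorithm, reducing factoring to order finding and extracting a factor from an even order, can be sketched as follows. This is an illustrative sketch only: the order-finding subroutine, which is the quantum part of the algorithm, is replaced here by a classical brute-force stand-in, and the seed and test moduli are invented for the example.

```python
from math import gcd
import random

def order(a, n):
    # Classical stand-in for the quantum order-finding subroutine:
    # the smallest r > 0 with a^r = 1 (mod n). Requires gcd(a, n) == 1.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, seed=0):
    # Classical skeleton of Shor's algorithm for a semiprime n.
    rng = random.Random(seed)
    while True:
        a = rng.randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g              # lucky guess already shares a factor
        r = order(a, n)
        if r % 2:
            continue              # need an even order, retry
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue              # trivial square root of 1, retry
        return gcd(y - 1, n)      # y^2 = 1 (mod n), y != +-1 => factor

assert 15 % shor_factor(15) == 0
```

On a quantum computer, `order` is the step realized by phase estimation over the modular-exponentiation unitary followed by continued-fraction post-processing.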
|
302 |
Adaptivní regulátory s principy umělé inteligence a jejich porovnání s klasickými metodami identifikace. / Adaptive controllers with principles of artificial intelligence and their comparison with classical identification methods
Dokoupil, Jakub January 2009 (has links)
This work deals with the philosophy of designing an adaptive controller based on knowledge of a mathematical model of the controlled plant. The thesis focuses on closed-loop on-line parametric identification methods. The estimation of the model's parameters is approached through two main concepts: recursive least-squares algorithms and neural estimators. For the least-squares algorithms, strategies for preventing the typical problems, such as numerical instability, loss of accuracy and restricted forgetting, are presented. Back-propagation and the Levenberg-Marquardt algorithm were chosen to represent artificial intelligence. The methods based on least squares still retain a slight edge. A graphical interface in MATLAB/Simulink was created to compare the individual algorithms.
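One step of recursive least squares with exponential forgetting, the classical identification concept the thesis compares against, can be sketched as follows. This is the generic textbook formulation, not the thesis's implementation; the forgetting factor and the demo data are illustrative.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step with exponential forgetting.
    theta: parameter estimate (n,1), P: covariance (n,n),
    phi: regressor (n,), y: new measurement, lam: forgetting factor
    (lam = 1 disables forgetting)."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)   # gain vector
    e = y - (phi.T @ theta).item()          # one-step prediction error
    theta = theta + k * e
    P = (P - k @ phi.T @ P) / lam           # covariance update
    return theta, P

# identify the static model y = 2*u - 1 from exact samples
theta = np.zeros((2, 1))
P = 1000.0 * np.eye(2)                      # large P: uncertain prior
for u in [0.0, 1.0, 2.0, 3.0, 4.0]:
    phi = np.array([u, 1.0])
    theta, P = rls_update(theta, P, phi, 2.0 * u - 1.0)
```

The forgetting factor discounts old data geometrically, which is what lets the estimator track slowly varying plant parameters, at the cost of the covariance blow-up problems ("restricted forgetting" schemes address exactly this) when excitation is poor.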
|
303 |
Softwarová podpora výuky kryptosystémů založených na problému faktorizace velkých čísel / Software support of education in cryptography based on integer factorization
Vychodil, Petr January 2009 (has links)
This thesis deals with new teaching software that supports asymmetric encryption algorithms based on the problem of factoring large integers. A model program was created that allows the user to carry out encryption and decryption operations through an interactive interface, making the principle of the RSA encryption method easy to understand. Encryption algorithms in general are analysed in Chapters 1 and 2. Chapters 3 and 4 deal with the RSA encryption algorithm in much more detail and also describe the principles of generating, managing and using encryption keys. Chapter 5 covers the choice of appropriate technologies for the final software product, which presents the characteristics of the extended RSA encryption algorithm in a suitable way. The final software product is a Java applet, described in Chapters 6 and 7. It is divided into a theoretical and a practical part. The theoretical part presents general information about the RSA encryption algorithm. The practical part lets users try out tasks connected with the RSA algorithm on their own computers. The last part of the Java applet deals with the results of the users' work. The information the user obtains in the various sections of the program is sufficient to understand the principle of the algorithm's operation.
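The textbook RSA operations such a teaching program demonstrates can be sketched in a few lines. The primes and exponent below are tiny illustrative values, and there is no padding; real RSA uses large random primes and a padding scheme.

```python
def toy_rsa(p=61, q=53, e=17):
    # Textbook RSA key generation on toy primes, for illustration only.
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)       # private exponent: e*d = 1 (mod phi)
    return (n, e), (n, d)     # public key, private key

def encrypt(m, pub):
    n, e = pub
    return pow(m, e, n)       # c = m^e mod n

def decrypt(c, priv):
    n, d = priv
    return pow(c, d, n)       # m = c^d mod n

pub, priv = toy_rsa()
c = encrypt(42, pub)
assert decrypt(c, priv) == 42
```

The security rests entirely on the difficulty of recovering `p` and `q` from `n`, which is exactly the large-integer factorization problem the thesis title refers to.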
|
304 |
Moderní asymetrické kryptosystémy / Modern Asymmetric Cryptosystems
Walek, Vladislav January 2011 (has links)
Asymmetric cryptography uses two keys: a public key for encryption and a private key for decryption. Asymmetric cryptosystems include RSA, ElGamal, elliptic curves and others. Generally, asymmetric cryptography is mainly used to secure short messages and to transmit the encryption keys for symmetric cryptography. The thesis deals with these systems and implements selected ones (RSA, ElGamal, McEliece, elliptic curves and NTRU) in an application that can test the features of the chosen cryptosystems. The systems and their performance are compared and evaluated against the measured values. These results can help predict the future usage of these systems in modern information systems.
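As a companion to the RSA sketch in the previous entry, a toy ElGamal round trip over a small prime field can illustrate the second listed cryptosystem. All parameters here are illustrative assumptions; real deployments use groups of thousands of bits or elliptic-curve groups.

```python
import random

def elgamal_keygen(p=467, g=2, seed=1):
    # Toy ElGamal over Z_p* with a small prime; illustration only.
    x = random.Random(seed).randrange(2, p - 1)   # private key
    return (p, g, pow(g, x, p)), x                # public (p, g, h=g^x)

def elgamal_encrypt(m, pub, seed=2):
    p, g, h = pub
    k = random.Random(seed).randrange(2, p - 1)   # ephemeral key
    return pow(g, k, p), (m * pow(h, k, p)) % p   # (c1, c2)

def elgamal_decrypt(c1, c2, x, p):
    s = pow(c1, x, p)                  # shared secret g^(x*k)
    return (c2 * pow(s, p - 2, p)) % p # divide by s (Fermat inverse)

pub, x = elgamal_keygen()
c1, c2 = elgamal_encrypt(123, pub)
assert elgamal_decrypt(c1, c2, x, pub[0]) == 123
```

Unlike RSA, ElGamal rests on the discrete-logarithm problem rather than factoring, which is why the thesis can compare the two families side by side.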
|
305 |
Faktorizacija polinoma dve promenljive sa celobrojnim koeficijentima pomoću Newton-ovog poligona i primena u dekodiranju nekih klasa Reed – Solomon kodova / Factoring bivariate polynomials with integer coefficients via Newton polygon and its application in decoding of some classes of Reed – Solomon codes
Pavkov, Ivan 29 September 2017 (has links)
The research subject of this thesis is the factorization of bivariate polynomials with integer coefficients via their associated Newton polygons. A formalization of the necessary and sufficient condition for the existence of a non-trivial factorization of an arbitrary bivariate polynomial with integer coefficients provides the theoretical basis for the construction of an effective factorization algorithm. Finally, these theoretical results are applied to decoding a special class of Reed – Solomon codewords: mixtures of two codewords.
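The Newton polygon itself is the convex hull of the exponent vectors of the polynomial's terms, so it can be computed with a standard convex-hull routine. The sketch below shows only this first step, not the thesis's factorization criterion; the example polynomial is invented for illustration.

```python
def newton_polygon(poly):
    """Vertices of the Newton polygon (convex hull of the exponent
    vectors) of a bivariate polynomial given as {(i, j): coefficient}."""
    pts = sorted({ij for ij, c in poly.items() if c != 0})
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o): sign gives the turn direction
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def half(points):                      # Andrew's monotone chain
        hull = []
        for p in points:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]         # closed hull, no duplicates

# Newton polygon of f(x, y) = x^3 + x*y + y^2 + 1
f = {(3, 0): 1, (1, 1): 1, (0, 2): 1, (0, 0): 1}
print(newton_polygon(f))   # the (1, 1) point lies strictly inside
```

The factorization criterion then asks whether this polygon decomposes as a Minkowski sum of smaller lattice polygons, since the Newton polygon of a product of polynomials is the Minkowski sum of the factors' polygons.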
|
306 |
Parcimonie, diversité morphologique et séparation robuste de sources / Sparse modeling, morphological diversity and robust source separation
Chenot, Cécile 29 September 2017 (has links)
This manuscript addresses the Blind Source Separation (BSS) problem in the presence of outliers. Most BSS techniques are hampered by structured deviations from the standard linear mixing model, such as unexpected physical events or sensor malfunctions. We propose a new data model that takes these deviations explicitly into account. The resulting joint estimation of the components is an ill-posed problem, which is tackled using sparse modeling. Sparsity is particularly effective for robust BSS since it allows a robust unmixing of the sources jointly with a precise separation of the components. This work is then extended to the estimation of spectral variability in the framework of terrestrial hyperspectral imaging. Numerical experiments highlight the robustness and reliability of the proposed algorithms over a wide range of settings, including the full-rank regime.
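The core sparsity idea, separating a sparse outlier term from the mixture by soft-thresholding the residual, can be sketched with a minimal alternating scheme. This sketch assumes a known mixing matrix (the blind case estimated in the thesis is much harder), and the threshold and synthetic data are illustrative.

```python
import numpy as np

def soft(x, t):
    # soft-thresholding: the proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def robust_unmix(X, A, lam=1.0, n_iter=100):
    """Alternately estimate sources S and sparse outliers O in the
    model X = A S + O: least squares for S, soft-thresholding of the
    residual for O (a sketch of the idea, not the thesis algorithm)."""
    O = np.zeros_like(X)
    pinvA = np.linalg.pinv(A)
    for _ in range(n_iter):
        S = pinvA @ (X - O)         # sources given current outliers
        O = soft(X - A @ S, lam)    # sparse outliers from the residual
    return S, O

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 2))         # 8 observations of 2 sources
S_true = rng.normal(size=(2, 50))
X = A @ S_true
X[0, 5] += 10.0                     # one large outlier corrupts a sample
S, O = robust_unmix(X, A)
```

Because the outlier is sparse while the mixture is low-dimensional, the l1 penalty assigns the gross deviation to `O` and leaves the sources in `S`, which is the separation-by-morphology principle the abstract describes.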
|
307 |
Switching hybrid recommender system to aid the knowledge seekers
Backlund, Alexander January 2020 (has links)
In our daily life, time is of the essence. People do not have time to browse through hundreds of thousands of digital items every day to find the right one, and this is where a recommender system shines. Tigerhall is a company that distributes podcasts, e-books and events to subscribers. It is expanding its digital content warehouse, which leaves more data for the users to filter. To make it easier for users to find the right podcast or the most exciting e-book or event, a recommender system has been implemented. A recommender system can be implemented in many different ways. Content-based filtering methods focus on information about the items and try to find relevant items based on it. Collaborative filtering methods instead use information about what the consumer has previously consumed, in correlation with what other users have consumed, to find relevant items. In this project, a hybrid recommender system that uses a k-nearest neighbors algorithm alongside a matrix factorization algorithm has been implemented. The k-nearest neighbors algorithm performed well despite the sparse data, while the matrix factorization algorithm performed worse; it did well only for users who had consumed plenty of items.
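The switching logic suggested by those results, kNN for users with few interactions and matrix factorization otherwise, might look like the sketch below. The switching threshold and the toy rating data are assumptions for illustration, not values from the thesis.

```python
import numpy as np

def predict_knn(R, u, i, k=2):
    # user-based kNN: cosine-weighted mean of the k most similar
    # users who have interacted with item i
    sims = []
    for v in range(R.shape[0]):
        if v != u and R[v, i] > 0:
            den = np.linalg.norm(R[u]) * np.linalg.norm(R[v]) or 1.0
            sims.append((R[u] @ R[v] / den, R[v, i]))
    top = sorted(sims, reverse=True)[:k]
    return sum(s * r for s, r in top) / (sum(s for s, r in top) or 1.0)

def predict_hybrid(R, P, Q, u, i, min_ratings=3):
    """Switching hybrid: fall back to kNN when user u has fewer than
    min_ratings interactions, otherwise use the factorization R ~ P Q^T."""
    if np.count_nonzero(R[u]) < min_ratings:
        return predict_knn(R, u, i)
    return float(P[u] @ Q[i])

R = np.array([[5.0, 4, 0, 1],    # toy user-item interaction matrix
              [4.0, 5, 1, 0],
              [5.0, 0, 0, 0]])   # user 2 is nearly cold-start
P = np.array([[1.0, 0], [0, 1], [1, 1]])   # toy user factors
Q = np.array([[2.0, 1], [1, 2], [3, 0], [0, 3]])  # toy item factors
```

The switch addresses exactly the trade-off the abstract reports: factorization needs enough per-user data, while neighborhood methods degrade more gracefully under sparsity.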
|
308 |
Contextualizing music recommendations : A collaborative filtering approach using matrix factorization and implicit ratings / Kontextualisering av musikrekommendationer
Häger, Alexander January 2020 (has links)
Recommender systems are helpful tools employed abundantly in online applications to help users find what they want. This thesis re-purposes a collaborative filtering recommender built for incorporating social media (hash)tags to be used as a context-aware recommender, using time of day and activity as contextual factors. The recommender uses a matrix factorization approach for implicit feedback, in a music streaming setting. Contextual data is collected from users' mobile phones while they are listening to music. It is shown in an offline test that this approach improves recall when compared to a recommender that does not account for the context the user was in. Future work should explore the qualities of this model further, as well as investigate how this model's recommendations can be surfaced in an application.
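The recall metric used in such offline tests can be sketched in a few lines; the item ids in the example are invented for illustration.

```python
def recall_at_k(recommended, relevant, k=10):
    """Fraction of a user's held-out relevant items that appear among
    the top-k recommendations; averaged over users in an offline test."""
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / len(relevant) if relevant else 0.0

# recommended ranking vs. held-out listening history for one user
assert recall_at_k([3, 1, 7, 9], [1, 9, 2], k=3) == 1 / 3
assert recall_at_k([5, 6], [5]) == 1.0
```

Comparing this score between the context-aware and context-free recommenders on the same held-out plays is the offline comparison the abstract reports.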
|
309 |
Získávání skrytých znalostí z online dat souvisejících s vysokými školami / Mining hidden knowledge from online data related to universities
Hlaváč, Jakub January 2019 (has links)
Social networks are a popular form of communication. They are also used by universities to simplify the provision of information and the addressing of candidates for study. Study stays abroad are likewise a popular form of education, but students encounter a number of obstacles there. The results of this work can help universities make their social network communication more efficient and better support foreign studies. In this work, Facebook data related to Czech universities and questionnaire data from the Erasmus programme were analysed in order to find useful knowledge. The main emphasis was on the textual content of the communication. Statistical and machine learning methods were used, mostly feature selection, topic modeling and clustering. The results reveal interesting and popular topics discussed on the social networks of Czech universities. The main problems of students related to their foreign studies were identified as well, and some of them were compared across countries and universities.
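Text-mining pipelines of this kind typically rest on a term weighting such as TF-IDF before feature selection or topic modeling. A minimal sketch, with tokenized example documents invented for illustration:

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF vectors for a list of tokenized documents: term frequency
    within the document times log inverse document frequency."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * math.log(n / df[t])
                    for t, c in tf.items()})
    return out

docs = [["erasmus", "study", "abroad"],
        ["study", "exam"],
        ["erasmus", "grant"]]
vecs = tfidf(docs)
# "exam" occurs in one document only, so it outweighs the common "study"
assert vecs[1]["exam"] > vecs[1]["study"]
```

Such vectors feed directly into the clustering and topic-modeling steps mentioned in the abstract, where rare but distinctive terms should dominate over terms that appear everywhere.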
|
310 |
Efficient LU Factorization for Texas Instruments Keystone Architecture Digital Signal Processors / Effektiv LU-faktorisering för Texas Instruments digitala signalprocessorer med Keystone-arkitektur
Netzer, Gilbert January 2015 (has links)
The energy consumption of large-scale high-performance computing (HPC) systems has become one of the foremost concerns of both data-center operators and computer manufacturers. This has renewed interest in alternative computer architectures that could offer substantially better energy efficiency. Yet the well-optimized implementations of typical HPC benchmarks that are necessary for evaluating the potential of these architectures are often not available for architectures that are novel to the HPC industry. The LU factorization benchmark implementation presented in this work aims to provide such a high-quality tool for the HPC industry-standard high-performance LINPACK benchmark (HPL) on the eight-core Texas Instruments TMS320C6678 digital signal processor (DSP). The presented implementation performs the LU factorization at up to 30.9 GF/s at a 1.25 GHz core clock frequency using all eight DSP cores of the system-on-chip (SoC). This is 77% of the attainable peak double-precision floating-point performance of the DSP, a level of efficiency comparable to that expected of traditional x86-based processor architectures. A detailed performance analysis shows that this is largely due to the optimized implementation of the embedded generalized matrix-matrix multiplication (GEMM). For this operation, the on-chip direct memory access (DMA) engines were used to transfer the necessary data from the external DDR3 memory to the core-private and shared scratchpad memories, which allowed the data transfers to overlap with computations on the DSP cores. The computations were in turn optimized using software-pipelining techniques and were partly implemented in assembly language. With these optimizations, the performance of the matrix multiplication reached up to 95% of the attainable peak performance. A detailed description of these two key optimization techniques and their application to the LU factorization is included.
Using a specially instrumented Advantech TMDXEVM6678L evaluation module, described in detail in related work, the SoC's energy efficiency was measured at up to 2.92 GF/J while executing the presented benchmark. Results from the verification of the benchmark execution using standard HPL correctness checks, and an uncertainty analysis of the experimentally gathered data, are also presented.
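The blocked, right-looking LU factorization whose trailing-submatrix update is the GEMM hot spot can be sketched in NumPy. This is a numerical sketch of the algorithm's structure only: it omits the partial pivoting that HPL performs and, of course, all of the DSP-specific DMA and software-pipelining optimizations.

```python
import numpy as np

def blocked_lu(A, nb=2):
    """Right-looking blocked LU factorization without pivoting.
    Returns L (unit lower) and U packed into one matrix; the step-3
    rank-nb update is the GEMM where most of the time is spent."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # 1. unblocked LU of the panel A[k:, k:e]
        for j in range(k, e):
            A[j+1:, j] /= A[j, j]
            A[j+1:, j+1:e] -= np.outer(A[j+1:, j], A[j, j+1:e])
        # 2. triangular solve: U12 = L11^{-1} A12
        L11 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
        A[k:e, e:] = np.linalg.solve(L11, A[k:e, e:])
        # 3. GEMM update of the trailing submatrix (the hot spot)
        A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
    return A

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5)) + 5.0 * np.eye(5)  # keep pivots well away from 0
LU = blocked_lu(M)
```

Grouping the updates into rank-`nb` GEMMs is what makes it possible to stage panels in fast scratchpad memory and overlap DMA transfers with computation, as the abstract describes.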
|