321

Ranking of Android Apps based on Security Evidences

Ayush Maharjan (9728690) 07 January 2021 (has links)
With the large number of Android apps available in app stores such as Google Play, it has become increasingly challenging to choose among them. Users generally select apps based on the ratings and reviews of other users, or on recommendations from the app store. But given the growing security and privacy concerns with mobile apps, it is important to take security into consideration when choosing an app. This thesis proposes different ranking schemes for Android apps based on security evidences obtained from available static code analysis tools. It proposes ranking schemes based on the categories of evidences reported by the tools, on the frequency of each category, and on the severity of each evidence. The evidences are gathered, and rankings are generated, using the theory of Subjective Logic. In addition to these ranking schemes, the tools themselves are evaluated against the Ghera benchmark. Finally, this work proposes two additional schemes that combine the evidences from different tools to provide a combined ranking.
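The thesis itself does not include code here. As a rough illustration of how evidence counts can be mapped to a Subjective Logic binomial opinion and an expected score for ranking, a minimal sketch (the app names, evidence counts, and the mapping of tool findings to positive/negative observations are all hypothetical):

```python
# Sketch: map positive/negative evidence counts from a static-analysis tool to a
# Subjective Logic binomial opinion (belief, disbelief, uncertainty) and rank
# apps by the opinion's expected probability. Counts below are made up.

def opinion_from_evidence(r, s, W=2.0, base_rate=0.5):
    """Map r positive and s negative observations to a binomial opinion."""
    belief = r / (r + s + W)
    disbelief = s / (r + s + W)
    uncertainty = W / (r + s + W)       # shrinks as evidence accumulates
    expected = belief + base_rate * uncertainty
    return belief, disbelief, uncertainty, expected

# Hypothetical apps: (benign findings, security findings) from a tool report
apps = {"app_a": (10, 2), "app_b": (3, 6), "app_c": (0, 0)}
ranking = sorted(apps, key=lambda a: opinion_from_evidence(*apps[a])[3], reverse=True)
print(ranking)  # → ['app_a', 'app_c', 'app_b']
```

Note how an app with no evidence at all (`app_c`) lands between the well-attested good and bad apps: with maximal uncertainty its expected score falls back to the base rate.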
322

Řešení pro clusterování serverů / Server clustering techniques

Čech, Martin January 2009 (has links)
This work presents an analysis of Open Source Software (OSS) that makes it possible to build and operate computer clusters. It explores the issues of clustering and cluster construction. All installation, configuration and cluster management were done on the GNU/Linux operating system. The OSS presented makes it possible to assemble a storage cluster, a load-balancing cluster, a high-availability cluster and a computing cluster. Different types of benchmarks were analysed theoretically and used in practice to measure cluster performance. The results were compared with others, e.g. the TOP500 list of the best clusters, available online. The practical part of the work deals with comparing the performance of computing clusters. A cluster with several tens of computational nodes was established, on which the OpenMPI package was installed to allow parallelization of calculations. Subsequently, tests were performed with High Performance Linpack, which derives total performance from the solution of a system of linear equations. The influence of parallelization on the PEA algorithm was also tested. To demonstrate practical usability, the cluster was tested with the program John the Ripper, which is used to crack user passwords. The work includes a number of graphs clarifying the functioning of the cluster and, above all, showing the achieved results.
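The kind of speedup measurement performed on the cluster can be sketched on a single machine. The following stand-in uses Python's `multiprocessing` instead of OpenMPI, and the workload and sizes are illustrative, not taken from the thesis:

```python
# Sketch: measure serial vs. parallel execution time of a divisible workload,
# analogous (on one machine) to the cluster speedup measurements described above.
import multiprocessing as mp
import time

def partial_sum(bounds):
    """Sum of squares over a half-open range; one worker's share of the job."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def run(n, workers):
    chunks = [(i * n // workers, (i + 1) * n // workers) for i in range(workers)]
    t0 = time.perf_counter()
    if workers == 1:
        total = partial_sum((0, n))
    else:
        with mp.Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))
    return total, time.perf_counter() - t0

if __name__ == "__main__":
    n = 2_000_000
    serial, t1 = run(n, 1)
    parallel, t4 = run(n, 4)
    assert serial == parallel  # same result, computed in parallel
    print(f"serial {t1:.3f}s, 4 workers {t4:.3f}s")
```

As with HPL on a real cluster, the observed speedup depends on how much of the runtime is parallelizable versus spent on coordination overhead.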
323

Zkoumání souvislostí mezi pokrytím poruch a testovatelností elektronických systémů / Investigating of Relations between Fault-Coverage and Testability of Electronic Systems

Rumplík, Michal January 2010 (has links)
This work deals with testability analysis of digital circuits and fault coverage. It contains a description of digital systems and their diagnosis, a description of tools for generating and applying tests, and sets of benchmark circuits. It describes the testing of circuits and experiments with the TASTE tool for testability analysis and with a commercial tool for generating and applying tests. The experiments are focused on increasing the testability of circuits.
324

Modélisation électromagnétique appliquée à la détermination des harmoniques de forces radiale et tangentielle dans les machines électriques en exploitant l’approche des sous-domaines / Electromagnetic subdomain modeling technique for the fast prediction of radial and circumferential stress harmonics in electrical machines

Devillers, Emile 13 December 2018 (has links)
The presence of magnetic stress harmonics inside electrical machines is generally responsible for vibrations and acoustic noise. This phenomenon is called e-NVH (Noise, Vibration and Harshness due to electromagnetic excitations) and has to be considered in the machine design to meet NVH standard requirements, especially in automotive applications. The e-NVH assessment requires a multiphysics simulation including electromagnetic, mechanical and acoustic models, which must be fast and accurate, especially for early design stages. This industrial PhD thesis is part of the internal research program of EOMYS ENGINEERING, which develops and commercializes the MANATEE software dedicated to the e-NVH simulation of electrical machines. In this modeling context, the present thesis investigates and extends the semi-analytical electromagnetic model called the Subdomain Method (SDM) for the computation of two-dimensional airgap magnetic stress harmonics in various topologies of electrical machines, mainly focusing on Surface Permanent Magnet Synchronous Machines (SPMSMs) and Squirrel Cage Induction Machines (SCIMs). The thesis also investigates two open scientific questions concerning the contribution of circumferential excitations to the overall vibration level and the slotting modulation effect, which appears in electrical machines with a close number of poles and teeth. For this purpose, an experimental test rig including a particular noisy machine (an SPMSM with 12 slots and 10 poles) and appropriate sensors has been designed and built. The test rig also aims at benchmarking the different multiphysics models currently used in e-NVH simulation workflows.
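The radial and circumferential airgap stress harmonics discussed above are conventionally obtained from the airgap flux density components via the Maxwell stress tensor. In 2D polar coordinates this is the standard result (not reproduced from the thesis itself):

```latex
\sigma_r(\alpha, t) = \frac{B_r^2(\alpha, t) - B_\theta^2(\alpha, t)}{2\mu_0},
\qquad
\sigma_\theta(\alpha, t) = \frac{B_r(\alpha, t)\, B_\theta(\alpha, t)}{\mu_0}
```

where $B_r$ and $B_\theta$ are the radial and circumferential flux density components in the airgap, $\alpha$ is the angular position, and $\mu_0$ is the vacuum permeability. A 2D Fourier transform of $\sigma_r$ and $\sigma_\theta$ over $(\alpha, t)$ then yields the stress harmonics that excite the stator structure.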
325

A COMPARISON OF DATA INGESTION PLATFORMS IN REAL-TIME STREAM PROCESSING PIPELINES

Tallberg, Sebastian January 2020 (has links)
In recent years there has been an increasing demand for real-time streaming applications that handle large volumes of data with low latency. Examples of such applications include real-time monitoring and analytics, electronic trading, advertising, fraud detection, and more. In a streaming pipeline the first step is ingesting the incoming data events, after which they can be sent off for processing. Choosing the correct tool that satisfies application requirements is an important technical decision that must be made. This thesis focuses entirely on the data ingestion part by evaluating three different platforms: Apache Kafka, Apache Pulsar and Redis Streams. The platforms are compared both on characteristics and performance. Architectural and design differences reveal that Kafka and Pulsar are more suited for use cases involving long-term persistent storage of events, whereas Redis is a potential solution when only short-term persistence is required. They all provide means for scalability and fault tolerance, ensuring high availability and reliable service. Two metrics, throughput and latency, were used in evaluating performance in a single-node cluster. Kafka proves to be the most consistent in throughput but performs the worst in latency. Pulsar manages high throughput with low message sizes but struggles with larger message sizes. Pulsar performs the best in overall average latency across all message sizes tested, followed by Redis. The tests also show Redis being the most inconsistent in terms of throughput potential between different message sizes.
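The throughput/latency methodology described above can be sketched without any broker at all. The following harness uses an in-memory deque as a stand-in for Kafka/Pulsar/Redis (the clients are not included, and the numbers it produces do not reflect the thesis results):

```python
# Sketch: produce/consume events through a stand-in "broker" and report
# throughput plus p50/p99 latency, mirroring the two metrics used in the thesis.
import statistics
import time
from collections import deque

def benchmark(n_events, payload_size):
    broker = deque()                 # stand-in for the real ingestion platform
    latencies = []
    payload = b"x" * payload_size
    t0 = time.perf_counter()
    for _ in range(n_events):
        sent = time.perf_counter_ns()
        broker.append((sent, payload))    # "produce"
        recv, _ = broker.popleft()        # "consume"
        latencies.append(time.perf_counter_ns() - recv)
    elapsed = time.perf_counter() - t0
    return {
        "throughput_msgs_per_s": n_events / elapsed,
        "p50_latency_ns": statistics.median(latencies),
        "p99_latency_ns": statistics.quantiles(latencies, n=100)[98],
    }

stats = benchmark(10_000, 512)
print(stats)
```

Against a real platform, the produce/consume calls would be replaced by the respective client API (e.g. a Kafka producer and consumer), and message size would be varied as in the thesis experiments.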
326

Instruction Timing Analysis for Linux/x86-based Embedded and Desktop Systems

John, Tobias 19 October 2005 (has links)
Real-time aspects are becoming more important in standard desktop PC environments, and x86-based processors are being used in embedded systems more often. While these processors were not created for use in hard real-time systems, they are fast and inexpensive and can be used if it is possible to determine the worst-case execution time. Information on CPU caches (L1, L2) and the branch prediction architecture is necessary to simulate best and worst cases in execution timing, but is often not detailed enough and sometimes not published at all. This document describes how the underlying hardware can be analysed to obtain this information.
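A crude way to observe the best/worst-case spread the text refers to is simply to time a workload many times. This sketch samples from Python rather than at the instruction level the thesis works at, and the workload and sizes are illustrative:

```python
# Sketch: repeatedly time a cache-touching workload and record the observed
# best and worst execution times (hypothetical workload, not from the thesis).
import time

def measure(fn, runs=1000):
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter_ns()
        fn()
        samples.append(time.perf_counter_ns() - t0)
    return min(samples), max(samples)  # observed best / worst case

def workload():
    buf = bytearray(4096)
    for i in range(0, len(buf), 64):   # roughly one access per 64-byte cache line
        buf[i] = 1

best, worst = measure(workload)
print(f"observed best {best} ns, worst {worst} ns")
```

Note that an observed maximum is only a lower bound on the true WCET; a safe bound requires analysing caches and branch prediction statically, which is exactly why the hardware details discussed in this document matter.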
327

Validierung des gekoppelten neutronenkinetischen-thermohydraulischen Codes ATHLET/DYN3D mit Hilfe von Messdaten des OECD Turbine Trip Benchmarks / Validation of the coupled neutron-kinetics/thermal-hydraulics code ATHLET/DYN3D against measurement data from the OECD Turbine Trip Benchmark

Kliem, Sören, Grundmann, Ulrich January 2003 (has links)
The project consisted in the validation of the coupled neutron-kinetics/thermal-hydraulics code complex ATHLET/DYN3D for boiling water reactors through participation in the OECD/NRC Turbine Trip Benchmark. The benchmark, defined by the OECD and the US NRC, is based on an experiment involving the closure of the turbine stop valve, performed in 1977 as part of a series of three experiments at the Peach Bottom 2 nuclear power plant. In the experiment, the closure of the valve generated a pressure wave that propagated, with attenuation, into the reactor core. The condensation of steam in the core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise in reactor power was limited by feedback effects and the insertion of the control rods. Within the benchmark, the computer codes could be validated by comparison with the measured results and with the results of the other benchmark participants. The benchmark was divided into three phases, or exercises. Phase I served to verify the thermal-hydraulic system model with a prescribed power release in the core. In Phase II, three-dimensional calculations of the reactor core were performed for prescribed thermal-hydraulic boundary conditions. The coupled calculations for the selected experiment and for four extreme scenarios were carried out in Phase III. Within the project, FZR participated in Phases II and III of the benchmark. The Phase II calculations were performed with the core model DYN3D, taking the heterogeneity factors into account and using 764 thermal-hydraulic channels (one channel per fuel assembly). The ATHLET input data set for the reactor plant was taken over from the Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) and slightly modified for the Phase III calculations, which were performed with the parallel coupling of ATHLET and DYN3D.
For spatially averaged parameters, good agreement with the measured results and the results of other codes was achieved. The influence of model differences was investigated by means of variant calculations for Phase II. Differences in the power and void distributions in individual fuel assemblies could thus be attributed to the different neutron-kinetic and thermal-hydraulic modeling of the reactor core. Comparisons between ATHLET/DYN3D (parallel coupling) and ATHLET/QUABOX-CUBBOX (internal coupling) show only small differences for spatially averaged parameters. Deviations in the local parameters can essentially be explained by the different modeling of the reactor core (a smaller number of modeled coolant channels, no consideration of the heterogeneity factors, and a different boiling model in the ATHLET/QUABOX-CUBBOX calculation). The calculations for the extreme scenarios of Phase III demonstrate the applicability of the coupled code ATHLET/DYN3D to accident conditions that go far beyond the experiment.
328

Qualifizierung des Kernmodells DYN3D im Komplex mit dem Störfallcode ATHLET als fortgeschrittenes Werkzeug für die Störfallanalyse von WWER-Reaktoren - Teil 2 / Qualification of the core model DYN3D coupled with the accident-analysis code ATHLET as an advanced tool for accident analysis of VVER reactors - Part 2

Kliem, S., Grundmann, U., Rohde, U. January 2002 (has links)
Benchmark calculations for the validation of the coupled neutron-kinetics/thermal-hydraulics code complex DYN3D-ATHLET are described. Two benchmark problems concerning hypothetical accident scenarios with leaks in the steam system have been solved, one for a VVER-440 type reactor and one for the TMI-1 PWR. The first benchmark task was defined by FZR within the framework of the international association "Atomic Energy Research" (AER); the second exercise was organised under the auspices of the OECD. While in the first benchmark the break of the main steam collector in the sub-critical hot zero power state of the reactor was considered, the break of one of the two main steam lines at full reactor power was assumed in the OECD benchmark. Therefore, in this exercise the mixing of the coolant from the intact and the defect loops had to be considered, while in the AER benchmark the steam collector break causes a homogeneous overcooling of the primary circuit. In the AER benchmark, each participant had to use their own macroscopic cross-section libraries; in the OECD benchmark, the cross sections were given in the benchmark definition. The main task of both benchmark problems was to analyse the re-criticality of the scrammed reactor due to the overcooling. For both benchmark problems, good agreement of the DYN3D-ATHLET solution with the results of other codes was achieved. Differences in the time of re-criticality and the magnitude of the power peak between the various solutions of the AER benchmark can be explained by the use of different cross-section data. Significant differences in the thermohydraulic parameters (coolant temperature, pressure) occurred only at the late stage of the transient, during the emergency injection of highly borated water. In the OECD benchmark, a broader scattering of the thermohydraulic results can be observed, while good agreement between the various 3D reactor core calculations with given thermohydraulic boundary conditions was achieved.
The differences in the thermohydraulic results were attributed to the difficulty of modelling the vertical once-through steam generator with steam superheating. Sensitivity analyses considering the influence of the nodalisation and the impact of the coolant mixing model were performed for the DYN3D-ATHLET solution of the OECD benchmark. The solution of the benchmarks contributed essentially to the qualification of the code complex DYN3D-ATHLET as an advanced tool for the accident analysis of both VVER-type reactors and Western PWRs.
329

Ekonometrické modelovanie výkonu fondov / Econometric modelling of fund performance

Tuchyňová, Barbora January 2019 (has links)
In this diploma thesis we gather information on European mutual funds and ETFs that would help inform the decisions of an investment manager. We created OLS models for three types of mutual funds - money market, bond and equity - to demonstrate a relationship between funds' volatility and their annualised return. We then utilised VAR models to test Granger causality between an ETF and its tracking index using their net asset values.
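The volatility/return regression at the heart of the OLS part can be illustrated with a minimal fit. The data below is made up for illustration; the thesis estimates such models on real European fund data:

```python
# Sketch: ordinary least squares fit of annualised return on volatility,
# y = a + b*x, via the normal equations (hypothetical fund data).
def ols_fit(x, y):
    """Return intercept a and slope b of the least-squares line y = a + b*x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    a = mean_y - b * mean_x
    return a, b

volatility = [0.02, 0.05, 0.08, 0.12, 0.18]   # hypothetical fund volatilities
ann_return = [0.01, 0.03, 0.04, 0.07, 0.10]   # hypothetical annualised returns
a, b = ols_fit(volatility, ann_return)
print(f"intercept {a:.4f}, slope {b:.4f}")
```

A positive estimated slope would correspond to the familiar risk/return trade-off; the thesis additionally reports significance tests, which this sketch omits.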
330

The Performance of Post-Quantum Key Encapsulation Mechanisms : A Study on Consumer, Cloud and Mainframe Hardware

Gustafsson, Alex, Stensson, Carl January 2021 (has links)
Background. People use the Internet for communication, work, online banking and more. Public-key cryptography enables this use to be secure by providing confidentiality and trust online. Though these algorithms may be secure from attacks from classical computers, future quantum computers may break them using Shor’s algorithm. Post-quantum algorithms are therefore being developed to mitigate this issue. The National Institute of Standards and Technology (NIST) has started a standardization process for these algorithms. Objectives. In this work, we analyze what specialized features applicable for post-quantum algorithms are available in the mainframe architecture IBM Z. Furthermore, we study the performance of these algorithms on various hardware in order to understand what techniques may increase their performance. Methods. We apply a literature study to identify the performance characteristics of post-quantum algorithms as well as what features of IBM Z may accommodate and accelerate these. We further apply an experimental study to analyze the practical performance of the two prominent finalists NTRU and Classic McEliece on consumer, cloud and mainframe hardware. Results. IBM Z was found to be able to accelerate several key symmetric primitives such as SHA-3 and AES via the Central Processor Assist for Cryptographic Functions (CPACF). Though the available Hardware Security Modules (HSMs) did not support any of the studied algorithms, they were found to be able to accelerate them via a Field-Programmable Gate Array (FPGA). Based on our experimental study, we found that computers with support for the Advanced Vector Extensions (AVX) were able to significantly accelerate the execution of post-quantum algorithms. Lastly, we identified that vector extensions, Application-Specific Integrated Circuits (ASICs) and FPGAs are key techniques for accelerating these algorithms. Conclusions. 
When considering the readiness of hardware for the transition to post-quantum algorithms, we find that the proposed algorithms do not perform nearly as well as classical algorithms. Though the algorithms are likely to improve before the post-quantum transition occurs, improved hardware support via faster vector instructions, increased cache sizes and the addition of polynomial instructions may significantly help reduce the impact of the transition.
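The symmetric primitives that CPACF accelerates in hardware, such as SHA-3, can also be timed in software. A small throughput harness of the kind used in such comparisons (this harness and its sizes are illustrative only, not the thesis benchmark code):

```python
# Sketch: measure software SHA3-256 throughput by hashing a fixed number of
# bytes in chunks; hardware acceleration (e.g. CPACF) would raise this figure.
import hashlib
import time

def sha3_throughput(total_bytes=16 * 2**20, chunk=64 * 1024):
    data = b"\0" * chunk
    h = hashlib.sha3_256()
    t0 = time.perf_counter()
    for _ in range(total_bytes // chunk):
        h.update(data)
    elapsed = time.perf_counter() - t0
    return total_bytes / elapsed / 2**20  # MiB/s

print(f"software SHA3-256: {sha3_throughput():.1f} MiB/s")
```

The same pattern, applied to a post-quantum KEM's keygen/encapsulate/decapsulate operations, is how per-operation performance is typically compared across the consumer, cloud and mainframe environments studied here.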
