151

Metriky pro detekci útoků v síťovém provozu / Metrics for Intrusion Detection in Network Traffic

Homoliak, Ivan January 2012
This publication aims to propose and apply new metrics for intrusion detection in network traffic, based on an analysis of existing metrics, of network traffic, and of the behavioral characteristics of known attacks. The main goal of the thesis is to propose and implement a new collection of metrics capable of detecting zero-day attacks.
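The abstract does not name the proposed metrics themselves; as a purely illustrative sketch, behavioral metrics of the kind described are often derived from per-host flow statistics such as connection count and destination-port entropy (the feature names below are assumptions, not the thesis's actual metric set):

```python
import math
from collections import Counter

def behavioral_metrics(flows):
    """Compute simple per-host metrics from (src, dst_port) flow tuples.

    Illustrative only: connection count and destination-port entropy per source host.
    """
    per_host = {}
    for src, dst_port in flows:
        per_host.setdefault(src, []).append(dst_port)

    metrics = {}
    for src, ports in per_host.items():
        counts = Counter(ports)
        total = len(ports)
        # Shannon entropy of destination ports: high values may indicate scanning.
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
        metrics[src] = {"conn_count": total, "dst_port_entropy": entropy}
    return metrics

if __name__ == "__main__":
    sample = [("10.0.0.5", p) for p in (22, 80, 443, 8080, 21, 23, 25, 53)]
    sample += [("10.0.0.9", 443)] * 5
    print(behavioral_metrics(sample))
```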
152

Ochrana proti distribuovaným útokům hrubou silou / Distributed Brute Force Attacks Protection

Richter, Jan January 2010
This project deals with the analysis of brute force attacks focused on breaking the authentication of common services (especially ssh) on Linux and xBSD operating systems. It also examines real attacks, current tools, and ways of detecting these attacks. Finally, new mechanisms for the coordination and evaluation of distributed brute force attacks in a distributed environment are designed. These mechanisms are then implemented in a distributed system called DBFAP.
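A minimal sketch of the kind of detection this work builds on (the log format and threshold are assumptions, and DBFAP's actual coordination protocol is not described in the abstract): failed SSH logins are counted per source IP and per-node reports are merged, since a distributed attack may stay under the limit on each node but not globally.

```python
import re
from collections import Counter

FAILED_RE = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def count_failures(log_lines):
    """Count failed SSH login attempts per source IP in syslog-style lines."""
    counts = Counter()
    for line in log_lines:
        m = FAILED_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

def merge_reports(reports, threshold=20):
    """Merge per-node counters; flag IPs whose global total exceeds the threshold."""
    total = Counter()
    for r in reports:
        total.update(r)
    return {ip: n for ip, n in total.items() if n >= threshold}

if __name__ == "__main__":
    node_a = count_failures(["sshd[1]: Failed password for root from 203.0.113.7 port 4711 ssh2"] * 12)
    node_b = count_failures(["sshd[2]: Failed password for admin from 203.0.113.7 port 4712 ssh2"] * 10)
    print(merge_reports([node_a, node_b]))
```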
153

Hypervisor-based cloud anomaly detection using supervised learning techniques

Nwamuo, Onyekachi 23 January 2020
Although cloud network flows are similar to conventional network flows in many ways, there are some major differences in their statistical characteristics. However, due to the lack of adequate public datasets, the proponents of many existing cloud intrusion detection systems (IDS) have relied on the DARPA dataset, which was obtained by simulating a conventional network environment. In the current thesis, we show empirically that the DARPA dataset, because it fails to meet important statistical characteristics of real-world cloud data center traffic, is inadequate for evaluating cloud IDS. We analyze, as an alternative, a new public dataset collected through cooperation between our lab and a non-profit cloud service provider, which contains benign data and a wide variety of attack data. Furthermore, we present a new hypervisor-based cloud IDS using an instance-oriented feature model and supervised machine learning techniques. We investigate three different classifiers: Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM) algorithms. Experimental evaluation on a diversified dataset yields a detection rate of 92.08% and a false-positive rate of 1.49% for the random forest, the best performing of the three classifiers. / Graduate
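A minimal sketch of the supervised workflow described above, here using scikit-learn's random forest (an assumption; the thesis's toolchain is not stated) on placeholder flow features rather than the instance-oriented feature model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Placeholder flow features and labels (1 = attack, 0 = benign).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()

# Detection rate (recall on attacks) and false-positive rate, the two figures reported above.
print("detection rate:", tp / (tp + fn))
print("false-positive rate:", fp / (fp + tn))
```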
154

Near Real-time Detection of Masquerade attacks in Web applications : catching imposters using their browsing behaviour

Panopoulos, Vasileios January 2016
This thesis details research on machine learning techniques that are central to performing anomaly and masquerade attack detection. The main focus is on web applications because of their immense popularity and ubiquity. This popularity has led to an increase in attacks, making them the most targeted entry point for violating a system. Specifically, a group of attacks that ranges from identity theft using social engineering to cross-site scripting attacks aims at exploiting and masquerading as users. Masquerade attacks are even harder to detect due to their resemblance to normal sessions, thus posing an additional burden. Concerning prevention, the diversity and complexity of those systems make it harder to define reliable protection mechanisms. Additionally, new and emerging attack patterns make manually configured and signature-based systems less effective, given the need to continuously update them with new rules and signatures. This leads to a situation where they eventually become obsolete if left unmanaged. Finally, the huge amount of traffic makes manual inspection of attacks and false alarms an impossible task. To tackle those issues, anomaly detection systems using powerful and proven machine learning algorithms are proposed. Within the context of anomaly detection and machine learning, this thesis initially establishes several basic definitions, such as user behavior, normality, and normal and anomalous behavior. Those definitions aim at setting the context in which the proposed method operates and at defining the theoretical premises. To ease the transition into the implementation phase, the underlying methodology is also explained in detail. The implementation is then presented, where, starting from server logs, a method is described for pre-processing the data into a form suitable for classification. This preprocessing phase was constructed from several statistical analyses and normalization methods (univariate selection, ANOVA) to clean and transform the given logs and perform feature selection. Furthermore, given that the proposed detection method is based on the source and request URLs, a method of aggregation is proposed to limit user-privacy and classifier over-fitting issues. Subsequently, two popular classification algorithms (Multinomial Naive Bayes and Support Vector Machines) have been tested and compared to determine which one performs better in the given situations. Each of the implementation steps (pre-processing and classification) requires a number of different parameters to be set, and thus a method called hyper-parameter optimization is defined. This method searches for the parameters that improve the classification results. Moreover, the training and testing methodology is outlined alongside the experimental setup.
The hyper-parameter optimization and training phases are the most computationally intensive steps, especially given a large number of samples/users. To overcome this obstacle, a scaling methodology is also defined and evaluated to demonstrate its ability to handle larger data sets. To complete this framework, several other options have also been evaluated and compared with each other to challenge the method and implementation decisions. Examples of this are the "Transitions-vs-Pages" dilemma, the block restriction effect, the usefulness of the DR, and the optimization of the classification parameters. Moreover, a survivability analysis is performed to demonstrate how the produced alarms could be correlated, affecting the resulting detection rates and interval times. The implementation of the proposed detection method and the outlined experimental setup lead to interesting results. The dataset that has been used to produce this evaluation is also provided online to promote further investigation and research in this field.
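A compact sketch of the comparison described in this record: sessions represented by the URLs they request are classified with Multinomial Naive Bayes and a linear SVM, with a small hyper-parameter grid search. The toy sessions, users, and parameter grids are assumptions, not the thesis's data or settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV

# Toy sessions: each sample is the sequence of request URLs in one session,
# labelled with the user it belongs to (a masquerade shows up as a wrong-user prediction).
sessions = [
    "/home /inbox /settings /inbox",
    "/home /inbox /inbox /logout",
    "/admin /users /admin/logs /users",
    "/admin /admin/logs /admin /users",
]
users = ["alice", "alice", "bob", "bob"]

for name, clf, grid in [
    ("MultinomialNB", MultinomialNB(), {"multinomialnb__alpha": [0.1, 1.0]}),
    ("LinearSVC", LinearSVC(), {"linearsvc__C": [0.1, 1.0, 10.0]}),
]:
    pipe = make_pipeline(CountVectorizer(token_pattern=r"\S+"), clf)
    search = GridSearchCV(pipe, grid, cv=2)  # tiny cross-validation for the toy data
    search.fit(sessions, users)
    print(name, search.best_params_, search.best_score_)
```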
155

Comparing Anomaly-Based Network Intrusion Detection Approaches Under Practical Aspects

Helmrich, Daniel 07 July 2021
While many of the currently used network intrusion detection systems (NIDS) employ signature-based approaches, there is an increasing research interest in the examination of anomaly-based detection methods, which seem to be more suited for recognizing zero-day attacks. Nevertheless, requirements for their practical deployment, as well as objective and reproducible evaluation methods, are often neglected. The following thesis defines aspects that are crucial for a practical evaluation of anomaly-based NIDS, such as the focus on modern attack types, the restriction to one-class classification methods, the exclusion of known attacks from the training phase, a low false detection rate, and consideration of the runtime efficiency. Based on those principles, a framework dedicated to developing, testing and evaluating models for the detection of network anomalies is proposed. It is applied to two datasets featuring modern traffic, namely the UNSW-NB15 and the CIC-IDS-2017 datasets, in order to compare and evaluate commonly used network intrusion detection methods. The implemented approaches include, among others, a highly configurable network flow generator, a payload analyser, a one-hot encoder, a one-class support vector machine, and an autoencoder. The results show a significant difference between the two chosen datasets: while for the UNSW-NB15 dataset several reasonably well performing model combinations for both the autoencoder and the one-class SVM can be found, most of them yield unsatisfying results when the CIC-IDS-2017 dataset is used.
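A minimal sketch of the one-class setup described above: a one-class SVM is fitted on benign-only training flows and then applied to mixed test traffic. The synthetic features and the nu parameter are placeholders, not the UNSW-NB15 or CIC-IDS-2017 configuration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
benign_train = rng.normal(0.0, 1.0, size=(1000, 8))   # benign-only training flows
benign_test = rng.normal(0.0, 1.0, size=(200, 8))
attack_test = rng.normal(4.0, 1.0, size=(50, 8))       # attacks never seen in training

scaler = StandardScaler().fit(benign_train)
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(benign_train))

# predict() returns +1 for inliers (benign) and -1 for outliers (flagged as attacks).
benign_pred = ocsvm.predict(scaler.transform(benign_test))
attack_pred = ocsvm.predict(scaler.transform(attack_test))

print("false detection rate:", np.mean(benign_pred == -1))
print("detection rate:", np.mean(attack_pred == -1))
```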
156

Analýza systémových záznamů / System Log Analysis

Ščotka, Jan January 2008
The goal of this master's thesis is to make it possible to perform system log analysis in a more general way than well-known host-based intrusion detection systems (HIDS) do. The way to achieve this goal is via the proposed user-friendly regular expressions. This thesis deals with making regular expressions usable in the field of log analysis, mainly by users unfamiliar with the formal aspects of computer science.
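The abstract does not define the proposed user-friendly syntax; as an illustrative sketch only, a simplified pattern language with named placeholders can be compiled down to ordinary regular expressions for log scanning (the placeholder vocabulary below is hypothetical):

```python
import re

# Hypothetical placeholder vocabulary; the thesis's actual syntax is not given in the abstract.
PLACEHOLDERS = {
    "{IP}": r"(?:\d{1,3}\.){3}\d{1,3}",
    "{USER}": r"\w+",
    "{NUM}": r"\d+",
}

def compile_friendly(pattern):
    """Turn a user-friendly pattern into a compiled regular expression."""
    regex = re.escape(pattern)
    for key, value in PLACEHOLDERS.items():
        regex = regex.replace(re.escape(key), value)
    return re.compile(regex)

if __name__ == "__main__":
    rule = compile_friendly("Failed password for {USER} from {IP} port {NUM}")
    line = "sshd[42]: Failed password for root from 198.51.100.3 port 2222 ssh2"
    print(bool(rule.search(line)))  # True
```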
157

Performance evaluation of security mechanisms in Cloud Networks

Kannan, Anand January 2012
Infrastructure as a Service (IaaS) is a cloud service provisioning model which largely focuses on data centre provisioning of computing and storage facilities. The networking aspects of IaaS beyond the data centre are a limiting factor preventing communication services that are sensitive to network characteristics from adopting this approach. Cloud networking is a new technology which integrates network provisioning with the existing cloud service provisioning models, thereby completing the cloud computing picture by addressing the networking aspects. In cloud networking, shared network resources are virtualized and provisioned to customers and end-users on demand in an elastic fashion. This technology allows various kinds of optimization, e.g., reducing latency and network load. Further, it allows service providers to offer network performance guarantees as a part of their service offering. However, this new approach introduces new security challenges. Many of these security challenges are addressed in the CloNe security architecture. This thesis presents a set of potential techniques for securing different resources in a cloud network environment which are not addressed in the existing CloNe security architecture. The thesis begins with a holistic view of cloud networking, as described in the Scalable and Adaptive Internet Solutions (SAIL) project, along with its proposed architecture and security goals. This is followed by an overview of the problems that need to be solved and some of the different methods that can be applied to solve parts of the overall problem, specifically a comprehensive, tightly integrated, and multi-level security architecture, a key management algorithm to support the access control mechanism, and an intrusion detection mechanism. For each method or set of methods, the respective state of the art is presented. Additionally, experiments to understand the performance of these mechanisms were conducted on a simple cloud network test bed. The proposed key management scheme uses a hierarchical key management approach that provides fast and secure key updates when member join and member leave operations are carried out. Experiments show that the proposed key management scheme enhances security and increases availability and integrity. A newly proposed genetic-algorithm-based feature selection technique has been employed for effective feature selection, and a fuzzy SVM has been used on the dataset for effective classification. Experiments have shown that the proposed genetic-algorithm-based feature selection algorithm reduces the number of features and hence decreases the classification time, while improving the detection accuracy of the fuzzy SVM classifier by minimizing the conflicting rules that may confuse the classifier. The main advantages of this intrusion detection system are the reduction in false positives and increased security.
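A small sketch of genetic-algorithm-based feature selection of the kind described above: candidate feature subsets are encoded as bit masks, scored by the cross-validated accuracy of an SVM (a plain SVC here, not the thesis's fuzzy SVM), and evolved by selection, crossover, and mutation. The population size, rates, and synthetic data are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = (X[:, 1] - X[:, 4] + 0.5 * X[:, 7] > 0).astype(int)   # only 3 features are informative

def fitness(mask):
    """Cross-validated accuracy of an SVM on the selected features, minus a size penalty."""
    if not mask.any():
        return 0.0
    score = cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()
    return score - 0.01 * mask.sum()

def evolve(generations=15, pop_size=20, mutation_rate=0.1):
    pop = rng.random((pop_size, X.shape[1])) < 0.5          # random initial bit masks
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(X.shape[1]) < mutation_rate   # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return np.flatnonzero(best)

print("selected features:", evolve())
```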
158

An Interactive Distributed Simulation Framework With Application To Wireless Networks And Intrusion Detection

Kachirski, Oleg 01 January 2005
In this dissertation, we describe the portable, open-source distributed simulation framework (WINDS) that we have developed, targeting simulations of wireless network infrastructures. We present the simulation framework, which uses a modular architecture, and apply it to studies of mobility pattern effects, routing, and intrusion detection mechanisms in simulations of large-scale wireless ad hoc, infrastructure, and totally mobile networks. The distributed simulations within the framework execute seamlessly and transparently to the user on a symmetric multiprocessor cluster computer or a network of computers, with no modifications to the code or user objects. A visual graphical interface precisely depicts simulation object states and interactions throughout the simulation execution, giving the user full control over the simulation in real time. The network configuration is detected by the framework, and communication latency is taken into consideration when dynamically adjusting the simulation clock, allowing the simulation to run on a heterogeneous computing system. The simulation framework is easily extensible to multi-cluster systems and computing grids. An entire simulation system can be constructed in a short time, utilizing user-created and supplied simulation components, including mobile nodes, base stations, routing algorithms, traffic patterns and other objects. These objects are automatically compiled and loaded by the simulation system, and are available for dynamic simulation injection at runtime. Using our distributed simulation framework, we have studied modern intrusion detection systems (IDS) and assessed the applicability of existing intrusion detection techniques to wireless networks. We have developed a mobile agent-based IDS targeting mobile wireless networks, and introduced load-balancing optimizations aimed at limited-resource systems to improve intrusion detection performance. Packet-based monitoring agents of our IDS employ a CASE-based reasoner engine that performs fast lookups of network packets in the existing SNORT-based intrusion rule-set. Experiments were performed using the intrusion data from the MIT Lincoln Laboratories studies and were executed on a cluster computer utilizing our distributed simulation system.
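A tiny illustration of the fast rule-lookup idea mentioned above (the rule fields and packets are invented; real SNORT rules carry far more options): indexing simplified signatures by protocol and destination port lets a monitoring agent match a packet with a dictionary lookup instead of scanning every rule.

```python
from collections import defaultdict

# Hypothetical, heavily simplified signatures: (protocol, dst_port, payload substring, message).
RULES = [
    ("tcp", 80, "cmd.exe", "possible IIS command execution"),
    ("tcp", 22, "SSH-1.0", "legacy SSH protocol version"),
    ("udp", 53, "\x00\x00\xfc", "DNS zone transfer attempt"),
]

# Build an index so each packet only consults the rules for its (protocol, port) bucket.
index = defaultdict(list)
for proto, port, pattern, msg in RULES:
    index[(proto, port)].append((pattern, msg))

def match(packet):
    """Return the messages of all rules matching a packet dict."""
    bucket = index.get((packet["proto"], packet["dst_port"]), [])
    return [msg for pattern, msg in bucket if pattern in packet["payload"]]

print(match({"proto": "tcp", "dst_port": 80, "payload": "GET /scripts/cmd.exe?/c+dir"}))
```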
159

Improved performance high speed network intrusion detection systems (NIDS). A high speed NIDS architectures to address limitations of Packet Loss and Low Detection Rate by adoption of Dynamic Cluster Architecture and Traffic Anomaly Filtration (IADF).

Akhlaq, Monis January 2011
Intrusion Detection Systems (IDS) are considered a vital component in network security architecture. They allow the administrator to detect unauthorized use of, or attacks upon, a computer, network or telecommunication infrastructure. There is no second thought on the necessity of these systems; however, their performance remains a critical question. This research has focussed on designing a high performance Network Intrusion Detection System (NIDS) model. The work begins with the evaluation of Snort, an open source NIDS considered a de facto IDS standard. The motive behind the evaluation strategy is to analyze the performance of Snort and ascertain the causes of its limited performance. The design and implementation of high performance techniques are the final objective of this research. Snort has been evaluated on a highly sophisticated test bench by employing evasive and avoidance strategies to simulate real-life normal and attack-like traffic. The test methodology is based on the concept of stressing the system and degrading its performance in terms of its packet handling capacity. This has been achieved by normal traffic generation; fuzzing; traffic saturation; parallel dissimilar attacks; and manipulation of background traffic, e.g. fragmentation, packet sequence disturbance and illegal packet insertion. The evaluation phase has led us to two high performance designs: first, a distributed hardware architecture using cluster-based adoption, and second, a cascaded combination of anomaly-based filtration and signature-based detection. The first high performance mechanism is based on Dynamic Cluster adoption using refined policy routing and Comparator Logic. The design is a two-tier mechanism in which the front end of the cluster is the load-balancer, which distributes traffic according to pre-defined policy routing, ensuring maximum utilization of cluster resources. The traffic load sharing mechanism reduces packet drop by exchanging state information between the load-balancer and cluster nodes and implementing switchovers between nodes in case the traffic exceeds a pre-defined threshold limit. Finally, the recovery evaluation concept using Comparator Logic also enhances the overall efficiency by recovering data lost in switchovers; the retrieved data is then analyzed by the recovery NIDS to identify any leftover threats. Intelligent Anomaly Detection Filtration (IADF), a cascaded architecture of anomaly-based filtration and signature-based detection, is the second high performance design. The IADF design preserves NIDS resources by eliminating a large portion of the traffic on well-defined logic. In addition, the filtration concept augments the detection process by eliminating the part of malicious traffic which could otherwise go undetected by most signature-based mechanisms. We have evaluated the mechanism's ability to detect Denial of Service (DoS) and probe attempts by analyzing its performance on the Defence Advanced Research Projects Agency (DARPA) dataset. The concept is also supported by time-based normalized sampling mechanisms that incorporate normal traffic variations to reduce false alarms. Finally, we have observed that IADF augments the overall detection process by reducing false alarms, increasing the detection rate and incurring less data loss. / National University of Sciences & Technology (NUST), Pakistan
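A toy sketch of the cascaded IADF idea described above: an inexpensive anomaly pre-filter discards traffic that looks clearly benign, and only the remainder is passed to the costlier signature-matching stage. The thresholds, features, and signatures are invented for illustration and are not the thesis's actual logic.

```python
SIGNATURES = ["cmd.exe", "' OR 1=1", "../../etc/passwd"]   # toy signature set

def anomaly_prefilter(pkt, rate_threshold=100, size_threshold=1500):
    """Cheap first stage: keep packets from unusually chatty sources or of unusual size."""
    return pkt["src_rate"] > rate_threshold or pkt["size"] > size_threshold

def signature_stage(pkt):
    """Expensive second stage: run only on packets kept by the pre-filter."""
    return [sig for sig in SIGNATURES if sig in pkt["payload"]]

def iadf_pipeline(packets):
    alerts, inspected = [], 0
    for pkt in packets:
        if not anomaly_prefilter(pkt):
            continue                       # the bulk of benign traffic is dropped here
        inspected += 1
        hits = signature_stage(pkt)
        if hits:
            alerts.append((pkt["src"], hits))
    return alerts, inspected

packets = [
    {"src": "10.0.0.1", "src_rate": 5, "size": 400, "payload": "GET /index.html"},
    {"src": "10.0.0.2", "src_rate": 900, "size": 600, "payload": "GET /scripts/cmd.exe?/c+dir"},
]
print(iadf_pipeline(packets))
```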
160

An autonomous host-based intrusion detection and prevention system for Android mobile devices. Design and implementation of an autonomous host-based Intrusion Detection and Prevention System (IDPS), incorporating Machine Learning and statistical algorithms, for Android mobile devices

Ribeiro, José C.V.G. January 2019
This research work presents the design and implementation of a host-based Intrusion Detection and Prevention System (IDPS) called HIDROID (Host-based Intrusion Detection and protection system for andROID) for Android smartphones. It runs completely on the mobile device, with a minimal computation burden. It collects data in real time, periodically sampling features that reflect the overall utilisation of the scarce resources of a mobile device (e.g. CPU, memory, battery, bandwidth). The Detection Engine of HIDROID adopts an anomaly-based approach by exploiting statistical and machine learning algorithms. That is, it builds a data-driven model for benign behaviour and looks for outliers, which are considered suspicious activities. Any observation failing to match this model triggers an alert, and the preventive agent takes proper countermeasure(s) to minimise the risk. The key novel characteristic of the Detection Engine of HIDROID is the fact that it requires no malicious data for training or tuning. In fact, the Detection Engine implements the following two anomaly detection algorithms: a variation of the K-Means algorithm with only one cluster, and the univariate Gaussian algorithm. Experimental test results on a real device show that HIDROID is well able to learn and discriminate normal from anomalous behaviour, demonstrating a very promising detection accuracy of up to 0.91, while maintaining the false-positive rate below 0.03. Finally, it is worth mentioning that, to the best of our knowledge, publicly available datasets representing the benign and abnormal behaviour of Android smartphones do not exist. Thus, in the context of this research work, two new datasets were generated in order to evaluate HIDROID. / Fundação para a Ciência e Tecnologia (FCT-Portugal) with reference SFRH/BD/112755/2015, European Regional Development Fund (FEDER), through the Competitiveness and Internationalization Operational Programme (COMPETE 2020), Regional Operational Program of the Algarve (2020), Fundação para a Ciência e Tecnologia; i-Five .: Extensão do acesso de espectro dinâmico para rádio 5G, POCI-01-0145-FEDER-030500, Instituto de telecomunicações, (IT-Portugal) as the host institution.
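A minimal sketch of the univariate Gaussian detector mentioned above: per-feature means and variances are estimated from benign-only samples, and an observation is flagged when its joint log density falls below a threshold. The features, threshold quantile, and data are placeholders, not HIDROID's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
# Benign-only training samples: [cpu %, memory %, battery drain %/h, bandwidth MB/min]
benign = rng.normal([20, 45, 3, 1.5], [5, 8, 1, 0.5], size=(500, 4))

mu = benign.mean(axis=0)
var = benign.var(axis=0)

def log_density(x):
    """Sum of per-feature Gaussian log densities (features treated as independent)."""
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var))

# Choose epsilon so that a small fraction of benign training data would be flagged.
train_scores = np.array([log_density(x) for x in benign])
epsilon = np.quantile(train_scores, 0.02)

def is_anomalous(x):
    return log_density(x) < epsilon

print(is_anomalous(np.array([22, 48, 3.2, 1.4])))   # typical usage -> expected False
print(is_anomalous(np.array([95, 90, 15, 20.0])))   # resource exhaustion -> expected True
```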
