1

Avaliação de padrões para implementação de modelos de dados orientados a objetos em bancos de dados relacionais. / Evaluation of patterns for the implementation of object-oriented data models in relational databases.

Rizzo, July Any Martinez de 11 November 2010
The implementation of object-oriented data models is not yet a fully consolidated subject. This dissertation therefore systematizes the implementation of a relational database represented by a class diagram. Its main focus is a metric-based evaluation of the mapping of three relationship types of an object-oriented model (Inheritance, Aggregation/Composition, and Association) when applied to a relational database model.
To this end, seven patterns for mapping these relationships to relational modeling were evaluated: two Inheritance patterns, two Aggregation patterns, and two Association patterns, together with an analysis of empirical studies on the topic. The two modeling approaches, relational and object-oriented, are compatible when their conceptual models are analyzed. The work therefore evaluates how well object-oriented models can be implemented in a relational database after the mapping patterns are applied. As a result, it proposes an analysis of metrics for applying the mapping patterns to a model suitable for implementation in a relational database. Among the evaluated metrics are denormalization, logical storage method aligned with the indexing strategy, high availability and use of replication methods, data access cost, disk space, and flexibility and maintenance cost.
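The abstract above evaluates patterns for mapping object-oriented relationships onto relational schemas but does not reproduce the patterns themselves. As an illustration only, the sketch below shows two widely known Inheritance mapping patterns ("single table" for the whole hierarchy vs. one table per class); the class names, columns, and generated DDL are hypothetical and not taken from the thesis.

```python
# Illustrative sketch of two common Inheritance mapping patterns:
# "single table" (one table for the whole hierarchy, denormalized) vs.
# "class table" (one table per class, joined by a shared primary key).
# All names and columns below are assumptions for the example.

def single_table_ddl(base, subclasses):
    """One table holds the base and all subclass columns plus a discriminator."""
    cols = dict(base["columns"])
    for sub in subclasses:
        cols.update(sub["columns"])  # subclass columns become NULLable
    col_defs = ["id INTEGER PRIMARY KEY", "type VARCHAR(30)"]  # discriminator
    col_defs += [f"{name} {typ}" for name, typ in cols.items()]
    return [f"CREATE TABLE {base['name']} ({', '.join(col_defs)})"]

def class_table_ddl(base, subclasses):
    """One table per class; subclass tables share the base table's PK."""
    stmts = [f"CREATE TABLE {base['name']} (id INTEGER PRIMARY KEY, "
             + ", ".join(f"{n} {t}" for n, t in base["columns"].items()) + ")"]
    for sub in subclasses:
        stmts.append(
            f"CREATE TABLE {sub['name']} (id INTEGER PRIMARY KEY "
            f"REFERENCES {base['name']}(id), "
            + ", ".join(f"{n} {t}" for n, t in sub["columns"].items()) + ")")
    return stmts

person = {"name": "person", "columns": {"full_name": "VARCHAR(100)"}}
student = {"name": "student", "columns": {"enrollment_year": "INTEGER"}}

print(single_table_ddl(person, [student]))  # one denormalized table
print(class_table_ddl(person, [student]))   # two normalized tables
```

The trade-off visible even in this toy example (one denormalized table vs. extra joins across normalized tables) is exactly the kind the thesis's metrics, such as denormalization, data access cost, and disk space, are meant to capture.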
3

Rehabilitating Asymmetric Gait Using Asymmetry

Ramakrishnan, Tyagi 07 November 2017
Human gait is a complex process that involves the coordination of the central nervous and muscular systems. A disruption to either system impairs a person’s ability to walk. Impairments can be caused by neurological disorders such as stroke and by physical conditions such as amputation. There is no standardized method to quantitatively assess the gait asymmetry of affected subjects. The purpose of this research is to understand the fundamental aspects of asymmetrical effects on the human body and to improve rehabilitation techniques and devices. This research takes an interdisciplinary approach to address the limitations of current rehabilitation methodologies. The goal of my doctoral research is to understand the fundamental effects of asymmetry caused by physical and neurological impairments. The methods discussed in this document help in developing better solutions to rehabilitate impaired individuals’ gait. I studied four major hypotheses in regard to gait asymmetry. The first hypothesis is that asymmetric systems have the potential to produce symmetric output. The second is that a method incorporating a wider range of gait parameter asymmetries can be used as a measure for gait rehabilitation. The third is that individuals can visually identify subtle gait asymmetries. The final hypothesis concerns establishing the relationship between gait quality and function. Current approaches to rehabilitating impaired gait typically focus on achieving the same symmetric gait as an able-bodied person. This cannot work because an impaired person is inherently asymmetric, and forcing them to walk symmetrically causes them to adopt patterns that are not beneficial in the long term. Instead, it is more prudent to embrace the asymmetry of the condition and work to minimize asymmetry in specific gait parameters that may cause more harm over the long run.
The combined gait asymmetry metric (CGAM) provides the necessary means to study the effect of the gait parameters: the data are normalized, and the metric is weighted towards parameters that are more asymmetric. The metric is also designed to combine spatial, temporal, kinematic, and kinetic gait parameter asymmetries. It can also combine subsets of the different gait parameters to provide a more thorough analysis. CGAM will help define quantitative thresholds for achievable balanced overall gait asymmetry. The studies in this dissertation, conducted on able-bodied and impaired subjects, provide a better understanding of some fundamental aspects of asymmetry in human gait. Able-bodied subjects tested devices that aim to make an individual’s gait more asymmetric. These perturbations include a prosthetic and stroke simulator, the addition of distal mass, and leg length alterations. Six able-bodied subjects and one amputee participated in the experiment that studied the effect of asymmetric knee height. The results, which consisted of analyses of individual gait parameters and CGAM scores, revealed evidence of an overall reduction of gait asymmetry both for able-bodied subjects on prosthetic simulators and for the transfemoral amputee. The transfemoral amputee also walked with a combination of distal mass and lowered knee height. Although this configuration showed better symmetry, it is detrimental in terms of energy costs. Analyzing the data of gait with the stroke simulator showed that the subject’s gait does undergo alterations in terms of overall gait asymmetry. The distal mass and leg length alteration study revealed some significant findings that are also reflected in the prosthetic study with distal mass. A leg length discrepancy (LLD) or a change of limb mass can result in asymmetric gait patterns.
Although adding mass and LLD have been studied separately, this research studies how gait patterns change as a result of asymmetrically altering both leg length and mass at a leg’s distal end. Spatio-temporal and kinetic gait measures are used to study the combined asymmetric effects of placing LLD and mass on the opposite and the same side. There were statistically significant differences for the amount of mass and leg length added for all five parameters. When LLD is added to the longer leg, the temporal and kinetic gait parameters of the shorter limb and the altered limb’s spatial parameter become more asymmetric. Contrary to the hypothesis, there was no significant interaction between the amount of mass and leg length added. There were cases in all perturbations where a combination of mass and LLD made a gait parameter more symmetric than a single effect. These cases exhibit the potential for configurations with lower overall asymmetry, even though each parameter retains a slight asymmetry, as opposed to driving one parameter to symmetry and other parameters to a larger asymmetry. CGAM analysis of the results revealed that the addition of distal mass contributes more towards overall asymmetry than LLD. Analyzing 11 gait parameters for LLD and mass on the same side showed that overall asymmetry decreased for the combination of small LLD and mass. This is consistent with the findings from analyzing five individual gait parameters. Impaired subjects included individuals with stroke and amputees. The clinical trials for individuals with stroke involved training with the Gait Enhancing Mobile Shoe (GEMS), which provides an asymmetric effect on the subject’s step length and time. Training with the GEMS showed improvement in clinical measures such as timed up and go (TUG), the six minute walk test (6MWT), and gait velocity. The subjects also showed lower step length asymmetry, as intended by the GEMS.
The ground reaction force asymmetries became more asymmetric as the spatial and temporal parameters became more symmetric. This phenomenon shows evidence that when an individual with stroke is corrected for spatial and temporal symmetry, it comes at the expense of kinetic symmetry. The CGAM scores also reflected trends similar to those of spatial and temporal symmetry, and the r2 correlation with the gait parameters showed that double limb support asymmetry has no correlation with CGAM, while ground reaction force asymmetry has a weak correlation. Step length, step time, and swing time showed high correlation to CGAM. I also found the r2 correlation between the clinical measures and the CGAM scores. The CGAM scores were moderately correlated to 6MWT and gait velocity but had a weak correlation with TUG. CGAM has a positive correlation with TUG and negative correlations with 6MWT and gait velocity. This gives some validation to CGAM as a potential metric that can be used to evaluate gait patterns based on their asymmetries. Transfemoral amputees were tested for their gait with varied prosthetic knee heights to study the asymmetrical effects, and were trained on a split-belt treadmill. Asymmetric knee heights showed improvement in multiple gait parameters such as step length, vertical, propulsive, and braking force asymmetry. They also decreased hip and ankle angle asymmetries. However, these improvements did lead other parameters to become more asymmetric. The CGAM scores reflect this, and they show overall improvement. Although the lowest knee height showed improvement, the input from the amputee suggested that the quality of gait decreased at the lowest knee height. These exploratory results did show that a slightly lower knee height may not affect the quality of gait but may provide better overall symmetry.
Another exploratory study with split-belt treadmill training, similar to the protocol followed for individuals with stroke, showed definitive improvement in double limb support, swing time, and step length and time symmetry. This was also reflected in the post-training improvements seen in the CGAM scores. I found the r2 correlation of CGAM with the gait parameters, including gait velocity. Step length and swing time show consistent correlation to CGAM for individual subjects and for all the data combined. Gait velocity shows a moderate correlation to CGAM for one subject and a high correlation for the other. However, the combined gait velocity data has no correlation with CGAM. These results show that CGAM can successfully represent the overall gait parameter asymmetry. The trends seen in the gait parameters are closely reflected in the CGAM scores. This research combines the study of asymmetry with people’s perception of human gait asymmetry, which will help in estimating the thresholds for perceivable asymmetrical changes to gait. Sixteen videos were generated using motion capture data and the Unity game engine. The videos were chosen to represent the largest variation of gait asymmetries. Some videos were also chosen based on CGAM values that were similar but had large variation in the underlying gait parameters. The dataset consisted of results of perturbation experiments on able-bodied subjects and of the asymmetric knee height prosthesis on the transfemoral amputee. The videos were rated on a seven-point Likert scale, from 7 being normal to 1 being abnormal. Thirty-one subjects took part in the experiment, of which only 22 subjects’ data were used because they rated at least 3 videos. The results show that the subjects were able to differentiate asymmetric gait with perturbations from able-bodied gait without perturbation at a self-selected speed.
An r2 correlation analysis showed that hip angle had a mild correlation to the Likert scale ratings of the 16 different gait patterns. Multivariate linear regression analysis with a linear model showed significant contributions of ankle and hip angles and of vertical, propulsive, and braking forces. It is interesting that the majority of parameters that showed significance are not perceivable visually. Ankle and hip angles are visually perceivable, and this significance revealed that subjects seemed to perceive asymmetric ankle and hip angles as abnormal. However, the subjects did not perceive asymmetric knee angles as completely abnormal, with evidence of no significance, no correlation, and neutral Likert ratings for gait patterns that perturbed knee angles.
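The abstract states that CGAM normalizes the data and combines spatial, temporal, kinematic, and kinetic parameter asymmetries into one score, but does not give the formula. The sketch below is a hedged illustration of that idea only: the symmetry-index-style per-parameter asymmetry and the root-mean-square combination are assumptions for the example, not the published CGAM formulation.

```python
import math

# Hedged sketch of a combined gait-asymmetry score in the spirit of CGAM.
# The per-parameter asymmetry measure and the equal-weight RMS combination
# below are illustrative assumptions, not the published CGAM formula.

def asymmetry(left, right):
    """Symmetry-index-style asymmetry: 0 when left == right."""
    return abs(left - right) / (0.5 * (left + right))

def combined_asymmetry(params):
    """params: {name: (left_value, right_value)} -> single non-negative score.

    Root-mean-square of the per-parameter asymmetries, so parameters in
    different units (m, s, N) contribute on a comparable, normalized scale."""
    a = [asymmetry(l, r) for l, r in params.values()]
    return math.sqrt(sum(x * x for x in a) / len(a))

# Hypothetical left/right gait parameters for one subject.
gait = {
    "step_length_m": (0.70, 0.62),
    "swing_time_s": (0.42, 0.40),
    "peak_vertical_force_N": (720.0, 690.0),
}
score = combined_asymmetry(gait)
print(round(score, 3))  # 0 would mean perfectly symmetric on all parameters
```

A single score of this kind is what lets the studies above compare whole configurations (e.g., knee height plus distal mass) rather than driving one parameter to symmetry while others worsen.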
4

Résilience des systèmes informatiques adaptatifs : modélisation, analyse et quantification / Resilience of adaptive computing systems: modelling, analysis and quantification

Excoffon, William 08 June 2018
A system that remains dependable in spite of changes (new threats, updates, etc.) is called resilient. The rapid evolution of systems, including embedded systems, implies modifications of applications and system configurations, particularly at the software level. Such changes may have an impact on dependability, and more precisely on the assumptions underlying the fault tolerance mechanisms. A system is therefore resilient if such changes do not invalidate its dependability mechanisms, that is, if the mechanisms already in place remain consistent despite the changes, or if any inconsistencies can be quickly resolved. This thesis first proposes a model for resilient computing systems.
Using this model, we can evaluate whether a set of fault tolerance mechanisms is able to ensure the dependability properties derived from the non-functional specifications. The model also allows us to define a set of measures to quantify the resilience of a system. Finally, the last chapter discusses the possibility of including resilience as a goal of the development process.
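The abstract describes evaluating whether a set of fault tolerance mechanisms still ensures the required dependability properties after a change invalidates some of their assumptions. The sketch below is a hypothetical, simplified version of that kind of coverage check; the mechanism names, assumptions, and properties are invented for the example and are not from the thesis.

```python
# Hedged sketch of a coverage check in the spirit of the proposed model:
# given the dependability properties required by the specification, and the
# properties each fault tolerance mechanism ensures under its assumptions,
# decide whether the deployed mechanisms still cover the requirements after
# a change. All names below are illustrative assumptions.

def covered(required, mechanisms, valid_assumptions):
    """required: set of property names.
    mechanisms: {name: (assumptions, ensured_properties)} as sets.
    valid_assumptions: assumptions that still hold after the change.
    Returns (fully_covered, missing_properties)."""
    ensured = set()
    for _name, (assumptions, ensures) in mechanisms.items():
        if assumptions <= valid_assumptions:  # mechanism still applicable
            ensured |= ensures
    return required <= ensured, required - ensured

required = {"crash_tolerance", "value_error_detection"}
mechanisms = {
    "primary_backup": ({"deterministic", "crash_only"}, {"crash_tolerance"}),
    "duplex_compare": ({"deterministic"},
                       {"crash_tolerance", "value_error_detection"}),
}

# Before an update the application is deterministic: requirements are covered.
print(covered(required, mechanisms, {"deterministic", "crash_only"}))
# An update makes the application non-deterministic: coverage is lost,
# signalling an inconsistency that must be resolved for the system to be
# considered resilient.
print(covered(required, mechanisms, {"crash_only"}))
```

Counting how often, and how quickly, such lost coverage can be restored is one plausible basis for the resilience measures the thesis proposes.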
5

IMPROVED SEGMENTATION FOR AUTOMATED SEIZURE DETECTION USING CHANNEL-DEPENDENT POSTERIORS

Shah, Vinit, 0000-0001-5193-0206 January 2021
The electroencephalogram (EEG) is the primary tool used for the diagnosis of a variety of neural pathologies such as epilepsy. Identification of a critical event, such as an epileptic seizure, is difficult because the signals are collected by transducing extremely low voltages and, as a result, are corrupted by noise. Also, EEG signals often contain artifacts due to clinical phenomena such as patient movement. These artifacts are easily confused with seizure events. Factors such as slowly evolving morphologies make accurate marking of the onset and offset of a seizure event difficult. Precise segmentation, defined as the ability to detect start and stop times within a fraction of a second, is a challenging research problem. In this dissertation, we improve seizure segmentation performance by developing deep learning technology that mimics the human interpretation process. The central thesis of this work is that separating the seizure detection problem into a two-phase problem, epileptiform activity detection followed by seizure detection, should improve our ability to detect and localize seizure events. In the first phase, we use a sequential neural network algorithm known as a long short-term memory (LSTM) network to identify channel-specific epileptiform discharges associated with seizures. In the second phase, the feature vector is augmented with posteriors that represent the onset and offset of ictal activities. These augmented features are applied to a multichannel convolutional neural network (CNN) followed by an LSTM network. The multiphase model was evaluated on a blind evaluation set and was shown to detect 106 segment boundaries within a 2-second margin of error. Our previous best system, which delivers state-of-the-art performance on this task, correctly detected only 9 segment boundaries. Our multiphase system was also shown to be robust by performing well on two blind evaluation sets.
Seizure detection performance on the TU Seizure Detection (TUSZ) Corpus development set is 41.60% sensitivity with 5.63 false alarms/24 hours (FAs/24 hrs). Performance on the corresponding evaluation set is 48.21% sensitivity with 16.54 FAs/24 hrs. Performance on a previously unseen corpus, the Duke University Seizure (DUSZ) Corpus, is 46.62% sensitivity with 7.86 FAs/24 hrs. Our previous best system yields 30.83% sensitivity with 6.74 FAs/24 hrs on the TUSZ development set, 33.11% sensitivity with 19.89 FAs/24 hrs on the TUSZ evaluation set, and 33.71% sensitivity with 40.40 FAs/24 hrs on DUSZ. Improving seizure detection performance through better segmentation is an important step forward in making automated seizure detection systems clinically acceptable. For a real-time system, accurate segmentation will allow clinicians to detect a seizure as soon as it appears in the EEG signal. This will allow neurologists to act during the early stages of the event which, in many cases, is essential to avoid permanent damage to the brain. In a similar way, accurate offset detection will help with delivery of therapies designed to mitigate postictal (after seizure) period symptoms. This will also help reveal the severity of a seizure and consequently provide guidance for medicating a patient. / Electrical and Computer Engineering
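The two quantities quoted throughout the abstract, sensitivity (%) and false alarms per 24 hours, can be computed from event counts as sketched below. The counts in the example are invented for illustration; the actual TUSZ/DUSZ scoring applies specific event-alignment rules that are not reproduced here.

```python
# Hedged sketch of the two evaluation quantities used in the abstract:
# sensitivity (%) and false-alarm rate normalized to a 24-hour period.
# Counts and record duration below are illustrative assumptions.

def sensitivity_pct(true_positives, false_negatives):
    """Fraction of reference seizure events the system detected, in percent."""
    return 100.0 * true_positives / (true_positives + false_negatives)

def false_alarms_per_24h(false_positives, total_record_hours):
    """False detections scaled to a standard 24-hour recording period."""
    return false_positives * 24.0 / total_record_hours

tp, fn, fp, hours = 50, 70, 33, 120.0
print(round(sensitivity_pct(tp, fn), 2))          # 41.67
print(round(false_alarms_per_24h(fp, hours), 2))  # 6.6
```

Reporting both numbers together matters: the abstract's comparisons show a system can raise sensitivity while also raising the false-alarm rate, and clinical acceptability depends on the trade-off between the two.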
6

Machine Learning Survival Models: Performance and Explainability

Alabdallah, Abdallah January 2023
Survival analysis is an essential field of statistics and machine learning, with various critical applications such as medical research and predictive maintenance. In these domains, understanding models' predictions is paramount. While machine learning techniques are increasingly applied to enhance the predictive performance of survival models, they simultaneously sacrifice transparency and explainability. Survival models, in contrast to regular machine learning models, predict functions rather than point estimates as regression and classification models do. This creates a challenge in explaining such models using known off-the-shelf machine learning explanation techniques, such as Shapley values, counterfactual examples, and others. Censoring is also a major issue in survival analysis, where the target time variable is not fully observed for all subjects. Moreover, in predictive maintenance settings, recorded events do not always map to actual failures: some components may be replaced because they are considered faulty or about to fail in the future based on an expert's opinion. Censoring and noisy labels create problems in modeling and evaluation that need to be addressed during the development and evaluation of survival models. Considering the challenges in survival modeling and the differences from regular machine learning models, this thesis aims to bridge this gap by facilitating the use of machine learning explanation methods to produce plausible and actionable explanations for survival models. It also aims to enhance survival modeling and evaluation, revealing better insight into the differences among the compared survival models. In this thesis, we propose two methods for explaining survival models which rely on discovering survival patterns in the model's predictions that group the studied subjects into significantly different survival groups.
Each pattern reflects a specific survival behavior common to all the subjects in its respective group. We utilize these patterns to explain the predictions of the studied model in two ways. In the first, we employ a classification proxy model that can capture the relationship between the descriptive features of subjects and the learned survival patterns. Explaining such a proxy model using Shapley values provides insights into the feature attribution of belonging to a specific survival pattern. In the second method, we address the "what if?" question by generating plausible and actionable counterfactual examples that would change the predicted pattern of the studied subject. Such counterfactual examples provide insights into the actionable changes required to enhance the survivability of subjects. We also propose a variational-inference-based generative model for estimating the time-to-event distribution. The model relies on a regression-based loss function with the ability to handle censored cases, and on sampling for estimating the conditional probability of event times. Moreover, we propose a decomposition of the C-index into a weighted harmonic average of two quantities: the concordance among the observed events and the concordance between observed and censored cases. These two quantities, weighted by a factor representing the balance between the two, can reveal differences between survival models previously unseen using only the total concordance index. This can give insight into the performance of different models and its relation to the characteristics of the studied data. Finally, as part of enhancing survival modeling, we propose an algorithm that can correct erroneous event labels in predictive-maintenance time-to-event data. We adopt an expectation-maximization-like approach utilizing a genetic algorithm to find better labels that would maximize the survival model's performance.
Over iterations, the algorithm builds confidence about event assignments, which improves the search in the following iterations until convergence. We performed experiments on real and synthetic data showing that our proposed methods enhance performance in survival modeling and can reveal the underlying factors contributing to the explainability of survival models' behavior and performance.
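The C-index decomposition described above splits comparable pairs into event-event and event-censored pairs and recombines the two concordances as a weighted harmonic average. The sketch below illustrates that idea under stated assumptions: the comparability rule is the standard one (an observed event before another subject's time), and the weighting factor (the share of event-event pairs) is an illustrative choice, since the abstract does not specify the balance factor.

```python
# Hedged sketch of a C-index split into concordance among observed events
# (C_ee) and concordance between observed and censored cases (C_ec),
# recombined as a weighted harmonic average. The weight w used here
# (share of comparable event-event pairs) is an illustrative assumption.

def concordance_parts(times, events, risks):
    """times: observed times; events: 1 = event, 0 = censored; risks: model
    risk scores (higher risk should mean earlier event).
    Returns (C_ee, C_ec, w)."""
    ee_conc = ee_total = ec_conc = ec_total = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i has an event before time_j.
            if events[i] == 1 and times[i] < times[j]:
                if events[j] == 1:
                    ee_total += 1
                    ee_conc += risks[i] > risks[j]
                else:
                    ec_total += 1
                    ec_conc += risks[i] > risks[j]
    c_ee = ee_conc / ee_total if ee_total else 0.0
    c_ec = ec_conc / ec_total if ec_total else 0.0
    w = ee_total / (ee_total + ec_total)
    return c_ee, c_ec, w

def weighted_harmonic_c(c_ee, c_ec, w):
    """Weighted harmonic average of the two concordance parts (both > 0)."""
    return 1.0 / (w / c_ee + (1.0 - w) / c_ec)

times  = [2.0, 4.0, 5.0, 7.0]
events = [1,   1,   0,   1]
risks  = [0.9, 0.7, 0.6, 0.2]  # perfectly ranked -> both parts equal 1
c_ee, c_ec, w = concordance_parts(times, events, risks)
print(c_ee, c_ec, round(weighted_harmonic_c(c_ee, c_ec, w), 3))
```

Two models with the same total C-index can differ sharply in C_ee versus C_ec, which is exactly the kind of difference the decomposition is meant to expose.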
7

Performance Management System for Temporary Employees: Understanding differences in Performance Management between Temporary and Permanent Employees

Rana, Atul, Hamed, Yaser January 2016
Purpose – The purpose of this study is to identify the organizational practices in place for the performance evaluation of temporary employees and how these vary from those for permanent employees. Method – The study takes an inductive and interpretive approach to uncover the unknown practices. It was conducted with 7 respondents from different organizations, split between recruitment agencies and client organizations, and represents practices maintained by both sets of industries. Findings – The study identifies low standardization in performance evaluation and discusses the variance from the literature on the subject matter. A model is also drawn based on an amalgamation of the literature review and the empirical results. Implications – The study presents the variance in processes for temporary employees and the prime areas where the variance occurs. For organizations to have a fair and just performance management system, and for equality towards temporary employees, these issues must be addressed. Limitations – Cultural practices are not taken into consideration, the literature might be based on different cultural practices than the respondents' country, and for a more complete study more respondents might be needed.
8

Αλγόριθμοι εξισορρόπησης φόρτου σε p2p συστήματα και μετρικές απόδοσης / Load balancing algorithms in p2p systems and performance metrics

Μιχαήλ, Θεοφάνης-Αριστοφάνης 05 February 2015
Recently the Internet community has focused its interest on peer-to-peer systems, in which users contribute their resources (storage space, computation time) and content (documents, files) to the community. Users are considered equals, each participating in a dual role as both client and server. The communities formed under this simple peer-to-peer paradigm are characterized by decentralized control, robustness, stability, scalability, anonymity and resistance to censorship. While peer-to-peer systems have many potential application domains, depending on the type of shared resources, file-sharing systems such as Napster, Kazaa, Gnutella, eDonkey and BitTorrent have gained the widest acceptance. The "open nature" of peer-to-peer systems offers a research area far wider than content sharing alone; interesting domains include infrastructure, collaboration, searching, routing, load balancing and security. Load balancing in particular is a very interesting problem in such systems, and the research carried out on it falls into two categories. The first comprises techniques for better distribution of items in the name space, so as to improve routing and searching performance [Pastry, Tapestry, Chord]. The second comprises techniques for placing replicas of items on the network's nodes, so as to improve throughput and the quality of service (QoS) offered to users; QoS can be directly related to availability. While the second category seems the more interesting from the user's perspective, research in this area does not appear to take into account a realistic evaluation of the cost of the proposed techniques. Some studies ignore the replication cost altogether, while others consider it constant, independent of the objects' size and of the connection between the two nodes over which the transfer occurs. A few go a little further and define the cost to be proportional to the required storage capacity. The difference with our work is that these studies neither incorporate the notion of load balancing among nodes nor evaluate the cost under different assumptions.

In this work we try to define a set of performance metrics, within a framework that computes them as realistically as possible. To achieve this, we first designed a cluster-based peer-to-peer file-sharing system, then defined the notion of load, and finally implemented techniques for balancing it. To evaluate these techniques we defined a set of metrics that record the system's performance both from the system's perspective (where a fair load distribution and optimal use of resources, chiefly the available bandwidth of the links between nodes, are desirable) and from the user's perspective (the best possible quality of service at the least cost). The experimental evaluation of the techniques was carried out in a simulation environment developed from scratch after a study of similar systems. Its main characteristics are a) extensibility and ease of use, b) a simple yet realistic simulation base, c) a rich set of simulation parameters supplied as input by the user, and d) the ability to simulate large-scale systems.
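The two ingredients this abstract emphasizes — a fairness measure over per-node load and a replication cost that depends on object size and link bandwidth, rather than being constant — can be sketched as follows. This is a minimal illustration, not the thesis's exact definitions: the use of Jain's fairness index and the function names are assumptions.

```python
def jains_fairness(loads):
    """Jain's fairness index over per-node loads: 1.0 means a perfectly
    fair distribution; it falls toward 1/n as load concentrates on one node."""
    n = len(loads)
    total = sum(loads)
    if total == 0:
        return 1.0  # no load at all is trivially fair
    return total ** 2 / (n * sum(x * x for x in loads))

def replication_cost(object_size_bytes, link_bandwidth_bps):
    """Transfer time of one replica: proportional to the object's size and
    inversely proportional to the bandwidth of the link between the two
    communicating nodes (instead of a constant, size-independent cost)."""
    return 8 * object_size_bytes / link_bandwidth_bps  # bytes -> bits

# Example: four nodes, one of them overloaded
print(round(jains_fairness([10, 10, 10, 40]), 3))  # -> 0.645
print(replication_cost(1_000_000, 8_000_000))      # 1 MB over 8 Mbps -> 1.0 s
```

A balancing technique would then try to move or replicate objects so that the fairness index rises while the summed transfer costs stay bounded.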
9

Volatility Forecasting using GARCH Processes with Exogenous Variables / Volatilitets prognoser av GARCH processer med exogena variabler

Larson, Ellis January 2022 (has links)
Volatility is a measure of the risk of an investment and plays an essential role in several areas of finance, including portfolio management and the pricing of options. In this thesis, we implemented and evaluated several so-called GARCH models for volatility prediction based on historical price series. The evaluation builds on several metrics and uses a comprehensive data set consisting of many assets of various types. We found that more advanced models do not, on average, outperform simpler ones. We also found that the length of the historical training data is critical for GARCH models to perform well, and that the appropriate length is asset-dependent. Further, we developed and tested a method for incorporating exogenous variables into the model to improve its predictive performance. This approach was successful for some of the large US and European indices, such as Russell 2000 and S&P 500.
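The core recursion behind the models described above, a GARCH(1,1) variance equation extended with an exogenous regressor (a GARCH-X form), can be sketched as follows. This is a minimal sketch of the one-step-ahead forecast only; the parameter values in the usage example are illustrative, not fitted, and the initialization choice is an assumption.

```python
def garch_x_forecast(returns, omega, alpha, beta, gamma=0.0, exog=None):
    """One-step-ahead conditional variance under GARCH(1,1)-X:
        sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t] + gamma * x[t]
    The recursion is initialised with the sample variance of the returns."""
    n = len(returns)
    mean = sum(returns) / n
    sigma2 = sum((r - mean) ** 2 for r in returns) / n  # initial variance
    for t in range(n):
        x_t = exog[t] if exog is not None else 0.0
        sigma2 = omega + alpha * returns[t] ** 2 + beta * sigma2 + gamma * x_t
    return sigma2

# Illustrative parameters (alpha + beta < 1 for stationarity)
r = [0.010, -0.020, 0.015]
base = garch_x_forecast(r, omega=1e-6, alpha=0.10, beta=0.85)
with_x = garch_x_forecast(r, omega=1e-6, alpha=0.10, beta=0.85,
                          gamma=0.01, exog=[1.0, 1.0, 1.0])
```

In practice the parameters would be estimated by maximum likelihood on the training window, whose length, as the thesis notes, must itself be chosen per asset.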
10

Analyse de performance des systèmes de détection d’intrusion sans-fil / Performance analysis of wireless intrusion detection systems

Nasr, Khalid 09 January 2014 (has links)
The wireless intrusion detection system (WIDS) has become a matter of increasing concern in recent years as a crucial element of wireless network security. A WIDS monitors 802.11 traffic to identify intrusive activities and then alerts the complementary prevention component to counter the attack. Selecting a reliable WIDS necessitates a credible evaluation of its performance. WIDS effectiveness is the basic factor in evaluating that performance, and it therefore receives great attention in this thesis. Most previous experimental evaluations of intrusion detection systems (IDSs) were concerned with wired IDSs, with an apparent lack of evaluations of wireless IDSs (WIDSs). In this thesis we address three main shortcomings of most previous evaluations: the lack of a comprehensive evaluation methodology, of a holistic attack classification, and of expressive evaluation metrics. We introduce a comprehensive evaluation methodology that covers all the dimensions essential for a credible evaluation of WIDS performance. The pivotal dimensions of our methodology are characterizing and generating the evaluation dataset, defining reliable and expressive evaluation metrics, and overcoming the evaluation limitations. The evaluation dataset consists of two main parts: normal (background) traffic and malicious traffic. The background traffic, comprising normal and benign activities in the absence of attacks, was generated in our experimental tests as real, controlled traffic. The second and most important part of the dataset is the malicious traffic, composed of intrusive activities. A comprehensive and credible evaluation of WIDSs would require taking all possible attacks into account.
While this is operationally impossible, it is necessary to select representative attack test cases, extracted mainly from a comprehensive classification of wireless attacks. To address this challenge, we developed a holistic taxonomy of wireless security attacks from the perspective of the WIDS evaluator. The second pivotal dimension of our methodology is the definition of reliable evaluation metrics. We introduce a new evaluation metric, EID (intrusion detection effectiveness), which overcomes the drawbacks of previously proposed metrics, in particular the common flaw in their underlying notion that leads them to measure only a relative effectiveness; the notion behind EID allows the actual effectiveness to be measured, and EID is equally applicable to the evaluation of wired and wireless IDSs. We also introduce a second metric, RR (attack recognition rate), to evaluate the ability of a WIDS to recognize the attack type. The third important dimension of our methodology is overcoming the evaluation limitations. The great challenge we faced in the experimental evaluation of WIDSs is the uncontrolled traffic over the open wireless medium, which seriously affects the accuracy of the measurements. We overcame this problem by constructing an RF-shielded testbed, allowing all measurements to be taken under our control without interference from adjacent stations. Finally, we followed our methodology to conduct experimental evaluations of two popular WIDSs (Kismet and AirSnare) and demonstrated the utility of our proposed solutions.
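The conventional effectiveness measures that metrics like EID and RR build on — detection rate, false-positive rate, and the fraction of detected attacks whose type is correctly identified — can be sketched as follows. The exact EID formula is defined in the thesis and is not reproduced here; the event-tuple layout and function name below are illustrative assumptions.

```python
def detection_metrics(events):
    """events: list of (is_attack, alerted, alert_type, true_type) tuples,
    one per injected test case or background activity.
    Returns (detection_rate, false_positive_rate, recognition_rate), where
    the recognition rate is the fraction of *detected* attacks whose type
    the WIDS labelled correctly (in the spirit of the RR metric)."""
    attacks  = [e for e in events if e[0]]
    normals  = [e for e in events if not e[0]]
    detected = [e for e in attacks if e[1]]
    false_pos  = [e for e in normals if e[1]]
    recognized = [e for e in detected if e[2] == e[3]]
    dr  = len(detected) / len(attacks) if attacks else 0.0
    fpr = len(false_pos) / len(normals) if normals else 0.0
    rr  = len(recognized) / len(detected) if detected else 0.0
    return dr, fpr, rr

# Illustrative test run: three attacks, two background activities
events = [
    (True,  True,  "deauth", "deauth"),    # detected and correctly typed
    (True,  True,  "flood",  "deauth"),    # detected, wrong type
    (True,  False, None,     "rogue_ap"),  # missed attack
    (False, False, None,     None),        # quiet background
    (False, True,  "flood",  None),        # false alarm
]
dr, fpr, rr = detection_metrics(events)
```

Metrics of this shape only become trustworthy under the controlled conditions the abstract describes: in an RF-shielded testbed, every event tuple is known, whereas uncontrolled ambient traffic would corrupt both the attack and background counts.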
