821

Optimization of cost-based threat response for Security Information and Event Management (SIEM) systems

Gonzalez Granadillo, Gustavo Daniel 12 December 2013 (has links) (PDF)
Current Security Information and Event Management (SIEM) systems constitute the central platform of modern security operations centers. They gather events from various sensors (intrusion detection systems, anti-virus, firewalls, etc.), correlate these events, and deliver synthetic views for threat handling and security reporting. Research in SIEM technologies has traditionally focused on providing a comprehensive interpretation of threats, in particular to evaluate their importance and prioritize responses accordingly. However, in many cases, threat response still requires humans to carry out the analysis and decision tasks, e.g., understanding the threats, defining appropriate countermeasures, and deploying them. This is a slow and costly process that requires a high level of expertise and nonetheless remains error-prone. Recent research in SIEM technology has therefore focused on automating the process of selecting and deploying countermeasures. Several authors have proposed automatic response mechanisms, such as the adaptation of security policies, to overcome the limitations of static or manual response. Although these approaches improve the reaction process (making it faster and/or more efficient), they remain limited, since these solutions do not analyze the impact of the countermeasures selected to mitigate the attacks. In this thesis, we propose a novel and systematic process to select the optimal countermeasure from a pool of candidates, ranking them based on a trade-off between their efficiency in stopping the attack and their ability to preserve, at the same time, the best service to normal users. In addition, we propose a model to graphically represent attacks and countermeasures, so as to determine the volume of each element in a scenario of multiple attacks. The coordinates of each element are derived from a URI, which is composed of three axes: user, channel, and resource. We use the CARVER methodology to assign an appropriate weight to each element composing the axes of our coordinate system. This approach allows us to map volumes to risks (i.e., large volumes correspond to high risk, whereas small volumes correspond to low risk). Two concepts are considered when comparing two or more risk volumes: residual risk, which results when the risk volume is larger than the countermeasure volume, and collateral damage, which results when the countermeasure volume is larger than the risk volume. As a result, we are able to evaluate countermeasures for single and multiple attack scenarios, making it possible to select the countermeasure or group of countermeasures that provides the highest benefit to the organization.
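A minimal Python sketch of the volume comparison described in this abstract; the axis weights, element names, and the product-of-weights volume rule are illustrative assumptions, not the thesis's exact geometric model.

```python
# Illustrative sketch (not the thesis's implementation) of comparing a
# risk volume with a countermeasure volume along the user/channel/resource
# axes. All weights and element names below are assumed values.

def volume(users, channels, resources, weights):
    """Volume of an element as the product of its weighted axis coverage."""
    return (sum(weights["user"][u] for u in users)
            * sum(weights["channel"][c] for c in channels)
            * sum(weights["resource"][r] for r in resources))

# CARVER-style weights assigned to each element of the three axes (assumed).
weights = {
    "user":     {"admin": 5, "employee": 3, "guest": 1},
    "channel":  {"vpn": 4, "lan": 3, "wifi": 2},
    "resource": {"database": 5, "web_server": 3},
}

# Attack reaches admins and employees over the VPN, targeting the database.
risk_vol = volume({"admin", "employee"}, {"vpn"}, {"database"}, weights)
# Candidate countermeasure covers all users on the VPN, database only.
cm_vol = volume({"admin", "employee", "guest"}, {"vpn"}, {"database"}, weights)

# Scalar simplification of the geometric overlap described in the abstract:
residual_risk = max(risk_vol - cm_vol, 0)      # attack surface left uncovered
collateral_damage = max(cm_vol - risk_vol, 0)  # legitimate service impacted

print(residual_risk, collateral_damage)  # here: 0 and 20
```

Ranking candidates then amounts to preferring countermeasures that minimize both quantities at acceptable cost.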
822

Study of mechanisms ensuring service continuity for IKEv2 and IPsec protocols

Palomares Velasquez, Daniel 14 November 2013 (has links) (PDF)
During 2012, global mobile traffic grew by 70% compared to 2011. The arrival of 4G technology introduced 19 times more traffic than non-4G sessions, and in 2013 the number of mobile devices connected to the Internet exceeded the number of human beings on Earth. This scenario puts great pressure on Internet service providers (ISPs), which are expected to ensure access to the network and maintain its QoS. In the short to medium term, operators will rely on alternative access networks in order to maintain the same performance characteristics. Thus, client traffic might be offloaded from RANs to other available access networks. However, those wireless access networks do not all ensure the same security level. Femtocells, WiFi or WiMAX (among other wireless technologies) must rely on some mechanism to secure communications and avoid untrusted environments. Operators mainly use IPsec to extend a security domain over untrusted networks. This introduces new challenges in terms of performance and connectivity for IPsec. This thesis concentrates on the study of mechanisms for improving the IPsec protocol in terms of continuity of service. Continuity of service, also known as resilience, becomes crucial when offloading traffic from RANs to other access networks. This is why we first concentrate our effort on defining the protocols that secure an IP communication: IKEv2 and IPsec. Then, we present a detailed study of the parameters needed to keep a VPN session alive, and we demonstrate that it is possible to dynamically manage a VPN session between different gateways. Among the reasons that justify the management of VPN sessions are the provision of high availability, load sharing, and load balancing for IPsec connections. These mechanisms increase the continuity of service of IPsec-based communications. For example, if for some reason a failure occurs at a security gateway, the ISP should be able to overcome the situation and provide mechanisms that ensure continuity of service to its clients. Some new mechanisms have recently been implemented to provide high availability over IPsec. The open-source VPN project StrongSwan implemented a mechanism called ClusterIP in order to create a cluster of IPsec gateways. We merged ClusterIP with our own developments in order to define two architectures: high availability and context management over Mono-LAN and Multi-LAN environments. We call Mono-LAN those architectures where the cluster of security gateways is configured under a single IP address, whereas Multi-LAN concerns architectures where different security gateways are configured with different IP addresses. Performance measurements throughout the thesis show that transferring a VPN session between different gateways avoids re-authentication delays and reduces CPU consumption and the recomputation of cryptographic material. From an ISP's point of view, this could be used to avoid overloaded gateways, redistribute load, improve network performance and QoS, etc. The idea is to allow a peer to enjoy continuity of service while maintaining the same security level that was initially negotiated.
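As a rough illustration of the session transfer discussed in this abstract, the Python sketch below lists the kind of IKEv2 state that must travel with a VPN session when it moves between gateways. The field names and serialization are illustrative assumptions, not the StrongSwan/ClusterIP data model.

```python
# Illustrative sketch: the minimum IKEv2 session context a gateway would
# need to export so that a peer gateway can resume the VPN session without
# re-authentication. Field names are assumptions for illustration.

from dataclasses import dataclass, asdict
import json

@dataclass
class IkeSaContext:
    spi_initiator: str   # IKE_SA SPIs identifying the session
    spi_responder: str
    sk_d: bytes          # keying material used to derive CHILD_SA keys
    message_id_in: int   # message counters (must stay in sync, otherwise
    message_id_out: int  # the peer rejects subsequent exchanges)
    peer_address: str
    child_sa_spis: list  # ESP SPIs of the CHILD_SAs protecting traffic

def export_context(ctx: IkeSaContext) -> bytes:
    """Serialize the session for transfer to another gateway. In practice
    this transfer channel itself must be confidentiality- and
    integrity-protected, since it carries keying material."""
    d = asdict(ctx)
    d["sk_d"] = d["sk_d"].hex()
    return json.dumps(d).encode()
```

Because the keys and counters are carried over rather than renegotiated, the receiving gateway avoids the re-authentication delays and cryptographic recomputation that the thesis measures.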
823

Estimation of the Mincerian wage model addressing its specification and different econometric issues

Bhatti, Sajjad Haider 03 December 2012 (has links) (PDF)
In this doctoral thesis, we estimate Mincer's (1974) semi-logarithmic wage function on French and Pakistani labour force data. This model is the standard tool for estimating the relationship between earnings/wages and different contributory factors. Despite its wide and extensive use, naive estimation of the Mincerian model is biased by several econometric problems. The main sources of bias noted in the literature are endogeneity of schooling, measurement error, and sample selectivity. We tackle the endogeneity and measurement-error biases via an instrumental-variables two-stage least squares approach, for which we propose two new instrumental variables. The first is defined as "the average years of schooling in the family of the concerned individual"; the second as "the average years of schooling in the country, for a particular age group and gender, at the time when the individual joined the labour force". Schooling is found to be endogenous in both countries. Comparing the two instruments, we select the second as the more appropriate. We apply the Heckman (1979) two-step procedure to eliminate possible sample selection bias, which is found to be significantly positive for both countries: in both countries, people who decided not to participate in the labour force as wage workers would have earned less than participants had they decided to work as wage earners. We then estimate a specification that tackles the endogeneity and sample selectivity problems together, since such studies are relatively scarce in the literature worldwide and, to our knowledge, absent for France and Pakistan in particular. Differences in the estimated coefficients prove the worth of this specification. We also estimate the model semi-parametrically; contrary to the general practice in the context of the Mincerian model, our semi-parametric estimation includes a non-parametric component from the first-stage schooling equation instead of one from the selection equation. For both countries, the parametric model is found to be more appropriate. We find the errors to be heteroscedastic in the data from both countries and apply adaptive estimation to control the adverse effects of heteroscedasticity. Comparing simple and adaptive estimations, we prefer the adaptive specification of the parametric model for both countries. Finally, we apply quantile regression to the model selected from the mean regression. Quantile regression shows that the explanatory factors have different effects in different parts of the wage distributions of the two countries. For both Pakistan and France, this is, to our knowledge, the first study to correct for both sample selectivity and endogeneity in a single specification within a quantile regression framework.
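For reference, the standard textbook forms of the models named in this abstract (the thesis's exact specifications may include further controls):

```latex
% Semi-logarithmic Mincer (1974) wage equation, with S_i years of
% schooling and X_i labour-market experience:
\[
  \ln w_i = \beta_0 + \beta_1 S_i + \beta_2 X_i + \beta_3 X_i^2 + \varepsilon_i
\]
% 2SLS first stage, instrumenting schooling with Z_i (e.g. average years
% of schooling in the relevant family or cohort, as proposed above):
\[
  S_i = \pi_0 + \pi_1 Z_i + \pi_2 X_i + \pi_3 X_i^2 + u_i
\]
% Heckman (1979) two-step correction: augment the wage equation with the
% inverse Mills ratio \hat{\lambda}_i estimated from a first-step
% participation probit:
\[
  \ln w_i = \beta_0 + \beta_1 S_i + \beta_2 X_i + \beta_3 X_i^2
            + \beta_\lambda \hat{\lambda}_i + \varepsilon_i
\]
```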
824

Capital humain et compétences de base des adultes : production et valorisation

Branche-Seigeot, Aline 26 November 2013 (has links) (PDF)
The question of basic skills is increasingly present in current policy concerns, given the substantial costs of illiteracy borne by society. The objective of this thesis is to shed new light on this theme from a microeconomic perspective. It consists of two parts. The first deals with the valuation of basic skills on the labour market (essays 1 and 2), while the second deals with the production of French-language skills according to migratory origin (essays 3 and 4). Within the first part, the first essay addresses the returns to basic skills on the labour market, from the angle of occupational downgrading/upgrading and from the angle of individual earnings. The central hypothesis is that education, through the diploma, is not a good filter of individual abilities. Hence, situations of downgrading (upgrading) for the same level of diploma may result from a lower (higher) level of basic skills, and not necessarily from an imbalance between the supply of and demand for qualifications. Likewise, if for a given diploma level there is some heterogeneity in individual abilities, we can expect that adding basic-skills scores tends to reduce the purely educational return and to improve the explanation of wage formation. The main advantage of our approach is that it accounts for a possible selection bias in the estimation of the earnings equations, by capturing the influence of basic skills on access to employment. The second essay seeks to better understand the impact of basic skills on access to employment, but also on labour-market participation, in a context of general enrichment of job content. Taking these skills into account should improve the explanation of labour-market statuses (employment, unemployment, inactivity), especially for groups already vulnerable with respect to employment. More precisely, for a given labour demand, we assume that each status corresponds to a level of basic skills. Since reading, writing, and written and oral comprehension of French are key basic skills for social and professional integration in France, the second part examines the production of these skills among populations of immigrant origin. France has a long tradition of receiving migrants. Their place in French society and their ability to learn French are therefore of particular interest, whether they are first-generation migrants or children of immigrants, all the more so since little research has been carried out on this subject. In the third essay, we seek to determine to what extent characteristics linked to migratory origin hinder the production of French-language skills, and to what extent inequalities in French scores are attributable to differences in efficiency resulting from the linguistic distance between the mother tongue and French. The fourth and final essay aims to identify the factors likely to explain the level of written-French proficiency of working-age first-generation migrants (considered as producers of their own skills). The originality of this essay lies in the estimation of a stochastic production frontier to explain their written-French scores and their productive performance.
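For reference, the standard stochastic production frontier invoked in the fourth essay (a textbook form; the essay's exact specification may differ):

```latex
% Stochastic production frontier, with y_i the written-French score and
% x_i the individual's observable characteristics:
\[
  \ln y_i = x_i'\beta + v_i - u_i
\]
% where v_i ~ N(0, \sigma_v^2) is symmetric statistical noise and
% u_i \ge 0 is a one-sided term measuring the individual's shortfall
% from the frontier, i.e. his or her productive (in)efficiency.
```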
825

Data ownership and interoperability for a decentralized social semantic web

Sambra, Andrei Vlad 19 November 2013 (has links) (PDF)
Ensuring personal data ownership and interoperability for decentralized social Web applications is currently a debated topic, especially when taking into consideration the aspects of privacy and access control. Since users' data are such an important asset of the current business models of most social Websites, companies have no incentive to share data with each other or to offer users real ownership of their own data in terms of control and transparency of data usage. We therefore conclude that it is important to improve the social Web in a way that allows for viable business models while still providing increased data ownership and data interoperability compared to the current situation. To this end, we focus our research on three different topics: identity, authentication and access control. First, we tackle the subject of decentralized identity by proposing a new Web standard called "Web Identity and Discovery" (WebID), which offers a simple and universal identification mechanism that is distributed and openly extensible. Next, we move to the topic of authentication, where we propose WebID-TLS, a decentralized authentication protocol that enables secure, efficient and user-friendly authentication on the Web by allowing people to log in using client certificates, without relying on Certification Authorities. We also extend the WebID-TLS protocol, offering delegated authentication and access delegation. Finally, we present our last contribution, the Social Access Control Service, which protects the privacy of Linked Data resources generated by users (e.g. profile data, wall posts, conversations, etc.) by applying two social metrics: "social proximity distance" and "social contexts".
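A minimal Python sketch of the verifier side of a WebID-TLS style check as described in this abstract: the server extracts the WebID URI from the client certificate, dereferences the profile document, and checks that the key published there matches the key in the certificate. Certificate parsing is elided, and the vocabulary handling is simplified; function names are illustrative assumptions.

```python
# Illustrative sketch of WebID-TLS verification (verifier side), using
# rdflib to fetch and query the WebID profile document.

from rdflib import Graph, URIRef

CERT_QUERY = """
PREFIX cert: <http://www.w3.org/ns/auth/cert#>
SELECT ?mod ?exp WHERE {
    ?webid cert:key ?key .
    ?key cert:modulus ?mod ;
         cert:exponent ?exp .
}
"""

def verify_webid(webid_uri: str, cert_modulus: int, cert_exponent: int) -> bool:
    """Dereference the WebID profile and check that the RSA public key it
    publishes matches the key in the presented TLS client certificate."""
    profile = Graph()
    profile.parse(webid_uri)  # fetch and parse the RDF profile document
    rows = profile.query(CERT_QUERY, initBindings={"webid": URIRef(webid_uri)})
    for mod, exp in rows:
        # cert:modulus is conventionally a hex literal; details may vary
        if int(str(mod), 16) == cert_modulus and int(str(exp)) == cert_exponent:
            return True  # key in profile matches key in certificate
    return False
```

The trust anchor is thus the user's own profile document rather than a Certification Authority, which is what makes the scheme decentralized.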
826

Robust watermarking techniques for stereoscopic video protection

Chammem, Afef 27 May 2013 (has links) (PDF)
The explosion in stereoscopic video distribution increases the concerns over its copyright protection. Watermarking can be considered the most flexible property-right protection technology. The practical challenge in watermarking is to reach a trade-off between the properties of transparency, robustness, data payload and computational cost. While the capturing and displaying of 3D content are solely based on the two left/right views, alternative representations, like disparity maps, should also be considered during transmission/storage. A specific study of the insertion domain that is optimal with respect to the above-mentioned properties is also required. The present thesis tackles these challenges. First, a new disparity-map estimation method (3D video New Three Step Search, 3DV-NTSS) is designed. The performance of 3DV-NTSS is evaluated in terms of the visual quality of the reconstructed image and of computational cost. Compared with state-of-the-art methods (NTSS and FS-MPEG), average gains of 2 dB in PSNR and 0.1 in SSIM are obtained, and the computational cost is reduced by average factors between 1.3 and 13. Second, a comparative study of the main classes of watermarking methods inherited from 2D, and of their optimal insertion domains, is carried out. Four insertion methods are considered, belonging to the SS, SI and hybrid (Fast-IProtect) families. The experiments brought to light that Fast-IProtect performed in the new disparity-map domain (3DV-NTSS) is generic enough to serve a large variety of applications. The statistical relevance of the results is given by 95% confidence limits, with underlying relative errors lower than 0.1.
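For concreteness, a minimal sketch of the first quality metric cited above (PSNR); SSIM, the second metric, is available in libraries such as scikit-image.

```python
# PSNR between a reference image and its reconstruction, as used above to
# compare disparity-map estimation methods.

import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray,
         peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    diff = reference.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```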
827

2D/3D knowledge inference for intelligent access to enriched visual content

Sambra-Petre, Raluca-Diana 18 June 2013 (has links) (PDF)
This Ph.D. thesis tackles the issue of still and video object categorization. The objective is to associate semantic labels with 2D objects present in natural images/videos. The principle of the proposed approach consists in exploiting categorized 3D model repositories in order to identify unknown 2D objects, based on 2D/3D matching techniques. We propose an object recognition framework designed to work in real-time applications. The similarity between classified 3D models and unknown 2D content is evaluated with the help of the 2D/3D description. A voting procedure is then employed to determine the most probable categories of the 2D object. A representative viewing-angle selection strategy and a new contour-based descriptor (called AH) are proposed. The experimental evaluation shows that, by employing the intelligent selection of views, the number of projections can be decreased significantly (up to 5 times) while obtaining similar performance. The results also show the superiority of AH with respect to other state-of-the-art descriptors. An objective evaluation of the intra- and inter-class variability of the 3D model repositories involved in this work is also proposed, together with a comparative study of the retained indexing approaches. An interactive, scribble-based segmentation approach is also introduced; the proposed method is specifically designed to overcome compression artefacts such as those introduced by JPEG compression. Finally, we present an indexing/retrieval/classification Web platform, called Diana, which integrates the various methodologies employed in this thesis.
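A minimal sketch of a voting step of the kind described in this abstract: each 2D view descriptor projected from the categorized 3D models votes for its category according to its similarity to the unknown object's descriptor. The inverse-distance weighting and data layout are illustrative assumptions, not the thesis's exact procedure; descriptor extraction (e.g. the AH contour descriptor) is elided.

```python
# Illustrative voting over view descriptors for 2D/3D object categorization.

from collections import defaultdict

def vote_categories(query_desc, repository, distance):
    """repository: iterable of (category, view_descriptor) pairs built from
    projections of the categorized 3D models; distance: a descriptor
    dissimilarity function. Returns categories sorted by accumulated votes."""
    scores = defaultdict(float)
    for category, view_desc in repository:
        # Closer views contribute larger votes (inverse-distance weighting,
        # an illustrative choice).
        scores[category] += 1.0 / (1.0 + distance(query_desc, view_desc))
    # Most probable categories first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

The representative viewing-angle selection mentioned above then matters because it shrinks the repository (fewer projections per model) without degrading the vote's outcome.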
828

Modélisation des dynamiques urbaines : application à l'analyse économique du changement climatique

Viguié, Vincent 05 January 2012 (has links) (PDF)
Because they concentrate more than half of the world's population and most of its economic activity, cities are major actors in global environmental issues. Transport, urban planning and housing policies are thus recognized as necessary and effective levers both for reducing emissions and for reducing vulnerability to the impacts of climate change. So far, unfortunately, there is no consensus on what should be done, and even less on how to do it. At least three difficulties explain this. First, climate policies interact with the other objectives of urban policies, such as economic competitiveness or social issues, creating synergies and conflicts. Second, inertia is a key factor to take into account: structural changes in cities occur very slowly. If cities are to be adapted to the climate of the end of the 21st century, it is essential to start acting now. Finally, the effects of urban policies depend on many exogenous factors that are unknown at the time the decision must be made: demographic, socio-economic, cultural, political and technological changes will play a major role. These three difficulties are not insurmountable, however, and we illustrate how integrated modelling can help address some of these problems.
829

La Mauritanie et les défis du développement (étude d'ensemble et stratégie alternative)

Ba, Abdou Yéro 31 January 1992 (has links) (PDF)
A study of the development problems facing Mauritania and the responses that political leaders are attempting to devise in the face of drought, political and military conflicts, debt accumulation, and weakening commodity prices.
830

La remise en cause du modèle classique de la finance par Benoît Mandelbrot et la nécessité d'intégrer les lois de puissance dans la compréhension des phénomènes économiques

Herlin, Philippe 19 December 2012 (has links) (PDF)
The classical model of finance (Markowitz, Sharpe, Black, Scholes, Fama) was challenged from the outset by the mathematician Benoît Mandelbrot (1924-2010). He showed that the normal law does not fit the reality of markets, because it underestimates extreme risks. Power laws, such as the Pareto law, must be used instead. We show here all the implications of this fundamental change for finance, but also, and this is new, for corporate management (through the calculation of the cost of equity). We attempt to uncover the deep reasons for the existence of power laws in economics through the notion of entropy. We present new theoretical tools to understand price formation (the theory of diagonal proportion), bubbles (the notion of reflexivity), and crises (the notion of networks), and offer a global response to the current crisis (a diversified monetary system). These avenues have been explored very little, or not at all. Above all, they are brought together for the first time in a coherent whole around the notion of power law. What we present here is thus a new way of understanding economic phenomena.
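For reference, the standard forms behind the contrast drawn in this abstract between power-law and Gaussian tails:

```latex
% Survival function of the Pareto (power-law) distribution:
\[
  P(X > x) = \left(\frac{x_m}{x}\right)^{\alpha}, \qquad x \ge x_m,\ \alpha > 0,
\]
% which decays only polynomially, so extreme events keep a non-negligible
% probability. By contrast, for a centred Gaussian with variance \sigma^2,
\[
  P(X > x) \sim \frac{\sigma}{x\sqrt{2\pi}}\, e^{-x^{2}/(2\sigma^{2})}
  \qquad (x \to \infty),
\]
% decays much faster, which is why the normal law of the classical model
% underestimates extreme risks, as Mandelbrot argued.
```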
