  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Usable, Secure Content-Based Encryption on the Web

Ruoti, Scott 01 July 2016 (has links)
Users share private information on the web through a variety of applications, such as email, instant messaging, social media, and document sharing. Unfortunately, recent revelations have shown that users' data is at risk not only from hackers and malicious insiders, but also from government surveillance. This state of affairs motivates the need for users to be able to encrypt their online data. In this dissertation, we explore how to help users encrypt their online data, with a special focus on securing email. First, we explore the design principles that are necessary to create usable, secure email. As part of this exploration, we conduct eight usability studies of eleven different secure email tools, involving a total of 347 participants. Second, we develop a novel, paired-participant methodology that allows us to test whether a given secure email system can be adopted in a grassroots fashion. Third, we apply our discovered design principles to PGP-based secure email, and demonstrate that these principles are sufficient to create the first PGP-based system that is usable by novices. We have also begun applying the lessons learned from our secure email research more generally to content-based encryption on the web. As part of this effort, we develop MessageGuard, a platform for accelerating research into usable, content-based encryption. Using MessageGuard, we build and evaluate Private Facebook Chat (PFC), a secure instant messaging system that integrates with Facebook Chat. Results from our usability analysis of PFC provide initial evidence that our design principles are also important components of usable, content-based encryption on the Web.
62

Product Development Processes, Three Vectors Of Improvement

Holmes, Maurice; Campbell, Ronald January 2003 (has links)
Product Development Processes have achieved a state of some maturity in recent years, but have focused primarily on structuring technical activities from the initiation of development to launch. We advocate major advances on three fronts: first, implementing an end-to-end process from the front end through field operations; second, integrating business considerations much better into that end-to-end process; and third, incorporating a performance-improvement closed loop into the process. We call the resulting process a Product Development Business Process. Three initial applications are summarized. / Improving product development processes along three key vectors leads to greatly improved business performance. / Center for Innovation in Product Development
63

Development and simulation of hard real-time switched-ethernet avionics data network

Chen, Tao 08 1900 (has links)
Computer and microelectronics technologies are developing rapidly, and modern integrated avionics systems are growing along with them. Integrated modular architectures increasingly require a low-latency, reliable communication databus with high bandwidth. Traditional avionics databus technologies such as ARINC 429 cannot provide sufficient speed or capacity, making it difficult to transfer data between advanced avionic devices with adequate bandwidth. AFDX (Avionics Full Duplex Switched Ethernet), a high-speed full-duplex switched avionics databus based on Ethernet technology, is a good solution to this problem: it avoids Ethernet conflicts and collisions while increasing the transmission rate and lowering the weight of the databus. AFDX has been adopted successfully in the A380 and B787 aircraft. Avionics data must be delivered punctually and reliably, so it is essential to validate the real-time performance of AFDX during the design process. Simulation is a good way to measure network performance, but it covers only a given set of scenarios, and it is impossible to consider every case. A rigorous analysis method is therefore needed to derive a pessimistic upper bound for the worst-case scenario. Avionics design engineers have launched many studies of AFDX simulation and analysis methods, and that is the goal this thesis aims for. The project was planned in two steps. In the first step, a communication platform was implemented to simulate the AFDX network in two versions: an RTAI real-time framework and a Linux user-space framework.
Ultimately, these frameworks are to be integrated into net-ASS, an integrated simulation and assessment platform in Cranfield's lab. The second step derives an effective method, based on Network Calculus (NC), to evaluate network performance in terms of three bounds: delay, backlog, and output flow. Network Calculus is a theory for analysing network systems in a deterministic way, also used in communication queue management. This mathematical method is to be validated against simulation results from the AFDX communication platform, in order to ensure its validity and applicability. All in all, the project aims to assess the performance of different network topologies in different avionics architectures, through simulation and mathematical assessment. The techniques used in this thesis help to find problems and faults in the early stages of avionics architecture design in industrial projects, especially in terms of guaranteeing lossless service on the avionics databus.
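The Network Calculus bounds mentioned above take a particularly simple closed form in the common textbook case of a token-bucket arrival curve α(t) = b + rt and a rate-latency service curve β(t) = R·(t − T)⁺. A minimal sketch of those formulas (the parameter values are illustrative, not taken from the thesis):

```python
def nc_bounds(b, r, R, T):
    """Worst-case Network Calculus bounds for a flow with token-bucket
    arrival curve alpha(t) = b + r*t served by a rate-latency node
    beta(t) = R*max(t - T, 0), assuming r <= R (otherwise the backlog
    grows without bound)."""
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    delay = T + b / R      # horizontal deviation between alpha and beta
    backlog = b + r * T    # vertical deviation between alpha and beta
    return delay, backlog

# Example: 100 kbit burst, 1 Mbit/s sustained rate, a switch serving at
# 10 Mbit/s with 2 ms latency.
d, q = nc_bounds(b=100e3, r=1e6, R=10e6, T=2e-3)
print(f"delay bound   = {d * 1e3:.1f} ms")
print(f"backlog bound = {q / 1e3:.1f} kbit")
```

The delay bound is the largest horizontal distance between the two curves and the backlog bound the largest vertical distance; the thesis validates bounds of this kind against its AFDX simulation platform.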
64

Analysis of Passive End-to-End Network Performance Measurements

Simpson, Charles Robert, Jr. 02 January 2007 (has links)
NETI@home, a distributed network measurement infrastructure that collects passive end-to-end network measurements from Internet end-hosts, was developed and discussed. The data collected by this infrastructure, as well as other datasets, were used to study the behavior of the network and network users as well as the security issues affecting the Internet. A flow-based comparison of honeynet traffic, representing malicious traffic, and NETI@home traffic, representing typical end-user traffic, was conducted. This comparison showed that a large portion of flows in both datasets were failed and potentially malicious connection attempts. We additionally found that worm activity can linger for more than a year after the initial release date. Malicious traffic was also found to originate from across the allocated IP address space. Other security-related observations include the suspicious use of ICMP packets and attacks on our own NETI@home server. Utilizing observed TTL values, studies were also conducted into the distance of Internet routes and the frequency with which they vary. The frequency and use of network address translation and the private IP address space were also discussed. Various protocol options and flags were analyzed to determine their adoption and use by the Internet community. Network-independent empirical models of end-user network traffic were derived for use in simulation. Two such models were created: the first modeled traffic for a specific TCP or UDP port, and the second modeled all TCP or UDP traffic for an end-user. These models were implemented and used in GTNetS. Further anonymization of the dataset and the public release of the anonymized data and their associated analysis tools were also discussed.
65

End-to-End Security of Information Flow in Web-based Applications

Singaravelu, Lenin 25 June 2007 (has links)
Web-based applications and services are increasingly being used in security-sensitive tasks. Current security protocols rely on two crucial assumptions to protect the confidentiality and integrity of information: first, they assume that the end-point software used to handle security-sensitive information is free from vulnerabilities; second, they assume point-to-point communication between a client and a service provider. However, these assumptions do not hold with large and complex vulnerable end-point software, such as web browsers or web services middleware, or in web service compositions, where multiple value-adding service providers can be interposed between a client and the original service provider. To address the problem of large and complex end-point software, we present the AppCore approach, which uses manual analysis of information flow, as opposed to purely automated approaches, to split existing software into two parts: a simplified trusted part that handles security-sensitive information, and a legacy, untrusted part that handles non-sensitive information without access to sensitive information. Not only does this approach avoid many common and well-known vulnerabilities in the legacy software that compromised sensitive information, it also greatly reduces the size and complexity of the trusted code, thereby making exhaustive testing or formal analysis more feasible. We demonstrate the feasibility of the AppCore approach by constructing AppCores for two real-world applications: a client-side AppCore for https-based applications and an AppCore for web service platforms. Our evaluation shows that security improvements and complexity reductions (over a factor of five) can be attained with minimal modifications to existing software (a few tens of lines of code, and the proxy settings of a browser) and an acceptable performance overhead (a few percent).
To protect the communication of sensitive information between the clients and service providers in web service compositions, we present an end-to-end security framework called WS-FESec that provides end-to-end security properties even in the presence of misbehaving intermediate services. We show that WS-FESec is flexible enough to support the lattice model of secure information flow and it guarantees precise security properties for each component service at a modest cost of a few milliseconds per signature or encrypted field.
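The lattice model of secure information flow that WS-FESec supports reduces, at its core, to checking that information may only flow from a label to one that dominates it in the lattice. The sketch below uses a simple totally ordered toy lattice; the labels and ordering are hypothetical illustrations, not WS-FESec's actual model:

```python
# Toy security lattice, totally ordered by sensitivity. In the classic
# lattice model (Denning), information may only flow upward.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def may_flow(src, dst):
    """Information may flow from label src to label dst iff src is
    dominated by dst in the lattice."""
    return LEVELS[src] <= LEVELS[dst]

print(may_flow("public", "secret"))   # upgrading sensitivity is allowed
print(may_flow("secret", "public"))   # downgrading is forbidden
```

A real composition framework would carry such labels on each message field and reject any intermediate service whose output would violate the ordering.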
66

Impact of wireless losses on the predictability of end-to-end flow characteristics in Mobile IP Networks

Bhoite, Sameer Prabhakarrao 17 February 2005 (has links)
Technological advancements have led to an increase in the number of wireless and mobile devices such as PDAs, laptops, and smart phones. This has resulted in an ever-increasing demand for wireless access to the Internet. Hence, wireless mobile traffic is expected to form a significant fraction of Internet traffic in the near future, over the so-called Mobile Internet Protocol (MIP) networks. For real-time applications, such as voice, video, and process monitoring and control, deployed over standard IP networks, network resources must be properly allocated so that the mobile end-user is guaranteed a certain Quality of Service (QoS). As with wired and fixed IP networks, MIP networks do not offer any QoS guarantees; such networks have been designed for non-real-time applications. To deploy real-time applications in such networks without requiring major network infrastructure modifications, the end-points must provide some level of QoS guarantees. Such QoS control requires the ability to predict end-to-end flow characteristics. In this research, network flow accumulation is used as a measure of end-to-end network congestion. Careful analysis of the flow accumulation signal shows that it has long-term dependencies and is very noisy, making it very difficult to predict. Hence, this work predicts the moving average of the flow accumulation signal. Both single-step and multi-step predictors are developed using linear system identification techniques. A multi-step prediction error of up to 17% is achieved for prediction horizons of up to 0.5 s. The main thrust of this research is on the impact of wireless losses on the ability to predict end-to-end flow accumulation. As opposed to wired, congestion-related packet losses, the losses occurring in a wireless channel are to a large extent random, making the prediction of flow accumulation more challenging.
Flow accumulation prediction studies in this research demonstrate that, if an accurate predictor is employed, the increase in prediction error is up to 170% when wireless loss reaches as high as 15%, compared to the case of no wireless loss. As the predictor accuracy in the case of no wireless loss deteriorates, the impact of wireless losses on the flow accumulation prediction error decreases.
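The approach described above — smoothing a noisy signal and fitting a linear predictor to its moving average — can be sketched with a toy autoregressive model. The synthetic signal, window size, and AR(1) structure here are illustrative stand-ins; the thesis uses its own linear system identification techniques on real flow-accumulation measurements:

```python
import math
import random

def moving_average(x, w):
    """Trailing moving average with window w."""
    return [sum(x[max(0, i - w + 1): i + 1]) / min(w, i + 1)
            for i in range(len(x))]

def fit_ar1(x):
    """Least-squares AR(1) coefficient a for the model x[t] ~ a * x[t-1]."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

random.seed(0)
# Synthetic stand-in for a flow-accumulation signal: slow oscillation plus noise.
raw = [50 + 10 * math.sin(0.05 * t) + random.gauss(0, 5) for t in range(500)]
smooth = moving_average(raw, 20)   # predict the smoothed signal, not the raw one
a = fit_ar1(smooth[:400])          # identify the model on the first 400 samples
pred = [a * smooth[t - 1] for t in range(400, 500)]
rel_err = 100 * sum(abs(p - s) for p, s in zip(pred, smooth[400:])) / sum(smooth[400:])
print(f"AR(1) coefficient = {a:.4f}, mean relative one-step error = {rel_err:.2f}%")
```

Predicting the raw signal directly gives a much larger error, which mirrors the thesis's motivation for targeting the moving average.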
67

Estimation de l’écart type du délai de bout-en-bout par méthodes passives / Passive measurement in Software Defined Networks

Nguyen, Huu-Nghi 09 March 2017 (has links)
Depuis l'avènement du réseau Internet, le volume de données échangées sur les réseaux a crû de manière exponentielle. Le matériel présent sur les réseaux est devenu très hétérogène, dû entre autres à la multiplication des "middleboxes" (parefeux, routeurs NAT, serveurs VPN, proxy, etc.). Les algorithmes exécutés sur les équipements réseaux (routage, “spanning tree”, etc.) sont souvent complexes, parfois fermés et propriétaires et les interfaces de supervision peuvent être très différentes d'un constructeur/équipement à un autre. Ces différents facteurs rendent la compréhension et le fonctionnement du réseau complexe. Cela a motivé la définition d'un nouveau paradigme réseaux afin de simplifier la conception et la gestion des réseaux : le SDN (“Software-defined Networking”). Il introduit la notion de contrôleur, qui est un équipement qui a pour rôle de contrôler les équipements du plan de données. Le concept SDN sépare donc le plan de données chargés de l'acheminement des paquets, qui est opéré par des équipements nommés virtual switches dans la terminologie SDN, et le plan contrôle, en charge de toutes les décisions, et qui est donc effectué par le contrôleur SDN. Pour permettre au contrôleur de prendre ses décisions, il doit disposer d'une vue globale du réseau. En plus de la topologie et de la capacité des liens, des critères de performances comme le délai, le taux de pertes, la bande passante disponible, peuvent être pris en compte. Cette connaissance peut permettre par exemple un routage multi-classes, ou/et garantir des niveaux de qualité de service. Les contributions de cette thèse portent sur la proposition d'algorithmes permettant à une entité centralisée, et en particulier à un contrôleur dans un cadre SDN, d'obtenir des estimations fiables du délai de bout-en-bout pour les flux traversant le réseau. Les méthodes proposées sont passives, c'est-à-dire qu'elles ne génèrent aucun trafic supplémentaire. 
Nous nous intéressons tout particulièrement à la moyenne et l'écart type du délai. Il apparaît que le premier moment peut être obtenu assez facilement. Au contraire, la corrélation qui apparaît dans les temps d'attentes des noeuds du réseau rend l'estimation de l'écart type beaucoup plus complexe. Nous montrons que les méthodes développées sont capables de capturer les corrélations des délais dans les différents noeuds et d'offrir des estimations précises de l'écart type. Ces résultats sont validés par simulations où nous considérons un large éventail de scénarios permettant de valider nos algorithmes dans différents contextes d'utilisation / Since the early days of the Internet, the amount of data exchanged over networks has grown exponentially. The devices deployed on networks are very heterogeneous, because of the growing presence of middleboxes (e.g., firewalls, NAT routers, VPN servers, proxies). The algorithms run on networking devices (e.g., routing, spanning tree) are often complex, closed, and proprietary, while the interfaces to access these devices typically vary from one manufacturer to another. All these factors tend to hinder the understanding and the management of networks. Therefore a new paradigm has been introduced to ease the design and the management of networks, namely SDN (Software-defined Networking). In particular, SDN defines a new entity, the controller, which is in charge of controlling the devices belonging to the data plane. Thus, in an SDN network, the data plane, which is handled by networking devices called virtual switches, is separated from the control plane, which takes the decisions and is executed by the controller. In order to let the controller take its decisions, it must have a global view of the network. This includes the topology of the network and its link capacities, along with other possible performance metrics such as delays, loss rates, and available bandwidths.
This knowledge can enable multi-class routing, or help guarantee levels of Quality of Service. The contributions of this thesis are new algorithms that allow a centralized entity, such as the controller in an SDN network, to accurately estimate the end-to-end delay for a given flow in its network. The proposed methods are passive in the sense that they do not require any additional traffic to be run. More precisely, we study the expectation and the standard deviation of the delay. We show how the first moment can be easily computed. On the other hand, estimating the standard deviation is much more complex because of the correlations existing between the different waiting times. We show that the proposed methods are able to capture these correlations between delays and thus provide accurate estimations of the standard deviation of the end-to-end delay. Simulations that cover a large range of possible scenarios validate these results.
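The reason the standard deviation is harder than the mean, as the abstract notes, is that per-hop delays are correlated: for D = D1 + D2, the mean always composes as E[D] = E[D1] + E[D2], but Var(D) = Var(D1) + Var(D2) + 2·Cov(D1, D2). A small numerical sketch with synthetic correlated waiting times (illustrative data, not from the thesis):

```python
import math
import random
import statistics

random.seed(1)
# Two per-hop waiting times sharing a common congestion component, hence
# positively correlated -- a stand-in for real queueing measurements.
common = [random.expovariate(1.0) for _ in range(20000)]
d1 = [c + random.expovariate(2.0) for c in common]
d2 = [c + random.expovariate(2.0) for c in common]
end_to_end = [a + b for a, b in zip(d1, d2)]

mean_sum = statistics.mean(d1) + statistics.mean(d2)   # composes exactly
naive_sd = math.sqrt(statistics.variance(d1) + statistics.variance(d2))
true_sd = statistics.stdev(end_to_end)
print(f"mean: composed = {mean_sum:.3f}, measured = {statistics.mean(end_to_end):.3f}")
print(f"std:  naive independence = {naive_sd:.3f}, measured = {true_sd:.3f}")
```

Assuming independence silently drops the covariance term and underestimates the end-to-end standard deviation, which is exactly the gap the thesis's estimators are designed to close.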
68

Allocation temporelle de systèmes avioniques modulaires embarqués / Temporal allocation in distributed modular avionics systems

Badache, Nesrine 27 May 2016 (has links)
L'évolution des architectures des systèmes embarqués temps réel vers des architectures modulaires a permis d'introduire plus de fonctionnalités grâce à l'utilisation de calculateurs répartis et d'interfaces de communication et de service standardisés. Nous nous intéressons dans cette thèse à l'architecture avionique modulaire (IMA) des standards ARINC 653 et ARINC 664 partie 7. Cette évolution a introduit de nouveaux défis de conception relatifs, entre autres, au respect des contraintes temporelles applicatives nécessaires au bon fonctionnement du système. La conception d'un système modulaire est un problème d'intégration sous contraintes, qui regroupe plusieurs problèmes difficiles (dimensionnement, allocation de ressource spatiales et temporelles). Ces difficultés requièrent la mise en place d'outils d'aide à l'intégration qui passent à l'échelle. C'est dans ce cadre-là que ces travaux de thèse ont été menés. Nous nous intéressons principalement à l'allocation des ressources temporelles du système. Plus particulièrement, nous déterminons les périodes d'exécution des fonctions embarquées distribuées qui garantissent les contraintes temporelles applicatives et qui offrent un degré d'évolutivité du système élevé, étant donné une répartition des fonctions sur les calculateurs. Notre démarche prend en compte la variabilité temporelle (bornée) du réseau de communication La première contribution de cette thèse est la formulation du problème d'intégration d'un système modulaire IMA en un problème d'optimisation multi-critère à contraintes temporelles. Pour une distribution des fonctions avioniques aux calculateurs, la périodicité des partitions IMA est recherchée de façon à garantir la fraîcheur et la non-perte des données transmises. Parmi toutes les allocations temporelles vérifiant les contraintes temporelles, nous réalisons une recherche multi-critères qui optimise à la fois un critère de charge des calculateurs et de marge temporelle dans le réseau. 
Ces deux critères facilitent les évolutions futures de l’architecture. La seconde contribution de cette thèse est la proposition de deux heuristiques de recherche multi-critère adaptées à notre problème. Il faut noter que le nombre d'allocations temporelles valides grandit exponentiellement avec le nombre de modules et de partitions hébergées par module. Nous proposons deux algorithmes d'optimisation multi-critères : (i) EXHAUST, un algorithme optimal de recherche exhaustive, (ii) TABOU un algorithme semi-optimal basé sur une métaheuristique Tabou. Pour les deux algorithmes, la cardinalité du problème est réduite par une phase d'optimisation locale à chaque module, rendue possible par la linéarité des deux métriques choisies. Cette première étape d'optimisation locale permet de résoudre à l'optimal le problème d'allocation avec EXHAUST pour un système IMA de taille moyenne. Nous montrons que pour des systèmes de grande taille, l'algorithme TABOU est un très bon candidat car il extrait des solutions satisfaisantes en un temps raisonnable, tout en testant un nombre limité d'allocations valides. Ces deux heuristiques sont appliquées à un système IMA. L'analyse des solutions obtenues nous permet de mettre en exergue la qualité des solutions Pareto-optimales obtenues par les deux algorithmes. Elles présentent les caractéristiques recherchées d'évolutivité de la charge des calculateurs et de la marge réseau. Notre dernière contribution réside dans une analyse fine de ces solutions. L'analyse met en avant différentes classes de solutions Pareto-optimales avec différent compromis entre la charge et la marge réseau. La connaissance de ces classes de solutions permet à l'intégrateur de choisir une solution lui fournissant le compromis qu'il recherche entre les critères de charge et de marge réseau. 
/ The evolution of real-time embedded system architectures toward modular architectures has introduced more functionality through the use of distributed computers and standardized communication and service interfaces. In this thesis we focus on Integrated Modular Avionics (IMA) architectures, standardized in ARINC 653 and ARINC 664 Part 7. This evolution has introduced new design challenges, among them meeting the application timing constraints required for the proper functioning of the system. The design of a modular system is an integration problem under constraints, which combines several difficult issues (sizing, spatial and temporal resource allocation). These difficulties call for scalable integration-support tools, and it is in this context that this thesis work was conducted. We are primarily interested in the allocation of the temporal resources of the system. In particular, given a distribution of functions on the computing modules, we determine the execution periods of the distributed embedded functions that guarantee the application timing constraints and offer a high degree of system scalability. Our approach takes into account the (bounded) temporal variability of the communication network. The first contribution of this thesis is the formulation of the integration problem of an IMA system as a multi-criteria optimization problem under timing constraints. For a given distribution of avionics functions on the computing modules, execution periods of the IMA partitions are sought that ensure the freshness and non-loss of the transmitted data. Among all temporal allocations satisfying the timing constraints, we perform a multi-criteria search that optimizes both a computing-load criterion and a network time-margin criterion. These two criteria facilitate future evolutions of the architecture. The second contribution of this thesis is the proposal of two multi-criteria search heuristics adapted to our problem.
Note that the number of valid temporal allocations grows exponentially with the number of modules and of partitions hosted on them. We propose two multi-criteria optimization algorithms: (i) EXHAUST, an optimal exhaustive search algorithm, and (ii) TABOO, a semi-optimal algorithm based on the Tabu metaheuristic. For both algorithms, the cardinality of the problem is reduced by a local optimization phase for each module, made possible by the linearity of the two selected metrics. This first local optimization step allows the allocation problem to be solved optimally with EXHAUST for an IMA system of medium size. We show that for large systems, the TABOO algorithm is a very good candidate, because it extracts satisfactory solutions in a reasonable time while testing a limited number of valid allocations. These two heuristics are applied to an example IMA system. The analysis of the solutions obtained allows us to highlight the quality of the Pareto-optimal solutions produced by both algorithms; they exhibit the desired scalability characteristics in terms of computing load and network margin. Our last contribution is a detailed analysis of these solutions. The analysis highlights different classes of Pareto-optimal solutions with different compromises between the load of the system and the network margin. Knowledge of these classes allows the system integrator to choose a solution offering the compromise he seeks between the load and network-margin criteria.
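Both algorithms ultimately report Pareto-optimal allocations under the two criteria (minimize computing load, maximize network margin). The dominance filter at the heart of such a multi-criteria search can be sketched as follows; the candidate values are hypothetical, not taken from the thesis:

```python
def pareto_front(points):
    """Keep the points not dominated by any other, minimizing load and
    maximizing margin: q dominates p if q.load <= p.load and
    q.margin >= p.margin, with the two points distinct."""
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] >= p[1]
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# (load, margin) of hypothetical candidate temporal allocations
candidates = [(0.60, 4.0), (0.55, 3.0), (0.70, 6.0), (0.60, 3.5), (0.80, 5.0)]
print(pareto_front(candidates))
```

An exhaustive search applies this filter to every valid allocation, while a Tabu-style search applies it only to the allocations it visits, trading optimality for running time.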
69

END-TO-END TIMING ANALYSIS OF TASK-CHAINS

Jin, Zhiqun, Zhu, Shijie January 2017 (has links)
Many automotive systems are real-time systems, which means that not only correct operations but also appropriate timings are their main requirements. Considering the influence that end-to-end delay might have on the performance of these systems, its calculation is a necessity. Abundant techniques have been proposed, and some of them have already been applied in practical systems. In spite of this, some further work still needs to be done. The target of this thesis is to evaluate and compare two end-to-end timing analysis methods from different aspects, such as data age and consumption time, and then decide which method is the preferable choice for end-to-end timing analysis. The experiments can be divided into three blocks: system generation, and end-to-end delay calculation by the two methods respectively. The experiments focus on two kinds of performance parameters: data age, and the consumption time that the two methods cost during their execution. By changing the system-generation parameters, such as task number and periods, the changes in performance of the two methods are analyzed. The performances of the two different methods are also compared when they are applied to the same automotive systems. According to the results of the experiments, the second method can calculate a more accurate data age and consume less time than the first method does.
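The data-age metric compared above can be over-approximated for a chain of periodic, register-communicating tasks by summing each task's period and worst-case response time. This is one of several published coarse bounds, not necessarily either of the two methods evaluated in the thesis, and the chain below is hypothetical:

```python
def data_age_bound(chain):
    """Coarse upper bound on end-to-end data age for a chain of periodic
    tasks communicating through shared registers: each hop can add up to
    one period of sampling delay plus the reader's worst-case response
    time. `chain` is a list of (period, wcrt) pairs in one time unit."""
    return sum(period + wcrt for period, wcrt in chain)

# Hypothetical three-task chain (sensor -> filter -> actuator), times in ms
chain = [(10, 2), (20, 5), (5, 1)]
print(f"data age bound = {data_age_bound(chain)} ms")
```

Tighter analyses track which task activations can actually propagate a given sample, which is why the two methods in the thesis can differ both in accuracy and in computation time.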
70

Formalisme pour la conception haut-niveau et détaillée de systèmes de contrôle-commande critiques / Formalism for the high-level design of hard real-time embedded systems

Garnier, Ilias 10 February 2012 (has links)
L’importance des systèmes temps-réels embarqués dans les sociétés industrialisées modernes en font un terrain d’application privilégié pour les méthodes formelles. La prépondérance des contraintes temporelles dans les spécifications de ces systèmes motive la mise au point de solutions spécifiques. Cette thèse s’intéresse à une classe de systèmes temps-réels incluant ceux développés avec la chaîne d’outils OASIS, développée au CEA LIST. Nos travaux portent sur la notion de délai de bout-en-bout, que nous proposons de modéliser comme une contrainte temporelle concernant l’influence du flot d’informations des entrées sur celui des sorties. Afin de répondre à la complexité croissante des systèmes temps-réels, nous étudions l’applicabilité de cette notion nouvelle au développement incrémental par raffinement et par composition. Le raffinement est abordé sous l’angle de la conservation de propriétés garantes de la correction du système au cours du processus de développement. Nous délimitons les conditions nécessaires et suffisantes à la conservation du délai de bout-en-bout lors d’un tel processus. De même, nous donnons des conditions suffisantes pour permettre le calcul du délai de bout-en-bout de manière compositionnelle. Combinés, ces résultats permettent d’établir un formalisme permettant la preuve du délai de bout-en-bout lors d’une démarche de développement incrémentale. / Real-time embedded systems are at the core of modern industrialized societies. They are a privileged target for the application of formal methods. The importance of real-time constraints in the specification of these systems requires the design of ad-hoc solutions. This work considers a class of real-time systems including those developed using OASIS, a tool-chain targeting hard real-time embedded systems developed at CEA LIST. 
We study the notion of end-to-end delay, which we propose to model as a constraint bearing directly on the influence of the input information flow over the output information flow. In order to cope with the growing complexity of real-time embedded systems, we study the possibility of applying this new notion of delay to the incremental development of such systems, using both stepwise refinement and composition operators. We define the necessary and sufficient conditions for the preservation of the end-to-end delay under stepwise refinement. Similarly, we give sufficient conditions to compute the end-to-end delay in a compositional fashion. Together, these results make it possible to establish a formalism for proving end-to-end delay properties in stepwise development methodologies.
