121

Efficient Simulation for Quantum Message Authentication

Wainewright, Evelyn January 2016
A mix of physics, mathematics, and computer science, the study of quantum information seeks to understand and utilize the information that can be held in the state of a quantum system. Quantum cryptography, in turn, studies cryptographic protocols that operate on the information in a quantum system. One such goal is to verify the integrity of quantum data, a process called quantum message authentication. In this thesis, we consider two quantum message authentication schemes: the Clifford code and the trap code. While both of these codes have previously been proven secure, they have not been proven secure in the simulator model with an efficient simulation. We offer a new class of simulator that is efficient so long as the adversary is efficient, and show that both codes can be proven secure using this efficient simulator. The efficiency of the simulator is typically a crucial requirement for a composable notion of security. The main results of this thesis have been accepted to appear in the Proceedings of the 9th International Conference on Information Theoretic Security (ICITS 2016).
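For orientation, the Clifford code admits a compact textbook description (notation ours, not necessarily the thesis's): the shared key k selects a random Clifford operation C_k, and authentication appends d trap qubits in state |0⟩ before applying it,

\[ \mathrm{Auth}_k(\rho) \;=\; C_k \left( \rho \otimes |0\rangle\langle 0|^{\otimes d} \right) C_k^{\dagger}. \]

Verification applies \(C_k^{\dagger}\) and accepts only if all d trap qubits measure to 0; tampering disturbs the traps with high probability. The simulator question is then whether the ideal-world simulator that reproduces the adversary's effect can itself run with resources polynomial in the adversary's.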
122

Genetic analysis of the gene Additional sex combs and interacting loci

Nicholls, Felicity K. M. January 1990
In order to recover new mutant alleles of the Polycomb group gene Additional sex combs (Asx), mutagenized chromosomes were screened over the putative Asx allele XT129. Thirteen new mutant strains that fail to complement XT129 were recovered. Unexpectedly, the thirteen strains sort into four complementation groups, and recombination mapping suggests that each complementation group represents a separate locus. The largest group fails to complement a deletion of Asx and maps in the vicinity of 2-72, the published location of Asx. All of the new mutant strains enhance the phenotype of Polycomb mutant flies and are not allelic to any previously discovered second-chromosome Polycomb group gene; the new mutants may therefore be considered putative new members of the Polycomb group. This study suggests that Asx belongs to a sub-group of genes displaying intergenic non-complementation.
123

La politique sociale napoléonienne : De la charité chrétienne à une politique sociale d’état : L’organisation du salut public sous le Consulat et l’Empire : 1785 – 1815 / Napoleonic social policy : from Christian charity to state social policy : the organisation of public salvation under the Consulate and First Empire : (1785 – 1815)

Calland-Jackson, Paul-Napoléon 02 July 2015
The revolutionaries of 1789–1799 abolished the corps intermédiaires standing between the State and the People: according to the Declaration of the Rights of Man, no body and no individual was to come between the government and the people. Thus the Le Chapelier laws (among others) abolished the guilds, and successive governments attempted to eradicate the counter-powers of the regions and of local "feudalisms". When Napoleon Bonaparte took charge of the State in November 1799, the country was therefore in search of new bearings, and the head of the new government installed in February 1800 aimed to lay down "masses of granite", that is to say, stable institutions. The creation of the Bank of France, the Prefects, the Lycées, the Baccalaureate, and the Legion of Honour are well-known examples among many others. The subject of this thesis is less well known, except perhaps to students and teachers of law: at the heart of the new Civil Code of the French lies the "spirit of fraternity" expressed in the Declaration of the Rights of Man and of the Citizen and in the Constitution of 5 Fructidor. Since the Concordat, the Catholic Church was no longer the official State religion but only the majority religion, and the State replaced the duty of charity with a civil fraternity. The First Consul (soon to be Emperor) added a clause to the Civil Code stipulating that parents must provide for their children, even as adults, when the latter are unable to do so, and vice versa.
Across the Consulate and First Empire, this thesis traces the development of structures of social solidarity, particularly in legislation but also in the institutions and policies of the State during this period. We study (among other topics) the Civil Code in its context, the Maisons d'Education de la Légion d'Honneur, labour legislation (notably on child labour), the mutual aid societies (predecessors of today's mutual insurance companies and trade unions), and the welfare administrations. In conclusion we also look at the unfinished projects pursued under later regimes, the better to situate this era in relation to the twenty-first century. The Consulate and Empire were a great period for the creation of retirement pension funds, and the Emperor Napoleon himself set down the principles that were to govern this "right", which he wanted to extend to all trades. Our thesis therefore follows the creation of these institutions and the framing of daily life according to Napoleonic principles, a synthesis of the Ancien Régime and the ideals of 1789.
124

Low-code-plattformar: En översikt : Undersökning via applikationsutveckling / Low-Code Platforms: A review : Exploration via application development

Berdén, Daniel, Traxler, Johan January 2021
Development of digital solutions that seamlessly integrate and display relevant information has become one of the most important tools for many companies, and low-code platforms have been created to simplify this work. The question is whether these platforms succeed in this goal, and to what extent. This paper aims to examine that question and arrive at a guiding answer. First the low-code concept was examined, and then three platforms were selected: OutSystems, Microsoft Power Apps, and Mendix. To investigate and evaluate these platforms, a demo application was designed and subsequently implemented in each of them. The result is a description of the development work and an evaluation of the various platforms. The evaluation concluded that the investigated low-code platforms contribute to smoother development, with a focus on user interfaces, but that despite their simpler systems they still require some development experience for efficient use. Proposals for future work in the area, building on the work presented in the report, are also described.
125

Clean Code in Practice : Developers' perception of clean code

Ljung, Kevin January 2021
Context. There is a need for developers to write clean code that adheres to a high quality standard, and not to introduce technical debt and code smells. From a business perspective, developers who introduce technical debt make the code more difficult to maintain, which increases the cost of the project. Objectives. The main objective of this study is to understand how developers perceive clean code and how they apply it in practice. There is not much information about either question, and this thesis extends what is known in both areas. Realization (Method). To understand the state of the art, we first performed a literature review using snowballing. To delve into developers' perception of clean code and how it is used in practice, we then developed a questionnaire survey, sent it to developers within companies, and shared it via social networks. We ask whether developers believe that clean code eases the process of reading, modifying, reusing, or maintaining code, and whether they write clean code initially, refactor code until it becomes clean, or do neither. Finally, we ask developers which clean code principles they agree or disagree with, to identify which principles practitioners find helpful. Results. Developers strongly believe in clean code and that it positively affects reading, modifying, reusing, and maintaining code. Most developers do not write clean code initially but rather refactor unclean code until it is clean; only a small portion write clean code from the start, some do whatever suits the situation, and some do neither. Developers agree with most of the clean code principles listed in the questionnaire and reject only a few. Conclusions. From the first research question, we know that developers strongly believe clean code makes code more readable, understandable, modifiable, and reusable, and that they check readability through code reviews, peer reviews, or pull requests. Regarding the second research question, developers mostly refactor unclean code rather than writing clean code initially: writing clean code up front requires a solid understanding of the problem and its obstacles in advance, and a developer does not always know beforehand what the code should look like. The last research question showed that most developers agree with most clean code principles and that only a small portion disagree with some of them. Static code analysis and code quality gates can help ensure that developers follow these clean code practices and principles.
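To make the survey's subject concrete, a small illustrative pair (ours, not drawn from the questionnaire) contrasts code before and after applying common clean code principles such as intention-revealing names and single responsibility:

    # Illustrative only: the same logic before and after common clean code
    # principles (descriptive names, flat structure, one job per function).

    def proc(d):  # before: opaque name, nested conditionals
        r = []
        for x in d:
            if x is not None:
                if x > 0:
                    r.append(x * 2)
        return r

    def double_positive_values(values):  # after: the intent is in the name
        """Return each positive value doubled, skipping None entries."""
        return [v * 2 for v in values if v is not None and v > 0]

    assert proc([1, None, -2, 3]) == double_positive_values([1, None, -2, 3]) == [2, 6]

Refactoring from the first form to the second is exactly the "refactor unclean code to become clean" path most respondents reported taking.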
126

An empirical investigation on modern code review focus areas

Jiang, Zhiyu, Ma, Bowen January 2020
Background: In a long-lived project, an effective code review process is key to ensuring the long-term quality of the code base. As software grows in size, formal code inspections, despite their many benefits, require too much time and manpower to remain practical in larger projects, and more and more companies instead perform modern code reviews to increase program quality. Only a few papers have studied the relationship between code reviewers and code review quality, so we need to explore the relationships among code review, code complexity, and reviewers: which parts of the code reviewers pay more attention to, and how much effort reviewing takes, so that code reviews can be conducted more effectively. Objectives: The objective of our study is to investigate whether code complexity relates to how software developers review code, in terms of review length, review frequency, review text quality, and reviewer sentiment, and whether the reviewer's experience affects code review quality, in order to find a suitable way to conduct code reviews for code of different complexity. Methods: We conduct an exploratory case study; the case and unit of analysis is the open-source project Cassandra. From Cassandra's Jira issue tracker we extract the reviewer's name, review content, review time, reviewer's comments, reviewer's sentiment, comment length, and the reviewed Java file. We then use CodeMR, which applies coupling and code complexity metrics, to calculate the complexity of each file, and a text-analysis API to gauge the reviewer's sentiment. After collecting these data, we perform a statistical analysis in SPSS to find whether code complexity is related to these factors. We also held a workshop and sent out questionnaires to collect further input from Cassandra developers. Results: Code review frequency is related to code complexity: complex code requires more review. Reviewer sentiment is also related to code complexity: sentiment toward complex code is more often positive or negative rather than neutral. Review text quality is related to the reviewer's experience: experienced reviewers leave higher-quality comments than novice reviewers. On the other hand, review length and review text quality are not related to code complexity. Conclusions: Code with higher complexity is reviewed more frequently, and reviewers' emotions are clearer when they review more complex code. Training reviewers so that they gain experience is also necessary, because experienced reviewers review code with higher quality. From the questionnaire, we know developers believe that more complex code needs more iterations of code review and that experienced reviewers have a positive effect on it, which offers guidance on how to review code at different levels of complexity.
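The statistical step can be pictured with a short sketch. The authors used SPSS, so the Python below, with invented data, only illustrates the kind of correlation test involved:

    # Hypothetical per-file data: complexity scores (as a tool like CodeMR
    # might report) and review iteration counts mined from Jira.
    from scipy.stats import spearmanr

    complexity = [3.1, 7.4, 2.0, 9.8, 5.5, 6.2, 1.4, 8.9]
    review_rounds = [1, 3, 1, 4, 2, 3, 1, 4]

    rho, p_value = spearmanr(complexity, review_rounds)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
    # A positive rho with a small p-value would support the finding that
    # more complex code receives more rounds of review.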
127

Génération stratégique de code pour la maîtrise des performances de systèmes temps-réel embarqués / Strategic generation of code to master the performances of real-time embedded systems

Cadoret, Fabien 26 May 2014
We focus on real-time embedded critical systems (RTECS), which raise problems of criticality, respect of timing constraints, and availability of resources such as memory. To master the design complexity of such systems, Model-Driven Engineering (MDE) proposes to model them, both for analysis against their requirements and to generate part of their execution code. These two phases must, however, be connected correctly, so that the generated system always respects the properties of the model that was initially analysed. In addition, the code generator must adapt to multiple criteria: in particular, to guarantee performance requirements, or to target different execution platforms, each with its own constraints and execution semantics. To achieve this adaptation, the development process must allow the transformation rules to evolve according to these criteria, and its architecture must allow the selection of software components that meet them. We answer this problem with a generation process based on MDE. Once the user has specified and validated a high-level model, a model transformation automatically translates it into a second, detailed model close to the generated code. To preserve the requirements, the detailed model is expressed in the same formalism as the initial model, so that it remains analysable by the same tools. This approach determines the impact of the generation strategy on the performance of the final system and lets the generator change strategy at a given step to ensure that the system's constraints are met. To ease the development and selection of alternative strategies, we propose a methodology built around a formalism for orchestrating transformations, a set of transformation patterns (which factor out and generalize the transformation rules), and an adaptation of software components according to their impact on performance. We implemented this process in the OSATE environment, for which we developed the RAMSES framework (Refinement of AADL Models for Synthesis of Embedded Systems), and experimented with it on the generation of inter-task communications, for which several implementation strategies were defined.
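To convey the flavor of performance-driven strategy selection (names and numbers invented; RAMSES's actual transformation orchestration is far richer), a sketch might pick the inter-task communication strategy whose analyzed cost fits the system's constraints:

    # Toy sketch: each candidate generation strategy carries the cost
    # estimates an analysis step would produce; the generator picks the
    # first candidate that satisfies the system's constraints.
    from dataclasses import dataclass

    @dataclass
    class Strategy:
        name: str
        latency_us: int      # estimated worst-case communication latency
        memory_bytes: int    # estimated buffer memory footprint

    CANDIDATES = [
        Strategy("shared-buffer", latency_us=5, memory_bytes=4096),
        Strategy("message-queue", latency_us=20, memory_bytes=512),
    ]

    def select_strategy(max_latency_us, max_memory_bytes):
        """Return the first strategy whose analyzed cost fits the constraints."""
        for s in CANDIDATES:
            if s.latency_us <= max_latency_us and s.memory_bytes <= max_memory_bytes:
                return s
        raise ValueError("no generation strategy satisfies the constraints")

    print(select_strategy(max_latency_us=25, max_memory_bytes=1024).name)
    # -> message-queue: the low-latency strategy is rejected for its memory cost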
128

Neural Network-based Methodologies for Securing Cryptographic Code

Xiao, Ya 17 August 2022
Many studies show that manual code generation is error-prone and results in vulnerabilities, and vulnerability fixing has been shown to be the most time-consuming step of code repair. To help developers repair these security vulnerabilities, my dissertation aims to develop an automatic or semi-automatic secure code generation system based on neural networks. Trained on large amounts of good-quality code, the neural network is expected to learn secure usage patterns and produce correct code suggestions. Despite the great success of neural networks, the vision of comprehending and generating programming languages through them has not been fully realized, and many fundamental questions remain: 1) What are the accuracy impacts of the various choices in code embedding? 2) How can the accuracy challenges caused by programming-language-specific properties be addressed in the task of secure code suggestion? My dissertation answers these two questions with a systematic measurement study and specialized neural network designs. My experiments show that program analysis is a necessary preprocessing step to guide code embedding, resulting in a 36.1% accuracy improvement. Furthermore, I identify two previously unreported deficiencies in the cryptographic API suggestion task. To close the gap, I present a highly accurate API method suggestion solution, referred to as Multi-HyLSTM, with specialized neural network designs that recognize unique programming language characteristics. My work points out important differences between natural languages and programming languages, which pure data-driven learning approaches may not capture.
Neural network techniques that automatically learn rules from data show great potential to provide vulnerability-agnostic solutions for securing code. The research community has witnessed rapid progress of neural network techniques in application domains such as computer vision and natural language processing; how to harness that success for dealing with programs, however, is still largely unknown, and many fundamental questions remain to be answered. This dissertation provides neural-network-based solutions that help developers write secure code, and answers several important open research questions about adapting neural network approaches to the programming language domain. Learning from Java cryptographic code, I explore the accuracy challenges neural networks face in understanding secure API usage rules and generating appropriate suggestions from them. One focus is how to express code in a way neural networks can comprehend, known as code embedding: the process of transforming code into numeric vectors. Embedding is important for accuracy because all subsequent neural network computation operates on these vectors. I conduct a systematic comparison of several key embedding design choices and reveal their impact on accuracy. To further improve accuracy in the specific task of generating API suggestions, I identify previously unreported, program-dependency-specific challenges and present several specialized neural network designs to address them.
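As a rough sketch of what code embedding means in this setting (a generic LSTM encoder with an invented toy vocabulary; the dissertation's Multi-HyLSTM design is more specialized):

    import torch
    import torch.nn as nn

    # Toy vocabulary of API tokens; a real system builds this from a corpus.
    vocab = {"<pad>": 0, "Cipher.getInstance": 1, "SecureRandom.nextBytes": 2,
             "Cipher.init": 3, "Cipher.doFinal": 4}

    class ApiSequenceEncoder(nn.Module):
        """Embed an API call sequence and summarize it with an LSTM."""
        def __init__(self, vocab_size, embed_dim=32, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

        def forward(self, token_ids):
            vectors = self.embed(token_ids)      # (batch, seq, embed_dim)
            _, (hidden, _) = self.lstm(vectors)  # keep the final hidden state
            return hidden[-1]                    # (batch, hidden_dim) code vector

    seq = torch.tensor([[1, 3, 4]])  # Cipher.getInstance -> init -> doFinal
    code_vector = ApiSequenceEncoder(len(vocab))(seq)
    # code_vector could feed a classifier that suggests the next API method.

The dissertation's point is that which tokens are embedded, and how program analysis groups them, matters as much as the network that consumes the vectors.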
129

Multiuser mobile communication systems

Ma, Ming January 2001
No description available.
130

An optimizing code generator generator.

Wendt, Alan Lee. January 1989
This dissertation describes a system that constructs efficient, retargetable code generators and optimizers. chop reads nonprocedural descriptions of a computer's instruction set and of a naive code generator for the computer, and it writes an integrated code generator and peephole optimizer for it. The resulting code generators are very efficient because they interpret no tables; they are completely hard-coded. Nor do they build complex data structures to communicate between code generation and optimization phases: interphase communication is reduced to the point that the code generator's output is often encoded in the program counter and conveyed to the optimizer by jumping to the right label. chop's code generator and optimizer are based on a very simple formalism, namely rewriting rules. An instrumented version of the compiler infers the optimization rules as it compiles a training suite, and it records them for translation into hard code and inclusion in the production version. I have replaced the Portable C Compiler's code generator with one generated by chop. Despite a costly interface, the resulting compiler runs 30% to 50% faster than the original Portable C Compiler (pcc) and generates comparable code. This figure is diluted by shared lexical analysis, parsing, and semantic analysis, and by comparable code emission; allowing for these, the new code generator appears to run approximately seven times faster than that of the original pcc.
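The rewriting-rule formalism can be mimicked in a few lines of Python (a toy interpretation of the idea; chop itself compiles its inferred rules into hard code rather than interpreting tables like this):

    # Toy peephole optimizer: rules map short instruction patterns to cheaper
    # equivalents, with %-prefixed tokens acting as pattern variables.
    RULES = [
        ([("push", "%a"), ("pop", "%b")], [("mov", "%b", "%a")]),
        ([("add", "%r", "0")], []),  # adding zero is a no-op
    ]

    def match(pattern, window):
        """Unify a rule pattern against an instruction window; return the
        variable bindings, or None if the window does not match."""
        bindings = {}
        for pat, ins in zip(pattern, window):
            if len(pat) != len(ins):
                return None
            for p, i in zip(pat, ins):
                if p.startswith("%"):
                    if bindings.setdefault(p, i) != i:
                        return None
                elif p != i:
                    return None
        return bindings

    def peephole(code):
        """Apply rules anywhere in the instruction list until a fixed point."""
        changed = True
        while changed:
            changed = False
            for idx in range(len(code)):
                for pattern, rewrite in RULES:
                    window = code[idx:idx + len(pattern)]
                    if len(window) < len(pattern):
                        continue
                    bindings = match(pattern, window)
                    if bindings is not None:
                        code[idx:idx + len(pattern)] = [
                            tuple(bindings.get(t, t) for t in ins) for ins in rewrite]
                        changed = True
                        break
                if changed:
                    break
        return code

    print(peephole([("push", "r1"), ("pop", "r2"), ("add", "r2", "0")]))
    # -> [('mov', 'r2', 'r1')]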
