About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Contributions to Audio Steganography: Algorithms and Robustness Analysis

Djebbar, Fatiha 23 January 2012
Digital steganography is a young, flourishing science that has emerged as a prominent means of data security. The primary goal of steganography is to reliably send hidden information secretly, not merely to obscure its presence. It exploits the characteristics of digital media files such as image, audio, video and text, utilizing them as carriers to secretly communicate data. Encryption and watermarking techniques are already used to address concerns related to data security. However, constantly changing attacks on the integrity of digital data require new techniques to break the cycle of malicious attempts and expand the scope of the applications involved. The main objective of steganographic systems is to provide secure, undetectable and imperceptible ways to conceal a high rate of data in a digital medium. Steganography is used under the assumption that it will not be detected if no one is attempting to uncover it. Steganography techniques have found their way into various and versatile applications. Some of these applications are used for the benefit of people; others are used maliciously. The threat posed by criminals, hackers, terrorists and spies using steganography is indeed real. To defeat malicious attempts at secret communication, researchers’ work has lately been extended to include a new, parallel research branch to counter steganography techniques, called steganalysis. The main purpose of steganalysis is to detect the presence or absence of a hidden message, not necessarily its successful extraction. Digital speech, in particular, constitutes a prominent carrier for data hiding across novel telecommunication technologies such as covert voice-over-IP, audio conferencing, etc.

This thesis investigates digital speech steganography and steganalysis and aims at: (1) presenting an algorithm that meets the high data capacity, undetectability and imperceptibility requirements of steganographic systems; (2) controlling the distortion induced by the embedding process; (3) presenting new concepts of spectral embedding areas in the Fourier domain, applicable to both the magnitude and phase spectra; and (4) introducing a simple yet effective speech steganalysis algorithm based on lossless data compression techniques. The steganographic algorithm’s performance is measured by perceptual and statistical evaluation methods. The steganalysis algorithm’s performance, on the other hand, is measured by how well the system can distinguish between stego- and cover-audio signals. The results are very promising and show interesting performance tradeoffs compared with related methods. Future work is based mainly on strengthening the proposed steganalysis algorithm so that it can detect small hiding capacities. As for our steganographic algorithm, we aim at integrating it into some emerging devices such as the iPhone, and at further enhancing its capabilities to ensure hidden-data integrity under severe compression, noise and channel distortion.
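The abstract gives only the design goals, not the algorithm itself. As a loose illustration of Fourier-domain embedding of the kind described in aim (3), and of the compression-based steganalysis idea in aim (4), here is a minimal Python sketch; the bin range, quantization step, and scheme (parity quantization of log-magnitudes) are all invented for illustration and are not the thesis's method.

```python
import numpy as np
import zlib

def embed_bits(frame, bits, bin_range=(40, 48), delta=0.05):
    """Hypothetical magnitude-spectrum embedding, one bit per FFT bin.

    A bit is written by quantizing the log-magnitude of a mid-frequency
    bin to an even (bit 0) or odd (bit 1) multiple of `delta`
    (quantization index modulation). Illustration only."""
    spectrum = np.fft.rfft(frame)
    mag, phase = np.abs(spectrum), np.angle(spectrum)
    for bit, k in zip(bits, range(*bin_range)):
        q = int(np.round(np.log1p(mag[k]) / delta))
        if q % 2 != bit:                    # force parity to carry the bit
            q += 1
        mag[k] = np.expm1(q * delta)
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(frame))

def extract_bits(frame, n_bits, bin_range=(40, 48), delta=0.05):
    mag = np.abs(np.fft.rfft(frame))
    return [int(np.round(np.log1p(mag[k]) / delta)) % 2
            for k in range(*bin_range)][:n_bits]

def compression_score(frame):
    """Crude steganalysis feature in the spirit of aim (4): hidden data
    raises entropy, so stego frames tend to compress worse than clean ones."""
    pcm = np.clip(frame * 32767, -32768, 32767).astype(np.int16)
    return len(zlib.compress(pcm.tobytes())) / pcm.nbytes

rng = np.random.default_rng(0)
frame = rng.normal(scale=0.1, size=512)      # stand-in for a speech frame
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_bits(frame, bits)
assert extract_bits(stego, len(bits)) == bits
print(compression_score(frame), compression_score(stego))
```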
262

Evolvability of a viral protease: experimental evolution of catalysis, robustness and specificity

Shafee, Thomas January 2014
The aim of this thesis is to investigate aspects of molecular evolution and enzyme engineering using the experimental evolution of Tobacco Etch Virus cysteine protease (TEV) as a model. I map key features of the local fitness landscape and characterise how they affect details of enzyme evolution. In order to investigate the evolution of core active site machinery, I mutated the nucleophile of TEV to serine. The differing chemical properties of oxygen and sulphur force the enzyme into a fitness valley with a >10⁴-fold activity reduction. Nevertheless, directed evolution was able to recover function, resulting in an enzyme able to utilise either nucleophile. High-throughput screening and sequencing revealed how the array of possible beneficial mutations changes as the enzyme evolves. Potential adaptive mutations are abundant at each step along the evolutionary trajectory, enriched around the active site periphery. It is currently unclear how seemingly neutral mutations affect further adaptive evolution. I used high-throughput directed evolution to accumulate neutral variation in large, evolving enzyme populations and deep sequencing to reconstruct the complex evolutionary dynamics within the lineages. Specifically I was able to observe the emergence of robust enzymes with improved mutation tolerance whose descendants overtake later populations. Lastly, I investigate how evolvability towards new substrate specificities changed along these neutral lineages, dissecting the different determinants of immediate and long-term evolvability. Results demonstrate the utility of evolutionary understanding to protease engineering. Together, these experiments forward our understanding of the molecular details of both fundamental evolution and enzyme engineering.
263

Robustness of connections to concrete-filled steel tubular columns under fire during heating and cooling

Elsawaf, Sherif Ahmed Elkarim Ibrahim Soliman January 2012
Joint behaviour in fire is currently one of the most important topics of research in structural fire resistance. The collapse of the World Trade Center buildings and the results of the Cardington full-scale eight storey steel framed building fire tests in the UK have demonstrated that steel joints are particularly vulnerable during the heating and cooling phases of fire. The main purpose of this research is to develop robust joints to CFT columns that are capable of providing very high rotational and tying resistances, to make it possible for the connected beam to fully develop catenary action during the heating phase of fire attack and to retain integrity during the cooling phase of fire attack. This research employed the general finite element software ABAQUS to numerically model the behaviour of restrained structural subassemblies of steel beam to concrete filled tubular (CFT) columns and their joints in fire. For validation, this research compared the simulation and test results for 10 fire tests previously conducted at the University of Manchester. It was envisaged that catenary action in the connected beams at very large deflections would play an important role in ensuring robustness of steel framed structures in fire. Therefore, it was vital that the numerical simulations could accurately predict the structural behaviour at very large deflections. In particular, the transitional behaviour of the beam from compression to catenary action presented tremendous difficulties in numerical simulations due to the extremely high rate of deflection increase. This thesis explains a suitable simulation method, based on introducing a pseudo damping factor. The comparison between the FE and the experimental results demonstrates that the 3-D finite element model is able to successfully simulate the fire tests. The validated ABAQUS model was then applied to conduct a thorough set of numerical studies to investigate methods of improving the survival temperatures under heating in fire of steel beam to concrete filled tubular (CFT) column joints using reverse channel connections. This study investigated five different joint types of reverse channel connection: extended endplate, flush endplate, flexible endplate, hybrid flush/flexible endplate and hybrid extended/flexible endplate. The connection details investigated include reverse channel web thickness, bolt diameter and grade, using fire-resistant (FR) steel for different joint components (reverse channel, end plate and bolts) and joint temperature control. The effects of changing the applied beam and column loads were also considered. It is concluded that by adopting some of the joint details to improve the joint tensile strength and deformation capacity, it is possible for the beams to develop substantial catenary action to survive very high temperatures. This thesis also explains the implications for fire resistant design of the connected columns in order to resist the additional catenary force in the beam. The validated numerical model was also used to perform extensive parametric studies on steel framed structures using concrete filled tubular (CFT) columns with flexible reverse channel connection and fin plate connection to find means of reducing the risk of structural failure during cooling.
The results lead to the suggestion that in order to avoid connection fracture during cooling, the most effective and simplest method would be to reduce the limiting temperature of the connected beam by less than 50°C from the limiting temperature calculated without considering any axial force in the beam.
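The pseudo damping factor mentioned above is a general numerical device: a velocity-proportional force that vanishes at equilibrium but tames the rapid deflection growth when the beam response snaps from compression into catenary tension. A toy single-degree-of-freedom sketch of the idea follows; the resistance curve and all constants are invented for illustration, and the actual work uses ABAQUS on full subassembly models.

```python
import numpy as np

def dynamic_relaxation(resistance, load, m=1.0, c=0.8, dt=1e-3,
                       steps=200_000, tol=1e-8):
    """Toy dynamic relaxation with a pseudo damping force c*v.

    `resistance(u)` is the internal resisting force at deflection u.
    The damping force does no work at rest, so the converged state is
    the static solution; it only stabilizes the snap-through when the
    flexural resistance degrades and catenary tension takes over."""
    u, v = 0.0, 0.0
    for _ in range(steps):
        a = (load - resistance(u) - c * v) / m   # damped equation of motion
        v += a * dt                              # semi-implicit Euler
        u += v * dt
        if abs(v) < tol and abs(load - resistance(u)) < tol:
            break
    return u

# Illustrative resistance: flexural capacity that degrades with growing
# deflection (heating), plus a catenary term that hardens at large u.
resist = lambda u: 5.0 * u * np.exp(-u) + 0.4 * u**3
print(dynamic_relaxation(resist, load=3.0))
```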
264

Universal Biology

Mariscal, Carlos January 2014
Our only example of life is that of Earth, which is a single lineage. We know very little about what life would look like if we found evidence of a second origin. Yet there are some universal features of geometry, mechanics, and chemistry that have predictable biological consequences. The surface-to-volume ratio property of geometry, for example, places a maximum limit on the size of unassisted cells in a given environment. This effect is universal, interesting, not vague, and not arbitrary. Furthermore, there are some problems in the universe that life must invariably solve if it is to persist, such as resistance to radiation, faithful inheritance, and resistance to environmental pressures. At least with respect to these universal problems, some solutions must consistently emerge.

In this dissertation, I develop and defend my own account of universal biology, the study of non-vague, non-arbitrary, non-accidental, universal generalizations in biology. In my account, a candidate biological generalization is assessed in terms of the assumptions it makes. A successful claim is accepted only if its justification necessarily makes reference to principles of evolution and makes no reference to contingent facts of life on Earth. In this way, we can assess the robustness with which generalizations can be expected to hold. I contend that using a stringent-enough causal analysis, we are able to gather insight into the nature of life everywhere. Life on Earth may be our single example of life, but this is merely a reason to be cautious in our approach to life in the universe, not a reason to give up altogether.
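The surface-to-volume claim is easy to make concrete: for a sphere, SA/V = 3/r, so supply through the membrane scales like r² while metabolic demand scales like r³. A short sketch with hypothetical uptake and consumption rates (the numbers are invented, only the scaling argument is from the abstract):

```python
# For a spherical cell, surface area 4*pi*r^2 and volume (4/3)*pi*r^3
# give SA/V = 3/r: growth dilutes the cell's uptake capacity.
def sa_to_volume(r):
    return 3.0 / r

# Hypothetical numbers: if the membrane supplies k nutrient units per
# um^2 and the cytoplasm consumes q units per um^3, the largest
# unassisted radius solves k * 4*pi*r^2 = q * (4/3)*pi*r^3, i.e. r = 3k/q.
def max_radius(k=1.0, q=0.05):
    return 3.0 * k / q   # 60 um with these illustrative rates

for r in [1, 10, 60, 100]:
    status = "viable" if r <= max_radius() else "needs transport machinery"
    print(f"r = {r:>3} um: SA/V = {sa_to_volume(r):.3f} -> {status}")
```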
265

Essays on mechanism design under non-Bayesian frameworks

Guo, Huiyi 01 May 2018
One important issue in mechanism design theory is to model agents’ behaviors under uncertainty. The classical approach assumes that agents hold commonly known probability assessments towards uncertainty, which has been challenged by economists in many fields. My thesis adopts alternative methods to model agents’ behaviors. The new findings contribute to understanding how the mechanism designer can benefit from agents’ uncertainty aversion and how she should respond to the lack of information on agents’ probability assessments.

Chapter 1 of this thesis allows the mechanism designer to introduce ambiguity to the mechanism. Instead of informing agents of the precise payment rule that she commits to, the mechanism designer can tell agents multiple payment rules that she may have committed to. The multiple payment rules are called ambiguous transfers. As agents do not know which rule is chosen by the designer, they are assumed to make decisions based on the worst-case scenario. Under this assumption, this chapter characterizes when the mechanism designer can obtain the first-best outcomes by introducing ambiguous transfers. Compared to the standard approach where the payment rule is unambiguous, first-best mechanism design becomes possible under a broader information structure. Hence, there are cases when the mechanism designer can benefit from introducing ambiguity.

Chapter 2 assumes that the mechanism designer does not know agents’ probability assessments about others’ private information. The mechanisms designed to implement the social choice function thus should not depend on the probability assessments; such mechanisms are called robust mechanisms. Different from the existing robust mechanism design literature, where agents are always assumed to act non-cooperatively, this chapter allows them to communicate and form coalitions. This chapter provides necessary and almost sufficient conditions for robustly implementing a social choice function as an equilibrium that is immune to all coalitional deviations. As there are social choice functions that are only implementable with coalitional structures, this chapter provides insights on when agents should be allowed to communicate. As an extension, when the mechanism designer has no information on which coalitions can be formed, this chapter also provides conditions for robust implementation under all coalition patterns.

Chapter 3 assumes that agents are not probabilistic about others’ private information. Instead, when they hold ambiguous assessments about others’ information, they make decisions based on the worst-case belief. This chapter provides necessary and almost sufficient conditions on when a social choice goal is implementable under such a behavioral assumption. As there are social choice goals that are only implementable under ambiguous assessments, this chapter provides insights on what information structure is desirable to the mechanism designer.
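The worst-case reasoning behind ambiguous transfers in Chapter 1 can be illustrated in a stylized two-type example. All payoff numbers below are invented: the point is only that an agent who evaluates each report under the least favorable announced rule can find truth-telling optimal even when no single rule alone would make it so.

```python
import numpy as np

# Two transfer rules the designer announces she may have committed to;
# rows index the agent's true type, columns the agent's report.
transfer_rules = [
    np.array([[1.0, 3.0],
              [0.0, 1.5]]),
    np.array([[1.0, -1.0],
              [2.0,  1.5]]),
]
allocation_value = np.array([[2.0, 0.5],
                             [0.5, 2.0]])  # value of the allocation received

def worst_case_payoff(true_type, report):
    """Maxmin agent: evaluate a report under the least favorable rule."""
    return min(allocation_value[true_type, report] + t[true_type, report]
               for t in transfer_rules)

# Under rule 0 alone, type 0 would misreport (0.5 + 3.0 > 2.0 + 1.0);
# the ambiguity over the two rules removes that incentive.
for true_type in (0, 1):
    best = max(range(2), key=lambda r: worst_case_payoff(true_type, r))
    print(f"type {true_type}: best report = {best}, "
          f"worst-case payoff = {worst_case_payoff(true_type, best):.2f}")
```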
266

Contributions to Optimal Experimental Design and Strategic Subdata Selection for Big Data

January 2020
In this dissertation two research questions in the field of applied experimental design were explored. First, methods for augmenting the three-level screening designs called Definitive Screening Designs (DSDs) were investigated. Second, schemes for strategic subdata selection for nonparametric predictive modeling with big data were developed. Under sparsity, the structure of DSDs can allow for the screening and optimization of a system in one step, but in non-sparse situations estimation of second-order models requires augmentation of the DSD. In this work, augmentation strategies for DSDs were considered, given the assumption that the correct form of the model for the response of interest is quadratic. Series of augmented designs were constructed and explored, and power calculations, model-robustness criteria, model-discrimination criteria, and simulation study results were used to identify the number of augmented runs necessary for (1) effectively identifying active model effects, and (2) precisely predicting a response of interest. When the goal is identification of active effects, it is shown that supersaturated designs are sufficient; when the goal is prediction, it is shown that little is gained by augmenting beyond the design that is saturated for the full quadratic model. Surprisingly, augmentation strategies based on the I-optimality criterion do not lead to better predictions than strategies based on the D-optimality criterion. Computational limitations can render standard statistical methods infeasible in the face of massive datasets, necessitating subsampling strategies. In the big data context, the primary objective is often prediction, but the correct form of the model for the response of interest is likely unknown. Here, two new methods of subdata selection were proposed. The first is based on clustering, the second is based on space-filling designs, and both are free from model assumptions. The performance of the proposed methods was explored visually via low-dimensional simulated examples, via real data applications, and via large simulation studies. In all cases the proposed methods were compared to existing, widely used subdata selection methods. The conditions under which the proposed methods provide advantages over standard subdata selection strategies were identified.
Doctoral Dissertation (Statistics), 2020.
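The clustering-based subdata idea admits a simple sketch: cluster the predictor matrix and keep the observation nearest each centroid, so the subsample covers the predictor space with no model assumptions. The version below is a minimal illustration using scikit-learn; the data sizes and parameters are invented and this is not necessarily the dissertation's exact scheme.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_subdata(X, n_sub, random_state=0):
    """Model-free subdata selection: one representative per k-means
    cluster, chosen as the point nearest that cluster's centroid."""
    km = KMeans(n_clusters=n_sub, n_init=10,
                random_state=random_state).fit(X)
    idx = np.empty(n_sub, dtype=int)
    for j, center in enumerate(km.cluster_centers_):
        members = np.where(km.labels_ == j)[0]
        dists = np.linalg.norm(X[members] - center, axis=1)
        idx[j] = members[np.argmin(dists)]
    return idx

rng = np.random.default_rng(1)
X = rng.normal(size=(20_000, 5))          # stand-in for "big data" predictors
subset = cluster_subdata(X, n_sub=200)    # 1% strategic subsample
print(subset.shape)                       # indices to fit the model on
```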
267

Robustness Evaluation of Long Span Truss Bridge Using Damage Influence Lines

Mya, San Wai 23 March 2020
Kyoto University. Doctor of Philosophy (Engineering), degree no. 甲第22417号 (工博第4678号). Department of Civil and Earth Resources Engineering, Graduate School of Engineering. Examiners: Prof. Yoshikazu Takahashi, Prof. Junji Kiyono, Prof. Tomomi Yagi.
268

Sensitivity analysis of different forms of state observers

Kadlec, Milan January 2012
This master's thesis is focused on the sensitivity analysis of selected kinds of state observers (reconstructors). They are realized in a general form, via direct and parallel programming. The quantity used to judge sensitivity is the difference between the output signal of the observer and that of the system in its general form. Testing is based on different initial state conditions and on changes to the parameters of the feedback matrix A of the tested observers.
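A minimal version of the comparison described: build a Luenberger observer for a nominal system, perturb an entry of the plant's A matrix, and score sensitivity by the accumulated output difference between plant and observer. All matrices below are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Nominal system x' = A x, y = C x, and Luenberger observer
# xhat' = A xhat + L (y - C xhat), with L placing A - L C stable.
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[4.0],
              [3.0]])

def output_error(A_true, x0, xhat0, dt=1e-3, T=10.0):
    """Euler-integrate plant (A_true) and observer (nominal A) and
    return the integrated output discrepancy |y - yhat| dt."""
    x, xhat, err = x0.copy(), xhat0.copy(), 0.0
    for _ in range(int(T / dt)):
        y = C @ x
        err += abs((y - C @ xhat).item()) * dt
        x = x + dt * (A_true @ x)
        xhat = xhat + dt * (A @ xhat + L @ (y - C @ xhat))
    return err

x0, xhat0 = np.array([[1.0], [0.0]]), np.zeros((2, 1))
for da in (0.0, 0.1, 0.5):            # perturb one entry of A in the plant
    A_true = A + np.array([[0.0, 0.0], [da, 0.0]])
    print(f"dA21 = {da:>3}: integrated output error = "
          f"{output_error(A_true, x0, xhat0):.4f}")
```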
269

Informed Non-Negative Matrix Factorization for Source Apportionment

Chreiky, Robert 19 December 2017
Source apportionment for air pollution may be formulated as an NMF problem by decomposing the data matrix X into a product of two non-negative factors G and F, respectively the contribution matrix and the profile matrix. Usually, chemical data are corrupted with a significant proportion of abnormal data. Despite the interest of the community in NMF methods, they suffer from a lack of robustness to even a few abnormal data points and to initial conditions, and they generally yield multiple minima. To this end, this thesis is oriented on the one hand towards robust NMF methods and on the other hand towards informed NMF using specific prior knowledge. Two types of knowledge are introduced on the profile matrix F. The first assumption is exact knowledge of some components of the matrix F, and the second is a sum-to-1 constraint on each row of F. A parametrization able to handle both pieces of information is developed, and update rules are proposed within the constraint space at each iteration. These formulations have been applied to two kinds of robust cost functions, namely the weighted Huber cost function and the weighted αβ divergence. The target application, identifying the sources of particulate matter in the air in the coastal area of northern France, shows the relevance of the proposed methods. In the numerous experiments conducted on both synthetic and real data, the effect and the relevance of the different pieces of information are highlighted, making the factorization results more reliable.
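The two pieces of expert knowledge, fixed entries of F and sum-to-1 rows, can be grafted onto plain multiplicative NMF updates as a simplified sketch. The version below uses the Frobenius cost rather than the thesis's robust Huber/αβ costs and dedicated parametrization, and all sizes and known values are invented for illustration.

```python
import numpy as np

def informed_nmf(X, k, F_known, n_iter=500, eps=1e-9, seed=0):
    """Frobenius-norm NMF with two constraints on the profile matrix F:
    entries where F_known is not NaN are clamped to their known value,
    and each row of F sums to 1 (the free entries absorb the slack)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G, F = rng.random((n, k)), rng.random((k, m))
    mask = ~np.isnan(F_known)
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + eps)        # multiplicative update
        F[mask] = F_known[mask]                     # clamp known entries
        free_mass = 1.0 - np.where(mask, F, 0.0).sum(axis=1)
        free_sum = np.where(mask, 0.0, F).sum(axis=1) + eps
        F = np.where(mask, F, F * (free_mass / free_sum)[:, None])
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

# Toy usage: 2 sources, 4 chemical species, first profile partly known.
X = np.random.default_rng(1).random((50, 4))
F_known = np.full((2, 4), np.nan)
F_known[0, :2] = [0.6, 0.1]          # stand-in for expert knowledge
G, F = informed_nmf(X, k=2, F_known=F_known)
print(F.round(3))                    # rows sum to 1, known entries kept
```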
270

Robust shape approximation and mapping between surfaces

Mandad, Manish 29 November 2016
This thesis is divided into two independent parts. In the first part, we introduce a method that, given an input tolerance volume, generates a surface triangle mesh guaranteed to be within the tolerance, intersection free and topologically correct. A pliant meshing algorithm is used to capture the topology and discover the anisotropy in the input tolerance volume in order to generate a concise output. We first refine a 3D Delaunay triangulation over the tolerance volume while maintaining a piecewise-linear function on this triangulation, until an isosurface of this function matches the topology sought after. We then embed the isosurface into the 3D triangulation via mutual tessellation, and simplify it while preserving the topology. Our approach extends to surfaces with boundaries and to non-manifold surfaces. We demonstrate the versatility and efficacy of our approach on a variety of data sets and tolerance volumes.

In the second part we introduce a new approach for creating a homeomorphic map between two discrete surfaces. While most previous approaches compose maps over intermediate domains, which results in suboptimal inter-surface mapping, we directly optimize a map by computing a variance-minimizing mass transport plan between two surfaces. This non-linear problem, which amounts to minimizing the Dirichlet energy of both the map and its inverse, is solved using two alternating convex optimization problems in a coarse-to-fine fashion. Computational efficiency is further improved through the use of Sinkhorn iterations (modified to handle minimal regularization and unbalanced transport plans) and diffusion distances. The resulting inter-surface mapping algorithm applies to arbitrary shapes robustly and efficiently, with little to no user interaction.
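The Sinkhorn iterations named in the second part are standard enough to sketch: entropy-regularized optimal transport computed by alternating row and column scalings of a Gibbs kernel. The sketch below uses uniform weights and a squared-Euclidean cost on toy point sets; the thesis modifies the scheme to handle minimal regularization and unbalanced plans.

```python
import numpy as np

def sinkhorn(cost, epsilon=0.05, n_iter=500):
    """Entropy-regularized OT: transport plan P minimizing
    <P, cost> - epsilon * H(P), with uniform marginals a and b."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / epsilon)        # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):            # alternating marginal corrections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy "surfaces": two point sets sampled in R^3.
rng = np.random.default_rng(0)
X, Y = rng.random((100, 3)), rng.random((120, 3))
cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
P = sinkhorn(cost)
print(P.shape, P.sum())   # (100, 120) plan of total mass ~1
```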
