421

Ανάπτυξη εξομοιωτή σφαλμάτων για σφάλματα μετάβασης σε ψηφιακά ολοκληρωμένα κυκλώματα / Development of a fault simulator for transition faults in digital integrated circuits

Κασερίδης, Δημήτριος 26 September 2007 (has links)
Η μεταπτυχιακή αυτή εργασία μπορεί να χωριστεί σε δύο λογικά μέρη (Μέρος Α’ και Μέρος Β’). Το πρώτο μέρος αφορά τον έλεγχο ορθής λειτουργίας ψηφιακών κυκλωμάτων χρησιμοποιώντας το μοντέλο των Μεταβατικών (Transient) σφαλμάτων και πιο συγκεκριμένα περιλαμβάνει την μελέτη για το μοντέλο, τρόπο λειτουργίας και την υλοποίηση ενός Εξομοιωτή Μεταβατικών Σφαλμάτων (Transition Faults Simulator). Ο εξομοιωτής σφαλμάτων αποτελεί το πιο σημαντικό μέρος της αλυσίδας εργαλείων που απαιτούνται για τον σχεδιασμό και εφαρμογή τεχνικών ελέγχου ορθής λειτουργίας και η ύπαρξη ενός τέτοιου εργαλείου επιτρέπει την μελέτη νέων τεχνικών ελέγχου κάνοντας χρήση του Μεταβατικού μοντέλου σφαλμάτων. Το δεύτερο μέρος της εργασίας συνοψίζει την μελέτη που πραγματοποιήθηκε για την δημιουργία ενός νέου αλγόριθμου επιλογής διανυσμάτων ελέγχου στην περίπτωση των Test Set Embedding τεχνικών ελέγχου. Ο αλγόριθμος επιτυγχάνει σημαντικές μειώσεις τόσο στον όγκο των απαιτούμενων δεδομένων που είναι απαραίτητο να αποθηκευτούν για την αναπαραγωγή του ελέγχου, σε σχέση με τις κλασικές προσεγγίσεις ελέγχου, όσο και στο μήκος των απαιτούμενων ακολουθιών ελέγχου που εφαρμόζονται στο υπό-έλεγχο κύκλωμα σε σχέση με προγενέστερους Test Set Embedding αλγορίθμους. Στο τέλος του μέρους Β’ προτείνεται μία αρχιτεκτονική για την υλοποίηση του αλγόριθμου σε Built-In Self-Test περιβάλλον ελέγχου ορθής λειτουργίας ακολουθούμενη από την εκτίμηση της απόδοσης αυτής και σύγκριση της με την καλύτερη ως τώρα προτεινόμενη αρχιτεκτονική που υπάρχει στην βιβλιογραφία (Βλέπε Παράρτημα Α). / The thesis consists of two basic parts that apply in the field of VLSI testing of integrated circuits. The first one concludes the work that has been done in the field of VLSI testing using the Transient Fault model and more specifically, analyzes the model and the implementation of a Transition Fault Simulator. 
The transient fault model moves beyond the scope of the simple stuck-at fault model that dominates the literature by introducing the concept of time, and therefore enables testing techniques that are more precise and closer to reality. Furthermore, a fault simulator is probably the most important part of the tool chain required for the design, implementation and study of VLSI testing techniques, so having such a tool available enables the study of new testing techniques using the transient fault model. The second part of the thesis summarizes the study of a new technique that reduces the test sequences of reseeding-based schemes in the case of Test Set Embedding testing techniques. The proposed algorithm achieves significant reductions both in the volume of test data that must be stored for the precise regeneration of the test sequences and in the length of the test vector sequences applied to the circuit under test, in comparison with the classical test techniques available in the literature. In addition to the algorithm, a low-hardware-overhead architecture for implementing the algorithm in a Built-In Self-Test environment is presented, whose hardware overhead is confined to just one extra bit per seed plus one very small extra counter in the scheme's control logic. At the end of the second part, the proposed architecture is compared with the best architecture proposed so far in the literature (see Appendix A).
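The two-pattern nature of transition-fault testing can be illustrated with a toy example. The tiny circuit, the fault site, and the detection conditions below are invented for illustration only; this is not the simulator described in the thesis:

```python
# Minimal two-pattern transition-fault check on a tiny gate-level
# circuit (illustrative sketch; circuit and fault site are invented).

def good_circuit(a, b, c):
    """y = (a AND b) OR c, with the AND output as internal node n1."""
    n1 = a & b
    return n1 | c

def faulty_circuit(a, b, c):
    """Same circuit with a slow-to-rise fault on n1: on the second
    pattern of a pair, n1 fails to rise, behaving like stuck-at-0."""
    n1 = 0  # the rising transition on n1 never arrives in time
    return n1 | c

def detects_slow_to_rise(v1, v2):
    """A two-pattern test (v1, v2) detects the fault if v1 initializes
    n1 to 0, v2 tries to raise it, and the fault changes the output."""
    a1, b1, c1 = v1
    a2, b2, c2 = v2
    initialized = (a1 & b1) == 0          # n1 = 0 under v1
    launched = (a2 & b2) == 1             # n1 should rise under v2
    observable = good_circuit(a2, b2, c2) != faulty_circuit(a2, b2, c2)
    return initialized and launched and observable

# (0,1,0) then (1,1,0): n1 goes 0 -> 1 and the output exposes the fault.
print(detects_slow_to_rise((0, 1, 0), (1, 1, 0)))   # True
# (1,1,0) then (1,1,0): no transition is launched, fault not detected.
print(detects_slow_to_rise((1, 1, 0), (1, 1, 0)))   # False
```

A transition fault simulator performs exactly this comparison, but for every fault site and every vector pair of a large test set, which is why it sits at the heart of the tool chain.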
422

Multistage Algorithms in C++ / Mehrstufige Algorithmen in C++

Priesnitz, Andreas 02 November 2005 (has links)
No description available.
423

A variational approach for viewpoint-based visibility maximization

Rocha, Kelvin Raymond 19 May 2008 (has links)
We present a variational method for unfolding of the cortex based on a user-chosen point of view as an alternative to more traditional global flattening methods, which incur more distortion around the region of interest. Our approach involves three novel contributions. The first is an energy function and its corresponding gradient flow to measure the average visibility of a region of interest of a surface from a given viewpoint. The second is an additional energy function and flow designed to preserve the 3D topology of the evolving surface. This latter contribution receives significant focus in this thesis as it is crucial to obtain the desired unfolding effect derived from the first energy functional and flow. Without it, the resulting topology changes render the unconstrained evolution uninteresting for the purpose of cortical visualization, exploration, and inspection. The third is a method that dramatically improves the computational speed of the 3D topology-preservation approach by creating a tree structure of the triangulated surface and using a recursion technique.
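The notion of visibility from a viewpoint that the first energy functional measures can be sketched, in a much cruder discrete form, as follows. The scoring rule and the example triangle are invented for illustration and are not the thesis's functional:

```python
# Rough per-triangle visibility score from a viewpoint: a triangle is
# "visible" to the extent its outward normal points toward the viewer.
import math

def _cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def _dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def _unit(u):
    n = math.sqrt(_dot(u, u))
    return (u[0]/n, u[1]/n, u[2]/n)

def triangle_visibility(triangles, viewpoint):
    """Mean over triangles of max(0, n_hat . v_hat), where n_hat is the
    unit normal and v_hat points from the centroid to the viewpoint."""
    scores = []
    for a, b, c in triangles:
        n = _unit(_cross(tuple(bi - ai for bi, ai in zip(b, a)),
                         tuple(ci - ai for ci, ai in zip(c, a))))
        centroid = tuple((ai + bi + ci) / 3.0 for ai, bi, ci in zip(a, b, c))
        v = _unit(tuple(pi - qi for pi, qi in zip(viewpoint, centroid)))
        scores.append(max(0.0, _dot(n, v)))
    return sum(scores) / len(scores)

# A triangle in the z=0 plane seen from straight above scores near 1;
# seen from below (against its normal) it scores 0.
tri = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
print(triangle_visibility(tri, (0.0, 0.0, 5.0)))    # close to 1
print(triangle_visibility(tri, (0.0, 0.0, -5.0)))   # 0.0
```

A gradient flow of the kind described in the abstract would evolve the surface so as to increase an average of such scores over the region of interest.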
424

Sécurisation d'un lien radio UWB-IR / Security of a UWB-IR Link

Benfarah, Ahmed 10 July 2013 (has links)
Du fait de la nature ouverte et partagée du canal radio, les communications sans fil souffrent de vulnérabilités sérieuses en terme de sécurité. Dans ces travaux de thèse, je me suis intéressé particulièrement à deux classes d’attaques à savoir l’attaque par relais et l’attaque par déni de service (brouillage). La technologie de couche physique UWB-IR a connu un grand essor au cours de cette dernière décennie et elle est une candidate intéressante pour les réseaux sans fil à courte portée. Mon objectif principal était d’exploiter les caractéristiques de la couche physique UWB-IR afin de renforcer la sécurité des communications sans fil. L’attaque par relais peut mettre à défaut les protocoles cryptographiques d’authentification. Pour remédier à cette menace, les protocoles de distance bounding ont été proposés. Dans ce cadre, je propose deux nouveaux protocoles (STHCP : Secret Time-Hopping Code Protocol et SMCP : Secret Mapping Code Protocol) qui améliorent considérablement la sécurité des protocoles de distance bounding au moyen des paramètres de la radio UWB-IR. Le brouillage consiste en l’émission intentionnelle d’un signal sur le canal lors du déroulement d’une communication. Mes contributions concernant le problème de brouillage sont triples. D’abord, j’ai déterminé les paramètres d’un brouilleur gaussien pire cas contre un récepteur UWB-IR non-cohérent. En second lieu, je propose un nouveau modèle de brouillage par analogie avec les attaques contre le système de chiffrement. Troisièmement, je propose une modification rendant la radio UWB-IR plus robuste au brouillage. Enfin, dans une dernière partie de mes travaux, je me suis intéressé au problème d’intégrer la sécurité à un réseau UWB-IR en suivant l’approche d’embedding. Le principe de cette approche consiste à superposer et à transmettre les informations de sécurité simultanément avec les données et avec une contrainte de compatibilité. 
Ainsi, je propose deux nouvelles techniques d’embedding pour la couche physique UWB-IR afin d’intégrer un service d’authentification. / Due to the shared nature of the wireless medium, wireless communications are particularly vulnerable to security threats. In my PhD work, I focused on two types of threats: relay attacks and jamming. UWB-IR physical layer technology has seen great development during the last decade, which makes it a promising candidate for short-range wireless communications. My main goal was to exploit UWB-IR physical layer characteristics in order to reinforce the security of wireless communications. Simply by relaying signals, an adversary can defeat wireless authentication protocols. The first countermeasure proposed to thwart these relay attacks was the distance bounding protocol. The concept of distance bounding combines two components: a cryptographic authentication side and a distance-checking side. In this context, I propose two new distance bounding protocols that significantly improve the security of existing distance bounding protocols by means of UWB-IR physical layer parameters. The first protocol, called STHCP, is based on secret time-hopping codes, whereas the second, called SMCP, is based on secret mapping codes. Security analysis and comparison to the state of the art highlight various figures of merit of my proposal. Jamming consists in the emission of noise over the channel while communication is taking place and constitutes a major problem for the security of wireless communications. In a first contribution, I determined worst-case Gaussian noise parameters (central frequency and bandwidth) against a UWB-IR communication employing PPM modulation and a non-coherent receiver. The metric considered for jammer optimization is the signal-to-jamming ratio at the output of the receiver. In a second contribution, I propose a new jamming model by analogy to attacks against ciphering algorithms.
The new model distinguishes various jamming scenarios ranging from the best case to the worst case. Moreover, I propose a modification of the UWB-IR physical layer that restricts any jamming attack to the most favorable scenario. The modification is based on a cryptographic modulation driven by a stream cipher. The new radio has the advantage of combining resistance to jamming with protection from eavesdropping. Finally, I focused on the problem of security embedding in an existing UWB-IR network. Security embedding consists in adding security features directly at the physical layer and sending them concurrently with the data. The embedding mechanism should remain compatible with existing receivers in the network. I propose two new embedding techniques that rely on the superposition of a pulse orthogonal to the original pulse either in shape or in position. Performance analysis reveals that both embedding techniques satisfy all system design constraints.
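The distance-checking side of distance bounding referred to above can be sketched as a timing calculation. This is a generic textbook illustration, not the STHCP or SMCP protocols of the thesis, and all timings are invented:

```python
# Toy distance-bounding check: the verifier times the rapid
# challenge/response exchange; a relay adds propagation delay, which
# inflates the resulting distance bound and exposes the attack.
C = 3.0e8  # propagation speed, m/s

def distance_bound(rtt_seconds, processing_seconds=0.0):
    """Upper bound on the prover's distance from one timed round trip."""
    return C * (rtt_seconds - processing_seconds) / 2.0

direct_rtt = 2 * 10.0 / C           # honest prover 10 m away
relayed_rtt = direct_rtt + 200e-9   # relay adds 200 ns of extra delay

print(distance_bound(direct_rtt))         # ~10 m
print(distance_bound(relayed_rtt) > 30)   # True: the relay is exposed
```

The cryptographic side then ensures the timed responses could only have come from the legitimate prover, so an attacker cannot shortcut the exchange.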
425

Satisficing solutions for multiobjective stochastic linear programming problems

Adeyefa, Segun Adeyemi 06 1900 (has links)
Multiobjective Stochastic Linear Programming is a relevant topic. As a matter of fact, many real life problems ranging from portfolio selection to water resource management may be cast into this framework. There are severe limitations in objectivity in this field due to the simultaneous presence of randomness and conflicting goals. In such a turbulent environment, the mainstay of rational choice does not hold and it is virtually impossible to provide a truly scientific foundation for an optimal decision. In this thesis, we resort to the bounded rationality and chance-constrained principles to define satisficing solutions for Multiobjective Stochastic Linear Programming problems. These solutions are then characterized for the cases of normal, exponential, chi-squared and gamma distributions. Ways for singling out such solutions are discussed and numerical examples provided for the sake of illustration. Extension to the case of fuzzy random coefficients is also carried out. / Decision Sciences
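The chance-constrained principle mentioned above can be illustrated on a single constraint: assuming a normally distributed right-hand side, a probabilistic constraint reduces to a deterministic inequality. The distribution parameters and coefficients below are invented for illustration:

```python
# Sketch of the chance-constrained idea: P(a.x <= b) >= alpha with
# b ~ Normal(mu, sigma) reduces to a deterministic inequality.
from statistics import NormalDist

def chance_constraint_rhs(mu, sigma, alpha):
    """P(b >= t) >= alpha  <=>  t <= F_b^{-1}(1 - alpha) for b ~ N(mu, sigma)."""
    return NormalDist(mu, sigma).inv_cdf(1 - alpha)

def satisfies(a, x, mu, sigma, alpha):
    """Check the deterministic equivalent a.x <= F_b^{-1}(1 - alpha)."""
    lhs = sum(ai * xi for ai, xi in zip(a, x))
    return lhs <= chance_constraint_rhs(mu, sigma, alpha)

# b ~ N(10, 2); require the constraint to hold with probability 0.95:
rhs = chance_constraint_rhs(10.0, 2.0, 0.95)
print(round(rhs, 2))                                       # 6.71
print(satisfies([1.0, 2.0], [1.0, 2.0], 10.0, 2.0, 0.95))  # 1 + 4 = 5 <= 6.71: True
```

The thesis characterizes such deterministic equivalents not only for the normal case sketched here but also for exponential, chi-squared and gamma distributions.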
426

Etudes théoriques des propriétés optiques linéaires et non-linéaires des biomolécules. / Theoretical studies of linear and non-linear optical properties of biomolecules

Bonvicini, Andrea 24 October 2019 (has links)
Dans cette thèse, les propriétés optiques de biomolécules importantes ont été étudiées en utilisant une approche théorique et, dans un cas, aussi expérimentale. La Théorie de la fonctionnelle de la densité (DFT) et la time-dependent DFT (TD-DFT) sont les principales méthodes de chimie quantique utilisées dans cette thèse. Plusieurs spectroscopies ont été étudiées (au niveau théorique et, dans certains cas, également au niveau expérimental) : absorption électronique linéaire (absorption à un photon, OPA) et non-linéaire (absorption à deux ou trois photons, TPA et 3PA), dichroïsme circulaire électronique (DCE) et spectroscopie de fluorescence. Les effets de l’environnement, particulièrement importants dans des systèmes biologiques, ont été pris en compte, pour les propriétés de l’état fondamental et des états excités en utilisant une méthode multi-échelles QM/MM appelée Polarizable Embedding (PE). L’échantillonnage des conformations a été pris en compte avec des simulations de dynamique moléculaire (MD) qui sont basées sur la mécanique classique. Deux thématiques ont été étudiées dans cette thèse : le cholestérol et le design in silico de ses analogues fluorescents ainsi que la caractérisation des coudes de type β dans différentes conformations grâce à la simulation des spectres DCE. La simulation de plus d’une spectroscopie a été importante dans l’étude des états excités du cholestérol dans des solutions organiques. Le design in silico a suggéré un nouveau stérol-polyénique (P-stérol) qui montre des propriétés optiques améliorées pour le mécanisme d’excitation à trois photons par rapport au déhydroergostérol (DHE), une sonde du cholestérol déjà très utilisée. Ce nouveau P-stérol a été suggéré pour la synthèse.
L’étude des spectres de DCE des coudes β en différentes conformations a mené à une double conclusion : même si deux allures de DCE pour les conformations des coudes β étudiées (4) ont été trouvées (dans la majorité des cas), la spectroscopie de DCE doit toujours être associée à d’autres techniques spectroscopiques dans la caractérisation en solution des coudes β. / In this thesis, the optical properties of important biomolecules were studied using a theoretical approach and, in one case, also an experimental one. Density Functional Theory (DFT) and time-dependent DFT (TD-DFT) were the principal quantum chemical methods adopted in this thesis. Various spectroscopies were studied (theoretically and, in some cases, also experimentally): linear (one-photon, OPA) and non-linear (two- and three-photon, TPA and 3PA) electronic absorption, electronic circular dichroism (ECD) and fluorescence spectroscopy. Environmental effects, which are particularly important in biological systems, were taken into account, for both ground- and excited-state properties, using a multiscale QM/MM method called Polarizable Embedding (PE). The sampling of conformations was addressed by Molecular Dynamics (MD) simulations based on classical mechanics. Two topics were studied in this thesis: cholesterol and the in-silico design of its fluorescent analogues, and the characterization of β-turns in different conformations by simulations of their ECD spectra in aqueous solutions. The simulation of more than one spectroscopy proved to be important when studying the electronic excited states of cholesterol in organic solutions. The in-silico design study suggested a novel polyene-sterol (P-sterol) which shows improved optical properties for the three-photon excitation mechanism with respect to dehydroergosterol (DHE), an already widely used cholesterol probe. This new P-sterol was thus suggested for synthesis.
The conclusion from the study of ECD spectra for different β-turn conformations is two-fold: even though two ECD patterns were found (in most cases) for the four β-turn conformations studied, ECD spectroscopy should always be combined with other spectroscopic techniques when characterizing β-turn conformations in solution.
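One routine post-processing step behind simulated absorption spectra such as those mentioned above is broadening the computed excitation "sticks" into a smooth curve with Gaussians. A generic sketch follows; the excitation energies, oscillator strengths and line width are made up, not values from the thesis:

```python
# Gaussian broadening of a stick spectrum (energy, oscillator strength)
# into a smooth absorption profile, as commonly done for TD-DFT output.
import math

def broadened_spectrum(sticks, grid, sigma=0.2):
    """sticks: list of (energy_eV, strength); returns intensity on grid."""
    return [
        sum(f * math.exp(-((e - e0) ** 2) / (2 * sigma ** 2))
            for e0, f in sticks)
        for e in grid
    ]

sticks = [(4.5, 0.8), (5.2, 0.3)]               # two invented transitions
grid = [4.0 + 0.01 * i for i in range(151)]     # 4.0 to 5.5 eV
spectrum = broadened_spectrum(sticks, grid)
peak = grid[max(range(len(grid)), key=lambda i: spectrum[i])]
print(round(peak, 2))   # 4.5, the strongest transition
```

The same broadening idea applies to ECD, with rotatory strengths (which may be negative) in place of oscillator strengths.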
427

Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems

Peiyi Zhang (11166777) 26 July 2021 (has links)
The Ensemble Kalman filter (EnKF) has achieved great successes in data assimilation in atmospheric and oceanic sciences, but its failure to converge to the right filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as the particle filter or the sequential importance sampler, do not scale well with the dimension of the system and the sample size of the datasets. In this dissertation, we address these difficulties in a coherent way.

In the first part of the dissertation, we reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from stochastic gradient Langevin-type algorithms, which make it scalable with respect to both the dimension and the sample size. We prove that the LEnKF converges to the right filtering distribution in Wasserstein distance under the big-data scenario in which the dynamic system consists of a large number of stages and has a large number of samples observed at each stage, so it can be used for uncertainty quantification. We reformulate the Bayesian inverse problem as a dynamic state estimation problem based on the techniques of subsampling and the Langevin diffusion process. We illustrate the performance of the LEnKF on a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.

In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF algorithm was developed for Gaussian dynamic systems containing no unknown parameters. We propose the so-called stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly from the state variables simulated by the LEnKF under the framework of stochastic approximation. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF algorithm (Extended LEnKF) by introducing a latent Gaussian measurement variable into the dynamic system. These two extensions inherit the scalability of the LEnKF with respect to dimension and sample size. The numerical results indicate that they outperform existing methods in both state/parameter estimation and uncertainty quantification.
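The EnKF forecast-analysis building block that the LEnKF inherits can be sketched for a scalar state with a direct observation. This is a generic textbook analysis step, not the LEnKF itself, and all numbers are invented:

```python
# Minimal ensemble Kalman analysis step for a scalar state with a
# direct observation: each member is shifted toward a perturbed
# observation using a gain computed from the ensemble variance.
import random

def enkf_analysis(ensemble, y_obs, obs_var):
    """One EnKF analysis step with perturbed observations."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)   # Kalman gain from ensemble statistics
    return [x + gain * (y_obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

random.seed(0)
prior = [random.gauss(0.0, 2.0) for _ in range(500)]   # prior ~ N(0, 4)
post = enkf_analysis(prior, y_obs=3.0, obs_var=1.0)
post_mean = sum(post) / len(post)
print(post_mean)   # near 2.4, the exact Kalman posterior mean here
```

The LEnKF replaces this Monte Carlo update with a Langevin-dynamics formulation that provably targets the correct filtering distribution, which the plain EnKF does not.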
428

Deep Neural Networks for Context Aware Personalized Music Recommendation : A Vector of Curation / Djupa neurala nätverk för kontextberoende personaliserad musikrekommendation

Bahceci, Oktay January 2017 (has links)
Information filtering and recommender systems have been used and implemented in various ways by various entities since the dawn of the Internet, and state-of-the-art approaches rely on Machine Learning and Deep Learning to create accurate, personalized recommendations for users in a given context. These models require large amounts of data with a variety of features, such as time, location and user data, in order to find correlations and patterns that classical models such as matrix factorization and collaborative filtering cannot. This thesis researches, implements and compares a variety of models, with a primary focus on Machine Learning and Deep Learning, for the task of music recommendation, and does so successfully by representing recommendation as an extreme multi-class classification task with 100 000 distinct labels. Across fourteen different experiments, all implemented models successfully learn features such as time, location, user features and previous listening history to create context-aware personalized music predictions, and solve the cold-start problem by using user demographic information. The best model captures the intended label in its top-100 list of recommended items for more than 1/3 of the unseen data in an offline evaluation on randomly selected examples from the unseen following week. / Informationsfiltrering och rekommendationssystem har använts och implementerats på flera olika sätt av olika aktörer sedan Internets gryning, och moderna tillvägagångssätt beror på Maskininlärning samt Djupinlärning för att kunna skapa precisa och personliga rekommendationer för användare i en given kontext. Dessa modeller kräver data i stora mängder med en varians av kännetecken såsom tid, plats och användardata för att kunna hitta korrelationer samt mönster som klassiska modeller såsom matrisfaktorisering samt samverkande filtrering inte kan. Detta examensarbete forskar, implementerar och jämför en mängd av modeller med fokus på Maskininlärning samt Djupinlärning för musikrekommendation och gör det med succé genom att representera rekommendationsproblemet som ett extremt multi-klass klassifikationsproblem med 100 000 unika klasser att välja utav. Genom att jämföra fjorton olika experiment, så lär alla modeller sig kännetecken såsom tid, plats, användarkännetecken och lyssningshistorik för att kunna skapa kontextberoende personaliserade musikprediktioner, och löser kallstartsproblemet genom användning av användares demografiska kännetecken, där den bästa modellen klarar av att fånga målklassen i sin rekommendationslista med längd 100 för mer än 1/3 av det osedda datat under en offline-evaluering, när slumpmässigt valda exempel från den osedda kommande veckan evalueras.
429

Traffic Prediction From Temporal Graphs Using Representation Learning / Trafikförutsägelse från dynamiska grafer genom representationsinlärning

Movin, Andreas January 2021 (has links)
With the arrival of 5G networks, telecommunication systems are becoming more intelligent, integrated, and widely used. This thesis focuses on predicting upcoming traffic in order to promote efficient resource allocation and to guarantee the stability and reliability of the network. Since networks modeled as graphs potentially capture more information than tabular data, the construction of the graph and the choice of model are key to achieving a good prediction. In this thesis, traffic prediction is based on a time-evolving graph whose nodes and edges encode the structure and activity of the system. Edges are created by dynamic time warping (DTW), geographical distance, and $k$-nearest neighbors. The node features contain different temporal information together with spatial information computed by methods from topological data analysis (TDA). To capture the temporal and spatial dependencies of the graph, several dynamic graph methods are compared. Throughout the experiments, we observed that GConvGRU was the most successful model, performing best for edges created by DTW and node features that include temporal information across multiple time steps. / Med ankomsten av 5G-nätverk blir telekommunikationssystemen alltmer intelligenta, integrerade och bredare använda. Denna uppsats fokuserar på att förutse den kommande nättrafiken för att effektivt hantera resursallokering samt garantera nätverkens stabilitet och pålitlighet. Eftersom nätverk som modelleras som grafer har potential att innehålla mer information än tabulär data, är skapandet av grafen och valet av metod viktigt för att uppnå en bra förutsägelse. I denna uppsats är trafikförutsägelsen baserad på grafer som ändras över tid, vars noder och länkar fångar systemets struktur och aktivitet. Länkarna skapas genom dynamisk time warping (DTW), geografisk distans och $k$-närmaste grannarna. Egenskaperna för noderna består av dynamisk och rumslig information som beräknats med metoder från topologisk dataanalys (TDA). För att inkludera såväl det dynamiska som det rumsliga beroendet av grafen jämförs flera dynamiska grafmetoder. Genom experiment kunde vi observera att den mest framgångsrika modellen, GConvGRU, presterade bäst för länkar skapade genom DTW och noder som innehåller dynamisk information över flera tidssteg.
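The dynamic time warping used above to create graph edges can be sketched with the classic textbook recursion (a generic implementation, not the thesis's code):

```python
# Minimal dynamic time warping (DTW) distance between two sequences,
# with absolute-difference cost: the similarity measure used to decide
# which node pairs get an edge.
def dtw(s, t):
    """Classic O(len(s) * len(t)) DTW on numeric sequences."""
    inf = float("inf")
    n, m = len(s), len(t)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

print(dtw([1, 2, 3], [1, 2, 3]))   # 0.0: identical series
print(dtw([1, 2, 3], [2, 3, 4]))   # 2.0: shifted series align cheaply
```

In an edge-construction setting, one would compute `dtw` between the traffic series of every node pair and connect the pairs whose distance falls below a chosen threshold.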
430

Constructing “Climate Change Knowledge”: The example of small-scale farmers in the Swartland region, South Africa

de Ruijter, Susann 27 June 2016 (has links)
During the last decades “Climate Change” has become a vital topic on national and international political agendas. There it is presented as an irrevocable fact of global impact and thus of universal relevance. What has often been neglected are local discourses of marginalized groups and their specific contextualization of “Climate Change” phenomena. The aim of this project, to develop another perspective along these dominant narratives, has resulted in the research question How is social reality reconstructed on the phenomenon of “Climate Change” among the “Emerging Black Farmers” in the Swartland region in Western Cape, South Africa? Taken as an example, “Climate Change Knowledge” is reconstructed through a case study on the information exchange between the NGO Goedgedacht Trust and local small-scale farmers in the post-Apartheid context of on-going political, social, economic and educational transition in South Africa. Applying a constructivist approach, “Climate Change Knowledge” is not understood as an objectively given, but a socially constructed “reality” that is based on the interdependency of socio-economic conditions and individual assets, including language skills and language practice, sets of social norms and values, as well as strategies of knowledge transfer. The data set consists of qualitative data sources, such as application forms and interview material, which are triangulated. The rationale of a multi-layered data analysis includes a discursive perspective as well as linguistic and ethical “side perspectives”. Epistemologically, the thesis is guided by assumptions of complexity theory, framing knowledge around “Climate Change” as a fluid, constantly changing system that is shaped by constant intra- and inter-systemic exchange processes, and characterized by non-linearity, self-organization and representation of its constituents. 
From this point of departure, a theoretical terminology has been developed, which differentiates between symbols, interrelations, contents and content clusters. These elements are located in a system of spatio-temporal orientation and embedded into a broader (socio-economic) context of “historicity”. Content clusters are remodelled with the help of concept maps. Starting from that, a local perspective on “Climate Change” is developed, adding an experiential notion to the global narratives. The thesis concludes that there is no single reality about “Climate Change” and that the farmers’ “Climate Change Knowledge” highly depends on experiential relativity and spatio-temporal immediacy. Furthermore, analysis has shown that the system’s historicity and social manifestations can be traced in the scope and emphasis of the content clusters discussed. Finally the thesis demonstrates that characteristics of symbols, interconnections and contents range between dichotomies of direct and indirect, predictable versus unpredictable, awareness and negligence or threat and danger, all coexisting and creating a continuum of knowledge production.
