61

Inferences on the power-law process with applications to repairable systems

Chumnaul, Jularat 13 December 2019 (has links)
System testing is very time-consuming and costly, especially for complex, high-cost, high-reliability systems. For this reason, the number of failures needed in the developmental phase of system testing should generally be kept small. To assess the reliability growth of a repairable system, the generalized confidence interval and the modified signed log-likelihood ratio test for the scale parameter of the power-law process are studied for incomplete failure data. Specifically, some failure times recorded in the early developmental phase of system testing cannot be observed; this circumstance is essential when establishing a warranty period or determining a maintenance phase for repairable systems. The proposed generalized confidence interval is found to be unbiased, as its coverage probabilities are close to the nominal level 0.95 for all levels of γ and β. When the proposed and existing methods are compared and validated in terms of average widths, the simulation results show that the proposed method is superior, producing shorter average widths when the predetermined number of failures is small. The proposed modified signed log-likelihood ratio test performs well in controlling type I errors for complete failure data and has desirable power for all parameter configurations, even for a small number of failures. For incomplete failure data, the proposed modified signed log-likelihood ratio test is preferable to the signed log-likelihood ratio test in most situations in terms of controlling type I errors; it also performs well when the missing ratio is up to 30% and n > 10. In terms of empirical power, the proposed test is superior to the existing test in most situations. In conclusion, the proposed methods, the generalized confidence interval and the modified signed log-likelihood ratio test, are practically useful for saving cost and time during the developmental phase of system testing, since only a small number of failures is required to test systems, and they yield precise results.
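As context for this abstract, the sketch below simulates a failure-truncated power-law process and computes the classical maximum-likelihood estimates of its parameters. These are the standard Crow-AMSAA estimators, shown only to make the model concrete; the thesis's generalized confidence interval and modified signed log-likelihood ratio test are not reproduced, and the parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_plp(beta, theta, n, rng):
    """Simulate n failure times of a power-law process with intensity
    lambda(t) = (beta/theta) * (t/theta)**(beta-1). The time-transformed
    events Lambda(T_i) = (T_i/theta)**beta form a unit-rate Poisson
    process, so T_i = theta * (cumulative Exp(1) sums)**(1/beta)."""
    e = rng.exponential(size=n).cumsum()
    return theta * e ** (1.0 / beta)

def plp_mle(times):
    """Failure-truncated MLEs: beta_hat = n / sum_i log(t_n / t_i),
    theta_hat = t_n / n**(1/beta_hat)."""
    t = np.asarray(times)
    n, t_n = len(t), t[-1]
    beta_hat = n / np.log(t_n / t[:-1]).sum()
    theta_hat = t_n / n ** (1.0 / beta_hat)
    return beta_hat, theta_hat

rng = np.random.default_rng(1)
times = simulate_plp(beta=1.5, theta=100.0, n=30, rng=rng)  # assumed values
print(plp_mle(times))  # estimates should be near (1.5, 100.0)
```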
62

Problèmes type "Feedback Set" et comportement dynamique des réseaux de régulation / Feedback Set Problems and Dynamical Behavior in Regulatory Networks

Montalva Medel, Marco 18 August 2011 (has links)
In nature there exist numerous examples of complex dynamical systems: neural systems, communities, ecosystems, genetic regulatory networks, etc. The latter, in particular, are of interest here and are often modeled by Boolean networks. A Boolean network can be viewed as a digraph, where the vertices correspond to genes or gene products, while the arcs denote interactions among them. A gene expression level is modeled by binary values, 0 or 1, indicating two transcriptional states, active or inactive respectively, and this level changes in time according to some local activation function which depends on the states of a set of nodes (genes). The joint effect of the local activation functions defines a global transition function; thus, the other element required in the description of the model is an update schedule, which determines when each node has to be updated and hence how the local functions combine into the global one (in other words, it must describe the relative timings of the regulatory activities). Since a Boolean network with n vertices has 2^n global states, from a starting state and within a finite number of updates the network will reach a fixed point or a limit cycle; these are called attractors and are often associated with distinct phenotypes (cellular states) defined by patterns of gene activity. A regulatory Boolean network (REBN) is a Boolean network where each interaction between the elements of the network is either positive or negative. Thus, the interaction digraph associated with a REBN is a signed digraph where a circuit is called positive (negative) if the number of its negative arcs is even (odd). In this context, there are diverse studies on the importance of positive and negative circuits in the dynamical behavior of different systems in biology. Indeed, the starting point of this thesis is a result saying that the maximum number of fixed points of a REBN depends on a minimum-cardinality vertex set that intersects all the positive cycles (also named a positive feedback vertex set) of the associated signed digraph. On the other hand, another important aspect of circuits is their role in the robustness of Boolean networks with respect to different deterministic update schedules. In this context, a key mathematical element is the update digraph, which is a labeled digraph associated with the network whose arc labels are defined as follows: an arc (u,v) is said to be positive if the state of vertex u is updated at the same time as, or after, that of v, and negative otherwise. Hence, a cycle in the labeled digraph is called positive (negative) if all its arcs are positive (negative). This makes it evident that "positive" and "negative" have different meanings depending on the context: signed digraphs or labeled digraphs. Thus, in this thesis we study relationships between feedback sets and the dynamics of Boolean networks through the analytical study of these two fundamental mathematical objects: the signed (connection) digraph and the update digraph.
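A small illustrative sketch of the two sign conventions distinguished above, assuming a toy two-node network and a block-sequential schedule (this is not code from the thesis): a cycle in a signed interaction digraph is positive when its number of negative arcs is even, while an arc (u, v) in the update digraph is labeled positive when u is updated at the same time as or after v.

```python
def cycle_sign(cycle_arcs, sign):
    """Sign of a cycle in a signed digraph: positive (+1) iff the
    number of negative arcs along the cycle is even."""
    negatives = sum(1 for arc in cycle_arcs if sign[arc] < 0)
    return +1 if negatives % 2 == 0 else -1

def update_label(u, v, rank):
    """Label of arc (u, v) in the update digraph: positive if u is
    updated at the same time as or after v (rank gives update order)."""
    return +1 if rank[u] >= rank[v] else -1

# Toy example: a 2-cycle with one negative interaction (a negative
# circuit), under the block-sequential schedule {a} then {b}.
sign = {("a", "b"): +1, ("b", "a"): -1}
rank = {"a": 0, "b": 1}
print(cycle_sign([("a", "b"), ("b", "a")], sign))  # -1: negative circuit
print(update_label("a", "b", rank), update_label("b", "a", rank))
```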
63

Direktsamplande digital transceiver / Direct sampling digital transceiver

Karlsson, Magnus January 2002 (has links)
Master's thesis work at ITN (Department of Science and Technology) in the areas of A/D converter construction and RF circuit design. The major goal of the project was to investigate suitable ways of implementing direct conversion in transceivers operating in the 160 MHz band: a theoretical study followed by development of components in the Cadence design environment. A suitable A/D converter and other important parts were selected at the end of the theoretical study. Subsampling was applied to make the A/D sampling requirements more realistic to achieve. Besides lowering the requirements on the A/D converter, it allows a simpler construction, which saves more components than subsampling adds. Subsampling adds extra noise, so an A/D converter based on the RSD algorithm was chosen to improve the error rate. To achieve a high bit-processing rate relative to the number of transistors used, a pipeline structure was selected as the conversion method. The receiver received the most attention because it is the most interesting part to optimise: A/D conversion is more difficult to construct than D/A conversion, and there is more to gain from eliminating mixers in the receiver than in the transmitter.
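A hedged sketch of the subsampling (bandpass sampling) idea mentioned above: the snippet computes where a carrier aliases for a candidate sampling rate. The 50 MS/s rate is an illustrative assumption, not a value from the thesis.

```python
def alias_frequency(f_c, f_s):
    """Apparent frequency of a carrier f_c after sampling at rate f_s:
    the alias folds into the first Nyquist zone [0, f_s/2]."""
    f = f_c % f_s
    return f if f <= f_s / 2 else f_s - f

# A 160 MHz carrier subsampled at an assumed 50 MS/s lands at 10 MHz, so
# the ADC needs analog input bandwidth to 160 MHz but only a 50 MS/s rate,
# eliminating the analog mixer stage at the cost of folded-in noise.
print(alias_frequency(160e6, 50e6) / 1e6, "MHz")  # -> 10.0 MHz
```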
65

Design of Low Cost Finite-Impulse Response (FIR) Filters Using Multiple Constant Truncated Multipliers

Zhang Jian, Jun-Hong 10 September 2012 (has links)
Finite impulse response (FIR) digital filters are frequently used in many digital signal processing and communication applications, such as IS-95 CDMA and Digital Mobile Phone Systems (D-AMPS). An FIR filter realizes the required frequency response using a series of multiplications and additions. Previous papers on FIR hardware implementations usually focus on reducing the area and delay of the multiple constant multiplications (MCM) through common sub-expression elimination (CSE) in the transpose FIR filter structure. In this thesis, we first optimize the quantization of the FIR filter coefficients so that they satisfy the target frequency response. Then suitable encoding methods are adopted to reduce the height of the partial products of the MCM in the direct FIR filter structure. Finally, by jointly considering the errors in the truncated multiplications and additions, we can design a hardware-efficient FIR filter that meets the bit-accuracy requirement. Experimental results show that although CSE in the transpose FIR structure can reduce more area in the MCM, the direct form takes less area in registers. Compared with previous approaches, the proposed direct-form FIR implementation has the minimum area cost.
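To make the direct-form structure and coefficient quantization concrete, here is a minimal sketch assuming an 8-bit fixed-point word length and an illustrative windowed-sinc low-pass design; it models quantization only, not the thesis's truncated-multiplier hardware.

```python
import numpy as np

def quantize(h, bits):
    """Round coefficients to signed fixed-point with the given word length."""
    scale = 2 ** (bits - 1)
    return np.round(h * scale) / scale

def fir_direct(x, h):
    """Direct-form FIR: y[n] = sum_k h[k] * x[n-k]."""
    return np.convolve(x, h)[: len(x)]

# Illustrative 15-tap low-pass prototype (windowed sinc), quantized to 8 bits.
n = np.arange(15) - 7
h = np.sinc(0.25 * n) * np.hamming(15)
h /= h.sum()
hq = quantize(h, bits=8)  # assumed word length
x = np.sin(2 * np.pi * 0.05 * np.arange(64))
y = fir_direct(x, hq)
print(np.max(np.abs(h - hq)))  # worst-case coefficient quantization error
```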
66

Advanced Statistical Methodologies in Determining the Observation Time to Discriminate Viruses Using FTIR

Luo, Shan 13 July 2009 (has links)
Fourier transform infrared (FTIR) spectroscopy, a method of using electromagnetic radiation to detect specific cellular molecular structures, can be used to discriminate different types of cells. The objective is to find the minimum time (a choice among 2, 4, and 6 hours) for recording FTIR readings such that different viruses can be discriminated. A new method is adopted for the datasets. Briefly, inner differences are created as the control group, and the Wilcoxon signed-rank test is used as the first variable-selection procedure, in preparation for the discrimination stage. In the second stage we propose either the partial least squares (PLS) method or simply taking significant differences as the discriminator. Finally, k-fold cross-validation is used to estimate the shrinkage of the goodness measures, such as sensitivity, specificity, and area under the ROC curve (AUC). Six hours is clearly sufficient for discriminating mock from HSV-1 and Coxsackie viruses; adenovirus is an exception.
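A minimal sketch of the two-stage idea described above, on assumed synthetic data (the thesis's FTIR spectra are not reproduced, and the simple mean-of-selected-differences discriminator stands in for PLS): a Wilcoxon signed-rank screen selects variables, then AUC scores the retained ones.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic paired differences: 20 pairs x 100 variables; the first 10
# variables carry signal, the rest are noise.
diff = rng.normal(size=(20, 100))
diff[:, :10] += 1.0

# Stage 1: Wilcoxon signed-rank test on each variable's paired differences.
keep = [j for j in range(100) if wilcoxon(diff[:, j]).pvalue < 0.05]

# Stage 2: a simple discriminator on the retained variables (mean of the
# selected differences), scored by AUC against stand-in control scores.
score_pos = diff[:, keep].mean(axis=1)
score_neg = rng.normal(size=20)
y = np.r_[np.ones(20), np.zeros(20)]
print(len(keep), roc_auc_score(y, np.r_[score_pos, score_neg]))
```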
67

A ROBUST RGB-D SLAM SYSTEM FOR 3D ENVIRONMENT WITH PLANAR SURFACES

Su, Po-Chang 01 January 2013 (has links)
Simultaneous localization and mapping (SLAM) is the technique of constructing a 3D map of an unknown environment. With the increasing popularity of RGB-depth (RGB-D) sensors such as the Microsoft Kinect, there has been much research on capturing and reconstructing 3D environments using a movable RGB-D sensor. The key process behind these SLAM systems is the iterative closest point (ICP) algorithm, an iterative procedure that estimates the rigid movement of the camera from the captured 3D point clouds. While ICP is a well-studied algorithm, it is problematic when used to scan large planar regions such as wall surfaces in a room: the lack of depth variation on planar surfaces makes the global alignment an ill-conditioned problem. In this thesis, we present a novel approach for registering 3D point clouds by combining both color and depth information. Instead of directly searching for point correspondences among 3D data, the proposed method first extracts features from the RGB images and then back-projects the features to 3D space to identify more reliable correspondences. These color correspondences form the initial input to the ICP procedure, which then proceeds to refine the alignment. Experimental results show that our proposed approach achieves better accuracy than existing SLAM systems in reconstructing indoor environments with large planar surfaces.
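The step this approach builds on, back-projecting an image feature to a 3D point using the depth map and camera intrinsics, can be sketched as follows; the intrinsic values are roughly Kinect-like assumptions, not figures from the thesis.

```python
import numpy as np

# Assumed Kinect-like intrinsics: focal lengths and principal point (pixels).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def back_project(u, v, depth):
    """Map pixel (u, v) with depth z (meters) to a camera-frame 3D point
    via the pinhole model: x = (u - cx) z / fx, y = (v - cy) z / fy."""
    z = depth[v, u]
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

# RGB feature matches back-projected this way yield 3D correspondences,
# which can seed a rigid alignment before ICP refinement; on a flat wall
# they remain well-conditioned where raw depth-based ICP is not.
depth = np.full((480, 640), 2.0)  # toy depth map: a flat wall 2 m away
print(back_project(320, 240, depth))
```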
68

The Efficacy of the Eigenvector Approach to South African Sign Language Identification

Segers, Vaughn Mackman January 2010 (has links)
Masters of Science / The communication barriers between deaf and hearing society mean that interaction between these communities is kept to a minimum. The South African Sign Language research group, Integration of Signed and Verbal Communication: South African Sign Language Recognition and Animation (SASL), at the University of the Western Cape aims to create technologies to bridge the communication gap. In this thesis we address the subject of whole-hand gesture recognition. We demonstrate a method to identify South African Sign Language classifiers using an eigenvector approach. The classifiers researched within this thesis are based on those outlined by the Thibologa Sign Language Institute for SASL. Gesture recognition is achieved in real time. Utilising a pre-processing method for image registration, we are able to increase the recognition rates of the eigenvector approach.
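A hedged sketch of an eigenvector (eigenface-style) recognition pipeline on synthetic data; this is one common reading of the general approach, not the thesis's SASL pipeline, and the image size, subspace dimension, and gallery are all assumptions.

```python
import numpy as np

def fit_eigenspace(train, k):
    """PCA over vectorized gesture images: keep the top-k eigenvectors of
    the covariance via SVD of the mean-centered data matrix."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]

def classify(img, mean, basis, gallery, labels):
    """Project onto the eigenspace and return the nearest gallery label."""
    w = basis @ (img - mean)
    dists = [np.linalg.norm(w - basis @ (g - mean)) for g in gallery]
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(0)
train = rng.random((30, 64 * 64))          # 30 registered gesture images
mean, basis = fit_eigenspace(train, k=10)  # assumed subspace dimension
print(classify(train[3], mean, basis, train, labels=list(range(30))))  # -> 3
```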
69

Gestion des données dans les réseaux sociaux / Data management in social networks

Maniu, Silviu 28 September 2012 (has links)
We address in this thesis some of the issues raised by the emergence of social applications on the Web, focusing on two important directions: efficient social search in online applications and the inference of signed social links from interactions between users in collaborative Web applications. We start by considering social search in tagging (or bookmarking) applications. This problem requires a significant departure from existing, socially agnostic techniques. In a network-aware context, one can (and should) exploit the social links, which can indicate how users relate to the seeker and how much weight their tagging actions should have in the result build-up. We propose an algorithm that has the potential to scale to current applications, and validate it via extensive experiments. As social search applications can be thought of as part of a wider class of context-aware applications, we consider context-aware query optimization based on views, focusing on two important sub-problems. First, handling the possible differences in context between the various views and an input query leads to view results having uncertain scores, i.e., score ranges valid for the new context. As a consequence, current top-k algorithms are no longer directly applicable and need to be adapted to handle such uncertainty in object scores. Second, adapted view-selection techniques are needed, which can leverage both the descriptions of queries and statistics over their results. Finally, we present an approach for inferring a signed network (a "web of trust") from user-generated content in Wikipedia. We investigate mechanisms by which relationships between Wikipedia contributors, in the form of signed directed links, can be inferred based on their interactions. Our study sheds light on principles underlying a signed network that is captured by social interaction. We investigate whether this network over Wikipedia contributors indeed represents a plausible configuration of link signs, by studying its global and local network properties and, at an application level, by assessing its impact on the classification of Wikipedia articles.
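A minimal sketch of the network-aware scoring idea described above: tagging actions are weighted by the tagger's social proximity to the seeker. The specific weighting (inverse BFS distance) is an illustrative assumption, not the thesis's algorithm.

```python
from collections import deque

def distances(graph, seeker):
    """BFS shortest-path distances from the seeker in the social graph."""
    dist, queue = {seeker: 0}, deque([seeker])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def social_score(item, tag, taggings, dist):
    """Sum, over users who tagged the item with the query tag, a weight
    that decays with social distance from the seeker."""
    return sum(1.0 / (1 + dist.get(u, float("inf")))
               for (u, i, t) in taggings if i == item and t == tag)

graph = {"alice": ["bob"], "bob": ["alice", "carol"], "carol": ["bob"]}
taggings = [("bob", "doc1", "python"), ("carol", "doc1", "python")]
dist = distances(graph, "alice")
print(social_score("doc1", "python", taggings, dist))  # bob 1/2 + carol 1/3
```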
70

General Education Teachers Implementing Common Core with Students in Special Education: A Mixed Methods Study of Teachers' Self-Efficacy Beliefs

Cash, Jon Leland 13 December 2014 (has links)
This embedded mixed-methods study addresses the problems teachers have reported in believing themselves capable of implementing the Common Core State Standards with students in special education. This study examines the effect that professional development on implementing the Common Core State Standards had on the participating teachers' self-efficacy beliefs. The participants (N=21) were drawn from a 20-day professional development for teachers based on implementing the Common Core State Standards. The instrument used in the study was the Teacher Efficacy Beliefs System-Self (TEB-S). Data were subject to both statistical and qualitative analysis. The results provide insight into the self-efficacy beliefs of the participants during and shortly after professional development about implementing the Common Core State Standards with students in special education. The Wilcoxon signed-ranks test revealed a significant increase in the TEB-S subscale areas of Accommodating Individual Differences and Managing Learning Routines, but not in Positive Classroom Climate. Qualitative analysis of the data both supported and contradicted the statistical findings. Further qualitative analysis showed that practices presented in the professional development, such as using the arts, formative assessment, and technology, were effective in maintaining teachers' self-efficacy beliefs after the professional development. Factors unrelated to the professional development, such as support from administrators and colleagues and poorly working technology, did not support carrying over the increase in teachers' self-efficacy beliefs in implementing the Common Core State Standards with students in special education. The study is framed by Social Cognitive Theory and organized into five parts. Chapter I provides an overview of the study. Chapter II reviews literature related to teachers' self-efficacy beliefs, the Common Core State Standards, and professional development. Chapter III describes the methodology. Chapter IV presents the results of the data analysis. Chapter V reports the findings, presents the conclusions, and offers ideas for future research.
