Examining variable galactic nuclei with the help of astronomical databases and archives

Kjellqvist, Jimmy January 2019 (has links)
Many astronomical objects vary in brightness: variable stars such as the Cepheids, whose outer layers periodically expand and contract, and active galactic nuclei (AGN), where accretion of matter onto a black hole often generates variable emission. Several candidates for such variable objects were identified by the Vanishing and Appearing Sources during a Century of Observations (VASCO) project and subsequently narrowed down to a handful that showed variability towards the infrared part of the spectrum. This bachelor's thesis examines these candidates further using various databases and catalogues drawing on several sky surveys (e.g. SDSS and 2MASS). The aim is to obtain a better overview of each object's light curve across a larger part of the spectrum, to establish whether the variability is real or an artefact of measurement errors, and to form a hypothesis about what kind of objects they could be. The survey data indicate that all of the candidates are genuinely variable objects; the hypothesis is that they are all AGN varying in brightness.
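As a rough illustration of the kind of variability assessment involved, the sketch below scores a light curve against a constant-brightness model with a reduced chi-squared statistic (the magnitudes and uncertainties are invented for illustration and are not VASCO data):

```python
import numpy as np

def variability_chi2(mags, errs):
    """Reduced chi-squared of a light curve against a constant-flux model.

    Values well above 1 suggest genuine variability rather than noise.
    """
    mags = np.asarray(mags, dtype=float)
    errs = np.asarray(errs, dtype=float)
    # Inverse-variance weighted mean magnitude (best-fit constant model).
    w = 1.0 / errs**2
    mean = np.sum(w * mags) / np.sum(w)
    chi2 = np.sum(((mags - mean) / errs) ** 2)
    return chi2 / (len(mags) - 1)  # one fitted parameter

# Hypothetical multi-epoch magnitudes, as if merged from several surveys.
steady = variability_chi2([18.1, 18.2, 18.15, 18.1], [0.1, 0.1, 0.1, 0.1])
varying = variability_chi2([18.1, 19.0, 17.6, 18.9], [0.1, 0.1, 0.1, 0.1])
```

In practice such a statistic would be computed per band, after cross-matching the survey catalogues and homogenizing the photometric systems.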

Conception du décodeur NB-LDPC à débit ultra-élevé / Design of ultra high throughput rate NB-LDPC decoder

Harb, Hassan 08 November 2018 (has links)
Non-Binary Low-Density Parity-Check (NB-LDPC) codes are an interesting class of error-correcting codes, well known to outperform their binary counterparts. However, their non-binary nature makes the decoding process considerably more complex. This PhD thesis proposes new decoding algorithms for NB-LDPC codes and the corresponding hardware architectures, aiming at low complexity and a high throughput rate. The first contribution is to reduce the complexity of the check node (CN) by minimizing the number of messages it processes: a pre-sorting step orders the incoming messages by their reliability values, the least likely messages are omitted, and the hardware dedicated to them can simply be removed. Processing only the most reliable messages yields a large reduction in the hardware complexity of the NB-LDPC decoder without significantly degrading decoding performance. A new hybrid CN architecture (H-CN), combining two state-of-the-art algorithms, the Forward-Backward CN (FB-CN, based on the Extended Min-Sum) and the Syndrome-Based CN (SB-CN), is proposed to exploit the pre-sorting fully. The thesis also proposes new methods for variable node (VN) processing in the context of a pre-sorting-based architecture. Implementation examples are given for NB-LDPC codes defined over GF(64) and GF(256); in particular, a highly efficient parallel decoder architecture for a rate-5/6 code over GF(64) is presented, characterized by a fully parallel CN that receives all its input messages in a single clock cycle. This methodology of parallel implementation opens a new vein in the design of ultra-high-throughput NB-LDPC decoders. Finally, since NB-LDPC decoders require a sorting function that extracts the P minimum values from a list of size Ns, a chapter is dedicated to this problem and proposes an original architecture called First-Then-Second-Extrema-Selection (FTSES).
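The extraction task itself is easy to state in software. The sketch below pulls the P smallest (i.e. most reliable, in the log domain) values from a candidate list with a heap; it only illustrates the function an FTSES-style block computes, while the thesis's contribution is the comparator-based hardware architecture, which is not reproduced here:

```python
import heapq

def p_minima(values, p):
    """Return the p smallest values of a list, in ascending order.

    NB-LDPC check-node processing repeatedly needs the p most reliable
    (smallest) log-domain messages out of Ns candidates; a hardware
    unit solves this with comparator trees rather than a heap.
    """
    return heapq.nsmallest(p, values)

# Hypothetical reliability values (smaller = more reliable).
msgs = [3.2, 0.7, 5.1, 1.4, 0.9, 2.8]
best = p_minima(msgs, 2)
```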

Recherche de nouveaux déterminants génétiques et épigénétiques de susceptibilité à la tumorigenèse intestinale au moyen du modèle murin Apcd14 / Identification of new genetic and epigenetic determinants of intestinal tumorigenesis susceptibility using the Apcd14 mouse model

Quesada, Stanislas 20 September 2013 (has links)
Colorectal cancer (CRC) is a major public health concern. Environmental risk factors, as well as the sequential genetic and epigenetic alterations that correlate with tumor progression, have been extensively described. By contrast, pre-existing epigenetic variation in the healthy intestinal mucosa, potentially responsible for marked differences in CRC susceptibility, has remained elusive. To address this question, this project used the Apcd14 mouse model, which carries a constitutive heterozygous mutation in the Adenomatous polyposis coli gene. Although all Apcd14 mice share a nominally identical genome and are raised under the same conditions, they show highly variable expressivity, spontaneously developing few or many intestinal tumors, which ultimately translates into differences in survival. Detailed analysis of the strain identified two groups of mice with phenotypes of differing severity. Gene expression in the healthy tissue (i.e. upstream of tumorigenesis) was then analyzed to understand how this heterogeneity is established, leading to the discovery of a signature of differentially expressed genes that perfectly correlates the molecular data with the phenotype. The potential heritability of this signature subsequently called into question the supposedly syngeneic status of the strain. The experiments have already narrowed the search to one chromosomal region, which should soon lead to the characterization of a new actor in intestinal tumorigenesis. More generally, this work supports the notion of individual cancer susceptibility driven by variation in gene expression.

THE FAMILY OF CONDITIONAL PENALIZED METHODS WITH THEIR APPLICATION IN SUFFICIENT VARIABLE SELECTION

Xie, Jin 01 January 2018 (has links)
When scientists know in advance that certain features (variables) are important for modeling a data set, those features should be kept in the model. How can this prior information be used to find other important features effectively? This dissertation provides a solution. We propose the Conditional Adaptive Lasso (CAL) estimator to exploit such knowledge: by choosing a meaningful conditioning set, namely the features known a priori to matter, CAL shows better performance in both variable selection and model estimation. Based on CAL, we also propose the Sufficient Conditional Adaptive Lasso Variable Screening (SCAL-VS) and Conditioning Set Sufficient Conditional Adaptive Lasso Variable Screening (CS-SCAL-VS) algorithms. Asymptotic and oracle properties are proved. Simulations, especially for large-p-small-n problems, compare the methods with existing alternatives. We then extend the linear-model setup to generalized linear models (GLMs): instead of least squares, we consider the likelihood function with an L1 penalty, i.e. penalized likelihood methods, and propose the Generalized Conditional Adaptive Lasso (GCAL). The method is further extended to any penalty term satisfying certain regularity conditions, yielding the Conditionally Penalized Estimate (CPE). Asymptotic and oracle properties are shown, and four corresponding sufficient variable screening algorithms are proposed. Simulation studies compare our methods with existing ones, and GCAL is also evaluated on a real leukemia data set.
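To make the conditioning idea concrete, here is a minimal NumPy sketch of a conditionally penalized lasso: features in the conditioning set are left unpenalized, so they always stay in the model, while the remaining coefficients are shrunk as usual. This illustrates only the conditioning mechanism; the actual CAL estimator additionally uses adaptive, data-driven penalty weights:

```python
import numpy as np

def conditional_lasso(X, y, lam, known, n_iter=200):
    """Lasso via coordinate descent with a conditioning set.

    Features listed in `known` (the prior information) receive zero
    penalty; all others are soft-thresholded. A rough sketch of a
    conditionally penalized estimate, not the CAL estimator itself.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    penalty = np.full(p, lam)
    penalty[list(known)] = 0.0          # prior info: no shrinkage
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j's current contribution.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            thr = n * penalty[j]
            beta[j] = np.sign(rho) * max(abs(rho) - thr, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.1, size=100)
# Feature 0 is known to matter: it is conditioned on, never penalized.
beta = conditional_lasso(X, y, lam=0.5, known=[0])
```

The conditioned coefficient is recovered essentially unshrunk, the other true signal survives the penalty in shrunken form, and the noise features are set exactly to zero.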

Variable Strength Covering Arrays

Raaphorst, Sebastian 21 January 2013 (has links)
Recently, covering arrays have been the subject of considerable research attention, as they hold both theoretical interest and practical importance due to their applications to testing. In this thesis, we perform the first comprehensive study of a generalization of covering arrays called variable strength covering arrays, in which we dictate the interactions to be covered in the array by modeling them as facets of an abstract simplicial complex. We outline the necessary background in the theory of hypergraphs, combinatorial testing, and design theory, and then approach the questions that arise in variable strength covering arrays in a number of ways. We demonstrate their connections to hypergraph homomorphisms, and explore the properties of a particular family of abstract simplicial complexes, the qualitative independence hypergraphs. These hypergraphs are tightly linked to variable strength covering arrays, and we determine and identify several of their important properties and subhypergraphs. We give a detailed study of constructions for variable strength covering arrays, and provide several operations and divide-and-conquer techniques that can be used in building them. In addition, we give a construction using linear feedback shift registers built from primitive polynomials of degree 3 over arbitrary finite fields, which we extend to strength-3 covering arrays whose sizes are smaller than many of the best known sizes of covering arrays. We then give an algorithm for creating variable strength covering arrays over arbitrary abstract simplicial complexes, which builds the array one row at a time, using a density concept to guarantee that the size of the resulting array is asymptotically logarithmic in the number of facets of the abstract simplicial complex. This algorithm is of immediate practical importance, as it can be used to create test suites for combinatorial testing. Finally, we use the Lovász Local Lemma to nonconstructively determine upper bounds on the sizes of arrays for a number of different families of hypergraphs. We lay out a framework that can be used for many hypergraphs, and then discuss possible strategies for asymmetric problems.
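The row-at-a-time greedy idea can be shown in miniature. The sketch below builds a plain strength-2 binary covering array by choosing, at each step, a row covering the most still-uncovered pairs; it is a toy stand-in for the thesis's density algorithm (which handles arbitrary abstract simplicial complexes and carries the logarithmic size guarantee), not a reproduction of it:

```python
from itertools import combinations, product

def greedy_covering_array(k, v=2, t=2):
    """Greedy one-row-at-a-time strength-t covering array on k factors,
    alphabet size v: each new row covers as many still-uncovered t-way
    interactions as possible (exhaustive row search, so small k only).
    """
    uncovered = {(cols, vals)
                 for cols in combinations(range(k), t)
                 for vals in product(range(v), repeat=t)}
    rows = []
    while uncovered:
        best_row, best_hits = None, -1
        for row in product(range(v), repeat=k):
            hits = sum(1 for (cols, vals) in uncovered
                       if tuple(row[c] for c in cols) == vals)
            if hits > best_hits:
                best_row, best_hits = row, hits
        rows.append(best_row)
        uncovered = {(cols, vals) for (cols, vals) in uncovered
                     if tuple(best_row[c] for c in cols) != vals}
    return rows

ca = greedy_covering_array(k=4)   # strength-2, binary alphabet
```

Every pair of columns then exhibits all four value combinations, in far fewer rows than the 2^4 exhaustive test suite.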

Utredning av behovsstyrd ventilation : En jämförelse mellan CAV och VAV

Ängalid, Filip January 2012 (has links)
This report is a bachelor's thesis (C level) carried out in collaboration with the engineering consultancy Ramböll. The most common way to ventilate a building today is so-called CAV ventilation (Constant Air Volume), in which an airflow is chosen for the room and maintained at a constant rate. An alternative is VAV (Variable Air Volume), in which the flow varies with demand. The reason for choosing VAV over CAV is that a constant flow often carries a considerable risk of over-ventilating a room or building. The drawback of VAV is its higher investment cost, so the method is only suitable where the energy savings are large enough to cover the price difference. This study investigates in which types of rooms it can pay off to install VAV instead of a traditional CAV system. Fictitious room models are simulated in IDA Indoor Climate & Energy (IDA), a simulation tool used to model the thermal comfort and energy use of buildings. Three room types are simulated: a classroom, an office, and a meeting room, each modeled to resemble its real-world counterpart in both geometry and occupancy. For any case that proved a good candidate for VAV, the study goes on to determine how large the design airflow should be for the energy savings to cover the investment cost. The economic evaluation uses both a life-cycle cost analysis and a simpler payback-time calculation. The simulations showed that the meeting room was the only profitable room type in this study; the classroom and the office both produced a loss, because their occupancy was so high that the VAV ventilation ran almost as much as a CAV system would have. The meeting room's occupancy was considerably lower, so the energy savings were large enough to cover the high investment cost. A subsequent flow analysis showed that the meeting room's airflow should be dimensioned for roughly 20 to 30 people for the investment to be profitable.
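The kind of payback-time and life-cycle cost comparison used in such an economic evaluation can be sketched as follows; all amounts below are invented placeholders, not figures from the study:

```python
def simple_payback_years(extra_investment, annual_saving):
    """Years until the extra cost of VAV over CAV is recouped by
    annual energy savings (simple payback, no discounting)."""
    if annual_saving <= 0:
        return float("inf")     # VAV never pays for itself
    return extra_investment / annual_saving

def discounted_lcc(investment, annual_cost, rate, years):
    """Life-cycle cost: investment plus discounted annual running costs."""
    return investment + sum(annual_cost / (1 + rate) ** t
                            for t in range(1, years + 1))

# Hypothetical meeting room: VAV costs 40 000 SEK more up front but
# cuts annual energy cost from 20 000 SEK to 14 000 SEK.
payback = simple_payback_years(40_000, 6_000)
lcc_cav = discounted_lcc(0, 20_000, rate=0.04, years=20)
lcc_vav = discounted_lcc(40_000, 14_000, rate=0.04, years=20)
```

With these placeholder numbers VAV pays back in under seven years and wins on 20-year life-cycle cost; with the high occupancy of the classroom or office cases, the annual saving shrinks and the comparison flips.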

Design and Implementation of an Inverter Drive for High-Efficiency Compressor used in Air Conditioner

TSENG, WEI-CHIH 11 July 2002 (has links)
Abstract: This thesis presents the results of an experimental investigation into applying inverter-based variable-speed drives to positive-displacement rotary compressors. A DSP-microprocessor-based inverter drive for a high-efficiency air-conditioner compressor is designed and implemented, controlling the compressor with a sinusoidal PWM and V/F scheme. Permanent-magnet synchronous motors offer energy-saving potential in general compressor-drive applications, and for applications such as compressors, where high dynamic performance is not demanded, a simple V/F control strategy may be sufficient to obtain the required control performance. The goal is to find the control strategy that maximizes the energy savings of an inverter drive for a high-efficiency air-conditioner compressor.
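A constant-V/F command with sinusoidal PWM can be sketched in a few lines; the rated values and boost voltage below are illustrative assumptions, not parameters of the implemented drive:

```python
import math

def vf_command(freq_hz, v_rated=220.0, f_rated=60.0, v_boost=10.0):
    """Constant V/F law: stator voltage roughly proportional to frequency,
    with a small low-speed boost, clamped at the rated voltage."""
    v = v_boost + (v_rated - v_boost) * freq_hz / f_rated
    return min(v, v_rated)

def sine_pwm_duty(theta, modulation_index):
    """Duty cycle of one inverter leg for sinusoidal PWM at angle theta."""
    return 0.5 * (1.0 + modulation_index * math.sin(theta))

v30 = vf_command(30.0)                 # half speed: reduced voltage
d = sine_pwm_duty(math.pi / 2, 0.8)    # peak of the reference sine
```

Holding V/F approximately constant keeps the stator flux near its rated value across the speed range, which is what lets a simple open-loop scheme drive the compressor motor efficiently.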

Mélange et dynamique de la turbulence en écoulements libres à viscosité variable

Talbot, Benoit 10 November 2009 (has links) (PDF)
This work is an experimental and analytical study of turbulence in its development phase in heterogeneous fluids of variable density and viscosity. It relies on experimental diagnostic tools (hot-wire anemometry, the Rayleigh scattering technique, Laser Doppler Velocimetry) and on the formalism of the variable-viscosity Navier-Stokes equations. The innovation concerns the independence of the velocity measurement. After validation, the experimental platform is used for a comparative study of a propane jet, of variable viscosity and density, emerging into an air-neon mixture, against a conventional air jet injected with the same initial momentum. The work then deepens the analysis of the near-field properties, complemented by an analytical approach based on rewriting the Navier-Stokes equations with variable viscosity.
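For reference, the incompressible momentum equation with spatially varying viscosity that such an analysis starts from can be written as follows (standard form, not transcribed from the thesis):

```latex
% Momentum balance with variable dynamic viscosity \mu(\mathbf{x},t):
\rho\left(\frac{\partial u_i}{\partial t}
        + u_j\,\frac{\partial u_i}{\partial x_j}\right)
  = -\frac{\partial p}{\partial x_i}
  + \frac{\partial}{\partial x_j}\!\left[\mu\left(
      \frac{\partial u_i}{\partial x_j}
    + \frac{\partial u_j}{\partial x_i}\right)\right]
% The terms generated by \partial\mu/\partial x_j are what distinguish
% this from the constant-viscosity case, where the last term collapses
% to \mu\,\nabla^2 u_i.
```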

Design and Analysis of a Shock Absorber with a Variable Moment of Inertia Flywheel for Passive Vehicle Suspension

Xu, Tongyi 05 November 2013 (has links)
Conventional vehicle suspensions consist of a spring and a damper, while mass is rarely used as an element, even though a mass, properly deployed, can also create a damping-like effect. The difficulty is that a mass has only one terminal, which makes it hard to incorporate into a suspension; achieving the damping-like effect requires designing a two-terminal mass (TTM). Most reported TTMs, however, have a constant moment of inertia (TTM-CMI), which limits further improvement of suspension performance and responsiveness to changes in the environment and driving conditions. This study proposes a TTM-based vibration absorber with a variable moment of inertia (TTM-VMI). Its main component is a hydraulically driven flywheel with sliders; the moment of inertia changes with the positions of the sliders in response to the driving conditions. The performance of the proposed TTM-VMI absorber is analyzed via dynamic modeling and simulation and further examined experimentally. The analysis indicates that the TTM-VMI absorber outperforms the TTM-CMI design in terms of body displacement, and in ride comfort, tire grip, and suspension deflection for zero and impulse inputs, with comparable performance for sinusoidal input.
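The slider mechanism's effect on the moment of inertia follows directly from treating the sliders as point masses at an adjustable radius; the numbers below are illustrative, not the thesis design:

```python
def flywheel_inertia(i_hub, m_slider, n_sliders, r):
    """Moment of inertia of a flywheel with point-mass sliders at radius r.

    Moving the sliders outward increases I quadratically with r, which is
    the basic mechanism behind a variable-moment-of-inertia flywheel.
    """
    return i_hub + n_sliders * m_slider * r ** 2

i_in = flywheel_inertia(0.02, 0.1, 4, 0.03)   # sliders retracted
i_out = flywheel_inertia(0.02, 0.1, 4, 0.10)  # sliders extended
```

Because I grows with r squared, a modest slider travel gives a wide inertia range for the hydraulic actuation to exploit in response to road inputs.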
