201

Časové rozlišení TileCalu a hledání těžkých dlouhožijících částic / Time resolution of TileCal and searches for heavy metastable particles

Pagáčová, Martina January 2012
Title: Time resolution of TileCal and searches for heavy metastable particles
Author: Martina Pagáčová
Department: Institute of Particle and Nuclear Physics
Supervisor: Doc. RNDr. Rupert Leitner, DrSc.
Supervisor's e-mail address: Rupert.Leitner@cern.ch
Abstract: In the present work, the timing of the ATLAS TileCal is studied using single-hadron collision data. Both the time resolution and the mean time response depend on the energy deposited in a given cell. The results are compared with a previous analysis based on jets and muons. Precise time-of-flight measurement with TileCal can be used to identify the heavy long-lived particles predicted by models of physics beyond the Standard Model. Their mass can be reconstructed by combining the time of flight with the momentum measured in the ATLAS inner detector. Finally, the mass resolution for an exotic particle with mass M = 600 GeV is estimated.
Keywords: ATLAS experiment, TileCal, time resolution, stable massive particles
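The mass reconstruction referred to in this abstract follows from standard relativistic kinematics; as an illustration (not taken from the thesis itself), the TileCal time of flight t over a path length L gives the particle velocity, which is combined with the inner-detector momentum p:

$$\beta = \frac{L}{c\,t}, \qquad m = \frac{p}{\gamma\beta c} = \frac{p}{c}\sqrt{\frac{1}{\beta^{2}} - 1}.$$

For example, a candidate with p = 1 TeV and a measured β ≈ 0.86 yields m ≈ 0.59 TeV, of the order of the M = 600 GeV benchmark quoted above.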
202

Vernetztes Lernen an der Hochschule? Ergebnisse und Erfahrungen eines cMOOCs / Networked Learning in Higher Education? Results and Experiences of a cMOOC

Kahnwald, Nina, Pscheida, Daniela January 2015
The connectivist approach and the rapid spread of Massive Open Online Courses (MOOCs) have triggered an ongoing debate about the opportunities, difficulties, and prospects of open learning networks in higher education. The discussion ranges from the feared loss of the lecturers' influence as guarantors of a critical and multifaceted engagement with topics and learning content, to the learner-side prerequisites for successful and profitable participation in connectivist course offerings, to the question of the extent to which open, networked learning can be realized at all within the institutionally entrenched framework of the university. Reliable data on connectivist MOOC offerings (so-called cMOOCs) with predominantly student participation are scarce, since in the German-speaking world these have so far been offered and used mainly in non-formal settings or in continuing education. This contribution presents key results from the implementation and evaluation of a cMOOC with mainly student participants, which was run in the 2013 summer semester and the 2013/14 winter semester in cooperation between three German universities (Dresden, Chemnitz, Siegen). The focus is on the question of the extent to which open, networked learning can be enabled within a university course and learning outcomes can be identified. To this end, quantitative and qualitative evaluation data are combined.
203

DESIGN AND ANALYSIS OF TRANSMISSION STRATEGIES FOR TRAINING-BASED MASSIVE MIMO SYSTEMS

Kudathanthirige, Dhanushka Priyankara 01 December 2020
The next-generation wireless technologies are currently being researched to address the ever-increasing demands for higher data rates, massive connectivity, improved reliability, and extended coverage. Recently, massive multiple-input multiple-output (MIMO) has gained significant attention as a new physical-layer transmission technology that can achieve unprecedented spectral and energy efficiency gains via aggressive spatial multiplexing. Thus, massive MIMO has been one of the key enabling technologies for the fifth-generation and subsequent wireless standards. This dissertation therefore focuses on developing system, channel, and signal models that account for practical wireless transmission impairments in massive MIMO systems, and on ascertaining the viability of massive MIMO in fulfilling massive-access, spectrum, security, and energy-efficiency requirements. Specifically, new system and channel models, pilot sequence designs and channel estimation techniques, secure transmit/receive beamforming techniques, transmit power allocation schemes with enhanced security, energy efficiency, and user fairness, and comprehensive performance analysis frameworks are developed for massive MIMO-aided non-orthogonal multiple access (NOMA), cognitive spectrum-sharing, and wireless relaying architectures.

Our first work focuses on developing physical-layer transmission schemes for NOMA-aided massive MIMO systems. A spatial-signature-based user-clustering and pilot allocation scheme is first formulated, and thereby a hybrid orthogonal multiple access (OMA)/NOMA transmission scheme is proposed to boost the number of simultaneous connections. In our second work, the viability of invoking downlink pilots to boost the achievable rate of NOMA-aided massive MIMO is investigated. The third research contribution investigates the performance of underlay spectrum-sharing massive MIMO systems under reverse time-division-duplexing transmission strategies, in which the primary and secondary systems operate concurrently in opposite directions. Thereby, we show that the secondary system can operate at its maximum average transmit power, independent of the primary system, in the limit of infinitely many primary/secondary base-station antennas. In our fourth work, signal processing techniques, power allocation, and relay selection schemes are designed and analyzed for massive MIMO relay networks to optimize the trade-off among the achievable user rates, coverage, and wireless resource usage. Finally, cooperative jamming and artificial-noise-based secure transmission strategies are developed for massive MIMO relay networks with imperfect legitimate-user channel information and no channel knowledge of the eavesdropper. The key design criterion of the aforementioned transmission strategies is to efficiently combine the spatial multiplexing gains and favorable propagation conditions of massive MIMO with the properties of NOMA, underlay spectrum sharing, and wireless relay networks via efficient signal processing.
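The "favorable propagation" property this abstract relies on can be illustrated with a short, self-contained numerical sketch (an illustration only, not code from the dissertation): with i.i.d. Rayleigh fading, the per-user channel gain hardens and different users' channels become nearly orthogonal as the number of base-station antennas M grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def hardening_and_orthogonality(M, trials=2000):
    """Empirically illustrate channel hardening and favorable propagation
    for i.i.d. Rayleigh fading with M base-station antennas and 2 users."""
    # h_k ~ CN(0, I_M): independent channels of two single-antenna users
    H = (rng.standard_normal((trials, M, 2)) +
         1j * rng.standard_normal((trials, M, 2))) / np.sqrt(2)
    gain = np.linalg.norm(H[:, :, 0], axis=1) ** 2 / M              # ||h_1||^2 / M -> 1
    cross = np.abs(np.sum(H[:, :, 0].conj() * H[:, :, 1], axis=1)) / M  # |h_1^H h_2| / M -> 0
    return gain.std(), cross.mean()

for M in (8, 64, 512):
    g_std, x_mean = hardening_and_orthogonality(M)
    print(f"M={M:4d}  std(||h||^2/M)={g_std:.3f}  E|h1^H h2|/M={x_mean:.3f}")
```

Both printed quantities shrink roughly as 1/√M, which is what makes simple linear processing nearly optimal in the large-antenna regime.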
204

Distribution Agnostic Structured Sparsity Recovery: Algorithms and Applications

Masood, Mudassir 05 1900
Compressed sensing has been a very active area of research, and several elegant algorithms have been developed for the recovery of sparse signals in the past few years. However, most of these algorithms are either computationally expensive or make assumptions that are not suitable for all real-world problems. Recently, focus has shifted to Bayesian approaches that are able to perform sparse signal recovery at much lower complexity while incorporating constraints and/or a priori information about the data. While Bayesian approaches have their advantages, these methods must have access to a priori statistics. Usually, these statistics are unknown and are often difficult or even impossible to predict. An effective workaround is to assume a distribution, typically Gaussian, as it makes many signal processing problems mathematically tractable. Seemingly attractive, this assumption necessitates the estimation of the associated parameters, which could be hard if not impossible. In this thesis, we focus on this aspect of Bayesian recovery and present a framework to address the challenges mentioned above. The proposed framework allows Bayesian recovery of sparse signals but at the same time is agnostic to the distribution of the unknown sparse signal components. The algorithms based on this framework are agnostic to signal statistics and utilize only the a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. In this thesis, we propose several algorithms based on this framework which utilize the structure present in signals for improved recovery. In addition to the algorithm that considers just the sparsity structure of sparse signals, tools that target additional structure of the sparsity recovery problem are proposed. These include several algorithms for a) block-sparse signal estimation, b) joint reconstruction of several common-support sparse signals, and c) distributed estimation of sparse signals. Extensive experiments are conducted to demonstrate the power and robustness of our proposed sparse signal estimation algorithms. Specifically, we target the problems of a) channel estimation in massive MIMO and b) narrowband interference mitigation in SC-FDMA. We model these problems as sparse recovery problems and demonstrate how they can be solved naturally using the proposed algorithms.
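The thesis' algorithms are Bayesian yet distribution-agnostic; as a simpler, generic baseline that likewise needs only the noise variance and the sparsity rate as prior knowledge, a greedy recovery sketch might look like the following (an illustrative assumption, not the framework proposed in the thesis):

```python
import numpy as np

def omp_recover(A, y, noise_var, sparsity_rate):
    """Greedy sparse recovery (orthogonal matching pursuit) whose stopping
    rules use only the additive-noise variance and the expected sparsity rate."""
    m, n = A.shape
    max_support = max(1, int(round(sparsity_rate * n)))
    support, x_s = [], np.zeros(0, dtype=complex)
    residual = y.astype(complex).copy()
    for _ in range(max_support):
        # stop once the residual energy is down at the expected noise floor
        if np.linalg.norm(residual) ** 2 <= m * noise_var:
            break
        corr = A.conj().T @ residual                   # correlate columns with residual
        support.append(int(np.argmax(np.abs(corr))))   # pick the best-matching column
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x_hat = np.zeros(n, dtype=complex)
    x_hat[support] = x_s
    return x_hat
```

Here A is the known measurement matrix and y the observation; the two stopping criteria mirror the kind of prior knowledge (noise statistics and sparsity rate) that the abstract says can be estimated from the data if not available.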
205

X-ray Emissions from Clump Bowshocks in Massive Star Winds

Ignace, Richard, Waldron, W., Cassinelli, N. 01 January 2012
Clumped structures in wind flows have substantially altered our interpretations of multiwavelength data for understanding mass loss from massive stars. Embedded wind shocks have long been the favored explanation for the hot plasma production and X-ray generation in massive star winds. This contribution reports on line profile shapes from the clump bowshock model and summarizes the temperature and emission measure distributions throughout the wind for this model, with a focus on results that can be tested against observations. The authors acknowledge funding support for this work from a NASA grant (NNH09CF39C).
206

Radio Emission from Macroclumps in Massive Star Winds

Ignace, Richard 01 January 2014
Massive star winds are understood to be structured. Structures can come in the form of co-rotating interaction regions, which are globally organized flow streams that thread the winds. Structures can also be stochastic in nature, generically referred to as "clumps". The theory for interpreting the radio emissions from randomly distributed microclumps in single star winds is established. Results are presented here for macroclumping, in which the radiative transfer is sensitive to the clump geometry. Two cases are compared: spherical clumps and pancake-like fragments. The geometry of macroclumps can influence the power-law slope of the long wavelength spectral energy distribution.
207

Tagging systems for sequencing large cohorts

Neiman, Mårten January 2010
Advances in sequencing technologies constantly improve the throughput and accuracy of sequencing instruments. Together with this development come new demands and opportunities to take full advantage of the massive amounts of data produced within a sequence run. One way of doing this is to analyze a large set of samples in parallel by pooling them together prior to sequencing and associating the reads with the corresponding samples using DNA sequence tags. Amplicon sequencing is a common application for this technique, enabling ultra-deep sequencing and identification of rare allelic variants. However, a common problem for amplicon sequencing projects is the formation of unspecific PCR products and primer dimers occupying large portions of the data sets. This thesis is based on two papers exploring these new kinds of possibilities and issues. In the first paper, a method for including thousands of samples in the same sequencing run, without dramatically increasing the cost or sample handling complexity, is presented. The second paper presents how the amount of high-quality data from an amplicon sequencing run can be maximized. The findings from the first paper show that a two-tagging system, where the first tag is introduced by PCR and the second tag is introduced by ligation, can be used to effectively sequence a cohort of 3,500 samples using the 454 GS FLX Titanium chemistry. The tagging procedure allows for simple and easily scalable sample handling during sequence library preparation. The PCR-introduced tags, which are present at both ends of the fragments, enable detection of chimera formation and hence avoid false typing in the data set. In the second paper, a FACS machine is used to sort and enrich target-DNA-covered emPCR beads. This is facilitated by tagging quality beads through hybridization of a fluorescently labeled, target-specific DNA probe prior to sorting. The system was evaluated by sequencing two amplicon libraries, one FACS-sorted and one standard-enriched, on the 454, showing a three-fold increase in the quality data obtained.
208

Index Modulation Schemes for Terahertz Communications

Loukil, Mohamed Habib 04 1900
Terahertz (THz)-band communication is envisioned as a critical technology that could satisfy the need for much higher data rates in sixth-generation wireless communication (6G) systems and beyond. Although THz signal propagation suffers from huge spreading and molecular absorption losses that limit the achievable communication ranges, ultra-massive multiple-input multiple-output (UM-MIMO) antenna arrays can introduce the required beamforming gains to compensate for these losses. The reconfigurable UM-MIMO systems of small footprints motivate the use of spatial modulation techniques. Furthermore, the ultra-wideband fragmented THz spectrum motivates the use of index modulation techniques over multicarrier channels. In this thesis, we consider the problem of efficient index mapping and data detection in THz-band index modulation paradigms. We first propose an accurate frequency-domain statistical UM-MIMO channel model for wideband multicarrier THz-band communications by considering THz-specific features. We then propose several THz-band generalized index modulation schemes that provide various performance and complexity tradeoffs. We propose efficient algorithms for mapping information bits to antenna and frequency indices at the transmitter side to enhance the achievable data rates in THz channel uses. We further propose complementary low-complexity parameter estimation and data detection techniques at the receiver side that can scale efficiently with very high rates. We derive theoretical bounds on the achievable performance gains of the proposed solutions and generate extensive numerical results promoting the corresponding future 6G use cases.
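For readers unfamiliar with index modulation, the amount of information carried purely by which antennas or subcarriers are activated, and one standard way to map bits to that choice, can be sketched as follows. This is a generic illustration using the combinatorial number system, not the mapping algorithms proposed in the thesis:

```python
from math import comb, floor, log2

def index_bits(n_resources: int, n_active: int) -> int:
    """Bits conveyed purely by the choice of n_active out of n_resources
    antennas/subcarriers in one channel use."""
    return floor(log2(comb(n_resources, n_active)))

def rank_to_subset(rank: int, k: int) -> list:
    """Decode an integer rank (assumed < C(n, k) for the intended n) into a
    k-element subset of {0, 1, ...} via the combinatorial number system."""
    subset = []
    for kk in range(k, 0, -1):
        c = kk - 1
        while comb(c + 1, kk) <= rank:   # largest c with C(c, kk) <= rank
            c += 1
        subset.append(c)
        rank -= comb(c, kk)
    return subset

# Example: 16 subcarriers, 3 active -> C(16,3) = 560 patterns -> 9 index bits
print(index_bits(16, 3))               # 9
print(rank_to_subset(0b101101101, 3))  # activated indices for those 9 bits
```

In generalized schemes the active resources additionally carry conventional constellation symbols, so the per-use rate is the index bits plus k·log2(M) for an M-ary constellation on each of the k active resources.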
209

The Polstar High Resolution Spectropolarimetry MIDEX Mission

Scowen, Paul A., Gayley, Ken, Neiner, Coralie, Vasudevan, Gopal, Woodruff, Robert, Ignace, Richard, Casini, Roberto, Hull, Tony, Nordt, Alison, Philip Stahl, H. 01 January 2021
The Polstar mission will provide a space-borne 60 cm telescope operating at UV wavelengths with spectropolarimetric capability capturing all four Stokes parameters (intensity, two linear polarization components, and circular polarization). Polstar's capabilities are designed to meet its goal of determining how circumstellar gas flows alter massive stars' evolution, and finding the consequences for the stellar remnant population and the stirring and enrichment of the interstellar medium, by addressing four key science objectives. In addition, Polstar will determine drivers for the alignment of the smallest interstellar grains, and probe the dust, magnetic fields, and environments in the hot diffuse interstellar medium, including for the first time a direct measurement of the polarized and energized properties of intergalactic dust. Polstar will also characterize processes that lead to the assembly of exoplanetary systems and that affect exoplanetary atmospheres and habitability. Science-driven design requirements include: access to ultraviolet bands, where hot massive stars are brightest and circumstellar opacity is highest; high spectral resolution, accessing diagnostics of circumstellar gas flows and stellar composition in the far-UV at 122–200 nm, including the N V, Si IV, and C IV resonance doublets and other transitions such as N IV, Al III, He II, and C III; polarimetry, accessing diagnostics of circumstellar magnetic field shape and strength when combined with high FUV spectral resolution, and diagnostics of stellar rotation and the distribution of circumstellar gas when combined with low near-UV spectral resolution; sufficient signal-to-noise ratios, ~10^3 for spectropolarimetric precisions of 0.1% per exposure, ~10^2 for detailed spectroscopic studies, and ~10 for exploring dimmer sources; and cadence, ranging from 1-10 minutes for most wind variability studies, to hours for sampling rotational phase, to days or weeks for sampling orbital phase. The ISM and exoplanet science program will be enabled by these capabilities driven by the massive star science.
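The quoted signal-to-noise targets follow the usual photon-noise scaling for polarimetry; as a rough back-of-the-envelope relation (an illustrative assumption, not a statement from the Polstar requirements):

$$\sigma_P \sim \frac{1}{\mathrm{SNR}} \quad\Longrightarrow\quad \mathrm{SNR} \approx 10^{3} \;\Rightarrow\; \sigma_P \approx 10^{-3} = 0.1\%\ \text{per exposure.}$$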
210

The Chaotic Wind of WR 40 as Probed by BRITE

Ramiaramanantsoa, Tahina, Ignace, Richard, Moffat, Anthony F.J., St-Louis, Nicole, Shkolnik, Evgenya L., Popowicz, Adam, Kuschnig, Rainer, Pigulski, Andrzej, Wade, Gregg A., Handler, Gerald, Pablo, Herbert, Zwintz, Konstanze 01 December 2019
Among Wolf-Rayet stars, those of subtype WN8 are the intrinsically most variable. We have explored the long-term photometric variability of the brightest known WN8 star, WR 40, through four contiguous months of time-resolved, single-passband optical photometry with the BRIght Target Explorer nanosatellite mission. The Fourier transform of the observed light curve reveals that the strong light variability exhibited by WR 40 is dominated by many randomly triggered, transient, low-frequency signals. We establish a model in which the whole wind consists of stochastic clumps whose visibility rises promptly to peak brightness upon emergence from the optically thick pseudo-photosphere in the wind and then decays gradually, following the right half of a Gaussian. Free electrons in each clump scatter continuum light from the star. We explore one scenario in which the clump size follows a power-law distribution, and another with an ensemble of clumps of constant size. Both scenarios yield simulated light curves that morphologically resemble the observed light curve remarkably well, indicating that the details of the clump size distribution cannot be uniquely constrained from a photometric light curve alone. Nevertheless, independent evidence favours a negative-index power law, as seen in many other astrophysical turbulent media.
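The clump-visibility model described above lends itself to a very small simulation; the sketch below only illustrates the prompt-rise, half-Gaussian-decay recipe, with all parameter values chosen arbitrarily rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def clump_light_curve(duration=120.0, dt=0.01, rate=5.0, sigma=0.3, mean_peak=1e-3):
    """Toy light curve: clumps emerge at Poisson-distributed times, each
    jumping promptly to a peak brightness and then fading as the right half
    of a Gaussian of width sigma (days). Clump fluxes add to the continuum."""
    t = np.arange(0.0, duration, dt)
    flux = np.ones_like(t)                        # constant stellar continuum
    n_clumps = rng.poisson(rate * duration)       # expected `rate` clumps per day
    birth = rng.uniform(0.0, duration, n_clumps)
    peak = rng.exponential(mean_peak, n_clumps)   # arbitrary peak-brightness distribution
    for t0, a in zip(birth, peak):
        after = t >= t0
        flux[after] += a * np.exp(-0.5 * ((t[after] - t0) / sigma) ** 2)
    return t, flux
```

Summing many such randomly triggered transients produces the kind of stochastic, low-frequency-dominated variability described in the abstract.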
