  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

Laurent Schwartz (1915-2002) et la vie collective des mathématiques / Laurent Schwartz (1915-2002) and the collective life of mathematics

Paumier, Anne-Sandrine 30 June 2014 (has links)
Ce travail se saisit de la figure de Laurent Schwartz (1915-2002) pour étudier la vie collective des mathématiques dans la seconde moitié du XXème siècle. Il vise à montrer comment les pratiques collectives sont alors constitutives du travail et de la communauté mathématiques et comment elles évoluent au cours de cette période. Par le biais biographique, en considérant Schwartz à la fois comme un acteur important qui laisse de nombreuses traces ou comme un simple témoin, nous présentons plusieurs tableaux du collectif. Nous étudions la rencontre que Schwartz fait de la vie collective des mathématiques pendant la Seconde Guerre mondiale, notamment par son interaction avec le groupe Bourbaki. Nous analysons ensuite la diffusion de la théorie des distributions dans les mathématiques et son historiographie et montrons le rôle actif de Schwartz dans ces processus. Un chapitre consacré au théorème des noyaux de Schwartz et ses écritures ultérieures permet d'approfondir l'étude des interactions entre pratiques d'écriture en mathématiques et différents types de collectifs. Ce sont ensuite sur trois formes d'organisation collective du travail mathématique que nous nous penchons : le colloque (en proposant une étude de cas sur le colloque d'analyse harmonique de 1947), le séminaire et, enfin, le laboratoire de mathématiques (en prenant l'exemple du Centre de Mathématiques de l'École polytechnique). Enfin, nous abordons la question de l'engagement politique de Schwartz en tant que mathématicien. Nous cherchons à montrer comment cet engagement traduit une certaine conception de la communauté mathématique, tout en s'inspirant de ses pratiques sociales particulières. / This work takes the case of Laurent Schwartz (1915-2002) to study the collective life of mathematics in the second half of the 20th century. Its goal is to show how collective practices have been constitutive of mathematical work and community, as well as how they evolved over this period. Through a biographical lens, by considering Schwartz both as an important actor who has left numerous traces and as a simple witness, we present several tableaus of the collective. We study the encounter between Schwartz and the collective life of mathematics during World War II, in particular through his interaction with the Bourbaki group. We then analyze the diffusion of the theory of distributions in mathematics and its historiography, and show Schwartz's active role in these processes. A chapter devoted to the kernel theorem (théorème des noyaux) and its later written incarnations allows us to deepen our study of interactions between writing practices in mathematics and various kinds of collectives. Three forms of collective organization of mathematical work are then investigated: the conference (through a study of the 1947 colloquium on harmonic analysis), the seminar, and, finally, the mathematical research center (taking as an example the Centre de Mathématiques de l'École polytechnique). Finally, we take on the question of Schwartz's political engagement as a mathematician. We wish to show how this engagement embodies a certain conception of the mathematical community, while taking some inspiration from its particular social practices.
332

Process Intensification Techniques for Continuous Spherical Crystallization in an Oscillatory Baffled Crystallizer with Online Process Monitoring

Joseph A Oliva (6588797) 15 May 2019 (has links)
Guided by the continuous manufacturing paradigm shift in the pharmaceutical industry, this thesis focuses on the implementation of an integrated continuous crystallization platform, the oscillatory baffled crystallizer (OBC), with real-time process monitoring. First, by defining an appropriate operating regime with residence time distribution (RTD) measurements, a system can be defined that allows for plug flow operation while also maintaining solid suspension in a two-phase system. Narrow crystal size distributions (CSDs), the aim of modern crystallization processes, are a direct result of narrow RTDs. Using a USB microscope camera and principal component analysis (PCA) in pulse tracer experiments, a novel non-contact RTD measurement method was developed using methylene blue. After defining an operating region, this work focuses on a specific process intensification technique, namely spherical crystallization.

Used mainly to tailor the size of a final dosage form, spherical crystallization removes the need for downstream size-control-based unit operations (grinding, milling, and granulation), while maintaining drug efficacy by tailoring the size of the primary crystals in the agglomerate. The approach for generating spherical agglomerates is evaluated for both small and large molecules, as there are major distinctions in process kinetics and mechanisms. To monitor the spherical agglomeration process, a variety of Process Analytical Technology (PAT) tools were used and the data were applied to scale-up applications.

Lastly, a compartmental model was designed based on the experimental RTD data with the intention of predicting OBC mixing and scale-up dynamics. Together with validation from both the DN6 and DN15 systems, a scale-independent equation was developed to predict system dispersion at different mixing conditions. Although it accurately predicts the behavior of these two OBC systems, additional OBC systems of different scale but similar geometry should be tested for validation purposes.
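As a minimal illustration of the RTD analysis mentioned above, the sketch below computes the exit-age distribution and its first two moments from a pulse-tracer concentration trace. The synthetic signal, variable names, and time grid are placeholders, not data or code from the thesis.

```python
# Sketch: RTD moments from a pulse-tracer concentration signal c(t),
# as used to characterize plug-flow behaviour. Signal is synthetic.
import numpy as np

def rtd_moments(t, c):
    """Return mean residence time and variance from tracer concentration c(t)."""
    area = np.trapz(c, t)                   # total tracer signal
    e = c / area                            # exit-age distribution E(t)
    tau = np.trapz(t * e, t)                # mean residence time
    var = np.trapz((t - tau) ** 2 * e, t)   # variance of the RTD
    return tau, var

t = np.linspace(0, 600, 601)                # seconds
c = np.exp(-0.5 * ((t - 240) / 40) ** 2)    # arbitrary dispersed pulse response
tau, var = rtd_moments(t, c)
print(f"mean residence time = {tau:.1f} s, dimensionless variance = {var / tau**2:.3f}")
```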
333

Galaxy Evolution in the Local and the High-z Universe Through Optical+near-IR Spectroscopy

January 2020 (has links)
A key open problem within galaxy evolution is to understand the evolution of galaxies towards quiescence. This work investigates the suppression of star-formation through shocks and turbulence at low redshift; at higher redshifts, it investigates the use of features within quiescent galaxy spectra for redshift estimation, and the passive evolution of aging stellar populations to understand their star-formation histories. At low-$z$, this work focuses on the analysis of optical integral field spectroscopy data of a nearby ($z\sim0.0145$) unusual merging system, called the Taffy system because of radio emission that stretches between the two galaxies. This system, although a recent major merger of gas-rich spirals, exhibits an atypically low star-formation rate and infrared luminosity. Strong evidence of shock heating as a mechanism for these atypical properties is presented. This result (in conjunction with many others) from the nearby Universe provides evidence for shocks and turbulence, perhaps due to mergers, as an effective feedback mechanism for the suppression of star-formation. At intermediate and higher $z$, this work focuses on the analysis of Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) G800L grism spectroscopy and photometry of galaxies with a discernible 4000\AA\ break. The usefulness of 4000\AA/Balmer breaks as redshift indicators is quantified by comparing photometric, grism, and spectrophotometric redshifts (SPZs) to ground-based spectroscopic redshifts. A spectral energy distribution (SED) fitting pipeline that is optimized for combined HST grism and photometric data, developed for this project, is presented. This pipeline is a template-fitting based routine which accounts for correlated data between neighboring points within grism spectra via the covariance matrix formalism, and also accounts for galaxy morphology along the dispersion direction. Evidence is provided showing that SPZs typically improve the accuracy of photometric redshifts by $\sim$17--60\%. For future space-based observatories like the Nancy Grace Roman Space Telescope (formerly the Wide Field InfraRed Survey Telescope, i.e., WFIRST) and Euclid, this work predicts $\sim$700--4400 galaxies\,degree$^{-2}$, within $1.6 \lesssim z \lesssim 3.4$, for galaxies with 4000\AA\ breaks and continuum-based redshifts accurate to $\lesssim$2\%. This work also investigates the star-formation histories of massive galaxies ($\mathrm{M_s \geq 10^{10.5}\, M_\odot}$). This is done through the analysis of the strength of the Magnesium absorption feature, Mgb, at $\sim$5175\AA. This analysis is carried out on stacks of HST ACS G800L grism data, stacked for galaxies binned on a color vs stellar mass plane. / Dissertation/Thesis / Doctoral Dissertation Astrophysics and Astronomy 2020
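The covariance matrix formalism mentioned for the grism fits reduces, at its core, to a correlated chi-square comparison between data and template. The sketch below shows that comparison with synthetic placeholders; it is not the thesis' pipeline, and the mock covariance structure is an assumption for illustration only.

```python
# Sketch: covariance-aware chi^2 between an observed spectrum and a model
# template; data, template, and covariance here are synthetic placeholders.
import numpy as np

def chi2_with_covariance(data, model, cov):
    """chi^2 = (d - m)^T C^{-1} (d - m) for correlated spectral pixels."""
    resid = data - model
    return float(resid @ np.linalg.solve(cov, resid))

rng = np.random.default_rng(0)
n = 50
model = np.ones(n)                                   # flat placeholder template
idx = np.arange(n)
cov = 0.01 * 0.5 ** np.abs(idx[:, None] - idx[None, :])  # mock neighbour correlations
data = rng.multivariate_normal(model, cov)           # mock observed spectrum
print("chi^2 =", chi2_with_covariance(data, model, cov))
```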
334

Statistická analýza rozdělení extrémních hodnot pro cenzorovaná data / Statistical Analysis of Extreme Value Distributions for Censored Data

Chabičovský, Martin January 2011 (has links)
The thesis deals with extreme value distributions and censored samples. The theoretical part describes the maximum likelihood method and types of censored samples, and introduces extreme value distributions. In the thesis, likelihood equations are derived for censored samples from the exponential, Weibull, lognormal, Gumbel, and generalized extreme value distributions. For these distributions, asymptotic interval estimates are also derived, and simulation studies are performed on the dependence of the parameter estimates on the percentage of censoring.
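For the simplest of these cases, the right-censored exponential distribution, the censored likelihood yields a closed-form maximum likelihood estimate: the number of observed failures divided by the total time at risk. The sketch below illustrates this with made-up data; it is a minimal example, not the thesis' derivation for the other distributions.

```python
# Sketch: MLE for right-censored exponential data.
# lambda_hat = (observed failures) / (total time at risk); data are invented.
import numpy as np

times = np.array([2.1, 3.5, 0.9, 4.2, 5.0, 1.7])      # event or censoring times
observed = np.array([1, 1, 0, 1, 0, 1], dtype=bool)   # False = right-censored

lam_hat = observed.sum() / times.sum()   # MLE of the exponential rate
se = lam_hat / np.sqrt(observed.sum())   # asymptotic standard error
print(f"lambda_hat = {lam_hat:.3f}, approx. 95% CI = "
      f"({lam_hat - 1.96 * se:.3f}, {lam_hat + 1.96 * se:.3f})")
```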
335

The influence of particle size distribution on bio-coal gasification rate as related to packed beds of particles

Bäckebo, Markus January 2020 (has links)
This thesis is a part of a collaboration between Höganäs AB and Luleå University of Technology, aiming at replacing fossil process coal with bio-coal in their sponge iron process. The difference in gasification reactivity, i.e. reaction rate, between fossil coals and bio-coals is the major challenge in the endeavor to decrease the climate impact of the existing process. The goal of this thesis is to develop a model of the reaction rate of bio-coals in relation to particle size distribution. Different particle size distributions were combined and tested to see how they affect the effective reaction rate. Within the scope of this work, gasification reactivities of different materials, including coal, cokes, and bio-coals, were determined. Three bio-coals were selected to study the effect of particle size distribution on reactivity. Kinetic parameters were determined by thermogravimetric analysis in the temperature range of 770-850 °C while varying the CO2 partial pressure between 0.1 and 0.4 atm. The effect of particle size on the reaction rate was investigated using particles with diameters between 0.18 and 6.3 mm. The effect of particle size distribution on the reactivity of bio-coal in a packed bed was studied in a macro thermogravimetric reactor with a constant bed volume of 6.5 cm³ at 980 °C and 40% (vol.) CO2. The experimental investigation of three different rate-limiting steps was done for one bio-coal sample, i.e. Cortus Bark bio-coal. The activation energy of the bio-coal was 187 kJ mol⁻¹ and the reaction order was 0.365. In the internal diffusion control regime, an increase in particle size resulted in a lower reaction rate. The effective diffusivity calculated from the Thiele modulus model was 1.41×10⁻⁵ m² s⁻¹. In the external diffusion control regime, an increase in particle size increased the reaction rate up to a certain point, where it plateaued at >1 mm. By choosing two discrete particle size distributions, where a smaller average distribution can fit into a larger average distribution, the reaction rate was lowered by 30% compared to using only a single narrow particle size distribution. This solution decreased the difference in apparent reaction rate in a packed bed between the bio-coal and anthracite from 6.5 times to 4.5 times. At the moment the model is not generalized for all bio-coals; however, the developed methodology can be routinely applied to assess different bio-coal samples. One possible source of error is that pyrolysis influences the gasification rate for bio-coal that is pyrolyzed below the temperature of the gasification test. There is a clear correlation between particle size distribution, bulk density, and apparent reactivity. By mixing two distributions, the reaction rate of Cortus Bark was reduced from 6.5 times the reaction rate of anthracite to 4.5 times.
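The activation energy and reaction order quoted above fit into a standard nth-order Arrhenius rate expression. The sketch below evaluates such an expression over the stated temperature range; the pre-exponential factor is a placeholder, since the abstract does not report it, and the expression is a generic form rather than the thesis' full kinetic model.

```python
# Sketch: nth-order Arrhenius gasification rate using Ea = 187 kJ/mol and
# n = 0.365 from the abstract; the pre-exponential factor A is a placeholder.
import numpy as np

R = 8.314            # J mol^-1 K^-1
Ea = 187e3           # J mol^-1, from the abstract
n = 0.365            # reaction order in CO2, from the abstract
A = 1.0e6            # placeholder pre-exponential factor

def gasification_rate(T_kelvin, p_co2_atm):
    """Apparent rate r = A * exp(-Ea / (R T)) * p_CO2^n."""
    return A * np.exp(-Ea / (R * T_kelvin)) * p_co2_atm ** n

for T_c in (770, 810, 850):                 # TGA temperature range from the abstract
    print(T_c, "degC ->", gasification_rate(T_c + 273.15, 0.4))
```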
336

BLOGS: Balanced Local and Global Search for Non-Degenerate Two View Epipolar Geometry

Brahmachari, Aveek Shankar 12 June 2009 (has links)
The problem of epipolar geometry estimation, together with correspondence establishment in cases of wide baseline, large scale changes, and rotation, has been addressed in this work. This work deals with cases that are heavily contaminated by outliers. The jump diffusion MCMC method has been employed to search for the non-degenerate epipolar geometry with the highest probabilistic support from putative correspondences. At the same time, inliers in the putative set are also identified. The jump steps involve large movements guided by a distribution of similarity-based priors, while diffusion steps are small movements guided by a distribution of likelihoods given by the Joint Feature Distribution (JFD). The 'best so far' samples are accepted in accordance with the Metropolis-Hastings method. The diffusion steps are carried out by sampling conditioned on the 'best so far' sample, making them local to it, while jump steps remain unconditioned and span the correspondence and motion space according to a similarity-based proposal distribution making large movements. We advance the theory in three novel ways. First, a similarity-based prior proposal distribution guides the jump steps. Second, JFD-based likelihoods guide the diffusion steps, allowing more focused correspondence establishment while searching for the epipolar geometry. Third, a measure of degeneracy allows us to rule out degenerate configurations. The jump diffusion framework thus defined can handle over 90% outliers even in cases where the number of inliers is very small. Practically, the advancement lies in higher precision and accuracy, which is detailed in this work through comparisons. In this work, BLOGS is compared with the LO-RANSAC, NAPSAC, MAPSAC, and BEEM algorithms, which are the current state-of-the-art competing methods, on a dataset that has significantly more change in baseline, rotation, and scale than those used in the state of the art. The performance of these algorithms and BLOGS is quantitatively benchmarked by estimating the error in the epipolar geometry, given by the root mean Sampson's distance from manually specified corresponding point pairs which serve as ground truth. Not only is BLOGS able to tolerate very high outlier rates, it also gives results of similar quality in 10 times fewer iterations than the most competitive of the compared algorithms.
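The sketch below shows the generic shape of such a jump-diffusion Metropolis-Hastings search on a toy target density. BLOGS' actual similarity-based jump proposal, JFD-guided likelihoods, conditioning of diffusion on the best-so-far sample, and degeneracy check are not reproduced; only the mixture of large unconditioned jumps and small local moves with Metropolis acceptance is illustrated.

```python
# Sketch: jump-diffusion Metropolis-Hastings on a toy 2-D density.
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    return -0.5 * np.sum(x ** 2)              # toy stand-in for the posterior

def jump_proposal():
    return rng.uniform(-5, 5, size=2)         # large, unconditioned move

def diffusion_proposal(x):
    return x + rng.normal(scale=0.1, size=2)  # small local move

x = jump_proposal()
best, best_lp = x, log_target(x)
for _ in range(5000):
    # Mix large jumps with small diffusion moves around the current state
    cand = jump_proposal() if rng.random() < 0.2 else diffusion_proposal(x)
    lp = log_target(cand)
    if np.log(rng.random()) < lp - log_target(x):   # Metropolis acceptance
        x = cand
        if lp > best_lp:
            best, best_lp = x, lp                   # track best-so-far sample
print("best-so-far sample:", best)
```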
337

The Utility of Environmental DNA and Species Distribution Models in Assessing the Habitat Requirements of Twelve Fish Species in Alaskan North Slope Rivers

Eddings, James B. 01 May 2020 (has links)
Subsistence fishing is a vital component of Alaska’s North Slope borough economy and culture that is being threatened by human disturbance. These threats mean the fish must be protected, but the size of the region makes conservation planning difficult. Fortunately, advances in species distribution models (SDMs), environmental DNA (eDNA), and remote sensing technologies provide potential to better understand species’ needs and guide management. The objectives of my study were to: (1) map the current habitat suitability for twelve fish species, occurring in Alaska’s North Slope,(2) determine if SDMs based on eDNA data performed similarly to, or improved, models based on traditional sampling data, and (3) predict how species distributions will shift in the future in response to climate change. I was able to produce robust models for 8 of 12 species that relate environmental characteristics to a species’ presence or absence and identify stream reaches where species are likely to occur. Unfortunately, the use of eDNA data did not produce useful models in Northern Alaskan rivers. However, I was able to generate predictions of species distributions into the future that should help inform management for years to come.
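As a minimal illustration of what a presence/absence SDM does, the sketch below fits a logistic regression relating synthetic environmental covariates to occurrence and predicts a per-reach habitat suitability. The covariate names and data are invented stand-ins; the thesis' own models, predictors, and survey/eDNA data are not reproduced here.

```python
# Sketch: a toy presence/absence species distribution model via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([
    rng.normal(8, 3, n),      # hypothetical mean summer temperature (degC)
    rng.normal(15, 5, n),     # hypothetical stream width (m)
    rng.normal(0.5, 0.2, n),  # hypothetical channel slope (%)
])
# Synthetic presence/absence generated from a known linear rule, for illustration
logit = -2.0 + 0.4 * X[:, 0] - 0.05 * X[:, 1] - 1.0 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

sdm = LogisticRegression().fit(X, y)
suitability = sdm.predict_proba(X)[:, 1]   # predicted habitat suitability per reach
print("mean predicted suitability:", suitability.mean().round(3))
```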
338

Studium spinové struktury nukleonu s pomocí procesu Drell-Yan v experimentu Compass / Nucleon spin structure studies in Drell-Yan process at Compass

Matoušek, Jan January 2018 (has links)
Jointly-supervised doctoral thesis Title: Nucleon spin structure studies in Drell-Yan process at COMPASS Author: Jan Matoušek Department I: Department of Low Temperature Physics, Faculty of Mathematics and Physics, Charles University Department II: Department of Physics, University of Trieste Supervisor I: prof. Miroslav Finger (Department I) Supervisor II: prof. Anna Martin (Department II) Abstract: The nucleon structure is presently described by Transverse Momentum Dependent (TMD) Parton Distribution Functions (PDFs), which generalise the collinear PDFs, adding partonic spin and transverse momentum degrees of freedom. The recent HERMES and COMPASS data on hadron production in deep inelastic scattering (SIDIS) of leptons off transversely polarised nucleons have provided a decisive validation of this framework. Nevertheless, the TMD PDFs should be studied in complementary reactions, like pp hard scattering and Drell-Yan processes. In particular the Sivers TMD PDF, which encodes the correlation between the nucleon transverse spin and quark transverse momentum and appears in the Sivers Transverse Spin Asymmetry (TSA), is expected to have opposite sign in Drell-Yan and SIDIS. In 2015 COMPASS measured for the first time the Drell-Yan process on a transversely polarised target π− p↑ → µ− µ+ X to test...
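A transverse spin asymmetry of the kind mentioned above is, in its simplest form, estimated from event counts taken with the target polarisation up and down, corrected for the target polarisation and dilution factor. The sketch below shows only that simplest estimator with invented numbers; acceptance corrections and the azimuthal-modulation fits used in the real COMPASS analysis are omitted, and the parameter values are assumptions.

```python
# Sketch: simplest transverse spin asymmetry estimator from up/down counts,
# corrected for target polarisation P and dilution factor f (values assumed).
import math

def spin_asymmetry(n_up, n_down, P=0.8, f=0.18):
    a_raw = (n_up - n_down) / (n_up + n_down)
    a = a_raw / (P * f)                                  # corrected asymmetry
    sigma = 1.0 / (P * f * math.sqrt(n_up + n_down))     # stat. error for small a_raw
    return a, sigma

a, sigma = spin_asymmetry(10500, 10200)                  # invented event counts
print(f"A = {a:.3f} +- {sigma:.3f}")
```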
339

Developing Random Compaction Strategy for Apache Cassandra database and Evaluating performance of the strategy

Surampudi, Roop Sai January 2021 (has links)
Introduction: Nowadays, the data generated by global communication systems is increasing enormously. Telecommunication industries need to monitor and manage this data generation efficiently. Apache Cassandra is a NoSQL database that efficiently manages data in any format and massive data flows. Aim: This project is focused on developing a new random compaction strategy and evaluating this random compaction strategy's performance. In this study, limitations of the generic compaction strategies, Size-Tiered Compaction Strategy (STCS) and Leveled Compaction Strategy (LCS), are investigated. A new random compaction strategy is developed to address the limitations of the generic compaction strategies. Important performance metrics required for the evaluation of the strategy are studied. Method: In this study, a grey literature review is done to understand the working of Apache Cassandra and the APIs of the different compaction strategies. A random compaction strategy is developed in two phases of development. A testing environment is created consisting of a 4-node cluster and a simulator. The performance is evaluated by stress-testing the cluster using different workloads. Conclusions: A stable RCS artifact is developed. The artifact also supports generating random thresholds from any user-defined distribution. Currently, only Uniform, Geometric, and Poisson distributions are supported. RCS-Uniform is found to perform better than both STCS and LCS. RCS-Poisson is found not to perform better than either STCS or LCS. RCS-Geometric is found to perform better than STCS.
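The core idea of drawing a compaction threshold from a user-chosen distribution can be illustrated with the small sketch below. The function name, parameter defaults, and clipping bounds are illustrative assumptions; the actual strategy lives inside Cassandra's compaction framework and is not reproduced here.

```python
# Sketch: drawing a random compaction threshold (number of SSTables to compact
# together) from a user-chosen distribution; names and bounds are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def random_threshold(distribution="uniform", low=2, high=32, p=0.25, lam=8):
    """Return the number of SSTables to compact in the next round."""
    if distribution == "uniform":
        return int(rng.integers(low, high + 1))
    if distribution == "geometric":
        return int(np.clip(rng.geometric(p) + 1, low, high))  # shift so result >= 2
    if distribution == "poisson":
        return int(np.clip(rng.poisson(lam), low, high))
    raise ValueError(f"unsupported distribution: {distribution}")

print([random_threshold("geometric") for _ in range(5)])
```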
340

Experimental Investigation of Container-based Virtualization Platforms For a Cassandra Cluster

Sulewski, Patryk, Jesper, Hallborg January 2017 (has links)
Context. Cloud computing is growing fast and has established itself as the next generation software infrastructure. A major role in cloud computing is played by the virtualization of hardware to isolate systems from each other. This virtualization is often done with Virtual Machines that emulate both hardware and software, which in turn makes process isolation expensive. New techniques, known as Microservices or containers, have been developed to deal with the overhead. The infrastructure is concerned with storing, processing and serving vast and unstructured data sets. The overall cloud system needs to have high performance while providing scalability and easy deployment. Microservices can be introduced for all kinds of applications in a cloud computing network, and can be a better fit for certain products. Objectives. In this study we investigate how a small system consisting of a Cassandra cluster performs while encapsulated in LXC and Docker containers, compared to a non-virtualized structure. A specific loader is built to stress the cluster to find the limits of the containers. Methods. We constructed an experiment on a three-node Cassandra cluster. Test data is sent from the Cassandra-loader on another server in the network. The Cassandra processes are then deployed in the different architectures and tested. During these tests the metrics CPU, disk I/O, and network I/O are monitored on the four servers. The data from the metrics is used in statistical analysis to find significant deviations. Results. Three experiments were conducted and monitored. The cluster test pointed out that the isolated Docker container exhibits major latency during disk reads. A local stress test further confirmed those results. The step-wise test, in turn, implied that the disk read latencies happen because isolated Docker containers need to read more data to handle these requests. All Microservices introduce some overhead, but fall behind the most for read requests. Conclusions. The results in this study show that virtualization of Cassandra nodes in a cluster brings latency in comparison to a non-virtualized solution for write operations. However, those latencies can be neglected if scalability in a system is the main focus. For read operations all microservices had reduced performance, and isolated Docker containers brought the highest overhead. This is due to the file system used in those containers, which makes disk I/O slower compared to the other structures. If a Cassandra cluster is to be launched in a container environment, we recommend a Docker container with mounted disks to bypass Docker's file system, or an LXC solution.
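The statistical analysis step described above amounts to testing whether a monitored metric differs significantly between deployments. The sketch below shows one way such a comparison could be done on latency samples; the data are synthetic and the choice of a non-parametric test is an assumption, not the thesis' actual analysis pipeline.

```python
# Sketch: significance test comparing a latency metric between two deployments
# (e.g. bare metal vs. isolated Docker); samples are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
latency_bare   = rng.lognormal(mean=1.0, sigma=0.3, size=200)   # ms, mock read latencies
latency_docker = rng.lognormal(mean=1.2, sigma=0.3, size=200)   # ms, mock read latencies

# Non-parametric test, since latency distributions are typically skewed
stat, p_value = stats.mannwhitneyu(latency_bare, latency_docker, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4g}")
```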
