About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
391

Evolution of close binary stars with application to cataclysmic variables and Blue Stragglers

Andronov, Nikolay I., January 2005
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Document formatted into pages; contains xii, 190 p.; also includes graphics (some col.). Includes bibliographical references (p. 181-190). Available online via OhioLINK's ETD Center.
392

The morphology and energetics of discrete optical events in compact extragalactic objects

Pollock, Joseph Thomas, January 1982
Thesis (Ph. D.)--University of Florida, 1982. / Description based on print version record. Typescript. Vita. Includes bibliographical references (leaves 95-97).
393

A Multivariate Process Analysis on a Paper Production Process

Löfroth, Jaime, Wiklund, Samuel, January 2018
A major challenge in managing large-scale industrial processes, such as those in the pulp and paper industry, is to reduce downtime and sources of product-quality variability to a minimum while remaining cost effective. The key to accomplishing this is to understand the complex nature of the process variables and to quantify the causal relationships between them, the product quality, and the amount of output. Pulp and paper processes are mainly chemical, and the relatively low cost of sensors today enables collection of huge amounts of data, both variables and observations at frequent time intervals. These masses of data usually come with the intrinsic problem of multicollinearity, which requires efficient multivariate statistical tools to extract useful insights from the noise. One goal in this multivariate situation is to break through the noise and find a relatively small subset of variables that are important, that is, variable selection. The purpose of this master's thesis is to help SCA Obbola, a large paper manufacturer that has had variable production output, reach conclusions that can help it ensure high production quantity and quality over the long term. We apply different variable selection approaches that have proven successful in the literature. The results are of mixed success, but we identify both variables that SCA Obbola already knows affect specific response variables and variables it finds interesting for further investigation.
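To make the variable-selection idea concrete, here is a minimal sketch, assuming a LASSO-style approach on simulated collinear process data; the dimensions, data, and method choice are illustrative assumptions, not SCA Obbola's data or the thesis's exact pipeline.

```python
# Hypothetical illustration: cross-validated LASSO variable selection on
# simulated process data with strong multicollinearity (shared latent factors).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 500, 40                                  # observations, process variables
latent = rng.normal(size=(n, 5))                # shared factors -> collinearity
X = latent @ rng.normal(size=(5, p)) + 0.3 * rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[2, 7, 19]] = [1.5, -2.0, 1.0]             # sparse ground truth
y = X @ beta + rng.normal(size=n)               # stand-in for a quality response

model = LassoCV(cv=5).fit(X, y)                 # penalty chosen by cross-validation
print("selected variable indices:", np.flatnonzero(model.coef_))
```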
394

Bayesian Latent Class Analysis with Shrinkage Priors: An Application to the Hungarian Heart Disease Data

Grün, Bettina, Malsiner-Walli, Gertraud, January 2018
Latent class analysis explains dependency structures in multivariate categorical data by assuming the presence of latent classes. We investigate the specification of suitable priors for the Bayesian latent class model to determine the number of classes and perform variable selection. Estimation is possible using standard tools implementing general-purpose Markov chain Monte Carlo sampling techniques, such as the software JAGS. However, class-specific inference requires suitable post-processing to eliminate label switching. The proposed Bayesian specification and analysis method is applied to the Hungarian heart disease data set to determine the number of classes and identify relevant variables, and the results are compared to those obtained with the standard prior for the component-specific parameters.
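The label-switching post-processing can be sketched simply; the following is a minimal illustration assuming the common heuristic of imposing an ordering constraint on the class weights across draws (the authors' actual procedure may differ).

```python
# Hypothetical illustration: relabel MCMC draws so classes are ordered by
# class weight, removing artificial label switching from the output.
import numpy as np

rng = np.random.default_rng(1)
n_draws, K = 1000, 3
pi = rng.dirichlet([5, 3, 2], size=n_draws)       # class weights per draw
theta = rng.beta(2, 2, size=(n_draws, K))         # a class-specific parameter

for d in range(n_draws):                          # simulate label switching
    perm = rng.permutation(K)
    pi[d], theta[d] = pi[d, perm], theta[d, perm]

order = np.argsort(-pi, axis=1)                   # sort classes by weight
pi_fixed = np.take_along_axis(pi, order, axis=1)
theta_fixed = np.take_along_axis(theta, order, axis=1)
print("mean class weights after relabelling:", pi_fixed.mean(axis=0).round(3))
```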
395

High-precision time-domain astrophysics in crowded star-fields with ground-based telescopes: globular clusters and the mitigation of the atmospheric turbulence

Figuera Jaimes, Roberto Jose, January 2018
We carried out a three-year (2013-2015) observational campaign at the Danish 1.54-m Telescope at the ESO observatory at La Silla, Chile, in which we obtained ~1000 astronomical images in the fields of 11 Galactic globular clusters. The selection of these stellar systems was driven mainly by the visibility of the targets and by the physical properties available in the catalogues, including density, known variable stars, colour-magnitude diagrams, and luminosity. The telescope was equipped with an electron-multiplying CCD (EMCCD) in order to take very short exposures, with the camera configured for 10 frames per second. As a result, the brighter stars observed were not affected by saturation, the fainter stars gained signal-to-noise ratio, and, importantly, the effects of atmospheric turbulence, such as blending between stars in the crowded fields, were minimised. To produce images of conventional exposure time (minutes), we implemented the shift-and-add technique, which enabled us to achieve better angular resolution than previously possible with conventional CCDs on ground-based telescopes, in some cases close to that obtained with space telescopes. Star detection and photometry in each globular cluster were performed via difference image analysis using the DanDIA pipeline, whose procedures and mathematical techniques have been demonstrated to produce high-precision time-series photometry of very crowded stellar regions. We produced time-series photometry for ~15000 stars in the observed fields and analysed it statistically to extract variable stars automatically, with the aim of completing, or improving, the census of the variable star population in these globular clusters. In NGC 6715, we obtained light curves for 17 previously known variable stars near the edges of our reference image (16 RR Lyrae and 1 semi-regular) and discovered 67 new variables (30 RR Lyrae, 21 long-period irregular, 3 semi-regular, 1 W Virginis, 1 eclipsing binary, and 11 unclassified). This cluster was particularly interesting because, beyond the results themselves, it demonstrates the benefits of EMCCD cameras and the shift-and-add technique: although the cluster had been studied several times, including with data from the OGLE survey and the Hubble Space Telescope, our discoveries were still new. Our new RR Lyrae discoveries help confirm that NGC 6715 is of intermediate Oosterhoff type. In the other 10 globular clusters, we obtained light curves for 31 previously known variable stars (3 L, 2 SR, 20 RR Lyrae, 1 SX Phe, 3 cataclysmic variables, 1 EW, and 1 NC) and discovered 30 new variables (16 L, 7 SR, 4 RR Lyrae, 1 SX Phe, and 2 NC). In NGC 6093, we analysed the famous case of the 1860 Nova, for which no observations of the nova in outburst had been made until the present study. Ephemerides and photometric measurements for the variable stars are available in electronic form through the Strasbourg Astronomical Data Centre.
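The shift-and-add step can be sketched in a few lines; this toy version, which recentres each short exposure on its brightest pixel before summing, is our simplified illustration and not the actual EMCCD/DanDIA reduction.

```python
# Hypothetical illustration: shift-and-add over short exposures of a single
# star jittered by atmospheric tip-tilt, recovering a sharp stacked image.
import numpy as np

def shift_and_add(frames):
    """Sum frames after shifting each so its peak lands on a common pixel."""
    stacked = np.zeros_like(frames[0], dtype=float)
    cy, cx = frames[0].shape[0] // 2, frames[0].shape[1] // 2
    for frame in frames:
        py, px = np.unravel_index(np.argmax(frame), frame.shape)
        stacked += np.roll(np.roll(frame, cy - py, axis=0), cx - px, axis=1)
    return stacked

rng = np.random.default_rng(2)
frames = []
for _ in range(100):                                      # 100 short exposures
    img = rng.poisson(1.0, size=(64, 64)).astype(float)   # sky background
    y, x = 32 + rng.integers(-5, 6), 32 + rng.integers(-5, 6)
    img[y, x] += 50.0                                     # star, jittered by seeing
    frames.append(img)

stacked = shift_and_add(frames)
print("stacked peak:", stacked.max(), "vs single-frame peak of roughly 51")
```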
396

Head reproduction in the battles of Hercules and the hydras

Piza Volio, Eduardo, 25 September 2017
Hercules killed the Hydra of Lerna in a bloody battle, the second of the labors imposed upon him in atonement for his hideous crimes. The Hydra was a horrible, aggressive mythological monster with many heads and poisonous blood, whose heads multiplied each time one of them was severed. This paper explores some mathematical methods arising from this epic battle. A generalization of the original Kirby & Paris model is proposed, concerning a general head-reproduction pattern. We also study the connection between this model and Goodstein's ultra-growing recursive sequences. As an interesting application, we then analyze the inevitable death of another huge monster of our modern era: the Internet.
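A minimal sketch of a hydra battle in the Kirby & Paris spirit (a simplified reproduction pattern we chose for illustration, not the paper's generalisation): whenever a head not attached to the root is severed, the grandparent node regrows copies of the maimed parent subtree, yet the battle always ends.

```python
# Hypothetical illustration: hydras as nested lists; an empty list is a head.
import copy

def hydra_battle(tree):
    """Chop leftmost heads until the hydra (a nested-list tree) is dead."""
    steps = 0
    while tree:
        steps += 1
        node, parent, grandparent = tree, None, None
        while node:                                  # descend to a leftmost head
            grandparent, parent, node = parent, node, node[0]
        parent.remove(node)                          # sever the head
        if grandparent is not None:                  # regrowth: at step n, add n
            for _ in range(steps):                   # copies of the maimed parent
                grandparent.append(copy.deepcopy(parent))
    return steps

print(hydra_battle([[[]], []]))    # terminates despite the regrowth: prints 4
```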
397

Development of an Asset Allocation Tool between Fixed-Income and Equity Instruments to Support a Fund Manager's Active Portfolio Management

Aceituno Ávila, Miguel Ángel, January 2008
The objective of this work is to develop an asset allocation tool to support active investment management. To this end, we derive an analytical model for estimating the stock and bond positions to include in an investment portfolio whose composition must be managed dynamically in order to raise its short-term return. The tool estimates relative returns between domestic fixed-income and equity instruments from economic and financial variables, and is complemented by two alternative models: one based on market surprises and a general autoregressive (ARIMA) model. The entity studied is an Administradora General de Fondos (AGF, a fund management company) that holds weekly committees in which the members present their market views and then agree on the investment strategy to follow. However, the decision on the relative fixed-income/equity positions in the portfolios is not flexible, except in market crises, when a large share of the assets with declining expected returns is liquidated to acquire safer instruments. Because no specific allocation models that indicate directions of change or buy/sell signals, such as Markowitz or Black & Litterman, are applied, the aggregate levels by asset class generally remain constant over time. As a benchmark, we build synthetic portfolios composed of two assets: the IPSA and a five-year fixed-income index. These attempt to replicate the behaviour of the AGF's funds, leaving aside the share of international assets in them. The results obtained with the various relative-return models, which also involve a transformation from relative returns to investment positions, fit the observed values significantly well. Moreover, by optimally combining the results of each model to minimise the estimation error, we obtain an additional set of estimates with good fit to the data. For the sample period studied, the relative-return model outperforms the returns of the benchmark funds, so we recommend using the positions produced by the model when building comparison portfolios. The model also aids the planning of investment strategies, and its output can be used as buy or sell signals for fixed-income and equity instruments, making short-term investment decisions more flexible and better informed.
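One way to picture the step from relative-return forecasts to portfolio positions is sketched below: an ARIMA forecast of the equity-bond return spread mapped to an equity weight through a logistic rule. The simulated series, ARIMA order, and mapping constant are hypothetical, not the thesis's calibration.

```python
# Hypothetical illustration: forecast the equity-vs-bond relative return and
# convert it into an equity/fixed-income allocation signal.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
t = np.arange(300)
# Stand-in for a weekly relative return series (equity index minus bond index).
rel_ret = 0.001 + 0.006 * np.sin(t / 20) + rng.normal(0, 0.01, t.size)

fit = ARIMA(rel_ret, order=(1, 0, 1)).fit()
forecast = float(fit.forecast(1)[0])          # next-period relative return

k = 200.0                                     # aggressiveness (hypothetical)
equity_w = 1.0 / (1.0 + np.exp(-k * forecast))
print(f"forecast {forecast:+.4f} -> equity {equity_w:.0%}, bonds {1 - equity_w:.0%}")
```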
398

Energy Efficiency Improvements for a Large Tire Manufacturing Plant

Moyer, Jeremy William, 01 December 2011
This study examines five potential improvement projects that could be implemented at the Continental Tire manufacturing plant in Mount Vernon, IL: insulation of tire molds, installation of variable frequency drives on circulating pumps, pressure-reduction turbines, waste-heat utilization for absorption cooling, and cogeneration using a gas turbine cycle. A feasibility study and cost analysis were performed for each project to determine whether to recommend implementation. The two most appealing projects are the insulation addition and the installation of variable frequency drives. Adding insulation would produce energy savings in the range of 908 kJ/s (3,097 Btu/hr) to 989 kJ/s (3,374 Btu/hr) and annual savings between $13,390 and $14,591. Installing variable frequency drives on two 200 hp circulating pumps would produce energy savings between 74.6 kW (100 hp) and 104.6 kW (140.2 hp), with annual monetary savings in the range of $41,646 to $58,384.
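The drive savings rest on the pump affinity laws, under which shaft power scales roughly with the cube of speed; the arithmetic below is our idealised illustration (drive and motor losses ignored), not the study's detailed model.

```python
# Hypothetical illustration: ideal affinity-law savings from slowing a pump.
def vfd_savings_kw(rated_kw, speed_fraction):
    """Power saved vs. full speed, assuming power ~ speed**3."""
    return rated_kw * (1.0 - speed_fraction**3)

rated = 200 * 0.7457                       # one 200 hp pump in kW
for frac in (0.9, 0.8, 0.7):
    print(f"{frac:.0%} speed: ~{vfd_savings_kw(rated, frac):.1f} kW saved per pump")
# Two such pumps at ~90% speed give roughly 81 kW, the same order as the
# study's 74.6-104.6 kW range.
```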
399

Analysis of an Existing Coal Fired Power Generation Facility with Recommendations for Efficiency and Production Improvement

Achelpohl, Scott Alan, 01 December 2010
This study examined the Lake of Egypt Power Plant, operated by Southern Illinois Power Cooperative and located on the Lake of Egypt south of Marion, IL. The facility has a 173 MW rated turbine operating on a pulverized coal cyclone boiler and three 33 MW rated turbines operating on an oversized circulating fluidized bed boiler with 120 MW capacity. The first area examined was the reduction in auxiliary power consumption possible with the addition of variable frequency drives to the forced-draft fan and booster fan motors, including an analysis of the economic and environmental impact of such a reduction; the analysis shows that an annual savings of 24.4 GWh of electricity is possible. The second area examined was the generation capacity lost to condenser fouling and the possible reduction in facility emissions with altered condenser treatment; the analysis shows that an additional 3.0 MW of capacity facility-wide is possible, or a 1.5% reduction in steam production for each boiler.
400

An Information Based Optimal Subdata Selection Algorithm for Big Data Linear Regression and a Suitable Variable Selection Algorithm

January 2017
This article proposes a new information-based optimal subdata selection (IBOSS) algorithm, the Squared Scaled Distance Algorithm (SSDA), based on the invariance of the determinant of the information matrix under orthogonal transformations, especially rotations. Extensive simulation results show that the new IBOSS algorithm retains the nice asymptotic properties of IBOSS and gives a larger determinant of the subdata information matrix. It has the same order of time complexity as the D-optimal IBOSS algorithm; however, it exploits vectorized calculation, avoiding for-loops, and is approximately 6 times as fast as the D-optimal IBOSS algorithm in R. The robustness of SSDA is studied from three aspects: nonorthogonality, inclusion of interaction terms, and variable misspecification. A new, accurate variable selection algorithm is proposed to aid the implementation of IBOSS algorithms when a large number of variables are present with few important variables sparsely distributed among them. By aggregating random-subsample results, this variable selection algorithm is much more accurate than the LASSO method applied to the full data. Since its time complexity depends on the number of variables only, it is also very computationally efficient when the number of variables is fixed as n increases and is not massively large. More importantly, by using subsamples it solves the problem of the full data not fitting in memory when a data set is too large. / Master's Thesis, Statistics, 2017
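For context, here is a minimal sketch of the baseline D-optimal IBOSS rule that SSDA is compared against, as we understand it: take the r smallest and r largest still-available observations of each covariate. The details here are illustrative assumptions, not the article's implementation.

```python
# Hypothetical illustration: D-optimal IBOSS subdata selection by covariate
# extremes, which tends to enlarge the information-matrix determinant.
import numpy as np

def iboss_d_optimal(X, k):
    """Select about k rows of X, taking the extremes of each column in turn."""
    n, p = X.shape
    r = k // (2 * p)                          # rows per tail per covariate
    chosen, available = [], np.ones(n, dtype=bool)
    for j in range(p):
        idx = np.flatnonzero(available)
        order = np.argsort(X[idx, j])
        picks = np.concatenate([idx[order[:r]], idx[order[-r:]]])
        chosen.extend(picks.tolist())
        available[picks] = False
    return np.array(chosen)

rng = np.random.default_rng(4)
X = rng.normal(size=(100_000, 5))
sub = iboss_d_optimal(X, 1000)
print(len(sub), "rows; per-column subdata variance:", X[sub].var(axis=0).round(2))
```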
