111

Core-collapse supernovae: neutrino-dark matter phenomenology and probes of internal physics

Heston, Sean MacDonald 08 May 2024 (has links)
The Standard Model of particle physics cannot currently explain the origin of neutrino masses or the anomalies observed at different experiments. One solution is to introduce a beyond-the-Standard-Model origin for these issues, which introduces a coupling between neutrinos and dark matter. Such an interaction would have implications for cosmology and would be constrained by astrophysical neutrino sources. Core-collapse supernovae are a promising astrophysical source for probing this interaction, as they release ~3x10^53 erg in neutrinos per transient. However, more observations that constrain the internal physics of core-collapse supernovae are needed to better understand their neutrino emission. This dissertation studies two probes of internal physics that allow for a better understanding of the neutrino emission from core-collapse supernovae. The first is a novel approach to detecting supernova neutrinos that come neither from galactic events nor from the diffuse supernova background: an offline timing-coincidence search at neutrino detectors, with a search window determined by optical observations of core-collapse supernovae. With a two-tank Hyper-Kamiokande, this allows ~1 neutrino detection every 10 years at a confidence level of ~2.6 sigma, limited by low nearby core-collapse rates and large background rates in the energy range of interest. The second probe of internal physics is high-energy gamma-rays from the decays of unstable nuclei in proto-magnetar jets. The abundance distribution of the unstable nuclei depends directly on the neutrino emission, which controls the electron fraction, as well as on properties of the proto-magnetar. We find that different proto-magnetar properties produce gamma-ray signals that are distinguishable from each other, and that multiple types of observations allow estimation of the jet and proto-magnetar properties. These gamma-ray signals are detectable for on-axis jets out to extragalactic distances, ~35 Mpc in the best case; for off-axis jets the signal is detectable only for galactic or local-galaxy events, depending upon the viewing angle. This dissertation also studies a phenomenological constraint on the interactions between neutrinos and dark matter. Using the neutrino emission from supernovae and the inferred dark matter distributions in Milky Way dwarf spheroidals, we constrain the amount of energy that neutrinos can inject into the dark matter sub-halos. This in turn constrains the interaction cross-section between neutrinos and dark matter, given assumptions about the interaction kinematics. Assuming Lambda-CDM to be correct, neutrinos cannot interact with low-mass dark matter too often, as it would become gravitationally unbound, changing the mass of the core we see today. For high-mass dark matter, neutrinos can inject only a fraction ~6.8x10^-6 of their energy without conflicting with estimates of the current shapes of the dark matter sub-halos. The constraints we obtain are sigma_nu-DM(E_nu = 15 MeV, m_DM > 130 GeV) ~ 3.4x10^-23 cm^2 and sigma_nu-DM(E_nu = 15 MeV, m_DM < 130 GeV) ~ 3.2x10^-27 (m_DM/1 GeV)^2 cm^2, which is slightly stronger than previous bounds at these energies. Consideration of baryonic feedback or host-galaxy effects on the dark matter profile could strengthen this constraint.
/ Doctor of Philosophy / In our current understanding of the physics of the particles that govern how the universe behaves, there is no way to explain the properties we observe for the neutrino. Neutrinos were originally theorized to have zero mass; however, neutrino experiments suggest otherwise. Because the current model of particle physics cannot explain how neutrinos have mass, a viable way to explain it is to introduce new physics that can generate the neutrino masses. One way to do this is to allow neutrinos to interact with dark matter, which is matter that does not interact with light and is therefore invisible to the human eye. We know dark matter should exist in the universe because of its gravitational effects, which make things like galaxies much heavier than what the stars and gas we see can explain. If neutrinos and dark matter interact, we should be able to see the effects of these interactions in the universe, and also possibly at locations where many neutrinos are produced. One such source of neutrinos is core-collapse supernovae, the deaths of massive stars, which produce copious amounts of neutrinos. This dissertation studies signals that allow us to better understand the neutrino emission from core-collapse supernovae. One of these signals comes from summing the neutrinos we detect from many distant core-collapse supernovae. This technique uses optical observations of the supernovae to define a time window within which we search neutrino detector data for detections that cannot be explained as coming from background events. Another method is to observe gamma-rays (high-energy photons) that come from the radioactive decay of elements in jets, moving near the speed of light, powered by rare core-collapse supernovae. The specific gamma-rays and their overall brightness allow an estimation of the properties of the neutrino emission and of the central engine that accelerates the jet. This dissertation also studies the implications of possible neutrino-dark matter interactions in the small, dim satellite galaxies of the Milky Way known as dwarf spheroidals. The shape of the dark matter distribution in these dwarf spheroidals can be inferred from the motion of their stars, and this shape disagrees with the prevailing theory of dark matter in the universe. We take advantage of this disagreement to place upper limits on both the mass loss that can occur in this region and the energy that past core-collapse supernovae within the dwarf spheroidals can inject into the dark matter. The mass-loss bound lets us constrain how often neutrinos can interact with light dark matter particles. The energy-injection limit, together with an assumption about the energy transfer in each interaction, lets us constrain how often the interaction can occur for heavy dark matter particles.
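To make the quoted constraint concrete, here is a minimal sketch that evaluates the piecewise cross-section bound exactly as stated in the abstract. The function name and sample masses are illustrative assumptions, and the bound as quoted applies only at E_nu = 15 MeV.

```python
# Sketch of the piecewise neutrino-dark matter cross-section bound quoted
# above, evaluated at E_nu = 15 MeV. The two normalisations and the 130 GeV
# break are taken directly from the abstract; everything else is illustrative.

def sigma_nu_dm_bound(m_dm_gev: float) -> float:
    """Upper bound on the nu-DM cross section in cm^2 at E_nu = 15 MeV."""
    if m_dm_gev > 130.0:
        return 3.4e-23                      # constant bound for heavy DM
    return 3.2e-27 * (m_dm_gev / 1.0) ** 2  # scales as (m_DM / 1 GeV)^2

for m in (0.001, 1.0, 100.0, 1000.0):
    print(f"m_DM = {m:>8} GeV -> sigma < {sigma_nu_dm_bound(m):.2e} cm^2")
```

Note that the two branches roughly match at the 130 GeV break (3.2e-27 x 130^2 ~ 5.4e-23 vs. 3.4e-23), consistent with a kinematic transition rather than two unrelated limits.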
112

Progressive Collapse Evaluation of High Rise Steel Structures due to Sudden Loss of Structural Members

Stephen, D., Ye, J., Lam, Dennis January 2012 (has links)
Damage or instantaneous loss of critical structural members due to unforeseen events such as impact, blast, and natural disasters often triggers progressive collapse through a complex redistribution of stresses within the structural system. In severe conditions, where the structure lacks the ability to absorb these stresses and reach a new equilibrium state, the result can be partial or total collapse of the building. Current design guidelines such as GSA 2003 recommend removal of a single load-bearing member for progressive collapse assessment. However, triggering events may affect more than one structural member, resulting in partial or total collapse of the structure. This paper presents the various effects of sudden column loss on the redistribution of forces in structural members, which determines the equilibrium state of the structure.
113

The role of the abdominal pump in tracheal tube collapse in the darkling beetle, Zophobas morio

Dalton, Elan 23 May 2013 (has links)
Abdominal pumping is a widespread behavior in insects. However, ambiguity remains surrounding the behavior, both in describing what exactly abdominal pumping is (i.e., whether various modes of operation exist) and in identifying what function(s) it serves (and whether that function is conserved across groups of insects). In some insects, respiratory patterns have been correlated with abdominal movements, although the specific mechanical effects of these movements on the animal's respiratory system are generally unknown. Conversely, some insects (such as beetles, ants, and crickets) create convection in the respiratory system by compressing their tracheal tubes, yet the underlying physiological mechanisms of tracheal collapse are also unknown. This study investigated the relationship between abdominal pumping and the compression of tracheal tubes in the darkling beetle, Zophobas morio. I observed the movements of the abdomen and tracheal tubes using synchrotron x-ray imaging and video cameras while concurrently monitoring CO2 expiration. I identified and characterized two distinct abdominal movements, differentiated by the synchrony (the pinch movement) or lack of synchrony (the wave movement) of abdominal tergite movement. Tracheal tube compressions (and corresponding CO2 pulses) occurred concurrently with every pinch movement. This study provides evidence of a mechanistic linkage between abdominal movements and tracheal tube compressions in the darkling beetle, Zophobas morio. / Master of Science
114

Reforestation Management to Prevent Ecosystem Collapse in Stochastic Deforestation

Chong, Fayu 24 May 2024 (has links)
The increasing rate of deforestation, which began decades ago, has significantly impacted ecosystem services. In this context, secondary forests have emerged as crucial elements in mitigating environmental degradation and supporting restoration. This study is motivated by the need to understand reforestation management in secondary forests so as to prevent irreversible ecosystem damage. We model primary forest loss as a stochastic process with specified drift and volatility, while replanting is treated as the more manageable control. We employ a dynamic programming approach, integrating ecological and economic perspectives to assess ecosystem services. To simulate a real-world case, we apply the model to the Brazilian Amazon Basin. Special attention is given to outcomes at the turning point, the tipping point, and the transition point, considering a critical threshold beyond which recovery becomes implausible. Our findings suggest that reducing tenure costs has advantages, while substitution between primary and secondary forests is not necessarily effective in postponing ecosystem collapse. This research contributes to the broader goal of sustainable forest management and offers strategic guidance for future reforestation initiatives in the Amazon Basin and similar ecosystems worldwide. / Master of Science / Deforestation has been drawing attention from institutions since the 1940s, and this global issue has been discussed both for its negative impacts and for ways to restore what has been lost. Reforestation initiatives introduced by global environmental organizations consider forest plantations essential in re-establishing trees and the natural ecosystem. This study investigates how different techniques target the growth of secondary forests to mitigate irreversible damage to ecosystem services. Our research begins by defining the uncertainty in primary forests: primary forests and deforestation face long-term climate change and immediate shocks such as fires, droughts, and human activities, which policymakers have difficulty predicting and fully controlling. We integrate ecological and economic considerations of ecosystem functioning, introducing stochastic deforestation into our dynamic optimization problem. We apply our models to the Brazilian Amazon Basin, a region known for its diverse tropical forests and extensive deforestation. We pay close attention to the timing of the tipping point that leads to ecosystem collapse, the turning point where the reforestation rate catches up with the deforestation rate, and the moment of forest-type transition. Through simulation and sensitivity analysis, we gain a better grasp of how to guide the management of secondary forests under uncertain conditions. Our results indicate that reforestation approaches that lower tenure costs can be beneficial, but merely substituting for primary forests cannot necessarily delay ecosystem collapse. This paper provides practical insights for policymakers, local communities, and international organizations.
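As a rough illustration of the modeling setup described above, the following sketch simulates forest stock as geometric Brownian motion with a collapse threshold. All parameter values are invented placeholders, not the thesis's calibration, and the thesis solves a controlled dynamic program rather than an uncontrolled simulation.

```python
import numpy as np

# Monte Carlo sketch of stochastic primary-forest dynamics: stock follows a
# geometric Brownian motion with drift (net deforestation) and volatility
# (fires, droughts, policy shocks), and the ecosystem "collapses" once the
# stock first falls below a tipping threshold.

rng = np.random.default_rng(0)
mu, sigma = -0.01, 0.05      # assumed annual drift and volatility
x0, tipping = 1.0, 0.4       # initial stock (normalised) and tipping point
T, dt, n_paths = 100, 1.0, 10_000

x = np.full(n_paths, x0)
collapsed = np.zeros(n_paths, dtype=bool)
for _ in range(int(T / dt)):
    z = rng.standard_normal(n_paths)
    x *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    collapsed |= x < tipping   # collapse is absorbing: once below, always counted

print(f"P(collapse within {T} years) ~ {collapsed.mean():.3f}")
```

A replanting control would enter this model as a state-dependent addition to the drift, which is what makes the dynamic programming formulation natural.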
115

Cavitation par excitation acoustique bifréquentielle : application à la thrombolyse ultrasonore / Cavitation using bifrequency acoustic excitation : application to ultrasound thrombolysis

Saletes, Izella 07 December 2009 (has links)
Enhancing cavitation activity while using lower acoustic intensities is of interest to a variety of therapeutic applications in which the mechanical effects of cavitation are required with minimal heating of the surrounding tissues. The present experimental work focuses on the modification of the inertial cavitation threshold, and of cavitation activity beyond the threshold, when an excitation signal made of two neighbouring frequency components is used. A significant reduction of the acoustic intensity required to trigger cavitation can be obtained in media with high cavitation thresholds. Moreover, comparing the evolution of cavitation activity beyond the threshold for mono- and bi-frequency excitations shows that, in the latter case, strong activities can be reached at intensities closer to the threshold value. This offers a dual benefit for therapeutic applications: it enables a better separation between the cavitating and non-cavitating regimes, and it allows lower intensities than with monofrequency excitation to attain a given cavitation activity. The evolution of the bifrequency threshold as a function of external parameters shows that the mechanisms involved are nonlinear. Experiments on in vitro blood clot models have validated the efficiency of this bifrequency excitation for purely ultrasonic thrombolysis.
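For intuition about what a two-neighbouring-component excitation looks like, here is a short sketch; the sample rate, frequencies, and amplitudes are arbitrary illustrations, not the study's parameters.

```python
import numpy as np

# Two neighbouring frequency components summed produce a carrier at the mean
# frequency modulated by a slow beat envelope at half the difference
# frequency -- the standard sum-to-product identity, verified numerically.

fs = 10e6                        # sample rate, Hz (assumed)
t = np.arange(0, 1e-3, 1 / fs)   # 1 ms of signal
f1, f2 = 500e3, 535e3            # two neighbouring components, Hz (assumed)
s = 0.5 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))

carrier = np.sin(np.pi * (f1 + f2) * t)          # mean-frequency carrier
envelope = np.cos(np.pi * (f2 - f1) * t)          # slow beat modulation
assert np.allclose(s, carrier * envelope)

print("beat frequency:", (f2 - f1) / 1e3, "kHz")
```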
116

Help! My mother wants to follow me on Instagram! : Which strategies do young adults in Sweden use, when facing context and time collapse. / Hjälp! Min mamma vill följa mig på Instagram! : Vilka strategier använder unga vuxna i Sverige när kontext och tid kollaps uppstår?

Andersson, Malou, Heed, Emma January 2023 (has links)
Young adults spend much of their time on social media, where they share their lives with friends, family, colleagues, acquaintances, etc. Wesch (2009) explains that things posted on social media such as YouTube can be viewed by anybody, everybody, and nobody, anywhere in the world, all at once. This becomes a problem for young adults as several different audiences blend into one (i.e., context collapse) (Brandtzaeg, Lüders & Skjetne, 2010). For example, how would it feel if your mother saw a video of you at a party that was posted for friends to joke about? Brandtzaeg and Lüders (2018) state that this is not the only problem: social media also blurs the line between the present and the past. For example, a friend may comment on a silly post you made on Facebook years ago, which then reappears in everyone's feed as if you had posted it recently. Both of these problems lead young adults to change how they choose to present themselves on social media. In addition, since social media is asynchronous, in that content does not take place in real time, it provides time to be more strategic and allows more polished forms of self-presentation and self-censorship (Gardner & Davis, 2013; Lindgren, 2017). Building on this, this study examines which self-presentation strategies young adults use when facing context and time collapse. The study focuses on the extent to which participants use the tactics mentioned in earlier literature, as well as how different factors relate to the tactics one chooses to use. To create an understanding of context and time collapse, previous research has been examined. Furthermore, previous research about self-presentation in general, and self-presentation on social media in particular, is examined to connect to how self-presentation can be disturbed by context and time collapse. Finally, theories and research about privacy are used to build an understanding of how young adults experience context and time collapse as a problem for their privacy. Through a survey, data were collected from 226 respondents, born between 1997 and 2004, to be examined, presented, and analyzed. The results showed that all of the strategies were used, though the extent varied by strategy; the most commonly used strategies were connected to self-censoring. Moreover, there are relationships between the strategies and, for example, gender, how long one has had social media, and how one perceives oneself. Surprisingly, these relationships were for the most part weak, though some stood out as somewhat stronger.
117

Procedurální generování vesnic ve hře Minecraft pomocí algoritmu Wave Function Collapse / Procedural generation of villages in Minecraft using the Wave Function Collapse algorithm

Mifek, Jakub January 2022 (has links)
Maxim Gumin's Wave Function Collapse (WFC) algorithm is an example-driven image generation algorithm emerging from the craft of procedural content generation. The intended use of the algorithm is to generate new images in the style of given examples by ensuring local similarity. Our work aims to generalize the original work so that the algorithm is applicable in other domains, and to apply it to the more difficult task of village generation in the 3D sandbox video game Minecraft. We create a generic WFC library and a Minecraft mod that allows for structure generation using WFC. We hope that our WFC library will be beneficial to anyone exploring WFC and its applications in the Kotlin language, and that our Minecraft showcase reveals some of the benefits and limits of the algorithm in complex problems.
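For readers unfamiliar with the algorithm, here is a deliberately tiny sketch of the tiled-model WFC loop: observe the lowest-entropy cell, collapse it to one tile, and propagate adjacency constraints to neighbours. The tile set and adjacency rules are invented for illustration; Gumin's original also weights tiles by their frequency in the example and must handle contradictions (e.g., by restarting or backtracking), both omitted here because these particular rules can never produce an empty cell.

```python
import random

random.seed(0)

TILES = ("land", "coast", "sea")
ALLOWED = {                      # which tiles may be adjacent (symmetric)
    "land": {"land", "coast"},
    "coast": {"land", "coast", "sea"},
    "sea": {"coast", "sea"},
}
W = H = 8

def neighbours(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < W and 0 <= y + dy < H:
            yield x + dx, y + dy

def propagate(grid, start):
    """Remove neighbour options incompatible with every tile still possible here."""
    stack = [start]
    while stack:
        x, y = stack.pop()
        allowed = set().union(*(ALLOWED[t] for t in grid[(x, y)]))
        for nx, ny in neighbours(x, y):
            reduced = grid[(nx, ny)] & allowed
            if reduced != grid[(nx, ny)]:
                grid[(nx, ny)] = reduced
                stack.append((nx, ny))

grid = {(x, y): set(TILES) for x in range(W) for y in range(H)}
while any(len(opts) > 1 for opts in grid.values()):
    # observe: pick an undecided cell with the fewest remaining options
    cell = min((c for c in grid if len(grid[c]) > 1), key=lambda c: len(grid[c]))
    grid[cell] = {random.choice(sorted(grid[cell]))}   # collapse
    propagate(grid, cell)

for y in range(H):
    print("".join(next(iter(grid[(x, y)]))[0] for x in range(W)))
```

Generalizing to 3D village generation amounts to replacing the 2D grid and tile set with volumetric modules and six-directional adjacency, which is the step the thesis's library makes generic.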
118

Techniky "level of detail" v knihovně OpenSceneGraph / Algorithms of Level of Detail in OpenSceneGraph

Hupka, Dušan January 2014 (has links)
Modern graphics requires extensive optimization of rendering techniques and mathematical calculations, driven by the increasing demands of scene visualization. One such optimization technique is level of detail (LOD). This thesis focuses on the LOD methods used in the OpenSceneGraph and OpenGL libraries. It then describes how to choose the right level of detail in a scene, and later explains how to simplify 3D models. These techniques are implemented in a conversion tool and a demonstration application, and the model-simplification methods are tested for speed and quality.
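A minimal sketch of the distance-based LOD selection such scene graphs perform, in the spirit of OpenSceneGraph's LOD node: each detail level is valid for a range of eye-to-object distances, and the renderer draws the matching child. The class, model names, and thresholds below are illustrative assumptions, not the OpenSceneGraph API.

```python
import math

class LODNode:
    """Toy stand-in for a scene-graph LOD node with distance ranges per child."""

    def __init__(self):
        self.children = []   # list of (min_distance, max_distance, model_name)

    def add_child(self, model_name, min_dist, max_dist):
        self.children.append((min_dist, max_dist, model_name))

    def select(self, eye, center):
        d = math.dist(eye, center)           # eye-to-object distance
        for lo, hi, model in self.children:
            if lo <= d < hi:
                return model
        return None                          # out of range: cull entirely

teapot = LODNode()
teapot.add_child("teapot_high.obj", 0.0, 10.0)     # full mesh up close
teapot.add_child("teapot_medium.obj", 10.0, 50.0)  # simplified mesh
teapot.add_child("teapot_low.obj", 50.0, 200.0)    # coarse mesh far away

print(teapot.select(eye=(0, 0, 5), center=(0, 0, 0)))    # teapot_high.obj
print(teapot.select(eye=(0, 0, 120), center=(0, 0, 0)))  # teapot_low.obj
```

The model-simplification methods the thesis tests are what produce the medium and low children; the selection logic itself stays this simple.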
119

Failure Analysis of the World Trade Center 5 Building

LaMalva, Kevin Joseph 29 April 2007 (has links)
This project involves a failure analysis of the internal structural collapse that occurred in World Trade Center 5 (WTC 5) due to fire exposure alone on September 11, 2001. It is hypothesized that the steel column-tree assembly failed during the heating phase of the fire. The results of this research have serious and far-reaching implications, for this method of construction is utilized in approximately 20,000 existing buildings and continues to be very popular. Catastrophic failure during the heating phase of a fire would endanger the lives of firefighters and of building occupants undergoing extended egress times (e.g., in high-rise buildings) or relying upon defend-in-place strategies (e.g., in hospitals). Computer software was used to reconstruct the fire event and predict the structural performance of the assembly when exposed to the fire. Results from a finite-element thermal-stress model confirm the hypothesis: the catastrophic, progressive structural collapse is concluded to have occurred approximately 2 hours into the fire exposure.
120

Development of intelligent systems for evaluating voltage profile and collapse under contingency operation

Mohammed, Mahmoud M. Jr. January 1900 (has links)
Doctor of Philosophy / Department of Electrical and Computer Engineering / Shelli K. Starrett / Monitoring and control of modern power systems have become very complex tasks due to the interconnection of power grids. These large-scale grids confront system operators with a huge set of system inputs and control parameters. This work develops and compares intelligent-systems-based algorithms that power system operators or planners may use to help manage, process, and evaluate large amounts of data arising from varying conditions within the system. The methods can provide timely assistance in making operational control and planning decisions. The effectiveness of the proposed algorithms is tested and validated on four different power systems. First, Artificial Neural Network (ANN) models are developed and compared for two different voltage collapse indices, utilizing two different-sized sets of inputs. The ANNs monitor and evaluate the voltage profile of a system and generate intelligent conclusions regarding the system's status from a voltage stability perspective. A feature reduction technique, based on analysis of the generated data, is used to decrease the number of inputs fed to the ANN, reducing the number of physical quantities that need to be measured. The major contribution of this work is the development of four different algorithms to control the VAR resources in a system. Four objectives were considered in this part of the work: minimizing the number of control changes needed, minimizing the system power losses, minimizing the system's voltage deviations, and limiting the computational time required. Each algorithm is iterative in nature and is designed to take advantage of a method of decoupling the load flow Jacobian matrix to decrease the time needed per iteration. The methods use sensitivity information derived from the load flow Jacobian, augmented with equations relating the desired control and dependent variables. The heuristic-sensitivity-based method is compared to two genetic algorithm (GA) based methods using two different objective functions. In addition, a fuzzy logic (FL) algorithm is added to the heuristic-sensitivity algorithm and compared to a particle swarm (PS) based algorithm. The last part of this dissertation presents the use of one of the GA-based algorithms to identify the shunt capacitor size necessary to enhance the voltage profile of a system. A method is presented for utilizing contingency cases with this algorithm to determine the required capacitor size.
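To illustrate the GA-based capacitor-sizing idea in the last paragraph, here is a toy sketch: a genetic algorithm searches over discrete shunt-capacitor sizes at two buses to minimize voltage deviation. The linear "voltage model" below is a stand-in for a load flow solve, and all sensitivities, voltages, and GA settings are invented for illustration; the dissertation uses full load flow solutions and Jacobian-derived sensitivities instead.

```python
import random

random.seed(1)
SIZES = [0, 5, 10, 15, 20, 25, 30]           # candidate sizes per bus, Mvar
SENS = [[0.004, 0.001],                       # assumed dV/dQ sensitivities:
        [0.001, 0.003]]                       # SENS[i][j] = bus i rise per Mvar at bus j
V0 = [0.92, 0.95]                             # depressed contingency voltages, p.u.

def volts(q):                                 # linear stand-in for a load flow solve
    return [v + sum(s * qq for s, qq in zip(row, q)) for v, row in zip(V0, SENS)]

def fitness(q):                               # smaller total deviation = fitter
    return -sum(abs(v - 1.0) for v in volts(q))

pop = [[random.choice(SIZES), random.choice(SIZES)] for _ in range(30)]
for _ in range(40):                           # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                          # truncation selection
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)
        child = [a[0], b[1]]                  # one-point crossover
        if random.random() < 0.2:             # mutation
            child[random.randrange(2)] = random.choice(SIZES)
        children.append(child)
    pop = elite + children

best = max(pop, key=fitness)
print("best sizes (Mvar):", best, "-> voltages:", [round(v, 3) for v in volts(best)])
```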
