431

Model-based Assessment of Heat Pump Flexibility

Wolf, Tobias January 2016 (has links)
Today's energy production is changing from scheduled to intermittent generation due to the increasing injection of energy from renewable sources. This shift requires flexibility in both energy generation and demand. Electric heat pumps combined with thermal storages were found to have a large potential to provide demand flexibility, which is analysed in this work. A three-fold method is set up to generate thermal load profiles, to simulate heat pump pools and to assess heat pump flexibility. The thermal profile generation, based on a combination of physical and behavioural models, is successfully validated against measurement data. A randomised system sizing procedure was implemented for the simulation of heat pump pools. The parameter randomisation yields correct seasonal performance factors, full load hours and average operation cycles per day compared to 87 monitored systems. The flexibility assessment analyses the electric load deviation of a representative heat pump pool in response to five different on/off signals. The flexibility is provided by the capacity of the thermal storages and characterised by four parameters. Generally, on signals are more powerful than off signals. A generic assessment by ambient temperature shows that flexibility is highest on heating days when the additional space heating storage is activated: superheating the storage to its maximum temperature provides a flexible energy of more than 400 kWh per 100 heat pumps in the temperature range between -10 and +13 °C.
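The flexible energy available from superheating storage tanks follows directly from the water mass and the temperature headroom; a minimal sketch in Python, where the tank size and temperatures are illustrative assumptions, not the thesis's monitored values:

```python
# Sensible-heat flexibility of a heat pump pool: energy absorbed by
# superheating every storage tank from its current temperature to the
# maximum allowed temperature.  All parameter values are illustrative.

CP_WATER = 4186.0  # specific heat of water, J/(kg K)

def flexible_energy_kwh(n_heat_pumps, tank_volume_l, t_current, t_max):
    """Thermal energy (kWh) the pool can absorb by superheating
    (1 litre of water is taken as 1 kg)."""
    joules_per_tank = tank_volume_l * CP_WATER * (t_max - t_current)
    return n_heat_pumps * joules_per_tank / 3.6e6  # J -> kWh

# 100 heat pumps with 500 l tanks superheated from 40 degC to 70 degC
energy = flexible_energy_kwh(100, 500, 40.0, 70.0)
print(f"{energy:.0f} kWh per 100 heat pumps")
```

For scale, the abstract's figure of more than 400 kWh per 100 heat pumps corresponds to about 4 kWh of storable sensible heat per system.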
432

Thermal modeling of permanent magnet synchronous motor and inverter

Rajput, Mihir N. 27 May 2016 (has links)
The purpose of my thesis is to establish a simple thermal model for a Parker GVM 210-150P motor and a SEVCON Gen4 Size8 inverter. These models give the temperature variations of critical components in the motor and the inverter. My thesis will help Georgia Tech's EcoCAR-3 team understand the physics behind thermal modeling and why a thermal study is necessary. This work is a prerequisite for Software-in-the-Loop (SIL) and Hardware-in-the-Loop (HIL) simulations for a hybrid electric vehicle.
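A common starting point for such a model is a lumped-parameter thermal network; the sketch below integrates a single-node version with forward Euler, where the resistance, capacitance and loss values are illustrative assumptions, not the Parker or SEVCON data:

```python
# Single-node lumped thermal model of a motor winding: heat capacity
# C [J/K], thermal resistance to ambient R [K/W], electrical losses
# P [W].  Forward-Euler integration; all values are illustrative.

def simulate_winding_temp(p_loss, r_th, c_th, t_ambient, dt, steps):
    """Return the winding temperature trace [degC]."""
    temp = t_ambient
    trace = []
    for _ in range(steps):
        # dT/dt = (P - (T - T_amb) / R) / C
        temp += (p_loss - (temp - t_ambient) / r_th) / c_th * dt
        trace.append(temp)
    return trace

trace = simulate_winding_temp(p_loss=500.0, r_th=0.05, c_th=2000.0,
                              t_ambient=25.0, dt=1.0, steps=3600)
# Steady state tends to T_amb + P * R = 25 + 500 * 0.05 = 50 degC
print(f"after 1 h: {trace[-1]:.1f} degC")
```

The same structure extends to multiple nodes (winding, stator, case, coolant) by writing one energy balance per node and coupling them through thermal resistances.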
433

Scale model validation of QUAYSIM and WAVESCAT numerical models of ship motions

Eigelaar, Lerika Susan 03 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: Various numerical modelling software packages are available for predicting moored ship motions and forces. The focus of this study was to validate the numerical models QUAYSIM and WAVESCAT, which together form a procedure for predicting moored ship motions and forces under high and low frequency waves. The validation procedure involved numerical modelling of a given physical model situation in which moored ship motions and forces were measured under both high and low frequency wave conditions. A physical model with built-in bathymetry was provided by the Council for Scientific and Industrial Research (CSIR) Hydraulics Laboratory in Stellenbosch. The model consisted of a moored container vessel at a jetty, with various mooring lines and fenders. A JONSWAP spectrum, which combines high and low frequency wave components, was used to simulate wave conditions for the modelling of ship motions. The wave periods and wave heights were measured at observation stations located at specific points in the basin. Other measurements, such as the forces in the fenders and mooring lines, were also taken. A multi-step approach was used to numerically predict the ship motions and forces. First, the coastal processes within the basin were reproduced with the SWAN Delft3D-WAVE model, which was calibrated against the physical model's wave measurements. The wave heights and periods at the respective observation stations were obtained and compared to the physical model measurements. The Delft3D-FLOW SURFBEAT model was used to calculate the low frequency waves in the coastal area. Low frequency waves are the main cause of larger ship motions and forces; it is therefore important to investigate them as part of the ship motion prediction procedure.
After the waves had been computed, the wave forces acting on the vessel were determined for both high and low frequency waves. These wave forces were modelled with the combinations SURFBEAT/LF-STRIP (low frequency waves) and SWAN/WAVESCAT (high frequency waves). LF-STRIP provided the link between the low frequency wave models and the ship motion models, converting the low frequency waves into long wave forces acting on the vessel. WAVESCAT converted the high frequency waves into short wave forces. The calculated long and short wave forces served as the input required to run the ship motion model QUAYSIM, which determines the movements of the moored ship as well as the restraining forces in the lines and fenders. The ship motions and forces were compared to the physical model, with the intention of validating the QUAYSIM/WAVESCAT approach for predicting moored ship motions. The study provides an overview of both the setup and results of the physical and numerical models. A description of each of the numerical models SWAN, SURFBEAT, LF-STRIP, WAVESCAT and QUAYSIM is provided, along with a comparison between the physical and numerical models for each procedure. The validation procedure provided useful documentation of the quality of these numerical modelling approaches, which are already in use in some design projects. The WAVESCAT and QUAYSIM ship motion models have been shown to provide a good correlation between the physical model and the numerical approach, although improvements are still required. Good comparisons were obtained for the long wave motions (the horizontal movements: surge, sway and yaw). The surge and sway motions were slightly overestimated by QUAYSIM. The magnitude of the yaw was comparable but not well represented in the spectral plots.
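The JONSWAP spectrum used to generate the wave conditions has a standard closed form; a sketch using Goda's approximate normalisation, with illustrative sea-state parameters rather than the CSIR basin settings:

```python
import math

def jonswap(f, hs, tp, gamma=3.3):
    """JONSWAP variance density S(f) [m^2/Hz] for significant wave
    height hs [m], peak period tp [s] and peak-enhancement factor gamma
    (Goda's approximate normalisation)."""
    fp = 1.0 / tp
    sigma = 0.07 if f <= fp else 0.09
    # peak-enhancement exponent
    r = math.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    # normalisation so the spectrum integrates to roughly hs^2 / 16
    alpha = 0.0624 / (0.230 + 0.0336 * gamma - 0.185 / (1.9 + gamma))
    s_pm = alpha * hs**2 * fp**4 * f**-5 * math.exp(-1.25 * (fp / f) ** 4)
    return s_pm * gamma**r

# Illustrative sea state: Hs = 2 m, Tp = 10 s; the spectrum peaks at f = 1/Tp
s_peak = jonswap(0.1, hs=2.0, tp=10.0)
print(f"S(fp) = {s_peak:.2f} m^2/Hz")
```

Summing sinusoidal components drawn from this density (with random phases) yields the irregular wave time series that drives a basin or numerical wave model.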
434

Aeronautical Channel Modeling for Packet Network Simulators

Khanal, Sandarva 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / The introduction of network elements into telemetry systems brings a level of complexity that makes performance analysis difficult, if not impossible. Packet simulation is a well understood tool that enables performance prediction for network designs or for operational forecasting. Packet simulators must, however, be customized to incorporate aeronautical radio channels and other effects unique to the telemetry application. This paper presents a method for developing a Markov Model simulation of aeronautical channels for use in packet network simulators such as OPNET Modeler. It shows how the Hidden Markov Model (HMM) and the Markov Model (MM) can be used together to first extract the channel behavior of an OFDM transmission over an aeronautical channel, and then replicate its statistical behavior during simulations in OPNET Modeler. Results demonstrate how a simple Markov Model can capture the behavior of very complex combinations of channel and modulation conditions.
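A two-state instance of such a Markov channel model (the Gilbert-Elliott channel) can be sketched as follows; the transition probabilities and per-state error rates are illustrative assumptions, not parameters extracted from the paper's OFDM measurements:

```python
import random

# Two-state Markov (Gilbert-Elliott) packet channel: a GOOD state with a
# low packet-error probability and a BAD state with a high one.  The
# transition probabilities and error rates are illustrative assumptions.
P_G2B, P_B2G = 0.05, 0.30            # per-packet state transition probabilities
P_ERR = {"GOOD": 0.01, "BAD": 0.50}  # per-state packet error probability

def simulate(n_packets, seed=1):
    """Return the simulated fraction of packets lost."""
    rng = random.Random(seed)
    state, errors = "GOOD", 0
    for _ in range(n_packets):
        errors += rng.random() < P_ERR[state]
        # advance the Markov chain one step
        if state == "GOOD" and rng.random() < P_G2B:
            state = "BAD"
        elif state == "BAD" and rng.random() < P_B2G:
            state = "GOOD"
    return errors / n_packets

loss = simulate(100_000)
# Stationary BAD probability = P_G2B / (P_G2B + P_B2G) = 1/7, so the
# long-run loss rate should approach 0.01 * 6/7 + 0.50 * 1/7 ~ 0.079
print(f"simulated loss rate: {loss:.3f}")
```

In the paper's workflow the HMM training step would supply these parameters from measured traces; the simulator then only needs the cheap chain above, which is what makes the approach attractive inside a packet simulator.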
435

Mathematical and Physical Simulations of BOF Converters

Zhou, Xiaobin January 2015 (has links)
The purpose of this study is to develop mathematical models to explore mixing and its related phenomena in converter baths. Specifically, first, a mathematical model of a physical model converter, scaled down to 1/6th of a 30 t vessel, was developed. A number of parameters were studied and their effects on the mixing time were recorded in a top blown converter. Second, a mathematical model of a combined top-bottom blown converter was built to investigate process optimization. Then, a side tuyere was introduced in the combined top-bottom blown converter and its effects on the mixing and wall shear stress were studied. Moreover, based on the above results, the kinetic energy transfer phenomena in a real converter were investigated by applying the mathematical models. A simplified model, in which the calculation region was reduced to lower the computational cost compared to simulating the whole converter, was used in the mathematical simulation. This method was also used in the simulation of real converters. The approach makes it possible to simulate the Laval nozzle flow jet and the cavity separately, using different turbulence models. In the top blown converter model, a comparison between the physical and mathematical models showed a good relative difference of 2.5% and 6.1% for the cavity depth and radius, respectively. In addition, the predicted mixing time showed a good relative difference of 2.8% in comparison to the experimental data. In an optimization of a combined top-bottom blown converter, a new bottom tuyere scheme with an asymmetrical configuration was found to be one of the best cases with respect to decreasing the mixing time in the bath. An industrial investigation showed that the new tuyere scheme yields a better stirring condition in the bath compared to the original case.
Furthermore, the results indicated that the mixing time for a combined top-bottom-side blown converter is decreased profoundly compared to a conventional combined top-bottom blown converter. The side wall shear stress is increased by introducing side blowing, especially in the region near the side blowing plume. For a real 100 t converter, the fundamental aspects of kinetic energy transfer from the top and bottom gas to the bath were explored. The analyses revealed that the energy transfer is less efficient when the top lance height is lowered or the flowrate is increased in top blowing operations. However, the inverse trend was found for bottom blowing: the kinetic energy transfer increases when the bottom flowrate is increased. In addition, the slag on top of the bath is found to dissipate 6.6%, 9.4% and 11.2% of the energy for slag masses of 5, 9 and 15 t, respectively, compared to the case without slag on the bath surface.
436

Modelling Issues in Three-state Progressive Processes

Kopciuk, Karen January 2001 (has links)
This dissertation focuses on several issues pertaining to three-state progressive stochastic processes. Casting survival data within a three-state framework is an effective way to incorporate intermediate events into an analysis. These events can yield valuable insights into treatment interventions and the natural history of a process, especially when the right censoring is heavy. Exploiting the uni-directional nature of these processes allows for more effective modelling of the types of incomplete data commonly encountered in practice, as well as time-dependent explanatory variables and different time scales. In Chapter 2, we extend the model developed by Frydman (1995) by incorporating explanatory variables and by permitting interval censoring for the time to the terminal event. The resulting model is quite general and combines features of the models proposed by Frydman (1995) and Kim et al. (1993). The decomposition theorem of Gu (1996) is used to show that all of the estimating equations arising from Frydman's log likelihood function are self-consistent. An AIDS data set analyzed by these authors is used to illustrate our regression approach. Estimating the standard errors of our regression model parameters, by adopting a piecewise constant approach for the baseline intensity parameters, is the focus of Chapter 3. We also develop data-driven algorithms which select changepoints for the intervals of support, based on the Akaike and Schwarz Information Criteria. A sensitivity study is conducted to evaluate these algorithms. The AIDS example is considered here once more; standard errors are estimated for several piecewise constant regression models selected by the model criteria. Our results indicate that for both the example and the sensitivity study, the resulting estimated standard errors of certain model parameters can be quite large.
Chapter 4 evaluates the goodness-of-link function for the transition intensity between states 2 and 3 in the regression model we introduced in chapter 2. By embedding this hazard function in a one-parameter family of hazard functions, we can assess its dependence on the specific parametric form adopted. In a simulation study, the goodness-of-link parameter is estimated and its impact on the regression parameters is assessed. The logistic specification of the hazard function from state 2 to state 3 is appropriate for the discrete, parametric-based data sets considered, as well as for the AIDS data. We also investigate the uniqueness and consistency of the maximum likelihood estimates based on our regression model for these AIDS data. In Chapter 5 we consider the possible efficiency gains realized in estimating the survivor function when an intermediate auxiliary variable is incorporated into a time-to-event analysis. Both Markov and hybrid time scale frameworks are adopted in the resulting progressive three-state model. We consider three cases for the amount of information available about the auxiliary variable: the observation is completely unknown, known exactly, or known to be within an interval of time. In the Markov framework, our results suggest that observing subjects at just two time points provides as much information about the survivor function as knowing the exact time of the intermediate event. There was generally a greater loss of efficiency in the hybrid time setting. The final chapter identifies some directions for future research.
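A progressive three-state process of the kind studied here can be simulated directly under the Markov assumption with constant transition intensities; a minimal sketch with illustrative rates, checked against the hypoexponential closed form for the survivor function:

```python
import math
import random

# Progressive three-state Markov model with constant intensities:
#   state 1 --lam12--> state 2 --lam23--> state 3 (absorbing).
# The rates are illustrative assumptions.
LAM12, LAM23 = 0.5, 1.0

def survivor_at(t, n=50_000, seed=7):
    """Monte Carlo estimate of S(t) = P(not yet in state 3 at time t)."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(n):
        # total time to absorption: sojourn in state 1 plus sojourn in state 2
        absorption = rng.expovariate(LAM12) + rng.expovariate(LAM23)
        alive += absorption > t
    return alive / n

# Hypoexponential closed form (valid for LAM12 != LAM23):
#   S(t) = (lam23 e^{-lam12 t} - lam12 e^{-lam23 t}) / (lam23 - lam12)
t = 2.0
exact = (LAM23 * math.exp(-LAM12 * t)
         - LAM12 * math.exp(-LAM23 * t)) / (LAM23 - LAM12)
mc = survivor_at(t)
print(f"MC: {mc:.3f}  exact: {exact:.3f}")
```

Interval censoring of the intermediate 1 → 2 transition, as permitted by the Chapter 2 model, amounts to recording only that the first sojourn ended inside an observation window rather than its exact time.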
437

Evaluating the utility and validity of the representational redescription model as a general model for cognitive development

Butler, Cathal January 2008 (has links)
A series of studies was conducted with the aim of showing that the Representational Redescription (RR) model (Karmiloff-Smith, 1992) can be used as a general model of cognitive development. In this thesis, three aspects of the RR model were explored. The first set of experiments analysed the generalisability of RR levels across tasks in a domain. In an initial study, the levels of the RR model were successfully applied to a balance scale task. Then, in a subsequent study, children's RR levels on the balance scale task were compared with their RR levels on a balance beam task (see Pine et al., 1999). Children were seen to access the same level of verbal knowledge across both tasks. This suggests that it is verbal knowledge which provides the basis for generalisation of knowledge. The second set of experiments considered the RR model in relation to the domain of numeracy. The levels of the RR model were applied to children's developing representations of the one-to-one and cardinality principles. The RR levels were shown to have utility in predicting children's openness to different types of "procedurally based" and "conceptually based" teaching interventions, with pre-implicit children benefiting from procedural interventions, and children at implicit and more advanced representational levels benefiting from conceptual interventions. The final study involved a microgenetic analysis of children's representational levels on the balance beam task. The findings indicated the importance of a period of stability prior to a cognitive advance, and demonstrated that cognitive advances can be driven by changes in the verbal explanations that are offered, rather than changes in successful performance. This provides support for the mechanism of change proposed by Karmiloff-Smith (1992). Together, the findings indicate that the RR model provides a useful perspective on the cognitive development of children.
In particular, the thesis highlights when children can use the same representations for different tasks in a domain and suggests the mechanism that brings about representational change.
438

The measurement of free energy by Monte Carlo computer simulation

Smith, Graham January 1996 (has links)
One of the most important problems in statistical mechanics is the measurement of free energies, these being the quantities that determine the direction of chemical reactions and--the concern of this thesis--the location of phase transitions. While Monte Carlo (MC) computer simulation is a well-established and invaluable aid in statistical mechanical calculations, it is well known that, in its most commonly-practised form (where samples are generated from the Boltzmann distribution), it fails if applied directly to the free energy problem. This failure occurs because the measurement of free energies requires a much more extensive exploration of the system's configuration space than do most statistical mechanical calculations: configurations which have a very low Boltzmann probability make a substantial contribution to the free energy, and the important regions of configuration space may be separated by potential barriers. We begin the thesis with an introduction, and then give a review of the very substantial literature that the problem of the MC measurement of free energy has produced, explaining and classifying the various different approaches that have been adopted. We then proceed to present the results of our own investigations. First, we investigate methods in which the configurations of the system are sampled from a distribution other than the Boltzmann distribution, concentrating in particular on a recently developed technique known as the multicanonical ensemble. The principal difficulty in using the multicanonical ensemble is the difficulty of constructing it: implicit in it is at least partial knowledge of the very free energy that we are trying to measure, and so to produce it requires an iterative process. 
Therefore we study this iterative process, using Bayesian inference to extend the usual method of MC data analysis, and introducing a new MC method in which inferences are made based not on the macrostates visited by the simulation but on the transitions made between them. We present a detailed comparison between the multicanonical ensemble and the traditional method of free energy measurement, thermodynamic integration, and use the former to make a high-accuracy investigation of the critical magnetisation distribution of the 2d Ising model from the scaling region all the way to saturation. We also make some comments on the possibility of going beyond the multicanonical ensemble to `optimal' MC sampling. Second, we investigate an isostructural solid-solid phase transition in a system consisting of hard spheres with a square-well attractive potential. Recent work, which we have confirmed, suggests that this transition exists when the range of the attraction is very small (width of attractive potential/ hard core diameter ~ 0.01). First we study this system using a method of free energy measurement in which the square-well potential is smoothly transformed into that of the Einstein solid. This enables a direct comparison of a multicanonical-like method with thermodynamic integration. Then we perform extensive simulations using a different, purely multicanonical approach, which enables the direct connection of the two coexisting phases. It is found that the measurement of transition probabilities is again advantageous for the generation of the multicanonical ensemble, and can even be used to produce the final estimators. Some of the work presented in this thesis has been published or accepted for publication: G. R. Smith & A. D. Bruce, A Study of the Multicanonical Monte Carlo Method, J. Phys. A 28, 6623 (1995), doi:10.1088/0305-4470/28/23/015; G. R. Smith & A. D. Bruce, Multicanonical Monte Carlo Study of a Structural Phase Transition, Europhys. Lett. 34, 91 (1996), doi:10.1209/epl/i1996-00421-1; G. R. Smith & A. D. Bruce, Multicanonical Monte Carlo Study of Solid-Solid Phase Coexistence in a Model Colloid, Phys. Rev. E 53, 6530 (1996), doi:10.1103/PhysRevE.53.6530.
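The core difficulty described above, Boltzmann sampling failing to cross barriers that multicanonical weights flatten, can be illustrated on a toy one-dimensional double well. In this sketch the weights cancel the energy exactly, a luxury unavailable in practice, where (as the thesis discusses) they must be built iteratively:

```python
import math
import random

# Toy double-well energy on discrete states x in {0..20}: two minima
# (near x = 3 and x = 17) separated by a barrier at x = 10.  Boltzmann
# sampling stays trapped in one well; multicanonical log-weights
# log w(x) = +beta E(x) flatten the landscape so the walk crosses freely.
N = 21

def energy(x):
    u = (x - 10) / 10.0               # map to [-1, 1]
    return 16.0 * (u * u - 0.5) ** 2  # barrier height 4 at x = 10

def metropolis(steps, beta, log_weight, seed=3):
    """Sample prob(x) ~ exp(-beta E(x) + log_weight(x)); return the
    fraction of steps spent in the right-hand well (x > 10)."""
    rng = random.Random(seed)
    x, right = 0, 0
    for _ in range(steps):
        y = x + rng.choice((-1, 1))
        if 0 <= y < N:  # reject moves off the lattice
            log_a = (-beta * (energy(y) - energy(x))
                     + log_weight(y) - log_weight(x))
            if log_a >= 0 or rng.random() < math.exp(log_a):
                x = y
        right += x > 10
    return right / steps

BETA = 4.0
boltz = metropolis(200_000, BETA, lambda x: 0.0)            # Boltzmann
muca = metropolis(200_000, BETA, lambda x: BETA * energy(x))  # multicanonical
print(f"right-well fraction  Boltzmann: {boltz:.2f}  multicanonical: {muca:.2f}")
```

By symmetry the true right-well weight is close to one half; the Boltzmann chain, started on the left, essentially never sees it, while the flat-histogram chain diffuses freely across the barrier.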
439

Contribution à la compréhension des finalités de l’essaimage. Vers une modélisation de la stratégie d’essaimage : cas des grandes entreprises tunisiennes. / A contribution to the understanding of the purposes of the spin-off strategy toward a model of spin-off strategy : the large tunisian companie as a case study

Ben Hamed Amara, Anji 07 January 2015 (has links)
This research aims to show the value of spin-offs (essaimage) as a tool around which a range of organizational strategies can be articulated. Strategic coalitions are established and close links are built between the different entities involved in the spin-off. The nature and intensity of these links depend on the type of spin-off strategy adopted and on the endogenous rationales associated with it. To contribute to a better understanding of this practice, this research proposes, building on a theoretical and typological framework, an operational reading grid that highlights the multiple dimensions of spin-offs. To this end, an abductive approach based on multiple case studies was mobilized. The case identification and analysis strategy proceeded in three stages. First, a preliminary exploration identified the cases for analysis from interviews with 22 managers concerned a priori with spin-off practice in Tunisia. To deepen our understanding of the practice, in-depth thematic analyses of interviews with the managers of spin-off units were conducted. Finally, to identify the logical articulations between the dimensions of spin-offs thus delineated, the qualitative exploration was consolidated by the study of 7 cognitive maps of managers with varied spin-off experience. The empirical results reveal the existence of diverse trajectories for implementing spin-offs, whose meaning, direction and logic differ from one company to another. Furthermore, reading the mental representations of the managers of spin-off units confirmed the importance of external, environment-related factors and of internal, resource-optimization factors in modelling the process. This cognitive analysis also suggested some lines of reflection concerning the predominance of the strategic objectives of innovation management and of valorizing scientific research results in certain parent companies.
440

Direct and Indirect Searches for New Physics beyond Standard Model

Zhang, Huanian January 2016 (has links)
The search for new physics beyond the Standard Model can follow one of two tracks: direct searches for new particles at colliders, or indirect probes of new physics through precision measurements. In direct searches for third generation squarks in SUSY at the LHC, the common practice has been to assume a 100% decay branching fraction for a given search channel. In realistic MSSM scenarios, there is often more than one significant decay mode present, which significantly weakens the current search limits on third generation squarks at the LHC. On the other hand, the combination of multiple decay modes, as well as newly opened decay modes, offers alternative discovery channels for third generation squark searches. In this work, we present the third generation squark decays and the collider signatures in a few representative mass parameter scenarios. We then analyze the reach of the stop/sbottom signal for QCD pair production at the 14 TeV LHC with 300 fb⁻¹ integrated luminosity and at a 100 TeV future collider with 3000 fb⁻¹ integrated luminosity in a few representative scenarios. In the scenario of a Bino LSP with a Wino NLSP, we investigate stop/sbottom pair production at the LHC with one stop/sbottom decaying via t̃ → t χ̃⁰₁, t χ̃⁰₂ / b̃ → b χ̃⁰₁, b χ̃⁰₂, and the other decaying via t̃ → b χ̃±₁ / b̃ → t χ̃±₁. With the gauginos subsequently decaying to gauge bosons or a Higgs boson, χ̃⁰₂ → Z χ̃⁰₁, h χ̃⁰₁ and χ̃±₁ → W± χ̃⁰₁, leading to bbbbjjℓ + E_T^miss final states for the Higgs channel and bbjjjjℓℓ + E_T^miss final states for the Z channel, we study the reach of those signals at the 14 TeV LHC with 300 fb⁻¹ integrated luminosity. Because the sbottom and stop signals in the same SUSY parameter scenario have indistinguishable final states, they are combined to obtain optimal sensitivity, which is about 150 GeV better than the individual reaches of the sbottom or stop.
In the scenario of a Bino LSP with a Higgsino NLSP, light stop pair production at the 14 TeV LHC, with stops decaying via t̃₁ → t χ̃⁰₂/χ̃⁰₃ and the neutralino subsequently decaying to a gauge boson or a Higgs boson, χ̃⁰₂/χ̃⁰₃ → χ̃⁰₁ h/Z, leads to tt̄hh + E_T^miss, tt̄hZ + E_T^miss or tt̄ZZ + E_T^miss final states. These decay channels give rise to final states containing one or more leptons, so our search strategy is to divide the signal regions by lepton multiplicity. We find that the one-lepton signal region of the tt̄hZ + E_T^miss channel has the best reach sensitivity for light stop searches at the 14 TeV LHC with 300 fb⁻¹ integrated luminosity. We then combine all the signal regions for a given decay channel, or all the decay channels for a given signal region, to maximize the reach sensitivity of the stop search. For light stop pair production at a √s = 100 TeV future machine with 3000 fb⁻¹ integrated luminosity, we find that a stop with a mass up to 6 TeV can be discovered at 5σ significance, while a mass up to 6.8 TeV can be excluded at 95% C.L. when the results of all three channels are combined. In the indirect probes of new physics, we use the Z-pole oblique parameters S, T, U and Higgs precision measurements complementarily in the framework of the Two Higgs Doublet Model (2HDM) at current and future colliders. S, T and U are not very sensitive to the rotation angle β − α, while the Higgs precision measurements set strong constraints on β − α. T is very sensitive to the mass differences between the Higgs bosons, forcing the mass of the charged Higgs (H±) to align with the mass of either of the neutral Higgs bosons H or A. For the Higgs precision measurements, we consider the tree level corrections to the Higgs couplings as well as the one loop radiative corrections for the future collider.
The combination of Z-pole and Higgs precision measurements sets strong complementary constraints on the parameter space of the 2HDM, especially at a future e⁺e⁻ circular collider, which offers much cleaner backgrounds and higher luminosity than the current collider.
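The gain from combining channels with indistinguishable final states can be sketched with the usual Gaussian s/√b approximation, in which independent channels add in quadrature; the signal and background yields below are illustrative assumptions, not the analysis's numbers:

```python
import math

# Gaussian-approximation significance Z = s / sqrt(b) for a counting
# search, combined over independent channels in quadrature.  The signal
# and background yields below are illustrative assumptions.

def significance(s, b):
    """Expected significance of s signal events over b background events."""
    return s / math.sqrt(b)

def combined(channels):
    """Z_comb = sqrt(sum of Z_i^2) over statistically independent channels."""
    return math.sqrt(sum(significance(s, b) ** 2 for s, b in channels))

# e.g. a stop channel and a sbottom channel with indistinguishable final states
channels = [(30.0, 100.0), (25.0, 90.0)]
z = combined(channels)
print(f"individual: {significance(30.0, 100.0):.2f}, "
      f"{significance(25.0, 90.0):.2f}; combined: {z:.2f} sigma")
```

Since both s and b scale linearly with integrated luminosity, Z grows like √L in this approximation, which is why the reach improves between the 300 fb⁻¹ and 3000 fb⁻¹ datasets.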
