451

Contribution au pronostic de durée de vie des systèmes piles à combustible PEMFC / Contribution to lifetime prognostics for proton exchange membrane fuel cell (PEMFC) systems

Silva Sanchez, Rosa Elvira 21 May 2015 (has links)
This thesis aims to address the limited lifetime of proton exchange membrane fuel cell systems (PEM-FCS) through two complementary approaches.

The first approach seeks to extend the lifetime of the PEM-FCS by designing and implementing a Prognostics & Health Management (PHM) architecture. By the nature of their technology, PEM-FCS are multi-physics systems (electrical, fluidic, electrochemical, thermal, mechanical, etc.) operating across multiple scales of time and space, which makes their behavior difficult to apprehend. The nonlinear nature of the phenomena, the reversible or irreversible character of the degradations, and the interactions between components make failure modeling genuinely difficult. Moreover, the current lack of homogeneity in the manufacturing process hinders statistical characterization of their behavior. Deploying a PHM solution would make it possible to anticipate and avoid failures, assess the state of health, estimate the Remaining Useful Life (RUL) of the system, and ultimately plan control and/or maintenance actions to ensure continuity of operation.

The second approach proposes a passive hybridization of the PEMFC with ultracapacitors (UC) so that the fuel cell operates closer to its optimal operating conditions, thereby minimizing the impact of aging. UCs are a natural complement to the PEMFC because of their high power density, fast charge/discharge capability, reversibility, and long service life. Taking fuel cell vehicles as an example, a PEMFC can be combined with UCs in either an active or a passive hybrid architecture; the overall behavior of the system depends both on the choice of architecture and on the positioning of these elements relative to the electrical load. Research in this area currently focuses on energy management between the on-board sources and storage devices, and on the definition and optimization of a power-electronic interface that conditions the energy flow between them. However, static converters add sources of faults and failures (failure of the converter's own switches, impact of high-frequency current ripple on fuel cell aging) and increase the energy losses of the complete system (even a high-efficiency converter degrades the overall energy balance).
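As a minimal illustration of the kind of RUL estimation a PHM layer performs (a sketch under assumed values, not the thesis's prognostic algorithm): fit a degradation trend to cell-voltage measurements and extrapolate to an assumed end-of-life threshold. The degradation rate, noise level, and voltage floor below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: hourly cell-voltage measurements (V) drifting downward.
t = np.arange(0, 1000.0)                         # operating hours so far
v = 0.70 - 4e-5 * t + rng.normal(0, 0.002, t.size)

# Fit a linear degradation trend v(t) = a*t + b (least squares).
a, b = np.polyfit(t, v, 1)

# Assumed end-of-life threshold on cell voltage (illustrative value).
v_eol = 0.60

# RUL = time until the fitted trend crosses the threshold.
t_eol = (v_eol - b) / a                          # a < 0 for a degrading cell
rul = t_eol - t[-1]
print(f"estimated RUL: {rul:.0f} h")
```

Real prognostics would track nonlinear and reversible degradation, but the threshold-crossing idea is the same.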
452

Measurement of the Standard Model W⁺W⁻ production cross-section using the ATLAS experiment on the LHC / Mesure de la section efficace de production des bosons W⁺W⁻ dans l'experience ATLAS au LHC

Zeman, Martin 02 October 2014 (has links)
Measurements of di-boson production cross-sections are an important part of the physics programme at the CERN Large Hadron Collider. These physics analyses provide the opportunity to probe the electroweak sector of the Standard Model at the TeV scale and could also indicate the existence of new particles or probe physics beyond the Standard Model. The excellent performance of the LHC during 2011 and 2012 allowed for very competitive measurements. This thesis provides a comprehensive overview of the experimental considerations and methods used to measure the W⁺W⁻ production cross-section in proton-proton collisions at √s = 7 TeV and 8 TeV. It begins with an introduction to the theoretical framework of the Standard Model and follows with an extensive discussion of the methods used to record and reconstruct physics events in an experiment of this magnitude, including the associated online and offline software tools. The relevant experiments are described, with a particularly detailed section on the ATLAS detector. The final chapter presents a detailed description of the analysis of W-boson pair production in the leptonic decay channels, using the datasets recorded by the ATLAS experiment during 2011 and 2012 (Run I): 4.6 fb⁻¹ recorded at √s = 7 TeV and 20.28 fb⁻¹ recorded at √s = 8 TeV. The experimentally measured cross-section for W-pair production at ATLAS is consistently higher than the Standard Model prediction at centre-of-mass energies of 7 TeV and 8 TeV. The thesis concludes with the presentation of differential cross-section measurement results.
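For orientation (a generic textbook relation, not a formula quoted from the thesis), a cut-and-count cross-section extraction of this kind schematically takes the form

```latex
\sigma_{WW} \;=\; \frac{N_{\mathrm{data}} - N_{\mathrm{bkg}}}
                       {C_{WW}\, A_{WW} \int \! L \, dt}
```

where N_data is the number of selected candidate events, N_bkg the estimated background yield, C_WW the detector correction factor, A_WW the fiducial acceptance, and ∫L dt the integrated luminosity.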
453

Unstable equilibrium : modelling waves and turbulence in water flow

Connell, R. J. January 2008 (has links)
This thesis develops a one-dimensional version of a new data-driven model of turbulence that uses the Karhunen-Loève (KL) expansion to provide a spectral solution of the turbulent flow field, based on analysis of Particle Image Velocimetry (PIV) turbulence data. The analysis derives a second-order random field over the whole flow domain that reproduces turbulence properties in areas of non-uniform flow, and where flow separates, better than present models based on the Navier-Stokes equations. Those models require simplifying assumptions to reduce the number of calculations so that they can run on present-day computers or supercomputers, and these assumptions reduce their accuracy. The improved flow field is gained at the expense of the model not being generic: the new data-driven model can only be used for the flow situation of the data, as the analysis shows that the kernel of the turbulent flow field of an undular hydraulic jump could not be related to the surface waves, a key feature of the jump.

The kernel developed has two parts, called the outer and inner parts. A comparison shows that the ratio of outer to inner kernel primarily reflects the ratio of turbulent production to turbulent dissipation. The outer part, with a larger correlation length, reflects the larger structures of the flow that contain most of the turbulent energy production; the inner part reflects the smaller structures that contain most of the turbulent energy dissipation. The model can use a kernel with changing variance and/or regression coefficient over the domain, necessitating the use of both numerical and analytical methods, and it allows a two-part regression-coefficient kernel, the solution being the sum of the results from each part of the kernel.

This research highlighted the need to assess the size of the structures calculated by Navier-Stokes-based models in order to validate them; at present most studies use mean velocities and turbulent fluctuations to validate a model's performance. Because the new data-driven model gives better turbulence properties, it could be used in complicated flow situations, such as flow around a rock groyne, to give a better assessment of the forces and pressures in the water flow resulting from turbulent fluctuations for the design of such structures.

Further development to make the model usable includes: solving the numerical problem associated with the double kernel; reducing the number of modes required; obtaining a solution for the kernel of two- and three-dimensional flows; including the change in correlation length with time, as the model presently gives instantaneous realisations of the flow field; and including third- and fourth-order statistics so that the model's velocity field is not limited to Gaussian distribution properties. As the third- and fourth-order statistics are Reynolds-number dependent, this will enable the model to be applied to PIV data from physical scale models. In summary, this new data-driven model complements models based on the Navier-Stokes equations by providing better results in complicated design situations. Further research to develop the new model is viewed as an important step forward in the analysis of river control structures such as rock groynes, which are prevalent on New Zealand rivers protecting large cities.
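For background, a KL (proper orthogonal decomposition) expansion of PIV data can be computed with a singular value decomposition of the snapshot matrix. The sketch below is generic and uses random stand-in data, not the thesis's dataset or its two-part kernel.

```python
import numpy as np

# Illustrative snapshot matrix: each column is one flattened PIV velocity
# field; each of the 200 columns is one instant in time.
rng = np.random.default_rng(0)
snapshots = rng.normal(size=(5000, 200))     # stand-in for real PIV data

# Subtract the mean flow to work with fluctuations only.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)

# KL/POD modes are the left singular vectors; squared singular values
# give each mode's share of the turbulent kinetic energy.
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)

# Truncate to the modes carrying 95% of the energy.
k = int(np.searchsorted(np.cumsum(energy), 0.95)) + 1
modes, coeffs = U[:, :k], np.diag(s[:k]) @ Vt[:k]

# Rank-k reconstruction of the first snapshot's fluctuation field.
recon = modes @ coeffs[:, 0]
```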
454

Digital Intelligence – Möglichkeiten und Umsetzung einer informatikgestützten Frühaufklärung / Digital Intelligence – opportunities and implementation of a data-driven foresight

Walde, Peter 18 January 2011 (has links) (PDF)
The goal of Digital Intelligence, i.e. data-driven strategic foresight, is to support the shaping of the future on the basis of valid and well-founded digital information, with comparatively little effort and enormous savings in time and cost. Innovative technologies for (semi-)automatic language and data processing help here, for example information retrieval, (temporal) data, text, and web mining, information visualization, conceptual structures, and informetrics. They make it possible to detect key topics and latent relationships in good time within an unmanageably large, distributed, and inhomogeneous body of data, such as patents, scientific publications, press documents, or web content, and to deliver them quickly and in a targeted way. Digital Intelligence thus makes intuitively sensed patterns and developments explicit and measurable. This research aims, first, to demonstrate what computer science can contribute to data-driven foresight and, second, to implement these possibilities in a pragmatic context. It starts with an introduction to the discipline of strategic foresight and its data-driven branch, Digital Intelligence. The theoretical and, in particular, computer-science foundations of foresight are discussed and classified, above all the possibilities of time-oriented data exploration. Various methods and software tools are designed and developed to support the time-oriented exploration of unstructured text data in particular (temporal text mining); only approaches that can be used pragmatically in the context of a large institution and under the specific requirements of strategic foresight are considered. Particularly noteworthy are a platform for collective search and an innovative method for identifying weak signals. Finally, a Digital Intelligence service is presented and discussed that was successfully implemented on this basis in a global technology-oriented corporation and enables systematic competitive, market, and technology analysis based on the digital traces people leave behind.
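As one hedged illustration of time-oriented text exploration (not the weak-signal method developed in the thesis), terms can be ranked by the slope of their yearly document frequency so that rising topics surface early. The corpus here is a toy stand-in.

```python
from collections import Counter
import numpy as np

# Illustrative corpus: (year, tokenized document) pairs.
docs = [(2005, ["fuel", "cell"]), (2006, ["fuel", "graphene"]),
        (2007, ["graphene", "sensor"]), (2008, ["graphene", "graphene"])]

years = sorted({y for y, _ in docs})
counts = {y: Counter() for y in years}
for y, toks in docs:
    counts[y].update(set(toks))              # document frequency per year

def trend(term):
    """Least-squares slope of the term's yearly document frequency."""
    freq = [counts[y][term] for y in years]
    return np.polyfit(years, freq, 1)[0]

vocab = {t for _, toks in docs for t in toks}
for term in sorted(vocab, key=trend, reverse=True):
    print(term, round(trend(term), 2))       # rising terms first
```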
455

Introduction of New Products in the Supply Chain : Optimization and Management of Risks

El KHOURY, Hiba 31 January 2012 (has links) (PDF)
Shorter product life cycles and rapid product obsolescence provide increasing incentives to introduce new products to markets more quickly. As a consequence of rapidly changing market conditions, firms focus on improving their new product development processes to reap the benefits of early market entry. Researchers have analyzed market entry, but have seldom provided quantitative approaches for the product rollover problem. This research builds upon the literature by using established optimization methods to examine how firms can minimize their net loss during the rollover process. Specifically, our work explicitly optimizes the timing of removal of old products and introduction of new products, the optimal strategy, and the magnitude of net losses when the market entry approval date of a new product is unknown. In the first paper, we use the conditional value at risk to optimize the net loss and investigate the effect of the manager's risk perception on the rollover process, comparing it to minimization of the classical expected net loss; we derive conditions for optimality and unique closed-form solutions for the single and dual rollover cases. In the second paper, we investigate the rollover problem for a time-dependent demand rate for the second product, approximating the Bass model. Finally, in the third paper, we apply a data-driven optimization approach to the product rollover problem where the probability distribution of the approval date is unknown and we instead have historical observations of approval dates. We develop the optimal times of rollover and show the superiority of the data-driven method over the conditional value at risk in cases where it is difficult to guess the real probability distribution.
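For background (a standard definition, not a result from the thesis), the conditional value at risk of a random loss L at level α has the Rockafellar-Uryasev representation

```latex
\mathrm{CVaR}_{\alpha}(L) \;=\; \min_{t \in \mathbb{R}}
\left\{\, t + \tfrac{1}{1-\alpha}\,\mathbb{E}\!\left[(L - t)^{+}\right] \right\}
```

For continuous L this equals the expected loss in the worst (1−α) fraction of outcomes, and the minimizing t is the value at risk VaR_α(L).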
456

Kvantitativ Modellering av förmögenhetsrättsliga dispositiva tvistemål / Quantitative legal prediction : Modeling cases amenable to out-of-court Settlements

Egil, Martinsson January 2014 (has links)
BACKGROUND: The idea of legal automation is a controversial topic that has been discussed for hundreds of years, in modern times in the context of law and artificial intelligence. Strangely, real-world applications are very rare. Assuming that the judicial system, like any system, transforms inputs into outputs, one would think that we should be able to measure it, gain insight into its inner workings, and ultimately use these measurements to make predictions about its output. This thesis focuses on civil procedures in commercial matters amenable to out-of-court settlement (förmögenhetsrättsliga dispositiva tvistemål) and poses the question: can the outcome of civil procedures be predicted using statistical methods? METHOD: By analyzing procedural law and legal doctrine, the civil procedure was modeled in terms of a random variable with a discrete observable outcome. Data for 14,821 cases were extracted from eight district courts. Five of these courts (13,299 cases) were used to train the models, and three courts (1,522 cases), chosen randomly, were kept untouched for validation. Most cases concerned monetary claims (66%) and/or damages (12%). Binary and multinomial logistic regression were used as classifiers. RESULTS: The models were found to be uncalibrated, but they clearly outperformed random score assignment at separating classes. At a preset threshold they gave accuracies significantly higher (p << 0.001) than random guessing, and in identifying settlements or the correct type of verdict they performed significantly better (p << 0.003) than consistently guessing the most common outcome. CONCLUSION: Models trained on cases from one set of courts can, to some extent, predict the outcomes of cases from another set of courts. The results of applying the models to new data support the conclusion that the outcome of civil processes can be predicted using statistical methods.
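A minimal sketch of the cross-court validation scheme described above (synthetic stand-in data; the features, class balance, and court split are assumptions, not the thesis dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Illustrative stand-in data: four case features, binary outcome.
X = rng.normal(size=(1000, 4))
logits = 1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.5, 1000)
y = (logits > 0).astype(int)                # 1 = settlement, 0 = verdict
court = rng.integers(0, 8, 1000)            # which of 8 courts heard the case

# Train on five courts, hold out three unseen courts for validation,
# mirroring the train/validation split described in the abstract.
train = np.isin(court, [0, 1, 2, 3, 4])
test = np.isin(court, [5, 6, 7])

model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print("held-out accuracy:", accuracy_score(y[test], model.predict(X[test])))
```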
457

Data-driven prediction of saltmarsh morphodynamics

Evans, Ben Richard January 2018 (has links)
Saltmarshes provide a diverse range of ecosystem services and are protected under a number of international designations. Nevertheless, they are generally declining in extent in the United Kingdom and North West Europe. The drivers of this decline are complex and poorly understood, and when considering mitigation and management for future ecosystem service provision it will be important to understand why, where, and to what extent decline is likely to occur. Few studies have attempted to forecast saltmarsh morphodynamics at a system level over decadal time scales; there is no synthesis of existing knowledge available for specific site predictions, nor a formalised framework for individual site assessment and management.

This project evaluates the extent to which machine learning approaches (boosted regression trees, neural networks, and Bayesian networks) can facilitate synthesis of information and prediction of decadal-scale morphological tendencies of saltmarshes. Importantly, data-driven predictions are independent of the assumptions underlying physically-based models, and therefore offer an additional opportunity to cross-validate between two paradigms. Marsh margins and interiors are both considered, but are treated separately since they are regarded as sensitive to different process suites. The study therefore identifies factors likely to control morphological trajectories and develops geospatial methodologies to derive proxy measures relating to controls or processes. These metrics are developed at a high spatial density, on the order of tens of metres, allowing fine-scale behavioural differences to be resolved. Conventional statistical approaches, as adopted previously, are applied to the dataset to assess consistency with earlier findings, with some agreement being found.

The data are subsequently used to train and compare three types of machine learning model; boosted regression trees outperform the other two methods in this context. The resulting models explain more than 95% of the variance in marginal changes and 91% for internal dynamics. Models are selected on validation performance and then queried with realistic future scenarios representing altered input conditions that may arise as a consequence of future environmental change. Responses to these scenarios suggest system sensitivity to all scenarios tested and offer a high degree of spatial detail. While mechanistic interpretation of some responses is challenging, process-based justifications are offered for many of the observed behaviours, providing confidence that the results are realistic.

The work demonstrates a potentially powerful alternative (and complement) to current morphodynamic models that can be applied over large areas with relative ease compared to numerical implementations. Powerful analyses with broad scope are now available to coastal geomorphology through the combination of spatial data streams and machine learning, and such methods are shown to be of great potential value in support of applied management and monitoring interventions.
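A minimal sketch of a boosted-regression-tree workflow of the kind described, including a scenario query (all data synthetic; the predictor names are assumptions, not the thesis's variables):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Illustrative predictors per marsh-margin cell: fetch, elevation,
# tidal range, sediment supply; target is decadal margin change (m).
X = rng.normal(size=(2000, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.3, 2000)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                max_depth=3).fit(X_tr, y_tr)
print("validation R^2:", brt.score(X_va, y_va))

# Scenario querying: perturb one driver (e.g. raise tidal range) and
# inspect the predicted response, as done for future-change scenarios.
scenario = X_va.copy()
scenario[:, 2] += 1.0
print("mean predicted change under scenario:",
      (brt.predict(scenario) - brt.predict(X_va)).mean())
```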
458

Learning Data-Driven Models of Non-Verbal Behaviors for Building Rapport Using an Intelligent Virtual Agent

Amini, Reza 25 March 2015 (has links)
There is a growing societal need to address the increasing prevalence of behavioral health issues, such as obesity, alcohol or drug use, and general lack of treatment adherence for a variety of health problems. The statistics, worldwide and in the USA, are daunting: excessive alcohol use is the third leading preventable cause of death in the United States (with 79,000 deaths annually) and is responsible for a wide range of health and social problems. On the positive side, these behavioral health issues (and associated possible diseases) can often be prevented with relatively simple lifestyle changes, such as losing weight through diet and/or physical exercise, or learning how to reduce alcohol consumption. Medicine has therefore started to move toward preventively promoting wellness rather than solely treating already established illness.

Evidence-based, patient-centered Brief Motivational Interviewing (BMI) interventions have been found particularly effective in helping people find intrinsic motivation to change problem behaviors after short counseling sessions, and to maintain healthy lifestyles over the long term. A lack of locally available personnel well-trained in BMI, however, often limits access to successful interventions for people in need. To fill this accessibility gap, Computer-Based Interventions (CBIs) have started to emerge. Their success, however, critically relies on ensuring the engagement and retention of CBI users so that they remain motivated to use these systems and return to them over the long term as necessary. Because of their text-only interfaces, current CBIs can only express limited empathy and rapport, which are among the most important factors of successful health interventions.

Fortunately, in the last decade, computer science research has progressed in the design of simulated human characters with anthropomorphic communicative abilities. Virtual characters interact using humans' innate communication modalities, such as facial expressions, body language, speech, and natural language understanding. By advancing research in Artificial Intelligence (AI), we can improve the ability of artificial agents to help us solve CBI problems. To facilitate successful communication and social interaction between artificial agents and human partners, aspects of human social behavior, especially empathy and rapport, must be considered when designing human-computer interfaces. Hence, the goal of the present dissertation is to provide a computational model of rapport to enhance an artificial agent's social behavior, and to provide an experimental tool for the psychological theories shaping the model. Parts of this thesis were already published in [LYL+12, AYL12, AL13, ALYR13, LAYR13, YALR13, ALY14].
459

Combined Computational-Experimental Design of High-Temperature, High-Intensity Permanent Magnetic Alloys with Minimal Addition of Rare-Earth Elements

Jha, Rajesh 20 May 2016 (has links)
AlNiCo magnets are known for high-temperature stability and superior corrosion resistance and have been widely used for various applications. The reported maximum magnetic energy product, (BH)max, for these magnets is around 10 MGOe, while theoretical calculations show that a (BH)max of 20 MGOe is achievable, which would help close the gap between AlNiCo and rare-earth-element (REE) based magnets. The extended family of AlNiCo alloys studied in this dissertation consists of eight elements, so it is important to determine the composition-property relationship between each of the alloying elements and their influence on the bulk properties.

In the present research, we proposed a novel approach to efficiently use a set of computational tools, based on several concepts of artificial intelligence, to address the complex problem of design and optimization of high-temperature REE-free magnetic alloys. A multi-dimensional random number generation algorithm was used to generate the initial set of chemical concentrations. These candidate alloys were screened for phase equilibria and associated magnetic properties to form the initial alloy set, then manufactured and tested for the desired properties. The measured properties were fitted with a set of multi-dimensional response surfaces, and the most accurate meta-models were chosen for prediction. These properties were simultaneously extremized using a set of multi-objective optimization algorithms, yielding a set of concentrations of each alloying element for optimized properties. A few of the best predicted Pareto-optimal alloy compositions were then manufactured and tested to evaluate the predicted properties, added to the existing data set, and used to improve the accuracy of the meta-models. The multi-objective optimizer then used the new meta-models to find a new set of improved Pareto-optimized chemical concentrations. This design cycle was repeated twelve times in this work, and several of the resulting Pareto-optimized alloys outperformed most of the candidate alloys on most of the objectives. Unsupervised learning methods such as Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were used to discover patterns within the dataset. This demonstrates the efficacy of the combined meta-modeling and experimental approach in the design optimization of magnetic alloys.
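The design cycle described above can be sketched schematically as follows. Everything here is illustrative: the objective functions stand in for real manufacturing and testing, and the Gaussian-process meta-model is an assumed choice, not necessarily the thesis's response-surface family.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)

def test_alloy(x):
    """Stand-in for manufacturing + measuring two objectives (maximize both)."""
    return np.array([-np.sum((x - 0.3) ** 2), -np.sum((x - 0.7) ** 2)])

def pareto_front(Y):
    """Indices of non-dominated points when maximizing all objectives."""
    return [i for i, yi in enumerate(Y)
            if not any((yj >= yi).all() and (yj > yi).any() for yj in Y)]

# Initial random chemistries over 8 alloying elements (normalized fractions).
X = rng.uniform(size=(20, 8))
Y = np.array([test_alloy(x) for x in X])

for cycle in range(12):                      # twelve design cycles, as above
    # Fit one meta-model (response surface) per objective.
    models = [GaussianProcessRegressor().fit(X, Y[:, j]) for j in range(2)]
    # Propose candidates and keep a few predicted Pareto-optimal ones.
    cand = rng.uniform(size=(500, 8))
    pred = np.column_stack([m.predict(cand) for m in models])
    best = cand[pareto_front(pred)][:5]
    # "Manufacture and test" them, then grow the dataset for the next cycle.
    X = np.vstack([X, best])
    Y = np.vstack([Y, [test_alloy(x) for x in best]])

print("final Pareto set size:", len(pareto_front(Y)))
```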
460

Search for Higgs boson decays to beyond-the-Standard-Model light bosons in four-lepton events with the ATLAS detector at the LHC

Chiu, Justin 22 December 2020 (has links)
This thesis presents the search for the dark sector process h → Zd Zd → 4l in events collected by the ATLAS detector at the Large Hadron Collider in 2015–2018. In this theorized process, the Standard Model Higgs boson (h) decays to four leptons via two intermediate Beyond-the-Standard-Model particles, each called Zd. The process arises from interactions of the Standard Model with a dark sector: one or more new particles that have limited or zero interaction with the Standard Model, such as the new vector boson Zd (dark photon). A dark sector could have a rich and interesting phenomenology like the visible sector (the Standard Model) and could naturally address many outstanding problems in particle physics; for example, it could contain a particle candidate for dark matter. In particular, Higgs decays to Beyond-the-Standard-Model particles are well motivated theoretically and are not tightly constrained: current measurements of Standard Model Higgs properties permit the fraction of such decays to be as high as approximately 30%. The results of this search show no evidence for the h → Zd Zd → 4l process and are therefore interpreted as upper limits on the branching ratio B(h → Zd Zd) and the effective Higgs mixing parameter κ′.
