  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
841

none

Weng, Chuan-Wei 02 February 2010 (has links)
none
842

Stock returns, risk factor loadings, and model predictions: a test of the CAPM and the Fama-French 3-factor model

Suh, Daniel January 2009 (has links)
Thesis (Ph. D.)--West Virginia University, 2009. / Title from document title page. Document formatted into pages; contains x, 146 p. : col. ill. Includes abstract. Includes bibliographical references.
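
For readers unfamiliar with the two models named in this title, their standard regression forms are shown below (general textbook notation, not taken from this record):

```latex
% CAPM: excess return of asset i explained by the market factor alone
R_{i} - R_{f} = \alpha_{i} + \beta_{i}\,(R_{m} - R_{f}) + \varepsilon_{i}

% Fama-French 3-factor model: adds the size (SMB) and value (HML) factors
R_{i} - R_{f} = \alpha_{i} + \beta_{i}\,(R_{m} - R_{f}) + s_{i}\,\mathrm{SMB} + h_{i}\,\mathrm{HML} + \varepsilon_{i}
```

A test of either model typically asks whether the estimated intercepts α_i are jointly indistinguishable from zero across test portfolios.
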
843

On the metastability of the Standard Model

Baum, Sebastian January 2015 (has links)
With the discovery of a particle consistent with the Standard Model (SM) Higgs at the Large Hadron Collider (LHC) at CERN in 2012, the final ingredient of the SM has been found. The SM provides us with a powerful description of the physics of fundamental particles, holding up at all energy scales we can probe with accelerator-based experiments. However, astrophysics and cosmology show us that the SM is not the final answer: it fails, for example, to describe dark matter and massive neutrinos. Like any non-trivial quantum field theory, the SM must be subject to a so-called renormalization procedure in order to extrapolate the model between different energy scales. In this context, new problems of a more theoretical nature arise, e.g. the famous hierarchy problem of the Higgs mass. Renormalization also leads to what is known as the metastability problem of the SM: assuming the particle found at the LHC is the SM Higgs boson, the potential develops a second minimum, deeper than the electroweak one in which we live, at energy scales below the Planck scale. Absolute stability all the way up to the Planck scale is excluded at a confidence level of about 98 %. For the central experimental SM values the instability occurs at scales larger than ~ 10¹⁰ GeV. One can take two viewpoints regarding this instability: assuming validity of the SM all the way up to the Planck scale, the problem does not necessarily lead to an inconsistency with our existence. If we assume our universe to have ended up in the electroweak minimum after the Big Bang, the probability that it would have transitioned to its true minimum during the lifetime of the universe is spectacularly small. If, on the other hand, we demand absolute stability, new physics must modify the SM at or below the instability scale of ~ 10¹⁰ GeV, and we can explore what hints the instability might give us about this new physics. In this work, the metastability problem of the SM and its possible implications are revisited. We give an introduction to the technique of renormalization and apply it to the SM. We then discuss the stability of the SM potential and the hints this might provide about new physics at large scales. / The Standard Model of particle physics is our best description of the physics of elementary particles. In 2012 a new scalar boson was found at the Large Hadron Collider (LHC) at CERN, compatible with being the Higgs boson, the last missing piece of the Standard Model. But even though the Standard Model gives us a very precise description of all the physics we see in particle accelerators, we know from astroparticle physics and cosmology that it cannot be the whole answer. For example, the Standard Model describes neither dark matter nor the masses of the neutrinos. As with all quantum field theories, the Standard Model must be renormalized to obtain a description that works at different energy scales. When the Standard Model is renormalized, new problems of a more theoretical kind appear, e.g. the well-known hierarchy problem of the Higgs mass. Renormalization also leads to what is called the metastability problem, i.e. the Higgs potential develops a minimum, deeper than the electroweak minimum we live in, at higher energy scales. If we assume that the particle found at CERN is the Standard Model Higgs boson, absolute stability is excluded at 98 % confidence. For the central experimental measurements of the Standard Model's parameters, the instability arises at scales above ~ 10¹⁰ GeV.
There are two ways to interpret the stability problem: if we assume that the Standard Model is the correct theory all the way up to the Planck scale, we can in fact still exist. If we assume that the universe ended up in the electroweak minimum after the Big Bang, the probability that it has since transitioned to its true minimum during the lifetime of the universe is practically zero; in other words, we may be living in a metastable universe. If, on the other hand, we require the potential to be absolutely stable, some new physics must modify the Standard Model at or below the instability scale ~ 10¹⁰ GeV. In that case we can ask what hints the stability problem might give us about the new physics. This thesis describes the metastability problem of the Standard Model. We give an introduction to renormalization and apply the technique to the Standard Model. The stability of the Standard Model potential is then discussed, together with the hints the problem might give us about new physics.
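
For orientation, the instability referred to above can be summarized in a standard textbook relation (not taken from the thesis itself; Λ_inst denotes the instability scale): at field values far above the electroweak scale the Higgs effective potential is dominated by its quartic term, and renormalization-group running of the quartic coupling λ, driven mainly by the large top-quark Yukawa coupling, can turn λ negative at high scales, producing the deeper minimum.

```latex
% High-field approximation of the Higgs effective potential (v = electroweak scale)
V_\mathrm{eff}(\phi) \;\approx\; \frac{\lambda(\mu \simeq \phi)}{4}\,\phi^{4}, \qquad \phi \gg v

% A second, deeper minimum appears roughly where the running coupling crosses zero:
\lambda(\mu) < 0 \quad \text{for} \quad \mu \gtrsim \Lambda_\mathrm{inst} \sim 10^{10}\ \mathrm{GeV}
```
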
844

Simplified finite element bearing modeling: with NX Nastran

Adolfsson, Erik January 2015 (has links)
This report was produced at the request of ABB Robotics and the work was conducted at their facilities in Västerås, Sweden. In the development of industrial robots, the structures are slimmed to increase accuracy and speed. When conducting finite element analysis on the robots, the accuracy of the component modelling and the definition of the boundary conditions become more important. One such component is the ball bearing, which consists of several parts and exhibits nonlinear behavior where the balls are in contact with the rings. The task was to develop new methods for modelling roller bearings in Siemens' finite element software NX Nastran, and then to conduct a strain measurement so the methods could be compared to real experimental values. The goal of the report is to find one or more bearing modelling methods, with accurate results, that can be used in ABB's development work. The work began with a study of bearings and finite element modelling and with learning the NX Nastran software. The methods were then developed by generating ideas for bearing models and testing them on simple structures. Nine methods were produced, and a tenth, the method used to model bearings today, served as a reference. The methods were used to build bearing models in a finite element model of a six-axis robot wrist. Simulations were run on the models with different load cases, and the results were compared to strain measurements on the wrist's real counterpart. Only six of the models were analyzed in the results, since four of the models returned results that were deemed unusable. When the result data were compiled, no model was found to accurately recreate the stresses in every load case. Three methods that allow deformation performed similarly; one of them is suggested as the modelling method to use in the future. The worst of the methods, according to the compiled results, was the method used today, which fails to describe local stresses around the bearing. For continued work, it is suggested that linear contact elements be studied further, since four out of five models constructed with linear contact elements failed to return satisfactory results.
845

Mediation modeling and analysis for high-throughput omics data

Zheng, Ning January 2015 (has links)
There is a strong need for powerful unified statistical methods for discovering the underlying genetic architecture of complex traits with the assistance of omics information. In this paper, two methods aiming to detect novel associations between the human genome and complex traits using intermediate omics data are developed based on statistical mediation modeling. We demonstrate theoretically that, given proper mediators, the proposed statistical mediation models have better power than genome-wide association studies (GWAS) to detect associations missed in standard GWAS that ignore the mediators. For each of the modeling methods in this paper, an empirical example is given, where the association between a SNP and BMI missed by standard GWAS can be discovered by mediation analysis.
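
As a reference for the terminology above, the standard single-mediator setup is sketched below (under linear models with no exposure-mediator interaction; generic notation, not the specific models developed in the thesis):

```latex
M = i_{1} + a\,X + e_{1}              % genotype X -> omics mediator M
Y = i_{2} + c'\,X + b\,M + e_{2}      % trait Y: direct effect c' plus the mediated path
% Indirect (mediated) effect = a b; total effect c = c' + a b.
```

Standard GWAS effectively tests only the total effect c, which is why a test built around the mediated component a·b can recover associations that GWAS misses when a suitable mediator carries most of the signal.
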
846

The effect of noise on the dynamics of a 2-D walking model

Campbell, Bradley Cortez 27 February 2012 (has links)
Walking models have been used to explore concepts such as energy, step variability, control strategies and redundancy in walking. A 2-D dynamic walking model was used to determine the levels of variability in gait while the model was being perturbed. The perturbations took the form of random noise applied at different magnitudes. The model consisted of two equal-length legs with masses at the feet and hips. The model walked on a flat surface and each step was initiated by an impulse at the swing leg. The magnitude of the impulse determined the size of the model's steps. In this study, the walker took steps with lengths analogous to those of humans. An attempt to offset the effect of the noise was made by adding a proportional controller to correct the errors of the applied impulse. The control equation consisted of a gain term, A, and a noise term, ξ. Step length, step time and speed were calculated to analyze how the model walks. It was hypothesized that the model would use a strategy similar to that of humans on a treadmill and follow a goal equivalent manifold (GEM). The manifold comprises all combinations of step length and step time that maintain a constant speed, so fluctuations in step length and time along it still result in constant speed. The results showed that the model's gait became more variable as noise was added. When control was added by increasing the gain, the model's steps became more variable. The model did not follow the same control strategy as humans and did not coordinate its steps along the GEM. As the model took longer steps, the step time decreased.
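
For clarity, the goal equivalent manifold referred to above follows directly from the constant-speed goal (a standard formulation; v* is an assumed symbol for the target speed, not taken from the thesis):

```latex
% Each step n is described by its length L_n and duration T_n; walking speed is v_n = L_n / T_n.
% The goal equivalent manifold (GEM) is the set of step states that achieve the target speed v^*:
\mathrm{GEM} = \{\, (L, T) : L / T = v^{*} \,\}
% Fluctuations tangent to this curve leave speed unchanged ("goal equivalent"),
% while fluctuations perpendicular to it change the speed.
```
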
847

Modeling of nanoparticle transport in porous media

Zhang, Tiantian 20 November 2012 (has links)
The unique properties of engineered nanoparticles have many potential applications in oil reservoirs, e.g., as emulsion stabilizers for enhanced oil recovery, or as nano-sensors for reservoir characterization. Long-distance propagation (>100 m) is a prerequisite for many of these applications. With diameters between 10 and 100 nanometers, nanoparticles can easily pass through typical pore throats in reservoirs, but physicochemical interaction between nanoparticles and pore walls may still lead to significant retention. A model that accounts for the key mechanisms of nanoparticle transport and retention is essential for design purposes. In this dissertation, interactions between nanoparticles and solid surfaces are analyzed for their effects on nanoparticle deposition during transport in single-phase flow. The analysis suggests that DLVO theory cannot explain the low retention concentration of nanoparticles during transport in saturated porous media. Moreover, the hydrodynamic forces are not strong enough to remove nanoparticles from rough surfaces. Based on different filtration mechanisms, various continuum transport models are formulated and used to simulate our nanoparticle transport experiments through water-saturated sandpacks and consolidated cores. Every model is tested on an extensive set of experimental data collected by Yu (2012) and Murphy (2012); the data enable a rigorous validation of each model. For a set of experiments injecting the same kind of nanoparticle, the deposition rate coefficients in the model are obtained by history matching one effluent concentration history. With simple assumptions, the same coefficients are used by the model to predict the effluent histories of other experiments when experimental conditions are varied. Compared to the experimental results, the colloid filtration model fails to predict normalized effluent concentrations that approach unity, and the kinetic Langmuir model is inconsistent with non-zero nanoparticle retention after postflush. The two-step model, two-rate model and two-site model all have both reversible and irreversible adsorption and can generate effluent histories similar to the experimental data. However, the two-step model, built from the interaction energy curve, fails to fit the experimental effluent histories that show a delay in the leading edge but no delay in the trailing edge. The two-rate model with a constant retardation factor fails badly at capturing the dependence of nanoparticle breakthrough delay on flow velocity and injection concentration. With independent reversible and irreversible adsorption sites, the two-site model can capture most features of nanoparticle transport in water-saturated porous media. For a given kind of nanoparticle, it can fit one experimental effluent history and successfully predict others under varied experimental conditions. Some deviations exist between model predictions and experimental data for experiments with pump stops and very low injection concentration (0.1 wt%). More detailed analysis of nanoparticle adsorption capacity in water-saturated sandpacks reveals that the measured irreversible adsorption capacity is always less than 35% of the monolayer packing density. Generally, its value increases with higher injection concentrations and lower flow velocities. Reinjection experiments suggest that the irreversible adsorption capacity has a fixed value for a constant injection rate and dispersion concentration, but becomes larger if reinjection occurs at a higher concentration or a lower flow rate.
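
As a sketch of what a "two-site" continuum model of this kind typically looks like (a generic textbook form with symbols defined in the comments; the dissertation's exact equations may differ):

```latex
% 1-D advection-dispersion of suspended nanoparticle concentration C(x,t), with retained
% concentrations S_1 (reversible site) and S_2 (irreversible site); v = pore velocity,
% D = dispersion coefficient, \rho_b = bulk density, \phi = porosity:
\frac{\partial C}{\partial t} + \frac{\rho_b}{\phi}\,\frac{\partial (S_1 + S_2)}{\partial t}
  \;=\; D\,\frac{\partial^{2} C}{\partial x^{2}} \;-\; v\,\frac{\partial C}{\partial x}

\frac{\partial S_1}{\partial t} = k_{a}\,C - k_{d}\,S_1
\qquad \text{(reversible attachment/detachment)}

\frac{\partial S_2}{\partial t} = k_{irr}\,C\left(1 - \frac{S_2}{S_{2,\max}}\right)
\qquad \text{(irreversible, capacity-limited)}
```

The reversible site produces the slow release of particles during postflush, while the capacity-limited irreversible site produces the permanent retention and the concentration- and velocity-dependent breakthrough delay discussed above.
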
848

The effects of video modeling and a lag schedule of reinforcement on toy play behaviors of children with autism

Fragale, Christina Lin 18 February 2013 (has links)
Video modeling is a research-based intervention used to teach play skills to children with autism. While children learn to imitate the play behaviors seen in a video, increases in play behaviors that differ from the video have not been apparent. The current study examined the use of video modeling, and of video modeling with an added lag schedule of reinforcement, to increase the toy play of five children with autism in their homes. During video modeling, the children watched a short video portraying a person playing with toy figurines. They were then given the toys and instructed to play independently for 5 min. During video modeling with the lag schedule of reinforcement, praise and preferred snacks were provided when a child's toy play differed from the immediately preceding responses in the play session. A nonconcurrent multiple baseline across participants design was used to examine the effects. Overall results indicated that the children learned scripted toy play and increased their levels of varied play, but their levels of unscripted toy play neither increased significantly nor decreased from baseline. Even with the additional reinforcement, levels of varied play, scripted play, or unscripted play did not increase for four of five participants. The social validity of the children's play outcomes and the perceived ease of use of the intervention were assessed using questionnaires completed by parents and behavioral therapists. Discussion, limitations, and implications for future research are presented.
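
For readers unfamiliar with lag schedules, a minimal sketch of the contingency described above (a Lag 1 criterion; the function name and response encoding are illustrative assumptions, not taken from the study):

```python
def reinforce_lag1(responses: list[str]) -> list[bool]:
    """For each play response, decide whether it earns reinforcement under a Lag 1
    schedule: a response is reinforced only if it differs from the immediately
    preceding response."""
    decisions = []
    for i, response in enumerate(responses):
        if i == 0:
            decisions.append(True)  # first response trivially meets the lag criterion
        else:
            decisions.append(response != responses[i - 1])
    return decisions

# Example: varied play earns praise; repeating the same action does not.
print(reinforce_lag1(["feed doll", "feed doll", "drive car", "drive car", "build tower"]))
# -> [True, False, True, False, True]
```
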
849

Fast error detection with coverage guarantees for concurrent software

Coons, Katherine Elizabeth 04 October 2013 (has links)
Concurrency errors are notoriously difficult to debug because they may occur only under unexpected thread interleavings that are difficult to identify and reproduce. These errors are increasingly important as recent hardware trends compel developers to write more concurrent software and to provide more concurrent abstractions. This thesis presents algorithms that dynamically and systematically explore a program's thread interleavings to manifest concurrency bugs quickly and reproducibly, and to provide precise incremental coverage guarantees. Dynamic concurrency testing tools should provide (1) fast response -- bugs should manifest quickly if they exist, (2) reproducibility -- bugs should be easy to reproduce, and (3) coverage -- precise correctness guarantees when no bugs manifest. In practice, most tools provide either fast response or coverage, but not both. These goals conflict because a program's thread interleavings exhibit exponential state-space explosion, which inhibits fast response. Two approaches from prior work alleviate state-space explosion. (1) Partial-order reduction provides full coverage by exploring only one interleaving of independent transitions. (2) Bounded search provides bounded coverage by enumerating only interleavings that do not exceed a bound. Bounded search can additionally provide guarantees for cyclic state spaces for which dynamic partial-order reduction provides no guarantees. Without partial-order reduction, however, bounded search wastes most of its time exploring executions that reorder only independent transitions. Fast response with coverage guarantees requires both approaches, but prior work failed to combine them soundly. We combine bounded search with partial-order reduction and extensively analyze the space of dynamic, bounded partial-order reduction strategies. First, we prioritize with a best-first search and show that heuristics that combine these approaches find bugs quickly. Second, we restrict partial-order reduction to combine approaches while maintaining bounded coverage. We specialize this approach for several bound functions, prove that these algorithms guarantee bounded coverage, and leverage dynamic information to further reduce the state space. Finally, we bound the partial order on a program's transitions, rather than the total order on those transitions, to combine these approaches without sacrificing partial-order reduction. This algorithm provides fast response, incremental coverage guarantees, and reproducibility. We manifest bugs an order of magnitude more quickly than previous approaches and guarantee incremental coverage in minutes or hours rather than weeks, helping developers find and reproduce concurrency errors. This thesis makes bounded stateless model checking for concurrent programs substantially more efficient and practical.
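
To make the "bounded search" idea above concrete, here is a minimal, self-contained sketch of preemption-bounded enumeration of thread schedules (illustrative only: it models threads as fixed step counts, omits the partial-order reduction that the thesis combines with bounding, and the function names are assumptions, not the thesis's API):

```python
def bounded_schedules(steps_per_thread: dict[str, int], bound: int) -> list[tuple[str, ...]]:
    """Enumerate thread schedules (sequences of thread ids) for threads that each execute
    a fixed number of steps, keeping only schedules with at most `bound` preemptions.
    A preemption is a context switch away from a thread that could still have run."""
    schedules = []

    def explore(remaining, last, preemptions, prefix):
        if all(n == 0 for n in remaining.values()):
            schedules.append(tuple(prefix))
            return
        for tid, n in remaining.items():
            if n == 0:
                continue
            # Switching away from a still-runnable thread costs one preemption.
            cost = 1 if last is not None and tid != last and remaining[last] > 0 else 0
            if preemptions + cost > bound:
                continue  # prune: this prefix already exceeds the bound
            remaining[tid] -= 1
            explore(remaining, tid, preemptions + cost, prefix + [tid])
            remaining[tid] += 1

    explore(dict(steps_per_thread), None, 0, [])
    return schedules

# Two threads with two steps each: 6 interleavings total, but the bound prunes the search.
print(len(bounded_schedules({"T1": 2, "T2": 2}, bound=0)))  # 2  (T1 T1 T2 T2 and T2 T2 T1 T1)
print(len(bounded_schedules({"T1": 2, "T2": 2}, bound=1)))  # 4
print(len(bounded_schedules({"T1": 2, "T2": 2}, bound=2)))  # 6  (all interleavings)
```

Even this toy example shows why the bound alone is not enough: with independent steps, most of the enumerated schedules differ only in the order of commuting transitions, which is exactly the redundancy partial-order reduction removes.
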
850

Estimation of multiple mediator model

Wen, Sibei 09 December 2013 (has links)
Models for mediation are widely used in psychology, behavioral science and education because they help researchers understand how a causal effect happens through one or several mediating variables, and more complex mediation models that incorporate multiple mediators are increasingly being assessed. This report uses a generated dataset to provide an overview of the assessment of direct effects and indirect effects in multiple mediator models. A multiple-comparison-based procedure for testing a set of hypotheses simultaneously, while controlling the experiment-wise Type I error rate, is used to calculate a confidence interval for each pairwise contrast of mediated effects. Three approaches are used to test hypotheses concerning the contrast between pairs of mediated effects: 1) assuming zero covariance between parameters from different models, 2) assuming a non-zero covariance between parameters from different models, and 3) bootstrapping. Results are provided and discussed.
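
A minimal sketch of the third approach for a single pairwise contrast of two specific indirect effects in a parallel two-mediator model (percentile bootstrap; function names and toy data are illustrative assumptions, and the simultaneous experiment-wise error control described in the report is not reproduced here):

```python
import numpy as np

def indirect_effects(X, M1, M2, Y):
    """Estimate the two specific indirect effects a1*b1 and a2*b2 in a parallel
    two-mediator model using ordinary least squares."""
    ones = np.ones_like(X)
    a1 = np.linalg.lstsq(np.column_stack([ones, X]), M1, rcond=None)[0][1]
    a2 = np.linalg.lstsq(np.column_stack([ones, X]), M2, rcond=None)[0][1]
    # Outcome model Y ~ X + M1 + M2; b1, b2 are the mediator coefficients.
    coef = np.linalg.lstsq(np.column_stack([ones, X, M1, M2]), Y, rcond=None)[0]
    b1, b2 = coef[2], coef[3]
    return a1 * b1, a2 * b2

def bootstrap_contrast_ci(X, M1, M2, Y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the contrast a1*b1 - a2*b2 of the two indirect effects."""
    rng = np.random.default_rng(seed)
    n = len(X)
    contrasts = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        ie1, ie2 = indirect_effects(X[idx], M1[idx], M2[idx], Y[idx])
        contrasts.append(ie1 - ie2)
    lo, hi = np.percentile(contrasts, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi  # the contrast is "significant" if the interval excludes zero

# Toy data: X affects Y more strongly through M1 than through M2.
rng = np.random.default_rng(1)
X = rng.normal(size=500)
M1 = 0.8 * X + rng.normal(size=500)
M2 = 0.3 * X + rng.normal(size=500)
Y = 0.5 * M1 + 0.5 * M2 + 0.2 * X + rng.normal(size=500)
print(bootstrap_contrast_ci(X, M1, M2, Y))
```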
