331 |
Two Essays on Venture Capital: What Drives the Underpricing of Venture Capital-Backed IPOs and Do Venture Capitalists Provide Anything More than Money? Flagg, Donald, 01 May 2007 (has links)
This dissertation includes two chapters that investigate the role venture capitalists (VCs) play in the underpricing and in the long-run performance of IPOs. The first chapter focuses on the underpricing of IPOs and attempts to determine the role that VCs play in this underpricing process. The evidence is consistent with a view that VCs agree to underpricing to obtain benefits from both "grandstanding" and "spinning." The second chapter examines the long-run performance of IPOs and tries to determine the role that VCs play in the development of IPOs. Here, the evidence suggests that VC-backed IPOs appear to have better access to capital than non-VC-backed IPOs, but the long-run performance of VC-backed IPOs is generally mixed.
|
332 |
Energy flow reconstruction and search for squarks and gluinos in the D0 experiment. Ridel, Mélissa, 16 April 2002 (has links) (PDF)
D0 is one of the two experiments at the p-pbar collider of the Fermi National Accelerator Laboratory near Chicago. After a five-year shutdown, Run II began in March 2001. It will make it possible to explore new mass ranges for squarks and gluinos, supersymmetric particles whose signature in jets and missing transverse energy is the subject of this work. Before the start-up, however, I worked on hardware and software improvements to the energy measurement, which is essential for jets and missing transverse energy. A simulation of the readout and calibration chains of every calorimeter channel was carried out. Its result depends on eight characteristic quantities, which were extracted by signal processing of time-domain reflectometry measurements. It will make it possible to define a calibration strategy for the calorimeter. A clustering of the calorimeter energy deposits (cellNN) was developed, based on cells rather than towers and exploiting the calorimeter granularity as fully as possible, notably by starting in the third electromagnetic layer, which is four times more finely segmented than the others. The longitudinal information allows overlapping electromagnetic and hadronic particles to be separated. All the elements needed for the individual reconstruction of showers are thus in place. The energy flow then combines the cellNN clusters with the tracks reconstructed in the central cavity, keeping the best energy measurement for each and thereby improving the reconstruction of the energy flow of each event. The efficiency of the current calorimeter triggers was determined and used in a search for squarks and gluinos based on Monte Carlo events within the mSUGRA framework. A lower limit on the squark and gluino masses that D0 will reach with 100 pb-1 of luminosity is predicted using standard reconstruction tools; it could be improved by using the energy flow.
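The cell-based clustering and track-cluster combination described above can be pictured with a small sketch. The following Python toy is not the D0 cellNN or energy-flow code; the data layout, the angular matching criterion and the 0.3 matching radius are illustrative assumptions. It keeps the track momentum for matched charged particles, subtracts it from the calorimeter cluster, and attributes the leftover cluster energy to neutrals.

```python
# Toy illustration of the energy-flow idea described above: combine
# calorimeter clusters with reconstructed tracks, keeping the better
# energy measurement for each matched pair. This is NOT the D0 cellNN /
# energy-flow implementation; thresholds and the matching criterion are
# illustrative assumptions.
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance between a track and a cluster."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def energy_flow(clusters, tracks, match_radius=0.3):
    """clusters/tracks: lists of dicts with 'e' or 'p', 'eta', 'phi'.
    Returns a list of energy-flow objects."""
    objects = []
    remaining = [dict(c) for c in clusters]
    for trk in tracks:
        # find the closest calorimeter cluster
        best = min(remaining,
                   key=lambda c: delta_r(trk['eta'], trk['phi'],
                                         c['eta'], c['phi']),
                   default=None)
        if best is not None and delta_r(trk['eta'], trk['phi'],
                                        best['eta'], best['phi']) < match_radius:
            # for a matched charged particle, keep the track measurement and
            # subtract its contribution from the cluster energy
            objects.append({'e': trk['p'], 'eta': trk['eta'], 'phi': trk['phi']})
            best['e'] = max(best['e'] - trk['p'], 0.0)
        else:
            # unmatched track (e.g. pointing to a poorly instrumented region)
            objects.append({'e': trk['p'], 'eta': trk['eta'], 'phi': trk['phi']})
    # leftover cluster energy is attributed to neutral particles
    objects += [c for c in remaining if c['e'] > 0.0]
    return objects
```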
|
333 |
Search for the Higgs boson in the $WH$ channel and study of $Wb\bar{b}$ production in $p\bar{p}$ collisions at 1.96 TeV with the $D0$ experiment at the Fermilab Tevatron. Beauceron, Stéphanie, 28 May 2004 (has links) (PDF)
Introducing the Higgs boson solves the problem of the origin of particle masses in the Standard Model. To date it has not been discovered, and the LEP experiments have set a lower limit on its mass of 114.4 GeV at the 95% confidence level. At the Tevatron, a proton-antiproton collider, it is searched for in associated production with a $W$ boson, for masses below 135 GeV where the Higgs boson decays into a $b\bar{b}$ quark pair. Our study was carried out within the $D0$ experiment. The final state of such events relies essentially on the calorimeter signals and on tagging jets as originating from $b$ quarks, using the vertex detector, the scintillating-fibre tracker and the solenoid, which are new for $D0$ in Run II. We studied the calibration of the electronic readout chain and the influence of noise in the calorimeter before turning to the analysis with an optimized calorimeter reconstruction: electrons, missing transverse energy and jets are correctly identified and can be used in a $W (\to e\nu)+$ jets analysis. The study of the $W (\to e\nu)b\bar{b}$ process, an irreducible background to the Higgs boson signal, was performed with 174 pb$^{-1}$ of data and yielded an upper limit on its production cross section of 20.3 pb. This study was followed by a search for a Higgs boson signal for masses between 105 GeV and 135 GeV. Limits on the production cross sections multiplied by the decay branching ratios were obtained. For a Higgs boson mass of 115 GeV, we obtain an upper limit of 12.4 pb.
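As a rough illustration of how a counting-experiment limit translates into a cross-section limit of the kind quoted above, here is a minimal sketch; the observed count, expected background, efficiency and the simple CL(s+b) construction are invented placeholders, not the statistical treatment of the actual D0 analysis.

```python
# Minimal sketch of a 95% CL counting-experiment upper limit, as an
# illustration only; the observed count, expected background, efficiency
# and luminosity below are invented placeholders, and the simple
# CL(s+b) construction is a stand-in for the real analysis.
from scipy.stats import poisson
from scipy.optimize import brentq

def upper_limit_events(n_obs, bkg, cl=0.95):
    """Smallest signal s such that P(N <= n_obs | s + bkg) = 1 - cl."""
    f = lambda s: poisson.cdf(n_obs, s + bkg) - (1.0 - cl)
    return brentq(f, 0.0, 1000.0)

n_obs, bkg = 12, 9.5          # hypothetical counts
lumi_pb, eff = 174.0, 0.02    # hypothetical luminosity [pb^-1] and efficiency
s_up = upper_limit_events(n_obs, bkg)
sigma_up = s_up / (lumi_pb * eff)   # cross-section x branching-ratio limit [pb]
print(f"N_95 = {s_up:.1f} events -> sigma_95 = {sigma_up:.1f} pb")
```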
|
334 |
Simulation of a swash flow using a VOF method. Mauriet, Sylvain, 02 July 2009 (has links) (PDF)
The dynamic processes at work in the swash zone have a significant impact on the evolution of coastal areas. A large part of the cross-shore sediment transport occurs in this zone, more specifically where run-up and run-down take place. The region beyond the still-water shoreline is most often described by depth-integrated models. Wave decay is well reproduced, but the study of sediment transport requires a parameterization of bottom friction. We present the results of RANS simulations of the propagation of a bore (obtained by a dam-break release) over a sloping beach and of the run-up and run-down it generates. The numerical results are compared with the experimental results of Yeh et al. (1989). The simulations were carried out with the two-phase Navier-Stokes code AQUILON. Two VOF interface-tracking methods (VOF-TVD and VOF-PLIC) are implemented. The turbulent viscosity is computed with a V2-F model (Durbin, 1991). An estimate of the turbulent quantities k and epsilon based on long-wave theory for the propagation of a hydraulic jump is presented. A VOF-PLIC & V2-F model is applied to reproduce the macroscopic features of the dam-break release, which, as expected, depend little on turbulence. We also study the impact of the initial conditions on k and epsilon on the establishment of the turbulent flow. After these validations with respect to turbulence, simulations of the case described by Yeh et al. (1989) are carried out to optimize the choice of the computational parameters. Whitham's theory (1958) predicts a collapse of the bore at the still-water shoreline. The theory of Shen and Meyer (1963) is still the reference model today. The experimental results of Yeh et al. (1989) clearly show a different phenomenon. The combined use of the VOF-TVD technique and the V2-F turbulence model seems to give the best results with respect to the experiments of Yeh et al. (1989). A study of the bore-to-swash-lens transition is proposed. Our results show that Whitham's theory describes the bore collapse mechanism fairly accurately. The results of our simulations are used to describe the transition between the bore collapse and the run-up flow. The analysis of friction processes in the swash flow highlights a strong asymmetry between run-up and run-down, with a weaker shear during run-down.
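The dam-break release used to generate the bore is classically idealized by Ritter's solution for an instantaneous dam break over a dry, frictionless, horizontal bed; the short sketch below evaluates that textbook profile as background, and is not the AQUILON configuration used in the simulations.

```python
# Textbook Ritter (1892) solution for an instantaneous dam break over a dry,
# frictionless, horizontal bed: given as background to the "dam-break release"
# above, not as the AQUILON configuration used in the thesis.
import math

G = 9.81  # gravity [m/s^2]

def ritter_profile(x, t, h0):
    """Water depth h and velocity u at position x (dam at x=0) and time t>0,
    for an initial reservoir depth h0 on the left."""
    c0 = math.sqrt(G * h0)
    if x <= -c0 * t:                 # undisturbed reservoir
        return h0, 0.0
    if x >= 2.0 * c0 * t:            # ahead of the wet/dry front
        return 0.0, 0.0
    h = (2.0 * c0 - x / t) ** 2 / (9.0 * G)
    u = 2.0 / 3.0 * (x / t + c0)
    return h, u

# Example: depth and velocity 5 m downstream of the dam, 1 s after release
print(ritter_profile(5.0, 1.0, h0=1.0))
```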
|
335 |
Computational algorithms for algebras. Lundqvist, Samuel, January 2009 (has links)
This thesis consists of six papers. In Paper I, we give an algorithm for merging sorted lists of monomials and, together with a projection technique, we obtain a new complexity bound for the Buchberger-Möller algorithm and the FGLM algorithm. In Paper II, we discuss four different constructions of vector space bases associated to vanishing ideals of points. We show how to compute normal forms with respect to these bases and give complexity bounds. As an application, we drastically improve the computational algebra approach to the reverse engineering of gene regulatory networks. In Paper III, we introduce the concept of multiplication matrices for ideals of projective dimension zero. We discuss various applications and, in particular, we give a new algorithm to compute the variety of an ideal of projective dimension zero. In Paper IV, we consider a subset of projective space over a finite field and give a geometric description of the minimal degree of a non-vanishing form with respect to this subset. We also give bounds on the minimal degree in terms of the cardinality of the subset. In Paper V, we study an associative version of an algorithm constructed to compute the Hilbert series for graded Lie algebras. In the commutative case we use Gotzmann's persistence theorem to show that the algorithm terminates in finite time. In Paper VI, we connect the commutative version of the algorithm in Paper V with the Buchberger algorithm. / At the time of the doctoral defence, the following papers were unpublished and had a status as follows: Paper 3: Manuscript. Paper 4: Manuscript. Paper 5: Manuscript. Paper 6: Manuscript.
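The merging of sorted lists of monomials mentioned for Paper I can be pictured with a plain merge step. The sketch below is only an illustration, not the algorithm of the paper: monomials are exponent tuples, and the graded lexicographic order used as the sort key is an assumption made for the example.

```python
# Illustrative merge of two lists of monomials sorted in a fixed monomial
# order, removing duplicates: a toy version of the kind of merge step
# referred to in Paper I, not the paper's actual algorithm. Monomials are
# exponent tuples and the order used here is graded lexicographic
# (an assumption made for the example).
def grlex_key(mono):
    """Sort key: total degree first, then lexicographic on exponents."""
    return (sum(mono), mono)

def merge_sorted_monomials(a, b):
    """Merge two lists already sorted by grlex_key, dropping duplicates."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        ka, kb = grlex_key(a[i]), grlex_key(b[j])
        if ka == kb:
            out.append(a[i]); i += 1; j += 1
        elif ka < kb:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:]); out.extend(b[j:])
    return out

# y^2, xy, x^2 merged with y^2, xy, y^3 in the variables (x, y),
# both lists sorted in increasing graded lexicographic order
print(merge_sorted_monomials([(0, 2), (1, 1), (2, 0)],
                             [(0, 2), (1, 1), (0, 3)]))
```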
|
336 |
Embedded electronic systems driven by run-time reconfigurable hardware. Fons Lluís, Francisco, 29 May 2012 (has links)
Abstract
This doctoral thesis addresses the design of embedded electronic systems based on run-time reconfigurable hardware technology –available through SRAM-based FPGA/SoC devices– aimed at contributing to enhancing the quality of life of human beings. This work does research on the conception of the system architecture and the reconfiguration engine that provide the FPGA with the capability of dynamic partial reconfiguration in order to synthesize, by means of hardware/software co-design, a given application partitioned into processing tasks which are multiplexed in time and space, thus optimizing its physical implementation –silicon area, processing time, complexity, flexibility, functional density, cost and power consumption– in comparison with other alternatives based on static hardware (MCU, DSP, GPU, ASSP, ASIC, etc.). The design flow of such technology is evaluated through the prototyping of several engineering applications (control systems, mathematical coprocessors, complex image processors, etc.), showing a level of maturity high enough for its exploitation in industry.
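The time-and-space multiplexing of processing tasks described in the abstract trades silicon area against latency. The toy model below illustrates that trade-off only; the task set, areas, latencies and the reconfiguration overhead are invented placeholders rather than figures from the thesis.

```python
# Toy model of the time/space multiplexing idea described above: several
# hardware tasks share one dynamically reconfigurable region instead of each
# occupying static silicon. The task set, areas, latencies and the
# reconfiguration overhead are invented placeholders, not figures from the
# thesis.
from dataclasses import dataclass

@dataclass
class HwTask:
    name: str
    area_slices: int      # logic area if implemented statically
    exec_time_ms: float   # processing time once configured

RECONFIG_MS = 3.0         # assumed partial-reconfiguration overhead per swap

def static_cost(tasks):
    """All tasks implemented side by side: maximum speed, maximum area."""
    area = sum(t.area_slices for t in tasks)
    latency = max(t.exec_time_ms for t in tasks)   # tasks run in parallel
    return area, latency

def multiplexed_cost(tasks):
    """Tasks swapped one after another into a single reconfigurable region."""
    area = max(t.area_slices for t in tasks)       # region must fit the largest
    latency = sum(t.exec_time_ms + RECONFIG_MS for t in tasks)
    return area, latency

pipeline = [HwTask("filter", 1200, 2.0),
            HwTask("fft", 2500, 4.5),
            HwTask("matcher", 1800, 3.0)]
print("static      (area, latency):", static_cost(pipeline))
print("multiplexed (area, latency):", multiplexed_cost(pipeline))
```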
|
337 |
Evaluation of Roadside Collisions with Utility Poles and Trees at Intersection Locations. Mattox, Todd Berry, 15 November 2007 (has links)
The United States averages 40,000 traffic fatalities annually. The American Association of State Highway and Transportation Officials (AASHTO) Roadside Design Guide cites run-off-the-road crashes as a major contributor to this statistic, accounting for about one-third of all traffic deaths [1]. This number has remained relatively constant over the past four decades, and despite a major increase in vehicle miles traveled (VMT), the rate of fatalities per 100 million vehicle miles traveled has declined. However, this relatively large number of run-off-the-road crashes should remain a major concern in all roadway design.
The Highway Safety Act of 1966 marks a defining moment in the history of roadside safety [ ]. Before this point, roadways were designed only for motorists who remained on the roadway, with no regard for driver error. As there was no legislation or guidance concerning roadside design, roadways constructed prior to 1966 are littered with fixed objects directly off the edge of the pavement. Fortunately, many of these roads have reached their thirty-year design lives and have become candidates for improvement.
The following report examines roadside crashes on nine urban arterial roadways in Atlanta. Accident type, severity, and location were evaluated for all crashes on these corridors. It was found that roadside collisions with utility poles and trees were more prone to occur at intersection locations than at midblock locations. Also, for the studied roadway corridors, roadside collisions were, on average, more likely to result in serious injury or fatality. Based on these findings, initial recommendations are offered for improving clear zone requirements.
|
338 |
Essays on Insurance Development and Economic Growth. Chang, Chi-Hung, 03 July 2012 (has links)
This dissertation comprises two topics. In Chapter 1, I explore the short- and long-run relation between insurance development and economic growth for 40 countries between 1981 and 2010. Applying pooled mean group estimation, I find that life and nonlife insurance have different short- and long-run effects on growth. In a full-sample analysis, life insurance exerts a significantly positive long-run effect on growth, while its short-run effect is not significant. Nonlife insurance, in contrast, has a significantly positive short-run growth effect but no long-run effect. In a reduced-sample analysis, the observation on life insurance is qualitatively similar, but the growth effect of nonlife insurance is no longer significant in either the short or the long run, suggesting that specific countries drive the overall effect in the full sample. The results pass a battery of robustness tests. The analysis of individual countries reveals that the short-run effect and the speed of adjustment toward the long-run equilibrium vary across countries. I also analyze whether the level of income and of insurance development makes any difference to the growth effect of insurance. The results show that the growth effect of life insurance is significant in non-high-income countries and in countries with a low level of life insurance development, while the effect is significant for neither life nor nonlife insurance in high-income countries.
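As general background on pooled mean group estimation (the chapter's exact lag structure and control variables may differ from this generic form), the method rests on a panel ARDL error-correction specification

$$\Delta y_{it} = \phi_i\left(y_{i,t-1} - \theta' x_{i,t-1}\right) + \sum_{j=1}^{p-1}\lambda_{ij}\,\Delta y_{i,t-j} + \sum_{j=0}^{q-1}\delta_{ij}'\,\Delta x_{i,t-j} + \mu_i + \varepsilon_{it},$$

in which the long-run coefficients $\theta$ are restricted to be common across countries while the adjustment speed $\phi_i$ and the short-run coefficients are country-specific; this is the long-run versus short-run distinction exploited in the chapter.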
In Chapter 2, I employ a dynamic panel threshold model to investigate how institutional environments shape the impact of insurance development on economic growth. I formulate four hypotheses for possible intermediating effects of institutional environments on the insurance-growth nexus: quasi-institution positivity, quasi-institution negativity, quasi-institution duality, and quasi-institution neutrality. I use multiple measures related to political, economic, and legal environments to evaluate the soundness of institutional environments. Empirical results show that the quasi-institution negativity hypothesis is supported for life insurance, because the observation is consistent across all institution-related measures. The results for nonlife insurance are not as uniform as those for life insurance: quasi-institution positivity, negativity, and neutrality are each supported under different institutional measures, and the coefficients in most cases are only marginally significant. The overall findings suggest that a sound institutional environment does not necessarily benefit the growth effect of life insurance, but an unhealthy one does deter it, and that for nonlife insurance the effect depends on the specific measure. In Chapter 3, I briefly introduce some directions for further research.
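As background on the threshold approach (again, the chapter's actual regressors, instruments and dynamics may differ from this generic notation), a panel threshold specification splits the insurance effect according to an institutional threshold variable $q_{it}$ and an estimated threshold $\gamma$:

$$y_{it} = \mu_i + \beta_1\, ins_{it}\, I(q_{it} \le \gamma) + \beta_2\, ins_{it}\, I(q_{it} > \gamma) + \phi' z_{it} + \varepsilon_{it},$$

so that $\beta_1$ and $\beta_2$ capture the growth effect of insurance in weak and in sound institutional environments, respectively.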
|
339 |
Listing Unique Fractional Factorial Designs. Shrivastava, Abhishek Kumar, December 2009 (has links)
Fractional factorial designs are a popular choice in designing experiments for studying the effects of multiple factors simultaneously. The first step in planning an experiment is the selection of an appropriate fractional factorial design. An appropriate design is one that has the statistical properties of interest to the experimenter and has a small number of runs. This requires that a catalog of candidate designs be available (or be possible to generate) for searching for the "good" design. In the attempt to generate the catalog of candidate designs, the problem of design isomorphism must be addressed. Two designs are isomorphic to each other if one can be obtained from the other by some relabeling of factor labels, level labels of each factor and reordering of runs. Clearly, two isomorphic designs are statistically equivalent. Design catalogs should therefore contain only designs unique up to isomorphism.
There are two computational challenges in generating such catalogs. Firstly, testing two designs for isomorphism is computationally hard due to the large number of possible relabelings, and, secondly, the number of designs increases very rapidly with the number of factors and run size, making it impractical to compare all designs for isomorphism. In this dissertation we present a new approach for tackling both these challenging problems. We propose graph models for representing designs and use this relationship to develop efficient algorithms. We provide a new efficient isomorphism check by modeling the fractional factorial design isomorphism problem as a graph isomorphism problem. For generating the design catalogs efficiently, we extend a result in the graph isomorphism literature to improve the existing sequential design catalog generation algorithm.
The potential of the proposed methods is reflected in the results. For 2-level regular fractional factorial designs, we could generate complete design catalogs of run sizes up to 4096 runs, while the largest designs generated in the literature are 512-run designs. Moreover, compared to the next best algorithms, the computation times for our algorithm are 98% lower in most cases. Further, the generic nature of the algorithms makes them widely applicable to a large class of designs. We give details of the graph models and prove the results for two classes of designs, namely, 2-level regular fractional factorial designs and 2-level regular fractional factorial split-plot designs, and provide discussions of extensions, with graph models, for more general classes of designs.
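As a toy illustration of the reduction from design isomorphism to graph isomorphism, the Python sketch below (built on the networkx library) encodes a 2-level design as a bipartite run-factor graph. It is not the graph model of this dissertation: the simplified encoding only captures reordering of runs and relabeling of factors, not level switching within a factor, which requires the richer encoding developed here.

```python
# Toy reduction of (part of) design isomorphism to graph isomorphism using
# networkx. Each design is encoded as a bipartite graph with one node per run
# and one node per factor, and an edge whenever the run sets that factor to
# its high level. Permuting runs or relabeling factors gives isomorphic
# graphs, so nx.is_isomorphic with a node-type match detects equivalence up
# to those operations. NOTE: this simplified encoding does NOT capture level
# switching within a factor; the dissertation's graph model is richer.
import networkx as nx

def design_graph(design):
    """design: list of runs, each run a tuple of 0/1 factor levels."""
    g = nx.Graph()
    n_factors = len(design[0])
    g.add_nodes_from((("run", i) for i in range(len(design))), kind="run")
    g.add_nodes_from((("factor", j) for j in range(n_factors)), kind="factor")
    for i, run in enumerate(design):
        for j, level in enumerate(run):
            if level == 1:
                g.add_edge(("run", i), ("factor", j))
    return g

def equivalent_up_to_run_and_factor_relabeling(d1, d2):
    return nx.is_isomorphic(design_graph(d1), design_graph(d2),
                            node_match=lambda a, b: a["kind"] == b["kind"])

# A 2^(3-1) design, and the same design with its runs reordered and the
# last two factors swapped
d1 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
d2 = [(1, 1, 0), (0, 0, 0), (1, 0, 1), (0, 1, 1)]
print(equivalent_up_to_run_and_factor_relabeling(d1, d2))  # True
```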
|
340 |
Semi-empirical Probability Distributions and Their Application in Wave-Structure Interaction Problems. Izadparast, Amir Hossein, December 2010 (has links)
In this study, the semi-empirical approach is introduced to accurately estimate the probability distribution of complex non-linear random variables in the field of wave-structure interaction. The structural form of the semi-empirical distribution is developed based on a mathematical representation of the process, and the model parameters are estimated directly from the sample data. Here, three probability distributions are developed based on the quadratic transformation of the linear random variable. Assuming that the linear process follows a standard Gaussian distribution, the three-parameter Gaussian-Stokes model is derived for the second-order variables. Similarly, the three-parameter Rayleigh-Stokes model and the four-parameter Weibull-Stokes model are derived for the crests, troughs, and heights of the non-linear process, assuming that the linear variable has a Rayleigh distribution or a Weibull distribution. The model parameters are empirically estimated with the application of the conventional method of moments and the newer method of L-moments. Furthermore, the application of semi-empirical models in extreme analysis and estimation of extreme statistics is discussed. As a main part of this research study, the sensitivity of the model statistics to the variability of the model parameters as well as the variability in the samples is evaluated. In addition, the effects of sample size on the performance of the parameter estimation methods are studied.
Utilizing illustrative examples, the application of semi-empirical probability distributions in the estimation of the probability distribution of non-linear random variables is studied. The examples focus on the probability distributions of wave elevations and wave crests of ocean waves and of waves in the area close to an offshore structure, wave run-up over the vertical columns of an offshore structure, and ocean wave power resources. In each example, the performance of the semi-empirical model is compared with appropriate theoretical and empirical distribution models. It is observed that the semi-empirical models are successful in capturing the probability distribution of complex non-linear variables. The semi-empirical models are more flexible than the theoretical models in capturing the probability distribution of data, and they are generally more robust than the commonly used empirical models.
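To make the quadratic-transformation idea concrete, the sketch below draws Rayleigh-distributed linear crests, applies an assumed quadratic map y = x + c x^2 (the map and parameter values are illustrative, not the Rayleigh-Stokes model fitted in the study), and shows why a plain Rayleigh fit misses the upper tail of the transformed sample.

```python
# Minimal numerical sketch of the quadratic-transformation idea behind the
# Rayleigh-Stokes type models: a linear (Rayleigh) crest x is mapped to a
# weakly non-linear crest y = x + c*x**2. The map and the parameter values
# are illustrative assumptions, not the models fitted in the thesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma, c = 1.0, 0.15          # linear scale and assumed quadratic coefficient
x = stats.rayleigh.rvs(scale=sigma, size=100_000, random_state=rng)
y = x + c * x**2              # second-order (quadratic) crest

# A plain Rayleigh fit to the non-linear sample underestimates the upper tail,
# which is why a quadratic "Stokes" correction of the linear model is used.
scale_fit = stats.rayleigh.fit(y, floc=0.0)[1]
q = 0.999
print("empirical 99.9% crest   :", np.quantile(y, q))
print("Rayleigh-fit 99.9% crest:", stats.rayleigh.ppf(q, scale=scale_fit))
print("skewness linear vs quadratic:", stats.skew(x), stats.skew(y))
```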
|