  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Analysts’ use of earnings components in predicting future earnings

Bratten, Brian Michael 16 October 2009 (has links)
This dissertation examines the general research issue of whether the components of earnings are informative and, specifically, (1) how analysts consider earnings components when predicting future earnings and (2) whether the information content of, and analysts' use of, earnings components have changed over time. Although earnings components have predictive value for future earnings based on each component's persistence, extant research provides only a limited understanding of whether and how analysts consider this when forecasting. Using an integrated income statement and balance sheet framework to estimate the persistence of earnings components, I first establish that disaggregation based on the earnings-components framework in this study helps predict future earnings and helps explain contemporaneous returns. I then find evidence suggesting that although analysts consider the persistence of various earnings components, they do not fully integrate this information into their forecasts. Interestingly, analysts appear to be selective in incorporating the information in earnings components, seeming to ignore information from components indicating lower persistence, which results in higher forecast errors. Conversely, when a firm's income is concentrated in high-persistence items, analysts appear to incorporate the information into their forecasts, reducing their forecast errors. I also report that the usefulness of components relative to aggregate earnings has increased dramatically and continuously over the past several decades, and contemporaneous returns appear to be much better explained by earnings components than by aggregate earnings, more so than historically. Finally, the relation between analyst forecast errors and the differential persistence of earnings components has also declined over time, indicating that analysts appear to recognize the increasing importance of earnings components.
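As a loose illustration of the persistence estimation described above, the sketch below regresses next-period earnings on current earnings components; the component names, coefficients, and synthetic data are hypothetical and are not drawn from the dissertation.

```python
# Illustrative sketch only: estimating the persistence of earnings components
# by regressing next-period earnings on current components. The components and
# data below are hypothetical, not from the dissertation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
cash_flow = rng.normal(0.08, 0.03, n)   # hypothetical operating cash-flow component
accruals = rng.normal(0.02, 0.02, n)    # hypothetical accrual component
# Synthetic data in which cash flows persist more strongly than accruals.
future_earnings = 0.8 * cash_flow + 0.4 * accruals + rng.normal(0, 0.01, n)

X = sm.add_constant(np.column_stack([cash_flow, accruals]))
model = sm.OLS(future_earnings, X).fit()
print(model.params)  # estimated persistence coefficient of each component
```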
162

Valstybės išlaidų politikos ir visuminės paklausos Lietuvoje analizė 1995-2007 / Analysis of Fiscal Policy and Aggregate Demand in Lithuania in 1995 - 2007

Kanauka, Vytautas 16 June 2009 (has links)
This thesis investigates the influence of government fiscal policy on aggregate demand in Lithuania. The first part reviews changes in government consumption expenditure and in the components of aggregate demand over the period 1995-2007. The theoretical part discusses the main Keynesian theories that analyse the effect of fiscal policy on aggregate demand using the IS-LM and AD-AS models, and shows how a reduction of the budget deficit affects interest rates, aggregate demand, and prices. In the empirical part, aggregate demand is regressed against government consumption expenditure, interest rates, inflation, and income; the results indicate a statistically significant relationship between aggregate demand and government consumption expenditure. The thesis concludes with recommendations on how to strengthen the effect of budget expenditure on aggregate demand.
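A minimal sketch of the kind of regression the thesis describes, with aggregate demand regressed on government consumption expenditure, interest rates, inflation, and income; all series below are synthetic stand-ins, not the Lithuanian data for 1995-2007 used in the thesis.

```python
# Illustrative sketch only: OLS of aggregate demand on government consumption,
# interest rates, inflation, and income, as described in the abstract.
# All series are synthetic; the hypothetical frequency is quarterly.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
quarters = 52
gov_consumption = rng.normal(2.0, 0.3, quarters)
interest_rate = rng.normal(5.0, 1.0, quarters)
inflation = rng.normal(3.0, 1.5, quarters)
income = rng.normal(10.0, 1.0, quarters)
aggregate_demand = (1.5 * gov_consumption - 0.2 * interest_rate
                    + 0.1 * inflation + 0.9 * income
                    + rng.normal(0, 0.2, quarters))

X = sm.add_constant(np.column_stack([gov_consumption, interest_rate, inflation, income]))
print(sm.OLS(aggregate_demand, X).fit().summary())  # significance of each regressor
```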
163

Genetinių algoritmų taikymas imituojant sistemas aprašytas agregatiniu metodu / Genetic algorithms usage to simulate the systems described in the aggregate method

Dobilas, Mindaugas 13 August 2010 (has links)
The research area of this thesis is the use of genetic algorithms and the aggregate method for modelling complex systems. The objective is to apply genetic algorithms within formal system-description methods and simulation modelling in order to determine system parameters. The scientific novelty lies in a new combination of the genetic algorithm and the aggregate method for describing system models: the parameters of the system model are treated as an individual's chromosomes, while the system model itself serves as the utility (fitness) function of the genetic algorithm. These assumptions make it possible to find optimal values of the system parameters so that the system operates efficiently. A second proposed application uses the genetic algorithm inside the transition operator to determine the structure of the next population, which allows biological, agent-based, and self-regulating systems to be simulated.
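A minimal sketch of the optimisation scheme outlined above, in which the system-model parameters act as an individual's chromosomes and the simulated model supplies the utility (fitness) function; the model_utility placeholder and its two parameters are hypothetical, standing in for an aggregate-method simulation.

```python
# Illustrative sketch only: a simple genetic algorithm whose chromosomes are
# system-model parameters and whose fitness is the simulated model's utility.
# `model_utility` is a hypothetical placeholder for an aggregate-method model.
import random

def model_utility(params):
    # Placeholder objective: utility peaks at service_rate=2.0, buffer_size=5.0.
    service_rate, buffer_size = params
    return -((service_rate - 2.0) ** 2 + (buffer_size - 5.0) ** 2)

def evolve(pop_size=30, generations=50, mutation_sd=0.3):
    population = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=model_utility, reverse=True)
        parents = population[: pop_size // 2]                          # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]                # crossover
            child = [x + random.gauss(0, mutation_sd) for x in child]  # mutation
            children.append(child)
        population = parents + children
    return max(population, key=model_utility)

print(evolve())  # near-optimal system parameters under the placeholder utility
```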
164

Informationsvärde i den svenska insynshandeln : En studie på aggregerad insynshandel / Information Content of Swedish Insider Trading : A study on aggregate insider trading

Malmkvist, Henrik, Edström, Nils January 2013 (has links)
This study investigates whether Swedish insider trading can be used to forecast the Swedish stock market. Individual insiders have previously been shown to hold more information about their companies than other investors and to earn abnormal returns from trading in company stock, and aggregate insider trading has been shown to be positively related to future stock-market returns. To map the relationship between Swedish insider trading and the Swedish stock market, we use Finansinspektionen's insider register, containing over 209,000 transactions for the period 1991-2013, together with historical index data for the same period, and estimate the relationship with OLS regressions. We also examine which factors drive the predictive power of insider trading and what economic value insider trading has as a forecasting instrument. Our results show a statistically significant positive relationship between insider trading and future returns on the Swedish stock market, and the relationship strengthens over longer horizons. Purchase transactions are a stronger indicator of future index movements than sales, consistent with earlier studies finding that insiders often sell stock for reasons other than profit. We conclude that the predictive power of insider trading derives from an information advantage, although our results indicate that part of it can be explained by a contrarian strategy and a transparency effect. Finally, we construct forecast models based on historical insider trading and back-test them over the 22-year period. The results indicate that aggregate insider trading works well for predicting future rises in the Swedish stock market and can serve as the basis for successful investment strategies.
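As a hedged illustration of the back-testing idea described above, the toy strategy below holds the market in the month following net insider buying; the net-purchase series and return process are synthetic, not the Finansinspektionen data used in the study.

```python
# Illustrative sketch only: a toy backtest of a buy signal driven by aggregate
# insider purchases, in the spirit of the study's forecast models. All series
# are synthetic; the thesis uses Swedish insider records over 1991-2013.
import numpy as np

rng = np.random.default_rng(2)
months = 264                                      # roughly 22 years of monthly data
net_purchase_ratio = rng.uniform(-1, 1, months)   # hypothetical (buys - sells) / (buys + sells)
noise = rng.normal(0, 0.04, months)
# Synthetic process: next month's market return loads on this month's insider activity.
next_return = 0.02 * net_purchase_ratio + noise

# Hold the market next month only when insiders were net buyers this month.
signal = (net_purchase_ratio > 0).astype(float)
strategy_return = signal * next_return

print("strategy mean:", strategy_return.mean(), "market mean:", next_return.mean())
```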
165

Offshore aggregate extraction in the Prince Rupert area of British Columbia

Good, Thomas Milton 10 April 2008 (has links)
No description available.
166

Geometric Computing over Uncertain Data

Zhang, Wuzhou January 2015 (has links)
Entering the era of big data, we are faced with an unprecedented amount of geometric data. Many computational challenges arise in processing this new deluge of geometric data; a critical one is data uncertainty: the data is inherently noisy and inaccurate, and often incomplete. The past few decades have witnessed the influence of geometric algorithms in various fields including GIS, spatial databases, and computer vision, yet most existing geometric algorithms are built on the assumption that the data is precise and are incapable of properly handling data in the presence of uncertainty. This thesis explores several algorithmic challenges in what we call geometric computing over uncertain data.

We study the nearest-neighbor searching problem, which returns the nearest neighbor of a query point in a set of points, in a probabilistic framework. This thesis investigates two different nearest-neighbor formulations: expected nearest neighbor (ENN), where we consider the expected distance between each input point and a query point, and probabilistic nearest neighbor (PNN), where we estimate the probability of each input point being the nearest neighbor of a query point.

For the ENN problem, we consider a probabilistic framework in which the location of each input point and/or query point is specified as a probability density function and the goal is to return the point that minimizes the expected distance. We present methods for computing an exact ENN or an ε-approximate ENN, for a given error parameter 0 < ε < 1, under different distance functions. These methods build an index of near-linear size and answer ENN queries in polylogarithmic or sublinear time, depending on the underlying function. As far as we know, these are the first nontrivial methods for answering exact or ε-approximate ENN queries with provable performance guarantees. Moreover, we extend our results to answer exact or ε-approximate k-ENN queries. Notably, when only the query points are uncertain, we obtain state-of-the-art results for top-k aggregate (group) nearest-neighbor queries in the L1 metric using the weighted SUM operator.

For the PNN problem, we consider a probabilistic framework in which the location of each input point is specified as a probability distribution function. We present efficient algorithms for (i) computing all points that are nearest neighbors of a query point with nonzero probability; (ii) estimating, within a specified additive error, the probability of a point being the nearest neighbor of a query point; and (iii) using this estimate to return the point that maximizes the probability of being the nearest neighbor, or all points whose probability of being the nearest neighbor exceeds some threshold. We also present experimental results demonstrating the effectiveness of our approach.

We study the convex-hull problem, which asks for the smallest convex set containing a given point set, in a probabilistic setting. In our framework, the uncertainty of each input point is described by a probability distribution over a finite number of possible locations, including a null location to account for non-existence of the point. Our results include exact and approximation algorithms for computing the probability of a query point lying inside the convex hull of the input, time-space tradeoffs for membership queries, a connection between Tukey depth and membership queries, and a new notion of β-hull that may be a useful representation of uncertain hulls.

We study contour trees of terrains, which encode the topological changes of the level set of the height value ℓ as we raise ℓ from -∞ to +∞, in a probabilistic setting. We consider a terrain defined by linearly interpolating each triangle of a triangulation. In our framework, the uncertainty lies in the height of each vertex of the triangulation, which we assume is described by a probability distribution. We first show that the probability of a vertex being a critical point, and the expected number of nodes (resp. edges) of the contour tree, can be computed exactly and efficiently. We then present efficient sampling-based methods for estimating, with high probability, (i) the probability that two points lie on an edge of the contour tree, within additive error, and (ii) the expected distance between two points p, q and the probability that their distance is at least ℓ on the contour tree, within additive and/or relative error, where the distance between p and q on a contour tree is defined as the difference between the maximum height and the minimum height on the unique path from p to q on the contour tree.
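As a hedged illustration of the PNN formulation described above, the sketch below uses plain Monte Carlo sampling to estimate the probability that each uncertain input point is the nearest neighbor of a query point; the discrete location distributions are hypothetical, and the thesis presents exact and provably approximate algorithms rather than this simple sampling.

```python
# Illustrative sketch only: Monte Carlo estimate of the probability that each
# uncertain point is the nearest neighbor of a query (the PNN formulation).
# The discrete location distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(3)

# Each uncertain point: a few possible 2-D locations with their probabilities.
points = [
    {"locs": np.array([[0.0, 0.0], [0.2, 0.1]]), "probs": np.array([0.7, 0.3])},
    {"locs": np.array([[1.0, 1.0], [0.9, 0.8]]), "probs": np.array([0.5, 0.5])},
    {"locs": np.array([[0.3, 0.4], [2.0, 2.0]]), "probs": np.array([0.6, 0.4])},
]
query = np.array([0.25, 0.25])

trials = 20000
wins = np.zeros(len(points))
for _ in range(trials):
    # Sample one concrete location per uncertain point, then find the nearest one.
    sampled = np.array([p["locs"][rng.choice(len(p["probs"]), p=p["probs"])] for p in points])
    dists = np.linalg.norm(sampled - query, axis=1)
    wins[np.argmin(dists)] += 1

print(wins / trials)  # estimated PNN probability for each input point
```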
167

Performance based characterization of virgin and recycled aggregate base materials

Ahmeduzzaman, Mohammad 12 September 2016 (has links)
Characterization of the effect of physical properties on performance measures such as stiffness and drainage of unbound granular materials is necessary in order to incorporate them into pavement design. The stiffness, deformation, and permeability behaviour of unbound granular materials are essential design inputs for the Mechanistic-Empirical Pavement Design Guide as well as for empirical design methods. Performance-based specifications aim to design and construct a durable and cost-effective material throughout the design life of a pavement; however, specifications vary among jurisdictions depending on historical or current practice, locally available materials, landform, climate, and drainage. This study includes a literature review of current construction specifications for unbound granular base materials, covering both virgin and recycled concrete aggregate. Resilient modulus, permanent deformation, and permeability tests were carried out on seven gradations of materials from locally available sources, and the resilient modulus of the unbound granular materials was compared at two different conditioning stress levels. Long-term deformation behaviour was also characterized from the permanent deformation test results using a shakedown approach, a dissipated-energy approach, and a simplified approach. The results show improvements in resilient modulus and permanent deformation for the proposed specification compared with the currently used materials, as a result of reduced fines content, increased crush count, and the inclusion of a larger maximum aggregate size in the gradation. In addition to the effect of fines, a significant effect of particle packing on the permeability of granular materials was also found.
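For context, the resilient modulus referred to above is conventionally defined as the ratio of the repeated deviator stress to the recoverable axial strain, and stress-dependent forms such as the classical k-θ model are commonly fitted to triaxial test results; the expressions below are these standard textbook definitions, not values or models from this thesis.

```latex
M_r = \frac{\sigma_d}{\varepsilon_r},
\qquad
M_r = k_1\,\theta^{k_2}
```

Here σ_d is the cyclic deviator stress, ε_r the recoverable (resilient) axial strain, θ the bulk stress, and k_1, k_2 regression constants determined from the test data.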
168

Use of a Portland Cement Accelerator with Mineral Trioxide Aggregate

Monts, M. Scott 01 January 2004 (has links)
The use of Mineral Trioxide Aggregate (MTA) is gaining popularity among clinicians. Despite the many ideal qualities it possesses, it is often difficult to manipulate and often requires a second appointment for placement of a restoration to allow for setting. If the setting time of MTA can be shortened to a single-appointment time frame without significantly altering its properties, MTA may gain even wider acceptance. The purpose of this study was to identify the percentage of a Portland Cement Accelerator (PCA) that, when added to MTA, decreases the setting time of MTA toward a single-appointment time frame. Ten Teflon sample molds were prepared, each holding 20 standardized chambers. Three sample molds were prepared with 5.0% accelerator (by weight of MTA), three with 10.0% accelerator, and three with 15.0% accelerator mixed with MTA and water. Another sample mold contained a mixture of MTA and water only and acted as the control. Samples were tested using a dial-indicator microgauge apparatus that measured the depth of needle penetration starting at 2 minutes and then every minute up to 15 minutes; samples were also tested at 3, 4, 24, 48, and 72 hours. A mixed-model repeated-measures ANOVA showed that the four accelerator groups were significantly different and that there was a significant time trend. The 5.0% accelerator group set significantly faster than the 15.0% group and the control at 15 minutes or less (p < 0.05). In conclusion, it appears that adding 5.0% PCA to MTA can accelerate the setting reaction.
169

Physical and Chemical Properties of a New Mineral Trioxide Aggregate Material

Spencer, David Lowell 01 January 2004 (has links)
The objective of this study was to compare the final setting time and compressive strength of the white mineral trioxide aggregate (MTA) formulation with those of the original grey MTA. To test compressive strength, each MTA formulation was placed into Teflon split molds for four hours at 37° Celsius (C) and 100% humidity. Compressive strength of both MTA formulations was measured at 24 hours (n=12) and 21 days (n=19) using an Instron testing machine. To determine the final setting time, each MTA formulation (n=6) was placed into a metal mold and maintained at 37° C and 100% humidity while setting. At five-minute intervals, an indenter needle was lowered onto the surface of the MTA material and allowed to remain in place for five seconds before being removed from the specimen surface; this process was repeated until the needle failed to make a complete circular indentation in the MTA specimen. Results of a two-way ANOVA indicate that white MTA had a significantly higher compressive strength (mean = 32.7 MPa) than grey MTA (mean = 25.2 MPa) at 24 hours and that there were no statistically significant differences at 21 days (white mean = 38.6 MPa; grey mean = 38.0 MPa). A one-way ANOVA indicates that grey MTA had a significantly longer final setting time (mean = 296 min) than white MTA (mean = 276 min). Based on this study, the results suggest that white MTA is an effective substitute for grey MTA.
170

Simulations du dépôt par pulvérisation plasma et de la croissance de couches minces / Simulations of plasma sputtering deposition and thin film growth

Xie, Lu 02 September 2013 (has links)
The objective of this thesis is to study thin-film deposition by plasma sputtering using molecular dynamics simulations, focusing on the mechanisms of microstructure formation under various deposition conditions relevant to experiments. Deposition of ZrxCu100-x and AlCoCrCuFeNi thin films on Si (100) by magnetron co-sputtering was studied by molecular dynamics simulations using initial conditions similar to those of the experiments. The results show that the phase of the ZrxCu100-x thin films is determined by the composition of the binary alloy and by the average kinetic energy of the incident atoms. The simulated AlCoCrCuFeNi alloys exhibit an fcc/bcc structure modulated by composition, in agreement with experiment, and show a tendency to evolve toward a solid-solution bulk metallic glass. Plasma sputtering deposition of platinum atoms onto two nanostructured carbon substrates (porous carbon and carbon nanotubes) was also studied at room temperature (300 K), for two sets of Lennard-Jones potential parameters and three different kinetic-energy distributions of the Pt atoms incident on the substrate; the simulation results are in good agreement with the experimental results. Finally, numerical simulation of magnetron discharges was introduced in order to determine the input parameters for the MD simulations. The charged particles are described by a hydrodynamic model using classical expressions for the fluxes, and the characteristics of the reactor are reproduced by the first simulations.
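For reference, the Lennard-Jones pair potential whose parameter sets are compared above has the standard textbook form

```latex
V_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]
```

where ε sets the depth of the potential well and σ the distance at which the potential crosses zero; this is the generic form, not a parameterisation specific to the thesis.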
