361

A VLSI architecture for a neurocomputer using higher-order predicates

Geller, Ronnie Dee 05 1900 (has links) (PDF)
M.S. / Computer Science & Engineering / Some biological aspects of neural interactions are presented and used as a basis for a computational model in the development of a new type of computer architecture. A VLSI microarchitecture is proposed that efficiently implements the neural-based computing methods. An analysis of the microarchitecture is presented to show that it is feasible using currently available VLSI technology. The performance expectations of the proposed system are analyzed and compared to conventional computer systems executing similar algorithms. The proposed system is shown to have comparatively attractive performance and cost/performance ratio characteristics. Some discussion is given on system level characteristics including initialization and learning.
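As a rough illustration of what "higher-order" can mean in a neural computing context, the sketch below implements a generic sigma-pi unit, whose output depends on weighted products of inputs as well as the usual weighted sum. It is a hedged, generic example: the weights, the sigmoid activation, and the unit itself are illustrative and are not taken from the thesis's architecture.

```python
# A hedged illustration (not the thesis's architecture) of a "higher-order"
# neural unit: besides the usual weighted sum of inputs, a sigma-pi unit also
# weights pairwise products of inputs.
import numpy as np

def sigma_pi_unit(x, w1, w2, bias=0.0):
    """x: inputs; w1: first-order weights; w2: matrix of second-order (pairwise) weights."""
    first_order = w1 @ x
    second_order = x @ w2 @ x                 # sum_ij w2[i, j] * x[i] * x[j]
    return 1.0 / (1.0 + np.exp(-(first_order + second_order + bias)))

x = np.array([1.0, 0.0, 1.0])
w1 = np.array([0.5, -0.2, 0.1])
w2 = 0.1 * np.ones((3, 3))                    # illustrative weights
print(sigma_pi_unit(x, w1, w2))               # sigmoid of 1.0 ~ 0.731
```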
362

River router for the graphics editor Caesar

Holla, Jaya 11 1900 (has links) (PDF)
M.S. / Computer Science / A general river routing algorithm is described. It is assumed that there is one layer available for routing and the terminals are on the boundaries of an arbitrarily shaped rectilinear routing region. All nets are two terminal nets. No crossover is permitted between nets. A minimum separation must be maintained between wires to prevent design rule violations. The separation and default width for all nets are obtained from a parameter file. A command line option permits the user to change the width. The algorithm assumes no grid on the routing plane. The number of corners in a given route is reduced by flipping corners.
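The sketch below illustrates the corner-flipping idea on a rectilinear route. It is a hedged, simplified example: the path representation, the flip_corner helper, and the toy staircase route are all hypothetical, and the obstacle avoidance, wire width, and design-rule separation checks of the actual router are omitted.

```python
# A minimal sketch (not the thesis implementation) of corner flipping: a
# rectilinear route is a list of (x, y) points.  A corner point p between a
# and b can be flipped to the opposite corner of the rectangle spanned by a
# and b; if the flipped point becomes collinear with its neighbours, the
# corner count drops.

def is_corner(a, p, b):
    # The route turns at p when the segment orientation changes.
    return (a[0] == p[0] and p[1] == b[1]) or (a[1] == p[1] and p[0] == b[0])

def drop_collinear(path):
    out = [path[0]]
    for i in range(1, len(path) - 1):
        if is_corner(out[-1], path[i], path[i + 1]):
            out.append(path[i])
    out.append(path[-1])
    return out

def count_corners(path):
    return sum(is_corner(path[i - 1], path[i], path[i + 1])
               for i in range(1, len(path) - 1))

def flip_corner(path, i):
    a, p, b = path[i - 1], path[i], path[i + 1]
    q = (a[0] + b[0] - p[0], a[1] + b[1] - p[1])   # opposite rectangle corner
    return path[:i] + [q] + path[i + 1:]

route = [(0, 0), (0, 2), (1, 2), (1, 4), (3, 4)]   # staircase with 3 corners
flipped = drop_collinear(flip_corner(route, 2))
print(count_corners(route), "->", count_corners(flipped))   # 3 -> 1
```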
363

Design of large time constant switched-capacitor filters for biomedical applications

Tumati, Sanjay 17 February 2005 (has links)
This thesis investigates the various techniques for achieving large time constants and the ultimate limitations therein. A novel circuit technique for realizing large time constants for high-pass corners in switched-capacitor filters is also proposed and compared with existing techniques. The proposed switched-capacitor technique is insensitive to parasitic capacitances, is area-efficient, and requires only two clock phases. The circuit is used to build a typical switched-capacitor front end with a gain of 10. The low-pass corner is fixed at 200 Hz. The high-pass corner is varied from 0.159 Hz to 4 Hz, and various performance parameters, such as power consumption and silicon area, are compared with conventional techniques to demonstrate the advantages and disadvantages of each approach. The front ends are fully differential and chopper-stabilized to protect against DC offsets and 1/f noise. The front end is implemented in AMI 0.6 µm technology with a supply voltage of 1.6 V, and all transistors operate in weak inversion with currents in the range of tens of nanoamperes.
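To see why sub-hertz high-pass corners are difficult, the short calculation below estimates the capacitor ratio a plain switched-capacitor integrator would need for the corner frequencies mentioned above. The 32 kHz clock frequency is an assumed value, not taken from the thesis, and the formula describes the conventional technique rather than the proposed circuit.

```python
# A back-of-the-envelope sketch (assumptions, not thesis data) of why sub-hertz
# corners are hard with a plain switched-capacitor integrator.  A switched
# capacitor Cs clocked at f_clk emulates a resistor R_eq = 1/(f_clk*Cs), so the
# integrator time constant is tau = Cint/(f_clk*Cs) and the corner sits at
# f_c = f_clk*Cs/(2*pi*Cint).
import math

f_clk = 32e3                          # assumed clock frequency [Hz]
for f_c in (0.159, 4.0):              # high-pass corners explored in the thesis [Hz]
    ratio = f_clk / (2 * math.pi * f_c)    # required Cint/Cs
    print(f"f_c = {f_c:5.3f} Hz  ->  Cint/Cs ~ {ratio:,.0f}")
# A 0.159 Hz corner needs a capacitor ratio of roughly 32,000, i.e. enormous
# silicon area with the conventional approach -- the motivation for new circuits.
```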
364

Development and validation of a LES methodology for complex wall-bounded flows : application to high-order structured and industrial unstructured solvers

Georges, Laurent 12 June 2007 (has links)
Turbulent flows present structures with a wide range of scales. Computing the complete physics of a turbulent flow (direct numerical simulation, DNS) is very expensive and is, for the time being, limited to low and medium Reynolds number flows. To capture high Reynolds number flows, part of the physical complexity has to be modeled. Large eddy simulation (LES) is a simulation strategy in which the large turbulent eddies present on a given mesh are captured and the influence of the non-resolved scales on the resolved ones is modeled. The present thesis reports on the development and validation of a methodology for applying LES to complex wall-bounded flows. Discretization methods and LES models, termed subgrid-scale (SGS) models, compatible with such geometrical complexity are discussed. It is proved that a discretely kinetic-energy-conserving discretization of the convective term is an attractive way to perform stable simulations without the use of artificial dissipation, such as upwinding. The dissipative effect of the SGS model is thus unaffected by any additional dissipation process. The methodology is first applied to a parallel fourth-order incompressible flow solver developed for Cartesian non-uniform meshes. To solve the resulting Poisson equation, an efficient multigrid solver is also developed. The code is first validated using DNS (Taylor-Green vortex, channel flow, four-vortex system) and LES (channel flow), and finally applied to the investigation of an aircraft two-vortex system in ground effect. The methodology is then applied to improve a RANS-based industrial unstructured compressible flow solver, developed at CENAERO, so that it performs well for LES applications. The proposed modifications are tested successfully on the unsteady flow past a sphere at Reynolds numbers of 300 and 10,000, corresponding to the subcritical regime.
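The following one-dimensional sketch illustrates the kinetic-energy-conserving idea in its simplest setting. It is not the solver's actual discretization: the grid, the velocity sample, and the centered-difference operator are illustrative, but they show why a skew-symmetric form of the convective term contributes exactly zero to the discrete kinetic energy, so no upwind-type dissipation is needed for stability.

```python
# A 1-D periodic sketch (assumed setup, not the solver's discretization) of a
# skew-symmetric, kinetic-energy-conserving convective term.  With a centered
# difference operator D (D^T = -D on a uniform periodic grid), the skew form
#   N(u) = 0.5*( u*(D u) + D(u*u) )
# satisfies u . N(u) = 0 exactly, so convection neither creates nor destroys
# resolved kinetic energy at the discrete level.
import numpy as np

n, L = 64, 2 * np.pi
dx = L / n
x = np.arange(n) * dx
rng = np.random.default_rng(0)
u = np.sin(x) + 0.2 * rng.standard_normal(n)        # smooth wave plus grid-scale noise

D = (np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * dx)   # centered first derivative
D[0, -1], D[-1, 0] = -1 / (2 * dx), 1 / (2 * dx)    # periodic wrap-around

advective  = u * (D @ u)                            # u du/dx
divergence = D @ (u * u)                            # d(uu)/dx
skew       = 0.5 * (advective + divergence)

print("u . N_skew(u) =", u @ skew)                  # ~1e-15: conservative to round-off
print("u . N_adv(u)  =", u @ advective)             # generally nonzero for a rough field
```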
365

Essays in Dynamic Macroeconometrics

Bańbura, Marta 26 June 2009 (has links)
The thesis contains four essays covering topics in the field of macroeconomic forecasting.

The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied to forecasting, structural analysis or the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and nonsynchronous data releases (sometimes referred to as a “ragged edge”). This is relevant because, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter of the thesis, entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP”, is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of the euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how it is related to their timeliness. The soft data include surveys and financial indicators, reflect market expectations and are usually promptly available. In contrast, the hard indicators of real activity directly measure certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures to assess the role of individual series or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for.

The second chapter, entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data”, is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge” but can also include, e.g., mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied to small factor models by, e.g., Geweke (1977), Sargent and Sims (1977) and Watson and Engle (1983), the approach has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of large cross-sections, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to adapt the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect on the forecast of quarterly variables and short-history monthly series such as the Purchasing Managers' surveys.

The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecasting considerations argued above, the size of the information set can also be relevant for structural analysis; see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing model size one should shrink more to avoid overfitting, but when data are collinear one is still able to extract the relevant sample information. We apply this principle in the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.

The fourth chapter, entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales”, proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong; see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) and Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. In this chapter, it is investigated empirically which frequency bands of which variables are the most relevant for the out-of-sample forecast of inflation when information from prices, money and real activity is considered. To extract different frequency components from a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application in multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
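As a rough illustration of the missing-data problem addressed in the second chapter, the sketch below iterates between principal-components factor extraction and imputation on a synthetic "ragged-edge" panel. It is a simplified stand-in (an EM-style principal-components iteration in the spirit of Stock and Watson) rather than the chapter's maximum-likelihood EM estimator, and all data, dimensions and iteration counts are illustrative.

```python
# A simplified sketch of filling a "ragged edge" by alternating between
# factor extraction and imputation.  Synthetic data; not the chapter's
# maximum-likelihood EM estimator.
import numpy as np

rng = np.random.default_rng(0)
T, N, r = 120, 30, 2                       # periods, series, factors
F = rng.standard_normal((T, r))            # latent factors
Lam = rng.standard_normal((N, r))          # loadings
X = F @ Lam.T + 0.5 * rng.standard_normal((T, N))
X[-3:, : N // 2] = np.nan                  # ragged edge: half the panel missing at the end

mask = np.isnan(X)
Z = np.where(mask, 0.0, X)                 # start from zeros (data are mean-zero by construction)
for _ in range(50):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    common = (U[:, :r] * s[:r]) @ Vt[:r]   # rank-r common component of the completed panel
    Z = np.where(mask, common, X)          # re-impute only the missing cells

rmse = np.sqrt(np.mean((Z[mask] - (F @ Lam.T)[mask]) ** 2))
print("RMSE of imputed common component:", round(rmse, 3))
```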
366

Novel Measures on Directed Graphs and Applications to Large-Scale Within-Network Classification

Mantrach, Amin 25 October 2010 (has links)
In recent years, networks have become a major data source in various fields ranging from the social sciences to the mathematical and physical sciences. Moreover, the size of available networks has grown substantially as well. This has brought with it a number of new challenges, such as the need for precise and intuitive measures to characterize and analyze large-scale networks in a reasonable time. The first part of this thesis introduces a novel measure between two nodes of a weighted directed graph: the sum-over-paths covariance. It has a clear and intuitive interpretation: two nodes are considered as highly correlated if they often co-occur on the same -- preferably short -- paths. This measure depends on a probability distribution over the (usually infinite) countable set of paths through the graph, which is obtained by minimizing the total expected cost between all pairs of nodes while fixing the total relative entropy spread in the graph. The entropy parameter allows the probability distribution to be biased over a wide spectrum: from natural random walks (where all paths are equiprobable) to walks biased towards shortest paths. This measure is then applied to semi-supervised classification problems on medium-size networks and compared to state-of-the-art techniques. The second part introduces three novel algorithms for within-network classification in large-scale networks, i.e., classification of nodes in partially labeled graphs. The algorithms have a computing time that is linear in the number of edges, classes and steps and can hence be applied to large-scale networks. They obtained competitive results in comparison to the state of the art on the large-scale U.S. patent citation network and on eight other data sets. Furthermore, during the thesis we collected a novel benchmark data set: the U.S. patent citation network. This data set is now available to the community for benchmarking purposes. The final part of the thesis concerns the combination of a citation graph with information on its nodes. We show that citation-based data provide better classification results than content-based data. We also show empirically that combining both sources of information (content-based and citation-based) should be considered when facing a text categorization problem. For instance, when classifying journal papers, extracting an external citation graph may considerably boost the performance. However, in another context, when the nodes of the citation network must be classified directly, features on the nodes will not necessarily improve the results. The theory, algorithms and applications presented in this thesis provide interesting perspectives in various fields.
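The sketch below gives a generic flavour of within-network classification on a partially labeled graph using simple label propagation. It is not one of the three algorithms proposed in the thesis; the toy graph, the clamping scheme, and the iteration count are all illustrative. With a sparse adjacency matrix, the cost per iteration of this kind of scheme is linear in the number of edges and classes, matching the scalability property described above.

```python
# A generic label-propagation sketch for within-network classification on a
# partially labeled graph (a stand-in, not one of the thesis's algorithms).
import numpy as np

def propagate(A, labels, n_classes, n_iter=30):
    """A: adjacency matrix; labels: array with -1 for unlabeled nodes."""
    deg = A.sum(axis=1)
    P = A / np.maximum(deg, 1)[:, None]        # row-stochastic transition matrix
    Y = np.zeros((A.shape[0], n_classes))
    Y[labels >= 0, labels[labels >= 0]] = 1.0  # one-hot scores for labeled nodes
    F = Y.copy()
    for _ in range(n_iter):
        F = P @ F                              # diffuse class scores to neighbours
        F[labels >= 0] = Y[labels >= 0]        # re-clamp the known labels
    return F.argmax(axis=1)

# Toy graph: two triangles joined by a single edge, one labeled seed per triangle.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
labels = np.array([0, -1, -1, -1, -1, 1])      # -1 means unlabeled
print(propagate(A, labels, n_classes=2))       # expected: [0 0 0 1 1 1]
```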
367

Turbulent convective mass transfer in electrochemical systems

Gurniki, Francois January 2000 (has links)
No description available.
368

Massively-Parallel Spectral Element Large Eddy Simulation of a Ring-Type Gas Turbine Combustor

Camp, Joshua Lane 2011 May 1900 (has links)
The average and fluctuating flow components in a model ring-type gas turbine combustor are characterized using large eddy simulation at a Reynolds number of 11,000, based on the bulk velocity and the mean channel height. A spatial filter is applied to the incompressible Navier-Stokes equations, and a high-pass filtered Smagorinsky model is used to model the sub-grid scales. Two cases are studied: one with only the swirler inlet active, and one with a single row of dilution jets activated, operating at a momentum flux ratio J of 100. The goal of both studies is to validate the capability of the solver NEK5000 to resolve important flow features inherent to gas turbine combustors through qualitative comparison with the work of Jakirlic. Both cases show strong evidence of the precessing vortex core, an essential flow feature in gas turbine combustors. Each case captures other important flow characteristics, such as corner eddies, and in general predicts bulk flow movements well. However, the simulations performed quite poorly at predicting turbulent shear-stress quantities. Difficulties in properly emulating the turbulent velocity entering the combustor through the swirler, as well as mesh-quality concerns, may have skewed the results. Overall, although small-scale quantities were not accurately captured, the large-scale quantities were, and this stress test of the HPF LES model will be built upon in future work examining more complex combustors.
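For reference, the snippet below evaluates the standard Smagorinsky eddy viscosity that the high-pass filtered (HPF) variant builds on; in the HPF model the strain rate is evaluated from high-pass filtered velocity rather than the full resolved field. The Smagorinsky constant, filter width, and velocity-gradient sample are illustrative values, not parameters of the simulations described here.

```python
# A hedged sketch of the standard Smagorinsky closure: the subgrid stress is
# modeled through an eddy viscosity nu_t = (Cs*Delta)^2 * |S|, with
# |S| = sqrt(2 S_ij S_ij).  Cs and Delta below are illustrative values.
import numpy as np

def smagorinsky_nu_t(grad_u, delta, cs=0.17):
    """grad_u: 3x3 resolved velocity-gradient tensor (du_i/dx_j) at a point."""
    S = 0.5 * (grad_u + grad_u.T)                  # resolved strain-rate tensor
    return (cs * delta) ** 2 * np.sqrt(2.0 * np.sum(S * S))

grad_u = np.array([[0.0, 2.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])               # simple shear, du1/dx2 = 2 1/s
print(smagorinsky_nu_t(grad_u, delta=1e-3))        # eddy viscosity in m^2/s
```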
369

Establishing the Mirage Mediation Model at the Large Hadron Collider

Wang, Kechen 2011 August 1900 (has links)
This thesis describes the research I carried out during my Master's studies. I investigated the stau-neutralino coannihilation region of the Mirage Mediation Model at the Large Hadron Collider (LHC). By constructing five kinematic observables at the LHC, the masses of supersymmetric particles (sparticles) were determined, and the Mirage Mediation Model parameters were then determined from the sparticle masses. This is the first time the Mirage Mediation Model has been established at the LHC. All of these techniques can be applied to other coannihilation regions of the Mirage Mediation Model and to other supersymmetry (SUSY) models.
370

Uplink Performance Analysis of Multicell MU-SIMO Systems with ZF Receivers

Ngo, Hien Quoc, Matthaiou, Michail, Duong, Trung Q., Larsson, Erik G. January 2013 (has links)
We consider the uplink of a multicell multiuser single-input multiple-output (MU-SIMO) system where the channel experiences both small-scale and large-scale fading. Data detection is performed using the linear zero-forcing (ZF) technique, assuming the base station (BS) has perfect channel state information for all users in its cell. We derive new, exact analytical expressions for the uplink rate, symbol error rate, and outage probability per user, as well as a lower bound on the achievable rate. This bound is very tight and becomes exact in the large-number-of-antennas limit. We further study the asymptotic system performance in the regimes of high signal-to-noise ratio (SNR), large numbers of antennas, and large numbers of users per cell. We show that at high SNR the system is interference-limited and hence we cannot improve the system performance by increasing the transmit power of each user. Instead, by increasing the number of BS antennas, the effects of interference and noise can be reduced, thereby improving the system performance. We demonstrate that, with very large antenna arrays at the BS, the transmit power of each user can be made inversely proportional to the number of BS antennas while maintaining a desired quality of service. Numerical results are presented to verify our analysis.
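The Monte Carlo sketch below illustrates the power-scaling result in a stripped-down setting: a single cell with i.i.d. Rayleigh fading and no large-scale fading, a simplification of the multicell system analysed in the paper. The number of users, the power budget E, and the trial count are assumed values; with zero-forcing, user k's post-processing SNR is p / [(H^H H)^{-1}]_{kk}, and cutting p as E/M keeps the per-user rate roughly constant as the number of BS antennas M grows.

```python
# A hedged single-cell Monte-Carlo sketch (not the paper's multicell analysis)
# of the ZF power-scaling result: with per-user power p = E/M, the average
# per-user rate approaches log2(1 + E) as the antenna count M grows.
import numpy as np

rng = np.random.default_rng(1)
K, E, trials = 5, 10.0, 500                     # users, power budget, Monte-Carlo runs

for M in (10, 50, 200, 800):
    rates = []
    for _ in range(trials):
        H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        inv_diag = np.real(np.diag(np.linalg.inv(H.conj().T @ H)))
        snr = (E / M) / inv_diag                # ZF per-user SNR with p = E/M
        rates.append(np.mean(np.log2(1 + snr)))
    print(f"M = {M:4d}:  average per-user rate = {np.mean(rates):.2f} bit/s/Hz")
# The rate approaches log2(1 + E) ~ 3.46 bit/s/Hz as M grows, even though the
# transmit power shrinks like 1/M.
```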
