61

A parallel windowed fast discrete curvelet transform applied to seismic processing

Thomson, Darren, Hennenfent, Gilles, Modzelewski, Henryk, Herrmann, Felix J. January 2006 (has links)
We propose using overlapping, tapered windows to process seismic data in parallel. This method consists of numerically tight linear operators and adjoints that are suitable for use in iterative algorithms. This method is also highly scalable and makes parallel processing of large seismic data sets feasible. We use this scheme to define the Parallel Windowed Fast Discrete Curvelet Transform (PWFDCT), which we apply to a seismic data interpolation algorithm. The successful performance of our parallel processing scheme and algorithm on a two-dimensional synthetic data set is shown.
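The "numerically tight" property of the windowing operator can be illustrated in one dimension: if the squared tapers of 50%-overlapping windows sum to one everywhere, analysis followed by its adjoint reconstructs the signal exactly. The sketch below uses a sine taper for this; it is a minimal stand-in for the thesis's parallel, multidimensional scheme, not its implementation.

```python
import numpy as np

def analysis(x, win=64):
    """Split x into 50%-overlapping, tapered windows.
    The sine taper's squares sum to one across overlaps, so
    analysis followed by its adjoint is the identity (tight frame)."""
    hop = win // 2
    taper = np.sin(np.pi * (np.arange(win) + 0.5) / win)
    xp = np.pad(x, hop)                      # pad so edge samples are covered twice
    starts = range(0, len(xp) - win + 1, hop)
    return np.stack([taper * xp[s:s + win] for s in starts])

def adjoint(patches, n, win=64):
    """Adjoint of `analysis`: taper again and overlap-add."""
    hop = win // 2
    taper = np.sin(np.pi * (np.arange(win) + 0.5) / win)
    out = np.zeros(n + 2 * hop)
    for i, p in enumerate(patches):
        out[i * hop:i * hop + win] += taper * p
    return out[hop:hop + n]                  # drop the padding

x = np.random.default_rng(0).standard_normal(256)
assert np.allclose(adjoint(analysis(x), len(x)), x)  # exact reconstruction
```

Because each window is processed independently, the patches can be distributed across workers and recombined by the adjoint, which is what makes the scheme suitable for iterative algorithms on large data sets.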
62

Six Sigma management. Action research with some contributions to theories and methods.

Cronemyr, Peter January 2007 (has links)
Many companies around the world have implemented Six Sigma as a problem solving methodology especially useful for dealing with recurring problems in business processes. Since the 1980s when it was developed at Motorola, many companies have tried to implement Six Sigma to fit their own company’s culture and goals. This thesis presents a longitudinal case study describing the evolution of ‘Six Sigma Management’ at Siemens in Sweden. The success of the programme was to a large degree built on previous failures, confirming Juran’s old saying ‘Failure is a gold mine’. From the case study, success factors for implementing Six Sigma at Siemens are identified and compared to those given in the literature. Some of the most critical success factors identified at Siemens had not been mentioned as such in the literature before. The main conclusion of the study is that, in order to succeed and get sustainable results from a Six Sigma programme, Six Sigma should be integrated with Process Management, instead of just running Six Sigma as a separate initiative in an organisation. Furthermore, the thesis includes papers presenting methods and tools to be used in a Six Sigma programme or in Six Sigma projects. They deal with: how to identify suitable Six Sigma projects, how to select which Six Sigma methodology to use, how to find hidden misunderstandings between people from different knowledge domains, and how to simulate the impact of improvements to iterative processes. All these methods and tools have been developed and tested at Siemens. This has been an action research project, where the author has been employed by the company under investigation for eleven years and has actively influenced the changes in the company based on knowledge gained at the company as well as on research studies conducted at universities. In action research the change initiative under investigation is conducted and analysed in a single context. 
The readers are invited to draw their own conclusions on the applicability of the results to their own specific cases. In addition, some conclusions derived using analytical generalisation, applicable to a more general case, are presented in the thesis. / Defended at Chalmers University of Technology in 2007.
63

Understanding Integration in Emergent Reading

Davis, Bronwen 07 January 2013 (has links)
A predictable alphabet book was proposed as a natural way to observe emergent readers’ attempts to integrate their developing literacy skills and knowledge base, despite not yet having achieved conventional levels of reading. Study 1 examined how accuracy in identifying words in an alphabet book in kindergarten related with emergent skills measured in kindergarten and with subsequent reading ability. One hundred and three children completed tests of phonemic awareness, letter knowledge, vocabulary, and rapid naming in kindergarten and were audiotaped reading an alphabet book with their parent. Reading ability was assessed one year later. Correlations were consistent with previous research identifying phonemic awareness, letter knowledge, vocabulary and rapid naming as significant correlates of emergent reading. Alphabet book accuracy correlated with subsequent reading, and the relative indirect effects of kindergarten phonemic awareness and letter sound knowledge on Grade 1 reading through kindergarten alphabet book reading were significant. Findings supported the conceptualization of how well a child identifies words in an alphabet book as a representation of early skill integration. Study 2 built upon these findings by examining self-reported reading strategies. Siegler’s (1996) overlapping waves model was used as a framework, which emphasizes variability, adaptive choice, and gradual change in children’s problem solving. Ninety-one kindergarteners completed tests of phonemic awareness, letter knowledge, and vocabulary, and read an experimentally designed alphabet book having pages of varying difficulty with a researcher twice over several months. Findings supported the three main features of the overlapping waves model. Children reported a variety of strategies across the book and on individual pages within it. 
They worked most quickly on the easiest pages, reported more strategies on the most difficult pages, and chose adaptively among their strategy repertoire. The number of strategies reported and the number of accurately labeled pages increased over time. The relative indirect effects of phonemic awareness and letter sound knowledge on alphabet book accuracy through the use of graphophonemic strategies were significant. Findings support the application of the overlapping waves model to the domain of reading. Overall, these studies highlight the potential for using typical literacy activities to deepen our understanding of the process of learning to read.
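Both studies report significant "relative indirect effects", i.e. mediation: a predictor (e.g. letter-sound knowledge) influences an outcome (Grade 1 reading) through an intermediate variable (alphabet-book accuracy). A standard way to test such an indirect effect is to multiply the two regression slopes and bootstrap a confidence interval. The sketch below does this on synthetic data; the variable names and effect sizes are illustrative assumptions, not the studies' data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 103
# Hypothetical data mimicking the design: letter-sound knowledge (x),
# alphabet-book accuracy (m, the mediator), Grade 1 reading (y).
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)             # a-path built in
y = 0.4 * m + 0.2 * x + rng.standard_normal(n)   # b-path and direct path

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # slope of M on X
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]   # slope of Y on M, controlling X
    return a * b

boots = [indirect(*(arr[idx] for arr in (x, m, y)))
         for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap interval excludes zero, the indirect effect is deemed significant, which is the form of evidence the abstract reports.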
64

Signal processing and amplifier design for inexpensive genetic analysis instruments

Choi, Sheng Heng Unknown Date
No description available.
65

An economic analysis of higher education financing policies / University financing and welfare: a dynamic analysis with heterogeneous agents and overlapping generations

Sánchez, Juan Matías January 2004 (has links) (PDF)
This paper develops a model in which alternative systems of higher education financing can be evaluated. The alternatives under discussion are full fee charging, graduates' taxes, and uniform taxes (the latter can be associated with the scheme presently used in Argentina to finance the universities). The alternatives are assessed in terms of welfare, based on indicators of poverty, equality, and the average levels of utility and wealth. The welfare functions presented by Bentham, Rawls, Atkinson, Sen and Kakwani are also considered. The most remarkable results are obtained by simulating an economy under two scenarios. The graduates' taxation system is found to be better for welfare, whereas the uniform taxation system can only be justified in that it maximizes the number of students.
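The welfare functions the paper compares rank outcomes differently: Bentham's utilitarian criterion averages utilities, Rawls's maximin looks only at the worst-off agent, and Atkinson's criterion is an inequality-adjusted (equally-distributed-equivalent) mean. A small sketch makes the contrast concrete; the two utility profiles below are invented for illustration, not the paper's simulation output.

```python
import numpy as np

# Hypothetical lifetime-utility profiles of five agents under two
# financing schemes (illustrative numbers only).
uniform_tax = np.array([1.0, 1.5, 2.0, 4.0, 8.0])
grad_tax    = np.array([1.6, 2.0, 2.4, 3.8, 6.5])

def bentham(u):            # utilitarian: average utility
    return u.mean()

def rawls(u):              # maximin: utility of the worst-off agent
    return u.min()

def atkinson(u, eps=2.0):  # equally-distributed-equivalent mean (eps > 0, != 1)
    return np.mean(u ** (1 - eps)) ** (1 / (1 - eps))

for name, f in [("Bentham", bentham), ("Rawls", rawls), ("Atkinson", atkinson)]:
    print(f"{name}: uniform={f(uniform_tax):.3f}  graduates={f(grad_tax):.3f}")
```

With these numbers the utilitarian criterion barely prefers the uniform scheme, while the inequality-sensitive criteria (Rawls, Atkinson) prefer the graduates' tax, mirroring how the choice of welfare function can drive the paper's ranking.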
66

Three Essays on Human Capital, Child Care and Growth, and on Mobility

Alamgir-Arif, Rizwana 27 March 2012 (has links)
This thesis contributes to the fields of Public Economics and Development Economics by studying human capital formation under three scenarios. Each scenario is represented in an individual paper between Chapters 2 to 4 of this thesis. Chapter 2 examines the effect of child care financing, through human capital formation, on growth and welfare. There is an extensive literature on the benefits of child care affordability on labour market participation. The overall inference that can be drawn is that the availability and affordability of appropriate child care may enhance parental time spent outside the home in furthering their economic opportunities. On another front, the endogenous growth literature exemplifies the merits of subsidizing human capital in generating growth. Other contributions demonstrate the negative implications for long-run growth and welfare of taxing the returns from human capital. This paper assesses the long run welfare implications of child care subsidies financed by proportional income taxes when human capital serves as the engine of growth. More specifically, using an overlapping-generations framework (OLG) with endogenous labour choice, we study the implications of a distortionary wage income tax on growth and welfare. When the revenues from proportional income taxes are channelled towards improving economic opportunities for both work and schooling investments in the form of child care subsidies, long run physical and human capital stock may increase. A higher level of growth may ensue leading to higher welfare. Chapter 3 answers the question of how child care subsidization works in the interest of skill formation, and specifically, whether child care subsidization policies can work to the effect of human capital subsidies. Numerous studies have highlighted the significance of early childhood learning through child care in determining the child’s longer-term outcomes.
The general conclusion has been that the quality of life for a child, higher earnings during later life, as well as the contributions the child makes to society as an adult can be traced back to exposures during the first few years of life. Early childhood education obtained through child care has been found to play a pivotal role in the human capital base amongst children that can benefit them in the long run. Based on this premise, the paper develops a simple Overlapping Generations Model (OLG) to find out the implications of early learning on future investments in human capital. It is shown that higher costs of child care will reduce skill investments of parents. Also, for some positive child care cost, higher human capital obtained through early childhood education can induce further skill investments amongst individuals with a higher willingness to substitute consumption intertemporally. Finally, intervention that can internalize the intra-generational human capital externalities arising from parental time spent outside the home - for which care/early learning is required to be purchased for the child - can unambiguously lead to higher skill investments by all individuals. Chapter 3 therefore proposes policy intervention, such as child care subsidization, since its effect will be akin to that of a human capital subsidy. The objective of Chapter 4 is to understand the implications of inter-regional mobility on higher educational investments of individuals and to study in detail the impact of mobility on government spending for education under two particular scenarios – one in which human capital externalities are non-localized and spill over to other regions (e.g. in the form of R&D), and another in which the externalities are localized and remain within the region. It is shown that mobility enhances private investments in education, and all else equal, welfare should be higher with increased migration.
The impacts on government educational expenditures are studied and some policy implications are drawn. In general, with non-localized externalities, all public expenditures decline under full-migration. Finally under localized externalities, the paper finds that governments will increase their financing of education to increasingly mobile individuals only when agglomeration benefits outweigh congestion costs from increases in regional population.
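The growth mechanism in Chapter 2 can be caricatured in a few lines: a wage tax funds a child-care subsidy, and subsidized early investment raises the next generation's human capital, which drives growth. The sketch below is a deliberately stylized iteration of that loop with invented parameters; it omits the endogenous labour choice and the distortionary margin the chapter actually analyzes.

```python
# Stylized OLG loop: a wage tax tau funds a child-care subsidy that
# raises children's human capital; human capital drives growth.
# All functional forms and parameters are illustrative assumptions.
def growth_path(tau, periods=30, A=1.0, B=3.0, gamma=0.3):
    h = 1.0
    path = [h]
    for _ in range(periods):
        wage = A * h                  # earnings proportional to human capital
        subsidy = tau * wage          # balanced budget: revenue -> child care
        # next generation's human capital rises with subsidized early investment
        h = B * h ** (1 - gamma) * (0.1 + subsidy) ** gamma
        path.append(h)
    return path

for tau in (0.0, 0.1, 0.3):
    path = growth_path(tau)
    g = path[-1] / path[-2] - 1
    print(f"tau={tau:.1f}: growth per generation after 30 periods = {g:.2%}")
```

In this toy economy, with no subsidy human capital converges to a steady state and growth dies out, while a positive subsidy sustains growth per generation, which is the qualitative channel the chapter formalizes.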
67

Multiple Versions and Overlap in Digital Text

Desmond Schmidt Unknown Date (has links)
This thesis is unusual in that it tries to solve a problem that exists between two widely separated disciplines: the humanities (and to some extent also linguistics) on the one hand and information science on the other. Chapter 1 explains why it is essential to strike a balance between study of the solution and problem domains. Chapter 2 surveys the various models of cultural heritage text, starting in the remote past, through the coming of the digital era to the present. It establishes why current models are outdated and need to be revised, and also what significance such a revision would have. Chapter 3 examines the history of markup in an attempt to trace how inadequacies of representation arose. It then examines two major problems in cultural heritage and linguistics digital texts: overlapping hierarchies and textual variation. It assesses previously proposed solutions to both problems and explains why they are all inadequate. It argues that overlapping hierarchies is a subset of the textual variation problem, and also why markup cannot be the solution to either problem. Chapter 4 develops a new data model for representing cultural heritage and linguistics texts, called a ‘variant graph’, which separates the natural overlapping structures from the content. It develops a simplified list-form of the graph that scales well as the number of versions increases. It also describes the main operations that need to be performed on the graph and explores their algorithmic complexities. Chapter 5 draws on research in bioinformatics and text processing to develop a greedy algorithm that aligns n versions with non-overlapping block transpositions in O(MN) time in the worst case, where M is the size of the graph and N is the length of the new version being added or updated. It shows how this algorithm can be applied to texts in corpus linguistics and the humanities, and tests an implementation of the algorithm on a variety of real-world texts.
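The "list form" of a variant graph can be pictured as an ordered sequence of (segment, versions) pairs: text shared by versions is stored once and tagged with every version containing it. The toy below builds that structure for two versions using Python's `difflib`; it is a two-version illustration of the data model only, not the thesis's greedy n-version alignment algorithm with block transpositions.

```python
import difflib

def merge(base, other, names=("A", "B")):
    """Merge two token lists into list-form variant-graph pairs:
    (segment, set of versions containing it), in document order."""
    graph = []
    sm = difflib.SequenceMatcher(a=base, b=other, autojunk=False)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            graph.append((base[i1:i2], {names[0], names[1]}))
        else:
            if i1 < i2:
                graph.append((base[i1:i2], {names[0]}))
            if j1 < j2:
                graph.append((other[j1:j2], {names[1]}))
    return graph

v1 = "the quick brown fox".split()
v2 = "the quick red fox".split()
for segment, versions in merge(v1, v2):
    print(f"{sorted(versions)}: {' '.join(segment)}")
```

Because shared segments are stored once with a version set, the representation grows with the amount of variation rather than with the number of versions, which is the scaling property the chapter argues for.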
68

Human capital, dynamic inefficiency and economic growth /

Lauri, Pekka. January 2004 (has links) (PDF)
School of Economics, Diss.--Helsinki, 2004.
69

Resolution of overlapping crystallization peaks by the Kurajica method, non-isothermal case, in tellurite and phosphate glasses /

Costa, Francine Bettio. January 2010 (has links)
Advisor: Victor Ciro Solano Reynoso / Committee: Walter Katsumi Sakamoto / Committee: Silvio Rainho Teixeira / Abstract: Glasses prepared for various applications have one point in common: the possibility of nucleating and crystallizing new phases, whether prepared from a melt or through heat treatment above the glass transition temperature. This work briefly presents the theoretical formulations of glass formation, focusing mainly on the processes of phase nucleation and growth. The crystallization of glasses can be studied by kinetic methods based on the theoretical description formulated by Johnson-Mehl-Avrami (JMA). These methods describe the processes of nucleation and crystallization using data from DTA/DSC curves. One such model, proposed by Kurajica, determines the kinetic parameters by resolving overlapping crystallization peaks. To apply this model, a tellurite glass of composition 80TeO2 - 10Nb2O5 - 8Li2O - 2V2O5 (mol%), denominated TNLV, and two phosphate glasses of composition 50P2O5 - 36Na2O - 10CdO - 4La2O3 (mol%), doped with 20 and 60 mg of CeO2 and denominated PNCL20 and PNCL60, were used. The kinetic study started with the identification of the crystalline phases formed, using X-ray diffraction (XRD). Considering that three crystalline phases form in each glassy system, the Kurajica equation was applied, using the Origin 7.0 software, to determine the kinetic parameters from fits to the DSC data. The Avrami coefficients (n) showed that for the TNLV system growth occurs in three dimensions with different mechanisms, while for the PNCL20 and PNCL60 systems growth occurs in three dimensions with an interface reaction. The PNCL20 system was observed to have higher activation energy than the PNCL60 system, and XRD patterns showed that the characteristic peaks of the CeO2-containing phases become higher and narrower in the PNCL60 system. These results indicate that cerium may favour crystallization of the glass. / Master's
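The core numerical task in resolving overlapping crystallization peaks is fitting the measured DSC trace as a sum of peak functions and reading each peak's parameters from the fit. The sketch below shows that idea on synthetic data using Gaussian peaks and `scipy.optimize.curve_fit`; the real Kurajica procedure fits JMA-derived (asymmetric) peak shapes, so the Gaussian form here is a simplifying assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(T, h, Tc, w):
    """Symmetric stand-in for a crystallization exotherm."""
    return h * np.exp(-((T - Tc) / w) ** 2)

def two_peaks(T, h1, T1, w1, h2, T2, w2):
    return gaussian(T, h1, T1, w1) + gaussian(T, h2, T2, w2)

# Synthetic DSC trace: two overlapping exotherms plus noise.
T = np.linspace(550, 750, 400)
true_params = (1.0, 630.0, 15.0, 0.6, 665.0, 20.0)
rng = np.random.default_rng(0)
signal = two_peaks(T, *true_params) + 0.01 * rng.standard_normal(T.size)

p0 = (0.8, 620.0, 10.0, 0.5, 670.0, 10.0)   # rough initial guesses
popt, _ = curve_fit(two_peaks, T, signal, p0=p0)
print("fitted peak temperatures:", popt[1], popt[4])
```

Once the composite curve is separated this way, each recovered peak can be analyzed individually for its kinetic parameters (e.g. the Avrami coefficient), which is what the deconvolution buys.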
70

Macroeconomic Consequences of Uncertain Social Security Reform

Hunt, Erin 06 September 2018 (has links)
The U.S. social security system faces funding pressure due to the aging of the population. This dissertation examines the welfare cost of social security reform and social security policy uncertainty under rational expectations and under learning. I provide an overview of the U.S. social security system in Chapter I. In Chapter II, I construct an analytically tractable two-period OLG model with capital, social security, and endogenous government debt. I demonstrate that the existence of steady states depends on social security parameters. I demonstrate a saddle-node bifurcation of steady states numerically, and demonstrate a transcritical bifurcation analytically. I show that if a proposed social security reform is large enough, or if the probability of reform is high enough, the economy will converge to a steady state. In Chapter III, I develop a three-period lifecycle model. The model is inherently forward looking, which allows for more interesting policy analysis. With three periods, the young worker's saving-consumption decision depends on her expectation of future capital. This forward-looking structure allows analysis of multi-period uncertainty. Analysis in the three-period model suggests that policy uncertainty may have lasting consequences, even after reform is enacted. In Chapter IV, I develop two theories of bounded rationality called life-cycle horizon learning and finite horizon life-cycle learning. In both models, agents use adaptive expectations to forecast future aggregates, such as wages and interest rates. This adaptive learning feature introduces cyclical dynamics along a transition path, which magnify the welfare cost of changes in policy and policy uncertainty. I model policy uncertainty as a stochastic process in which reform takes place in one of two periods as either a benefit cut or a tax increase. I find the welfare cost of this policy uncertainty is less than 0.25% of period consumption in a standard, rational expectations framework.
The welfare cost of policy uncertainty is larger in the learning models; the worst-off cohort in the life-cycle horizon learning model would be willing to give up 1.98% of period consumption to avoid policy uncertainty.
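The Chapter II analysis turns on locating steady states of a two-period OLG capital map as policy parameters vary. The sketch below does this numerically for a textbook Diamond-style economy with a pay-as-you-go payroll tax: it scans a grid for fixed points of the capital transition. It is a simplified illustration of the method only; the dissertation's model adds endogenous government debt and benefit-dependent savings, which is where its bifurcations arise, and all parameter values here are assumptions.

```python
import numpy as np

# Stylized Diamond OLG: log utility, Cobb-Douglas output k^alpha,
# pay-as-you-go payroll tax tau. One period is roughly 30 years.
alpha = 0.33
beta = 0.96 ** 30

def next_k(k, tau):
    w = (1 - alpha) * k ** alpha              # competitive wage
    return beta / (1 + beta) * (1 - tau) * w  # capital = young cohort's savings

def steady_states(tau, grid=np.linspace(1e-4, 0.5, 2000)):
    """Locate interior fixed points of k -> next_k(k) by a sign scan."""
    f = np.array([next_k(k, tau) - k for k in grid])
    sign_flips = np.where(np.diff(np.sign(f)) != 0)[0]
    return grid[sign_flips]

for tau in (0.0, 0.2, 0.4):
    print(f"tau={tau}: interior steady state near k = {steady_states(tau)}")
```

In this simple economy each tax rate yields a unique interior steady state that shrinks as the payroll tax rises; in the dissertation's richer model, the same kind of fixed-point analysis is where steady states can appear or vanish (the saddle-node and transcritical bifurcations).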
