261

Do Different Expenditure Mechanisms Invite Different Influences? Evidence from Research Expenditures of the National Institutes of Health

Kim, Jungbu 01 October 2007 (has links)
This study examines 1) whether the different expenditure mechanisms used by the National Institutes of Health (NIH) invite different sources of influence on the budget process and thus on expenditure outcomes and 2) whether the frequent use of omnibus appropriations bills since 1996 has changed the budget levels of the institutes under the NIH. The NIH uses two major expenditure mechanisms with very different beneficiary groups: the principal investigator-initiated Research Project Grants and Intramural Research. Drawing on theories of the motivations of public officials and of the political clout of agency heads, and considering empirical studies of the effect of omnibus legislation, this study reveals the following: 1) directors with more public service experience are more successful in securing a higher budget for their institutes; 2) while the directors are found to be driven by public service motivation, when it comes to expenditure allocation between the two mechanisms they behave in a self-interested manner, representing the interests of the institutional sectors where they have developed close relationships; 3) with ever-increasing budgets between 1983 and 2005, the institute directors have chosen to seek higher budgets rather than merely avoid the risk of budget cuts; 4) although the advisory boards are purportedly used to seek private input for priority setting, they tend to increase intramural expenditures more than external research project grant expenditures; 5) the practice of omnibus appropriations bills significantly benefits the institutes under the NIH, such that with omnibus legislation the institutes' total expenditures have more than doubled, controlling for the other factors; and 6) there are significant differences in the effects of the director's public service experience and of the number of advisory boards and their membership both (i) between disease-focused institutes and non-disease institutes and (ii) with and without omnibus legislation. The effects of the director's public service experience and the advisory boards have more budgetary impact in the general science-focused institutes than in their disease-focused counterparts. The influence of the advisory board and of the institute director's public service experience on the individual institute's expenditure level is significantly diminished by the frequent use of omnibus appropriations bills.
262

Algorithms for Transcriptome Quantification and Reconstruction from RNA-Seq Data

Mangul, Serghei 16 November 2012 (has links)
Massively parallel whole-transcriptome sequencing, with its ability to generate full transcriptome data at the single-transcript level, provides a powerful tool with multiple interrelated applications, including transcriptome reconstruction and gene/isoform expression estimation, also known as transcriptome quantification. As a result, whole-transcriptome sequencing has become the technology of choice for transcriptome analysis, rapidly replacing array-based technologies. The most commonly used transcriptome sequencing protocol, referred to as RNA-Seq, generates short (single or paired) sequencing tags from the ends of randomly generated cDNA fragments. The RNA-Seq protocol reduces sequencing cost and significantly increases data throughput, but it makes it computationally challenging to reconstruct full-length transcripts and to accurately estimate their abundances across all cell types. We focus on two main problems in transcriptome data analysis: transcriptome reconstruction and quantification. Transcriptome reconstruction, also referred to as novel isoform discovery, is the problem of reconstructing the transcript sequences from the sequencing data. Reconstruction can be done de novo or it can be assisted by existing genome and transcriptome annotations. Transcriptome quantification refers to the problem of estimating the expression level of each transcript. We present genome-guided and annotation-guided transcriptome reconstruction methods as well as methods for transcript- and gene-level expression estimation. Empirical results on both synthetic and real RNA-Seq datasets show that the proposed methods improve transcriptome quantification and reconstruction accuracy compared to previous methods.
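As a hedged illustration of the expectation-maximization principle that underlies many RNA-Seq quantification methods (not the specific algorithms developed in this thesis), the sketch below fractionally assigns multi-mapping reads to transcripts and re-estimates relative abundances. The `compat` matrix and the omission of transcript-length normalization are simplifying assumptions.

```python
import numpy as np

def em_abundance(compat, n_iter=100):
    """Minimal EM sketch for relative transcript abundances.

    compat: (n_reads, n_transcripts) 0/1 matrix; compat[r, t] = 1 if read r
    is compatible with transcript t. Every read is assumed compatible with
    at least one transcript; length normalization is omitted.
    """
    n_reads, n_tx = compat.shape
    theta = np.full(n_tx, 1.0 / n_tx)             # start from uniform abundances
    for _ in range(n_iter):
        # E-step: split each read across its compatible transcripts
        weights = compat * theta
        weights /= weights.sum(axis=1, keepdims=True)
        # M-step: abundances proportional to expected assigned read counts
        theta = weights.sum(axis=0) / n_reads
    return theta
```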
263

Optimal investment in incomplete financial markets

Schachermayer, Walter January 2002 (has links) (PDF)
We give a review of classical and recent results on maximization of expected utility for an investor who has the possibility of trading in a financial market. Emphasis will be given to the duality theory related to this convex optimization problem. For expository reasons we first consider the classical case where the underlying probability space is finite. This setting has the advantage that the technical difficulties of the proofs are reduced to a minimum, which allows for a clearer insight into the basic ideas, in particular the crucial role played by the Legendre transform. In this setting we state and prove an existence and uniqueness theorem for the optimal investment strategy, and its relation to the dual problem; the latter consists in finding an equivalent martingale measure optimal with respect to the conjugate of the utility function. We also discuss economic interpretations of these theorems. We then pass to the general case of an arbitrage-free financial market modeled by an R^d-valued semi-martingale. In this case some regularity conditions have to be imposed in order to obtain an existence result for the primal problem of finding the optimal investment, as well as for a proper duality theory. It turns out that one may give a necessary and sufficient condition, namely a mild condition on the asymptotic behavior of the utility function, its so-called reasonable asymptotic elasticity. This property allows for an economic interpretation motivating the term "reasonable". The remarkable fact is that this regularity condition only pertains to the behavior of the utility function, while we do not have to impose any regularity conditions on the stochastic process modeling the financial market (to be precise: of course, we have to require the arbitrage-freeness of this process in a proper sense; also we have to assume in one of the cases considered below that this process is locally bounded; but otherwise it may be an arbitrary R^d-valued semi-martingale). (author's abstract) / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
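For readers who want the objects named above in symbols, a compact statement in the standard notation of the duality literature (a sketch, not a transcription of the paper):

```latex
% Primal problem: maximize expected utility of terminal wealth over admissible strategies H
u(x) = \sup_{H} \mathbb{E}\big[\, U\big(x + (H \cdot S)_T\big) \big]

% Dual problem: optimize over equivalent martingale measures Q, with V the conjugate of U
v(y) = \inf_{Q \in \mathcal{M}^e} \mathbb{E}\Big[\, V\Big(y \tfrac{dQ}{dP}\Big) \Big],
\qquad V(y) = \sup_{x > 0} \big( U(x) - x y \big)

% Reasonable asymptotic elasticity: the mild growth condition on U referred to above
AE(U) = \limsup_{x \to \infty} \frac{x \, U'(x)}{U(x)} < 1
```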
264

Ethics in Family Businesses and Venture Capital Firms : How managers manage ethical considerations and steer behavior

de Groot, Niels, Antonsson, Jimmy January 2012 (has links)
Business ethics is a fragmented and well-covered scientific field. This Master's thesis concerns two types of organizations, family businesses (FBs) and venture capital firms (VCFs), in relation to the ethical decision-making process, a relatively unexplored field. The study sheds light on the influences acting on a manager when taking decisions that involve ethical considerations. Scholars such as Colby and Kohlberg (1987) and Rest et al. (1999) framed the field of individual moral development and of what makes managers unaware of their unethical decisions (Bazerman, 2008). However, a manager's ability to take decisions is also influenced by organizational factors and actors. The type of management and ownership structure, the expectations these actors have with regard to profits, and situational factors such as business strategy, company maturity, human and financial resources, and market position shape the environment in which managers can pursue ethical behavior, because they affect the decision-making process.

The purpose of this study is to understand how managers in FBs and VCFs manage ethical considerations. A conceptual framework was created as a foundation to visualize how ethical behavior is constructed, with a focus on the influences on, and the possibility of taking, decisions that include ethical considerations and content. We conducted eight semi-structured interviews with managers in three VCFs and two FBs in Sweden. The respondent companies and interviewees remain anonymous; we chose anonymity to increase the chance of honest and unbiased answers, since we saw a risk of receiving adjusted, image-improving responses.

The empirical findings show that the VCFs do not pay attention to ethical considerations to the same extent as the FBs do. The reasons identified were lack of time and know-how, limited financial and human resources, business maturity, and the pressure to generate a high ROI for the venture capitalist. Such a relationship makes managers focus on profit maximization and short-term objectives rather than ethical considerations. The two FBs did have ethical codes of conduct, constructed together with employees in order to achieve acceptance, integration, and efficiency with this management tool. The codes of conduct were created to steer behavior and ensure ethical commitment in certain areas of interest. The major finding is that situational factors either suffocate or give room for ethical considerations when decisions are taken.

In particular, this research contributes to the field of business ethics for VCFs in general, but also with regard to FBs. The results of this thesis are summarized in a decision-making model that differs from the ethical decision-making model we constructed from the theoretical research: reality did not exhibit the fragmented patterns we interpreted from the theory. We therefore created a new top-down model that takes into account the need for a decision, the actors and factors in the organization, the situational factors that influence what happens in the organization, and the outcome of the decision, which possibly contains ethical considerations and content.

With the improved model we visualize the decision-making process, taking the influences on ethical decision-making into consideration, and depict organizational reality as we discovered it. Key words: business ethics, ethical considerations, ethical code of conduct, moral awareness, ethical decision-making, ethical behavior, family business, venture capital firm, profit maximization, shareholder preferences.
265

Comparison Of Missing Value Imputation Methods For Meteorological Time Series Data

Aslan, Sipan 01 September 2010 (has links) (PDF)
Dealing with missing data in spatio-temporal time series constitutes an important branch of the general missing data problem. Since the statistical properties of time-dependent data are characterized by the sequentiality of observations, any interruption of consecutiveness in a time series causes severe problems. To make reliable analyses in this case, missing data must be handled cautiously without disturbing the series' statistical properties, mainly its temporal and spatial dependencies. In this study we aim to compare several imputation methods for the appropriate completion of missing values in spatio-temporal meteorological time series. For this purpose, several imputation methods are assessed on their performance for artificially created missing data in monthly total precipitation and monthly mean temperature series obtained from the climate stations of the Turkish State Meteorological Service. The artificially created missing data are estimated using six methods. Single Arithmetic Average (SAA), Normal Ratio (NR), and NR Weighted with Correlations (NRWC) are the three simple methods used in the study. In addition, we use two computationally intensive methods for missing data imputation: a Multi-Layer Perceptron type Neural Network (MLPNN) and Markov Chain Monte Carlo based on the Expectation-Maximization algorithm (EM-MCMC). We also propose a modification of the EM-MCMC method in which the results of the simple imputation methods are used as auxiliary variables. Besides an accuracy measure based on squared errors, we propose the Correlation Dimension (CD) technique, an important subject of nonlinear dynamic time series analysis, for the appropriate evaluation of imputation performance.
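A hedged sketch of two of the simple methods named above (SAA and NR), assuming the long-term station normals are available; the NRWC variant, which additionally weights neighbors by inter-station correlations, is not shown, and the function and parameter names are illustrative.

```python
import numpy as np

def single_arithmetic_average(neighbor_obs):
    """SAA: plain mean of the simultaneous observations at neighboring stations."""
    return float(np.mean(neighbor_obs))

def normal_ratio(neighbor_obs, neighbor_normals, target_normal):
    """NR: neighbor values scaled by the ratio of long-term normals.

    neighbor_obs:     observations at neighboring stations for the missing time step
    neighbor_normals: long-term means (normals) of those stations
    target_normal:    long-term mean of the station with the missing value
    """
    obs = np.asarray(neighbor_obs, dtype=float)
    normals = np.asarray(neighbor_normals, dtype=float)
    return float(np.mean((target_normal / normals) * obs))
```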
266

Passive Haptic Robotic Arm Design

Yilmaz, Serter 01 October 2010 (has links) (PDF)
Implant surgery replaces a missing tooth to restore the functionality and look of a natural tooth. Improper placement of an implant increases the recuperation period and reduces functionality. The aim of this thesis is to design a passive haptic robotic arm to guide the dentist during implant surgery. In this thesis, the optimum design of a 6R passive haptic robotic arm is achieved. The methodology used in the optimization problem involves minimizing end-effector parasitic forces/torques while maximizing the transparency of the haptic device. Transparency of a haptic device is defined as the realism of the forces generated by the device in the real world compared to the forces in the virtual world. The multivariable objective function, which includes the dynamic equations of the 6R robotic arm, is derived, and the constraints are determined using the kinematic equations. The optimization problem is solved using SQP and a GA. The link lengths and other relevant parameters, along with the location of the tool path, are optimized, and the end-effector parasitic torques/forces are significantly minimized. The results of the two optimization techniques proved to be nearly the same, so a global optimum solution has been found in the search space. The main contribution of this study is taking the spatial nonlinear dynamics into consideration to reduce parasitic torques. In addition, a mechanical brake is designed as a passive actuator. The mechanical brake consists of a cone-based braking system actuated by a DC motor. Three prototypes were manufactured to test the performance of the mechanical brake, and the final design indicates that the mechanical brake can be used as a passive actuator.
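A minimal sketch of how an SQP solver can be applied to a link-length optimization of this kind, using SciPy's SLSQP implementation. The objective and the reach constraint below are placeholders, not the thesis's 6R spatial dynamic model.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder cost standing in for the end-effector parasitic force/torque
# measure derived from the arm's spatial dynamics (illustrative only).
def parasitic_cost(lengths):
    l1, l2, l3 = lengths
    return (l1 * l2) ** 2 + (l2 - l3) ** 2 + 0.1 * l1 ** 2

# Illustrative kinematic constraint: total reach must cover an assumed tool-path radius.
reach = {"type": "ineq", "fun": lambda l: np.sum(l) - 0.6}

x0 = np.array([0.3, 0.3, 0.3])          # initial link lengths [m]
bounds = [(0.1, 0.5)] * 3               # per-link length limits [m]

result = minimize(parasitic_cost, x0, method="SLSQP",
                  bounds=bounds, constraints=[reach])
print(result.x, result.fun)             # optimized lengths and cost
```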
267

Investigation of probabilistic principal component analysis compared to proper orthogonal decomposition methods for basis extraction and missing data estimation

Lee, Kyunghoon 21 May 2010 (has links)
The identification of flow characteristics and the reduction of high-dimensional simulation data have capitalized on an orthogonal basis achieved by proper orthogonal decomposition (POD), also known as principal component analysis (PCA) or the Karhunen-Loeve transform (KLT). In the realm of aerospace engineering, an orthogonal basis is versatile for diverse applications, especially associated with reduced-order modeling (ROM) as follows: a low-dimensional turbulence model, an unsteady aerodynamic model for aeroelasticity and flow control, and a steady aerodynamic model for airfoil shape design. Provided that a given data set lacks parts of its data, POD is required to adopt a least-squares formulation, leading to gappy POD, using a gappy norm that is a variant of an L2 norm dealing with only known data. Although gappy POD is originally devised to restore marred images, its application has spread to aerospace engineering for the following reason: various engineering problems can be reformulated in forms of missing data estimation to exploit gappy POD. Similar to POD, gappy POD has a broad range of applications such as optimal flow sensor placement, experimental and numerical flow data assimilation, and impaired particle image velocimetry (PIV) data restoration. Apart from POD and gappy POD, both of which are deterministic formulations, probabilistic principal component analysis (PPCA), a probabilistic generalization of PCA, has been used in the pattern recognition field for speech recognition and in the oceanography area for empirical orthogonal functions in the presence of missing data. In formulation, PPCA presumes a linear latent variable model relating an observed variable with a latent variable that is inferred only from an observed variable through a linear mapping called factor-loading. To evaluate the maximum likelihood estimates (MLEs) of PPCA parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). By virtue of the EM algorithm, the EM-PCA is capable of not only extracting a basis but also restoring missing data through iterations whether the given data are intact or not. Therefore, the EM-PCA can potentially substitute for both POD and gappy POD inasmuch as its accuracy and efficiency are comparable to those of POD and gappy POD. In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent such that the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they distinctively approximate missing data due to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose both gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. 
Furthermore, this research delves into the ramifications of the different bases and norms that will eventually characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. Ultimately, a norm reflecting a curve-fitting method is found to affect estimation error reduction more significantly than a basis does for two example test data sets: one missing data at only a single snapshot and the other missing data across all the snapshots. From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data-missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other because of the computational cost of a coefficient evaluation, resulting from a norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots and the EM-PCA for an incomplete data set involving multiple data-missing snapshots. Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data. The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit considerably good agreement with those directly obtained by NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost-effective than gappy POD at repairing spurious PIV measurements obtained from acoustically-excited, bluff-body jet flow experiments. The EM-PCA reduces the computational cost by factors of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set containing missing data over an entire data set.
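A minimal sketch of the gappy POD coefficient evaluation discussed above: the POD coefficients of an incomplete snapshot are fit by least squares over the known entries only (the gappy norm), and the basis then reconstructs the missing entries. Array names are illustrative.

```python
import numpy as np

def gappy_pod_fill(snapshot, known_mask, basis):
    """Fill missing entries of one snapshot using a precomputed POD basis.

    snapshot:   (n,) float vector; values at missing entries are ignored
    known_mask: (n,) boolean, True where the data are known
    basis:      (n, k) matrix of POD modes computed from complete snapshots
    """
    # Least-squares fit of the POD coefficients using only the known rows
    coeffs, *_ = np.linalg.lstsq(basis[known_mask], snapshot[known_mask], rcond=None)
    filled = snapshot.copy()
    filled[~known_mask] = basis[~known_mask] @ coeffs   # reconstruct the gaps
    return filled
```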
268

Resource Allocation Methodologies with Fractional Reuse Partitioning in Cellular Networks

Aki, Hazar 01 January 2011 (has links)
Conventional cellular systems have not taken full advantage of fractional frequency reuse and adaptive allocation because of their fixed cluster size and uniform channel assignment procedures. This problem has more severe consequences for cutting-edge 4G standards with higher data-rate requirements, such as 3GPP LTE and IEEE 802.16m (WiMAX). In this thesis, three different partitioning schemes for adaptive clustering with fractional frequency reuse are proposed and investigated. An overlaid cellular clustering scheme that uses adaptive fractional frequency reuse factors can provide a better end-user experience by exploiting a high signal-to-interference ratio (SIR). The proposed methods are studied via simulations, and the results show that adaptive clustering with the different partitioning methods provides better capacity and grade of service (GoS) compared to conventional cellular architecture methodologies.
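As a rough textbook-style illustration of why the reuse factor drives the SIR exploited above (not the thesis's simulation model), the first-tier co-channel approximation for a hexagonal layout is SIR ≈ (√(3N))^γ / 6 for cluster size N and path-loss exponent γ:

```python
import numpy as np

def sir_db(cluster_size, path_loss_exponent=4.0, interferers=6):
    """First-tier co-channel SIR approximation for a hexagonal cellular layout."""
    q = np.sqrt(3.0 * cluster_size)           # co-channel reuse ratio D/R
    sir = q ** path_loss_exponent / interferers
    return 10.0 * np.log10(sir)

# Example: tighter reuse lowers SIR, which fractional reuse schemes try to recover
for n in (1, 3, 7):
    print(f"cluster size {n}: {sir_db(n):.1f} dB")
```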
269

Distributed estimation in resource-constrained wireless sensor networks

Li, Junlin 13 November 2008 (has links)
Wireless sensor networks (WSNs) are an emerging technology with a wide range of applications including environmental monitoring, security and surveillance, health care, and smart homes. Given the severe resource constraints of wireless sensor networks, in this research we address the distributed estimation of unknown parameters by studying the interplay among resources, distortion, and lifetime, which are three major concerns for WSN applications. The objective of the proposed research is to design efficient distributed estimation algorithms for resource-constrained wireless sensor networks, where the major challenge is the integrated design of local signal processing operations and strategies for inter-sensor communication and networking so as to achieve a desirable tradeoff among resource efficiency (bandwidth and energy), system performance (estimation distortion and network lifetime), and implementation simplicity. More specifically, we address efficient distributed estimation from the following perspectives: (i) a rate-distortion perspective, where the objective is to study the rate-distortion bound for distributed estimation and to design practical distributed algorithms suitable for wireless sensor networks to approach the performance bound by optimally allocating the bit rate for each sensor; (ii) an energy-distortion perspective, where the objective is to study the energy-distortion bound for distributed estimation and to design practical distributed algorithms suitable for wireless sensor networks to approach the performance bound by optimally allocating the bit rate and transmission energy for each sensor; and (iii) a lifetime-distortion perspective, where the objective is to maximize the network lifetime while meeting estimation distortion requirements by jointly optimizing the source coding, source throughput, and multi-hop routing. Also, energy-efficient cluster-based distributed estimation is studied, where the objective is to minimize the overall energy cost by appropriately dividing the sensor field into multiple clusters with data aggregation at cluster heads.
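As a hedged toy example of distributed parameter estimation with a fusion center (ignoring the quantization, energy, and routing aspects the thesis optimizes), local sensor estimates can be fused with inverse-variance (BLUE) weights:

```python
import numpy as np

def blue_fusion(local_estimates, noise_variances):
    """Fuse local sensor estimates of a common scalar parameter (BLUE weights)."""
    est = np.asarray(local_estimates, dtype=float)
    var = np.asarray(noise_variances, dtype=float)
    w = 1.0 / var                               # weight ~ 1 / sensor noise variance
    fused = np.sum(w * est) / np.sum(w)
    fused_variance = 1.0 / np.sum(w)            # variance of the fused estimate
    return fused, fused_variance
```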
270

Three essays on valuation and investment in incomplete markets

Ringer, Nathanael David 01 June 2011 (has links)
Incomplete markets provide many challenges for both investment decisions and valuation problems. While both problems have received extensive attention in complete markets, there remain many open areas in the theory of incomplete markets. We present the results in three parts. In the first essay we consider the Merton investment problem of optimal portfolio choice when the traded instruments are the set of zero-coupon bonds. Working within a Markovian Heath-Jarrow-Morton framework of the interest rate term structure driven by an infinite dimensional Wiener process, we give sufficient conditions for the existence and uniqueness of an optimal investment strategy. When there is uniqueness, we provide a characterization of the optimal portfolio. Furthermore, we show that a specific Gauss-Markov random field model can be treated within this framework, and explicitly calculate the optimal portfolio. We show that the optimal portfolio in this case can be identified with the discontinuities of a certain function of the market parameters. In the second essay we price a claim, using the indifference valuation methodology, in the model presented in the first section. We appeal to the indifference pricing framework instead of the classic Black-Scholes method due to the natural incompleteness in such a market model. Because we price time-sensitive interest rate claims, the units in which we price are very important. This will require us to take care in formulating the investor’s utility function in terms of the units in which we express the wealth function. This leads to new results, namely a general change-of-numeraire theorem in incomplete markets via indifference pricing. Lastly, in the third essay, we propose a method to price credit derivatives, namely collateralized debt obligations (CDOs) using indifference. We develop a numerical algorithm for pricing such CDOs. The high illiquidity of the CDO market coupled with the allowance of default in the underlying traded assets creates a very incomplete market. We explain the market-observed prices of such credit derivatives via the risk aversion of investors. In addition to a general algorithm, several approximation schemes are proposed. / text
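In symbols, the (buyer's) indifference price referred to in the second and third essays is the standard quantity π(C) obtained by equating two optimal expected utilities; the notation below is generic, not taken from the thesis:

```latex
% pi(C) makes the investor indifferent between buying the claim C at price pi and not trading it
\sup_{H}\, \mathbb{E}\Big[ U\big( x - \pi(C) + (H \cdot S)_T + C \big) \Big]
  \;=\; \sup_{H}\, \mathbb{E}\Big[ U\big( x + (H \cdot S)_T \big) \Big]
```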
