About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
481

An Annotated Guide to the Songs of Karl Goldmark

Spivak, Mary Amanda 21 April 2008
The purpose of this study was to examine and provide a pedagogical content analysis of the published songs of Karl Goldmark (1830-1915), an Austrian composer of the Romantic era. The songs' characteristics were evaluated to determine the level of singer for which each would be appropriate. An annotation format was devised for the analysis of each song, covering subject matter, difficulty level, range, tessitura, tempo indication, duration, and unique characteristics of the vocal and piano lines. The detailed entries allow a voice teacher to evaluate individual songs quickly and accurately and to assess their value for each student, with particular attention to suitability for beginning, intermediate, and advanced singers. These levels generally correspond to freshman or sophomore, junior or senior, and graduate students, respectively. The results indicate a range of difficulty levels across the songs, with moderate difficulty being the most common, and show that there are valuable songs for students at all levels. Areas for further study are included.
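The annotation format described in this abstract maps naturally onto a structured record. The sketch below is purely illustrative: the field names are adapted from the areas listed above, and the example values are invented, not drawn from the guide itself.

```python
from dataclasses import dataclass

@dataclass
class SongAnnotation:
    """One entry in an annotated song guide; fields follow the
    areas of analysis listed in the abstract."""
    title: str
    subject_matter: str
    difficulty: str          # "beginning", "intermediate", or "advanced"
    range_: str              # e.g. "D4-A5"; trailing underscore avoids the builtin
    tessitura: str
    tempo_indication: str
    duration_minutes: float
    vocal_line_notes: str
    piano_line_notes: str

    def suits(self, student_level: str) -> bool:
        # in this sketch each song is assigned exactly one level
        return self.difficulty == student_level
```

A teacher-facing tool could then filter a collection of such records by `suits("intermediate")` to shortlist repertoire for a junior- or senior-level student.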
482

An examination of factors contributing to a reduction in race-based subgroup differences on a constructed response paper-and-pencil test of achievement

Edwards, Bryan D. 30 September 2004
The objectives of the present study were to: (a) replicate the results of Arthur et al. (2002) by comparing race-based subgroup differences on a multiple-choice and a constructed response test in a laboratory setting using a larger sample, (b) extend their work by investigating the role of reading ability, test-taking skills, and test perceptions in explaining why subgroup differences are reduced when the test format is changed from multiple-choice to constructed response, and (c) assess the criterion-related validity of the constructed response test. Two hundred sixty White and 204 African American participants completed a demographic questionnaire, the Test Attitudes and Perceptions Survey, a multiple-choice or constructed response test, the Raven's Advanced Progressive Matrices Short Form, the Nelson-Denny Reading Test, the Experimental Test of Testwiseness, and a post-test questionnaire. In general, the pattern of results supported the hypotheses in the predicted direction. For example, although subgroup differences in performance were smaller on the constructed response test than on the multiple-choice test, the reduction was not statistically significant. However, analyses by specific test content yielded a significant reduction in subgroup differences on the science reasoning section. In addition, all of the hypothesized study variables, with the exception of face validity, were significantly related to test performance. Significant subgroup differences were also obtained for all study variables except belief in tests and stereotype threat. The results also indicate that reading ability, test-taking skills, and perceived fairness partially mediated the relationship between race and test performance. Finally, the criterion-related validity of the constructed response test was stronger than that of the multiple-choice test.
The results suggest that the constructed response format investigated in the present study may be a viable alternative to the traditional multiple-choice format in high-stakes testing: it addresses the organizational dilemma of using the most valid predictors of job performance while simultaneously reducing subgroup differences and subsequent adverse impact on tests of knowledge, skill, ability, and achievement. However, additional research is needed to further demonstrate the appropriateness of the constructed response format as an alternative to traditional testing methods.
483

Decision Support System (DSS) for Machine Selection: A Cost Minimization Model

Mendez Pinero, Mayra I. 16 January 2010
Within any manufacturing environment, the selection of production or assembly machines is part of management's day-to-day responsibilities. This is especially true when multiple types of machines can be used to perform each assembly or manufacturing process, so it is critical to select machines optimally when multiple related assembly machines are available. The objective of this research is to develop and present a model that can guide management in machine selection decisions for parallel, non-identical, related electronics assembly machines. A model-driven Decision Support System (DSS) is used to solve the problem, with emphasis on optimizing available resources and minimizing production disruption, and thus minimizing cost. The variables that affect electronics product costs are considered in detail. The first part of the DSS was developed in Microsoft Excel as an interactive tool. The second part was developed through mathematical modeling in the AMPL9 mathematical programming language, with the solver CPLEX90 as the optimization tool. The mathematical model minimizes the total cost of all products using logic similar to the shortest processing time (SPT) scheduling rule, and balances machine workload up to an allowed imbalance factor. The model also considers the impact on product cost of expediting production. Different scenarios were studied during the sensitivity analysis, including varying the number of assembled products, the number of machines at each assembly process, the imbalance factor, and the coefficient of variation (CV) of the assembly processes. The results show that as the CV increased, the total cost of all products assembled increased, due to the complexity of balancing machine workload for a large number of products.
Also, when the number of machines increased for a constant number of products, the total cost increased because it is more difficult to keep the machines balanced. Similar results were obtained when a tighter imbalance factor was used.
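The SPT-based dispatching logic can be sketched in a few lines. This is a hypothetical illustration of greedy SPT-ordered assignment to parallel, non-identical machines, not the thesis's AMPL/CPLEX formulation; it omits the imbalance-factor constraint and expediting costs, and all names are invented.

```python
def spt_assign(jobs, machines):
    """Greedy list scheduling in shortest-processing-time (SPT) order.

    jobs: {job_id: {machine_id: processing_time}} for parallel,
    non-identical machines; machines: iterable of machine ids.
    Each job goes to the machine that minimizes its completion time
    (current load + processing time), which also tends to keep the
    machine workloads balanced."""
    load = {m: 0.0 for m in machines}
    assignment = {}
    # SPT order: dispatch jobs by their best-case processing time
    for job in sorted(jobs, key=lambda j: min(jobs[j].values())):
        chosen = min(jobs[job], key=lambda m: load[m] + jobs[job][m])
        load[chosen] += jobs[job][chosen]
        assignment[job] = chosen
    return assignment, load
```

A full cost-minimization model would replace the greedy rule with an integer program, but the greedy version shows why a high CV makes balancing harder: widely varying processing times leave larger gaps between machine loads.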
484

Reduction of Dimensionality in Spatiotemporal Models

Sætrom, Jon January 2010
No description available.
485

A Difficult Choice? : A study of which factors influence the choice of auditor

Embretzén, Johanna, Nilsson, Marie, Olofsson, Sandra January 2007
Because all joint-stock companies in Sweden need to have an auditor, we thought it would be interesting to study how companies choose their auditor and which factors influence that choice. Our research question is therefore: "Which factors influence joint-stock companies in their choice of auditor?" The main purpose of the study is to gain a better and deeper understanding of the subject. To clarify this purpose we established three sub-purposes:
• Establish which determinants play a significant role in a company's choice of auditor.
• Investigate whether there are any differences between companies of different sizes.
• Research how auditors perceive the relationship with their clients.
We performed a study with a subjective view of reality and, to gain a deeper understanding of the subject of our thesis, chose a qualitative research method. Because the purpose of this study is to gain a better understanding, the hermeneutic point of view is the most suitable alternative, as it brings attention to understanding and realistic thinking. During the study we conducted a total of eight interviews: six with joint-stock companies of different sizes managed by the owner, and two with auditors, both working in a "Big Four" audit firm. After the interviews we compared the collected data with our chosen theories to see whether there were any patterns from which we could draw conclusions; this reflects the deductive approach of our study. Our study shows that recommendations and personal relationships are the most important determining factors for a company when choosing an auditor. Recommendations from friends and family are the most common way to get in contact with an auditor. Prior to the study we believed that companies of various sizes would have different opinions about which factors influence their choice of auditor, but the study shows no significant differences in how the companies choose their auditor.
The auditors' perceptions of what the companies expect of their auditors generally agree with what the companies expressed during the interviews. However, a majority of the respondents in the researched companies want their auditor to be more proactive and more knowledgeable about the company.
486

Sparse Value Function Approximation for Reinforcement Learning

Painter-Wakefield, Christopher Robert January 2013
A key component of many reinforcement learning (RL) algorithms is the approximation of the value function. The design and selection of features for approximation in RL is crucial and an ongoing area of research. One approach to the problem of feature selection is to apply sparsity-inducing techniques in learning the value function approximation; such sparse methods tend to select relevant features and ignore irrelevant ones, thus automating the feature selection process. This dissertation describes three contributions in the area of sparse value function approximation for reinforcement learning.

One method for obtaining sparse linear approximations is to include in the objective function a penalty on the sum of the absolute values of the approximation weights. This L1 regularization approach was first applied to temporal difference learning in the LARS-inspired batch learning algorithm LARS-TD. In our first contribution, we define an iterative update equation whose fixed point is the L1 regularized linear fixed point of LARS-TD. The iterative update gives rise naturally to an online stochastic approximation algorithm. We prove convergence of the online algorithm and show that the L1 regularized linear fixed point is an equilibrium fixed point of the algorithm. We demonstrate the ability of the algorithm to converge to the fixed point, yielding a sparse solution with modestly better performance than unregularized linear temporal difference learning.

Our second contribution extends LARS-TD to integrate policy optimization with sparse value learning. We extend the L1 regularized linear fixed point to include a maximum over policies, defining a new, "greedy" fixed point. The greedy fixed point adds a new invariant to the set which LARS-TD maintains as it traverses its homotopy path, giving rise to a new algorithm integrating sparse value learning and optimization. The new algorithm is demonstrated to be similar in performance to policy iteration using LARS-TD.

Finally, we consider another approach to sparse learning: a simple algorithm that greedily adds new features. Such algorithms have many of the good properties of the L1 regularization methods while also being extremely efficient and, in some cases, allowing theoretical guarantees on recovery of the true form of a sparse target function from sampled data. We consider variants of orthogonal matching pursuit (OMP) applied to RL. The resulting algorithms are analyzed and compared experimentally with existing L1 regularized approaches. We demonstrate that perhaps the most natural scenario in which one might hope to achieve sparse recovery fails; however, one variant provides promising theoretical guarantees under certain assumptions on the feature dictionary, while another variant empirically outperforms prior methods in both approximation accuracy and efficiency on several benchmark problems.
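To convey the flavor of an online L1-regularized TD update, here is a toy TD(0) learner that applies a soft-threshold step after each weight update — the standard way an L1 penalty induces sparsity in stochastic updates. This is a sketch under invented parameter values, not the dissertation's exact algorithm or its convergence conditions.

```python
def soft_threshold(x, t):
    # shrink x toward zero by t; values within t of zero become exactly 0
    return max(x - t, 0.0) if x > 0 else min(x + t, 0.0)

def online_td_l1(samples, n_features, alpha=0.05, gamma=0.9, lam=0.01, epochs=200):
    """Toy online TD(0) with an L1 soft-threshold applied after each
    weight update. samples: list of (phi, reward, phi_next) tuples,
    where phi and phi_next are feature vectors (lists of floats)."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for phi, r, phi_next in samples:
            v = sum(wi * fi for wi, fi in zip(w, phi))
            v_next = sum(wi * fi for wi, fi in zip(w, phi_next))
            delta = r + gamma * v_next - v  # TD error
            w = [soft_threshold(wi + alpha * delta * fi, alpha * lam)
                 for wi, fi in zip(w, phi)]
    return w
```

On a one-state problem with a single active feature and reward 1, the weight settles near 1 minus the regularization bias, while weights on inactive features stay exactly zero — the sparsity effect the dissertation exploits.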
487

Robust inference of gene regulatory networks : System properties, variable selection, subnetworks, and design of experiments

Nordling, Torbjörn E. M. January 2013
In this thesis, inference of biological networks from in vivo data generated by perturbation experiments is considered, i.e. deduction of the causal interactions that exist among the observed variables. Knowledge of such regulatory influences is essential in biology. A system property, interampatteness, is introduced that explains why the variation in existing gene expression data is concentrated in a few "characteristic modes" or "eigengenes", and why previously inferred models have a large number of false positive and false negative links. An interampatte system is characterized by strong INTERactions enabling simultaneous AMPlification and ATTEnuation of different signals, and we show that perturbation of individual state variables, e.g. genes, typically leads to ill-conditioned data with both characteristic and weak modes. The weak modes are typically dominated by measurement noise due to poor excitation, and their existence hampers network reconstruction. The excitation problem is solved by iterative design of correlated multi-gene perturbation experiments that counteract the intrinsic signal attenuation of the system: each new perturbation is designed so that the expected response practically spans an additional dimension of the state space. The proposed design is numerically demonstrated for the Snf1 signalling pathway in S. cerevisiae. The impact of unperturbed and unobserved latent state variables, which exist in any real biological system, on the inferred network and on the required experimental set-up is analysed. Their existence implies that, in general, a subnetwork of pseudo-direct causal regulatory influences, accounting for all environmental effects, is inferred. In principle, the number of latent states and of different paths between the nodes of the network can be estimated, but their identity cannot be determined unless they are observed or perturbed directly.
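The design criterion that each new perturbation should span an additional dimension of the state space can be illustrated with a small Gram-Schmidt sketch. The function name and the coordinate-direction search are invented for illustration; the thesis's actual design also accounts for the system's signal attenuation, which this toy version ignores.

```python
def next_perturbation_direction(responses, dim):
    """Return a unit vector orthogonal to the span of the observed
    response vectors, i.e. a direction of state space not yet excited,
    or None if the responses already span the space."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    # orthonormalize the observed responses (classical Gram-Schmidt)
    basis = []
    for r in responses:
        v = list(r)
        for b in basis:
            p = dot(v, b)
            v = [x - p * y for x, y in zip(v, b)]
        n = dot(v, v) ** 0.5
        if n > 1e-9:
            basis.append([x / n for x in v])

    # search coordinate directions for a component outside the span
    for i in range(dim):
        v = [1.0 if j == i else 0.0 for j in range(dim)]
        for b in basis:
            p = dot(v, b)
            v = [x - p * y for x, y in zip(v, b)]
        n = dot(v, v) ** 0.5
        if n > 1e-9:
            return [x / n for x in v]
    return None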
Network inference is recognized as a variable/model selection problem and solved by considering all possible models of a specified class that can explain the data at a desired significance level, and by classifying as existing only the links present in all of these models. As shown, these links can be determined without any parameter estimation by reformulating the variable selection problem as a robust rank problem. Solution of the rank problem enables assignment of confidence to individual interactions without resorting to any approximation or asymptotic results. This is demonstrated by reverse engineering of the synthetic IRMA gene regulatory network from published data. A previously unknown activation of transcription of SWI5 by CBF1 in the IRMA strain of S. cerevisiae is proven to exist, which serves to illustrate that even the accumulated knowledge of well studied genes is incomplete.
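The "links present in all acceptable models" rule can be sketched directly. This toy version assumes the goodness-of-fit of every candidate model is already known, which sidesteps the robust rank formulation the thesis uses to avoid parameter estimation; the link names below are illustrative only.

```python
def robust_links(model_fits, threshold):
    """Classify a link as existing only if it appears in every model
    that explains the data acceptably well.

    model_fits: {frozenset_of_links: goodness_of_fit}; a model is
    acceptable when its fit meets the threshold."""
    acceptable = [set(m) for m, fit in model_fits.items() if fit >= threshold]
    if not acceptable:
        return set()
    # intersect the link sets of all acceptable models
    common = acceptable[0]
    for m in acceptable[1:]:
        common &= m
    return common
```

Raising the significance threshold shrinks the set of acceptable models, so the intersection (and hence the set of confidently classified links) can only grow or stay the same per model removed, until no model qualifies.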
488

Artificial selection for large and small relative brain size in guppies (Poecilia reticulata) results in differences in cognitive ability

Bundsen, Andreas January 2012
Vertebrate brain size is remarkably variable at all taxonomic levels. Brains of mammals, for example, range from 0.1 gram in small bats (Chiroptera) to about 8-9 kilos in sperm whales (Physeter macrocephalus). But what does this variation in size really mean? The link between brain size and cognition is debated due to, for instance, the difficulties of comparing cognitive ability in different species. A large number of comparative studies continue to provide information about correlations found both within and between species. The relative size of the brain is an example of a popular measurement that correlates with cognitive ability. But to date, no experimental studies have yielded any proof of causality between relative brain size and cognitive ability. Here I used guppies selected for either large or small relative brain size to investigate differences in cognitive performance on a quantity discrimination task. The results from this experiment provide experimental evidence that relative brain size is important for cognitive ability, and that a difference in cognitive ability could be obtained after only two generations of selection on relative brain size in a vertebrate. / Artificial Selection on Relative Brain Size in the Guppy Reveals Costs and Benefits of Evolving a Larger Brain
489

Individualized selection of learning objects

Liu, Jian 15 May 2009
Rapidly evolving Internet and web technologies, together with international efforts to standardize learning object metadata, give learners in a web-based educational system ubiquitous access to multiple learning resources. It is becoming both more necessary and more feasible to provide individualized help with selecting learning materials, so that learners can make the most suitable choice among many alternatives.

A framework for individualized learning object selection, called Eliminating and Optimized Selection (EOS), is presented in this thesis. This framework contains a suggestion for extending learning object metadata specifications and presents an approach to selecting a short list of suitable learning objects appropriate for an individual learner in a particular learning context. The key features of the EOS approach are to evaluate the suitability of a learning object in its situated context and to refine the evaluation using available historical usage information about the learning object. A Learning Preference Survey was conducted to discover and determine the relationships between the importance of learning object attributes and learner characteristics. Two weight models, a Bayesian Network Weight Model and a Naïve Bayes Model, were derived from the data collected in the survey. Given a particular learner, both models provide a set of personal weights for the learning object features required by the individualized selection.

The optimized selection approach was demonstrated and verified using simulated selections. Seventy simulated learning objects were evaluated for three simulated learners within simulated learning contexts. Both the Bayesian Network Weight Model and the Naïve Bayes Model were used in the selection of simulated learning objects. The results produced by the two algorithms were compared, and the two algorithms correlated highly with each other in the domain where the testing was conducted.

A Learning Object Selection Study was performed to validate the learning object selection algorithms against human experts. By comparing machine selection with human expert selection, we found that the agreement between machine selection and human expert selection was higher than the agreement among the human experts alone.
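The eliminate-then-optimize idea can be sketched as a two-step filter-and-rank. Attribute names, the score scale, and the `excluded` flag are invented for illustration; in EOS the weights would come from one of the two learner-specific weight models.

```python
def rank_learning_objects(objects, weights, top_n=3):
    """Eliminate-then-optimize selection sketch: drop objects that fail
    a hard constraint, then rank the rest by a weighted sum of
    per-attribute suitability scores in [0, 1]."""
    def score(obj):
        # weighted sum of whatever attribute scores the object carries
        return sum(weights.get(a, 0.0) * v for a, v in obj["scores"].items())

    # eliminating step: remove objects that violate a hard constraint
    feasible = [o for o in objects if not o.get("excluded", False)]
    # optimized selection step: return the top-scoring short list
    return [o["id"] for o in sorted(feasible, key=score, reverse=True)[:top_n]]
```

The short list, rather than a single winner, matches the framework's goal of presenting the learner with a few suitable alternatives.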
490

Metareasoning about propagators for constraint satisfaction

Thompson, Craig Daniel Stewart 11 July 2011
Given the breadth of constraint satisfaction problems (CSPs) and the wide variety of CSP solvers, it is often very difficult to determine a priori which solving method is best suited to a problem. This work explores the use of machine learning to predict which solving method will be most effective for a given problem. We use four different problem sets to identify the CSP attributes that can be used to decide which solving method should be applied. After choosing an appropriate set of attributes, we determine how well J48 decision trees can predict which solving method to apply. Furthermore, we take a cost-sensitive approach that emphasizes problem instances where there is a great difference in runtime between algorithms. We also attempt to use information gained on one class of problems to inform decisions about a second class of problems. Finally, we show that the additional cost of deciding which method to apply is outweighed by the time savings compared to applying the same solving method to all problem instances.
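The cost-sensitive idea — weighting each instance by the runtime gap between algorithms — can be illustrated with a one-level decision tree (a stump) instead of full J48. Everything here is a toy stand-in: the feature name, the regret-based cost, and the exhaustive threshold search are illustrative only.

```python
def stump_cost(instances, feature, threshold):
    """Regret of a one-level split: in each leaf, pick the algorithm
    with the lowest total runtime, and charge the gap between that
    choice and the per-instance best. Instances with large runtime
    gaps dominate the cost, making the split cost-sensitive."""
    cost = 0.0
    for side in (True, False):
        leaf = [runtimes for feats, runtimes in instances
                if (feats[feature] <= threshold) == side]
        if not leaf:
            continue
        algs = leaf[0].keys()
        chosen = min(algs, key=lambda a: sum(r[a] for r in leaf))
        cost += sum(r[chosen] - min(r.values()) for r in leaf)
    return cost

def best_stump(instances):
    """Exhaustively search features and thresholds for the stump with
    the lowest regret. instances: list of (features dict, runtimes dict)."""
    best = None
    for f in instances[0][0]:
        for thr in sorted({feats[f] for feats, _ in instances}):
            c = stump_cost(instances, f, thr)
            if best is None or c < best[0]:
                best = (c, f, thr)
    return best  # (regret, feature, threshold)
```

A regret of zero means the stump routes every instance to its fastest algorithm, which is the portfolio-selection payoff the abstract describes.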
