251

Antiselektion und Proselektion bei gegebener und mangelnder Leistungsäquivalenz von Nettorisikoprämien im Versicherungsentgelt [Adverse selection and propitious selection under given and lacking benefit equivalence of net risk premiums in the insurance price]

Eszler, Erwin 02 March 2015 (has links) (PDF)
(no abstract available)
252

The Semantic Basis for Selectional Restrictions

Melchin, Paul 20 February 2019 (has links)
In this thesis I investigate the relationship between the semantics of a verb and its selectional restrictions, which determine how many and what kind of arguments it must occur with in a clause. For most verbs, these restrictions are predictable from the semantics of the verb, but there are pairs of verbs with very similar semantics that differ in their argument restrictions. For example, both ask and wonder can take questions as their complements (John asked/wondered what time it was), but of the two, only ask can take a noun phrase complement with a question-like interpretation (John asked/*wondered the time). Similarly, while both eat and devour are verbs of consumption, the object can be omitted with eat but not devour (John ate/*devoured yesterday). Due to these and similar examples, many linguists have claimed that selectional restrictions are to some extent arbitrary and unpredictable from the semantics, and therefore must be learned as part of our knowledge of the relevant verbs. In this thesis I argue that these differences are not arbitrary; they recur across languages, and they can be predicted on the basis of lexical semantics, meaning they do not need to be learned on a word-by-word basis. In order for selectional features to be eliminated from the grammar, and replaced with semantic generalizations, two things must be shown. First, it must be demonstrated that the elements being selected for can be defined in terms of their semantics, rather than their syntactic properties. If not, the selectional properties could not be considered to be fully predictable based on the semantics of the selecting and selected items. Second, it must be shown that the selectional restrictions of a predicate are predictable from components of the selecting predicate’s meaning. In other words, the semantics of both the selected and the selecting elements must be accounted for. I focus mainly on the semantics of selected elements in Chapter 2, and on selecting elements in Chapters 3 and 4. Chapter 2 provides a brief review of the literature on selectional features, and argues that the elements being selected need not be defined in terms of their syntactic category and features. Instead, what are selected for are the semantic properties of the selected items. While the relationship between syntactic and semantic categories and properties is often systematic, it is not always, which can make it difficult in certain cases to determine the semantic basis for predicting what elements will be selected. Specifically, I argue that what appears to be selection for clausal categories (CPs or TPs) is in fact selection for propositional entities (including questions, assertions, facts, and so on); apparent selection for bare verb phrases (vPs) is selection for eventualities (events or states); and apparent selection for nominals (DPs) is selection for objects or things. Only properties of the nearest semantic entity (i.e., excluding elements embedded therein) can be selected for. In this way, I account for the selectional asymmetries between clausal and nominal complements noted by Bruening (2009) and Bruening et al. (2018): predicates selecting clausal complements can only select for (semantic) properties of the upper portion of the clause (in the CP domain), not for the lower portion (the vP domain), while predicates taking nominal complements can select for any properties of the nominal rather than being restricted to the upper portion. 
Since all syntactic properties of items are encoded as features, on a syntactic account it is expected that all features should be involved in selectional restrictions, contrary to fact; the semantic approach taken here allows for a principled explanation of what can and cannot be selected for. In Chapters 3 and 4 I turn to the lexical semantics of selecting elements, showing that these too are involved in determining selectional restrictions. I start in Chapter 3 by looking at c-selection (i.e., syntactic selection), specifically the case of eat versus devour. As mentioned above, the selectional properties of these two verbs differ in that the complement of eat is optional, while that of devour is obligatory, despite the two verbs having similar meanings. I show that this is due to the aspectual properties of these verbs: devour denotes an event where the complement necessarily undergoes a complete scalar change (i.e., it must be fully devoured by the end of the event), which means that the complement must be syntactically realized (Rappaport Hovav and Levin 2001; Rappaport Hovav 2008). Eat, on the other hand, does not entail a complete change of state in its complement, and so the complement is optional. I show that the correlation between scalar change entailments and obligatory argument realization holds for a wider group of verbs as well. Thus, the c-selectional properties of eat, devour, and similar verbs need not be stipulated in their lexical entries. In Chapter 4 I turn to the selection of complements headed by a particular lexical item, as with rely, which requires a PP complement headed by on, a phenomenon commonly referred to as l-selection. I show that the sets of verbs and prepositions involved in l-selection, and the observed verb-preposition combinations, are not fully random but can instead be (partially) predicted based on the thematic properties of the items in question. Furthermore, I show that there are different kinds of l-selecting predicates, and one kind is systematically present in satellite-framed languages (like English) and absent in verb-framed languages (like French), based on the Framing Typology of Talmy (1985, 1991, 2000). I account for this difference by analyzing l-selection as an instance of complex predicate formation, and showing that a certain kind of complex predicate (exemplified by rely on) is possible in satellite-framed languages but not in verb-framed languages. Thus, I show that the features that get selected for are semantic features, and that the problematic cases of eat versus devour and l-selection have semantic correlates, and need not be stipulated in the lexicon. While this leaves many instances of selectional features unaccounted for, it provides proposals for some components of lexical semantics that are relevant to selection, and demonstrates that a research program directed toward eliminating the remaining cases is plausibly viable.
253

A Comprehensive Method for the Selection of Sustainable Materials for Building Construction

Zhang, Yuxin 01 May 2012 (has links)
In the design phase of any building project, appropriate material selection is critical for the entire project. A poor choice of material may affect the quality of the project, lead to high cost during the long-term operation and maintenance phases, and even endanger humans and the environment. Since the inception of the United States Green Building Council (USGBC) in 1993, "green" buildings have become a hot topic and people have become concerned about how sustainable their buildings are. To determine the level of sustainability in buildings, the Leadership in Energy and Environmental Design (LEED) rating system has been developed and is now established as the common denominator in the industry. However, the LEED rating system simplifies, or even ignores, explicit consideration of Lifecycle Assessment (LCA) in the selection of building materials. This lack of explicit consideration of LCA does not permit a full assessment of how truly sustainable the chosen materials are. This research analyzes the factors impacting the selection of green materials and reviews the current standards used in green material selection. It proposes a more comprehensive rating method for green material selection, illustrating its applicability through a case study analysis based on the new WPI Sports and Recreation Center. It is expected that this study will contribute to a better understanding of sustainable materials selection and help to improve the long-term performance of materials in buildings.
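The rating method itself is not spelled out in this abstract. As a purely hypothetical sketch of what a weighted multi-criteria material score with an explicit lifecycle (LCA) criterion might look like, consider the following; the criteria, weights, and candidate values are all invented for illustration and are not taken from the thesis.

```python
# Hypothetical weighted multi-criteria score for material selection.
# Criteria, weights, and candidate data are illustrative only.

criteria_weights = {
    "embodied_energy": 0.30,   # lifecycle (LCA) criterion
    "recycled_content": 0.25,
    "durability": 0.25,
    "cost": 0.20,
}

# Candidate scores normalized to 0..1, where 1 is always "best".
candidates = {
    "concrete_mix_A": {"embodied_energy": 0.4, "recycled_content": 0.6, "durability": 0.9, "cost": 0.8},
    "timber_B":       {"embodied_energy": 0.8, "recycled_content": 0.3, "durability": 0.6, "cost": 0.7},
    "steel_C":        {"embodied_energy": 0.3, "recycled_content": 0.9, "durability": 0.9, "cost": 0.5},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of normalized criterion scores."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

ranked = sorted(candidates, key=lambda name: weighted_score(candidates[name]), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(candidates[name]):.3f}")
```

Any real application would additionally need normalisation rules, LCA data sources, and weights justified by the rating framework itself.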
254

The elementary principal's role in the selection of teachers

Rowson, Garry Lee January 2010 (has links)
Digitized by Kansas Correctional Industries
255

Confronting a dilemma in the American judicial system: the peremptory strike and its racially discriminatory consequences in jury selection

Silldorf, David Richard January 2003 (has links)
Boston University. University Professors Program Senior theses. / Not openly accessible; may be available by request. / 2031-01-02
256

On selection for evolvability

Webb, Andrew January 2017 (has links)
This thesis is about direct selection for evolvability in artificial evolutionary systems. The origin of evolvability, the capacity for adaptive evolution, is of great interest to evolutionary biologists, who have proposed many indirect selection mechanisms. In evolutionary computation and artificial life, these indirect selection mechanisms have been co-opted in order to engineer the evolution of evolvability into artificial evolution simulations. Very little work has been done on direct selection, and so this thesis investigates the extent to which we should select for evolvability. I show in a simple theoretical model the existence of conditions in which selection for a weighted sum of fitness and evolvability achieves greater long-term fitness than selection for fitness alone. There are no conditions, within the model, in which it is beneficial to select more for evolvability than for fitness. Subsequent empirical work compares episodic group selection for evolvability (EGS), an algorithm that selects for evolvability estimates calculated from noisy samples, with an algorithm that selects for fitness alone on four fitness functions taken from the literature. The long-term fitness achieved by EGS does not exceed that of selection for fitness alone in any region of the parameter space. However, there are regions of the parameter space in which EGS achieves greater long-term evolvability. A modification of the algorithm, EGS-AR, which incorporates a recent best-arm identification algorithm, reliably outperforms EGS across the parameter space, in terms of both eventual fitness and eventual evolvability. The thesis concludes that selection for estimated evolvability may be a viable strategy for solving time-varying problems.
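The EGS and EGS-AR algorithms themselves are not reproduced in this listing. As a rough illustration of the underlying idea, the following is a minimal sketch, assuming a toy fitness function and a simple mutation model, of selecting on a weighted sum of fitness rank and a noisy evolvability estimate obtained by sampling offspring; none of its parameter choices come from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy fitness: negative sphere function (maximized at the origin).
    return -np.sum(x ** 2)

def estimate_evolvability(x, n_samples=10, sigma=0.1):
    # Noisy evolvability estimate: mean fitness improvement over sampled mutants.
    offspring = x + sigma * rng.standard_normal((n_samples, x.size))
    return np.mean([max(0.0, fitness(o) - fitness(x)) for o in offspring])

def evolve(w=0.7, pop_size=50, dim=5, generations=100, sigma=0.1):
    pop = rng.standard_normal((pop_size, dim))
    for _ in range(generations):
        fit = np.array([fitness(x) for x in pop])
        evo = np.array([estimate_evolvability(x) for x in pop])
        # Rank-normalize both criteria so the weighted sum is scale-free.
        score = w * fit.argsort().argsort() + (1 - w) * evo.argsort().argsort()
        parents = pop[np.argsort(score)[-pop_size // 2:]]
        children = parents + sigma * rng.standard_normal(parents.shape)
        pop = np.vstack([parents, children])
    return max(fitness(x) for x in pop)

print(evolve(w=1.0))   # selection for fitness alone
print(evolve(w=0.7))   # weighted sum of fitness and estimated evolvability
```

With w=1.0 the loop reduces to selection for fitness alone, so the two calls at the end contrast the two regimes the abstract compares.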
257

Variable selection for general transformation models. / CUHK electronic theses & dissertations collection

January 2011 (has links)
General transformation models are a class of semiparametric survival models. The models generalize simple transformation models with more flexibility in modeling data coming from statistical practice. The models include many popular survival models as special cases, e.g., proportional hazards Cox regression models, proportional odds models, generalized probit models, frailty survival models and heteroscedastic hazard regression models. Although the maximum marginal likelihood estimate of parameters in general transformation models with interval censored data is very satisfactory, its large sample properties remain open. In this thesis, we will consider the problem and use a discretization technique to establish the large sample properties of maximum marginal likelihood estimates with interval censored data.

In general, to reduce possible model bias, many covariates will be collected into a model. Hence a high-dimensional regression model is built, but at the same time some non-significant variables may also be included. So one of the tasks in building an efficient survival model is to select the significant variables. In this thesis, we will focus on variable selection for general transformation models with ranking data, right censored data and interval censored data. Ranking data are widely seen in epidemiological studies, population pharmacokinetics and economics. Right censored data are the most common data in clinical trials. Interval censored data are another common type of data in medical, financial, epidemiological, demographic and sociological studies. For example, a patient visits a doctor on a prespecified schedule. At the previous visit the doctor did not find that the event of interest had occurred, but at the current visit the doctor found that it had; the exact occurrence time of the event was therefore censored in an interval bracketed by the two consecutive visit dates. Based on a rank-based penalized log-marginal likelihood approach, we will propose a uniform variable selection procedure for all three types of data mentioned above. In the penalized marginal likelihood function, we will consider non-concave and Adaptive-LASSO (ALASSO) penalties. For the non-concave penalties, we will adopt HARD thresholding, SCAD and LASSO penalties. ALASSO is an extended version of LASSO; its key feature is that it can assign weights to effects adaptively according to the importance of the corresponding covariates, and it has therefore received more attention recently. By incorporating a Markov chain Monte Carlo stochastic approximation (MCMC-SA) algorithm, we also propose a uniform algorithm to find the rank-based penalized maximum marginal likelihood estimates. Based on a numeric approximation of the marginal likelihood function, we propose two evaluation criteria, approximated GCV and BIC, to select proper tuning parameters. Using the procedure, we can not only select important variables but also estimate the corresponding effects simultaneously. An advantage of the proposed procedure is that it is baseline-free and censoring-distribution-free. Under some regularity conditions and with proper penalties, we can establish the root-n consistency and oracle properties of the penalized maximum marginal likelihood estimates. We illustrate the proposed procedure with simulation studies and real data examples. Finally, we extend the procedures to analyze stratified survival data.
Keywords: General transformation models; Marginal likelihood; Ranking data; Right censored data; Interval censored data; Variable selection; HARD; SCAD; LASSO; ALASSO; Consistency; Oracle.
Li, Jianbo. Adviser: Minggao Gu. Source: Dissertation Abstracts International, Volume: 73-06, Section: B. Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (leaves 104-111). Abstract also in Chinese.
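The thesis applies these penalties to a rank-based penalized marginal likelihood for transformation models via an MCMC-SA algorithm, which is well beyond a short example. The sketch below only illustrates the adaptive LASSO idea itself, on an ordinary linear model with simulated data, using the standard reweighting trick; it is an assumption-laden stand-in, not the thesis procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)

# Simulated data: only the first 3 of 10 covariates have nonzero effects.
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 1.0] + [0.0] * (p - 3))
y = X @ beta_true + rng.standard_normal(n)

# Step 1: initial unpenalized estimates give the adaptive weights.
beta_init = LinearRegression().fit(X, y).coef_
weights = 1.0 / (np.abs(beta_init) + 1e-8)  # small constant guards against division by zero

# Step 2: adaptive LASSO = ordinary LASSO on covariates rescaled by 1/weights,
# then the fitted coefficients are scaled back.
X_scaled = X / weights
lasso = Lasso(alpha=0.05).fit(X_scaled, y)
beta_alasso = lasso.coef_ / weights

print(np.round(beta_alasso, 2))  # near-zero entries are dropped from the model
```

The printed coefficient vector should recover the three nonzero effects while shrinking the rest to zero, which is the adaptive-weighting behaviour the abstract attributes to ALASSO.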
258

Construct truncation due to suboptimal person and item selection : consequences and potential solutions

Murray, Aja Louise January 2016 (has links)
Construct truncation can be defined as the failure to capture variation along the entire continuum of a construct reliably. It can occur due to suboptimal person selection or due to suboptimal item selection. In this thesis, I used a series of simulation studies coupled with real data examples to characterise the consequences of construct truncation on the inferences made in empirical research. The analyses suggested that construct truncation has the potential to result in significant distortions of substantive conclusions. Based on these analyses I developed recommendations for anticipating the circumstances under which construct truncation is likely to be problematic, identifying it when it occurs, and mitigating its adverse effects on substantive conclusions drawn from affected data.
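As a hedged illustration of the kind of distortion the abstract refers to (not a simulation from the thesis), the short script below truncates a simulated construct through person selection and shows how the observed correlation with an outcome is attenuated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two correlated constructs measured on a large simulated sample.
n = 100_000
trait = rng.standard_normal(n)
outcome = 0.5 * trait + np.sqrt(1 - 0.5 ** 2) * rng.standard_normal(n)

# Suboptimal person selection: only people above the trait median enter the
# sample, truncating the lower half of the construct's continuum.
selected = trait > np.median(trait)

full_r = np.corrcoef(trait, outcome)[0, 1]
truncated_r = np.corrcoef(trait[selected], outcome[selected])[0, 1]
print(f"full-range correlation:       {full_r:.2f}")
print(f"range-restricted correlation: {truncated_r:.2f}")  # attenuated estimate
```

With selection on the upper half of the trait, the observed correlation drops noticeably below its full-range value, which is one simple route to the distorted substantive conclusions discussed in the thesis.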
259

Modelling and analysis of ranking data with misclassification.

January 2007 (has links)
Chan, Ho Wai. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 56). Abstracts in English and Chinese.
Contents: Abstract (p. ii); Acknowledgement (p. iv); Chapter 1, Introduction (p. 1); Chapter 2, Model (p. 3); Chapter 3, Implementation by Mx (p. 10), with 3.1 Example 1 (p. 10) and 3.2 Example 2 (p. 22); Chapter 4, Covariance structure analysis (p. 26); Chapter 5, Simulation (p. 29), with 5.1 Simulation 1 (p. 29) and 5.2 Simulation 2 (p. 36); Chapter 6, Discussion (p. 41); Appendix A, Mx input script for ranking data with p = 4 (p. 43); Appendix B, Selection matrices for ranking data with p = 4 (p. 47); Appendix C, Mx input script for ranking data with p = 3 (p. 50); Appendix D, Mx input script for p = 4 with covariance structure (p. 53); References (p. 56).
260

Optimal selection of stocks using computational intelligence methods

Betechuoh, Brain Leke 08 February 2006 (has links)
Master of Science in Engineering - Engineering / Various methods, mostly statistical in nature, have been introduced for stock market modelling and prediction. These methods are, however, complex and difficult to manipulate. Computational intelligence facilitates this approach to predicting stocks because of its ability to learn complex patterns accurately and intuitively and to characterise these patterns as simple equations. In this research, a methodology that uses neural networks and a Bayesian framework to model stocks is developed. The NASDAQ all-share index was used as test data. A methodology to optimise the input time-window for stock prediction using neural networks was also devised. Polynomial approximation and reformulated Bayesian framework methodologies were investigated and implemented. A neural network based algorithm was then designed. The performance of this final algorithm was measured based on accuracy. The effect of the simultaneous use of diverse neural network engines is also investigated. The test results and accuracy measurements are presented in the final part of this thesis. Key words: Neural Networks, Bayesian framework and Markov Chain Monte Carlo
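The thesis's Bayesian framework and engine-combination scheme are not detailed in this abstract. The sketch below only illustrates the sliding input time-window idea with a small feedforward network on synthetic data, so the series, window length, and network size are all assumptions rather than anything taken from the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Synthetic index series standing in for closing values of a stock index.
t = np.arange(600)
series = 100 + 0.05 * t + 5 * np.sin(t / 20) + rng.standard_normal(600)

def make_windows(series, window):
    """Turn a 1-D series into (lagged window -> next value) training pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

window = 10  # candidate input time-window; choosing this well is the point of the optimisation
X, y = make_windows(series, window)
split = int(0.8 * len(X))

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"test RMSE: {rmse:.3f}")
```

Sweeping window over a grid and comparing held-out error is one crude way to pick an input time-window; the thesis's actual optimisation and Bayesian treatment are not reproduced here.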
