  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Keep Your Eyes on Ms. Clark: Two Mexican Immigrant Children Make the Transition to Kindergarten

Cobb, Mark B. 12 February 2008 (has links)
Presented are case studies of two children as they make the transition from Mexican immigrant homes to kindergarten in an English-dominant school in the United States. In the first case, Victor adapts by keeping his attention focused on the teacher, which allows him to avoid disorientation and take on the role of exemplary student. In the second, Natalie adapts to kindergarten through her relationships with peers and the teacher; she often participates in class activities, however, without understanding the narrative or rationale behind them. Cross-case comparisons suggest that each student adapted in a way suited to his or her own needs and resources. The journey from disorientation to adaptation is described through the holistic, systems-oriented, interactionist developmental approaches of Werner, Wapner, and Koizumi.
2

Text to Image Synthesis via Mask Anchor Points and Aesthetic Assessment

Baraheem, Samah Saeed 15 June 2020 (has links)
No description available.
3

Statistical Methods for the Analysis of Mass Spectrometry-based Proteomics Data

Wang, Xuan May 2012 (has links)
Proteomics plays an important role in understanding biological functioning at the systems level. Mass spectrometry (MS) has become the tool of choice for identifying and quantifying the proteome of an organism. In the most widely used bottom-up approach to MS-based high-throughput quantitative proteomics, complex mixtures of proteins are first subjected to enzymatic cleavage; the resulting peptide products are separated based on chemical or physical properties and then analyzed using a mass spectrometer. The three fundamental challenges in the analysis of bottom-up MS-based proteomics are: (i) identifying the proteins present in a sample; (ii) aligning different samples on elution (retention) time, mass, and peak area (intensity); and (iii) quantifying the abundance levels of the identified proteins after alignment. Each of these challenges requires knowledge of the biological and technological context that gives rise to the observed data, as well as the application of sound statistical principles for estimation and inference. In this dissertation, we present a set of statistical methods for protein identification, alignment, and quantification in bottom-up proteomics. We describe a fully Bayesian hierarchical modeling approach to peptide and protein identification on the basis of MS/MS fragmentation patterns in a unified framework. Our major contribution is to allow for dependence among the list of top candidate PSMs, which we accomplish with a Bayesian multiple-component mixture model incorporating decoy search results and joint estimation of the accuracy of a list of peptide identifications for each MS/MS fragmentation spectrum. We also propose an objective criterion for evaluating the false discovery rate (FDR) associated with a list of identifications at the peptide level, which results in more accurate FDR estimates than existing methods like PeptideProphet.
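The decoy-search idea mentioned in this abstract can be illustrated with a minimal target-decoy FDR sketch (an editor's illustration of the standard estimate, not the dissertation's Bayesian method; the function name and the simple threshold scan are assumptions):

```python
def decoy_fdr_threshold(target_scores, decoy_scores, fdr_level=0.01):
    """Smallest score cutoff whose estimated FDR stays within fdr_level.

    Target-decoy estimate: FDR(t) ~= (# decoy scores >= t) / (# target scores >= t).
    Scans candidate cutoffs from strictest to loosest and stops once the
    estimate first exceeds the level (a simplification: the estimate is
    not strictly monotone in general).
    """
    best = None
    for t in sorted(set(target_scores), reverse=True):
        n_target = sum(1 for s in target_scores if s >= t)
        n_decoy = sum(1 for s in decoy_scores if s >= t)
        if n_target and n_decoy / n_target <= fdr_level:
            best = t  # loosen the cutoff while the FDR estimate stays controlled
        else:
            break
    return best
```

For example, with target scores 10 down to 1 and decoy scores {3, 2, 1, 0.5}, a 20% FDR level admits identifications down to a score of 3; if decoys outscore all targets, no cutoff qualifies and the function returns None.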
Several alignment algorithms have been developed using different warping functions. However, all existing alignment approaches lack a useful metric for scoring an alignment between two data sets, and hence a quantitative measure of how good an alignment is. Our alignment approach uses "anchor points" to align all the individual scans in the target sample and provides a framework to quantify the alignment, that is, to assign a p-value to a set of aligned LC-MS runs to assess the correctness of the alignment. After alignment using our algorithm, the p-values from Wilcoxon signed-rank tests on elution (retention) time, M/Z, and peak area become non-significant. Quantitative mass spectrometry-based proteomics involves statistical inference on protein abundance, based on the intensities of each protein's associated spectral peaks. However, typical mass spectrometry-based proteomics data sets have substantial proportions of missing observations, due at least in part to censoring of low intensities. This complicates intensity-based differential expression analysis. We outline a statistical method for protein differential expression based on a simple binomial likelihood. By modeling peak intensities as binary, in terms of presence/absence, we enable the selection of proteins not typically amenable to quantitative analysis, e.g., "one-state" proteins that are present in one condition but absent in another. In addition, we present an analysis protocol that combines quantitative and presence/absence analysis of a given data set in a principled way, resulting in a single list of selected proteins with a single associated FDR.
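The presence/absence model described above can be sketched as a per-protein binomial likelihood-ratio test, with Benjamini-Hochberg adjustment across proteins (an editor's minimal illustration using a chi-square(1) approximation, not the dissertation's exact protocol):

```python
import math

def _binom_ll(k, n, p):
    """Log-likelihood of k 'present' calls in n replicates under rate p."""
    if p <= 0.0:
        return 0.0 if k == 0 else float("-inf")
    if p >= 1.0:
        return 0.0 if k == n else float("-inf")
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def presence_absence_test(k1, n1, k2, n2):
    """LRT: is the presence rate the same in both conditions?

    k1/n1, k2/n2 = peaks observed / replicates in conditions 1 and 2.
    Returns a p-value from the chi-square(1) approximation:
    sf(x) = erfc(sqrt(x / 2)) for one degree of freedom.
    """
    p_pool = (k1 + k2) / (n1 + n2)
    ll_null = _binom_ll(k1, n1, p_pool) + _binom_ll(k2, n2, p_pool)
    ll_alt = _binom_ll(k1, n1, k1 / n1) + _binom_ll(k2, n2, k2 / n2)
    stat = 2.0 * (ll_alt - ll_null)
    return math.erfc(math.sqrt(max(stat, 0.0) / 2.0))

def benjamini_hochberg(pvals):
    """BH-adjusted q-values, returned in the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    prev = 1.0
    for rank in range(m - 1, -1, -1):  # step-up from the largest p-value
        i = order[rank]
        prev = min(prev, pvals[i] * m / (rank + 1))
        q[i] = prev
    return q
```

A "one-state" protein observed in 8 of 8 replicates in one condition and 0 of 8 in the other gets a very small p-value, while equal presence rates give p = 1, so such proteins are selectable even though their intensities cannot be compared directly.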
4

Ingen rök utan eld - en kartläggning av svenska mordbrännare ur ett geografiskt profileringsperspektiv / No smoke without fire : a survey of Swedish arsonists from a geographic profiling perspective

Minic, Johanna, Nilsson, Mathilda January 2013 (has links)
As an assignment from Christina Innala at the police in Örebro, this paper investigates whether it is possible to discern behavioral patterns among Swedish arsonists as a point of departure for further spatial analysis. The paper examines arsonists and geographic profiling under Swedish conditions. The material underlying the paper consists of judicial decisions regarding arson, which are analyzed and categorized. The decisions involving serial arsonists were further analyzed through geographic profiling using the software GeoProfile, in order to examine how crime scenes relate to the anchor points of Swedish serial arsonists.
In summary, six categories of arsonists could be identified in our material; these categories match those of previous research. Based on our geographic profiles, it can be concluded that serial arsonists tend to set their fires in connection with one or more anchor points. The Swedish arsonist is thus not an irrational individual who sets fires randomly and unpredictably. Fires tend to occur within an offender's routine activity area; in other words, the offender usually commits the offense in areas known to him or her.
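The anchor-point reasoning above resembles Rossmo-style criminal geographic targeting, which software such as GeoProfile implements in far more elaborate form. A deliberately simplified sketch (the parameter values, Manhattan distance, and brute-force grid search are editor's assumptions, not the thesis's or GeoProfile's actual method):

```python
def rossmo_score(cell, crimes, buffer=1.0, f=1.2, g=1.2):
    """Simplified Rossmo-style score for one grid cell.

    Outside the buffer zone the score decays with distance from each crime;
    inside it the score is damped (offenders tend not to strike right next
    to their anchor point).
    """
    cx, cy = cell
    total = 0.0
    for x, y in crimes:
        d = abs(cx - x) + abs(cy - y)  # Manhattan distance on the grid
        if d > buffer:
            total += 1.0 / d ** f
        else:
            total += buffer ** (g - f) / (2.0 * buffer - d) ** g
    return total

def likely_anchor(crimes, grid_size=20):
    """Brute-force the highest-scoring cell: a crude anchor-point estimate."""
    best, best_score = None, -1.0
    for i in range(grid_size):
        for j in range(grid_size):
            s = rossmo_score((i, j), crimes)
            if s > best_score:
                best, best_score = (i, j), s
    return best
```

With fires clustered symmetrically around a point, the peak of the score surface falls at or next to that point, mirroring the thesis's finding that fires cluster around one or more anchor points.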
5

Figurations du réel : l'exemple musical : Appuis mentaux, visées, saisies et reprojections dans l'architecture cognitive / Representations of reality : the case of music : mental anchor points, designs, input and reprojections in cognitive architecture

Letailleur, Alain 18 December 2017 (has links)
The way musicians manage to recognize notes, whether by ear alone or while practicing their art, has always been an object of a certain fascination. Musicians themselves, moreover, rarely know the particular reasons that give them such an excellent musical ear: "you either have the gift or you don't" often remains the shortcut that spares one from venturing any further in search of a real explanation. It must be admitted that this ability to identify perceived pitches seems to have no obvious foundation, all the more so because musical sound is invisible, impalpable, and relatively fleeting. To better understand the reasons behind this mysterious capacity, we chose to interview musicians, both professionals and learners, about the mental procedures they carry out at the moment of note identification. The detailed description of the smallest mental elements (or the smallest cohabitation of mental micro-elements) that musicians use to perform this task takes us into a fascinating world, which progressively reveals the organization of many low-level actions, as well adjusted to their functions as they are discreet. These fragments of thought, which we have called mental anchor points (musicians orient themselves by means of mental anchors suited to reaching the identification), can be described, vary in how they surface in the mind, and take on different types of missions. It was possible to classify all the configurations described into several categories of strategic approach. Some of these minute inner gestures have become so automated over time that they lie buried in the unconscious register; they then become very difficult, sometimes impossible, to detect.
On closer inspection, we can imagine that these highly specialized mechanisms, described within a small corner of the musical world, follow general functional principles that seem in fact to be active at every moment of our daily lives, for every operation we are called upon to perform: calculating, spelling, creating, playing sports, cooking, tinkering, or simply thinking. This is what the second part of the study first tries to show, before exposing a rather strange problem concerning the interactive relations between perceptual and representational contents (many testimonies indeed report situations in which mental anchor points invite themselves directly onto the perceptual scene). The confrontation of these two universes, through the handling of what we have called mental reprojections, puts us in a position to question the machinery at work in the construction of human cognition, and raises questions about the consequences for our understanding of reality.
6

以最大測驗訊息量決定通過分數之研究 / Study of the Standard Setting by the Maximum Test Information

謝進昌, Shieh, Jin-Chang Unknown Date (has links)
The purpose of this study is to apply the concept of maximum test information from item response theory (IRT) to standard setting. Through the historical development of standard setting, we first derive three facets for interpreting the maximum-test-information approach: component combination and adjustment, a generalized test-construction process, and multiple sources of validity. These concepts explain the reasonableness and appropriateness of applying maximum test information to setting a mastery standard. We then establish its theoretical basis by examining the meaning of the formula, item selection, and statistical power, and we supplement this with the classification consistency of master/non-master decisions as additional validity evidence. Finally, methods for transforming test scores and for describing ability differences are discussed, so that test results can be interpreted both quantitatively and qualitatively. Several conclusions emerge:
1. Applying the maximum-test-information approach to standard setting yields satisfactory classification reliability after cross-validation, with at least 90% exact classification of masters and non-masters. Viewed as a confidence interval, the mastery standard derived from maximum test information also serves well as a starting reference point for experts to adjust. For score transformation, both the transformed classical-test-score approach and the test-characteristic-curve mapping method maintain consistent master/non-master classification, so the combined strategy is worth considering.
2. When anchor points are used to interpret the mastery standard derived for the Basic Competency Test, a non-master needs only basic academic knowledge and the ability to understand simple graphs. A master additionally needs broad academic knowledge; the ability to interpret complex problems, data, and charts; and logical reasoning and analysis of experimental results to reach related conclusions, or even advanced academic knowledge and the ability to synthesize and evaluate information conveyed by data and context.
3. Regarding test length, all three approaches (maximum test information, transformed classical test scores, and test-characteristic-curve mapping) are affected: the longer the test, the higher the classification consistency, consistent with most previous studies. Our data suggest 20 items as a necessary minimum. Moreover, from the viewpoint of the exact number of misclassifications, the factors producing difference scores during transformation are difficult for decision makers to control in practice, but increasing test length disperses examinees across score points and thereby reduces the impact of misclassification.
4. Regarding heterogeneous test difficulty, the maximum-test-information approach adjusts ability estimates through the item parameters and therefore maintains an acceptable level of classification consistency. In contrast, under a fixed mastery standard, the transformed classical-test-score approach and the test-characteristic-curve mapping method show clearly higher misclassification rates, which also reflects a deficiency of the current practice of fixing the passing (mastery) score at 60 points.
5. Regarding score transformation across easy, hard, and normal tests, neither transformation method severely affects classification consistency for any difficulty type. Furthermore, because the maximum-test-information approach sets the threshold according to test difficulty (a lower mastery standard on easy tests, a higher one on hard tests), even a large difference in transformed scores does not lead to severe misclassification.
6. Regarding the interaction among test length, test heterogeneity, and anchor-item selection, test length and heterogeneity are not the decisive factors in anchor-item selection; what matters more is whether the ability level at which item information is maximized coordinates with the anchor point.
In sum, the maximum-test-information approach yields satisfactory classification consistency, is supported by robust and rigorous theory, and comes with appropriate methods for interpreting test results, making it well suited to large-scale examinations. We therefore suggest that government agencies and practitioners consider this strategy for large-scale licensing and qualification examinations.
