841

ON THE CONTRIBUTION OF TOP-DOWN PREPARATION TO LEARNED CONTROL OVER SELECTION IN SINGLETON SEARCH / TOP-DOWN PREPARATION IN SINGLETON SEARCH

Sclodnick, Benjamin January 2024 (has links)
Physically salient stimuli in the visual field tend to capture attention rapidly and automatically, leading to the pop-out effect in visual search. There is much debate about whether and how top-down preparatory processes influence visual attention when salient stimuli are present. Experience with a task involves learning at multiple levels of cognitive processing, and it can be difficult to distinguish these learning effects from the effect of a ‘one-shot’ act of top-down preparation on a given trial. That is, preparing to attend to a particular colour might influence search on a given trial, but that act of preparation may also become embedded in a memory representation that carries over to influence future search events. Moreover, such learning effects may accumulate with repeated experiences of preparing in a particular way. The goal of the present thesis was to examine how preparation at one point in time affects pop-out search at a later point in time. To this end, I present the following empirical contributions: I introduce a novel method for studying preparation effects in search for a salient singleton target; I use this method to explore the contribution of learning and memory to effects of preparation on singleton search, and outline a number of its boundary conditions; and I distinguish between two components of the reported preparatory effects, one related to preparing to attend to a particular feature, and one related to preparing to ignore a particular feature. Together, these contributions highlight the role of top-down preparation in forming the memory representations that guide attention in singleton search, and offer a novel method that researchers can use to address open questions about the roles of preparation and experience in singleton search. / Thesis / Doctor of Philosophy (PhD) / Imagine looking out over a farmer’s field. All you can see is green grass, except for a big red tractor parked off in the distance. In this scenario, the contrast of the tractor’s colour and shape against the uniform grass will tend to draw attention to the tractor, making it immediately noticeable. This pop-out effect is often thought to be driven solely by physical stimulus features. However, past experiences searching through visual scenes can also affect the degree to which salient objects pop out, suggesting that pop-out is influenced by memory. This thesis centers on the memory processes that influence visual search for pop-out targets. I focus specifically on how deliberate preparation for particular search targets at one moment in time can lead to learning that influences pop-out search at later moments.
842

CRISPR-Hybrid: A CRISPR-mediated intracellular selection platform for RNA aptamers

Su-Tobon, Qiwen January 2024 (has links)
Thesis advisor: Jia Niu / In the last ten years, programmable CRISPR-Cas systems have been widely used as genome editing tools for gene manipulation, epigenetic functionalization, and transcriptional regulation. Among them, fusing effector proteins directly to the Cas protein allows the resulting CRISPR machinery to direct these effectors to multiple sites of the same gene, or to multiple genes at once. Although such methods can target multiple genetic loci simultaneously, they are often limited to applying one regulatory function (e.g., activation or repression) at a time. On the other hand, recruiting effector proteins via RNA aptamer-RNA-binding protein (RBP) recognition enables multiplexed and multi-modular gene manipulation. However, only a limited set of aptamer-RBP pairs function orthogonally and intracellularly, e.g., the MS2 RNA aptamer with the MS2 coat protein (MCP), and the PP7 RNA aptamer with the PP7 coat protein (PCP). The scarcity of orthogonal intracellular aptamer-RBP pairs imposes severe constraints on CRISPR-mediated multifunctional manipulation of the genome and the epigenome. We established an intracellular selection platform for RNA aptamers, named CRISPR-Hybrid, and expanded the aptamer-RBP toolkit for CRISPR transcription regulators. Using CRISPR-Hybrid, we identified a highly active and specific aptamer for the bacteriophage Qβ coat protein (QCP) in vivo, and characterized its binding affinity and specificity in vitro. We further validated the orthogonality of the selected aptamer-QCP pair against the other intracellularly functional aptamer-RBP pairs, MS2-MCP and PP7-PCP, in mammalian cells. Finally, we demonstrated the utility of this orthogonal pair in multiplexed and multi-modular regulation of endogenous genes. / Thesis (PhD) — Boston College, 2024. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Chemistry.
843

Hiring practices and instruments: Investigating CV and LinkedIn Profiles as Tools of Personnel Selection.

Casciano, Alberto 07 October 2024 (has links)
This doctoral thesis investigates the inferential processes recruiters use when evaluating applicants through curricula vitae (CVs) and LinkedIn profiles, focusing particularly on inferences about personality traits. The three studies described here examine the elements involved in this inference-making process from different perspectives. The first study investigates the validity of cues retrievable from CVs, in particular their relationships with self-reported personality and job performance scores. In collaboration with an Italian public transportation company, we analyzed data collected in past selection processes across three occupational families and 787 applicants. The findings highlighted significant correlations between CV cues and both personality traits (e.g., number of languages spoken as indicative of openness) and job performance (e.g., coherent training certifications positively predicting overall job performance). However, cues drawn from the previous literature also showed limited generalizability to the present sample, with many indicators showing no significant relationship with either criterion (personality or job performance). The second study delved deeper into how recruiters use CV information, examining how specific cues (i.e., the presence of teamwork skills and volunteering activities) affect raters’ perceptions of candidates’ personality traits. By manipulating CV content, we observed that these cues significantly influence perceptions of agreeableness (with additional impacts on perceptions of extraversion and openness), supporting the idea that the availability of specific cues shapes their subsequent utilization and, consequently, personality inferences. The third study assessed the impact of a training session designed to improve the accuracy of personality trait ratings from LinkedIn profiles. Participants, divided into a control group (who received no training before the assessment) and an experimental group (who received training on cue validity and utilization), rated the personality traits of LinkedIn users, with their assessments compared against a composite score of self- and friend-reports and experts’ evaluations of the same profiles. The trained group showed greater accuracy than the control group in discerning personality trait variations within profiles (i.e., profile accuracy). However, the ability to compare different profiles’ levels of specific personality traits (i.e., trait accuracy) improved only when experts’ ratings were used as the criterion (and only for the traits of conscientiousness, agreeableness, and openness). Although these findings do not support replacing classical personality assessment tools such as personality questionnaires (nor was that their purpose), they collectively offer empirical evidence on cue validity and utilization, explore the possibility of improving screening practices, and advocate for more informed and structured approaches to assessing applicant information.
844

Detection of Latent Heteroscedasticity and Group-Based Regression Effects in Linear Models via Bayesian Model Selection

Metzger, Thomas Anthony 22 August 2019 (has links)
Standard linear modeling approaches make potentially simplistic assumptions regarding the structure of categorical effects that may obfuscate more complex relationships governing the data. For example, recent work focused on the two-way unreplicated layout has shown that hidden groupings among the levels of one categorical predictor frequently interact with the ungrouped factor. We extend the notion of a "latent grouping factor" to linear models in general. The proposed work allows researchers to determine whether an apparent grouping of the levels of a categorical predictor reveals a plausible hidden structure given the observed data. Specifically, we offer Bayesian model selection-based approaches to reveal latent group-based heteroscedasticity, regression effects, and/or interactions. Failure to account for such structures can produce misleading conclusions. Since the presence of latent group structures is frequently unknown a priori to the researcher, we use fractional Bayes factor methods and mixture g-priors to overcome the lack of prior information. We provide an R package, slgf, that implements our methodology and demonstrate its usage in practice. / Doctor of Philosophy / Statistical models are a powerful tool for describing a broad range of phenomena in our world. However, many common statistical models may make assumptions that are overly simplistic and fail to account for key trends and patterns in data. Specifically, we search for hidden structures formed by partitioning a dataset into two groups. These two groups may have distinct variability, statistical effects, or other hidden effects that are missed by conventional approaches. We illustrate the ability of our method to detect these patterns across a variety of disciplines and data layouts, and provide software for researchers to implement this approach in practice.
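The search idea (enumerate candidate two-group partitions of a factor's levels and compare the implied models) can be sketched briefly. A minimal Python sketch under assumptions: this is not the authors' slgf package, which is in R, and BIC stands in here for the fractional Bayes factors and mixture g-priors used in the thesis.

```python
import itertools
import numpy as np

def two_group_partitions(levels):
    """Yield all ways to split a factor's levels into two non-empty groups."""
    levels = list(levels)
    for r in range(1, len(levels) // 2 + 1):
        for g in itertools.combinations(levels, r):
            if 2 * r == len(levels) and levels[0] not in g:
                continue  # skip mirror images of even splits
            yield set(g)

def grouping_score(y, factor, group):
    """BIC for a model with a separate mean and variance in each latent group
    (a stand-in for the fractional Bayes factor; lower is better)."""
    in_group = np.array([lvl in group for lvl in factor])
    log_lik = 0.0
    for mask in (in_group, ~in_group):
        resid = y[mask] - y[mask].mean()
        s2 = max(resid.var(), 1e-12)  # per-group variance: heteroscedasticity
        log_lik += -0.5 * mask.sum() * (np.log(2 * np.pi * s2) + 1)
    return 4 * np.log(len(y)) - 2 * log_lik  # 4 params: two means, two variances

# Hypothetical usage: pick the most plausible latent grouping of the levels.
# best = min(two_group_partitions(set(factor)),
#            key=lambda g: grouping_score(y, factor, g))
```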
845

Unsupervised Signal Deconvolution for Multiscale Characterization of Tissue Heterogeneity

Wang, Niya 29 June 2015 (has links)
Characterizing complex tissues requires precise identification of distinctive cell types, cell-specific signatures, and subpopulation proportions. Tissue heterogeneity, arising from multiple cell types, is a major confounding factor in studying individual subpopulations and repopulation dynamics, and it cannot be resolved directly by most global molecular and genomic profiling methods. While signal deconvolution has widespread applications in many real-world problems, existing methods have significant limitations, mainly unrealistic assumptions and heuristics that lead to inaccurate or incorrect results. In this study, we formulate the signal deconvolution task as a blind source separation problem and develop novel unsupervised deconvolution methods within the Convex Analysis of Mixtures (CAM) framework for characterizing multi-scale tissue heterogeneity. We also explore the application of the Significant Intercellular Genomic Heterogeneity (SIGH) method. Unlike existing deconvolution methods, CAM can identify tissue-specific markers directly from mixed signals, a critical task, without relying on any prior knowledge. Fundamental to the success of our approach is a geometric exploitation of tissue-specific markers and signal non-negativity. Using a well-grounded mathematical framework, we have proved new theorems showing that the scatter simplex of mixed signals is a rotated and compressed version of the scatter simplex of pure signals, and that the resident markers at the vertices of the scatter simplex are the tissue-specific markers. The algorithm works by geometrically locating the vertices of the scatter simplex of measured signals and their resident markers. The minimum description length (MDL) criterion is applied to determine the number of tissue populations in the sample. Based on the CAM principle, we integrated nonnegative independent component analysis (nICA) and convex matrix factorization (CMF) methods into the CAM-nICA/CMF algorithm and applied it to multiple gene expression, methylation, and protein datasets, achieving very promising results validated by ground truth or gene enrichment analysis. We integrated CAM with compartment modeling (CM) into a multi-tissue compartment modeling (MTCM) algorithm and tested it on real DCE-MRI data derived from mouse models, with consistent and plausible results. We also developed an open-source R-Java software package that implements various CAM-based algorithms, including an R package accepted by Bioconductor specifically for tumor-stroma deconvolution. While intercellular heterogeneity is often manifested by multiple clones with distinct sequences, systematic efforts to characterize intercellular genomic heterogeneity must effectively distinguish genuine clonal sequences from probabilistic fake derivatives. Building on preliminary studies originally targeting immune T-cells, we tested and applied the SIGH algorithm to characterize intercellular heterogeneity directly from mixed sequencing reads. SIGH works by exploiting the statistical differences in both the sequencing error rates at different nucleobases and the read counts of fake sequences relative to genuine clones of variable abundance. / Ph. D.
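The geometric core of CAM (tissue-specific markers reside at the vertices of the scatter simplex of mixed signals) can be illustrated with a simple vertex-finding routine. A minimal NumPy sketch under assumptions: the successive projection algorithm below is a well-known stand-in for CAM's own vertex search, which additionally applies the MDL criterion to choose the number of tissue populations.

```python
import numpy as np

def scatter_simplex_vertices(X, k):
    """Locate k vertices of the scatter simplex of mixed signals.

    X: (genes x samples) non-negative matrix of mixed expression signals.
    Returns row indices of candidate tissue-specific markers.
    """
    # Row-normalize so every gene lies on the scatter simplex.
    R = (X / np.maximum(X.sum(axis=1, keepdims=True), 1e-12)).astype(float)
    vertices = []
    for _ in range(k):
        # Pick the row farthest from the origin: a simplex vertex.
        j = int(np.argmax(np.einsum("ij,ij->i", R, R)))
        vertices.append(j)
        # Project all rows onto the orthogonal complement of the new vertex,
        # so the next pass finds a geometrically distinct vertex.
        u = R[j] / max(np.linalg.norm(R[j]), 1e-12)
        R = R - np.outer(R @ u, u)
    return vertices
```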
846

Exploring Per-Input Filter Selection and Approximation Techniques for Deep Neural Networks

Gaur, Yamini 21 June 2019 (has links)
We propose a dynamic, input-dependent filter approximation and selection technique to improve the computational efficiency of Deep Neural Networks. The approximation techniques convert the 32-bit floating point representation of filter weights in neural networks into smaller precision values by reducing the number of bits used to represent the weights. To calculate the per-input error between the trained full-precision filter weights and the approximated weights, we use a metric called Multiplication Error (ME). For convolutional layers, ME is calculated by subtracting the approximated filter weights from the original filter weights, convolving the difference with the input, and calculating the grand sum of the resulting matrix. For fully connected layers, ME is calculated the same way, except that the difference is matrix-multiplied with the input. ME is computed to identify approximated filters in a layer that would result in low inference accuracy; to maintain the accuracy of the network, these filter weights are replaced with the original full-precision weights. Prior work has primarily focused on input-independent (static) replacement of filters with low-precision weights, in which all the filter weights in the network are approximated. This results in a decrease in inference accuracy, and the decrease is larger for more aggressive approximation techniques. Our proposed technique aims to achieve higher inference accuracy by not approximating filters that generate high ME. Using the proposed per-input filter selection technique, LeNet achieves an accuracy of 95.6% on the MNIST dataset when truncating to 3 bits, a 3.34% drop from the original accuracy of 98.9%. With static filter approximation, by contrast, LeNet achieves an accuracy of 90.5%, an 8.5% drop from the original accuracy. The aim of our research is to use low-precision weights in deep learning algorithms to achieve high classification accuracy with less computational overhead. We explore various filter approximation techniques and implement a per-input filter selection and approximation technique that selects the filters to approximate at run-time. / Master of Science / Deep neural networks, much like the human brain, can learn important information about the data provided to them and can classify a new input based on the labels in the provided dataset. Deep learning technology is heavily employed in devices using computer vision, image and video processing, and voice detection. The computational overhead incurred in the classification process of DNNs prohibits their use in smaller devices. This research aims to improve network efficiency in deep learning by replacing 32-bit weights in neural networks with lower-precision weights in an input-dependent manner. Trained neural networks are numerically robust: different layers develop tolerance to minor variations in network parameters, so differences induced by low-precision calculations fall well within the tolerance limit of the network. However, for aggressive approximation techniques like truncating to 3 and 2 bits, inference accuracy drops severely.
We propose a dynamic technique that, at run-time, identifies the approximated filters that would yield low inference accuracy for a given input and replaces them with the original filters to achieve high inference accuracy. The proposed technique has been tested for image classification on Convolutional Neural Networks, using the MNIST and CIFAR-10 datasets and three networks: a 4-layer CNN, LeNet-5, and AlexNet.
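A minimal NumPy sketch of the Multiplication Error computation described above; the truncation scheme and the replacement threshold are illustrative assumptions, not the thesis's exact implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def truncate(w, bits):
    """Illustrative quantization: round weights onto `bits`-bit signed levels."""
    scale = 2.0 ** (bits - 1)
    m = np.max(np.abs(w))
    return w if m == 0 else np.round(w / m * scale) / scale * m

def me_conv(w, w_hat, x):
    """ME for a conv filter: convolve the weight difference with the input
    and take the grand sum of the resulting matrix."""
    return convolve2d(x, w - w_hat, mode="valid").sum()

def me_fc(w, w_hat, x):
    """ME for a fully connected layer: multiply the weight difference by the
    input and take the grand sum of the resulting matrix."""
    return ((w - w_hat) @ x).sum()

def select_filters(filters, x, bits, threshold):
    """Per-input selection: keep each approximated filter only if its ME
    magnitude is low; otherwise fall back to full precision (threshold assumed)."""
    chosen = []
    for w in filters:
        w_hat = truncate(w, bits)
        chosen.append(w_hat if abs(me_conv(w, w_hat, x)) < threshold else w)
    return chosen
```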
847

Selection for Body Weight in Chickens: Resource Allocations and Scaling

Jambui, Michelle 08 June 2016 (has links)
Evaluated were correlated responses, in BW at other ages and in reproductive traits, to 54 generations of divergent selection for 8-week body weight (BW). Evaluated first was the influence of scaling on phenotypic responses to selection, phenotypic correlations of means and standard deviations, and unadjusted vs. standardized responses. Measured was BW at 4 (BW4), 8 (BW8), 24 (BW24), and 38 (BW38) weeks of age. Correlations between means and standard deviations were positive and greater in the LWS than in the HWS. Scaling masked the degree more than the pattern of response and was line specific, with the magnitude of response greater in the LWS than in the HWS. While BW ratios across ages were not influenced by scaling in the LWS, scaling effects were evident in the HWS. Also measured were correlated responses of reproductive traits in selected and relaxed lines. Traits were age at first egg (AFE), body weight at first egg (WFE), their ratio (WAFE), and hen-day normal egg production (HDP). Although sexual maturity was delayed, the effect was more pronounced in the low weight than in the high weight lines. Selection for low BW decreased WFE, WAFE, and HDP. Selection for high BW resulted in lower HDP, while WFE and WAFE were generally higher. Minimum AFE, WFE, and WAFE in relation to sexual maturity were line specific. Relaxed selection resulted in higher reproductive performance and fitness than continued artificial selection. Overall, the results demonstrate that correlated responses to long-term divergent selection were masked by scaling and accompanied by negative correlated reproductive responses. / Master of Science
848

Selection of an Evidence-Based Pediatric Weight Management Program for the Dan River Region

Hooper, Margaret Berrey 13 May 2014 (has links)
Background: Efficacious pediatric weight management (PWM) programs have existed for over two decades, but there is limited evidence that these programs have been translated into regular practice. There is even less evidence that they have reached communities experiencing health disparities, where access to care is limited. The purpose of this project was to use a community-engaged approach to select an evidence-based PWM program that could be delivered with the available resources in a community that is experiencing health disparities. Methods: The project was developed by the Partnership for Obesity Planning and Sustainability Community Advisory Board (POPS-CAB) in the Dan River Region of southwest Virginia. The POPS-CAB included representatives from a local pediatric health care center, the Danville/Pittsylvania Health Department, Danville Parks and Recreation, the Boys and Girls Club, and the Fralin Center for Translational Obesity Research (n=15). Three PWM programs were identified that met the criteria of demonstrating short- and longer-term efficacy in reducing obesity for children between the ages of 8 and 12 years, across multiple studies and diverse populations. The programs were the Traffic Light Diet, Bright Bodies, and Golan and colleagues' Home Environmental Change Model. All three programs included a high frequency of in-person sessions delivered over a 6-month period, but one included an adapted version that delivered the content via interactive technology and could be delivered with far fewer resources (Family Connections, adapted from the Home Environmental Change Model). A mixed-methods approach was used to determine program selection. This approach included individual POPS-CAB member ratings of each program, followed by small group discussions, a collective quantitative rating, and, once all programs were reviewed, a rank ordering of programs across characteristics. Finally, a large group discussion was conducted to come to agreement on the selection of one program for future local adaptation and implementation. All small and large group discussions were audio recorded and transcribed verbatim to identify themes that influenced the program selection decision. The quantitative results were averaged across individuals and across the groups. Qualitative results were reduced to meaning units, then grouped into categories and, lastly, themes. Results: Individual ratings for Bright Bodies, Family Connections, and Traffic Light were 3.9 (0.3), 3.6 (0.5), and 3.4 (0.4), respectively. The ratings differed slightly between community and academic partners, with community members rating Bright Bodies higher and academic members rating Family Connections higher. After the small group discussions, the average group ratings were 3.8 (0.4) for Bright Bodies, 3.5 (0.6) for Family Connections, and 3.4 (0.6) for Traffic Light. Finally, the rank order of programs for potential implementation was Bright Bodies, Family Connections, and Traffic Light. Qualitative information for each program fell into four main themes of discussion: (1) the importance of the chosen program having a balance of nutrition and physical activity, (2) negative perceptions of calorie counting, (3) a desire to target both the parent and the child, and (4) the need for practicality and usability in the target settings.
During the final large group discussion, these themes suggested that the primary reasons Bright Bodies was selected included the availability of nutrition information, structured physical activity sessions, the presence of a usable workbook, and the balance of parent and child involvement. Conclusion: Key considerations in program selection related more to program content, delivery channel, and the resources available for replication than to simply selecting the least resource-intensive program. / Master of Science
849

Signatures of natural selection and local adaptation in Populus trichocarpa and Populus deltoides along latitudinal clines

Bawa, Rajesh K. 18 February 2013 (has links)
Trees, like many other organisms, decrease their rate of metabolic activity to cope with harsh environments. This stage of "dormancy" is marked by the shedding of leaves and bud-set in deciduous trees. Recent studies have revealed the role of the circadian clock in synchronizing the timing of dormancy and physiology to confer fitness in trees. To better understand the possible role of natural selection on circadian clock-related genes in climatic adaptation, I took a candidate gene approach, selecting circadian clock genes, some functionally validated and others hypothesized, to identify signatures of natural selection in Populus trichocarpa and P. deltoides. Using both frequency spectrum-based tests and tests of heterogeneity, I identified genetic variants deviating from selective neutrality. The results reveal that photoreceptor and dormancy regulator genes may have been targets of natural selection. Nearly the same levels of selective constraint were found in different functional groups of genes, irrespective of pleiotropy. Further, the upstream regions of all genes showed high selective constraint, with some of them (FT-2, PIF-4, FRIGIDA) showing significantly higher variation than the other genes, hinting at the role of non-coding regulatory regions in local adaptation. In some cases, the same genes appeared as outliers in both species, including PIF-6, FRI, FT-2, SRR1, TIC, and CO, which might reflect a common role in adaptation across species boundaries. All of these results indicate the complex nature of phenology regulation and local adaptation in Populus species, with photoreceptor and dormancy regulator genes playing key roles. / Master of Science
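As an illustration of the frequency-spectrum class of tests mentioned above, the sketch below computes Tajima's D, a representative statistic of that class; this is a hedged example, not a reproduction of the thesis's exact test suite.

```python
import math

def tajimas_d(n, S, pi):
    """Tajima's D for n sampled sequences, S segregating sites, and mean
    pairwise diversity pi. Values near 0 are consistent with neutrality;
    strong departures suggest selection or demographic effects."""
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

# Hypothetical usage: 20 sequences, 16 segregating sites, pi = 3.9
# print(tajimas_d(20, 16, 3.9))
```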
850

Increasing Selection Accuracy and Speed through Progressive Refinement

Bacim de Araujo e Silva, Felipe 21 July 2015 (has links)
Although many selection techniques have been proposed and developed over the years, pointing is perhaps the most popular approach to selection. In 3D interfaces, the laser-pointer metaphor is commonly used, since users only have to point at their target from a distance. However, selecting objects that have a small visible area or that sit in highly cluttered environments is hard with pointing techniques. With both indirect and direct pointing in 3D interfaces, smaller targets require higher levels of pointing precision from the user. In addition, issues such as target occlusion and hand and tracker jitter negatively affect user performance. Requiring the user to perform selection in a single precise step may therefore lead users to spend more time selecting targets in order to be more accurate (an effect known as the speed-accuracy trade-off). We describe an approach to address this issue, called Progressive Refinement: instead of performing a single precise selection, users gradually reduce the set of selectable objects, lowering the precision the task requires. This approach, however, has an inherent trade-off when compared to immediate selection techniques. Progressive refinement requires a gradual process of selection, often using multiple steps, although each step can be fast, accurate, and nearly effortless. Immediate techniques, on the other hand, involve a single-step selection that requires effort and may be slower and more error-prone. The goal of this work was to explore this trade-off. The research includes the design and evaluation of progressive refinement techniques for 3D interfaces, using both pointing- and gesture-based interfaces for single-object selection and volume selection. Our technique designs, together with other existing selection techniques that can be classified as progressive refinement, were used to create a design space. We designed eight progressive refinement techniques and compared them to the most commonly used techniques (as a baseline) and to other state-of-the-art selection techniques in a total of four empirical studies. Based on the results of the studies, we developed a set of design guidelines to help other researchers design and use progressive refinement techniques. / Ph. D.
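A minimal sketch of the progressive refinement idea under assumptions: the group splitting and the user-supplied pick callback are illustrative, not a reproduction of the thesis's specific techniques.

```python
from typing import Callable, List, Sequence, TypeVar

T = TypeVar("T")

def progressive_refine(candidates: Sequence[T],
                       pick_group: Callable[[List[List[T]]], int],
                       groups: int = 4) -> T:
    """Reduce a rough initial selection to a single object through repeated
    coarse choices. Each pass splits the remaining candidates into a few
    groups; the user indicates which group contains the target (a fast,
    low-precision act), until one object remains."""
    pool = list(candidates)
    while len(pool) > 1:
        split = [pool[i::groups] for i in range(groups)]
        split = [g for g in split if g]   # drop empty groups
        pool = split[pick_group(split)]   # user picks the target's group
    return pool[0]

# Example: simulate a user who always knows which group holds the target.
objects, target = list(range(37)), 23
found = progressive_refine(
    objects, lambda gs: next(i for i, g in enumerate(gs) if target in g))
assert found == target
```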
