171

BEYOND THE SAT/ACT: AN EXAMINATION OF NON-COGNITIVE FACTORS THAT CONTRIBUTE TO STUDENTS' COLLEGE SUCCESS

Mendrinos, Niki January 2014
Standardized tests such as the SAT and ACT claim to predict students' success in college, and colleges and universities place considerable emphasis on these test scores when reviewing and deciding on applicants. Over the years, however, institutional leaders and academic researchers have questioned whether the SAT/ACT truly measures the skills needed for success in college and throughout life. This study uses non-cognitive variables to examine what students with strong high school grade point averages (HSGPAs) and low SAT/ACT scores (under 1000 on the 1600-point SAT scale, or 21 or lower on the ACT) who completed college in four years with an overall college GPA of 3.5 or higher attributed their college success to, and how they perceived their own abilities. The study also investigated these students' perceptions and beliefs about the tests (whether the tests hindered their abilities or potential for college success) and how these students thought non-cognitive factors should be considered in the admissions process. In addition, the study compares this group of students to the rest of the incoming freshman class. / Educational Administration
172

Discrete Transition System Model and Verification for Mitochondrially Mediated Apoptotic Signaling Pathways

Lam, Huy Hong 13 July 2007
Computational biology and bioinformatics for apoptosis have been gaining momentum thanks to advances in the computational sciences. Both fields use extensive computational techniques and modeling to mimic real-world behaviors. One problem of particular interest is the study of reachability, in which the goal is to determine whether a target state or protein concentration in the model is realizable for a signaling pathway. Another interesting problem is to examine faulty pathways and how a fault can make a previously unrealizable state possible, or vice versa. Such analysis can be extremely valuable to the understanding of apoptosis. However, these analyses can be costly or even impractical for some approaches, since they must simulate every aspect of the model. Our approach introduces an abstracted model that represents a portion of the apoptosis signaling pathways as a finite state machine. This abstraction allows us to apply hardware testing and verification techniques and to study the behaviors of the system without full simulation. We propose a framework that is tailor-built to implement these verification techniques for the discrete model. By solving Boolean constraint satisfaction problems (SAT-based) and using guided stimulation (via a Genetic Algorithm), we can further extract the properties and behaviors of the system. Furthermore, our model allows us to conduct cause-effect analysis of the apoptosis signaling pathways: by constructing single- and double-fault models, we are able to study which fault(s) can cause the model to malfunction and the reasons behind it. Unlike simulation-only approaches, our abstraction allows us to study the system's properties and manipulations from a different perspective without relying fully on simulation. Using these observations as hypotheses, we aim to conduct laboratory experiments and further refine our model. / Master of Science
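To illustrate the kind of analysis this abstract describes, here is a minimal sketch of reachability checking on an abstracted pathway. The protein names, transition rules, and fault model are illustrative assumptions, not the thesis's actual model: the abstracted state machine is explored with a breadth-first search, once fault-free and once with a stuck-at fault injected.

```python
from collections import deque

# Toy abstraction of a mitochondrial apoptosis fragment as a finite state
# machine: each state is a tuple of Boolean protein activities.
# Names and update rules are illustrative assumptions only.
VARS = ("bax", "cyt_c", "casp9", "casp3")  # casp3 active => apoptosis

def step(state, stuck=None):
    """One synchronous transition; `stuck` pins a variable to a value
    to model a single stuck-at fault."""
    s = dict(zip(VARS, state))
    nxt = {
        "bax": s["bax"],        # upstream signal held constant
        "cyt_c": s["bax"],      # Bax pore releases cytochrome c
        "casp9": s["cyt_c"],    # apoptosome activates caspase-9
        "casp3": s["casp9"],    # caspase-9 cleaves caspase-3
    }
    if stuck is not None:
        var, val = stuck
        nxt[var] = val
    return tuple(nxt[v] for v in VARS)

def reachable(init, target_var, stuck=None):
    """BFS over the state graph: can `target_var` ever become 1?"""
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        if dict(zip(VARS, state))[target_var]:
            return True
        nxt = step(state, stuck)
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

init = (1, 0, 0, 0)  # Bax activated, everything downstream off
print(reachable(init, "casp3"))                      # True: fault-free path exists
print(reachable(init, "casp3", stuck=("cyt_c", 0)))  # False: fault blocks apoptosis
```

Because the abstraction is a finite state machine, the same reachability question could equally be handed to a SAT solver by unrolling the transition relation, which is the connection to hardware verification the abstract draws.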
173

Exploits in Concurrency for Boolean Satisfiability

Sohanghpurwala, Ali Asgar Ali Akbar 14 December 2018
Boolean Satisfiability (SAT) is a problem that holds great theoretical significance along with effective formulations that benefit many real-world applications. While the general problem is NP-complete, advanced solver algorithms and heuristics allow for fast solutions to many large industrial problems. In addition to SAT, many applications rely on generalizations of Satisfiability such as MaxSAT and Satisfiability Modulo Theories (SMT). Much of the advancement in SAT solver performance has been in the realm of improved sequential solvers with advanced conflict resolution, learning mechanisms, and sophisticated heuristics. There have been some successful demonstrations of massively parallel and hardware-accelerated solvers for SAT, but these have failed to find their way into mainstream usage. This document first presents previous work in hardware acceleration of Satisfiability, followed by an analysis of why those attempts failed to gain widespread acceptance. It then demonstrates an alternative, hardware-centric approach, based on distributed Stochastic Local Search (SLS), that is better suited to efficient hardware implementation. A parallel SLS/CDCL hybrid approach is then proposed that is suitable for distributed search with minimal communication overhead while maintaining completeness. Finally, the efficacy and flexibility of distributed local search is considered with an adaptation to Weighted Partial MaxSAT (WPMS) and a focused case study on converted Probabilistic Inference instances. / Ph. D. / The Boolean Satisfiability (SAT) problem is an important decision problem that asks whether there exists a solution that satisfies all given constraints over a set of variables that can each assume a value of either 0 or 1. Many real-world decision problems can be translated into SAT, and there exist efficient sequential solvers that can quickly resolve many such instances. Less progress has been made in efficiently scaling SAT solvers to modern multi-core systems and massively parallel hardware accelerators such as GPUs and Field Programmable Gate Arrays (FPGAs). This thesis explores different approaches to solving SAT-based decision and optimization problems with the goal of increasing concurrency.
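As a rough illustration of the Stochastic Local Search component mentioned above, here is a minimal WalkSAT-style sketch in Python; the clause encoding and parameter choices are generic assumptions, not the dissertation's hardware-oriented implementation.

```python
import random

def walksat(clauses, n_vars, max_flips=10_000, p=0.5, rng=random):
    """Minimal WalkSAT-style stochastic local search.
    `clauses` is a list of tuples of DIMACS-style non-zero integer literals;
    returns a satisfying assignment dict, or None if max_flips is exhausted."""
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)        # focus on one violated clause
        if rng.random() < p:              # noise step: flip a random literal's variable
            var = abs(rng.choice(clause))
        else:                             # greedy step: fewest unsatisfied clauses after flip
            def cost(lit):
                v = abs(lit)
                assign[v] = not assign[v]                            # tentatively flip
                c = sum(not any(sat(l) for l in cl) for cl in clauses)
                assign[v] = not assign[v]                            # undo
                return c
            var = abs(min(clause, key=cost))
        assign[var] = not assign[var]     # commit the flip
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(walksat([(1, 2), (-1, 3), (-2, -3)], n_vars=3))
```

SLS of this shape maps well to hardware because each flip needs only local clause evaluations, which is the property the hardware-centric approach above exploits; by itself, though, it is incomplete, which motivates the SLS/CDCL hybrid.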
174

Predicting the Academic Success of Female Engineering Students During the First Year of College Using the SAT and Non-Cognitive Variables

Lovegreen, Therese A. 06 May 2003
The purpose of this study was to determine the value of the Non-Cognitive Questionnaire (NCQ) and the Scholastic Aptitude Test (SAT) in predicting the academic achievement of first-year female engineering students. Ancis and Sedlacek (1997) studied non-cognitive variables with a general population of undergraduate women; their study validated the NCQ as a predictor of academic success for women students. The present study extends the work of Ancis and Sedlacek to examine female engineering students, a group for which no similar study has yet been published. Participants included 100 female first-year engineering students at a large, public Doctoral/Research-Extensive institution located in the mid-Atlantic region of the United States. By race or ethnicity, the participants were White, Non-Hispanic (81%), Black, Non-Hispanic (9%), Asian or Pacific Islander (4%), Hispanic or Latino (1%), Non-Resident Alien (1%), and Unknown Race or Ethnicity (6%). This study defined academic achievement as first-semester Grade Point Average, which was used as the dependent variable. Participants completed the NCQ during summer orientation. NCQ scale scores and SAT Verbal and Math scores were used as independent variables in a stepwise regression analysis. The major finding of this study was that the NCQ scale scores did not add to the predictive value of the SAT scores in determining the first-semester GPA of female engineering students. This was an unexpected finding in light of previous research that had documented the value of using non-cognitive variables, and specifically the NCQ, to predict the academic success of groups that are a minority in their educational settings. Because the major finding of this study is at odds with a large body of similar studies, the most important implications of this study relate to understanding this difference. Included in the discussion are questions about the methodology used in previous NCQ studies and about the influence of the single institution that has been the site of almost all previous NCQ research. / Master of Arts
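For readers unfamiliar with the method, the selection logic of a forward stepwise regression can be sketched as follows; the predictor names and synthetic data are assumptions for illustration, not the study's actual variables or results.

```python
import numpy as np

def forward_stepwise(X, y, names, threshold=0.01):
    """Forward stepwise OLS: greedily add the predictor that most improves
    R^2, stopping when no candidate adds at least `threshold`.
    A sketch of the selection logic only, not the study's exact procedure."""
    def r2(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        pred = A @ np.linalg.lstsq(A, y, rcond=None)[0]
        return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    chosen, best_r2 = [], 0.0
    while True:
        gains = {c: r2(chosen + [c]) for c in range(X.shape[1]) if c not in chosen}
        if not gains:
            break
        c, new_r2 = max(gains.items(), key=lambda kv: kv[1])
        if new_r2 - best_r2 < threshold:   # no predictor adds enough explained variance
            break
        chosen.append(c)
        best_r2 = new_r2
    return [names[c] for c in chosen], best_r2

# Columns: SAT-Verbal, SAT-Math, then two NCQ scale scores (synthetic data).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=0.5, size=100)  # GPA proxy
print(forward_stepwise(X, y, ["SAT-V", "SAT-M", "NCQ-selfconcept", "NCQ-support"]))
```

In this synthetic setup only the SAT columns carry signal, mirroring the study's finding that the NCQ scales failed to enter the model after the SAT scores.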
175

Sufficiency-based Filtering of Invariants for Sequential Equivalence Checking

Hu, Wei 14 February 2011
Verification, as opposed to Testing and Post-Silicon Validation, is a critical step in Integrated Circuit (IC) design, answering the question "Are we designing the right function?" before the chips are manufactured. One of the core areas of Verification is Equivalence Checking (EC), which is a special yet independent case of Model Checking (MC). Equivalence Checking aims to prove that two circuits, when fed with the same inputs, produce exactly the same outputs. There are broadly two ways to conduct Equivalence Checking: simulation and Formal Equivalence Checking. Simulation requires one to try out different input combinations and observe whether the two circuits produce the same output; since it is not possible to enumerate all input combinations, completeness cannot be guaranteed. Formal Equivalence Checking, on the other hand, can achieve 100% confidence. As the number of gates, and in particular the number of flip-flops, in circuits has grown tremendously in recent years, the problem of Formal Equivalence Checking has become much harder — a recent evaluation of a general-case Formal Equivalence Checking engine [1] shows that about 15% of industrial designs cannot be verified after a typical sequential synthesis flow. As a result, Formal Equivalence Checking has drawn much attention both academically and industrially. For years, Combinational Equivalence Checking (CEC) has been the pervasive framework for Formal Equivalence Checking (FEC) in industry. However, because it can verify circuits only with a 1:1 flip-flop pairing, a pure CEC-based methodology requires a full regression of the verification process, meaning that sequential optimizations like retiming or FSM re-encoding become somewhat of a bottleneck in the design cycle [2]. Therefore, a more powerful framework — Sequential Equivalence Checking (SEC) — has been gradually adopted in industry. In this thesis, we target Sequential Equivalence Checking by finding an efficient yet powerful group of relationships (invariants) among the signals of the two circuits being compared. To achieve a high success rate on some extremely hard-to-verify circuits, we are interested in both two-node and multi-node (up to four nodes) invariants, among both flip-flops and internal signals. For large circuits, there can be too many potential invariants, requiring much time to prove. However, we observed that a large portion of them may not even contribute to equivalence checking. Moreover, equivalence checking can be helped significantly if there exists a method to check whether a subset of potential invariants would be sufficient (e.g., whether two-node invariants are enough or multi-node invariants are also needed) prior to the verification step. Therefore, we propose two sufficiency-based approaches to identify useful invariants among the initial potential invariants for SEC. Experimental results show that our approach can either demonstrate insufficiency of the invariants or select a small portion of them to successfully prove the equivalence property. Our approaches are largely case-independent and flexible: they can be applied to circuits produced by different synthesis techniques and combined with other techniques. / Master of Science
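A minimal sketch of the candidate-mining step that typically precedes such filtering: simulate the designs and keep only the two-node relations that hold on every sampled state. Survivors are still only potential invariants that must be proved (the sufficiency-based selection above then decides which are worth proving). The signal names and the toy "design" below are assumptions, not the thesis's benchmarks.

```python
import itertools, random

def mine_two_node_candidates(simulate, n_signals, n_runs=200, depth=50):
    """Collect candidate two-node invariants (equal / complementary pairs)
    that hold on every simulated state. `simulate(rng, depth)` yields one
    Boolean state vector per clock cycle. Surviving candidates are only
    *potential* invariants, pending proof (e.g., induction with a SAT solver)."""
    cands = {(i, j, rel)
             for i, j in itertools.combinations(range(n_signals), 2)
             for rel in ("eq", "neq")}
    rng = random.Random(1)
    for _ in range(n_runs):
        for state in simulate(rng, depth):
            cands = {(i, j, rel) for (i, j, rel) in cands
                     if (state[i] == state[j]) == (rel == "eq")}
    return cands

# Toy 3-signal "design": signal 2 is always the complement of signal 0.
def simulate(rng, depth):
    for _ in range(depth):
        s0 = rng.random() < 0.5
        yield (s0, rng.random() < 0.5, not s0)

print(mine_two_node_candidates(simulate, 3))
# Expected survivor: {(0, 2, 'neq')} — relations involving the random
# signal 1 get falsified by simulation.
```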
176

Enhancing SAT-based Formal Verification Methods using Global Learning

Arora, Rajat 25 May 2004
With the advances in VLSI and System-on-Chip (SoC) technology, the complexity of hardware systems has increased manifold; today, 70% of the design cost is spent verifying these intricate systems. The two most widely used formal methods for design verification are Equivalence Checking and Model Checking. Equivalence Checking requires that the implementation circuit be exactly equivalent to the specification circuit (golden model): for each possible input pattern, the implementation circuit should yield the same outputs as the specification circuit. Model Checking, on the other hand, checks whether the design holds certain properties, which in turn are indispensable for the proper functionality of the design. The complexity of both Equivalence Checking and Model Checking is exponential in the circuit size. In this thesis, we first propose a novel technique to improve SAT-based Combinational Equivalence Checking (CEC) and Bounded Model Checking (BMC). The idea is to perform a low-cost preprocessing that statically induces global signal relationships into the original CNF formula of the circuit under verification, thereby reducing the complexity of the SAT instance. This efficient and effective preprocessing quickly builds up the implication graph for the circuit under verification, yielding a large set of logic implications composed of direct, indirect, and extended backward implications. These two-node implications (spanning time-frame boundaries) are converted into two-literal clauses and added to the original CNF database. The added clauses constrain the search space of the SAT-solver engine and provide correlation among the different variables, which enhances Boolean Constraint Propagation (BCP). Experimental results on large and difficult ISCAS'85, ISCAS'89 (full scan), and ITC'99 (full scan) CEC instances and ISCAS'89 BMC instances show that our approach is independent of the state-of-the-art SAT solver used, and that the added clauses help to achieve more than an order of magnitude speedup over the conventional approach. Comparison with Hyper-Resolution [Bacchus 03] also suggests that our technique is much more powerful, yielding non-trivial clauses that significantly reduce the complexity of the SAT instance. Second, we propose a novel global learning technique that helps to identify highly non-trivial relationships among signals in the circuit netlist, thereby boosting the power of the existing implication engine. We call this new class of implications 'extended forward implications', and show their effectiveness through the additional untestable faults they help identify. Third, we propose a suite of lemmas and theorems to formalize global learning. We show through implementation that these theorems help to significantly simplify a generic CNF formula (from Formal Verification, Artificial Intelligence, etc.) by identifying necessary assignments, equivalent signals, complementary signals, and other non-trivial implication relationships among its variables. We further illustrate through experimental results that the CNF formula simplification obtained using our tool outshines that obtained using other preprocessors. / Master of Science
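A sketch of the clause-addition idea under simple assumptions: each learned two-node implication becomes a two-literal clause appended to the CNF database, where it strengthens Boolean Constraint Propagation. The example implications are hypothetical, not output from the thesis's implication engine.

```python
def implication_to_clause(a, va, b, vb):
    """(var a = va) -> (var b = vb) becomes the binary clause
    (-lit_a OR lit_b) in DIMACS-style signed-integer form."""
    lit_a = a if va else -a
    lit_b = b if vb else -b
    return (-lit_a, lit_b)

def add_implications(cnf, implications):
    """Append learned two-node implications to the CNF database; the extra
    binary clauses constrain the search and strengthen BCP."""
    for imp in implications:
        clause = implication_to_clause(*imp)
        if clause not in cnf and clause[::-1] not in cnf:  # skip duplicates
            cnf.append(clause)
    return cnf

# Hypothetical implications discovered for some netlist:
# (x3=1 -> x7=1) as an indirect implication, (x2=1 -> x5=0) as an
# extended backward one.
cnf = [(1, 2, -3), (-2, 4)]
print(add_implications(cnf, [(3, True, 7, True), (2, True, 5, False)]))
# -> [(1, 2, -3), (-2, 4), (-3, 7), (-2, -5)]
```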
177

Sequential Equivalence Checking with Efficient Filtering Strategies for Inductive Invariants

Nguyen, Huy 24 May 2011
Powerful sequential optimization techniques can drastically change the Integrated Circuit (IC) design paradigm. Yet aggressive sequential optimization is shunned nowadays because, given the limited capability of sequential verification tools, there is no efficient way to prove the preservation of equivalence after optimization. Because the number of transistors fitting on a single fixed-size die increases with Moore's law, the problem gets harder over time, at an exponential rate. It is no surprise that functional verification has become a major bottleneck in the time-to-market of a product; in fact, the literature reports that 70% of design time is spent making sure the design is bug-free and operating correctly. One of the core verification tasks in achieving high-quality products is equivalence checking. Essentially, equivalence checking ensures that the optimized product preserves the functionality of the unoptimized model. This is important for industry because products are modified constantly to meet different goals such as low power, high performance, etc. The mainstream approaches to equivalence checking are simulation and formal verification. In the simulation approach, the golden design and the design under verification (DUV) are fed the same input stimuli and are expected to produce identical responses; in case of discrepancy, traces are generated and the DUV undergoes modifications. With the increase in input pins and state elements in designs, exhaustive simulation becomes infeasible, so the completeness of the approach is not guaranteed and notions of coverage must accompany it. Formal verification, on the other hand, incorporates mathematical proofs and guarantees completeness over the search space. However, formal verification has problems of its own: it is usually resource-intensive, and not every design can be verified after optimization, since the golden model and the DUV may be so different in structure that modern checkers give inconclusive results. For these reasons, this thesis focuses on improving the strength and efficiency of sequential equivalence checking (SEC) using a formal approach. While great strides have been made in verification for combinational circuits, SEC still remains rather rudimentary. Without powerful SEC as a backbone, aggressive sequential synthesis and optimization are often avoided if the optimized design cannot be proved equivalent to the original one. In an attempt to take on the challenges of SEC, we propose two frameworks that successfully determine equivalence for hard-to-verify circuits. The first framework utilizes arbitrary relations between any two nodes within the two sequential circuits in question; the two nodes can reside in the same circuit or across the two circuits, and in the same time-frame or across time-frames. The merit of this approach is that it uses the global structure of the circuits to speed up the verification process. The second framework introduces techniques to identify a subset of powerful multi-node relations (involving more than two nodes), which help to prune a large don't-care search space and result in a successful SEC framework. In contrast with previous approaches, in which an exponential number of multi-node relations are mined and learned, we alleviate the computation cost by selecting far fewer invariants to achieve the desired conclusion. Although independent, the two frameworks can be used in sequence to complement each other. Experimental results demonstrate that our frameworks can take on many hard-to-verify cases and show a significant speedup over previous approaches. / Master of Science
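To see why selecting invariants matters, consider a naive miner of multi-node relations: for every k-tuple of signals, any value combination never observed in simulation is a candidate invariant. Even this toy version (an illustrative sketch, not the thesis's technique) produces candidates far faster than they can be proved, which is exactly the blow-up that selecting far fewer invariants addresses.

```python
import itertools

def mine_multinode_candidates(traces, k=3):
    """Candidate k-node invariants as value combinations never seen in
    simulation: for each k-tuple of signals, any of the 2^k valuations
    absent from every sampled state is a potential invariant (a clause
    forbidding it), pending formal proof by induction."""
    n = len(traces[0])
    cands = {}
    for combo in itertools.combinations(range(n), k):
        seen = {tuple(state[i] for i in combo) for state in traces}
        missing = set(itertools.product((0, 1), repeat=k)) - seen
        if missing:
            cands[combo] = missing
    return cands

# Toy traces over 4 signals; signals (0, 1, 2) never take the pattern (1, 1, 0).
traces = [(0, 0, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1), (0, 1, 0, 0), (0, 1, 1, 1)]
for combo, missing in mine_multinode_candidates(traces).items():
    print(combo, sorted(missing))
```

With only a handful of sampled states, every signal tuple yields several unproven candidates; on a real design the count explodes, which is why pruning the candidate set before the expensive proof step pays off.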
178

The Use of Graphic Organizers on the Reading Comprehension of High School Students

Ramos Lorenzo, Beckylee 01 January 2024
This study seeks to identify a correlation between the use of graphic organizers and the reading comprehension of twelfth-grade English language arts students. Specifically, it seeks to determine whether the use of graphic organizers in the teaching of reading affects the reading comprehension of twelfth-grade English language arts students. The study intends to help close an information gap surrounding the effects of instructional strategies such as graphic organizers on reading comprehension in a twelfth-grade English class. The research questions guided this study in computing descriptive statistics for PSAT reading comprehension scores during the 2021-2022 school year, when Prentice Hall workbooks were used, and SAT reading comprehension scores during the 2022-2023 school year, when StudySync workbooks were used. Descriptive statistics were also analyzed for ESOL students' PSAT reading comprehension scores (2021-2022, Prentice Hall workbooks) and ESOL students' SAT reading comprehension scores (2022-2023, StudySync workbooks). The study followed a quantitative method to examine the correlation of graphic organizers with the reading comprehension of students in the twelfth grade. A Pearson correlation between the 2021-2022 PSAT and 2022-2023 SAT scores was used to identify the significance of the relationship between graphic organizer use and reading comprehension. A G*Power analysis was calculated to identify the sample size for a Pearson's r correlation on the post hoc data. Findings may assist high school teachers and educational leaders in considering graphic organizers as an instructional strategy to increase student reading achievement.
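For reference, the Pearson product-moment statistic the study relies on can be computed as below; the paired score lists are hypothetical stand-ins, not the study's data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired reading scores: PSAT (2021-22) vs. SAT (2022-23).
psat = [430, 480, 510, 550, 590, 620]
sat  = [450, 500, 520, 580, 600, 650]
print(round(pearson_r(psat, sat), 3))  # close to 1: strongly correlated
```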
179

Learning during search

Arbelaez Rodriguez, Alejandro 31 May 2011
Autonomous Search is a new emerging area in Constraint Programming, motivated by the demonstrated importance of applying Machine Learning techniques to the Algorithm Selection Problem, with potential applications ranging from planning and configuring to scheduling. This area aims at developing automatic tools to improve the performance of search algorithms on combinatorial problems, e.g., selecting the best parameter settings for a constraint solver on a particular problem instance. In this thesis, we study three different points of view to automatically solve combinatorial problems; in particular, Constraint Satisfaction, Constraint Optimization, and SAT problems. First, we present domFD, a new Variable Selection Heuristic whose objective is to heuristically compute a simplified form of functional dependencies called weak dependencies; these weak dependencies are then used to guide the search at each decision point. Second, we study the Algorithm Selection Problem from two different angles. On the one hand, we review a traditional portfolio algorithm to learn, offline, a heuristics model for the Protein Structure Prediction Problem. On the other hand, we present the Continuous Search paradigm, whose objective is to allow any user to eventually get their constraint solver to achieve top performance on their problems. Continuous Search comes in two modes: the functioning mode solves the user's problem instances using the current heuristics model; the exploration mode reuses these instances to train and improve the heuristics model through Machine Learning during the computer's idle time. Finally, the last part of the thesis considers the question of adding a knowledge-sharing layer to current portfolio-based parallel local search solvers for SAT. We show that by sharing the best configuration of each algorithm in the parallel portfolio on a regular basis and aggregating this information in special ways, the overall performance can be greatly improved.
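A minimal sketch of the algorithm-selection idea underlying such portfolios: describe each instance by a feature vector and pick the solver that performed best on the most similar training instances. The features, solver names, and nearest-neighbor rule here are illustrative assumptions, not the thesis's learned model.

```python
def knn_select(train, features, k=3):
    """Pick a solver for a new instance by majority vote among the k
    training instances with the nearest feature vectors."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda row: dist(row[0], features))[:k]
    labels = [best_solver for _, best_solver in nearest]
    return max(set(labels), key=labels.count)

# (instance feature vector, solver that was fastest on it) -- hypothetical data;
# features might be, e.g., clause/variable ratio and number of constraints.
train = [
    ((0.9, 120), "solver_A"), ((0.8, 150), "solver_A"),
    ((0.2, 900), "solver_B"), ((0.3, 870), "solver_B"),
    ((0.25, 910), "solver_B"),
]
print(knn_select(train, (0.85, 130)))  # -> "solver_A"
```

In the Continuous Search setting, the `train` table would grow during idle time as the exploration mode replays the user's own instances, gradually specializing the model to that user's workload.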
180

Local and Global Symmetries in Propositional Logic and their Extensions to Non-Monotonic Logics

Nabhani, Tarek 09 December 2011
Symmetry is by definition a multidisciplinary concept: it appears in many fields and, in general, is a transformation that leaves an object invariant. The satisfiability problem (SAT) plays a central role in complexity theory: it was the first decision problem proved NP-complete (Cook, 1971), and it asks whether a CNF formula admits a valuation that makes it true. As a first contribution, we introduce a new complete method that eliminates all local symmetries when solving a SAT problem, by exploiting its group of symmetries. Our experimental results show that, on some SAT instances, exploiting local symmetries is better than exploiting only global symmetries, and that the two types of symmetries are complementary: combining them gives better exploitation. As a second contribution, we propose a symmetry-based clause-learning approach for modern SAT solvers. This method does not eliminate symmetric models, as static symmetry-elimination methods do; instead, it avoids exploring sub-spaces corresponding to symmetric no-goods of the current partial interpretation. Our experimental results show that using these symmetries in the learning scheme is profitable for CDCL solvers. In Artificial Intelligence, reasoning about knowledge with exceptions usually involves non-monotonicity and uncertainty. As a third and final contribution, we therefore extend the notion of symmetry to non-classical (non-monotonic) logics: preferential logics, X-logics, and default logics. We show how to reason by symmetry in these logics, and we exhibit symmetries that exist in these logics but not in classical logic.
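A small sketch of the symmetry-based learning idea: once a clause is learned, every image of it under a symmetry of the formula is also entailed, so it can be learned for free. For simplicity this assumes symmetries given as variable permutations (real solvers work with literal permutations, typically detected with graph-automorphism tools); the example swap is hypothetical.

```python
def apply_symmetry(clause, perm):
    """Map a clause through a symmetry given as a permutation on variables;
    the sign of each literal is preserved (variable symmetry, for simplicity)."""
    return tuple((1 if lit > 0 else -1) * perm[abs(lit)] for lit in clause)

def learn_with_symmetries(learned_clause, symmetries):
    """A learned no-good clause plus each of its symmetric images: all are
    entailed by the formula, so a CDCL solver may safely add them all."""
    images = {learned_clause}
    for perm in symmetries:
        images.add(apply_symmetry(learned_clause, perm))
    return images

# Formula assumed symmetric under swapping variables 1 and 2 (hypothetical):
swap12 = {1: 2, 2: 1, 3: 3}
print(learn_with_symmetries((-1, 3), [swap12]))
# -> {(-1, 3), (-2, 3)}: the symmetric image comes for free, pruning the
#    symmetric part of the search space without re-deriving it.
```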
