11

Investigating visual attention while solving college algebra problems

Johnson, Jennifer E. January 1900 (has links)
Master of Science / Mathematics / Andrew G. Bennett / This study utilizes eye-tracking technology as a tool to measure college algebra students’ mathematical noticing as defined by Lobato and colleagues (2012). Research in many disciplines has used eye tracking to investigate differences in visual attention, under the assumption that eye movements reflect a person’s moment-to-moment cognitive processes. Motivated by the work of Madsen and colleagues (2012), who found differences in visual attention between students who correctly and incorrectly solve introductory college physics problems, we used eye tracking to observe differences in visual attention between correct and incorrect solvers of college algebra problems. More specifically, we consider students’ visual attention when they are presented with tabular representations of linear functions. We found that in several of the problems analyzed, those who answered correctly spent more time looking at the relevant table values, while those who answered incorrectly spent more time, in comparison to the correct solvers, looking at the irrelevant table labels x, y, and y = f(x). More significantly, we found a noteworthy group of students who did not move beyond the table labels, relying on these labels alone to solve the problem. Future analyses should expand on the differences between eye-movement patterns rather than focusing solely on dwell time in the relevant and irrelevant areas of a table.
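The dwell-time comparison can be illustrated with a small sketch. This is not the study's analysis code: the fixation log, AOI names, and durations below are hypothetical.

```python
from collections import defaultdict

# Hypothetical fixation log: (solver_group, aoi, duration_ms).
# AOI "values" = table entries; AOI "labels" = x, y, y = f(x) headers.
fixations = [
    ("correct",   "values", 420), ("correct",   "labels", 130),
    ("correct",   "values", 510), ("incorrect", "labels", 610),
    ("incorrect", "labels", 340), ("incorrect", "values", 150),
]

# Accumulate dwell time per group and AOI.
dwell = defaultdict(float)
totals = defaultdict(float)
for group, aoi, dur in fixations:
    dwell[(group, aoi)] += dur
    totals[group] += dur

# Proportion of dwell time spent in each AOI, per group.
for (group, aoi), ms in sorted(dwell.items()):
    print(f"{group:9s} {aoi:7s} {ms / totals[group]:.2f}")
```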
12

A Classification of the Weed Vegetation in Mituo County, Kaohsiung

Lin, Chun-yi 07 February 2010 (has links)
This study surveyed the floristic composition and distribution of weed vegetation in Mituo County. A total of 206 relevés were surveyed using the relevé method, recording 140 vascular plant species belonging to 32 families. The weed communities were classified with nonmetric multidimensional scaling, two-way indicator species analysis, the tabular comparison method, and fidelity and synoptic table analysis. Discriminant analysis was used to evaluate the distinctness of the classification units. The final vegetation classification system was made using the Braun-Blanquet approach of floristic-sociological classification at lower levels and physiognomic-sociological classification at higher levels. In floristic-sociological classification, the association is the basic unit, and associations are grouped into higher units (alliances) by floristic composition. The results showed 1 formation class and 2 formations as physiognomic units, and 4 alliances and 6 associations as floristic units:
I. Lower montane-lowland weed vegetation formation
  A. Echinochloa colona alliance
    a. Echinochloa colona association
    b. Trianthema portulacastrum association
    c. Panicum maximum association
  B. Dichanthium aristatum alliance
    d. Dichanthium aristatum association
  C. Eriochloa procera alliance
    e. Eriochloa procera association
II. Sand dune vegetation formation
  D. Ipomoea pes-caprae subsp. brasiliensis alliance
    f. Ipomoea pes-caprae subsp. brasiliensis association
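As a rough illustration of the ordination step, the sketch below runs nonmetric multidimensional scaling on Bray-Curtis dissimilarities computed from an invented relevé-by-species abundance table; the data, sizes, and settings are placeholders, not the survey's.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Toy relevé-by-species abundance matrix (6 relevés x 5 species).
rng = np.random.default_rng(0)
abundance = rng.integers(0, 10, size=(6, 5))

# Bray-Curtis dissimilarity between relevés.
dissim = squareform(pdist(abundance, metric="braycurtis"))

# Nonmetric MDS on the precomputed dissimilarity matrix.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0)
coords = nmds.fit_transform(dissim)
print(coords)        # 2-D ordination coordinates, one row per relevé
print(nmds.stress_)  # stress value: lower means a better fit
```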
13

Network-based visual analysis of tabular data

Liu, Zhicheng 04 April 2012 (has links)
Tabular data is pervasive in the form of spreadsheets and relational databases. Although tables often describe multivariate data without explicit network semantics, it may be advantageous to explore the data modeled as a graph or network for analysis. Even when a given table design conveys some static network semantics, analysts may want to look at multiple networks from different perspectives, at different levels of abstraction, and with different edge semantics. This dissertation is motivated by the observation that a general approach for performing multi-dimensional and multi-level network-based visual analysis on multivariate tabular data is necessary. We present a formal framework based on the relational data model that systematically specifies the construction and transformation of graphs from relational data tables. In the framework, a set of relational operators provide the basis for rich expressive power for network modeling. Powered by this relational algebraic framework, we design and implement a visual analytics system called Ploceus. Ploceus supports flexible construction and transformation of networks through a direct manipulation interface, and integrates dynamic network manipulation with visual exploration for a seamless analytic experience.
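To give a flavor of the kind of network modeling the dissertation formalizes, here is a minimal hypothetical sketch (not Ploceus itself): two columns of a relational table are projected into a bipartite graph, with distinct column values as nodes and row co-occurrence as edges.

```python
import pandas as pd
import networkx as nx
from networkx.algorithms import bipartite

# Toy relational table: who published in which venue.
table = pd.DataFrame({
    "author": ["Ann", "Ann", "Bob", "Cai", "Cai"],
    "venue":  ["VIS", "CHI", "VIS", "CHI", "VIS"],
})

# Project the table into a bipartite graph: column values become
# nodes, and each row contributes an author-venue edge.
G = nx.Graph()
for _, row in table.iterrows():
    G.add_edge(("author", row["author"]), ("venue", row["venue"]))

# A one-mode projection with different edge semantics: authors are
# linked when they share a venue, weighted by shared venues.
authors = [n for n in G if n[0] == "author"]
coauthor = bipartite.weighted_projected_graph(G, authors)
print(coauthor.edges(data=True))
```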
14

Programming language semantics as a foundation for Bayesian inference

Szymczak, Marcin January 2018 (has links)
Bayesian modelling, in which our prior belief about the distribution on model parameters is updated by observed data, is a popular approach to statistical data analysis. However, writing specific inference algorithms for Bayesian models by hand is time-consuming and requires significant machine learning expertise. Probabilistic programming promises to make Bayesian modelling easier and more accessible by letting the user express a generative model as a short computer program (with random variables), leaving inference to the generic algorithm provided by the compiler of the given language. However, it is not easy to design a probabilistic programming language correctly and define the meaning of programs expressible in it. Moreover, the inference algorithms used by probabilistic programming systems usually lack formal correctness proofs and bugs have been found in some of them, which limits the confidence one can have in the results they return. In this work, we apply ideas from the areas of programming language theory and statistics to show that probabilistic programming can be a reliable tool for Bayesian inference.

The first part of this dissertation concerns the design, semantics and type system of a new, substantially enhanced version of the Tabular language. Tabular is a schema-based probabilistic language, which means that instead of writing a full program, the user only has to annotate the columns of a schema with expressions generating corresponding values. By adopting this paradigm, Tabular aims to be user-friendly, but this unusual design also makes it harder to define the syntax and semantics correctly and reason about the language. We define the syntax of a version of Tabular extended with user-defined functions and pseudo-deterministic queries, design a dependent type system for this language and endow it with a precise semantics. We also extend Tabular with a concise formula notation for hierarchical linear regressions, define the type system of this extended language and show how to reduce it to pure Tabular.

In the second part of this dissertation, we present the first correctness proof for a Metropolis-Hastings sampling algorithm for a higher-order probabilistic language. We define a measure-theoretic semantics of the language by means of an operationally-defined density function on program traces (sequences of random variables) and a map from traces to program outputs. We then show that the distribution of samples returned by our algorithm (a variant of “Trace MCMC” used by the Church language) matches the program semantics in the limit.
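For intuition only, here is a heavily simplified sketch of trace-based Metropolis-Hastings: a program trace is reduced to a fixed-length vector of random draws with a hand-written density, and a symmetric random-walk proposal is used. The algorithm proved correct in the dissertation handles variable-length, higher-order traces; the model and densities below are invented.

```python
import math
import random

def trace_density(trace):
    """Unnormalized density of a toy model: x ~ N(0,1), y ~ N(x,1),
    conditioned on an observation y = 2.0. The trace is just [x]."""
    (x,) = trace
    prior = math.exp(-0.5 * x * x)
    likelihood = math.exp(-0.5 * (2.0 - x) ** 2)
    return prior * likelihood

def metropolis_hastings(density, init, steps=10_000, scale=0.5):
    trace, p = list(init), density(init)
    samples = []
    for _ in range(steps):
        # Symmetric random-walk proposal on one site of the trace.
        i = random.randrange(len(trace))
        proposal = list(trace)
        proposal[i] += random.gauss(0.0, scale)
        p_new = density(proposal)
        # Accept with probability min(1, p_new / p).
        if random.random() < p_new / p:
            trace, p = proposal, p_new
        samples.append(trace[0])
    return samples

samples = metropolis_hastings(trace_density, [0.0])
print(sum(samples) / len(samples))  # posterior mean is 1.0 analytically
```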
15

Utilizing the potential of the BI semantic model in MS SQL Server 2012

Zelený, Jindřích January 2015 (has links)
This thesis deals with different approaches to developing the analytical model of a data warehouse, with a focus on tabular mode and the associated technologies and tools from Microsoft. The theoretical part introduces the principles of Business Intelligence and the concept of the semantic model. It also presents the tabular model as a new approach to creating an analytical database stored in RAM. In the practical part, the tabular model is developed on top of the data warehouse of the fictitious company Contoso. The emphasis is put mainly on the comparison between the tabular and the multidimensional model. The work ends with deploying both models on a virtual server and comparing their computing performance for each of the designed scenarios.
16

Combining Cell Painting, Gene Expression and Structure-Activity Data for Mechanism of Action Prediction

Everett Palm, Erik January 2023 (has links)
The rapid progress in high-throughput omics methods and high-resolution morphological profiling, coupled with significant advances in machine learning (ML) and deep learning (DL), has opened new avenues for tackling the notoriously difficult problem of predicting the Mechanism of Action (MoA) of a drug of clinical interest. Understanding a drug's MoA can enrich our knowledge of its biological activity, shed light on potential side effects, and serve as a predictor of clinical success. This project aimed to examine whether incorporating gene expression data from the LINCS L1000 public repository into a joint model previously developed by Tian et al. (2022), which combined chemical structure and morphological profiles derived from Cell Painting, would have a synergistic effect on the model's ability to classify chemical compounds into ten well-represented MoA classes. To do this, I explored the gene expression dataset to assess its quality, volume, and limitations. I applied a variety of ML and DL methods to identify the optimal single model for MoA classification using gene expression data, with a particular emphasis on transforming tabular data into image data to harness the power of convolutional neural networks. To capitalize on the complementary information stored in different modalities, I tested end-to-end integration and soft voting on sets of joint models across five stratified data splits. The gene expression dataset was relatively low in quality, with many uncontrollable factors that complicated MoA prediction. The highest-performing gene expression model was a one-dimensional convolutional neural network, with an average macro F1 score of 0.40877 and a standard deviation of 0.034. Approaches converting tabular data into image data did not significantly outperform other methods. Combining optimized single models resulted in a performance decline compared to the best single model in the combination. My project underscores that standardized data generation and optimized data fusion methods are needed to take full advantage of algorithmic developments in drug development and high-throughput multi-omics data.
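For illustration, a minimal one-dimensional CNN classifier of the general kind described might look like the sketch below (PyTorch, with invented layer sizes; 978 inputs stands in for the L1000 landmark genes and 10 outputs for the MoA classes — this is not the thesis's actual architecture).

```python
import torch
import torch.nn as nn

class GeneExpressionCNN(nn.Module):
    """Toy 1-D CNN over a gene-expression profile."""
    def __init__(self, n_genes=978, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):          # x: (batch, n_genes)
        x = x.unsqueeze(1)         # -> (batch, 1, n_genes) for Conv1d
        return self.classifier(self.features(x).squeeze(-1))

model = GeneExpressionCNN()
profiles = torch.randn(4, 978)     # 4 fake expression profiles
print(model(profiles).shape)       # torch.Size([4, 10]) class logits
```

Macro F1, the score reported above, averages per-class F1 so that small classes count equally (e.g. sklearn.metrics.f1_score(y_true, y_pred, average="macro")).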
17

Querying Structured Data via Informative Representations

Bandyopadhyay, Bortik January 2020 (has links)
No description available.
18

An Engineering Methodology for the Formal Verification of Function Block Based Systems

Pang, Linna 11 1900 (has links)
Many industrial control systems use programmable logic controllers (PLCs) since they provide a highly reliable, off-the-shelf hardware platform. On the programming side, function blocks (FBs) are reusable PLC components that can be composed to implement the required system behaviour. A higher quality system may be realized if the FBs are pre-certified to be compliant with an international standard such as IEC 61131-3. Unfortunately, the programming notations defined in IEC 61131-3 lack well-defined formal semantics. As a result, tool vendors and users of PLCs may have inconsistent interpretations of the expected system behaviour. To address this issue, we propose an engineering method for formally verifying the conformance of candidate implementations of FBs (and their compositions) to their high-level, input-output requirements. The proposed method is sufficiently general to handle FBs supplied by IEC 61131-3, and industrial FB applications involving real-time requirements. Our method involves several steps. First, we use tabular expressions to ensure the completeness and disjointness of the requirements for the FB. Second, we formalize the candidate implementation(s) of the FB in question. Third, we state and prove theorems regarding the consistency and correctness of the FB. All three steps are performed using the Prototype Verification System (PVS) proof assistant. As a first case study, we apply our approach to the IEC 61131-3 standard to examine the entire library of FBs and their supplied implementations described in structured text (ST) and function block diagrams (FBDs). As a second case study, we apply our approach to two realistic sub-systems taken from the nuclear domain. Applying the proposed method, we identified three kinds of issues: ambiguous behavioural descriptions, missing assumptions, and erroneous implementations. Furthermore, we suggest solutions to these issues. / Thesis / Doctor of Philosophy (PhD) / A formal verification approach for function block based control systems
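The completeness and disjointness obligations from the first step can be illustrated outside PVS; the hedged sketch below uses the Z3 SMT solver's Python bindings on an invented three-row condition table over one real input (the thesis discharges the analogous obligations in PVS, not Z3).

```python
from itertools import combinations
from z3 import And, Not, Or, Real, Solver, unsat

x = Real("x")
# Guards of a toy one-input condition table.
rows = [x < 0, x == 0, x > 0]

def proved_impossible(formula):
    """True iff the solver shows the formula has no satisfying input."""
    s = Solver()
    s.add(formula)
    return s.check() == unsat

# Disjointness: no two guards may hold simultaneously.
for a, b in combinations(rows, 2):
    assert proved_impossible(And(a, b)), f"rows overlap: {a}, {b}"

# Completeness: the guards together cover every possible input.
assert proved_impossible(Not(Or(rows))), "table has a gap"
print("table is complete and disjoint")
```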
19

A Tabular Expression Toolbox for Matlab/Simulink

Eles, Colin J. 10 1900 (has links)
Model based design has had a large impact on the process of software development in many different industries. A lack of formality in these environments can lead to incorrect software and does not facilitate the formal analysis of created models. A formal tool known as tabular expressions has been successfully used in developing safety-critical systems; however, insufficient tool support has hampered their wider adoption. To address this shortfall we have developed the Tabular Expression Toolbox for Matlab/Simulink.

We have developed an intuitive user interface that allows users to easily create, modify and check the completeness and disjointness of tabular expressions using the theorem prover PVS or the SMT solver CVC3. The tabular expressions are translated to m-functions, allowing their seamless use with Matlab's simulation and code generation. We present a method of generating counterexamples for incorrect tables and a means of effectively displaying this information to the user. We provide support for modelling inputs as floating point numbers; through subtyping, a user can show the properness of a table using a more concrete representation of data. The developed tools and processes have been used in the modelling of a nuclear shutdown system as a case study of the practicality and usefulness of the tools. / Master of Applied Science (MASc)
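The table-to-function translation can also be sketched generically. Below is a hypothetical Python analogue (the toolbox itself emits Matlab m-functions): a one-input condition table is compiled into a function that returns the result of the unique row whose guard holds.

```python
def table_to_function(rows):
    """Compile a list of (guard, result) pairs into a function.

    Assumes the guards were already proved complete and disjoint,
    so exactly one guard holds for any input."""
    def evaluate(x):
        matches = [result for guard, result in rows if guard(x)]
        assert len(matches) == 1, "guards not complete/disjoint here"
        return matches[0]
    return evaluate

# Toy table: the sign function written as a tabular expression.
sign = table_to_function([
    (lambda x: x < 0, -1),
    (lambda x: x == 0, 0),
    (lambda x: x > 0, 1),
])
print(sign(-3.5), sign(0), sign(7))  # -1 0 1
```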
20

Tracking and visualizing dimension space coverage for exploratory data analysis

Sarvghad Batn Moghaddam, Ali 15 August 2016 (has links)
In this dissertation, I investigate interactive visual history for collaborative exploratory data analysis (EDA). In particular, I examine the use of analysis history for improving awareness of dimension space coverage to better support data exploration. Commonly, interactive history tools facilitate data analysis by capturing and representing information about the analysis process. These tools can support a wide range of use-cases, from simple undo and redo to complete reconstructions of the visualization pipeline. In the context of exploratory collaborative Visual Analytics (VA), history tools are commonly used for reviewing and reusing past states/actions and do not efficiently support other use-cases such as understanding the past analysis from the angle of dimension space coverage. However, such knowledge is essential for exploratory analysis, which requires constant formulation of new questions about data. To carry out exploration, an analyst needs to understand “what has been done” versus “what is remaining” to explore. Lack of such insight can result in premature fixation on certain questions, compromising the coverage of the data set and breadth of exploration [80]. In addition, exploration of large data sets sometimes requires collaboration between a group of analysts who might be in different time/location settings. In this case, in addition to personal analysis history, each team member needs to understand what aspects of the problem his or her collaborators have explored. Such scenarios are common in domains such as science and business [34] where analysts explore large multi-dimensional data sets in search of relationships, patterns and trends. Currently, analysts typically rely on memory and/or externalization to keep track of investigated versus uninvestigated aspects of the problem. Although analysis history mechanisms have the potential to assist analyst(s) with this problem, most common visual representations of history are geared towards reviewing and reusing the visualization pipeline or visualization states. I started this research with an observational user study to gain a better understanding of analysts’ history needs in the context of collaborative exploratory VA. This study showed that understanding the coverage of dimension space by using linear history was cumbersome and inefficient. To address this problem, I investigated how alternate visual representations of analysis history could support this use-case. First, I designed and evaluated Footprint-I, a visual history tool that represented analysis from the angle of dimension space coverage (i.e. history of investigation of data dimensions; specifically, this approach revealed which dimensions had been previously investigated and in which combinations). I performed a user study that evaluated participants’ ability to recall the scope of past analysis using my proposed design versus a linear representation of analysis history. I measured participants’ task duration and accuracy in answering questions about a past exploratory VA session. Findings of this study showed that participants with access to dimension space coverage information were both faster and more accurate in understanding dimension space coverage information. Next, I studied the effects of providing coverage information on collaboration. To investigate this question, I designed and implemented Footprint-II, the next version of Footprint-I.
In this version, I redesigned the representation of dimension space coverage to be more usable and scalable. I conducted a user study that measured the effects of presenting history from the angle of dimension space coverage on task coordination (tacit breakdown of a common task between collaborators). I asked each participant to assume the role of a business data analyst and continue an exploratory analysis started by a collaborator. The results of this study showed that providing dimension space coverage information helped participants to focus on dimensions that were not investigated in the initial analysis, hence improving tacit task coordination. Finally, I investigated the effects of providing live dimension space coverage information on VA outcomes. To this end, I designed and implemented a standalone prototype VA tool with a visual history module. I used scented widgets [76] to incorporate real-time dimension space coverage information into the GUI widgets. Results of a user study showed that providing live dimension space coverage information increased the number of top-level findings. Moreover, it expanded the breadth of exploration (without compromising the depth) and helped analysts to formulate and ask more questions about their data. / Graduate / 0984 / ali.sarvghad@gmail.com
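As a toy illustration of tracking dimension space coverage, the sketch below records which dimension combinations each past view used and reports the pairs nobody has explored; the session log and dimension names are invented.

```python
from itertools import combinations

dimensions = {"region", "product", "quarter", "channel"}

# Hypothetical session log: the dimensions shown in each past view.
history = [
    {"region", "quarter"},
    {"region", "product"},
    {"quarter"},
]

# Coverage: every combination of dimensions investigated so far,
# counting each view's subsets as covered.
covered = set()
for view in history:
    for k in range(1, len(view) + 1):
        covered.update(map(frozenset, combinations(view, k)))

# Which dimension pairs has nobody looked at yet?
unexplored = [set(pair)
              for pair in map(frozenset, combinations(dimensions, 2))
              if pair not in covered]
print(unexplored)
```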
