61

The impact of adding front-of-package sodium content labels to grocery products: an experimental study

Goodman, Samantha January 2011 (has links)
A high sodium diet is a predominant risk factor for hypertension, which is in turn a major risk factor for cardiovascular disease. Canadians consume approximately twice the daily Adequate Intake of sodium, most of which comes from processed foods. Enhancing nutrition labelling for sodium in the form of front-of-package (FOP) labels may help consumers select healthier products. This experimental study examined the efficacy of 4 types of FOP nutrition labels on participant selection of low versus high sodium products. A total of 430 adults from the Waterloo Region were randomly assigned to one of 5 experimental conditions: (1) a control condition with no FOP label; (2) a basic numeric FOP label; (3) a numeric FOP label with “high” and “low” sodium content descriptors; (4) a detailed Traffic Light (TL) label with colour coding, content descriptors and numeric information; and (5) a simple TL label that did not include numeric information. Participants were shown pairs of grocery products that varied primarily in sodium content, and asked to select a free product. Selection of the low versus high sodium product served as the primary behavioural outcome; rankings and ratings of the experimental labels were also examined. Regression models were used to determine the relative efficacy of the 4 labelling formats, as well as the socio-demographic and diet and health-related predictors of these outcomes. Results indicated that participants in the FOP conditions with “low” and “high” sodium content descriptors (conditions 3, 4 and 5) were significantly more likely to choose the low sodium product compared to the control group. The detailed TL was ranked as the most effective at helping participants select low sodium products; this label was also rated significantly higher than other formats in liking, understanding and believability.
Product selection did not differ significantly across socio-demographic groups, suggesting that FOP labelling might reduce the disparity in the use and understanding of nutrition labels among groups of varying socioeconomic status. This study has important policy implications. Results suggest that FOP labels should include content descriptors, which add prescriptive value and may help consumers select healthier products by improving understanding. TL labels, which incorporate content descriptors and colour coding, are recommended for future FOP labelling initiatives.
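The core comparison in a design like this — whether a labelling condition yields a higher rate of low-sodium choices than the control — can be illustrated with a two-proportion z-test. A minimal sketch; the counts below are hypothetical and are not the study's data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts (NOT the study's data): low-sodium choices out of
# 86 participants per condition, roughly one fifth of the 430 participants.
label_condition = (55, 86)   # e.g. a detailed Traffic Light label
control = (30, 86)           # no FOP label
z = two_proportion_z(*label_condition, *control)
# |z| > 1.96 indicates a difference significant at the 5% level
```

The thesis reports regression models rather than a bare z-test, since regression also accommodates socio-demographic and diet-related covariates; the z-test only illustrates the underlying two-group comparison.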
62

Large-scale metabolic flux analysis for mammalian cells: a systematic progression from model conception to model reduction to experimental design

Lake-ee Quek Unknown Date (has links)
Recombinant protein production by mammalian cells is a core component of today’s multi-billion dollar biopharmaceutical industry. Transcriptome and proteome technologies have been used to probe for cellular components that correlate with higher cell-specific productivity, but have yet to yield results that can be translated into practical metabolic engineering strategies. The recognition of cellular complexity has led to an increasing adoption of systems biology, a holistic investigation approach that aims to bring together different omics technologies and to analyze the resulting datasets under a unifying context. Fluxomics is chosen as the platform context to investigate cell metabolism because it captures the integrated effects of gene expression, enzyme activity, metabolite availability and regulation, thereby providing a global picture of the cell’s metabolic phenotype. At present, the routine quantification of cell metabolism revolves around very basic cellular parameters: growth, substrate utilization and product formation. For a systems approach, however, just measuring gross metabolic features is insufficient; we are compelled to perform high-resolution, large-scale fluxomics in order to match the scale of other omics datasets. The challenges of performing large-scale fluxomics come from two opposing fronts. Metabolic flux analysis (MFA) is the estimation of intracellular fluxes from experimental data using a stoichiometric model, a process very much susceptible to modelling biases. The in silico challenge is to construct the most comprehensive model to represent the metabolism of a specific cell, while the in vivo challenge is to resolve as many fluxes as possible using experimental measurements or constraints. A compromise needs to be established between maximizing the resolution of the MFA model and working within technical limitations of the flux experiment. 
Conventional MFA models assembled from textbook pathways have been available for animal cell culture for the past 15 years. A state-of-the-art model was developed and used to analyse continuous hybridoma culture and batch CHO cell culture data (Chapter 3). Reasonable metabolic assumptions combined with constraint-based analysis exploiting irreversibility constraints enabled the resolution of most fluxes in central carbon metabolism. However, while the results appear consistent, there is insufficient information in conventional measurements of uptake, secretion and growth to assess the completeness of the model and the validity of all assumptions. 13C metabolic flux analysis (13C MFA) can potentially resolve fluxes in central carbon metabolism using flux constraints generated from 13C enrichment patterns of metabolites, but the multitude of substrate uptakes (glucose and amino acids) seen in mammalian cells, in addition to the lack of 13C enrichment data from proteinogenic amino acids, makes it very difficult to anticipate how a labelling experiment should be carried out. These challenges have led to the development of a systematic workflow for performing large-scale MFA for mammalian cells. A genome-scale model (GeM), an accurate compilation of gene-protein-reaction-metabolite associations, is the starting basis for whole-cell fluxomics. A semi-automated method was developed to rapidly extract a prototype GeM from the KEGG and UniProtKB databases (Chapter 4). Core metabolic pathways in the mouse GeM are mostly complete, suggesting that these databases are comprehensive and sufficient. The rapid prototyping system takes advantage of this, making long-term maintenance of an accurate and up-to-date GeM by an individual possible. A large number of under-determined pathways in the mouse GeM cannot be resolved by 13C MFA because they do not produce any distinctive 13C enrichment patterns among the carbon metabolites.
This has led to the development of SLIPs (short linearly independent pathways) for visualizing these under-determined metabolic pathways contained in large-scale GeMs (Chapter 5). Certain SLIPs are subsequently removed based on careful consideration of their pathway functions and the implications of their removal. A majority of SLIPs have a cyclic configuration, sharing similar redox or energy co-metabolites; very few represent true conversion of substrates to products. Of the 266 under-determined SLIPs generated from the mouse GeM, only 27 SLIPs were incorporated into the final working model under the criterion that they are significant pathways and are potentially resolvable by tracer experiments. Most of these SLIPs are degradation pathways of essential amino acids and inter-conversion of non-essential amino acids (Chapter 8). In parallel, OpenFLUX was developed to perform large-scale isotopic 13C MFA (Chapter 6). This software was built to accept multiple labelled substrates, and no restriction has been placed on the model type or enrichment data. These are necessary features to support large-scale flux analysis for mammalian cells. This was followed by the development of a design strategy that uses analytical gradients of isotopomer measurements to predict resolvability of free fluxes, from which the effectiveness of various 13C experimental scenarios using different combinations of input substrates and isotopomer measurements can be evaluated (Chapter 7). Hypothetical and experimental results have confirmed the predictions that, when glucose and glutamate/glutamine are simultaneously consumed, two separate experiments using [U-13C]- and [1-13C]-glucose, respectively, should be performed. If there is a restriction to a single experiment, then the 80:20 mixture of [U-13C]- and [1-13C]-glucose can provide a better resolution than other labelled glucose mixtures (Chapter 7 and Chapter 8). 
The tools and framework developed in this thesis bring us within reach of performing large-scale, high-resolution fluxomics for animal cells and hence realising systems-level investigation of mammalian metabolism. Moreover, with the establishment of a more rigorous, systematic modelling approach and higher-functioning computational tools, we are now in a position to validate mammalian cell culture flux experiments performed 15 years ago.
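At its core, steady-state MFA as described above amounts to solving the linear metabolite balances S·v = 0 with measured exchange fluxes as constraints. A minimal standard-library sketch on a toy two-metabolite network; the reaction layout and rates are illustrative, not taken from the thesis:

```python
def solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy network: A -(v1)-> B -(v2)-> C; C -(v3)-> secreted; C -(v4)-> byproduct
# Measured exchange fluxes: uptake v1 = 10, secretion v3 = 4.
# Steady-state balance on B: v1 - v2 = 0       ->  1*v2 + 0*v4 = 10
# Steady-state balance on C: v2 - v3 - v4 = 0  ->  1*v2 - 1*v4 = 4
v2, v4 = solve([[1.0, 0.0], [1.0, -1.0]], [10.0, 4.0])
```

Real GeM-scale systems are under-determined, which is exactly why the thesis turns to 13C constraints and pathway reduction rather than plain elimination.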
64

User hints for optimisation processes

Do Nascimento, Hugo Alexandre Dantas January 2003 (has links)
Innovative improvements in the area of Human-Computer Interaction and User Interfaces have enabled intuitive and effective applications for a variety of problems. On the other hand, there has also been the realization that several real-world optimization problems still cannot be totally automated. Very often, user interaction is necessary for refining the optimization problem, managing the computational resources available, or validating or adjusting a computer-generated solution. This thesis investigates how humans can help optimization methods to solve such difficult problems. It presents an interactive framework where users play a dynamic and important role by providing hints. Hints are actions that help to insert domain knowledge, to escape from local minima, to reduce the space of solutions to be explored, or to avoid ambiguity when there is more than one optimal solution. Examples of user hints are adjustments of constraints and of an objective function, focusing automatic methods on a subproblem of higher importance, and manual changes of an existing solution. User hints are given in an intuitive way through a graphical interface. Visualization tools are also included in order to inform about the state of the optimization process. We apply the User Hints framework to three combinatorial optimization problems: Graph Clustering, Graph Drawing and Map Labeling. Prototype systems are presented and evaluated for each problem. The results of the study indicate that optimization processes can benefit from human interaction. The main goal of this thesis is to list cases where human interaction is helpful, and to provide an architecture for supporting interactive optimization. Our contributions include the general User Hints framework and particular implementations of it for each optimization problem. We also present a general process, with guidelines, for applying our framework to other optimization problems.
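The idea of a hint that rescues a search from a local minimum can be sketched with a one-dimensional greedy descent; the objective function and the hinted restart value below are illustrative assumptions, not the thesis's prototypes:

```python
def local_search(f, x, step=0.1, iters=200):
    """Greedy descent: move to a neighbouring point whenever it improves f."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

# Objective with two basins: a local minimum near +1, the global one near -1.
f = lambda x: (x * x - 1) ** 2 + 0.3 * x

stuck = local_search(f, 0.9)     # descent alone gets trapped near +1
hinted = local_search(f, -0.8)   # user hint: restart the search in the other basin
```

Here the hint is a manual change of the current solution, one of the hint types the abstract lists; in the actual framework such hints are given interactively through a graphical interface rather than as a hard-coded restart.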
65

Labelling study of the monoclonal antibodies IOR-CEA-1 and IOR-EGF/R3 with 99mTc

DIAS, CARLA R. de B.R. 09 October 2014 (has links)
Master's dissertation (Mestrado), Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP)
66

Geo-Semantic Labelling of Open Data. SEMANTiCS 2018-14th International Conference on Semantic Systems

Neumaier, Sebastian, Polleres, Axel January 2018 (has links) (PDF)
In the past years Open Data has become a trend among governments to increase transparency and public engagement by opening up national, regional, and local datasets. However, while many of these datasets come in semi-structured file formats, they use different schemata and lack geo-references or semantically meaningful links and descriptions of the corresponding geo-entities. We aim to address this by detecting and establishing links to geo-entities in the datasets found in Open Data catalogs and their respective metadata descriptions and link them to a knowledge graph of geo-entities. This knowledge graph does not yet readily exist, though, or at least, not a single one: so, we integrate and interlink several datasets to construct our (extensible) base geo-entities knowledge graph: (i) the openly available geospatial data repository GeoNames, (ii) the map service OpenStreetMap, (iii) country-specific sets of postal codes, and (iv) the European Union's classification system NUTS. As a second step, this base knowledge graph is used to add semantic labels to the open datasets, i.e., we heuristically disambiguate the geo-entities in CSV columns using the context of the labels and the hierarchical graph structure of our base knowledge graph. Finally, in order to interact with and retrieve the content, we index the datasets and provide a demo user interface. Currently we indexed resources from four Open Data portals, and allow search queries for geo-entities as well as full-text matches at http://data.wu.ac.at/odgraph/.
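The heuristic of disambiguating geo-entities in a CSV column by the context of the other labels in the same column can be sketched as follows; the tiny gazetteer and the majority-region scoring rule are assumptions for illustration, not the authors' implementation:

```python
from collections import Counter

# Toy gazetteer (illustrative): place name -> candidate (entity_id, parent_region)
# pairs, as a base geo-entities knowledge graph would provide.
GAZETTEER = {
    "Springfield": [("gn:1", "Illinois"), ("gn:2", "Massachusetts")],
    "Chicago": [("gn:3", "Illinois")],
}

def disambiguate(column_values):
    """Resolve ambiguous names using the parent regions of unambiguous ones."""
    # Column context: parent regions contributed by unambiguous values.
    context = Counter(
        parent
        for value in column_values
        for (_, parent) in GAZETTEER.get(value, [])
        if len(GAZETTEER.get(value, [])) == 1
    )
    resolved = {}
    for value in column_values:
        candidates = GAZETTEER.get(value, [])
        if len(candidates) == 1:
            resolved[value] = candidates[0][0]
        elif candidates:
            # Prefer the candidate whose parent region dominates the column.
            resolved[value] = max(candidates, key=lambda c: context[c[1]])[0]
    return resolved

links = disambiguate(["Chicago", "Springfield"])
```

The unambiguous "Chicago" anchors the column to Illinois, so the ambiguous "Springfield" resolves to its Illinois candidate; the paper's approach additionally exploits the hierarchical structure of the knowledge graph rather than a flat parent lookup.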
67

Enabling Spatio-Temporal Search in Open Data

Neumaier, Sebastian, Polleres, Axel 04 April 2018 (has links) (PDF)
Intuitively, most datasets found in Open Data are organised by spatio-temporal scope, that is, single datasets provide data for a certain region, valid for a certain time period. For many use cases (such as for instance data journalism and fact checking) a predominant need is to scope down the relevant datasets to a particular period or region. Therefore, we argue that spatio-temporal search is a crucial need for Open Data portals and across Open Data portals, yet - to the best of our knowledge - no working solution exists. We argue that - just like for regular Web search - knowledge graphs can be helpful to significantly improve search: in fact, the ingredients for a public knowledge graph of geographic entities as well as time periods and events exist already on the Web of Data, although they have not yet been integrated and applied - in a principled manner - to the use case of Open Data search. In the present paper we aim at doing just that: we (i) present a scalable approach to construct a spatio-temporal knowledge graph that hierarchically structures geographical, as well as temporal entities, (ii) annotate a large corpus of tabular datasets from open data portals, (iii) enable structured, spatio-temporal search over Open Data catalogs through our spatio-temporal knowledge graph, both via a search interface as well as via a SPARQL endpoint, available at data.wu.ac.at/odgraphsearch/ / Series: Working Papers on Information Systems, Information Business and Operations
68

Geo-Semantic Labelling of Open Data

Neumaier, Sebastian, Savenkov, Vadim, Polleres, Axel January 2018 (has links) (PDF)
In the past years Open Data has become a trend among governments to increase transparency and public engagement by opening up national, regional, and local datasets. However, while many of these datasets come in semi-structured file formats, they use different schemata and lack geo-references or semantically meaningful links and descriptions of the corresponding geo-entities. We aim to address this by detecting and establishing links to geo-entities in the datasets found in Open Data catalogs and their respective metadata descriptions and link them to a knowledge graph of geo-entities. This knowledge graph does not yet readily exist, though, or at least, not a single one: so, we integrate and interlink several datasets to construct our (extensible) base geo-entities knowledge graph: (i) the openly available geospatial data repository GeoNames, (ii) the map service OpenStreetMap, (iii) country-specific sets of postal codes, and (iv) the European Union's classification system NUTS. As a second step, this base knowledge graph is used to add semantic labels to the open datasets, i.e., we heuristically disambiguate the geo-entities in CSV columns using the context of the labels and the hierarchical graph structure of our base knowledge graph. Finally, in order to interact with and retrieve the content, we index the datasets and provide a demo user interface. Currently we indexed resources from four Open Data portals, and allow search queries for geo-entities as well as full-text matches at http://data.wu.ac.at/odgraph/.
69

A Rotulação no Discurso: uma Estratégia Sociocognitivo-interacional no Fazer Textual

Saib, Arlene de Araújo 22 February 2008 (has links)
This research discusses the notion of reference as an extensional representation of referents understood as categories of the world and, drawing on a socio-cognitive-interactional conception of language, defends referentiation as a discursive activity aimed at the creation of discourse objects anchored in the enunciative context and produced in text-making. The theoretical-methodological framework proposed focuses on strategies of labelling (the creation of referential nominal forms), which create a conceptual domain for interpreting the supporting information present in a source text, usually a proposition or a sequence of propositions with enunciative independence. The analysis rests on a certain degree of indeterminacy in language and on the dynamics of (re)categorization as the index of a discursive strategy in which labels play a relevant role both in the discursive chaining of the informative units of texts and in the overall semantic-argumentative organization of discourse. From this perspective, labels constitute summarizing paraphrases with a well-defined cohesive role on the textual surface. However, the choice of the nominal construction (of both the nucleus and the determiners) depends far more on the interaction between the subjects involved in the interactive process than on a relation of co-reference sought in the semantics of the objects or facts enunciated. Examination of the corpus, composed of opinion texts from the Brazilian print media collected between December 2005 and December 2007, pointed to the need to go beyond the plane of anaphoric relations and to integrate the functioning of labels into a deictic-enunciative view of language.
70

Enabling Spatio-Temporal Search in Open Data

Neumaier, Sebastian, Polleres, Axel 04 April 2018 (has links) (PDF)
Intuitively, most datasets found on governmental Open Data portals are organized by spatio-temporal criteria, that is, single datasets provide data for a certain region, valid for a certain time period. Likewise, for many use cases (such as, for instance, data journalism and fact checking) a predominant need is to scope down the relevant datasets to a particular period or region. Rich spatio-temporal annotations are therefore a crucial need to enable semantic search for (and across) Open Data portals along those dimensions, yet -- to the best of our knowledge -- no working solution exists. To this end, in the present paper we (i) present a scalable approach to construct a spatio-temporal knowledge graph that hierarchically structures geographical as well as temporal entities, (ii) annotate a large corpus of tabular datasets from open data portals with entities from this knowledge graph, and (iii) enable structured, spatio-temporal search and querying over Open Data catalogs, both via a search interface as well as via a SPARQL endpoint, available at http://data.wu.ac.at/odgraphsearch/ / Series: Working Papers on Information Systems, Information Business and Operations
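A structured query against a SPARQL endpoint of this kind could look like the following sketch; the vocabulary (Dublin Core `dct:spatial`/`dct:temporal` properties, a GeoNames region IRI, and a year filter on the temporal value) is an assumption for illustration, since the abstract does not spell out the graph's schema:

```python
def build_query(region_iri, year):
    """Compose a SPARQL query for datasets annotated with a region and a year."""
    return f"""
PREFIX dct: <http://purl.org/dc/terms/>
SELECT ?dataset WHERE {{
  ?dataset dct:spatial <{region_iri}> .
  ?dataset dct:temporal ?t .
  FILTER(YEAR(?t) = {year})
}}
LIMIT 20
""".strip()

# Hypothetical usage: datasets about Vienna (GeoNames IRI assumed) valid in 2017.
q = build_query("http://sws.geonames.org/2761369/", 2017)
```

In practice the query string would be sent to the endpoint via an HTTP POST with `Accept: application/sparql-results+json`; whether the deployed graph models time as a literal that `YEAR()` accepts is an assumption here.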
