  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Answer Set Programming and Other Computing Paradigms

January 2013 (has links)
abstract: Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods, which originate from building propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions to its modeling language to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, to overcome the grounding bottleneck of computation in ASP, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial, and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties is rooted in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view of these extensions by treating them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints, and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance the action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to studying and extending non-monotonic languages. / Dissertation/Thesis / Ph.D. Computer Science 2013
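The stable model (answer set) semantics the abstract builds on can be illustrated with a tiny brute-force checker. The sketch below is not from the dissertation; it is a minimal illustration that tests every candidate atom set against the Gelfond-Lifschitz reduct of a two-rule program:

```python
from itertools import combinations

def least_model(definite_rules):
    """Least fixpoint of a definite (negation-free) program."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if all(a in model for a in pos) and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(rules):
    """Brute-force the stable-model semantics over all atom subsets.
    Each rule is (head, positive_body, negative_body)."""
    atoms = {r[0] for r in rules} | {a for r in rules for a in r[1] + r[2]}
    result = []
    for k in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), k):
            x = set(cand)
            # Gelfond-Lifschitz reduct: drop rules whose negative body
            # intersects X, strip negative literals from the rest.
            reduct = [(h, p, []) for h, p, n in rules if not set(n) & x]
            if least_model(reduct) == x:
                result.append(x)
    return result

# The classic program:  p :- not q.   q :- not p.
rules = [("p", [], ["q"]), ("q", [], ["p"])]
print(stable_models(rules))  # [{'p'}, {'q'}]
```

The two stable models {p} and {q} of this program illustrate the non-monotonic character of the semantics: neither atom is derivable monotonically, yet each is stable under its own assumption.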
72

SlimRank: an answer selection model for consumer questions

Marcelo Criscuolo 16 November 2017 (has links)
The increasing availability of user-generated content in community Q&A sites has driven the advancement of Question Answering (QA) models based on reuse. This approach can be implemented through the task of Answer Selection (AS), which consists in finding the best answer for a given question in a pre-selected pool of candidate answers. Recently, good results have been achieved by AS models based on distributed word vectors and deep neural networks used to rank answers for a given question; Convolutional Neural Networks (CNNs) are particularly successful in this task. However, most AS models are evaluated on datasets that contain only short, objective, well-formed questions of few words; complex text structures are rarely considered. Consumer questions, common in community Q&A sites, forums, and customer services, can be quite complex: they are generally composed of multiple interrelated sentences that are subjective, use layman's terms, and often contain an excess of details that may not be particularly relevant. These characteristics increase the difficulty of the answer selection task. In this work, we propose an answer selection model for consumer questions. The contributions of this work are: (i) a definition of the research object "consumer questions"; (ii) a new dataset of this kind of question, called MilkQA; and (iii) an answer selection model, named SlimRank. MilkQA was created from an archive of questions and answers collected by the customer service of a well-known public agricultural research institution (Embrapa). It contains 2.6 thousand question-answer pairs selected and anonymized by human annotators guided by the definition proposed in this work. The analysis of these questions led to the development of SlimRank, which combines the representation of texts as semantic graphs with CNN architectures. SlimRank was evaluated on MilkQA and compared to baselines and two state-of-the-art answer selection models. The results achieved by our model were much better than the baselines and comparable to the state of the art, but with a significant reduction in computational time. Our results suggest that combining semantic text graphs with convolutional neural networks is a promising approach for dealing with the challenges imposed by the unique characteristics of consumer questions.
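As a point of reference for the answer selection task described above, a minimal bag-of-words baseline can rank candidate answers by cosine similarity to the question. This is not SlimRank itself, and the question and candidates below are invented examples:

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term frequencies."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_answer(question, candidates):
    """Answer Selection: return the candidate most similar to the question."""
    q = bow(question)
    return max(candidates, key=lambda c: cosine(q, bow(c)))

question = "how do I treat mastitis in dairy cows"
candidates = [
    "rotate crops to keep the soil healthy",
    "mastitis in dairy cows is usually treated with intramammary antibiotics",
    "milk production peaks a few weeks after calving",
]
print(select_answer(question, candidates))
```

A lexical baseline like this is exactly what multi-sentence, subjective consumer questions defeat, which motivates the richer semantic-graph representation the abstract describes.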
73

Why-Query Support in Graph Databases

Vasilyeva, Elena 08 November 2016 (has links)
In the last few decades, database management systems have become powerful tools for storing large amounts of data and executing complex queries over them. In addition to extended functionality, novel types of databases have appeared, such as triple stores and distributed databases. Graph databases implementing the property-graph model belong to this development branch and provide a new way of storing and processing data in the form of a graph, with nodes representing entities and edges describing the connections between them. This makes them suitable for keeping data without a rigid schema in use cases like social-network processing or data integration. In addition to flexible storage, graph databases provide new querying possibilities in the form of path queries, detection of connected components, pattern matching, etc. However, the schema flexibility and graph queries come with additional costs. With limited knowledge about the data and little experience in constructing complex queries, users can create queries that deliver unexpected results. Forced to debug queries manually and overwhelmed by the number of query constraints, users can become frustrated with graph databases. What is really needed is to improve the usability of graph databases by providing debugging and explanation functionality for such situations. We have to assist users in discovering the reasons for unexpected results and what can be done to fix them. The unexpectedness of result sets can be expressed in terms of their size or content. In the first case, users have to solve the empty-answer, too-many-answers, or too-few-answers problem. In the second case, users care about the result content and miss some expected answers or wonder about the presence of unexpected ones.
Considering the typical problems of receiving no results or too many results when querying graph databases, in this thesis we focus on the problems of the first group, whose solutions are usually represented by why-empty, why-so-few, and why-so-many queries. Our objective is to extend graph databases with debugging functionality in the form of why-queries for unexpected query results, using the example of pattern matching queries, which are one of the general graph-query types. We present a comprehensive analysis of existing debugging tools in state-of-the-art research and identify their common properties. From these, we formulate the following features of why-queries, which we discuss in this thesis: holistic support of different cardinality-based problems, explanation of unexpected results and query reformulation, comprehensive analysis of explanations, and non-intrusive user integration. To support different cardinality-based problems, we develop methods for explaining no, too few, and too many results. To cover different kinds of explanations, we present two types: subgraph-based and modification-based explanations. The first type identifies the reasons for unexpectedness in terms of query subgraphs and delivers differential graphs as answers. The second reformulates queries in such a way that they produce better results. Considering graph queries to be complex structures with multiple constraints, we investigate different ways of generating explanations, starting from the most general one that considers only the query topology, through coarse-grained rewriting, up to fine-grained modification that allows fine changes to predicates and topology. To provide a comprehensive analysis of explanations, we propose comparing them on three levels: syntactic description, content, and result-set size. To deliver user-aware explanations, we discuss two models for non-intrusive user integration into the generation process.
With the techniques proposed in this thesis, we provide the fundamentals for debugging pattern-matching queries that deliver no, too few, or too many results in graph databases implementing the property-graph model.
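The why-empty idea can be sketched with a toy relaxation procedure. This is an illustration only, not the thesis's algorithm: given a conjunctive query over node predicates that returns nothing, find the largest predicate subsets that still match, so the predicates left out point to the culprit constraints:

```python
from itertools import combinations

# Toy "graph query": a conjunction of node predicates over node rows.
nodes = [
    {"label": "Person", "age": 34, "city": "Dresden"},
    {"label": "Person", "age": 19, "city": "Leipzig"},
    {"label": "Company", "age": None, "city": "Dresden"},
]

predicates = {
    "is_person": lambda n: n["label"] == "Person",
    "over_30": lambda n: n["age"] is not None and n["age"] > 30,
    "in_leipzig": lambda n: n["city"] == "Leipzig",
}

def matches(preds):
    return [n for n in nodes if all(predicates[p](n) for p in preds)]

def why_empty(preds):
    """Largest predicate subsets that still return results: the predicates
    left out of each subset are candidate culprits for the empty answer."""
    for k in range(len(preds), 0, -1):
        hits = [set(s) for s in combinations(preds, k) if matches(s)]
        if hits:
            return hits
    return [set()]

full = ["is_person", "over_30", "in_leipzig"]
assert matches(full) == []  # the full query is empty
print(why_empty(full))
```

Here both maximal satisfiable subsets leave out one of the age/city constraints, which mirrors the subgraph-based explanations described above: the removed constraints are the differential part of the query.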
74

Strategic energy systems analysis: Possible pathways for the transition of the electricity sector in Tanzania

Avgerinopoulos, Georgios January 2013 (has links)
This study examines the evolution of the electricity sector in Tanzania. The electrification of Africa has generated considerable discussion, and thus nine scenarios based on different production pathways and demand projections are formulated. The study considers both grid-based centralized electricity and decentralized power production that is closer to demand. A model is created using three modeling tools (Answer-OSeMOSYS, LEAP, and MESSAGE), and the results are presented and compared. Finally, different funding options for electricity expansion projects in Tanzania are explored in order to investigate the feasibility of the scenarios, and a geopolitical analysis is carried out.
75

Comments as reviews: Predicting answer acceptance by measuring sentiment on Stack Exchange

William Chase Ledbetter IV (12261440) 16 June 2023 (has links)
Online communication has increased the need to rapidly interpret complex emotions due to the volatility of the data involved; machine learning tasks that process text, such as sentiment analysis, can help address this challenge by automatically classifying text as positive, negative, or neutral. However, while much research has focused on detecting offensive or toxic language online, there is also a need to explore and understand the ways in which people express positive emotions and support for one another in online communities. This is where sentiment dictionaries and other computational methods can be useful, by analyzing the language used to express support and identifying common patterns or themes.

This research was conducted by compiling data from social question-and-answer activity around machine learning on the Stack Exchange site. A classification model was then constructed using binary logistic regression. The objective was to discover whether marked solutions can be accurately predicted by treating the comments as reviews. Measuring collaboration signals may help capture the nuances of language around support and assistance, which could have implications for how people understand and respond to expressions of help online. By exploring this topic further, researchers can gain a more complete understanding of the ways in which people communicate and connect online.
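The classification step described above can be sketched with a minimal logistic regression trained on a single assumed feature, the mean sentiment score of an answer's comments. The data points below are invented, and this hand-rolled gradient-descent trainer stands in for whatever statistical package the study actually used:

```python
import math

# Toy data: (mean comment sentiment score, answer was accepted as solution).
data = [(-0.9, 0), (-0.5, 0), (-0.2, 0), (0.1, 1), (0.6, 1), (0.9, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    """Fit a one-feature logistic regression by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

w, b = train(data)
predict = lambda x: sigmoid(w * x + b) >= 0.5
print([predict(x) for x, _ in data])
```

On this separable toy set the learned decision boundary falls between the negative-sentiment rejected answers and the positive-sentiment accepted ones, which is the "comments as reviews" intuition in miniature.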
76

Cluster-assisted Grading: Comparison of different methods for pre-processing, text representation and cluster analysis in cluster-assisted short-text grading

Båth, Jacob January 2022 (has links)
School teachers spend approximately 30 percent of their time grading exams and other assessments. With increasingly digitized education, a research field has emerged that aims to reduce the time spent on grading by automating it. This is an easy task for multiple-choice questions but much harder for open-ended questions requiring free-text answers, even though the latter have been shown to be superior for knowledge assessment and learning consolidation. While previous work has reported grading accuracies of up to 90 percent, it is still problematic to use a system that gives the wrong grade in 10 percent of cases. This has given rise to a research field focusing on assisting teachers in the grading process instead of fully replacing them. Cluster analysis has been the most popular tool for this, grouping similar answers together and letting teachers process groups of answers at once instead of evaluating each answer one at a time. This approach has been shown to decrease the time spent on grading substantially; however, the clustering methods vary widely between studies, leaving no apparent methodology choice for real-use implementation. Using several techniques for pre-processing, text representation, and choice of clustering algorithm, this work compared various methods for clustering free-text answers by evaluating them on a dataset containing almost 400,000 student answers. The results showed that using all of the tested pre-processing techniques led to the best performance, although the difference from using minimal pre-processing was small. Sentence embeddings were the best-performing text representation approach; however, it remains an open question how they should be used when spelling and grammar are part of the assessment, as they lack the ability to identify such errors.
A suitable choice of clustering algorithm is one where the number of clusters can be specified, as determining this automatically proved to be difficult. Teachers can then easily adjust the number of clusters based on their judgment.
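A minimal version of the cluster-assisted idea can be sketched as follows. The toy answers, the bag-of-words representation, and the fixed-seed k-means below are illustrative placeholders, not the thesis's actual pipeline; the point is only that similar free-text answers land in the same group for a grader to process at once:

```python
import math
from collections import Counter

answers = [
    "photosynthesis converts light energy into chemical energy",
    "plants use light to make chemical energy by photosynthesis",
    "the mitochondria is the powerhouse of the cell",
    "cells get energy from mitochondria the cell powerhouse",
]

vocab = sorted({w for a in answers for w in a.split()})

def vec(text):
    """Bag-of-words vector over the shared vocabulary."""
    c = Counter(text.split())
    return [c[w] for w in vocab]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def kmeans(points, centroids, iters=10):
    """Lloyd's algorithm with caller-supplied (deterministic) initial centroids."""
    for _ in range(iters):
        groups = [min(range(len(centroids)), key=lambda j: dist(p, centroids[j]))
                  for p in points]
        for j in range(len(centroids)):
            members = [p for p, g in zip(points, groups) if g == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return groups

points = [vec(a) for a in answers]
# Deterministic seeding: start from the first and third answer.
print(kmeans(points, [points[0][:], points[2][:]]))  # [0, 0, 1, 1]
```

The two photosynthesis answers and the two mitochondria answers form separate clusters; a grader could then assign one grade per cluster and only inspect outliers, which is the time saving the abstract describes.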
77

A Novel Method for Thematically Analyzing Student Responses to Open-ended Case Scenarios

Shakir, Umair 06 December 2023 (has links)
My dissertation is about how engineering educators can use natural language processing (NLP) in implementing open-ended assessments in undergraduate engineering degree programs. Engineering students need to develop the ability to exercise judgment about better and worse outcomes of their decisions. One important consideration for improving engineering students' judgment involves creating sound educational assessments. Currently, engineering educators face a trade-off in selecting between open- and closed-ended assessments. Closed-ended assessments are easy to administer and score but are limited in what they measure, given that students are required, in many instances, to choose from an a priori list. Conversely, open-ended assessments allow students to write their answers in any way they choose, in their own words. However, open-ended assessments are likely to take more person-hours and lack consistency in both inter-grader and intra-grader grading. One solution to this challenge is the use of NLP. The working principles of existing NLP models are the tallying of words, keyword matching, or syntactic similarity of words, which have often proved too brittle to capture the diversity of language that students could write. Therefore, the problem that motivated the present study is how to assess student responses based on underlying concepts and meanings instead of morphological characteristics or grammatical structure in sentences. Some of this problem can be addressed by developing NLP-assisted grading tools based on transformer-based large language models (TLLMs) such as BERT, MPNet, and GPT-4. This is because TLLMs are trained on billions of words and have billions of parameters, thereby providing the capacity to capture richer semantic representations of input text. Given the availability of TLLMs in the last five years, there is a significant lack of research on integrating TLLMs into the assessment of open-ended engineering case studies.
My dissertation study aims to fill this research gap. I developed and evaluated four NLP approaches based on TLLMs for the thematic analysis of student responses to eight question prompts from engineering ethics and systems thinking case scenarios. The study's research design comprised the following steps. First, I developed an example bank for each question prompt using two procedures: (a) human-in-the-loop natural language processing (HILNLP) and (b) traditional qualitative coding. Second, I assigned labels from the example banks to unlabeled student responses with two NLP techniques: (i) k-Nearest Neighbors (kNN) and (ii) Zero-Shot Classification (ZSC). Further, I utilized the following configurations of these techniques: (i) kNN with k=1, (ii) kNN with k=3, (iii) ZSC with multi-labels=false, and (iv) ZSC with multi-labels=true. The kNN approach took as input both the sentences and their labels from the example banks, whereas the ZSC approach took only the labels. Third, I read each sentence or phrase along with the model's suggested label(s) to evaluate whether the assigned label represented the idea described in the sentence, and assigned the following numerical ratings: accurate (1), neutral (0), and inaccurate (-1). Lastly, I used those numerical ratings to calculate the accuracy of the NLP approaches. The results of my study showed moderate accuracy in thematically analyzing students' open-ended responses to two different engineering case scenarios: no single method among the four performed consistently better than the others across all question prompts, and the highest accuracy rate varied between 53% and 92%, depending on the question prompt and NLP method. Despite these mixed results, this study accomplishes multiple goals. My dissertation demonstrates to community members that TLLMs have the potential for positive impacts on improving classroom practices in engineering education.
In doing so, my dissertation study takes up one aspect of instructional design: the assessment of students' learning outcomes in engineering ethics and systems thinking skills. Further, my study derives important implications for practice in engineering education. First, I offer lessons and guidelines for educators interested in incorporating NLP into their educational assessment. Second, the open-source code is uploaded to a GitHub repository, making it accessible to a larger group of users. Third, I give suggestions for qualitative researchers on conducting NLP-assisted qualitative analysis of textual data. Overall, my study introduces state-of-the-art TLLM-based NLP approaches to a research field where they hold potential yet remain underutilized. This study can encourage engineering education researchers to utilize these NLP methods, which may be helpful in analyzing the vast textual data generated in engineering education, thereby reducing the number of missed opportunities to glean information for actors and agents in engineering education. / Doctor of Philosophy / My dissertation is about how engineering educators can use natural language processing (NLP) in implementing open-ended assessments in undergraduate engineering degree programs. Engineering students need to develop the ability to exercise judgment about better and worse outcomes of their decisions. One important consideration for improving engineering students' judgment involves creating sound educational assessments. Currently, engineering educators face a trade-off in selecting between open- and closed-ended assessments. Closed-ended assessments are easy to administer and score but are limited in what they measure, given that students are required, in many instances, to choose from an a priori list. Conversely, open-ended assessments allow students to write their answers in any way they choose, in their own words. However, open-ended assessments are likely to take more person-hours and lack consistency in both inter-grader and intra-grader grading. One solution to this challenge is the use of NLP. The working principles of existing NLP models are the tallying of words, keyword matching, or syntactic similarity of words, which have often proved too brittle to capture the diversity of language that students could write. Therefore, the problem that motivated the present study is how to assess student responses based on underlying concepts and meanings instead of morphological characteristics or grammatical structure in sentences. Some of this problem can be addressed by developing NLP-assisted grading tools based on transformer-based large language models (TLLMs). This is because TLLMs are trained on billions of words and have billions of parameters, thereby providing the capacity to capture richer semantic representations of input text. Given the availability of TLLMs in the last five years, there is a significant lack of research on integrating TLLMs into the assessment of open-ended engineering case studies. My dissertation study aims to fill this research gap. The results of my study showed moderate accuracy in thematically analyzing students' open-ended responses to two different engineering case scenarios. My dissertation demonstrates to community members that TLLMs have the potential for positive impacts on improving classroom practices in engineering education. This study can encourage engineering education researchers to utilize these NLP methods, which may be helpful in analyzing the vast textual data generated in engineering education, thereby reducing the number of missed opportunities to glean information for actors and agents in engineering education.
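The kNN labeling step described in the research design can be sketched as follows. The example-bank snippets and label names here are hypothetical placeholders, not taken from the study's actual codebook, and cosine similarity over bag-of-words vectors stands in for the TLLM embeddings the study used:

```python
import math
from collections import Counter

# Hypothetical example bank: (response snippet, qualitative label) pairs.
bank = [
    ("the design ignores downstream flooding risk", "public_safety"),
    ("residents were not consulted about the dam", "stakeholder_input"),
    ("the levee could fail and endanger the town", "public_safety"),
    ("community members had no voice in the decision", "stakeholder_input"),
]

def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_label(response, bank, k=3):
    """Label an unlabeled response by majority vote of its k nearest
    example-bank sentences."""
    scored = sorted(bank, key=lambda ex: cosine(vec(response), vec(ex[0])),
                    reverse=True)
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]

print(knn_label("the town could be endangered if the levee fails", bank, k=1))
```

Switching k between 1 and 3, as in the study's configurations, trades sensitivity to a single close match against the smoothing effect of a majority vote.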
78

Structural Self-Supervised Objectives for Transformers

Di Liello, Luca 21 September 2023 (has links)
In this Thesis, we leverage unsupervised raw data to develop more efficient pre-training objectives and self-supervised tasks that align well with downstream applications. In the first part, we present three alternatives to BERT's Masked Language Modeling (MLM) objective: Random Token Substitution (RTS), Cluster-based Random Token Substitution (C-RTS), and Swapped Language Modeling (SLM). Unlike MLM, all of these proposals involve token swapping rather than replacing tokens with BERT's [MASK]. RTS and C-RTS involve predicting the originality of tokens, while SLM tasks the model with predicting the original token values. Each objective is applied to several models, which are trained using the same computational budget and corpora. Evaluation results reveal that RTS and C-RTS require up to 45% less pre-training time while achieving performance on par with MLM. Notably, SLM outperforms MLM on several Answer Sentence Selection and GLUE tasks, despite utilizing the same computational budget for pre-training. In the second part of the Thesis, we propose self-supervised pre-training tasks that exhibit structural alignment with downstream applications, leading to improved performance and reduced reliance on labeled data to achieve comparable results. We exploit the weak supervision provided by large corpora like Wikipedia and CC-News, challenging the model to recognize whether spans of text originate from the same paragraph or document. To this end, we design (i) a pre-training objective that targets multi-sentence inference models by performing predictions over multiple spans of text simultaneously, (ii) self-supervised objectives tailored to enhance performance in Answer Sentence Selection and its contextual version, and (iii) a pre-training objective aimed at performance improvements in Summarization.
Through continuous pre-training, starting from renowned checkpoints such as RoBERTa, ELECTRA, DeBERTa, BART, and T5, we demonstrate that our models achieve higher performance on Fact Verification, Answer Sentence Selection, and Summarization. We extensively evaluate our proposals on different benchmarks, revealing significant accuracy gains, particularly when annotation in the target dataset is limited. Notably, we achieve state-of-the-art results on the development set of the FEVER dataset, and results close to those of state-of-the-art models that use many more parameters on the test set. Furthermore, our objectives enable us to attain state-of-the-art results on the ASNQ, WikiQA, and TREC-QA test sets across all evaluation metrics (MAP, MRR, and P@1). For Summarization, our objective enhances summary quality, as measured by various metrics such as ROUGE and BLEURT. We maintain that our proposals can be seamlessly combined with other techniques from recently proposed works, as they do not require alterations to the internal structure of Transformer models but only involve modifications to the training tasks.
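The data preparation behind an RTS-style objective can be sketched as follows. This is a simplified illustration, not the thesis's implementation: real pre-training operates on subword vocabularies at scale, whereas here whole words are swapped and each position gets an originality label for the model to predict:

```python
import random

def rts_corrupt(tokens, vocab, rate=0.15, seed=0):
    """Random Token Substitution: swap in random vocabulary tokens and
    label each position 1 if original, 0 if substituted."""
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < rate:
            corrupted.append(rng.choice(vocab))
            labels.append(0)
        else:
            corrupted.append(tok)
            labels.append(1)
    return corrupted, labels

tokens = "the model predicts which tokens are original".split()
vocab = ["cat", "graph", "blue", "run"]  # stand-in replacement vocabulary
corrupted, labels = rts_corrupt(tokens, vocab)
print(corrupted)
print(labels)
```

Because the target is a binary originality decision per token rather than a prediction over the whole vocabulary, the output layer is far cheaper than MLM's, which is one intuition for the reported pre-training speedup.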
79

A Study on the Application of Integer Programming to Cluster Analysis

張志強, ZHANG, ZHI-GIANG Unknown Date (has links)
This thesis examines commonly used clustering methods and explores the problems of cluster analysis through integer programming. The distinguishing property of integer programming is that the grouping it produces is a true global optimum, whereas common clustering methods (such as linkage methods and the k-means method) yield only locally optimal results. The thesis consists of five chapters. Chapter 1 is the introduction; Chapter 2 briefly reviews common clustering methods; Chapter 3 formulates four integer programming models for solving cluster analysis problems with different requirements; Chapter 4 is a case study that takes the subject grades of students at a junior high school as the clustering variables, groups the students according to their grades, analyzes the data with both common clustering methods and the integer programming approach, and compares the results; Chapter 5 concludes. The full text comprises one volume of about 15,000 characters.
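The contrast the thesis draws between global and local optima can be illustrated by exhaustive search over partitions, which is what an integer programming formulation guarantees implicitly. The scores below are invented, and a real formulation would use a solver rather than enumeration; the sketch minimizes the within-cluster sum of squared deviations over every split into two groups:

```python
from itertools import combinations

scores = [48, 52, 55, 90, 93, 97]  # hypothetical exam scores

def wss(cluster):
    """Within-cluster sum of squared deviations from the mean."""
    if not cluster:
        return 0.0
    m = sum(cluster) / len(cluster)
    return sum((x - m) ** 2 for x in cluster)

def best_two_groups(xs):
    """Exhaustive search for the globally optimal partition into two groups,
    i.e. the answer an exact integer-programming model would certify."""
    best, best_cost = None, float("inf")
    for r in range(1, len(xs)):
        for group in combinations(xs, r):
            rest = [x for x in xs if x not in group]
            cost = wss(list(group)) + wss(rest)
            if cost < best_cost:
                best, best_cost = (sorted(group), sorted(rest)), cost
    return best

print(best_two_groups(scores))  # ([48, 52, 55], [90, 93, 97])
```

Enumeration is exponential in the number of items, which is exactly why the thesis formulates the problem as integer programs for a solver instead; heuristics like k-means scale but may stop at a local optimum.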
80

Probabilistic and constraint based modelling to determine regulation events from heterogeneous biological data

Aravena, Andrés 13 December 2013 (has links) (PDF)
This thesis proposes a method for constructing realistic causal regulatory networks with a lower false-positive rate than traditional methods. The approach consists in integrating heterogeneous information from two types of network predictions to determine a causal explanation for observed gene co-expression. This integration process is modeled as a combinatorial optimization problem of NP-hard complexity. We introduce a heuristic approach to determine an approximate solution within practical running time. Our evaluation shows that, for the model species E. coli, the regulatory network resulting from the application of this method has higher precision than one built with traditional tools. The bacterium Acidithiobacillus ferrooxidans presents particular challenges for the experimental determination of its regulatory network. Using the tools we developed, we propose a putative regulatory network and analyze the relevance of its central regulators; this constitutes the fourth contribution of this thesis. In the second part of the thesis, we explore how these regulatory relationships manifest themselves by developing a method to complete a signaling network related to Alzheimer's disease. Finally, we address the mathematical problem of microarray probe design. We conclude that, to fully predict hybridization dynamics, a modified energy function is needed for the secondary structures of surface-attached DNA molecules, and we propose a scheme for determining this function.
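The combinatorial flavor of the integration problem can be sketched with a greedy set-cover heuristic. This is an invented miniature, not the thesis's actual heuristic: the gene pairs and regulators are placeholders, and the rule is simply to pick, at each step, the regulator that explains the most still-unexplained co-expression pairs:

```python
# Hypothetical inputs: observed co-expressed gene pairs and candidate
# regulators, each covering (explaining) some of those pairs.
coexpressed = {("gA", "gB"), ("gA", "gC"), ("gB", "gC"), ("gD", "gE")}
candidates = {
    "regX": {("gA", "gB"), ("gA", "gC"), ("gB", "gC")},
    "regY": {("gD", "gE")},
    "regZ": {("gA", "gB")},
}

def greedy_explanation(pairs, candidates):
    """Greedy set cover: repeatedly pick the regulator explaining the most
    still-unexplained co-expression pairs."""
    uncovered, chosen = set(pairs), []
    while uncovered:
        reg = max(candidates, key=lambda r: len(candidates[r] & uncovered))
        if not candidates[reg] & uncovered:
            break  # some pairs cannot be explained by any candidate
        chosen.append(reg)
        uncovered -= candidates[reg]
    return chosen

print(greedy_explanation(coexpressed, candidates))  # ['regX', 'regY']
```

Set cover is a classic NP-hard problem with a well-known greedy approximation, which conveys why an exact solution is impractical and a heuristic with practical running time is needed.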
