About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Corpus luteum and fetoplacental function during pregnancy in the rhesus monkey (Macaca mulatta)

Walsh, Scott Wesley. January 1975
Thesis (Ph.D.)--University of Wisconsin--Madison, 1975. / Typescript. Vita. Description based on print version record. Includes bibliographical references.
32

Prostaglandin F₂α (PGF₂α)-independent and -dependent regulation of the bovine luteal endothelin system

Choudhary, Ekta. January 2005
Thesis (M.S.)--West Virginia University, 2005. / Title from document title page. Document formatted into pages; contains viii, 40 p. : ill. Includes abstract. Includes bibliographical references (p. 36-40).
33

Effects of unilateral and bilateral pregnancy on maintenance of the corpus luteum and development of the conceptus in cattle

Del Campo, Marcelo Rafael. January 1980
Thesis (Ph.D.)--University of Wisconsin--Madison, 1980. / Typescript. Vita. Description based on print version record. Includes bibliographical references.
34

Habeas corpus legislation and the passing of the 1679 Habeas Corpus Act

Sleeper, Susan Patricia. January 1968
Thesis (M.A.)--University of Wisconsin--Madison, 1968. / Description based on print version record. Includes bibliographical references.
35

A threshold model for development of the corpus callosum in normal and acallosal mice

Bishop, Katherine Mary. January 1997
Thesis (Ph.D.)--University of Alberta, 1997. / Submitted to the Faculty of Graduate Studies and Research in partial fulfilment of the requirements for the degree of Doctor of Philosophy, Department of Psychology. Also available online.
36

The role of the endothelin system in prostaglandin F₂α-induced luteolysis

Doerr, Matthew D. January 2007
Thesis (M.S.)--West Virginia University, 2007. / Title from document title page. Document formatted into pages; contains vi, 64 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 51-64).
37

Construction de corpus généraux et spécialisés à partir du Web (Ad hoc and general-purpose corpus construction from web sources)

Barbaresi, Adrien. 19 June 2015
At the beginning of the first chapter, the interdisciplinary setting between linguistics, corpus linguistics, and computational linguistics is introduced. Then, the notion of corpus is put into focus: existing corpus and text definitions are discussed, and the need for linguistic evidence that spans several disciplines is illustrated by a series of research scenarios. Several milestones of corpus design are presented, from pre-digital corpora at the end of the 1950s to web corpora in the 2000s and 2010s. The continuities and changes between the linguistic tradition and web-native corpora are exposed.

In the second chapter, methodological insights on automated text scrutiny in computer science, computational linguistics, and natural language processing are presented. The state of the art on text quality assessment and web text filtering exemplifies current interdisciplinary research trends on web texts. Readability studies and automated text classification are used as a paragon of methods for finding salient features in order to grasp text characteristics, and common denominators between them are isolated. Text visualization exemplifies corpus processing in the digital humanities framework. As a conclusion, guiding principles for research practice are listed, and reasons are given to find a balance between quantitative analysis and corpus linguistics, in an environment shaped by technological innovation and artificial intelligence techniques.

Third, current research on web corpora is summarized. I distinguish two main approaches to web document retrieval: restricted retrieval and web crawling. The notion of web corpus preprocessing is introduced and its salient steps are discussed. The impact of the preprocessing phase on research results is assessed; I explain why its importance should not be underestimated and why it is worthwhile for linguists to learn new skills in order to confront the whole data-gathering and preprocessing phase. The simplicity and reproducibility of corpus construction are put forward.

I present my work on web corpus construction in the fourth chapter. My analyses concern two main aspects: first, the question of corpus sources (or prequalification), with particular attention to source URLs, for which an approach using a lightweight scout crawler to prepare the web crawl is presented; and second, the problem of including valid, desirable documents in a corpus (or document qualification), where findings from readability studies and machine-learning techniques can be applied during corpus construction, with a set of textual features tested on annotated samples to assess the efficiency of the process. Last, I present work on corpus visualization, consisting of extracting certain corpus characteristics in order to give indications on corpus contents and quality.
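To make the document-qualification step concrete, the following is a minimal sketch in Python of filtering web documents with simple readability-style surface features before corpus inclusion. It is not Barbaresi's actual pipeline: the feature set and thresholds are illustrative assumptions, and in practice they would be tuned against annotated samples as the abstract describes.

    import re

    def text_features(text: str) -> dict:
        """Compute a few surface features commonly used to qualify web texts."""
        words = re.findall(r"\w+", text)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        n_chars = len(text)
        return {
            "n_words": len(words),
            "alpha_ratio": sum(c.isalpha() for c in text) / n_chars if n_chars else 0.0,
            "avg_sentence_len": len(words) / len(sentences) if sentences else 0.0,
            "type_token_ratio": len({w.lower() for w in words}) / len(words) if words else 0.0,
        }

    def qualifies(text: str) -> bool:
        """Decide whether a document is a plausible running-text candidate.

        The thresholds below are illustrative guesses, not values from the
        thesis; they would normally be tuned on annotated samples.
        """
        f = text_features(text)
        return (
            f["n_words"] >= 100                   # discard very short pages
            and f["alpha_ratio"] > 0.6            # discard markup/number-heavy boilerplate
            and 5 <= f["avg_sentence_len"] <= 60  # discard link lists and word walls
            and f["type_token_ratio"] > 0.2       # discard highly repetitive spam
        )

    if __name__ == "__main__":
        sample = "This is a short example document. It has two sentences."
        print(text_features(sample))
        print("qualifies:", qualifies(sample))  # False: too short for a corpus text

The design point is that cheap, language-light features of this kind can be computed at crawl time, so document qualification scales to the whole preprocessing phase rather than requiring full linguistic analysis of every candidate page.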
38

A study of the corpus striatum

Kemp, Janet M. January 1970
No description available.
39

The interocular transfer of brightness and pattern discriminations in the normal and corpus callosum-sectioned rat /

Sheridan, Charles L. January 1963
No description available.
40

eDictor: da plataforma para a nuvem / eDictor: from platform to the cloud

Veronesi, Luiz Henrique Lima. 04 February 2015
In this work, we present a new proposal for editing texts organized in an electronic corpus. Starting from the development history of the Tycho Brahe corpus and the eDictor tool, the whole corpus-creation workflow is analyzed in order to arrive at a more concise and less redundant way of organizing information, using a single repository for both the textual and the morphosyntactic data of a text. This single-source repository is achieved through a data structure based on minimal significant units called tokens and grouping units called chunks. The relationship between tokens and chunks, as considered in this work, stores both how the text is organized visually (pages, paragraphs, sentences) and how it is organized syntactically, as represented by syntactic trees. All files in the Tycho Brahe corpus catalog served as the basis for the analysis. It was thus possible to arrive at generic, interrelated elements that deconstruct the text, addressing each token through relative start and end pointers rather than following the text's linear form. Introducing object-oriented concepts made it possible to relate units even smaller than the token: split tokens, which are themselves tokens, since they inherit the characteristics of the most significant element, the token. The aim was to keep the number of attributes as small as possible, avoiding attributes that are either too specific or too generic. In seeking this balance, it proved necessary to create one specific attribute for syntactic chunks: a level attribute indicating the distance of a tree node from the root node. Once the information is organized, access to it becomes simpler, and attention turns to the definition of the user interface. Available web technology allows elements to be positioned on the screen in a way that reproduces how the text appears in a book, while keeping the elements independent of one another. This independence is what allows information to travel between the user's computer and the processing back end in the cloud without the user noticing; processing happens in the background, using asynchronous technologies. The similarity between HTML and XML made it necessary to adapt the information for presentation to the user. The solution adopted in this work gives each token the information about the chunks to which it belongs: instead of words belonging to a sentence, each word carries a piece of information that makes it part of the sentence. This subtle change of perspective changes the way the information is displayed.
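As an illustration of the token/chunk organization the abstract describes, here is a minimal Python sketch. The class and attribute names are invented for this example, not taken from eDictor: tokens are addressed by character offsets rather than linear position, split tokens inherit from Token, chunks reference token spans, syntactic chunks carry a level attribute giving their distance from the tree root, and chunk membership is recorded on the tokens themselves.

    from dataclasses import dataclass, field

    @dataclass
    class Token:
        """Minimal unit, addressed by character offsets, not linear position."""
        start: int
        end: int
        form: str
        chunk_ids: list = field(default_factory=list)  # chunks this token belongs to

    @dataclass
    class SplitToken(Token):
        """A sub-unit of a token (e.g. part of a contraction); it is itself a
        Token, inheriting all of its characteristics."""
        parent_start: int = -1  # offset of the surface token it was split from

    @dataclass
    class Chunk:
        """A grouping unit: page, paragraph, sentence, or syntactic node."""
        chunk_id: str
        kind: str          # e.g. "sentence", "paragraph", "NP"
        token_spans: list  # (start, end) offset pairs of member tokens
        level: int = 0     # for syntactic chunks: distance from the root node

    # Example: one sentence chunk over two tokens. Membership is stored on the
    # tokens, so each word carries the information that it belongs to the
    # sentence, rather than the sentence owning the words.
    t1 = Token(0, 3, "Boa")
    t2 = Token(4, 9, "noite")
    s = Chunk("c1", "sentence", [(0, 3), (4, 9)])
    for t in (t1, t2):
        t.chunk_ids.append(s.chunk_id)
    print(t1, s)

Because visual chunks (pages, sentences) and syntactic chunks both point into the same offset-addressed token store, a single repository can serve both the book-like display and the tree annotation, which is the non-redundant organization the abstract argues for.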
