291

Impacto da geração de grafos na classificação semissupervisionada / Impact of graph construction on semi-supervised classification

Sousa, Celso André Rodrigues de 18 July 2013 (has links)
A variety of graph-based semi-supervised learning algorithms and graph construction methods have been proposed by the research community in the last few years. Despite their apparent empirical success, the field of semi-supervised learning lacks a detailed empirical study evaluating the influence of graph construction on semi-supervised classification. In this work we provide such a study. We combine a variety of graph construction methods with a variety of graph-based semi-supervised learning algorithms and compare them empirically on six benchmark data sets widely used in the semi-supervised learning literature. The algorithms are evaluated on digit, character, text, and image classification tasks, as well as on the classification of Gaussian distributions. The experimental evaluation is subdivided into four parts: (1) best-case analysis; (2) evaluation of classifier stability; (3) evaluation of the influence of graph construction on semi-supervised classification; and (4) evaluation of the influence of regularization parameters on the classification performance of the semi-supervised classifiers. In the best-case analysis, we evaluate the lowest error rate of each semi-supervised learning algorithm combined with each graph construction method over a range of values of the sparsification parameter, which is associated with the number of neighbors of each training example. In the evaluation of classifier stability, we evaluate the stability of the semi-supervised classifiers combined with the graph construction methods over the same range of sparsification parameter values, fixing the regularization parameters (if any) at the values that achieved the best results in the best-case analysis. In the evaluation of the influence of graph construction, we evaluate the graph construction methods combined with the semi-supervised learning algorithms over a range of sparsification parameter values, again with the regularization parameters (if any) fixed at their best-case values. In the evaluation of the influence of regularization parameters, we evaluate the error surfaces generated by the semi-supervised classifiers on each graph and each data set, fixing the graphs that achieved the best results in the best-case analysis and varying the regularization parameter values. The aim of these experiments is to evaluate the trade-off between classification performance and stability of graph-based semi-supervised learning algorithms across a variety of graph construction methods and parameter values (sparsification and regularization, where applicable).
From the results obtained, we conclude that the mutual k-nearest neighbors graph (mutKNN) may be the best choice among the adjacency graph construction methods, while the RBF kernel may be the best choice among the weighted matrix generation methods. In addition, mutKNN tends to generate error surfaces that are smoother than those generated by the other adjacency graph construction methods. However, mutKNN is unstable for relatively small values of k. Our results indicate that the classification performance of graph-based semi-supervised learning algorithms is heavily influenced by parameter settings, and we found only a few evident patterns that could help parameter selection. The consequences of this instability are discussed in terms of research and practical applications.
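To make the two preferred construction choices concrete, here is a minimal, hypothetical Python sketch (not the thesis code) of a mutual kNN adjacency graph combined with RBF kernel edge weights; k plays the role of the sparsification parameter discussed above, and sigma is an assumed RBF bandwidth.

import numpy as np

def mutual_knn_rbf(X, k=10, sigma=1.0):
    # Pairwise squared Euclidean distances between the rows of X.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    d2 = np.maximum(d2, 0.0)               # guard against tiny negative values
    np.fill_diagonal(d2, np.inf)           # exclude self-loops
    # k nearest neighbors per point (k is the sparsification parameter).
    nn = np.argsort(d2, axis=1)[:, :k]
    A = np.zeros(d2.shape, dtype=bool)
    rows = np.repeat(np.arange(X.shape[0]), k)
    A[rows, nn.ravel()] = True
    A = A & A.T                            # mutual kNN: keep an edge only if reciprocal
    # Weight the retained edges with an RBF kernel.
    return np.where(A, np.exp(-d2 / (2.0 * sigma**2)), 0.0)

The mutual rule makes the graph sparser than the standard kNN graph, which is consistent with the instability the study reports for small k: with few reciprocal neighbors, vertices can end up isolated.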
292

Replicação de estudos empíricos em engenharia de software. / Replication of empirical studies in software engineering.

Dória, Emerson Silas 11 June 2001 (has links)
The increasing use of computer-based systems in practically all areas of human activity creates a growing demand for quality and productivity, from the point of view of the software process as well as of the software products generated. In this perspective, activities aggregated under the name of Software Quality Assurance have been introduced throughout the software development process. Among these activities, testing and review stand out, both with the main objective of minimizing the introduction of errors into software products during the development process. Testing constitutes one of the elements that supply evidence of software reliability, in complement to other activities such as reviews and formal, rigorous specification and verification techniques. Review, in turn, is an efficient 'filter' for the software engineering process, since it favors the identification and elimination of errors before the next step of the development process. Currently, research is being carried out to determine which technique, review or testing, is more appropriate and effective in particular circumstances for discovering particular classes of errors and, more broadly, how the techniques can be applied in a complementary way to improve software quality. Even though testing is indispensable in the development process, investigating the complementary aspect of these techniques is of great interest, for in many situations reviews have been observed to be as effective as, or more effective than, testing. In this perspective, this work carries out a comparative study, through the replication of experiments, between testing techniques and review techniques with respect to error detection in software products (source code and requirements specification documents). The study uses testing criteria from the functional technique (equivalence partitioning and boundary value analysis), the structural technique (all-nodes, all-edges, all-uses, all-potential-uses), and the error-based technique (mutation analysis), as well as reading techniques (stepwise abstraction and perspective-based reading) and inspection techniques (ad hoc and checklist). Besides comparing the effectiveness and efficiency of the techniques in detecting errors in software products, this work also uses specific knowledge about testing criteria to reevaluate the techniques used in the experiments of Basili & Selby, Kamsties & Lott, and Basili.
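For readers unfamiliar with the functional criteria named above, a toy illustration of our own (not taken from the replicated experiments): for a routine that accepts ages in the closed range 18 to 65, equivalence partitioning yields three classes and boundary value analysis tests the edges of each class.

# Toy illustration of equivalence partitioning and boundary value analysis
# for a function that accepts ages in the closed range [18, 65].
def is_eligible(age: int) -> bool:
    return 18 <= age <= 65

representatives = [10, 40, 80]           # one test per equivalence class
boundaries = [17, 18, 19, 64, 65, 66]    # values at and around each boundary

for age in representatives + boundaries:
    print(age, is_eligible(age))

Structural criteria such as all-nodes and all-edges would instead be measured on the control-flow graph of is_eligible, and mutation analysis would check whether the test set distinguishes the original from small syntactic variants (for example, replacing <= with <).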
293

Bayesian Analysis of Binary Sales Data for Several Industries

Chen, Zhilin 30 April 2015 (has links)
The analysis of big data is now very popular. Big data can be very important for companies, societies, and even humanity if we can take full advantage of it. Data scientists define big data by four Vs: volume, velocity, variety, and veracity; in short, the data have large volume, grow with high velocity, come in numerous varieties, and must have high quality. Here we analyze data from many sources (varieties). In small area estimation, the term 'big data' refers to numerous areas: we want to analyze binary data for a large number of small areas. Standard Markov chain Monte Carlo (MCMC) methods then do not work, because the computation time is prohibitive. To solve this problem, we use numerical approximations. We set up four methods: MCMC, a method based on the beta-binomial model, the integrated nested normal approximation (INNA), and the empirical logistic transform (ELT) method. We compare the processing times and accuracies of these four methods in order to find the fastest one with reasonable accuracy. Last but not least, we combine the ELT method, the fastest and an accurate one, with time series models to explore the sales data over time.
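As a hedged illustration of the transform named above (the thesis's exact ELT procedure may differ in its corrections and variance formula), the classical empirical logistic transform maps each area's binary count to an approximately normal quantity, which is what makes fast non-MCMC computation possible:

import numpy as np

def empirical_logistic_transform(y, n):
    # y successes out of n trials for each small area (equal-length arrays).
    y = np.asarray(y, dtype=float)
    n = np.asarray(n, dtype=float)
    z = np.log((y + 0.5) / (n - y + 0.5))        # transformed proportion
    v = 1.0 / (y + 0.5) + 1.0 / (n - y + 0.5)    # approximate sampling variance
    return z, v

z, v = empirical_logistic_transform([3, 10, 0], [20, 25, 8])

The additive 0.5 terms keep the transform finite even for areas with zero (or all) successes, which matters when there are many small areas.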
294

Bayesian Logistic Regression Model with Integrated Multivariate Normal Approximation for Big Data

Fu, Shuting 28 April 2016 (has links)
The analysis of big data is of great interest today, and it comes with challenges of improving precision and efficiency in estimation and prediction. We study binary data with covariates from numerous small areas, where direct estimation is not reliable and there is a need to borrow strength from the ensemble. This is generally done using Bayesian logistic regression, but because there are numerous small areas, exact computation for the logistic regression model becomes challenging. Therefore, we develop an integrated multivariate normal approximation (IMNA) method for binary data with covariates within the Bayesian paradigm, and this procedure is assisted by the empirical logistic transform. Our main goal is to provide the theory of IMNA and to show that it is many times faster than the exact logistic regression method with almost the same accuracy. We apply the IMNA method to health status binary data (excellent health or otherwise) from the Nepal Living Standards Survey, with more than 60,000 households (small areas), and estimate the proportion of Nepalese in excellent health for each household. For these data, IMNA gives estimates of the household proportions as precise as those from the logistic regression model while running more than fifty times faster (20 seconds versus 1,066 seconds), and this gain clearly transfers to bigger data problems.
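To convey only the flavor of such an approximation (this is our hypothetical sketch, not the IMNA derivation in the thesis), one can obtain a multivariate normal posterior for the regression coefficients by weighted least squares on the empirical-logistic scale under a flat prior:

import numpy as np

def elt_normal_approx(X, y, n):
    # X: (areas, p) covariate matrix; y successes out of n trials per area.
    y = np.asarray(y, dtype=float)
    n = np.asarray(n, dtype=float)
    z = np.log((y + 0.5) / (n - y + 0.5))        # empirical logistic transform
    v = 1.0 / (y + 0.5) + 1.0 / (n - y + 0.5)    # approximate ELT variances
    W = np.diag(1.0 / v)
    cov = np.linalg.inv(X.T @ W @ X)             # posterior covariance (flat prior)
    mean = cov @ (X.T @ W @ z)                   # posterior mean
    return mean, cov                             # beta | data ~ N(mean, cov), approx.

Because everything reduces to one linear solve, such approximations scale to numbers of areas where MCMC would be prohibitive, which is the trade-off the abstract quantifies.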
295

Empirical studies of financial and labor economics

Li, Mengmeng 12 August 2016 (has links)
This dissertation consists of three essays in financial and labor economics. It provides empirical evidence for testing the efficient market hypothesis in some financial markets and for analyzing the trends of power couples’ concentration in large metropolitan areas. The first chapter investigates the Bitcoin market’s efficiency by examining the correlation between social media information and future Bitcoin returns. First, I extract Twitter sentiment information from the text analysis of more than 130,000 Bitcoin-related tweets. Granger causality tests confirm that market sentiment information affects Bitcoin returns in the short run. Moreover, I find that time series models that incorporate sentiment information forecast future Bitcoin prices better. Based on the predicted prices, I also implement an investment strategy that yields a sizeable return for investors. The second chapter examines episodes of exuberance and collapse in the Chinese stock market and the second-board market using a series of extended right-tailed augmented Dickey-Fuller tests. The empirical results suggest that multiple “bubbles” occurred in the Chinese stock market, although insufficient evidence is found to claim the same for the second-board market. The third chapter analyzes the trends of power couples’ concentration in large metropolitan areas of the United States between 1940 and 2010. The urbanization of college-educated couples between 1940 and 1990 was primarily due to the growth of dual-career households and the resulting severity of the co-location problem (Costa and Kahn, 2000). However, the concentration of college-educated couples in large metropolitan areas stopped increasing between 1990 and 2010. According to the results of a multinomial logit model and a triple difference-in-differences model, this is because the co-location effect faded away after 1990.
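As an indication of how such a Granger-causality check is typically run (illustrative code of ours with synthetic data, not the dissertation's), statsmodels tests whether lagged sentiment helps predict returns beyond what lagged returns alone predict:

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
sentiment = rng.normal(size=200)                 # synthetic daily sentiment scores
noise = rng.normal(size=200) * 0.5
returns = 0.3 * np.roll(sentiment, 1) + noise    # returns lag sentiment by one day
data = pd.DataFrame({"returns": returns[1:], "sentiment": sentiment[1:]})

# Null hypothesis: the second column (sentiment) does NOT Granger-cause
# the first column (returns); small p-values reject the null.
grangercausalitytests(data[["returns", "sentiment"]], maxlag=3)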
296

What are the delivery system design characteristics of information-centric mass claims processes?

Alves, Kyle Vierra January 2017 (has links)
This thesis examines the operational delivery systems of information-centric Mass Claims Processes (MCPs). Empirical data are presented that build upon existing literature within the Operations Management discipline. This thesis aims to extend the area of knowledge which focuses on the rendering of assistance to very large groups of individuals disadvantaged through particular events such as armed conflict, civil unrest, acts of government and other similarly sweeping actions. One such approach to aid delivery is a legal process known as a Mass Claims Process. This research examines how assistance is rendered to the individual, the ‘claimant’, through a legally guided and controlled analysis of claimant-provided information. Such organisations are typically either publicly funded or funded through social schemes, which introduces significant pressure for efficiency. Similarly, the legal nature of MCPs emphasises the need for accuracy in the delivery of justice and law. The research addresses a number of areas not fully explored by the extant literature. There is a lack of research exploring the apparent trade-off between efficiency and accuracy in large-scale legal services. Little empirical evidence exists on the application of Postponement strategies in information-centric operations. This research also investigates a previously unexplored context in which strategic frameworks must find optimal alignment between the service concept and the design of the delivery system in a restricted and challenging environment. Fieldwork was carried out over a three-year period in two separate organisations, and utilised a polar case approach to increase the validity of the findings. The phenomenon of information interrelation, previously unidentified in the literature, is shown to have significant impact in this context. Several models are presented to describe the dynamic relationships between the characteristics and the strategic choices of the MCP. The results produce a set of findings illustrating optimal design choices for the key delivery system characteristics associated with MCPs. The financial impact of such organisations reaches into the billions (USD) and will continue to be a significant economic consideration for the foreseeable future. As such, research in this area has the ability to increase the efficient use of resources within these organisations while improving the service for applicants. Whilst this thesis contributes to the body of knowledge for delivery system design, further research is welcomed, especially on the phenomenon of information interrelation, for the growing area of information-centric organisations.
297

Franz Brentano: o conceito, o objeto e o método de uma "psicologia do ponto de vista empírico" / Franz Brentano: the concept, the object, and the method of a "psychology from an empirical standpoint"

Petry, Ana Maris 03 September 2012 (has links)
This work presents Franz Brentano's program for an empirical psychology, identifying the author's fundamental contributions to the foundation of a psychological science. When psychology sought autonomy from philosophy and recognition as a positive science, it was necessary to distinguish psychic phenomena from physical phenomena, in order to justify the distinction between psychology and physiology; to clarify the concept of psyche, in order to justify the autonomy of psychology with respect to philosophy; and to identify methodological access to such objects of investigation as an alternative to introspection, a possibility strongly opposed not only by positivism but also by Hume and Kant. Without a clear and distinct definition of these concepts, a psychological science would not be possible. Starting from the scientific-historical context of the 19th century, this text presents the concept of an empirical psychology, the object of investigation of that discipline, and the method established for its study, as Brentano defined them. The text follows the development of the work Psychology from an Empirical Standpoint, published in 1874, offering a review of this crucial work and setting out its key concepts. Brentano's particular conception of phenomenon, and his identification of intentionality as the fundamental characteristic of psychic phenomena, enabled him to outline the project of a psychological science that meets the positivist criteria of scientificity while avoiding a merely phenomenal psychology.
298

Previsão de longo prazo de níveis no sistema hidrológico do TAIM / Long-term forecasting of water levels in the Taim Hydrological System

Galdino, Carlos Henrique Pereira Assunção January 2015 (has links)
Population growth and the degradation of water bodies have been pressuring modern agriculture to provide more efficient responses regarding the rational use of water. To make better use of water resources, it is necessary to understand the movement of water in nature, where prior knowledge of atmospheric phenomena is an important tool in planning activities that use water as a primary source of supply. In this study, long-term forecasts of water levels (seven-month horizon, monthly time step) were produced for the Taim Hydrological System, using rainfall forecasts generated by a global circulation model as input. To perform the predictions, an empirical hydrological regression model was developed, based on statistical techniques for the analysis and manipulation of historical series, correlating the available input data with the levels (volumes) of water in the wetland. Assuming that weather forecasts are the major source of uncertainty in hydrological forecasting, an ensemble forecast from the COLA 2.2 model, with 30 members, was used to quantify the uncertainties involved. An algorithm was developed to generate all possible multiple linear regression models from the available data, from which eight candidate equations were selected for the forecasts. A preliminary analysis of the precipitation forecasts showed that the global circulation model did not represent the observed extremes satisfactorily, so a bias-removal procedure was carried out. The empirical model was then run continuously, generating long-term level forecasts for the next seven months for each month in the period June 2004 to December 2011. The results show that the methodology performs satisfactorily up to the third lead month, with performance decaying at longer lead times; despite this loss of skill, the model is a useful support tool for water resources management at the study site.
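Since the abstract describes exhaustively generating every multiple linear regression from the available predictors, here is a compact sketch of that enumeration (our illustration; the predictor handling and selection criterion in the thesis may differ):

import itertools
import numpy as np

def all_subset_regressions(X, y, names):
    # Fit ordinary least squares for every non-empty subset of predictors.
    results = []
    for r in range(1, X.shape[1] + 1):
        for cols in itertools.combinations(range(X.shape[1]), r):
            A = np.column_stack([np.ones(len(y)), X[:, cols]])   # intercept + subset
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            sse = float(np.sum((y - A @ beta) ** 2))
            results.append(([names[c] for c in cols], beta, sse))
    return sorted(results, key=lambda t: t[2])                   # lowest SSE first

With p predictors this fits 2**p - 1 models, which is feasible for the modest predictor counts typical of such empirical hydrological models and lets candidate equations (eight, in the study) be short-listed by any chosen skill criterion.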
299

An Analysis of the Differences between Unit and Integration Tests

Trautsch, Fabian 08 April 2019 (has links)
No description available.
300

Liminality of NHS research ethics committees : navigating participant protection and research promotion across regulatory spaces

Dove, Edward Stellwagen January 2018 (has links)
NHS research ethics committees (RECs) serve as the gatekeepers of health research involving human participants. They have the power to decide, through a regulatory 'event licensing' system, whether or not any given proposed research study is ethical and therefore appropriate to undertake. RECs have several regulatory functions. Their primary function has been to protect the interests of research participants and minimise risk of harm to them. Yet RECs, and other actors connected to them, also provide stewardship for the promotion of ethical and socially valuable research. While this latter function traditionally has been seen as secondary, the 'function hierarchy' is increasingly blurred in regulation. Regulatory bodies charged with managing RECs now emphasise that the functions of RECs are to both protect the interests of research participants, and also promote ethical research that is of potential benefit to participants, science, and society. Though the UK has held in some of its previous regulations (broadly defined) that RECs equally function to facilitate (ethical) health research, I argue that the 'research promotionist' ideology has moved 'up the ladder' in the regulation of RECs and in the regulation of health research, all the way to implementation in law, specifically in the Care Act 2014, and in the regulatory bodies charged with overseeing health research, namely the Health Research Authority. This thesis therefore asks: what impact does this ostensibly twinned regulatory objective then have on the substantive and procedural workings of RECs? I invoke a novel 'anthropology of regulation' as an original methodological contribution, which enables me to study empirically the nature of regulation and the experiences of actors within a regulatory space (or spaces), and the ways in which they themselves are affected by regulation. Anthropology of regulation structures my overall empirical inquiry to query how RECs, with a classic primary mandate to protect research participants, now interact with regulatory bodies charged with promoting health research and reducing perceived regulatory barriers. I further query what this changing environment might do to the bond of research and ethics as seen through REC processes of ethical deliberation and decision-making, by invoking the original concept of 'regulatory stewardship'. I argue that regulatory stewardship is a critical, but hitherto invisible, component of health research regulation, and requires fuller recognition and better integration into the effective functioning of regulatory oversight of research involving human participants.
