  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Liminality of NHS research ethics committees : navigating participant protection and research promotion across regulatory spaces

Dove, Edward Stellwagen January 2018 (has links)
NHS research ethics committees (RECs) serve as the gatekeepers of health research involving human participants. They have the power to decide, through a regulatory 'event licensing' system, whether or not any given proposed research study is ethical and therefore appropriate to undertake. RECs have several regulatory functions. Their primary function has been to protect the interests of research participants and minimise risk of harm to them. Yet RECs, and other actors connected to them, also provide stewardship for the promotion of ethical and socially valuable research. While this latter function has traditionally been seen as secondary, the 'function hierarchy' is increasingly blurred in regulation. Regulatory bodies charged with managing RECs now emphasise that the functions of RECs are both to protect the interests of research participants and to promote ethical research that is of potential benefit to participants, science, and society. Though the UK has held in some of its previous regulations (broadly defined) that RECs equally function to facilitate (ethical) health research, I argue that the 'research promotionist' ideology has moved 'up the ladder' in the regulation of RECs and in the regulation of health research, all the way to implementation in law, specifically in the Care Act 2014, and in the regulatory bodies charged with overseeing health research, namely the Health Research Authority. This thesis therefore asks: what impact does this ostensibly twinned regulatory objective then have on the substantive and procedural workings of RECs? I invoke a novel 'anthropology of regulation' as an original methodological contribution, which enables me to study empirically the nature of regulation and the experiences of actors within a regulatory space (or spaces), and the ways in which they themselves are affected by regulation. Anthropology of regulation structures my overall empirical inquiry to query how RECs, with a classic primary mandate to protect research participants, now interact with regulatory bodies charged with promoting health research and reducing perceived regulatory barriers. I further query what this changing environment might do to the bond of research and ethics as seen through REC processes of ethical deliberation and decision-making, by invoking the original concept of 'regulatory stewardship'. I argue that regulatory stewardship is a critical, but hitherto invisible, component of health research regulation, and requires fuller recognition and better integration into the effective functioning of regulatory oversight of research involving human participants.
302

A Bayesian approach for modeling stochastic deterioration

SILVA, Rodrigo Bernardo da 31 January 2010 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Deterioration modeling has been at the forefront of Bayesian reliability analysis. The best-known approaches in the literature for this purpose evaluate the behavior of the reliability measure over time in the light of empirical data alone. In the context of reliability engineering, these approaches have limited applicability, since one frequently deals with situations characterized by a scarcity of empirical data. Inspired by Bayesian strategies that combine empirical data and expert opinion in the modeling of time-independent reliability measures, this work proposes a methodology for handling time-dependent reliability. The proposed methodology encapsulates well-known Bayesian approaches, such as Bayesian methods for combining empirical data and expert opinion and time-indexed Bayesian models, improving on them in order to arrive at a more realistic model for describing the deterioration process of a given component or system. The cases discussed are those typically encountered in reliability practice (by means of simulation): assessment of run-time data for failure rates and the amount of deterioration; demand-based data for failure probability; and expert opinion for the analysis of failure rate, amount of deterioration and failure probability. These case studies show that the use of expert information can lead to a reduction of uncertainty about the distributions of reliability measures, especially in situations where few or no failures are observed.
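As a rough, hypothetical illustration of the kind of prior-plus-data combination described above (not the thesis's own model), the sketch below assumes an expert-elicited Gamma prior on a failure rate updated with a Poisson count of observed failures; all numbers are invented.

```python
# Hypothetical illustration (not the thesis's model): combine an expert-elicited
# Gamma prior on a failure rate with sparse Poisson failure data via a conjugate update.
from scipy import stats

# Expert elicitation: "roughly 0.2 failures per year, fairly uncertain",
# encoded as a Gamma(shape=2, rate=10) prior on the failure rate lambda.
prior_shape, prior_rate = 2.0, 10.0

# Sparse empirical evidence: 1 failure observed over 8 component-years.
failures, exposure = 1, 8.0

# Conjugate update: Gamma(a, b) prior + Poisson(lambda * t) data -> Gamma(a + k, b + t).
post_shape = prior_shape + failures
post_rate = prior_rate + exposure

prior = stats.gamma(prior_shape, scale=1.0 / prior_rate)
posterior = stats.gamma(post_shape, scale=1.0 / post_rate)

print("prior mean, 95% interval:", prior.mean(), prior.interval(0.95))
print("posterior mean, 95% interval:", posterior.mean(), posterior.interval(0.95))
```

Even a single observed failure over a modest exposure tightens the credible interval around the expert's prior guess, which is the behaviour the abstract points to when few or no failures are available.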
303

Psychologické a sociální aspekty působení cestovní kanceláře / Psychological and social aspects of tour operator's activity

Kaplina, Olesya January 2010 (has links)
This graduation thesis focuses on analyzing a tour operator's supply and its comparison with the results of empirical research. Trends in demand in the tourism industry, and the preferences and expectations of clients, are compared with the tour operator's own view, on the basis of theoretical knowledge and the results of empirical research. To improve the supply, the necessary measures are recommended at the end of the thesis. The thesis is divided into two parts: the theoretical base and the empirical research.
304

Requirements engineering in software startups: a qualitative investigation / Engenharia de requisitos em startups de software: uma investigação qualitativa

Jorge Augusto Melegati Gonçalves 06 March 2017 (has links)
Software startups face a very demanding market: they must deliver highly innovative solutions in the shortest possible time. Resources are limited and time to reach the market is short. It is therefore extremely important to gather the right requirements and to ensure they are precise. Nevertheless, software requirements are usually not clear, and startups struggle to identify what they should build. This context affects how requirements engineering activities are performed in these organizations. This work seeks to characterize the state of practice of requirements engineering in software startups. Using an iterative approach, seventeen interviews were conducted in three stages with founders and/or managers of Brazilian software startups operating in different market sectors and at different maturity levels. Data were analyzed using grounded theory techniques such as open and axial coding through continuous comparison. As a result, a conceptual model of the state of practice of requirements engineering in software startups was developed, consisting of its context influences (founders, software development manager, developers, business model, market and ecosystem) and a description of its activities (product team; elicitation; analysis, validation and prioritization; product validation; and documentation). Software development and startup development techniques are also presented and their use in the startup context is analyzed. Finally, using a bad-smell analogy borrowed from the software development literature, some bad practices and behaviors identified in software startups are presented and solutions to avoid them are proposed.
305

The effects of regulation and competition on the housing market from a structural model / Os efeitos da regulação e competição no mercado imobiliário a partir de um modelo estrutural

Silva, Rômullo Carvalho da 03 October 2017 (has links)
Real estate activity is notably influenced by local land use regulation. By acting as red-tape costs or explicitly through compensatory payments, such rules typically increase construction costs, reducing the supply of and increasing the demand for new housing, which leads to higher prices. This is the first study to model the decision-making process of housing supply using a game-theoretic approach. To shed light on the role of regulation and competition in this industry, I employ a static-entry model that incorporates the demand, cost and strategic factors common to the activity. I built a unique data set for the city of São Paulo, Brazil, with information on the universe of new residential buildings launched in the city, along with a history of all licenses requested by the developers from the government for each project. My empirical results show that regulation plays a key role in developer activity. For the areas of the city with the most intense activity, the bureaucracy in the residential approval process costs R$ 47 million (US$ 14 million) annually per entrant firm.
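To make the entry logic concrete, here is a toy Bresnahan-Reiss-style calculation with an assumed per-firm profit function and invented parameters, not the structural model estimated in the thesis; it only illustrates how an added per-firm regulatory cost lowers the equilibrium number of entrants.

```python
# Toy Bresnahan-Reiss-style entry calculation with invented parameters,
# not the structural model estimated in the thesis.
def per_firm_profit(n_entrants, market_size, margin=120.0,
                    fixed_cost=300.0, regulatory_cost=0.0):
    """Variable profit is shared among entrants; fixed and regulatory costs are per firm."""
    return margin * market_size / n_entrants - fixed_cost - regulatory_cost

def equilibrium_entrants(market_size, regulatory_cost=0.0, max_firms=50):
    """Largest N such that the N-th entrant still earns nonnegative profit."""
    n_star = 0
    for n in range(1, max_firms + 1):
        if per_firm_profit(n, market_size, regulatory_cost=regulatory_cost) >= 0:
            n_star = n
        else:
            break
    return n_star

for reg_cost in (0.0, 50.0, 150.0):
    n_eq = equilibrium_entrants(market_size=10.0, regulatory_cost=reg_cost)
    print(f"per-firm regulatory cost {reg_cost:>6.1f}: {n_eq} entrants in equilibrium")
```

With these invented numbers, raising the per-firm regulatory cost from 0 to 150 cuts the equilibrium number of entrants from four to two, which is the qualitative mechanism the abstract attributes to approval-process bureaucracy.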
306

PPP模式是推动电子政务平台健康发展的良药吗? --来自华东地区50个城市政务App的面板数据分析 / Is the PPP model a remedy for the healthy development of e-government platforms? A panel data analysis of government service apps in 50 cities of East China

January 2019 (has links)
abstract: Cities everywhere are striving to promote the new "Internet + government services" model of public service, building government service platforms that offer "one-stop" service resources centered on citizens' needs and covering activities such as information queries, appointment booking for government services and policy consultation. These platforms bring great convenience to people's lives and are an important measure by which local governments respond to the central government's "smart city" and "digital China" initiatives. This study finds that government service platforms are gradually introducing the PPP (public-private partnership) model, drawing on social resources to develop the platforms. However, whether the PPP model benefits platform construction, which factors constrain it, and how the PPP model can best be used in development work have not been well explored in existing research. With these questions in mind, this study analyzes in depth the role of the PPP model in the construction of government service platforms. The main research content is as follows. In the first part, theories of government public services and e-government are reviewed comprehensively; it is found that for government service platforms to develop healthily they need to draw on social resources through marketization, and that the PPP model fits the needs of current platform construction. The application of the PPP model to e-government at home and abroad is analyzed, and the research theme of the study is set out. In the second part, the government service platforms of 50 cities in East China are surveyed and common problems are compiled; the PPP model is found to be already widely used in platform construction, in four main forms, and data on platform construction in the 50 cities are collected and analyzed in depth. In the third part, combining the survey findings with the literature, hypotheses on how the PPP model affects platform construction are proposed and an econometric model is built; the hypotheses are tested through short-panel analysis, and robustness checks confirm the generality of the conclusions. In the fourth part, the results are analyzed: adopting the PPP model can effectively raise the construction level of government service platforms and improve user satisfaction, and the PPP model interacts significantly with the scale of the partner enterprise, the transparency of information and the interactive capability of the platform, jointly shaping users' evaluation of the platform. Introducing the PPP model, making full use of available resources and strengthening profit control are of positive significance for the construction of current government service platforms. / Dissertation/Thesis / Doctoral Dissertation Business Administration 2019
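The short-panel analysis described above can be sketched, under heavy assumptions, as a within (city fixed-effects) regression of user satisfaction on a PPP indicator, a transparency score and their interaction; the variable names and data below are synthetic stand-ins, not the dissertation's dataset or model.

```python
# Synthetic short-panel sketch (invented data, not the dissertation's dataset):
# city fixed effects via the within transformation, with a PPP dummy,
# an information-transparency score, and their interaction.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cities, years = 50, 4
df = pd.DataFrame({
    "city": np.repeat(np.arange(cities), years),
    "ppp": rng.integers(0, 2, cities * years),
    "transparency": rng.normal(0.0, 1.0, cities * years),
})
df["ppp_x_transparency"] = df["ppp"] * df["transparency"]
df["satisfaction"] = (0.5 * df["ppp"] + 0.3 * df["transparency"]
                      + 0.2 * df["ppp_x_transparency"]
                      + rng.normal(0.0, 1.0, cities * years))

# Within transformation: demean every variable by city to absorb city fixed effects.
cols = ["satisfaction", "ppp", "transparency", "ppp_x_transparency"]
within = df[cols] - df.groupby("city")[cols].transform("mean")

y = within["satisfaction"].to_numpy()
X = within[["ppp", "transparency", "ppp_x_transparency"]].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["ppp", "transparency", "ppp_x_transparency"], beta.round(3))))
```

The interaction coefficient is the piece that corresponds to the study's finding that the PPP effect is conditioned by factors such as information transparency.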
307

The Economics of Information and Spatial Price Behavior: An Empirical Assessment of Producer's Cattle Prices in the Western Region of the USA: 1973-1981

Juan, Maria Lourdes del Rosario 01 May 1983 (has links)
This study is an empirical verification of the theories of the economics of information in the cattle producer's market of the U.S. Western Region. Weekly data on producer's price quotations were obtained from CATTLEFAX for 1973 through 1981. The three major objectives of the research are: first, to determine whether price dispersion exists in the cattle producer's market in the Western Region; second, if significant price dispersion is shown to exist, to determine the nature of the regional price distribution; and third, if price dispersion does exist, to determine the implications of the dispersion relative to competitive structure, efficient informational flows, and the relative informational content of said distribution. Price dispersion exists in the cattle producer's market of the Western Region. It is the buyers who "establish" prices, and sellers act as price takers. Where heavy trading occurs, price information is more efficiently transmitted, resulting in a more symmetrical distribution. The composition of the market is relatively stable; however, imperfect information splits the market into high-price and low-price segments, favoring the better informed. Utah producers are rational in their pricing decisions. Price dispersion can be attributed to the lag in obtaining price information and asymmetry in the efficiency of information gathering.
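A minimal sketch of how price dispersion and distributional symmetry can be measured from weekly quotations is given below; the data are simulated stand-ins for the CATTLEFAX series, and the dimensions are arbitrary.

```python
# Simulated stand-in for weekly price quotations (the study used CATTLEFAX data):
# measure relative price dispersion and distributional symmetry week by week.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
prices = 60.0 + rng.normal(0.0, 2.5, size=(52, 8))   # rows = weeks, cols = markets

weekly_mean = prices.mean(axis=1)
weekly_sd = prices.std(axis=1, ddof=1)
coef_var = weekly_sd / weekly_mean        # dispersion relative to the price level
weekly_skew = stats.skew(prices, axis=1)  # symmetry of the weekly price distribution

print("mean weekly coefficient of variation:", round(float(coef_var.mean()), 4))
print("mean weekly skewness:", round(float(weekly_skew.mean()), 4))
```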
308

Racial Discrimination in Home Ownership: Impact of the 2008 Economic Crisis

Severtson, John 01 January 2019 (has links)
This paper uses regression analysis on a national data set from the United States covering 2001 to 2016 to analyze racial and ethnic disparities in home ownership between whites and blacks, Asian and Pacific Islanders, Puerto Ricans, Cubans, Mexicans, other Hispanics and American Indians. I employ Integrated Public Use Microdata combined with Bureau of Labor Statistics data and Federal Reserve Economic Data from the Federal Reserve Bank of St. Louis. Controlling for demographic, educational, income and wealth, employment and housing characteristics, I find no significant differences in home ownership between whites and Asian and Pacific Islanders, Mexicans or American Indians. However, blacks, Puerto Ricans, Cubans and other Hispanics face racial disadvantages in home ownership. All minority racial and ethnic groups except American Indians lost home ownership parity relative to whites from 2007 to 2011, the years primarily affected by the economic crisis.
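A hedged sketch of this type of specification, with a binary home-ownership outcome regressed on race/ethnicity indicators and a few controls, is shown below; the variable names and data are synthetic placeholders rather than the IPUMS/BLS/FRED extract used in the paper.

```python
# Synthetic placeholder data and variable names (not the IPUMS/BLS/FRED extract):
# a logit of home ownership on race/ethnicity indicators plus a few controls.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "race": rng.choice(["white", "black", "asian_pi", "mexican"], size=n),
    "income": rng.lognormal(10.5, 0.6, size=n),
    "age": rng.integers(22, 80, size=n),
    "college": rng.integers(0, 2, size=n),
})
# Simulate an ownership outcome with an assumed disadvantage for one group.
latent = (-6.0 + 0.5 * np.log(df["income"]) + 0.02 * df["age"]
          + 0.3 * df["college"] - 0.4 * (df["race"] == "black"))
df["own_home"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-latent))).astype(int)

model = smf.logit(
    "own_home ~ C(race, Treatment('white')) + np.log(income) + age + college",
    data=df,
).fit(disp=0)
print(model.params.round(3))  # race coefficients: log-odds gaps relative to whites
```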
309

CONTEXT-AWARE DEBUGGING FOR CONCURRENT PROGRAMS

Chu, Justin 01 January 2017 (has links)
Concurrency faults are difficult to reproduce and localize because they usually occur under specific inputs and thread interleavings. Most existing fault localization techniques focus on sequential programs but fail to identify faulty memory access patterns across threads, which are usually the root causes of concurrency faults. Moreover, existing techniques for sequential programs cannot be adapted to identify faulty paths in concurrent programs. While concurrency fault localization techniques have been proposed that analyze passing and failing executions obtained from running a set of test cases in order to identify faulty access patterns, they primarily rely on statistical analysis. We present a novel approach to fault localization using feature selection techniques from machine learning. Our insight is that the concurrency access patterns obtained from a large volume of coverage data generally constitute high-dimensional data sets, yet existing statistical analysis techniques for fault localization are usually applied to low-dimensional data sets. Each additional failing or passing run can provide more diverse information, which can help localize faulty concurrency access patterns in code. The patterns with maximum feature-diversity information point to the most suspicious patterns. We then apply data mining techniques to identify the interleaving patterns that occur most frequently and to report possible faulty paths. We also evaluate the effectiveness of fault localization using test suites generated from different test adequacy criteria. We have evaluated Cadeco on 10 real-world multi-threaded Java applications. Results indicate that Cadeco outperforms state-of-the-art approaches for localizing concurrency faults.
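As a simplified illustration of scoring access patterns by how well they separate failing from passing runs (a stand-in for Cadeco's fuller feature-selection pipeline, with an invented pattern matrix), the sketch below applies a chi-squared feature score.

```python
# Invented pattern matrix; a stand-in for Cadeco's fuller feature-selection pipeline.
# Rank memory-access patterns by how strongly they separate failing from passing runs.
import numpy as np
from sklearn.feature_selection import chi2

# Rows = executions, columns = access patterns (1 = pattern observed in that run).
patterns = np.array([
    [1, 0, 1, 0],   # passing run
    [1, 0, 0, 1],   # passing run
    [0, 1, 1, 0],   # failing run
    [0, 1, 1, 1],   # failing run
])
outcome = np.array([0, 0, 1, 1])   # 0 = pass, 1 = fail

scores, p_values = chi2(patterns, outcome)
for idx in np.argsort(scores)[::-1]:
    print(f"pattern {idx}: chi2 score = {scores[idx]:.2f}, p-value = {p_values[idx]:.3f}")
```

The patterns ranked highest are those observed almost exclusively in failing runs, which is the intuition behind treating fault localization as a feature-selection problem.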
310

The nonparametric least-squares method for estimating monotone functions with interval-censored observations

Cheng, Gang 01 May 2012 (has links)
Monotone functions, such as growth functions and cumulative distribution functions, are often objects of interest in the statistical literature. In this dissertation, we propose a nonparametric least-squares method for estimating monotone functions induced from stochastic processes in which the starting time of the process is subject to interval censoring. We apply this method to estimate the mean function of tumor growth, using data from either animal experiments or tumor screening programs, to investigate tumor progression. In this type of application, the tumor onset time is observed within an interval. The proposed method can also be used to estimate the cumulative distribution function of the elapsed time between two related events in human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) studies, such as the HIV transmission time between two partners and the AIDS incubation time from HIV infection to AIDS onset. In these applications, both the initial event and the subsequent event are only known to occur within some intervals. Such data are called doubly interval-censored data. The common property of these stochastic processes is that the starting time of the process is subject to interval censoring. A unified two-step nonparametric estimation procedure is proposed for these problems. In the first step of this method, the nonparametric maximum likelihood estimate (NPMLE) of the cumulative distribution function for the starting time of the stochastic process is obtained within the framework of interval-censored data. In the second step, a specially designed least-squares objective function is constructed with the above NPMLE plugged in, and the nonparametric least-squares estimate (NPLSE) of the mean function of tumor growth or of the cumulative distribution function of the elapsed time is obtained by minimizing this objective function. The theory of modern empirical processes is applied to prove the consistency of the proposed NPLSE. Simulation studies are carried out extensively to provide numerical evidence for the validity of the NPLSE. The proposed estimation method is applied to two real scientific applications. For the first application, the California Partners' Study, we estimate the distribution function of HIV transmission time between two partners. In the second application, the NPLSEs of the mean functions of tumor growth are estimated for tumors with different stages at diagnosis, based on data from a cancer surveillance program, the SEER program. An ad hoc nonparametric statistic is designed to test the difference between two monotone functions in this context. In this dissertation, we also propose a numerical algorithm, the projected Newton-Raphson algorithm, to compute the non- and semi-parametric estimates for M-estimation problems subject to linear equality or inequality constraints. By combining the Newton-Raphson algorithm and the dual method for strictly convex quadratic programming, the projected Newton-Raphson algorithm achieves the desired convergence rate. Compared to the well-known iterative convex minorant algorithm, the projected Newton-Raphson algorithm converges much more quickly when computing the non- and semi-parametric maximum likelihood estimates for panel count data.
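A much-simplified sketch of the monotone least-squares idea, with the interval-censoring and NPMLE plug-in step omitted and simulated data in place of real tumor-growth observations, is an isotonic (monotone) least-squares fit:

```python
# Simplified, simulated illustration of the monotone least-squares idea:
# project noisy growth estimates onto nondecreasing functions (isotonic regression).
# The NPMLE plug-in step for interval-censored onset times is omitted here.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 40)                   # time since (assumed known) onset
true_growth = 2.0 * (1.0 - np.exp(-0.8 * t))    # an assumed monotone mean growth curve
noisy = true_growth + rng.normal(0.0, 0.3, t.size)

iso = IsotonicRegression(increasing=True)
monotone_fit = iso.fit_transform(t, noisy)      # least-squares fit under monotonicity

print("largest decrease in fitted curve:", float(np.diff(monotone_fit).min()))
```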
