  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Μελέτη κατολισθήσεων από ιστορικά, βιβλιογραφικά δεδομένα χαρτογράφησης στην περιοχή Ιωαννίνων / A study of landslides based on historical and bibliographic mapping data in the Ioannina area

Καλαντζή, Φωτεινή 11 November 2009 (has links)
In the present study we a) extracted data from newspapers for the period 1960 to 2008 concerning landslide events in the Prefecture of Ioannina, and b) carried out a detailed investigation of these phenomena in 3 areas of the prefecture using historical and bibliographic data, the information extracted from the newspapers, and on-site mapping and recording of the remedial works carried out after the landslides occurred.
12

Improving the efficiency of clinical trial designs by using historical control data or adding a treatment arm to an ongoing trial

Bennett, Maxine Sarah January 2018 (has links)
The most common type of confirmatory trial is a randomised trial comparing the experimental treatment of interest to a control treatment. Confirmatory trials are expensive and take a lot of time in the planning, set up and recruitment of patients. Efficient methodology in clinical trial design is critical to save both time and money and allow treatments to become available to patients quickly. Often there are data available on the control treatment from a previous trial. These historical data are often used to design new trials, forming the basis of sample size calculations, but are not used in the analysis of the new trial. Incorporating historical control data into the design and analysis could potentially lead to more efficient trials. When the historical and current control data agree, incorporating historical control data could reduce the number of control patients required in the current trial and therefore the duration of the trial, or increase the precision of parameter estimates. However, when the historical and current data are inconsistent, there is a potential for biased treatment effect estimates, inflated type I error and reduced power. We propose two novel weights to assess agreement between the current and historical control data: a probability weight based on tail area probabilities; and a weight based on the equivalence of the historical and current control data parameters. For binary outcome data, agreement is assessed using the posterior distributions of the response probability in the historical and current control data. For normally distributed outcome data, agreement is assessed using the marginal posterior distributions of the difference in means and the ratio of the variances of the current and historical control data. We consider an adaptive design with an interim analysis. At the interim, the agreement between the historical and current control data is assessed using the probability or equivalence probability weight approach. 
The allocation ratio is adapted to randomise fewer patients to control when there is agreement and to revert to a standard trial design when there is disagreement. The final analysis is Bayesian, using a power prior with a fixed weight. The operating characteristics of the proposed design are explored, and we show how the equivalence bounds can be chosen at the design stage of the current study to control the maximum inflation in type I error. We then consider a design where a treatment arm is added to an ongoing clinical trial. For many disease areas, there are often treatments in different stages of the development process. We consider the design of a two-arm parallel-group trial where it is planned to add a new treatment arm during the trial. This could potentially save money, patients, time and resources. The addition of a treatment arm creates a multiple-comparison problem. Dunnett (1955) proposed a design that controls the family-wise error rate when comparing multiple experimental treatments to control and determined the optimal allocation ratio. We have calculated the correlation between test statistics for Dunnett's method when a treatment arm is added during the trial and only concurrent controls are used for each treatment comparison. We propose an adaptive design where the sample sizes of all treatment arms are increased to control the family-wise error rate, and we explore adapting the allocation ratio once the new treatment arm is added to maximise the overall power of the trial.
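As a rough illustration of the kind of borrowing the abstract describes, the fixed-weight power prior for a binary endpoint reduces to a Beta posterior, and a tail-area agreement weight can be approximated by comparing the two observed response rates. This is a hedged sketch with invented function names and numbers, not the thesis's actual implementation (which uses full posterior distributions rather than a normal approximation):

```python
import math

# Hypothetical sketch: fixed-weight power prior for a binary control
# endpoint, plus a normal-approximation tail-area agreement weight.
# All names and numbers are invented for illustration.

def power_prior_beta(y_cur, n_cur, y_hist, n_hist, a0, a=1.0, b=1.0):
    """Beta(alpha, beta) posterior for the control response probability
    when the historical likelihood is raised to the power a0
    (a0 = 0 discards the historical data, a0 = 1 pools it fully)."""
    alpha = a + y_cur + a0 * y_hist
    beta = b + (n_cur - y_cur) + a0 * (n_hist - y_hist)
    return alpha, beta

def tail_area_weight(y_cur, n_cur, y_hist, n_hist):
    """Folded tail-area probability comparing the observed response
    rates: 1.0 when they coincide, near 0.0 when they are far apart."""
    p_c, p_h = y_cur / n_cur, y_hist / n_hist
    se = math.sqrt(p_c * (1 - p_c) / n_cur + p_h * (1 - p_h) / n_hist)
    z = (p_c - p_h) / se
    tail = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # Phi(z)
    return 2 * min(tail, 1 - tail)

# 20/50 current responders agree exactly with 40/100 historical ones,
# so the weight is 1 and the historical data could be borrowed fully.
w = tail_area_weight(20, 50, 40, 100)
alpha, beta = power_prior_beta(20, 50, 40, 100, a0=w)
```

Using the agreement weight as the power-prior exponent is the design choice the abstract motivates: discordant historical data are automatically down-weighted, limiting bias and type I error inflation.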
13

Optimal decision criteria for the study design and sample size of a biomarker-driven phase III trial / バイオマーカーを用いた第三相試験における最適な試験デザイン及び被験者数の決定基準

Takazawa, Akira 25 May 2020 (has links)
Kyoto University / 0048 / New system, doctoral course / Doctor of Medical Science / 甲第22647号 / 医科博第110号 / 新制||医科||7 (University Library) / Graduate School of Medicine, Kyoto University / (Examiners) Prof. Toshiya Sato, Prof. Koji Kawakami, Prof. Shunichi Fukuhara / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
14

NEUzeit interaktiv visualisiert / NEUzeit interactively visualized

Gambashidze, Mariam, Moser, Jana, Listabarth, Jakob, Hanewinkel, Christian 09 February 2024 (has links)
No description available.
15

Change decision support: extraction and analysis of late architecture changes using change characterization and software metrics

Williams, Byron Joseph 02 May 2009 (has links)
Software maintenance is one of the most crucial aspects of software development. Software engineering researchers must develop practical solutions to handle the challenges presented in maintaining mature software systems. Research that addresses practical means of mitigating the risks involved when changing software, reducing the complexity of mature software systems, and eliminating the introduction of preventable bugs is paramount to today's software engineering discipline. Giving software developers the information that they need to make quality decisions about changes that will negatively affect their software systems is key to mitigating those risks. This dissertation presents work performed to assist developers to collect and process data that plays a role in change decision-making during the maintenance phase. To address these problems, developers need a way to better understand the effects of a change prior to making the change. This research addresses the problems associated with increasing architectural complexity caused by software change using a twofold approach. The first approach is to characterize software changes to assess their architectural impact prior to their implementation. The second approach is to identify a set of architecture metrics that correlate with system quality and maintainability and to use these metrics to determine the level of difficulty involved in making a change. The two approaches have been combined, and the results presented provide developers with a beneficial analysis framework that offers insight into the change process.
16

The Distribution of Environmental Contaminants: a Socio-Historical Study of Selected Neighborhoods in Dallas County, Texas

Cutrer, Jennifer G. 12 1900 (has links)
This research expands on recent sociological studies which maintain that environmental contaminants in America are disproportionately placed in neighborhoods inhabited by minorities and the poor. Prior studies have focused on the predictor variables which identify areas of contamination near residential neighborhoods, yet fail to explore the socio-political and historical factors which contribute to these phenomena. The Environmental Protection Agency's Toxic Release Inventory 1990 database, the Texas Natural Resource Conservation Commission's Annual Report of the Hazardous and Solid Waste Program for 1992, and the U.S. Census Bureau's 1990 Census Data for Dallas County were utilized in pinpointing industries violating toxic release standards. Socio-historical data were obtained from government and historical records and reports, books, and newspaper clippings on Dallas County. Maps and data were obtained from the North Central Texas Council of Governments and the cities of Dallas and Garland. Chapter I discusses the synergetic forces of capitalism, urban growth, uneven development, and settlement patterns resulting in the distribution of environmental contaminants. Chapter II reviews the literature and presents evidence that race and class are strong predictors of where environmental contaminants are located. Chapter III outlines the data and methods employed. Chapter IV traces the historical development of Dallas County. Chapter V details those political, economic, and social factors contributing to the convergence of people and contaminants within three selected neighborhoods. The forces which historically relegate minorities, particularly Blacks and Hispanics, and the poor to less desirable jobs, cheaper housing, and cheaper land are also explored. Cheap land and labor attract industry which, in turn, attracts more laborers. Chapter VI, the summary and conclusions, utilizes the socio-spatial approach in examining urban infrastructure development (i.e. roads and railways), which also reduces adjacent land costs, making housing more affordable for minorities and the poor. This study concludes that because of historical development and capitalism's exploitation of labor, these populations become entrenched in neighborhoods located adjacent to those industries where contaminants are emitted.
17

Datorspelsbranschen - Att estimera resursbehovet för nya projektteam baserat på historiska data / The video-game industry: estimating the resource needs of new project teams based on historical data

Kullgard, Peter January 2015 (has links)
The video-game business is relatively young compared with both the rest of the software industry and the entertainment industry. One of the major challenges video-game developers face is long-term human resource planning. Because compiled historical data and applicable resource estimation methods are missing, such planning is close to impossible. The first sub-goal of this study is to collect historical data on the structure and size of development teams that have built AAA games (professionally developed games with a large budget) for PC. The data are collected from credit lists retrieved via the website Mobygames.com. The second sub-goal is to examine whether the collected data can be applied to models for estimating future staffing needs. The two models selected for this purpose are the trend analysis model and the ratio analysis model. The results show that it is possible to collect and build up a historical database and that both models can be applied to it with good results. Trend analysis shows that the size of project teams over the past 20 years has evolved almost linearly, and ratio analysis shows clear relationships between different occupational groups, as well as between the project team as a whole and a single occupational group.
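The two models named in the abstract can be sketched in a few lines. This is a minimal illustration on invented numbers, not the MobyGames credit data: trend analysis fits a straight line to total team size per year, and ratio analysis forecasts one occupational group's headcount from another via a stable ratio.

```python
# Hypothetical AAA project team sizes by year (invented, not real data).
years = [2000, 2005, 2010, 2015]
team_size = [40, 80, 120, 160]

# Trend analysis: ordinary least-squares line, no external libraries.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(team_size) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, team_size))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

def trend_forecast(year):
    """Projected total team size from the fitted trend line."""
    return intercept + slope * year

# Ratio analysis: average artists-per-programmer ratio across projects.
programmers = [10, 20, 30, 40]
artists = [15, 30, 45, 60]
ratio = sum(a / p for a, p in zip(artists, programmers)) / len(artists)

def artists_needed(n_programmers):
    """Forecast one group's headcount from another via the ratio."""
    return ratio * n_programmers
```

With these (deliberately linear) inputs, `trend_forecast(2020)` extrapolates the team-size trend and `artists_needed(20)` applies the 1.5 artists-per-programmer ratio, mirroring how the study applies the two models to its compiled database.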
18

Approche Bayésienne de la survie dans les essais cliniques pour les cancers rares / Bayesian Approach to Survival in Clinical Trials in Rare Cancers

Brard, Caroline 20 November 2018 (has links)
L'approche Bayésienne permet d’enrichir l'information apportée par l'essai clinique, en intégrant des informations externes à l'essai. De plus, elle permet d’exprimer les résultats directement en termes de probabilité d’un certain effet du traitement, plus informative et interprétable qu’une p-valeur et un intervalle de confiance. Par ailleurs, la réduction fréquente d’une analyse à une interprétation binaire des résultats (significatif ou non) est particulièrement dommageable dans les maladies rares. L’objectif de mon travail était d'explorer la faisabilité, les contraintes et l'apport de l'approche Bayésienne dans les essais cliniques portant sur des cancers rares lorsque le critère principal est censuré. Tout d’abord, une revue de la littérature a confirmé la faible implémentation actuelle des méthodes Bayésiennes dans l'analyse des essais cliniques avec critère de survie.Le second axe de ce travail a porté sur le développement d’un essai Bayésien avec critère de survie, intégrant des données historiques, dans le cadre d’un essai réel portant sur une pathologie rare (ostéosarcome). Le prior intégrait des données historiques individuelles sur le bras contrôle et des données agrégées sur l’effet relatif du traitement. Une large étude de simulations a permis d’évaluer les caractéristiques opératoires du design proposé, de calibrer le modèle, tout en explorant la problématique de la commensurabilité entre les données historiques et actuelles. Enfin, la ré-analyse de trois essais cliniques publiés a permis d’illustrer l'apport de l'approche Bayésienne dans l'expression des résultats et la manière dont cette approche permet d’enrichir l’analyse fréquentiste d’un essai. / Bayesian approach augments the information provided by the trial itself by incorporating external information into the trial analysis. 
In addition, this approach allows the results to be expressed in terms of probability of some treatment effect, which is more informative and interpretable than a p-value and a confidence interval. In addition, the frequent reduction of an analysis to a binary interpretation of the results (significant versus non-significant) is particularly harmful in rare diseases.In this context, the objective of my work was to explore the feasibility, constraints and contribution of the Bayesian approach in clinical trials in rare cancers with a primary censored endpoint. A review of the literature confirmed that the implementation of Bayesian methods is still limited in the analysis of clinical trials with a censored endpoint.In the second part of our work, we developed a Bayesian design, integrating historical data in the setting of a real clinical trial with a survival endpoint in a rare disease (osteosarcoma). The prior incorporated individual historical data on the control arm and aggregate historical data on the relative treatment effect. Through a large simulation study, we evaluated the operating characteristics of the proposed design and calibrated the model while exploring the issue of commensurability between historical and current data. Finally, the re-analysis of three clinical trials allowed us to illustrate the contribution of Bayesian approach to the expression of the results, and how this approach enriches the frequentist analysis of a trial.
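The shift from p-values to direct probability statements can be made concrete with a simple normal-normal model on the log hazard ratio. This is a generic sketch with invented numbers, not the prior actually constructed in the thesis:

```python
import math

# Generic sketch: an informative normal prior on the log hazard ratio
# (e.g. built from historical data) is combined with the current-trial
# estimate by normal-normal conjugacy, and the result is reported as
# P(HR < 1) rather than a p-value. Numbers below are invented.

def posterior_log_hr(prior_mean, prior_sd, est, se):
    """Posterior mean and sd of the log hazard ratio."""
    w0, w1 = 1 / prior_sd ** 2, 1 / se ** 2  # prior and data precisions
    post_var = 1 / (w0 + w1)
    post_mean = post_var * (w0 * prior_mean + w1 * est)
    return post_mean, math.sqrt(post_var)

def prob_benefit(mean, sd):
    """P(log HR < 0): probability that the treatment reduces the hazard."""
    return 0.5 * (1 + math.erf(-mean / (sd * math.sqrt(2))))

# A weak prior centred on 'no effect' barely shifts a trial estimate of
# log HR = -0.3 (se 0.15), but the output is a direct probability of
# benefit rather than a significant/non-significant verdict.
m, s = posterior_log_hr(0.0, 1.0, -0.3, 0.15)
p_benefit = prob_benefit(m, s)
```

The point of the exercise is the output format: a statement like "the probability that the treatment reduces the hazard is 0.97" remains informative even in a small rare-disease trial where a dichotomous significance test would be underpowered.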
19

Utilisation des données historiques dans l'analyse régionale des aléas maritimes extrêmes : la méthode FAB / Using historical data in the Regional Analysis of extreme coastal events : the FAB method

Frau, Roberto 13 November 2018 (has links)
The protection of coastal areas against natural hazards coming from the sea, and in particular against the risk of coastal flooding, is necessary to safeguard all types of waterside structures and, in particular, nuclear power plants. Flood prevention relies on coastal protections that are commonly designed and verified through the concept of the return level of a particular extreme event. Return levels linked to very high return periods (1000 years or more) are estimated through statistical methods based on Extreme Value Theory (EVT). These statistical approaches are applied to time series of an observed extreme variable and enable the computation of its occurrence probability. In the past, return levels of extreme coastal events were most often estimated by applying statistical methods to series of local observations. Local series of sea levels are typically observed over too short a period (about 50 years) to compute reliable estimates for very high return periods. For this reason, several approaches are used to enlarge the extreme data samples and to reduce the uncertainty of the estimates. Currently, one of the most widely used methods in coastal engineering is Regional Analysis, identified by Weiss (2014) as an effective means of reducing uncertainty in the estimation of extreme events. The main idea of this method is to take advantage of the wide spatial availability of data observed at different sites to form homogeneous regions. This enables statistical distributions to be fitted to enlarged regional samples that pool all the extreme events that struck one or more sites of the region. Recent investigations have highlighted the importance of using past events when estimating extremes: when historical data are available, they cannot be neglected. Historical data are collected from different sources and are characterized as data that do not come from continuous time series; in most cases, no information is available about other extreme events occurring before or after a historical observation. This, together with the particular nature of each historical event, prevents its use in a classical Regional Analysis. In this manuscript, a statistical method called FAB is developed that enables a Regional Analysis to be performed using historical data. Formulated for POT (Peaks Over Threshold) data, it is based on a new definition of the local and regional observation period, called the credible duration, and it can take into account the three most common kinds of historical data (exact values, values within an interval, and values above a lower bound). In addition, an approach to determine an optimal sampling threshold is defined in this study, which yields better estimates by using the optimal extreme data sample in the FAB method. The FAB method is a flexible approach that enables the estimation of return levels in both frequentist and Bayesian frameworks. It is applied to a database of recorded skew surges (systematic data) and 14 historical skew surges collected for different sites located along the French, British, Belgian and Spanish coasts of the Atlantic Ocean, the English Channel and the North Sea. Frequentist and Bayesian estimates of skew surges are computed for each homogeneous region and for every site. Finally, this manuscript examines the issues surrounding the discovery and validation of historical data.
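The return-level concept this abstract builds on can be written down compactly for POT data. The following is the textbook Generalized Pareto return-level formula with invented parameters, shown only to make the concept concrete; it is not the FAB estimator itself:

```python
import math

# Textbook POT return level: exceedances of a threshold u are modelled
# by a Generalized Pareto Distribution GPD(sigma, xi), and the T-year
# level follows from the mean exceedance rate. Parameters are invented.

def gpd_return_level(u, sigma, xi, rate, T):
    """T-year return level for POT data.
    u: sampling threshold; sigma, xi: GPD scale and shape;
    rate: mean number of threshold exceedances per year."""
    m = T * rate  # expected number of exceedances in T years
    if abs(xi) < 1e-12:
        return u + sigma * math.log(m)  # exponential (xi -> 0) limit
    return u + (sigma / xi) * (m ** xi - 1.0)

# E.g. a 1000-year level for a site with 2 exceedances per year:
level_1000 = gpd_return_level(u=1.0, sigma=0.5, xi=0.1, rate=2.0, T=1000)
```

The formula makes the motivation for regional and historical pooling visible: with only ~50 years of local data, the 1000-year level is a far extrapolation, and any method (like FAB) that legitimately enlarges the sample or the effective observation duration tightens the estimate.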
20

Histoire des interactions entre les sociétés humaines et le milieu aquatique durant l'Anthropocène : évolutions des peuplements piscicoles dans le bassin de la Seine / Human societies and aquatic environment interactions during the Anthropocene : evolutions of the Seine catchment fish communities

Beslagic, Sarah 19 April 2013 (has links)
In France, the Seine River basin is among the most heavily human-impacted basins, and it has been harshly affected by anthropogenic pressures since the Industrial Revolution; its fish communities are potentially marked by these growing pressures on the aquatic environment. This thesis aimed to identify the factors driving changes in the spatial distribution of fish species over the last two centuries, combining a historical and an ecological approach. The analysis of fish observation data for the Seine basin since the beginning of the 19th century shows important modifications of the fish fauna (i.e. species introductions and extinctions) at the catchment scale. A comparative analysis of historical data containing abundance records against current ecological data shows that the distribution areas of certain species have been greatly modified. In different sectors of the basin, the results reveal sometimes substantial changes in assemblage composition, linked to different types of anthropogenic pressure acting at varied spatial and temporal scales. This work confirms the value of an interdisciplinary approach to understanding the history of ecosystems; future research is nevertheless needed to assess the importance of more local factors in the distribution of fish species.
