81

Scalable Estimation and Testing for Complex, High-Dimensional Data

Lu, Ruijin 22 August 2019 (has links)
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, etc. These data provide a rich source of information on disease development, cell evolution, engineering systems, and many other scientific phenomena. To achieve a clearer understanding of the underlying mechanisms, one needs a fast and reliable analytical approach to extract useful information from this wealth of data. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex data, powerful testing of functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between the data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a wavelet-based approximate Bayesian computation approach that is likelihood-free and computationally scalable. This approach will be applied to two problems: estimating mutation rates of a generalized birth-death process based on fluctuation experimental data and estimating the parameters of targets based on foliage echoes. The second part focuses on functional testing. We consider multiple testing in basis space via p-value-guided compression. Our theoretical results demonstrate that, under regularity conditions, the Westfall-Young randomization test in basis space achieves strong control of the family-wise error rate and asymptotic optimality. Furthermore, appropriate compression in basis space leads to improved power compared to point-wise testing in the data domain or basis-space testing without compression. The effectiveness of the proposed procedure is demonstrated through two applications: the detection of regions of spectral curves associated with pre-cancer using 1-dimensional fluorescence spectroscopy data and the detection of disease-related regions using 3-dimensional Alzheimer's Disease neuroimaging data. The third part focuses on analyzing data measured on the cortical surfaces of monkeys' brains during their early development, where subjects are measured on misaligned time markers. In this analysis, we examine the asymmetric patterns and the increasing/decreasing trends in the monkeys' brains across time. / Doctor of Philosophy / With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between the data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data.
The second part focuses on developing a powerful testing method for functional data that can be used to detect important regions. We will show that, under regularity conditions, this approach controls the error rate while improving detection power. The effectiveness of this testing approach will be demonstrated using two applications: the detection of regions of the spectrum that are related to pre-cancer using fluorescence spectroscopy data and the detection of disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys' brains during early development, where subjects are measured on misaligned time markers. By using functional data estimation and testing approaches, we are able to: (1) identify asymmetric regions between the right and left brains across time, and (2) identify spatial regions on the cortical surface that reflect increases or decreases in cortical measurements over time.
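To make the likelihood-free estimation idea above concrete, the following minimal rejection-sampling ABC sketch estimates a single rate parameter from simulated count data. It is only an illustration of the general ABC mechanism under assumed toy models; the dissertation's actual method uses wavelet-based summaries and a generalized birth-death simulator, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(rate, n_cultures=50, n_generations=20):
    # Toy stand-in for a fluctuation-experiment simulator: mutant counts per
    # culture. A real analysis would simulate a generalized birth-death process.
    return rng.poisson(rate * n_generations, size=n_cultures)

def summarize(counts):
    # A single crude summary statistic; richer (e.g. wavelet-based) summaries
    # would be used in practice.
    return counts.mean()

def abc_rejection(observed, n_draws=20000, tolerance=1.0):
    # Keep prior draws whose simulated summary lands close to the observed one.
    obs_stat = summarize(observed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 2.0)              # prior on the unknown rate
        if abs(summarize(simulate_counts(theta)) - obs_stat) < tolerance:
            accepted.append(theta)
    return np.array(accepted)                       # approximate posterior sample

observed = simulate_counts(0.8)                     # pretend these are the data
posterior = abc_rejection(observed)
print(len(posterior), posterior.mean(), posterior.std())
```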
82

Épistasie en médecine évolutive

Gamache, Isabel 07 1900 (has links)
La variabilité de la réponse aux médicaments entre les individus est en grande partie attribuable aux différences génétiques causées par des mutations génétiques. Ces mutations ont émergé au cours de l'évolution humaine et peuvent être neutres, bénéfiques ou délétères en termes de survie ou de succès reproductif. Bien que de nombreuses études identifient des variants génétiques associés à des phénotypes, comme la réponse aux médicaments, peu d'attention est accordée à l'origine de ces mutations ou à leur présence au sein des populations. La médecine évolutive entre alors en jeu en étudiant les origines évolutives des mutations associées à des phénotypes. Ce domaine se situe à l'intersection de la médecine et de la biologie évolutive, et il cherche à comprendre comment le corps humain est devenu ce qu'il est aujourd'hui. Cette thèse se concentrera sur l'évolution des gènes impliqués dans la réponse aux médicaments. La première partie de cette thèse se penchera sur la relation entre les gènes ADCY9 et CETP, qui sont liés à la réponse au médicament dalcetrapib visant à réduire les événements cardiovasculaires en ciblant la protéine CETP. Une mutation dans le gène ADCY9 a été précédemment identifiée comme modulant la réponse à ce médicament. Nous avons identifié plusieurs pressions de sélection dans le gène ADCY9, mais nous avons concentré nos analyses sur son interaction épistasique, c'est-à-dire non linéaire, co-évolutive avec le gène CETP. Des effets de cette interaction sur plusieurs phénotypes ont été observés, et des mécanismes potentiels sous-tendant cette pression co-évolutive et son association avec le médicament ont été identifiés. La deuxième partie de cette thèse sera la suite d'un projet portant sur l'étude des pressions de sélection sur la superfamille des cytochromes P450. Les gènes de cette superfamille sont généralement impliqués dans la détoxification de l'organisme, y compris par la métabolisation d'environ 75% des médicaments couramment prescrits. Des analyses préliminaires ont révélé des enrichissements de pression de sélection dans deux sous-familles, à savoir les CYP3A et les CYP4F. Des phénotypes potentiellement sous pressions de sélection ont été identifiés dans la sous-famille des CYP3A au sein de la population africaine. En conclusion, l'intégration de la génétique des populations avec la transcriptomique et les études d'association phénotypiques enrichit notre compréhension des liens entre les pharmacogènes au sein de diverses populations. Cette approche représente un pas de plus vers l'amélioration de la médecine de précision. / Variability in drug response between individuals is largely due to genetic differences caused by genetic mutations. These mutations have emerged in the course of human evolution and can be neutral, beneficial or deleterious in terms of survival or reproductive success. Although many studies identify genetic variants associated with phenotypes such as drug response, little attention is paid to the origin of these mutations or their presence in the population. This is where evolutionary medicine comes in, studying the evolutionary origins of mutations associated with phenotypes. This field lies at the intersection of medicine and evolutionary biology, and seeks to understand how the human body became what it is today. This thesis will focus on the evolution of genes involved in drug response. 
The first part of this thesis will look at the relationship between the genes ADCY9 and CETP, which are linked to the response to the drug dalcetrapib, aimed at reducing cardiovascular events by targeting the CETP protein. A mutation in the ADCY9 gene has been previously identified as modulating the response to this drug. We identified several selection pressures in the ADCY9 gene, but focused our analyses on its epistatic, that is non-linear, co-evolutionary interaction with the CETP gene. Effects of this interaction on several phenotypes were observed, and potential mechanisms underlying this co-evolutionary pressure and its association with the drug were identified. The second part of this thesis will follow on from a project investigating selection pressures on the cytochrome P450 superfamily. Genes in this superfamily are generally involved in the detoxification of the body, including the metabolization of around 75% of commonly prescribed drugs. Preliminary analyses have revealed selective pressure enrichments in two subfamilies, CYP3A and CYP4F. Potential phenotypes under selective pressure were identified in the CYP3A subfamily in the African population. In conclusion, the integration of population genetics with transcriptomics and phenotypic association studies enhances our understanding of the connections among pharmacogenes across diverse populations. This approach signifies another stride towards advancing precision medicine.
83

Practical Exploit Mitigation Design Against Code Re-Use and System Call Abuse Attacks

Jelesnianski, Christopher Stanislaw 09 January 2023 (has links)
Over the years, many defense techniques have been proposed by the security community. Even so, few have been adopted by the general public and deployed in production. This limited defense deployment and weak security have serious consequences, as large scale cyber-attacks are now a common occurrence in society. One major obstacle that stands in the way is practicality, the quality of being designed for actual use or having usefulness or convenience. For example, an exploit mitigation design may be considered not practical to deploy if it imposes high performance overhead, despite offering excellent and robust security guarantees. This is because achieving hallmarks of practical design, such as minimizing adverse side-effects like performance degradation or memory monopolization, is difficult in practice, especially when trying to provide a high level of security for users. Secure and practical exploit mitigation design must successfully navigate several challenges. To illustrate, modern-day attacks, especially code re-use attacks, are built on the assumption that rudimentary defenses such as Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR) will be deployed moving forward. These attacks have therefore evolved and diversified their angles of attack to become capable of leveraging a multitude of different code components. Accordingly, the security community has uncovered these threats and maintained progress in providing possible resolutions with new exploit mitigation designs. More specifically, defenses have had to correspondingly extend their capabilities to protect more aspects of code, leading to increasingly complex defense techniques. Trouble then arises because supporting such fine-grained defenses brings inherent disadvantages, such as consuming significant hardware resources that could otherwise be used for useful work. This complexity has made performance, security, and scalability all competing ideals in practical system design. At the same time, other recent efforts have implemented mechanisms with negligible performance impact, but do so at the risk of weaker security guarantees. This dissertation first formalizes the challenges in modern exploit mitigation design. To illustrate these challenges, this dissertation presents a survey from the perspective of both attacker and defender to provide an overview of the current security landscape. This includes defining an informal taxonomy of exploit mitigation strategies, explaining prominent attack vectors that are faced by security experts today, and identifying and defining code components that are generally abused by code re-use attacks. This dissertation then presents two practical design solutions. Both defense system designs uphold the goals of achieving realistic performance, providing strong security guarantees, being robust for modern application code-bases, and being able to scale across the system at large. The first practical exploit mitigation design this dissertation presents is MARDU. MARDU is a novel re-randomization approach that utilizes on-demand randomization and the concept of code trampolines to support sharing of code transparently system-wide. To the best of my knowledge, MARDU is the first published re-randomization technique capable of sharing re-randomized code at runtime system-wide. Moreover, MARDU is one of the very few re-randomization mechanisms capable of performing seamless live thread migration to newly randomized code without pausing application execution.
This dissertation describes the full design, implementation, and evaluation of MARDU to demonstrate its merits and show that careful design can uphold all practical design goals. For instance, scalability is a major challenge for randomization strategies, especially because traditional OS design expects code to be placed in known locations so that it can be reached by multiple processes, while randomization purposefully tries to achieve the opposite: being completely unpredictable. This clash in expectations between system and defense design breaks a few very important assumptions for an application's runtime environment. This forces most randomization mechanisms to abandon the hope of upholding memory deduplication. MARDU resolves this challenge by applying trampolines to securely reach functions protected under secure memory. Even with this new calling convention in place, MARDU shows that the performance degradation from re-randomization can be significantly reduced without sacrificing randomization entropy. Moreover, MARDU shows it is capable of defeating prominent code re-use variants with this practical design. This dissertation then presents its second practical exploit mitigation solution, BASTION. BASTION is a fine-grained system call filtering mechanism aimed at significantly strengthening the security surrounding system calls. Like MARDU, BASTION upholds the principles of this dissertation and was implemented with practicality in mind. BASTION's design is based on empirical observation of what a legitimate system call invocation consists of. BASTION introduces System Call Integrity to enforce the correct and intended use of system calls within a program. In order to enforce this novel security policy, BASTION proposes three new specialized contexts for the effective enforcement of legitimate system call usage. Namely, these contexts enforce that system calls are only invoked with the correct calling convention, system calls are reached through legitimate control-flow paths, and all system call arguments are free from attacker corruption. By enforcing System Call Integrity with the previously mentioned contexts, this dissertation adds further evidence that context-sensitive defense strategies are superior to context-insensitive ones. BASTION is able to prevent over 32 real-world and synthesized exploits in its security evaluation and incurs negligible performance overhead (0.60%-2.01%). BASTION demonstrates that narrow and specialized exploit mitigation designs can be effective on more than one front, to the point that BASTION not only prevents code re-use, but is also capable of defending against any attack class that requires the use of system calls. / Doctor of Philosophy / Limited security defense deployment and weak security have serious consequences, as large scale cyber-attacks are now a common occurrence. This may be surprising since many defense techniques have been proposed; yet in reality, few have been adopted by the general public. To elaborate, designing an ideal defense that is strong security-wise but does not use any computer resources is challenging. In practice, there is no free lunch, and therefore a design must consider how to best balance security with performance in an effort to be practical for users to deploy. Common tradeoffs include adverse side-effects such as slowing down user applications or imposing significant memory usage.
Therefore, practical and strong defense design is important to promote integration into the next generation of computer hardware and software. By sustaining practical design, the jump needed between a proof-of-concept and implementing it on commodity computer chips is substantially smaller. A practical defense should foremost guarantee strong levels of security and should not slow down a user's applications. Ideally, a practical defense is implemented so seamlessly that the user does not even notice it. However, balancing practicality with strong security is hard to achieve in practice. This dissertation first reviews the current security landscape; specifically, two important attack strategies are examined. First, code re-use attacks are exactly what they sound like: they reuse various bits and pieces of program code to create an attack. Second, system call abuse. System calls are essential functions that ordinarily allow a user program to talk with a computer's operating system; they enable operations such as a program asking for more memory or reading and writing files. When system calls are maliciously abused, they can cause a computer to use up all its free memory or even launch an attacker-written program. This dissertation goes over how these attacks work and correspondingly explains popular defense strategies that have been proposed by the security community so far. This dissertation then presents two defense system solutions that demonstrate how a practical defense system could be made. To that end, the full design, implementation, and evaluation of each defense system, named MARDU and BASTION, are presented. This dissertation leverages attack insights as well as compiler techniques to achieve its goal. A compiler is an essential developer tool that converts human-written code into a computer program. Moreover, compilers can be used to apply additional optimizations and security hardening techniques to make a program more secure. This dissertation's first defense solution, MARDU, is a runtime randomization defense. MARDU protects programs by randomizing the location of code chunks throughout execution so that attackers cannot find the code pieces they need to create an attack. Notably, MARDU is the first randomization defense that can be seamlessly deployed system-wide and is backwards compatible with programs not outfitted with MARDU. This dissertation's second defense solution, BASTION, is a defense system that strictly focuses on the protection of system calls in a program. As mentioned earlier, system calls are security-critical functions that allow a program to talk to the computer's operating system. BASTION protects the entire computer by ensuring that every time a system call is invoked by a user program, it is rightfully requested by the program and not maliciously triggered by an attacker. BASTION verifies that this request is legitimate by confirming that the current program state meets a certain set of criteria.
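As a rough, self-contained illustration of the System Call Integrity idea described above (correct calling convention, legitimate call site, uncorrupted arguments), the toy checker below validates recorded system-call invocations against a hand-written allow-list. All names, addresses, and predicates are invented for the example; BASTION itself enforces these contexts with compiler instrumentation and runtime support, not with a Python table.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SyscallEvent:
    name: str         # e.g. "open"
    call_site: int    # address/ID of the instruction that issued the call
    args: tuple       # argument values observed at invocation time

# Hypothetical policy: which call sites may issue which system calls, and
# what their arguments are allowed to look like.
ALLOWED_SITES = {"open": {0x401A10, 0x401B33}, "write": {0x402200}}
ARG_CHECKS = {
    "open": lambda a: isinstance(a[0], str) and a[0].startswith("/var/app/"),
    "write": lambda a: a[0] in (1, 2),   # only stdout/stderr
}

def check(event: SyscallEvent) -> bool:
    # All three toy "contexts" must hold for the invocation to be accepted.
    if event.name not in ALLOWED_SITES:                     # unexpected syscall
        return False
    if event.call_site not in ALLOWED_SITES[event.name]:    # illegitimate control-flow path
        return False
    return ARG_CHECKS.get(event.name, lambda a: True)(event.args)  # argument integrity

print(check(SyscallEvent("open", 0x401A10, ("/var/app/config.json",))))  # True
print(check(SyscallEvent("open", 0x500000, ("/etc/shadow",))))           # False
```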
84

A Study on Fingerprinting of Locally Assigned MAC-Addresses

Djervbrant, Karl-Johan, Häggström, Andreas January 2019 (has links)
The number of WiFi devices is increasing every day; most people traveling have a device with a WiFi network card enabled. This is something EffectSoft AB in Halmstad utilizes in their service Flow to track and count devices. The accuracy of the device counts was, however, not sufficient for commercial use, and this is where this candidate thesis continues the research on how to improve the accuracy. It introduces the fundamental problem that one cannot directly count transmitted MAC-Addresses to count present devices, since manufacturers implement features against this, such as MAC-Address randomization. It covers how manufacturers are not consistent in their implementation of the IEEE 802.11 standard and how this can be utilized to estimate how many devices are present in the network, with three different approaches. It also concludes that Control Frame Attacks are no longer a viable approach to count devices and that the best method for counting devices is a combination of Passive Probe Request Analysis techniques. / Mängden enheter som kommunicerar över WiFi ökar dagligen och idag bär de flesta människor en enhet med ett aktiverat WiFi-nätverkskort. Detta använder EffectSoft AB, ett företag i Halmstad till sin teknik Flow för att räkna mobila enheter. Noggrannheten för beräkningen är dock inte tillräckligt bra för att produkten ska kunna vara applicerbar på marknaden och därav handlar denna kandidatuppsatsen om beräkning av mobila enheter. Denna rapport presenterar de problem som man stöter på vid beräkning av mobila enheter som till exempel randomisering av MAC-Adresser. Den täcker även hur tillverkare inte är konsekventa i sin implementation av IEEE 802.11 standarden och hur detta kan utnyttjas genom tre metoder för beräkning av antal mobila enheter. Det fastställs att Control Frame Attack inte längre är en möjlig metod för syftet samt att den bästa metoden för beräkning av antalet mobila enheter är en kombination av olika passiva Probe Request analyser.
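For background, randomized MAC addresses are locally administered, which is signalled by the U/L bit (the 0x02 bit of the first octet). The short sketch below uses that bit to separate globally unique from randomized source addresses before counting; it is only a naive lower-bound illustration, not one of the thesis's three estimation approaches.

```python
def is_locally_administered(mac: str) -> bool:
    # True if the U/L bit of the first octet is set (typical of randomized MACs).
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

def naive_device_estimate(probe_macs):
    # Count distinct globally unique MACs; randomized ones are set aside.
    # Real estimators must instead group randomized addresses using other
    # probe-request fields, so this is only a crude lower bound.
    globally_unique = {m for m in probe_macs if not is_locally_administered(m)}
    randomized = [m for m in probe_macs if is_locally_administered(m)]
    return len(globally_unique), len(randomized)

macs = ["3c:5a:b4:12:34:56", "da:a1:19:aa:bb:cc", "3c:5a:b4:12:34:56"]
print(naive_device_estimate(macs))   # (1, 1)
```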
85

Méthode d'inférence par bootstrap pour l'estimateur sisVIVE en randomisation mendélienne

Dessy, Tatiana 11 1900 (has links)
No description available.
86

Direct Numerical Simulation Of Pipe Flow Using A Solenoidal Spectral Method

Tugluk, Ozan 01 June 2012 (has links) (PDF)
In this study, which is numerical in nature, direct numerical simulation (DNS) of pipe flow is performed. For the DNS, a solenoidal spectral method is employed; this involves the expansion of the velocity using divergence-free functions which also satisfy the prescribed boundary conditions, and a subsequent projection of the N-S equations onto the corresponding dual space. The solenoidal functions are formulated in Legendre polynomial space, which results in more favorable forms for the inner product integrals arising from the Petrov-Galerkin scheme employed. The developed numerical scheme is also used to investigate the effects of spanwise oscillations and phase randomization on turbulence statistics, and drag, in turbulent incompressible pipe flow for low to moderate Reynolds numbers (i.e. $\mathrm{Re} \sim 5000$).
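One reason a Legendre-based formulation yields favorable inner-product integrals is the orthogonality of the Legendre polynomials under Gauss-Legendre quadrature, which keeps the Galerkin-type matrices well structured. A small sketch of such an inner-product computation (illustrative only, not the study's solenoidal basis) is given below.

```python
import numpy as np
from numpy.polynomial import legendre

# Gauss-Legendre quadrature nodes and weights on [-1, 1]
nodes, weights = legendre.leggauss(32)

def inner_product(i, j):
    # <P_i, P_j> on [-1, 1]; the coefficient vector [0]*i + [1] selects P_i.
    pi = legendre.legval(nodes, [0] * i + [1])
    pj = legendre.legval(nodes, [0] * j + [1])
    return float(np.sum(weights * pi * pj))

print(round(inner_product(3, 3), 6))  # 2/(2*3+1) = 0.285714
print(round(inner_product(3, 5), 6))  # 0: orthogonality keeps the matrices sparse
```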
87

Cache Prediction and Execution Time Analysis on Real-Time MPSoC

Neikter, Carl-Fredrik January 2008 (has links)
Real-time systems do not only require that the logical operations are correct. Equally important is that the specified time constraints are always met. This has successfully been studied before for mono-processor systems. However, as the hardware in the systems gets more complex, the previous approaches become invalid. For example, multi-processor systems-on-chip (MPSoC) are becoming more and more common, and together with a shared memory, the bus access time is unpredictable in nature. This has recently been resolved, but a safe and not too pessimistic cache analysis approach for MPSoC has not been investigated before. This thesis has resulted in the design and implementation of algorithms for cache analysis on real-time MPSoC with a shared communication infrastructure. An additional advantage is that the algorithms include improvements compared to previous approaches for mono-processor systems. The verification of these algorithms has been performed with the help of data flow analysis theory. Furthermore, it is not known how different types of cache miss characteristics of a task influence the worst case execution time on MPSoC. Therefore, a program that generates randomized tasks, according to different parameters, has been constructed. The parameters can, for example, influence the complexity of the control flow graph and the average distance between cache misses.
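As an illustration of the kind of parameterized task generation described above, the sketch below produces a synthetic hit/miss trace whose average distance between cache misses is controlled by a parameter. It is a hypothetical stand-in, not the generator implemented in the thesis.

```python
import random

def generate_trace(n_accesses=1000, avg_miss_distance=20, seed=0):
    # Tag each memory access as HIT or MISS, with the gaps between misses drawn
    # so that their average is roughly `avg_miss_distance` accesses.
    rng = random.Random(seed)
    trace, next_miss = [], rng.randint(1, 2 * avg_miss_distance)
    for i in range(n_accesses):
        if i >= next_miss:
            trace.append("MISS")
            next_miss = i + max(1, round(rng.expovariate(1 / avg_miss_distance)))
        else:
            trace.append("HIT")
    return trace

trace = generate_trace()
misses = trace.count("MISS")
print(misses, len(trace) / max(misses, 1))  # rough check of the realized spacing
```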
88

Performance evaluation and enhancement for AF two-way relaying in the presence of channel estimation error

Wang, Chenyuan 30 April 2012 (has links)
Cooperative relaying is a promising diversity-achieving technique to provide reliable transmission, high throughput and extensive coverage for wireless networks in a variety of applications. Two-way relaying is a spectrally efficient protocol, providing one solution to overcome the half-duplex loss in one-way relay channels. Moreover, incorporating multiple-input-multiple-output (MIMO) technology can further improve the spectral efficiency and diversity gain. A lot of related work has been performed on the two-way relay network (TWRN), but most of it assumes perfect channel state information (CSI). In a realistic scenario, however, the channel is estimated and estimation error exists. In this thesis, therefore, we explicitly take the CSI error into account and investigate its impact on the performance of amplify-and-forward (AF) TWRNs where either multiple distributed single-antenna relays or a single multiple-antenna relay station is exploited. For the distributed relay network, we consider imperfect self-interference cancellation at both sources that exchange information with the help of multiple relays, and maximal ratio combining (MRC) is then applied to improve the decision statistics under imperfect signal detection. The system performance degradation in terms of outage probability and average bit-error rate (BER) is analyzed, as well as its asymptotic trend. To further improve the spectral efficiency while maintaining the spatial diversity, we utilize maximum minimum (Max-Min) relay selection (RS) and examine the impact of imperfect CSI on this single-RS scheme. To mitigate the negative effect of imperfect CSI, we resort to adaptive power allocation (PA) by minimizing either the outage probability or the average BER, which can be cast as a Geometric Programming (GP) problem. Numerical results verify the correctness of our analysis and show that the adaptive PA scheme outperforms the equal PA scheme under the aggregated effect of imperfect CSI. When employing a single MIMO relay, the problem of robust MIMO relay design is addressed by considering the fact that only imperfect CSI is available. We design the MIMO relay based upon the CSI estimates, where the estimation errors are included to attain a robust design under the worst-case philosophy. The optimization problem corresponding to the robust MIMO relay design is shown to be nonconvex. This motivates the pursuit of semidefinite relaxation (SDR) coupled with the randomization technique to obtain computationally efficient, high-quality approximate solutions. Numerical simulations compare the proposed MIMO relay with the existing nonrobust method, thereby validating its robustness against channel uncertainty. / Graduate
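The "SDR coupled with randomization" step mentioned above typically solves a relaxed semidefinite program and then draws Gaussian samples from the relaxed solution, keeping the best feasible candidate. The sketch below applies that generic recipe to a toy Boolean quadratic program rather than the thesis's relay design problem, and assumes the cvxpy package is available for the SDP step.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # symmetric objective matrix for x^T A x

# SDR: relax x x^T to a PSD matrix X with unit diagonal (x_i in {+-1} dropped).
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [X[i, i] == 1 for i in range(n)]
cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints).solve()

# Gaussian randomization: sample from N(0, X*), project onto the feasible set,
# and keep the best candidate found.
L = np.linalg.cholesky(X.value + 1e-9 * np.eye(n))
best_val, best_x = -np.inf, None
for _ in range(200):
    candidate = np.where(L @ rng.standard_normal(n) >= 0, 1.0, -1.0)
    value = candidate @ A @ candidate
    if value > best_val:
        best_val, best_x = value, candidate

print(best_x, best_val)
```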
89

Representative Subsets for Preference Queries

Chester, Sean 26 August 2013 (has links)
We focus on the two overlapping areas of preference queries and dataset summarization. A (linear) preference query specifies the relative importance of the attributes in a dataset and asks for the tuples that best match those preferences. Dataset summarization is the task of representing an entire dataset by a small, representative subset. Within these areas, we focus on three important sub-problems, significantly advancing the state-of-the-art in each. We begin with an investigation into a new formulation of preference queries, identifying a neglected and important subclass that we call threshold projection queries. While the literature typically constrains the attribute preferences (which are real-valued weights) such that their sum is one, we show that this introduces bias when querying by threshold rather than cardinality. Using projection, rather than inner product as in that literature, removes the bias. We then give algorithms for building and querying indices for this class of query, based, in the general case, on geometric duality and halfspace range searching, and, in an important special case, on stereographic projection. In the second part of the dissertation, we investigate the monochromatic reverse top-k (mRTOP) query in two dimensions. An mRTOP query asks for, given a tuple and a dataset, the linear preference queries on the dataset that will include the given tuple. Towards this goal, we consider the novel scenario of building an index to support mRTOP queries, using geometric duality and plane sweep. We show theoretically and empirically that the index is quick to build, small on disk, and very efficient at answering mRTOP queries. As a corollary to these efforts, we defined the top-k rank contour, which encodes the k-ranked tuple for every possible linear preference query. This is tremendously useful in answering mRTOP queries, but also, we posit, of significant independent interest for its relation to myriad related linear preference query problems. Intuitively, the top-k rank contour is the minimum possible representation of knowledge needed to identify the k-ranked tuple for any query, without a priori knowledge of that query. We also introduce k-regret minimizing sets, a very succinct approximation of a numeric dataset. The purpose of the approximation is to represent the entire dataset by just a small subset that nonetheless will contain a tuple within or near to the top-k for any linear preference query. We show that the problem of finding k-regret minimizing sets (and, indeed, the problem in the literature that it generalizes) is NP-Hard. Still, for the special case of two dimensions, we provide a fast, exact algorithm based on the top-k rank contour. For arbitrary dimension, we introduce a novel greedy algorithm based on linear programming and randomization that performs excellently in our empirical investigation. / Graduate / 0984
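To give a flavour of the regret-minimizing-set idea (here with k = 1), the sketch below scores a candidate subset by the worst ratio between its best tuple and the dataset's best tuple over sampled linear preference queries, and grows the subset greedily. This sampling-based heuristic is only illustrative; it is not the contour-based or LP-plus-randomization algorithms developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

def regret_ratio(data, subset_idx, weights):
    # Worst-case (1 - best-in-subset / best-overall) over the sampled queries.
    scores_all = data @ weights.T                 # (n_points, n_queries)
    scores_sub = data[subset_idx] @ weights.T
    best_all = scores_all.max(axis=0)
    best_sub = scores_sub.max(axis=0)
    return float(np.max(1.0 - best_sub / best_all))

def greedy_regret_set(data, size, n_queries=500):
    # Greedily add the point that most reduces the sampled maximum regret ratio.
    weights = rng.random((n_queries, data.shape[1]))
    weights /= weights.sum(axis=1, keepdims=True)   # non-negative preference weights
    chosen = [int(np.argmax(data.sum(axis=1)))]     # seed with a reasonable point
    while len(chosen) < size:
        candidates = [i for i in range(len(data)) if i not in chosen]
        best = min(candidates, key=lambda i: regret_ratio(data, chosen + [i], weights))
        chosen.append(best)
    return chosen, regret_ratio(data, chosen, weights)

data = rng.random((200, 3))
subset, regret = greedy_regret_set(data, size=5)
print(subset, round(regret, 4))
```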
90

Philosophical controversies in the evaluation of medical treatments : With a focus on the evidential roles of randomization and mechanisms in Evidence-Based Medicine

Mebius, Alexander January 2015 (has links)
This thesis examines philosophical controversies surrounding the evaluation of medical treatments, with a focus on the evidential roles of randomised trials and mechanisms in Evidence-Based Medicine. Current 'best practice' usually involves excluding non-randomised trial evidence from systematic reviews in cases where randomised trials are available for inclusion in the reviews. The first paper challenges this practice and evaluates whether adding evidence from non-randomised trials might improve the quality and precision of some systematic reviews. The second paper compares the alleged methodological benefits of randomised trials over observational studies for investigating treatment benefits. It suggests that claims about the superiority of well-conducted randomised controlled trials over well-conducted observational studies are justified, especially when results from the two methods are contradictory. The third paper argues that postulating the unpredictability paradox in systematic reviews when no detectable empirical differences can be found requires further justification. The fourth paper examines the problem of absence causation in the context of explaining causal mechanisms and argues that a recent solution (Barros 2013) is incomplete and requires further justification. Solving the problem by describing absences as causes of 'mechanism failure' fails to take into account the effects of absences that lead to vacillating levels of mechanism functionality (i.e. differences in effectiveness or efficiency). The fifth paper criticises literature that has emphasised functioning versus 'broken' or 'non-functioning' mechanisms, arguing that many diseases result from increased or decreased mechanism function rather than complete loss of function. Mechanistic explanations must account for differences in the effectiveness of performed functions, yet current philosophical mechanistic explanations do not achieve this. The last paper argues that the standard of evidence embodied in the ICE theory of technological function (i.e. testimonial evidence and evidence of mechanisms) is too permissive for evaluating whether the proposed functions of medical technologies have been adequately assessed and correctly ascribed. It argues that high-quality evidence from clinical studies is necessary to justify functional ascriptions to health care technologies. / QC 20150312
