51

Identification of predictive biomarkers for the efficacy of nivolumab in patients with advanced non-small cell lung cancer

Richard, Corentin 04 October 2019 (has links)
The recent introduction of immunotherapy has transformed the management of non-small cell lung cancer (NSCLC). Nivolumab, an antibody that blocks the immune checkpoint PD-1, has shown remarkable results in the second-line metastatic setting after failure of standard first-line chemotherapy. However, only a quarter of patients derive a durable benefit from this treatment. To date, no predictive biomarker of the therapeutic efficacy of nivolumab has been identified in a clear and consensual manner; the search for biomarkers that predict benefit from, or resistance to, this treatment is therefore a major challenge. The emergence of high-throughput sequencing over the past decade has had a considerable impact on clinical and fundamental research, making it possible to characterize the genetics of a tumor as a whole. These new techniques complement established ones, such as immunophenotyping and immunohistochemistry, available to researchers for an extensive analysis of tumor and patient characteristics. The objective of this work was to identify predictive markers of the efficacy of nivolumab in the treatment of advanced NSCLC using these different technologies. To do so, our study focused on a multicentre cohort of 115 NSCLC patients treated with nivolumab in the second or third metastatic line after failure of a cytotoxic doublet. Within the limits of sample availability and quality, the genetic, transcriptomic and immunohistochemical profiles of the tumors, as well as the clinical and immunological profiles of the patients, were analysed. Our results highlight major predictive markers of response to nivolumab. A good response to the first-line cytotoxic doublet favors optimal efficacy of nivolumab given in a subsequent line. In addition, regular monitoring of a patient's immunosuppressive myeloid cells and of cytotoxic cells expressing TIM-3 can detect primary or secondary resistance to the treatment. Furthermore, the joint estimation of PD-L1 and CD8 expression by RNA sequencing constitutes a major predictive marker of response; its predictive capacity surpasses that of PD-L1 alone and that of previously established transcriptomic signatures composed of a larger number of genes. Finally, the study of tumor exome sequencing shows the importance of an extensive analysis of tumor genetics and the need not to restrict that analysis to estimating the tumor mutational burden. In this work, we identified predictive markers of the efficacy of nivolumab in the treatment of advanced NSCLC. Our results underline the importance of combining several technologies to characterize tumor biology and patient immunity when discovering biomarkers and building predictive models of immunotherapy efficacy.
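As an illustration of the kind of comparison reported above, the sketch below checks whether a joint PD-L1 + CD8 expression score separates responders from non-responders better than PD-L1 expression alone. All data are synthetic, and the scoring rule (a simple average of the two standardized expressions) is an assumption for illustration, not the study's actual signature.

```python
# Hypothetical illustration: does a joint PD-L1 + CD8 expression score
# predict response better than PD-L1 alone? All data below are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 115  # used here only to mirror the cohort's scale

# Synthetic standardized RNA-seq expression for the two genes.
pdl1 = rng.normal(0.0, 1.0, n)
cd8 = rng.normal(0.0, 1.0, n)

# Synthetic ground truth: response driven by both markers plus noise.
logit = 1.2 * pdl1 + 1.0 * cd8 + rng.normal(0.0, 1.0, n)
response = (logit > 0).astype(int)

# Joint score: here simply the mean of the two standardized expressions.
joint_score = (pdl1 + cd8) / 2.0

print("AUC, PD-L1 alone :", round(roc_auc_score(response, pdl1), 3))
print("AUC, PD-L1 + CD8 :", round(roc_auc_score(response, joint_score), 3))
```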
52

Inertial migration of deformable capsules and droplets in oscillatory and pulsating microchannel flows

Ali Lafzi (10682247) 18 April 2022 (has links)
Studying the motion of cells and investigating their migration patterns in inertial microchannels has been of great interest among researchers because of numerous biological applications such as sorting, separating, and filtering. A major drawback of conventional microfluidics is the inability to focus extremely small biological particles and pathogens, on the order of sub-micron to nanometer sizes, because the theoretically required channel length is inversely proportional to the cube of the particle size and can reach several meters in extreme cases. Exploiting an oscillatory flow is one solution to this issue: the total distance the particle must travel to focus is virtually extended beyond the physical length of the device. Because of the symmetry of such a flow, the directions of the lift forces acting on the particle remain the same, which makes particle focusing feasible.

Here, we present simulation results for such oscillatory flows of a single capsule in a rectangular microchannel containing a Newtonian fluid. A 3D front-tracking method has been implemented to numerically study the dynamics of the capsule in the channel of interest. Several cases have been simulated to quantify the influence of the parameters involved, such as the channel flow rate, capsule deformability, oscillation frequency, and the mechanism applied to induce the flow oscillations. In all cases the capsule blockage ratio and initial location are the same, and the capsule is tracked until it reaches its equilibrium position. The ability to focus the capsule in a short microchannel with oscillatory flow was observed for all capsule deformabilities and oscillation mechanisms used in our study. Nevertheless, there is a limit to the channel flow rate beyond which there is no single focal point for the capsule. Another advantage of an oscillatory microchannel flow is the ability to control the capsule focal point by changing the oscillation frequency. The focusing point also depends on the capsule deformability, the flow rate, and the form of the imposed periodic pressure gradient; more deformable capsules with lower maximum velocity focus closer to the channel center. The difference between the capsule equilibrium point in steady and oscillatory flows is likewise affected by the capsule stiffness and the device flow rate. Furthermore, increasing the oscillation frequency, capsule rigidity, and system flow rate shortens the required device length.

Although the oscillation frequency provides new particle equilibrium positions, especially ones between the channel center and wall that can be very beneficial for separation purposes, a pure oscillatory flow has the shortcoming of zero net throughput. To address this restriction, a steady component is added to the previously defined oscillatory flow to make it pulsating. This type of flow adds further equilibrium points, because in that regard it behaves like a pure oscillatory flow with an equivalent frequency. Pulsating flows also permit droplets at high Ca or Re that would break up in a steady or very low-frequency regime. We therefore perform new numerical simulations of a deformable droplet suspended in steady, oscillatory, and pulsating microchannel flows. We observe fluctuations in the trajectory of the drop and show that the amplitude of these oscillations, the average oscillatory deformation, and the average migration velocity all decrease with increasing frequency. The dependence of the drop focal point on the shape of the velocity profile is investigated as well: the equilibrium position moves towards the wall in a plug-like profile, which is the case at very high frequencies. Moreover, because these simulations are computationally expensive, a recursive version of multi-fidelity Gaussian processes (MFGP) is used to replace the numerous high-fidelity (fine-grid) simulations that cannot be afforded numerically. The MFGP algorithm predicts the equilibrium distance of the drop from the channel center over a wide range of the input parameters, namely Ca and frequency, at constant Re. It performs exceptionally well, with an average R^2 score of 0.986 on 500 random test sets.

The presence of lift forces is the main factor defining the dynamics of the drop in the microchannel. The last part of this work is dedicated to extracting the acting lift-force profiles and identifying their relationships with the parameters involved, to shed light on the underlying physics; this is based on a novel methodology that depends solely on the drop trajectory. Assuming a constant Re, we compare steady lift forces at different Ca numbers and oscillatory ones at the same constant Ca. We then define analytical equations for the obtained lift profiles using non-linear regression and predict their key coefficients over a continuous range of inputs using MFGP.
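As a rough illustration of the surrogate-modeling step, the following is a minimal two-fidelity sketch in the spirit of recursive MFGP: a Gaussian process fit to plentiful coarse-grid data feeds its prediction as an extra input to a second GP trained on a few fine-grid samples. The target function standing in for the drop's equilibrium distance over (Ca, frequency) is synthetic, and the kernel choices are assumptions, not the thesis's actual setup.

```python
# Minimal two-fidelity GP sketch: the low-fidelity posterior mean becomes
# an extra feature for the high-fidelity GP (recursive-MFGP flavor).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def f_high(x):   # "fine-grid" response (synthetic stand-in)
    return np.sin(4 * x[:, 0]) * np.exp(-x[:, 1]) + 0.1 * x[:, 1]

def f_low(x):    # "coarse-grid" response: a biased, cheap approximation
    return 0.8 * f_high(x) + 0.2 * x[:, 0]

X_low = rng.uniform(0, 1, (60, 2))    # many cheap samples of (Ca, freq)
X_high = rng.uniform(0, 1, (10, 2))   # few expensive samples

gp_low = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[0.3, 0.3]))
gp_low.fit(X_low, f_low(X_low))

# High-fidelity GP sees (Ca, freq) plus the low-fidelity prediction.
X_aug = np.hstack([X_high, gp_low.predict(X_high).reshape(-1, 1)])
gp_high = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[0.3, 0.3, 0.3]))
gp_high.fit(X_aug, f_high(X_high))

X_test = rng.uniform(0, 1, (5, 2))
X_test_aug = np.hstack([X_test, gp_low.predict(X_test).reshape(-1, 1)])
print(gp_high.predict(X_test_aug))   # surrogate equilibrium distances
```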
53

Personalized access to information: taking user dynamics into account

Guàrdia Sebaoun, Elie 29 September 2017 (has links)
The main goal of this thesis is to improve the match between retrieved information and users' expectations by means of rich and efficient profiles. This means exploiting as much user feedback as possible (whether given as clicks, ratings, or written reviews) as well as context. In parallel, the strong growth of mobile devices (smartphones, tablets), and consequently of ubiquitous computing, forces us to rethink the role of information-access systems. We therefore took an interest not only in performance as such, but also in accompanying users through their access to information. In this thesis, we chose to exploit the texts written by users to refine their profiles and contextualize recommendation. To this end, we used reviews posted on specialized sites (IMDb, RateBeer, BeerAdvocate) and online stores (Amazon), as well as messages posted on Twitter. We then turned to the problem of modeling user dynamics. Besides improving system performance, such modeling brings a form of explanation to the recommended items. We thus propose to accompany users through their access to information rather than constraining them to a set of items the system judges relevant.
54

Factors contributing to and predictive models for drugs exhibiting negative food effects of unknown mechanisms

Marasanapalle, Venugopal P. 01 January 2007 (has links) (PDF)
Drugs exhibiting a decreased extent of absorption when administered in the fed state compared to the fasted state are said to exhibit a negative food effect. The known causes of negative food effects are luminal degradation and complexation with metal ions such as Ca2+. For drugs that undergo neither GI degradation nor metal-ion complexation, various factors have been proposed as causes of negative food effects, but the evidence is inconclusive. The objectives of this investigation were: to identify the physicochemical and physiological changes between fasted and fed states and their role in causing negative food effects; to develop an empirical model that correlates the biopharmaceutical properties of molecules with negative food effects; and to translate the empirical model into a mechanistic model that explains negative food effects for drugs whose mechanisms are not clearly defined. The important physicochemical change in the upper intestine was identified to be pH. The pH of the upper intestine in the fasted state is typically 6.5, whereas the overall post-prandial pH after a standard meal is 5.4 (5.0–5.7) in the duodenum and 4.7 in the jejunum, owing to the emptying of acidic chyme. Drugs with negative food effects exhibited incomplete GI absorption, low log P values, and low apical-to-basolateral Caco-2 permeabilities. Acidic/basic drugs exhibiting either negative food effects or no food effects, with a molecular size range of 200–450 Da and no physiological effects (such as on secretions and motility), were selected from the literature. Multiple linear regression analysis using five drugs exhibiting negative food effects and seven drugs exhibiting no food effects indicated that percent food effect correlated with the acidic/basic dissociation constants (Ka/Kb) and with Caco-2 permeability (R^2 = 0.9114, power ≈ 1, p < 0.00002). A mathematical model adopted to understand the mechanisms of negative food effects suggested that lowering of the permeability or solubility of the model compounds at the lower pH of the post-prandial upper intestine may contribute to their negative food effects. Finally, this model was found to be useful in predicting negative food effects using in situ rat permeability values.
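A minimal sketch of the kind of multiple linear regression just described, assuming hypothetical descriptor values: percent food effect is modeled from a dissociation-constant term and Caco-2 permeability. All numbers are invented for illustration; the dissertation's actual drug set and values differ.

```python
# Hypothetical sketch: percent negative food effect regressed on a
# dissociation-constant term and Caco-2 permeability. Invented data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: log(Ka or Kb), Caco-2 apical-to-basolateral Papp (1e-6 cm/s).
# First 5 rows mimic negative-food-effect drugs, last 7 no-food-effect.
X = np.array([
    [-4.2, 1.5], [-5.0, 3.2], [-3.8, 0.9], [-6.1, 5.0],
    [-4.9, 2.1], [-7.3, 12.0], [-8.0, 20.5], [-6.8, 15.1],
    [-7.5, 18.0], [-8.2, 25.0], [-7.0, 14.2], [-6.5, 11.3],
])
# Percent change in extent of absorption, fed vs. fasted (negative = loss)
y = np.array([-42, -30, -48, -18, -35, -5, 0, -2, -1, 3, -4, -6])

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2:", model.score(X, y))
```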
55

Modeling Biotic and Abiotic Drivers of Public Health Risk from West Nile Virus in Ohio, 2002-2006

Rosile, Paul A. 10 October 2014 (has links)
No description available.
56

Financial analysis of ProScan, a.s.

Šudová, Markéta January 2010 (has links)
The purpose of the financial analysis of ProScan, a.s. is to evaluate the financial health and financial stability of the company using various methods and indicators of financial analysis, including a comparison with competitors. The analysis covers the period from 1 January 2005 to 31 December 2010. The thesis is divided into two parts, theoretical and practical. The methods used include horizontal and vertical analysis, analysis of funds, liquidity analysis, debt and financial stability analysis, profitability analysis, activity analysis, and predictive models.
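As a hedged illustration of the kinds of indicators and predictive models listed above, the sketch below computes a liquidity ratio, a profitability ratio, and the classic Altman Z-score from invented figures; it is not based on ProScan's statements.

```python
# Illustrative sketch of a few indicators named above, computed from
# invented balance-sheet figures (thousands of CZK); not ProScan's data.

def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

def roe(net_income, equity):
    return net_income / equity

def altman_z(wc, re, ebit, mve, sales, ta, tl):
    """Altman Z-score, the classic bankruptcy-prediction model (1968 form)."""
    return (1.2 * wc / ta + 1.4 * re / ta + 3.3 * ebit / ta
            + 0.6 * mve / tl + 1.0 * sales / ta)

print(current_ratio(5200, 3100))             # liquidity
print(roe(860, 4700))                        # profitability
print(altman_z(2100, 1500, 980, 6400, 11800, 9300, 4600))
```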
57

Financial analysis of the selected company

Klestilová, Pavla January 2010 (has links)
The theoretical part of the diploma thesis deals with the main aspects of financial analysis, in particular horizontal and vertical analysis of absolute values, financial ratios, newer valuation models, and bankruptcy prediction models. The following part applies these theoretical approaches to the company Crocodille ČR, spol. s r.o. It focuses on trend analysis, the calculation of Weighted Average Cost of Capital and Economic Value Added, and a comparative analysis against the industry. The final chapter presents recommendations, for example a sensitivity analysis of the company's indebtedness.
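For concreteness, here is a small sketch of the WACC and EVA calculations mentioned above, using invented inputs rather than Crocodille ČR's actual figures.

```python
# Sketch of the WACC and EVA calculations; all inputs are invented.

def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    """Weighted Average Cost of Capital."""
    total = equity + debt
    return (equity / total) * cost_equity + (debt / total) * cost_debt * (1 - tax_rate)

def eva(nopat, invested_capital, wacc_rate):
    """Economic Value Added: operating profit less the capital charge."""
    return nopat - wacc_rate * invested_capital

w = wacc(equity=60_000, debt=40_000, cost_equity=0.11, cost_debt=0.05, tax_rate=0.19)
print(f"WACC = {w:.2%}")
print(f"EVA  = {eva(nopat=9_500, invested_capital=100_000, wacc_rate=w):,.0f}")
```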
58

TDNet: A Generative Model for Taxi Demand Prediction

Svensk, Gustav January 2019 (has links)
Supplying the right number of taxis in the right place at the right time is very important for taxi companies. In this thesis, the machine learning model Taxi Demand Net (TDNet) is presented, which predicts short-term taxi demand in different zones of a city. It is based on WaveNet, a causal dilated convolutional neural network for time-series generation. TDNet combines historical demand from recent years with features such as time of day, day of week, and day of month to produce 26-hour taxi demand forecasts for all zones in a city. It has been applied to one city in northern Europe and one in South America. In the northern European city, an error of one taxi or less per hour per zone was achieved in 64% of the cases; in the South American city the figure was 40%. In both cities it beat the SARIMA and stacked-ensemble benchmarks. This performance was achieved by tuning the hyperparameters with a Bayesian optimization algorithm. Additionally, weather and holiday features were added as inputs in the northern European city, but they did not improve the accuracy of TDNet.
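As background, the sketch below shows a WaveNet-style stack of causal dilated 1-D convolutions, the building block TDNet is described as using. It is an illustrative PyTorch architecture under assumed channel counts and depths, not the author's implementation.

```python
# Minimal WaveNet-style stack of causal dilated 1-D convolutions.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution that only looks at past time steps."""
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad to stay causal
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation)

    def forward(self, x):                 # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))
        return self.conv(x)

class DilatedStack(nn.Module):
    """Stack with dilations 1, 2, 4, ... doubling the receptive field."""
    def __init__(self, channels=32, layers=6, kernel_size=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            CausalConv1d(channels, kernel_size, dilation=2 ** i)
            for i in range(layers))

    def forward(self, x):
        for block in self.blocks:
            x = torch.relu(block(x)) + x  # residual connection
        return x

demand = torch.randn(8, 32, 168)          # a week of hourly history
print(DilatedStack()(demand).shape)       # -> torch.Size([8, 32, 168])
```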
59

Predictive Models of Student Learning

Pardos, Zachary Alexander 26 April 2012 (has links)
In this dissertation, several approaches I have taken to build upon the student learning model are described. There are two focuses of this dissertation. The first is on improving the accuracy with which future student knowledge and performance can be predicted, by individualizing the model to each student. The second is to predict how different educational content and tutorial strategies will influence student learning. The two focuses are complementary but are approached from slightly different directions. I have found that Bayesian networks, based on belief propagation, are strong at achieving the goals of both focuses. In prediction, they excel at capturing the temporal nature of data produced where student knowledge is changing over time. This concept of state change over time is very difficult to capture with classical machine learning approaches. Interpretability is also hard to come by with classical machine learning approaches; however, it is one of the strengths of Bayesian models and aids in studying the direct influence of various factors on learning. These models are studied in the domain of computer tutoring systems, software which uses artificial intelligence to enhance computer-based tutorial instruction. These systems are growing in relevance: at their best, they have been shown to achieve the same educational gain as one-on-one human interaction. Computer tutors have also received the attention of the White House, which mentioned a tutoring platform called ASSISTments in its National Educational Technology Plan. With the fast-paced adoption of these data-driven systems, it is important to learn how to improve their educational effectiveness by making sense of the data they generate. The studies in this dissertation use data from educational systems that primarily teach topics of Geometry and Algebra, but the methods can be applied to any domain with clearly defined sub-skills and dichotomous student response data. One of the intended impacts of this work is for these knowledge-modeling contributions to facilitate the move towards computer adaptive learning, in much the same way that Item Response Theory models facilitated the move towards computer adaptive testing.
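As one concrete instantiation of the temporal Bayesian student models discussed above, here is a sketch of the standard Bayesian Knowledge Tracing update; the parameter values are illustrative, and the dissertation's individualized models extend beyond this baseline.

```python
# Standard Bayesian Knowledge Tracing update for one skill.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Posterior over knowledge after one response, then a learning step."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Chance the student learned the skill between opportunities.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior probability the skill is already known
for response in [True, False, True, True]:  # dichotomous responses
    p = bkt_update(p, response)
    print(round(p, 3))
```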
60

Active evaluation of predictive models

Sawade, Christoph January 2012 (has links)
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs. 
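The core idea can be sketched with a self-normalized importance-weighted error estimate: instances are drawn from an instrumental distribution q rather than the test distribution p, and each labeled instance is reweighted by p/q to keep the estimate consistent. The pool, losses, and choice of q below are synthetic stand-ins, not the thesis's derived optimal distributions.

```python
# Importance-weighted error-rate estimation from an instrumental
# sampling distribution q; all data below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_pool, n_labeled = 10_000, 200

# Pool of unlabeled instances with (hidden) true 0/1 losses.
loss = (rng.random(n_pool) < 0.12).astype(float)   # true error rate 0.12
p = np.full(n_pool, 1.0 / n_pool)                  # test distribution

# Instrumental distribution: oversample instances likely to be errors
# (here emulated directly; in practice q comes from model uncertainty).
q = 1.0 + 3.0 * loss
q /= q.sum()

idx = rng.choice(n_pool, size=n_labeled, replace=True, p=q)
weights = p[idx] / q[idx]
estimate = (weights * loss[idx]).sum() / weights.sum()
print("importance-weighted error estimate:", round(estimate, 4))
```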
