211

Structure et activité des Archaea planctoniques dans les écosystèmes aquatiques / Structure and activity of planktonic Archaea in aquatic ecosystems

Hugoni, Mylène 31 October 2013
Planktonic Archaea are important players among the microbial plankton and contribute significantly to biogeochemical cycles in aquatic ecosystems, especially the nitrogen cycle, but the structure of the active communities and their seasonal dynamics remain largely unexplored. The discovery that Archaea participate in the nitrogen cycle (Ammonia Oxidizing Archaea, or AOA), and in particular in nitrification, considerably changed the perception of an autotrophic process previously thought to be carried out only by bacteria (Ammonia Oxidizing Bacteria, or AOB). In marine ecosystems, the widespread distribution of AOA suggests that they play a major role in nutrient cycling; however, these observations cannot be generalized to all aquatic ecosystems because of their high diversity and/or a lack of information on some of them. Lacustrine and coastal ecosystems in particular have been studied less, although they are potentially subject to strong anthropogenic impacts, and notable differences in diversity and activity between marine and freshwater communities can be expected given the specific environmental parameters of each ecosystem. The objectives of this thesis were: i) to study archaeal community structure over time and assess the diversity of archaeal communities and AOA in aquatic ecosystems contrasted in terms of anthropogenic inputs and/or salinity gradients (lacustrine, estuarine and coastal ecosystems); ii) to determine their relative contribution to ammonia oxidation compared with that of Ammonia Oxidizing Bacteria (AOB), based on their spatial and temporal distribution and activity; and iii) to explore the environmental parameters that could drive the establishment of AOA and/or AOB communities.
212

Phenotypic evolution and adaptive strategies in marine phytoplankton (Coccolithophores)

Šupraha, Luka January 2016
Coccolithophores are biogeochemically important marine algae that interact with the carbon cycle through photosynthesis (a CO2 sink), calcification (a CO2 source) and burial of carbon into oceanic sediments. The group is considered susceptible to the ongoing climate perturbations, in particular to ocean acidification, temperature increase and nutrient limitation. The aim of this thesis was to investigate the adaptation of coccolithophores to environmental change, with a focus on temperature stress and nutrient limitation. The research was conducted within the framework of three approaches: experiments testing the physiological response of the coccolithophore species Helicosphaera carteri and Coccolithus pelagicus to phosphorus limitation, field studies on coccolithophore life cycles including a method comparison, and an investigation of the phenotypic evolution of the coccolithophore genus Helicosphaera over the past 15 Ma. Experimental results show that the physiology and morphology of large coccolithophores are sensitive to phosphorus limitation, and that adaptation to low-nutrient conditions can lead to a decrease in calcification rates. Field studies have contributed to our understanding of coccolithophore life cycles, revealing complex ecological patterns within the Mediterranean community that appear to be regulated by seasonal, temperature-driven environmental changes. In addition, the high-throughput sequencing (HTS) molecular method was shown to provide a good overall representation of coccolithophore community composition. Finally, the study of Helicosphaera evolution showed that adaptation to decreasing CO2 in higher latitudes involved a decrease in cell and coccolith size, whereas adaptation in tropical ecosystems also included a physiological decrease in calcification rates in response to nutrient limitation. This thesis advanced our understanding of coccolithophore adaptive strategies and will improve predictions of the fate of the group under ongoing climate change.
213

Power allocation and cell association in cellular networks

Ho, Danh Huu 26 August 2019
In this dissertation, power allocation approaches considering path loss, shadowing, and Rayleigh and Nakagami-m fading are proposed. The goal is to improve power consumption, energy efficiency, and throughput efficiency based on user target signal to interference plus noise ratio (SINR) requirements and an outage probability threshold. First, using the moment generating function (MGF), the exact outage probability over Rayleigh and Nakagami-m fading channels is derived. Then upper and lower bounds on the outage probability are derived using the Weierstrass, Bernoulli, and exponential inequalities. Second, the problem of minimizing user power subject to outage probability and user target SINR constraints is considered. The corresponding power allocation problems are solved using Perron-Frobenius theory and geometric programming (GP). A GP problem can be transformed into a nonlinear convex optimization problem using variable substitution and then solved globally and efficiently by interior point methods. Then, power allocation problems for throughput maximization and energy efficiency are proposed. As these problems are in convex fractional programming form, a parametric transformation is used to convert the original problems into subtractive optimization problems which can be solved iteratively. Simulation results are presented which show that the proposed approaches are better than existing schemes in terms of power consumption, throughput, energy efficiency, and outage probability.

Prioritized cell association and power allocation (CAPA) to solve the load balancing issue in heterogeneous networks (HetNets) is also considered in this dissertation. A HetNet is a group of macrocell base stations (MBSs) underlaid by a diverse set of small cell base stations (SBSs) such as microcells, picocells, and femtocells. These networks are considered a good solution to enhance network capacity, improve network coverage, and reduce power consumption. However, HetNets are limited by the disparity of power levels between the tiers. Conventional cell association approaches cause MBS overloading, SBS underutilization, excessive user interference, and wasted resources. Satisfying priority user (PU) requirements while maximizing the number of normal users (NUs) served has not been considered in existing power allocation algorithms. A two-stage CAPA optimization is proposed to address the prioritized cell association and power allocation problem: the first stage is employed by PUs and NUs, and the second stage is employed by BSs. First, the product of the channel access likelihood (CAL) and the channel gain to interference plus noise ratio (GINR) is considered for PU cell association, while network utility is considered for NU cell association. Here, CAL is defined as the reciprocal of the BS load. In CAL and GINR cell association, PUs are associated with the BSs that provide the maximum product of CAL and GINR; this implies that PUs connect to BSs with few users and good channel conditions. NUs are connected to BSs so that the network utility is maximized, which is achieved using an iterative algorithm. Second, prioritized power allocation is used to reduce power consumption and satisfy as many NUs at their target SINRs as possible while ensuring that PU requirements are satisfied. Performance results are presented which show that the proposed schemes provide fair and efficient solutions which reduce power consumption and converge faster than conventional CAPA schemes. / Graduate
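As a point of reference for the fading models named above, the sketch below evaluates the textbook single-link outage probabilities for Rayleigh and Nakagami-m channels, interference-free and therefore far simpler than the exact MGF-based expressions derived in the dissertation; the function names and example numbers are ours, for illustration only.

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def outage_rayleigh(mean_snr, snr_threshold):
    """Outage probability of a single Rayleigh-fading link (no interference):
    the instantaneous SNR is exponentially distributed with mean mean_snr."""
    return 1.0 - np.exp(-snr_threshold / mean_snr)

def outage_nakagami(mean_snr, snr_threshold, m):
    """Outage probability of a single Nakagami-m link: the instantaneous SNR
    is Gamma-distributed, so its CDF is a regularized incomplete gamma."""
    return gammainc(m, m * snr_threshold / mean_snr)

# Example: mean SNR of 10 (linear scale), target SINR of 5 dB
mean_snr, snr_th = 10.0, 10 ** (5 / 10)
print(outage_rayleigh(mean_snr, snr_th))       # ~0.27
print(outage_nakagami(mean_snr, snr_th, m=1))  # m = 1 reduces to Rayleigh
print(outage_nakagami(mean_snr, snr_th, m=2))  # milder fading, lower outage
```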
214

Engaging Esters as Cross-Coupling Electrophiles

Ben Halima, Taoufik 09 August 2019
Cross-coupling reactions, in which a transition metal catalyst facilitates the formation of a new carbon-carbon or carbon-heteroatom bond between two coupling partners, have become one of the most widely used, reliable, and robust families of transformations for the construction of molecules. The Nobel Prize was awarded to pioneers in this field who primarily used aryl iodides, bromides, and triflates as electrophilic coupling partners. The expansion of the reaction scope to non-traditional electrophiles is an ongoing challenge to enable an even greater number of useful products to be made from simple starting materials. The major goal of this thesis research is to improve and expand upon this field by using esters as electrophiles via the activation of the strong C(acyl)−O bond. Esters are particularly robust in comparison to other carboxylic acid derivatives used in cross-coupling reactions, so success in activating such an inert functional group using catalysis has both fundamental and practical value. By discovering new reaction modes of this abundant functional group, synthetic routes to novel or industrially important molecules can be improved. Chapter 1 of this thesis provides a literature overview of what has been accomplished in the field of cross-coupling reactions using carboxylic acid derivatives as electrophilic coupling partners. Chapter 2 discloses the first palladium-catalyzed Suzuki-Miyaura couplings of phenyl esters to produce ketones. The method is efficient and robust, giving good yields of useful products. The reaction is proposed to proceed via oxidative addition into the strong C(acyl)−O bond of the ester. In contrast to previous efforts in this field that use traditional catalysts such as Pd(PPh3)4, the developed reaction requires an electron-rich, bulky N-heterocyclic carbene (NHC) ligand, which facilitates the strong-bond activation. Furthermore, a palladium-catalyzed cross-coupling between aryl esters and anilines is reported, enabling access to diverse amides. The reaction takes place via a similar activation of the C−O bond by oxidative addition with a Pd−NHC complex, which enables the use of relatively non-nucleophilic anilines that otherwise require stoichiometric activation with strong bases to react. Chapter 3 discloses a nickel-catalyzed amide bond formation using unactivated and abundant esters. In this transformation, an accessible nickel catalyst activates diverse aliphatic and aromatic esters to enable direct amide bond formation with amines as nucleophiles. No stoichiometric base, acid, or other activating agent is needed, providing exceptional functional group tolerance and producing only methanol as a by-product. This reaction is of both fundamental and practical importance because it is the first to demonstrate that simple conditions can enable Ni to cleave the C−O bond of an ester to give an oxidative addition product, which can subsequently be coupled with amines. This discovery contrasts with industrially common and wasteful methods that still require stoichiometric activating agents or multistep synthesis. Chapter 4 describes the evaluation of different types of cross-coupling reactions using methyl esters as electrophilic coupling partners. A high-throughput screening technique was applied to this project: combinations of specific ligands, known for their efficiency in activating strong C−O bonds, with literature-based conditions were designed for the chosen transformations. Using this strategy, two promising hits were obtained with the same NHC ligand: a decarbonylative Suzuki-Miyaura coupling and a decarbonylative borylation reaction.
215

Biologia computacional aplicada para a análise de dados em larga escala / Computational biology for high-throughput data analysis

Oliveira, Daniele Yumi Sunaga de 16 April 2013
The large amount of data generated by modern biological technologies poses a major challenge for fields such as bioinformatics. Several programs are available to analyze these data, but they are not always understood well enough to be applied correctly; moreover, some problems require the development of new solutions. In this work, we present analyses of data from two of the main high-throughput data sources: microarrays and sequencing. First, we evaluated whether the statistic of the Rank Products (RP) method is suitable for identifying differentially expressed genes in studies of complex diseases, which are characterized by extensive genetic heterogeneity among individuals with the same phenotype. Second, we developed a tool named hunT to search for target genes of the T transcription factor, an important mesodermal marker that plays a key role in vertebrate development, by identifying binding sites for T in their regulatory sequences. The performance of RP was tested using both simulated data and real data from three studies: non-syndromic cleft lip and palate, autism, and the effect of sleep deprivation in humans. Our results show that RP is an effective solution for detecting genes that are consistently deregulated in only a subgroup of patients, that this ability is maintained even with few samples, but that its performance is impaired when only a few genes are analyzed. We obtained strong biological evidence of the effectiveness of the method in the studies with real data, both by identifying genes and pathways previously associated with the diseases and by validating novel candidate genes using quantitative real-time PCR. The hunT program identified 4,602 mouse genes containing a binding site for the T domain, some of which have already been demonstrated experimentally. We identified 32 of these genes with altered expression in a study evaluating the transcriptome of in vitro differentiation of mouse embryonic stem cells to mesoderm, suggesting that these genes participate in this process under the regulation of T.
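The RP statistic evaluated here is simple to state: within each replicate, rank the genes by fold change, then take the geometric mean of each gene's ranks across replicates, so a gene that is modestly but consistently up-regulated in every replicate still obtains a small value. A minimal sketch of the up-regulation case, with our naming and the permutation-based significance test omitted:

```python
import numpy as np

def rank_products(fold_changes):
    """Rank Products (Breitling et al., 2004): geometric mean across
    replicates of each gene's rank, where rank 1 is the strongest
    up-regulation within a replicate. Small values flag genes that are
    consistently up-regulated."""
    n_genes, n_reps = fold_changes.shape
    ranks = np.empty_like(fold_changes)
    order = np.argsort(-fold_changes, axis=0)  # descending per replicate
    for j in range(n_reps):
        ranks[order[:, j], j] = np.arange(1, n_genes + 1)
    return np.exp(np.log(ranks).mean(axis=1))  # geometric mean of ranks

# Toy example: gene 0 is mildly but consistently up in all three replicates
fc = np.array([[ 1.2,  1.1,  1.3],
               [ 3.0, -0.5,  0.1],
               [-1.0,  2.5, -0.2]])
print(rank_products(fc))  # gene 0 gets the smallest rank product
```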
216

Identification and Characterization of PDE8 Inhibitors Using a Fission Yeast Based High-throughput Screening Platform

Demirbas Cakici, Didem January 2011
Thesis advisor: Charles S. Hoffman / In this thesis, I describe the development of a screening platform for detecting PDE8A inhibitors using the cAMP-dependent glucose sensing pathway of the fission yeast Schizosaccharomyces pombe, which led us to discover several PDE8A-selective inhibitors. In this system, the only PDE of the fission yeast is replaced with mammalian PDE8A1 in strains that have been engineered such that PDE inhibition is required for cell growth. Using this system, I screened 56 compounds obtained from PDE4 and PDE7 high-throughput screens (HTSs) and identified a PDE4-PDE8 dual-specificity inhibitor. Using this as a positive control, I developed a robust high-throughput screen for PDE8A inhibitors and screened 240,267 compounds at the Harvard Medical School ICCB Screening Facility. Approximately 0.2% of the screened compounds were potential PDE8A inhibitors, with 0.03% displaying significant potency. Secondary assays of 367 of the most effective compounds against strains expressing PDE8A (both full length and catalytic domain), PDE4A, and PDE7A or PDE7B led to the selection of structurally diverse compounds for further testing. To profile the selectivity of twenty-eight of these compounds, dose-response assays were conducted using 16 yeast strains that express different PDE isoforms (representing all PDE families with the exception of the PDE6 family). These assays identified compounds with different patterns of inhibition, including structurally distinct PDE8A-specific inhibitors. By evaluating the effects of these compounds on steroid production in mouse Leydig cells, biologically active compounds that elevate steroid production were identified. / Thesis (PhD) — Boston College, 2011. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Biology.
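For the dose-response step, a growth-based readout of this kind is typically fit to a four-parameter logistic (Hill) curve to extract a half-maximal concentration per compound and strain. The sketch below shows such a fit; the data points and starting parameters are invented for illustration and are not taken from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, bottom, top, ec50, n):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** n)

# Hypothetical growth readout at increasing inhibitor doses (uM): in this
# yeast system growth requires PDE inhibition, so the signal rises with dose.
doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
growth = np.array([0.05, 0.06, 0.10, 0.22, 0.45, 0.70, 0.82, 0.85])

params, _ = curve_fit(hill, doses, growth, p0=[0.05, 0.85, 1.0, 1.0])
print("EC50 ~ %.2f uM, Hill slope ~ %.2f" % (params[2], params[3]))
```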
217

High-throughput analysis of contrived cocaine mixtures by direct analysis in real time/single quadrupole mass spectrometry and post-acquisition chemometric analysis

Horsley, Andrew Blair 12 March 2016
Direct Analysis in Real Time (DART) ionization/mass spectrometry allows for the high-throughput analysis of a wide range of materials, including but not limited to solids, liquids, powders, tablets, and plant materials. The ability to detect cocaine was established in a reproducible manner with the use of a DART ionization source (IonSense Inc., Saugus, MA) interfaced to a modified single quadrupole mass spectrometer. Development of a methodology for the detection of cocaine within contrived street-quality drug mixtures involved the optimization of the ionization source, sample introduction mechanism, ion guide, and mass analysis parameters. An analytical method was created that utilized ionized helium carrier gas heated to 300°C and an automated sample introduction apparatus consisting of a Linear Rail Enclosure that holds consumable QuickStrip™ sample cards. Ionized molecules were then fragmented by manipulation of voltage levels within the ion guide to gain more structural information prior to detection by a single quadrupole mass spectrometer. Cocaine was detected by the modified DART/MS analytical platform and gave two peaks within the mass spectrum, at m/z 304 and 182. Optimization of in-source fragmentation by manual adjustment of the skimmer focus voltage allowed for reproducible fragmentation of cocaine and the ability to increase or decrease the amount of fragmentation seen between the two detected peaks. With the use of fragmentation, this analytical platform can be classified as a Category A technique as defined by the Scientific Working Group for the Analysis of Seized Drugs. The robust detection of cocaine was demonstrated for reference samples at concentrations as low as 10 ng/μL (50 ng) with a signal-to-noise ratio greater than ten. Furthermore, the detection of cocaine at 10 ng/μL was demonstrated for multi-component mixtures of up to 14 additional components containing common adulterants and diluents found within street-quality samples. In total, 25 common excipients were tested using the same method parameters as optimized for cocaine analysis. Of these 25 excipients, five were not detected in positive ion mode (one could be detected in negative ion mode). Of the twenty excipients that could be detected by mass spectrometry, two pairs (levamisole/tetramisole and creatine/creatinine) could not be differentiated from each other. No excipient tested had m/z values equivalent to those of cocaine. The effects of various excipients at multiple concentrations on the abundance of the two cocaine peaks were examined. Regardless of excipient amount (up to 10 times more concentrated than cocaine) and the number of components (up to 15 total), the ratio of abundance between the m/z 304 and 182 peaks did not vary by more than 22% relative standard deviation. A match-criteria protocol was developed to allow an analyst to confirm the presence of cocaine within unknown forensic case samples that have previously tested positive for the presumptive identification of cocaine. The identification of cocaine was based on factors such as the signal-to-noise ratio at m/z 304 and 182 and the ratio of abundance between those two peaks, as well as positive and negative controls. This match-criteria protocol was applied to 25 double-blind mock forensic casework samples. Determination of the presence of cocaine within these unknown samples gave an analyst error rate of 0%, with no false positives or false negatives predicted.

To further aid human interpretation and identification of compounds within mixtures, the advanced chemometric software Analyze IQ was utilized. Predictive classification models were developed using a combination of pre-processing steps, principal component analysis, and machine learning techniques. Models were built using 381 unique samples for the purpose of identifying the presence of cocaine within unknown samples. Of all the methods available within the Analyze IQ software, an optimized model using principal component analysis with support vector machine regression and a radial basis function kernel yielded an initial error rate of 0% for the 72 samples tested. Furthermore, 20 of the samples tested against the model were comprised of excipients that were not incorporated into the initial model development process. The inclusion of these samples (10 spiked with cocaine, 10 without cocaine) shows that software based on predictive modeling can provide an accurate, robust, and evolving approach to identifying cocaine within sample compositions that have not previously been tested and stored in a database of known reference samples. Predictive modeling thus has advantages over current mass spectral libraries, which are limited to the identification of pure compounds. To further test the abilities of predictive models, the optimized machine learning models were applied to the 25 double-blind mock forensic casework samples; the predictive modeling error rate matched the human interpretation rate at 0%. Using the DART/MS analytical platform, the 25 mock forensic casework samples, along with positive and negative controls, were analyzed and screened for the presence of cocaine within 30 minutes. Being on the order of 15 to 30 times faster than modern GC/MS and LC/MS methods, this approach would allow more samples to be processed daily and would help reduce the case backlogs that currently plague controlled-substances sections of forensic science laboratories throughout the United States.
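Analyze IQ is proprietary software, but the model family described above, principal component analysis feeding a support vector machine regressor with an RBF kernel whose continuous output is thresholded to a present/absent call, can be sketched with scikit-learn. Everything below is an assumption for illustration: the placeholder spectra are random, and the bin count, component count, and hyperparameters are not values from the thesis.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR

# Placeholder data: rows are mass spectra binned to a common m/z axis,
# y = 1 if the training mixture contained cocaine, else 0.
rng = np.random.default_rng(0)
X = rng.random((381, 500))   # 381 training spectra, 500 m/z bins (illustrative)
y = rng.integers(0, 2, 381)  # cocaine present / absent labels

# PCA for dimensionality reduction, then RBF-kernel SVM regression,
# mirroring the model family the abstract describes
model = make_pipeline(StandardScaler(), PCA(n_components=20),
                      SVR(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X, y)

# Threshold the continuous SVR output to call cocaine present or absent
unknown = rng.random((1, 500))
prediction = model.predict(unknown)
print("cocaine" if prediction[0] > 0.5 else "no cocaine")
```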
218

Opportunistic Routing in Multihop Wireless Networks: Capacity, Energy Efficiency, and Security

Zeng, Kai 24 July 2008
"Opportunistic routing (OR) takes advantages of the spatial diversity and broadcast nature of wireless networks to combat the time-varying links by involving multiple neighboring nodes (forwarding candidates) for each packet relay. This dissertation studies the properties, energy efficiency, capacity, throughput, protocol design and security issues about OR in multihop wireless networks. Firstly, we study geographic opportunistic routing (GOR), a variant of OR which makes use of nodes' location information. We identify and prove three important properties of GOR. The first one is on prioritizing the forwarding candidates according to their geographic advancements to the destination. The second one is on choosing the forwarding candidates based on their advancements and link qualities in order to maximize the expected packet advancement (EPA) with different number of forwarding candidates. The third one is on the concavity of the maximum EPA in respect to the number of forwarding candidates. We further propose a local metric, EPA per unit energy consumption, to tradeoff the routing performance and energy efficiency for GOR. Leveraging the proved properties of GOR, we propose two efficient algorithms to select and prioritize forwarding candidates to maximize the local metric. Secondly, capacity is a fundamental issue in multihop wireless networks. We propose a framework to compute the end-to-end throughput bound or capacity of OR in single/multirate systems given OR strategies (candidate selection and prioritization). Taking into account wireless interference and unique properties of OR, we propose a new method of constructing transmission conflict graphs, and we introduce the concept of concurrent transmission sets to allow the proper formulation of the maximum end-to-end throughput problem as a maximum-flow linear programming problem subject to the transmission conflict constraints. We also propose two OR metrics: expected medium time (EMT) and expected advancement rate (EAR), and the corresponding distributed and local rate and candidate set selection schemes, the Least Medium Time OR (LMTOR) and the Multirate Geographic OR (MGOR). We further extend our framework to compute the capacity of OR in multi-radio multi-channel systems with dynamic OR strategies. We study the necessary and sufficient conditions for the schedulability of a traffic demand vector associated with a transmitter to its forwarding candidates in a concurrent transmission set. We further propose an LP approach and a heuristic algorithm to obtain an opportunistic forwarding strategy scheduling that satisfies a traffic demand vector. Our methodology can be used to calculate the end-to-end throughput bound of OR in multi-radio/channel/rate multihop wireless networks, as well as to study the OR behaviors (such as candidate selection and prioritization) under different network configurations. Thirdly, protocol design of OR in a contention-based medium access environment is an important and challenging issue. In order to avoid duplication, we should ensure only the "best" receiver of each packet to forward it in an efficient way. We investigate the existing candidate coordination schemes and propose a "fast slotted acknowledgment" (FSA) to further improve the performance of OR by using a single ACK to coordinate the forwarding candidates with the help of the channel sensing technique. Furthermore, we study the throughput of GOR in multi-rate and single-rate systems. 
We introduce a framework to analyze the one-hop throughput of GOR, and provide a deeper insight on the trade-off between the benefit (packet advancement, bandwidth, and transmission reliability) and cost (medium time delay) associated with the node collaboration. We propose a local metric named expected one-hop throughput (EOT) to balance the benefit and cost. Finally, packet reception ratio (PRR) has been widely used as an indicator of the link quality in multihop wireless networks. Many routing protocols including OR in wireless networks depend on the PRR information to make routing decision. Providing accurate link quality measurement (LQM) is essential to ensure the right operation of these routing protocols. However, the existing LQM mechanisms are subject to malicious attacks, thus can not guarantee to provide correct link quality information. We analyze the security vulnerabilities in the existing link quality measurement (LQM) mechanisms and propose an efficient broadcast-based secure LQM (SLQM) mechanism, which prevents the malicious attackers from reporting a higher PRR than the actual one. We analyze the security strength and the cost of the proposed mechanism. "
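Under the prioritization just described, where candidates relay in decreasing order of geographic advancement and a candidate forwards only if every higher-priority candidate missed the packet, the EPA of a single transmission has a simple closed form. A short sketch, with our function name and invented example numbers:

```python
def expected_packet_advancement(candidates):
    """Expected packet advancement (EPA) of one opportunistic transmission.
    candidates: list of (advancement, link_delivery_prob) tuples for the
    chosen forwarding candidates. Candidates are prioritized by geographic
    advancement toward the destination, so candidate i only forwards the
    packet if all higher-priority candidates failed to receive it."""
    cands = sorted(candidates, key=lambda c: c[0], reverse=True)
    epa, p_all_missed = 0.0, 1.0
    for adv, p in cands:
        epa += adv * p * p_all_missed  # i-th candidate is the best receiver
        p_all_missed *= (1.0 - p)      # probability all candidates so far missed
    return epa

# Three neighbors: (advancement in meters, link delivery probability)
print(expected_packet_advancement([(120, 0.3), (80, 0.6), (40, 0.9)]))
```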
219

Development of a Biomimetic In Vitro Skeletal Muscle Tissue Model

Forte, Jason Matthew 12 April 2017
Many congenital skeletal muscle disorders, including muscular dystrophies, are caused by genetic mutations that prevent myocytes from binding effectively to the extracellular matrix (ECM). This leads to a chronic, continuous cycle of breakdown and regeneration of muscle tissue, ultimately resulting in loss of muscle function and patient mortality. Such disorders lack effective clinical treatments and challenge researchers to develop new therapeutics. The current drug development process often yields ineffective therapeutics due to the lack of genetic homology between pre-clinical animal models and humans. In addition, current engineered tissue models using human cells fail to properly emulate native muscle morphology and function due to necrotic tissue cores and an abundance of undigested ECM protein. Thus, a more precise benchtop model of 3D engineered human muscle tissue could serve as a better platform for translation to a disease model and could better predict candidate drug efficacy during pre-clinical development. This work presents a methodology for generating a high-content system of contiguous skeletal muscle tissue constructs produced entirely from human cells by using a non-adhesive hydrogel micro-molding technique. Subsequent culture and mold modifications, confirmed by morphological and contractile protein analysis, improve tissue longevity and myocyte maturation. Finally, mechanical strength and contractile force measurements confirmed that these modulations resulted in skeletal muscle microtissues that more closely mimic human muscle tissue. This cell self-assembly technique yielded tissues approximately 150 µm in diameter with cell densities approaching that of native muscle. Modifications including seeding pre-differentiated myoblasts and adding ECM-producing fibroblasts improved both tissue formation efficiency and cell alignment. Further culture modifications, including supplementation of the culture medium with 50 µg/ml ascorbic acid and 100 ng/ml insulin-like growth factor-1, coupled with a mold redesign that allowed tissue to contract passively during maturation while remaining anchored under tension, further improved ECM production, myogenic differentiation, and longevity in long-term culture. These culture improvements were further confirmed by increases in mechanical strength and contractile force production. In conclusion, this approach overcomes the cell density limitations of exogenous ECM-based methods and provides a platform for producing 3D models of human skeletal muscle made entirely from cells. Future work will attempt to translate this methodology for tissue generation and long-term culture into benchtop disease models of skeletal muscle, streamlining pre-clinical benchtop testing to better predict candidate drug efficacy for skeletal muscle diseases and disorders and to elucidate side effects of non-target drugs.
220

Optimisation du débit pour des applications linéaires multi-tâches sur plateformes distribuées incluant des temps de reconfiguration / Throughput optimization of linear multitask workflow applications on distributed platforms including setup times

Coqblin, Mathias 23 January 2015
In this document we tackle scheduling problems of multitask linear workflow applications on distributed platforms. In our particular problem the number of available machines on the platform is lower than the number of stages within the pipeline. The machines are then assumed to be able to perform any task of the application given the appropriate reconfiguration (or setup), the catch being that any reconfiguration is time consuming. The problem we try to solve is to maximize the throughput of such applications, i.e., the mean number of outputs per unit of time, or to minimize their period, i.e., the average time between two outputs. As a result, the problem splits into two subproblems: mapping tasks onto the different machines of the platform (most machines will likely handle several tasks), and finding an optimal schedule within a machine while taking setup times into account. To solve this we introduce buffers, spaces available to each machine to store temporary production results and avoid reconfiguring after each task execution, which may or may not be already allocated to each stage. If the buffers are not already allocated to each task, then a third problem must be solved: distributing the available space among the buffers, since different buffer configurations have a huge impact on the scheduling of a machine. This document presents an exhaustive coverage of the different problems that are associated with the heterogeneity of the application; the problems with homogeneous buffer capacities and setup times are rather simple to solve, but they become far more complex as heterogeneity increases. We study the three main subproblems for each heterogeneity combination, and offer heuristic solutions when an optimal solution cannot reasonably be found.
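In the homogeneous setting the abstract calls simple, the trade-off created by the buffers is easy to quantify: a machine that cycles through its assigned tasks, running each on a batch of items before reconfiguring, amortizes each setup over the batch, so larger buffers push the period toward the bare sum of processing times. A minimal sketch of that model, our formulation rather than one of the thesis's heterogeneous algorithms:

```python
def machine_period(tasks, buffer_capacity):
    """Average time between outputs for one machine under a simple
    round-robin schedule: the machine runs each of its tasks on a batch
    of buffer_capacity items, paying one setup per task per cycle.
    tasks: list of (processing_time, setup_time) pairs assigned to the
    machine. Illustrative homogeneous-buffer model only."""
    cycle = sum(setup + buffer_capacity * proc for proc, setup in tasks)
    return cycle / buffer_capacity  # larger buffers amortize the setups

tasks = [(1.0, 5.0), (2.0, 3.0)]  # (processing, setup) per task
for b in (1, 4, 16):
    # period falls from 11.0 toward 3.0, the sum of processing times
    print(b, machine_period(tasks, b))
```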
