31

Buněčná lokalizace rezistentních proteinů Vga(A)LC a Msr(A) prostřednictvím fluorescenční mikroskopie / Subcellular localization of resistant proteins Vga(A)LC and Msr(A) using fluorescence microscopy

Nguyen Thi Ngoc, Bich January 2018 (has links)
Vga(A)LC and Msr(A) are clinically significant resistance proteins in staphylococci that confer resistance to translation inhibitors. They belong to the ARE subfamily of ABC-F proteins, part of the ABC transporter superfamily. Unlike typical ABC transporters, ABC-F proteins lack the transmembrane domains responsible for transporting substances across the membrane; accordingly, they have no transport function but rather a regulatory or resistance function. Their mechanism of action on the ribosome was described only recently: these proteins displace the antibiotic from the ribosome. Some aspects of their function nevertheless remain unclear, for example the role of the membrane localization of Vga(A), which has been detected in the membrane fraction but not in the ribosomal fraction. In this work, I used fluorescence microscopy to observe the subcellular localization of the Vga(A)LC-mEos2, Vga(A)LC-GFP and Msr(A)-eqFP650 resistance fusion proteins in live cells of S. aureus under different culture conditions. Vga(A)LC-GFP and Msr(A)-eqFP650 were shown to occur in foci near the membrane. Depending on ATPase activity or the presence of an antibiotic, the localization of Msr(A)-eqFP650 in the cell changes from focal to diffuse, presumably on ribosomes, suggesting a...
32

Mokslinių jūros tyrinėjimų atskirose jūros erdvėse reglamentavimas tarptautinėje jūrų ir Lietuvos Respublikos teisėje / Regulation of marine scientific research in separate maritime zones in international law of the sea and the Republic of Lithuania

Kubiliūtė, Aistė 03 June 2014 (has links)
Growing international cooperation in the field of marine environmental protection intensifies the collection of marine data and the creation of more favourable conditions for marine scientific research (MSR). This work analyses MSR regulation in the separate maritime zones and assesses the practice of the Baltic States and the gaps in regulation. The 1982 United Nations Convention on the Law of the Sea (UNCLOS), especially its provisions related to MSR, the recommendations of the Helsinki Commission (HELCOM) that complement the legal regime of MSR, and the national legal instruments of the Baltic States, including Lithuania, were analysed. The work presents the characteristics of MSR, the research definitions used in the Convention, the main principles of MSR regulation, the practice of the Baltic States regarding MSR, and the importance of MSR and its legal regulation at the EU level. The results of the analysis show that considerable gaps exist in MSR regulation in the Baltic States, especially in the permit-issuing procedures.
33

Codes With Locality For Distributed Data Storage

Moorthy, Prakash Narayana 03 1900 (has links) (PDF)
This thesis deals with the problem of code design in the setting of distributed storage systems consisting of multiple storage nodes storing many different data files. A primary goal in such systems is the efficient repair of a failed node. Regenerating codes and codes with locality are two classes of coding schemes recently proposed in the literature to address this goal. While regenerating codes aim to minimize the amount of data download needed to carry out node repair, codes with locality seek to minimize the number of nodes accessed during node repair. Our focus here is on linear codes with locality, a concept originally introduced by Gopalan et al. in the context of recovering from a single node failure. A code-symbol of a linear code C is said to have locality r if it can be recovered via a linear combination of r other code-symbols of C. The code C is said to have (i) information-symbol locality r if all of its message symbols have locality r, and (ii) all-symbol locality r if all the code-symbols have locality r. We make the following three contributions to the area of codes with locality. Firstly, we extend the notion of locality in two directions, so as to permit local recovery even in the presence of multiple node failures. In the first direction, we consider codes with "local error correction", in which a code-symbol is protected by a local error-correcting code having local minimum distance 3, thus allowing local recovery of the code-symbol even in the presence of 2 other code-symbol erasures. In the second direction, we study codes with all-symbol locality that can recover from two erasures via a sequence of two local parity-check computations. When restricted to the case of all-symbol locality and two erasures, the second approach allows, in general, for the design of codes having larger minimum distance than is possible via the first approach.
Under both approaches, by studying the generalized Hamming weights of the dual codes, we derive tight upper bounds on their respective minimum distances. Optimal code constructions are identified under both approaches for a class of code parameters. A few interesting corollaries result from this part of our work. Firstly, we obtain a new upper bound on the minimum distance of concatenated codes; secondly, we show how it is always possible to construct the best possible code (having the largest minimum distance) of a given dimension when the code's parity-check matrix is partially specified. In a third corollary, we obtain a new upper bound on the minimum distance of codes with all-symbol locality in the single-erasure case. Secondly, we introduce the notion of codes with local regeneration, which seek to combine the advantages of both codes with locality and regenerating codes. These are vector-alphabet analogues of codes with local error correction in which the local codes themselves are regenerating codes. An upper bound on the minimum distance is derived when the constituent local codes have a certain uniform rank accumulation (URA) property. This property is possessed by both the minimum storage regenerating (MSR) and the minimum bandwidth regenerating (MBR) codes. We provide several optimal constructions of codes with local regeneration in which the local codes are either MSR or MBR codes. The discussion is also extended to the case of general vector-linear codes with locality, in which the local codes do not necessarily have the URA property. Finally, we evaluate the efficacy of two specific coding solutions, both possessing an inherent double replication of data, in a practical distributed storage setting known as Hadoop. Hadoop is an open-source platform for distributed storage of data in which the primary aim is to perform distributed computation on the stored data via a paradigm known as MapReduce.
Our evaluation shows that while these codes have efficient repair properties, their vector-alphabet nature can negatively affect MapReduce performance if they are implemented under the current Hadoop architecture. Specifically, we see that under the current architecture, the choice of the number of processor cores per node and the Map-task scheduling algorithm play a major role in determining their performance. The performance evaluation is carried out via a combination of simulations and actual experiments on Hadoop clusters. As a remedy, we also propose a modified architecture in which erasure coding is allowed across blocks belonging to different files. Under the modified architecture, the new coding solutions do not suffer from the MapReduce performance loss seen in the original architecture, while retaining all of their desired repair properties.
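The locality notion defined in the abstract above can be made concrete with a toy sketch (my own illustration, not a construction from the thesis): a binary code with all-symbol locality r = 2, in which every symbol sits in a local group closed under a parity check, so any single erased symbol is recovered from the 2 other symbols of its group.

```python
# Toy binary code with all-symbol locality r = 2 (illustrative only):
# message bits are taken in pairs (a, b) and each pair is stored with
# its parity a^b, forming a local group (a, b, a^b).

def encode(msg_bits):
    """Append one parity symbol per pair of message bits."""
    assert len(msg_bits) % 2 == 0
    code = []
    for i in range(0, len(msg_bits), 2):
        a, b = msg_bits[i], msg_bits[i + 1]
        code += [a, b, a ^ b]          # local group of size r + 1 = 3
    return code

def repair(code, erased):
    """Recover the erased position from the r = 2 survivors of its group."""
    g = (erased // 3) * 3              # start index of the local group
    survivors = [code[j] for j in range(g, g + 3) if j != erased]
    return survivors[0] ^ survivors[1]

msg = [1, 0, 1, 1]
cw = encode(msg)                        # → [1, 0, 1, 1, 1, 0]
assert repair(cw, 1) == cw[1]           # a data symbol is locally repairable
assert repair(cw, 5) == cw[5]           # so is a parity symbol
```

This is essentially replication-free local repair at rate 2/3; the thesis's constructions achieve far better distance/locality trade-offs, but the repair pattern is the same: read r survivors, not the whole codeword.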
34

Combinação de técnicas de delineamento de experimentos e elementos finitos com a otimização via simulação Monte Carlo / Combining design of experiments and finite element techniques with optimization via Monte Carlo simulation

Oliveira, José Benedito da Silva January 2019 (has links)
Advisor: Aneirson Francisco da Silva / Abstract: Cold stamping is a sheet-metal forming process that makes it possible, by means of specific tools, to obtain components with good mechanical properties, varied geometries and thicknesses, and different material specifications, at good economic advantage. The multiplicity of these variables creates the need for statistical and numerical-simulation techniques to support their analysis and sound decision-making in the design of the forming tools. This work was developed in the tool design engineering department of a large Brazilian multinational company in the auto-parts sector, with the purpose of reducing stretching and the occurrence of cracks in a 6.8 mm cross member of LNE 380 steel. The proposed methodology obtains the values of the input factors and their influence on the response variable using Design of Experiments (DOE) techniques and Finite Element (FE) simulation. An empirical function is developed from these data by regression, yielding the response variable y (thickness in the critical region) as a function of the influential process factors xi. Optimization via Monte Carlo Simulation (OvSMC) then introduces uncertainty into the coefficients of this empirical function; this is the main contribution of this work, since such uncertainty is what typically arises in practice with experimental problems. Simulating by FE the tool... (Full abstract available via the electronic access below) / Master's
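The OvSMC step described above can be sketched in a few lines: treat the regression coefficients of the empirical function as random rather than fixed, sample them, and choose the factor level whose worst simulated response is best. All coefficient values, uncertainties and factor levels below are invented for illustration; only the overall scheme follows the abstract.

```python
# Hypothetical OvSMC sketch: empirical function y(x) = b0 + b1*x for the
# critical-region thickness, with Monte Carlo sampling of the (assumed)
# coefficient distributions instead of using point estimates.
import random

random.seed(42)

b0, b1 = 6.8, -0.15          # assumed nominal regression coefficients
sd0, sd1 = 0.05, 0.02        # assumed coefficient standard deviations

def thickness(x, n_samples=10_000):
    """Monte Carlo (mean, worst-case) thickness at factor level x."""
    ys = [random.gauss(b0, sd0) + random.gauss(b1, sd1) * x
          for _ in range(n_samples)]
    return sum(ys) / n_samples, min(ys)

# Robust choice: the level whose *worst-case* simulated thickness is largest.
levels = [1.0, 2.0, 3.0]
best = max(levels, key=lambda x: thickness(x)[1])
```

The point of the Monte Carlo layer is visible in `thickness`: two levels with similar mean response can differ sharply in their worst case once coefficient uncertainty is propagated.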
35

Coding Schemes For Distributed Subspace Computation, Distributed Storage And Local Correctability

Vadlamani, Lalitha 02 1900 (has links) (PDF)
In this thesis, three problems are considered and new coding schemes devised for each of them. The first relates to distributed function computation, the second to coding for distributed storage, and the third to locally correctable codes. A common theme of the first two problems is distributed computation. The first problem is motivated by the distributed function computation problem considered by Körner and Marton, where the goal is to compute the XOR of two binary sources at the receiver. It has been shown that linear encoders give better sum rates for some source distributions than the usual Slepian-Wolf scheme. We generalize this setting to the case of more than two sources, with the receiver interested in computing multiple linear combinations of the sources. Consider m random variables, each taking values in a finite field, with a given joint probability distribution. The receiver is interested in the lossless computation of s linear combinations of the m random variables. By considering the set of all linear combinations of the m random variables as a vector space V, this problem can be interpreted as a subspace-computation problem. For this problem, we develop three increasingly refined approaches, all based on linear encoders. The first two, termed the common code approach and the selected subspace approach, use a common matrix to encode all the sources. In the common code approach, the desired subspace W is computed at the receiver, whereas in the selected subspace approach a possibly larger subspace U containing the desired subspace is computed. The larger subspace U that minimizes the sum rate is itself based on a decomposition of the vector space V into a chain of subspaces.
The chain of subspaces is determined by the joint probability distribution of the m random variables and a notion of normalized measure of entropy. The third approach is a nested code approach, in which all the encoding matrices are nested and the same subspace U identified in the selected subspace approach is computed. We characterize the sum rates under all three approaches. The sum rate under the nested code approach is no larger than under either the selected subspace approach or the Slepian-Wolf approach. For a large class of joint distributions and subspaces W, the nested code scheme is shown to improve upon the Slepian-Wolf scheme. Additionally, a class of source distributions and subspaces is identified for which the nested code approach is sum-rate optimal. In the second problem, we consider a distributed storage network, where data is stored across failure-prone nodes in a network. The goal is to store data reliably and efficiently. For a required level of reliability, it is of interest both to minimize storage overhead and to perform node repair efficiently. Conventionally, replication and maximum distance separable (MDS) codes are employed in such systems. Though replication is very efficient in terms of node repair, its storage overhead is high. MDS codes have low storage overhead, but even the repair of a single failed node requires contacting a large number of nodes and downloading all their data. We consider two recently proposed coding solutions that enable efficient node repair in the case of single node failure. The first, called regenerating codes, seeks to minimize the amount of data downloaded for node repair, while codes with locality attempt to minimize the number of helper nodes accessed. We extend these results in two directions.
In the first, we introduce the notion of codes with locality in which the local codes have minimum distance greater than 2 and hence can recover a code symbol locally even in the presence of multiple erasures. These are termed codes with local erasure correction. We say that a code has information locality if there exists a set of message symbols each of which is covered by a local code, and all-symbol locality if all the code symbols are covered by local codes. An upper bound on the minimum distance of codes with information locality is presented, and codes optimal with respect to this bound are constructed. We make a connection between codes with local erasure correction and concatenated codes. The second direction seeks to build codes that combine the advantages of both codes with locality and regenerating codes. These codes, termed here codes with local regeneration, are codes with locality over a vector alphabet in which the local codes themselves are regenerating codes. There are two well-known classes of regenerating codes: minimum storage regenerating (MSR) codes and minimum bandwidth regenerating (MBR) codes. We derive two upper bounds on the minimum distance of vector-alphabet codes with locality, one for the case when the local codes are MSR codes and one for the case when they are MBR codes. We also provide several optimal constructions of both classes of codes, achieving their respective minimum distance bounds with equality. The third problem deals with locally correctable codes. A block code of length n is said to be locally correctable if there exists a randomized algorithm such that any one of the coordinates of the codeword can be recovered by querying at most r coordinates, even in the presence of some fraction of errors. We study the local correctability of linear codes whose duals contain 4-designs.
We also derive a bound relating r and the fraction of errors that can be tolerated when each instance of the randomized algorithm is t-error-correcting instead of a simple parity computation.
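The local-correctability definition above is well illustrated by the standard textbook example of the Hadamard code (my own sketch, not one of the thesis's constructions): any coordinate can be recovered with r = 2 random queries, and a majority vote over repeated probes tolerates a constant fraction of errors.

```python
# Local correction of the Hadamard code with r = 2 queries per probe.
import random

random.seed(1)

def hadamard_encode(m, k):
    """Codeword indexed by all x in {0..2^k-1}; bit at x is <m, x> mod 2."""
    return [bin(m & x).count("1") % 2 for x in range(1 << k)]

def locally_correct(word, a, k, trials=201):
    """Recover coordinate a by majority vote over random 2-query probes."""
    ones = 0
    for _ in range(trials):
        x = random.randrange(1 << k)
        # <m, a^x> + <m, x> = <m, a> (mod 2) when both probed bits are clean,
        # so each probe votes for the value of coordinate a.
        ones += word[a ^ x] ^ word[x]
    return 1 if 2 * ones > trials else 0

k = 4
cw = hadamard_encode(0b1011, k)         # 16-bit codeword
corrupted = cw[:]
corrupted[7] ^= 1                       # flip one of the 16 positions
assert all(locally_correct(corrupted, a, k) == cw[a] for a in range(1 << k))
```

Each probe touches a corrupted position with probability at most 2/16 here, so the majority over 201 probes is wrong with negligible probability; this is the query/error trade-off that the bound in the abstract quantifies for the t-error-correcting generalization.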
36

Optimisation des méthodes d'extraction des composés phénoliques des raisins libanais et de leurs coproduits / Optimization of phenolic compound's extraction methods from Lebanese grapes and their byproducts

Rajeha, Hiba 29 June 2015 (has links)
This doctoral work deals with the optimization of methods for extracting phenolic compounds from viticulture and viniculture by-products, namely vine shoots and grape pomace. Several innovative technologies were applied and compared: accelerated solvent extraction (ASE), high voltage electrical discharges (HVED), ultrasound (US) and pulsed electric fields (PEF). Solid-liquid extractions conducted on vine shoots showed that, among the solvents studied, water is the least effective. Adding β-cyclodextrin to water improves the extraction process but remains less effective than hydroethanolic mixtures. Extraction in alkaline medium gives the highest yield of phenolic compounds. The new extraction technologies make it possible to intensify the extraction of phenolic compounds from vine shoots: effectiveness was lowest with US, intermediate with PEF, and highest with HVED. The more complex an extract's composition, the slower its filterability; membrane ultrafiltration allowed very good purification and concentration of the phenolic compounds. A study of the action mechanisms of HVED identified the phenomena favouring the extraction of phenolic compounds from vine shoots. A mechanical effect of HVED, capable of fragmenting the vine shoots and reducing particle size, is mainly responsible for this improvement; it suggests that an energy-intensive grinding pretreatment prior to HVED can be omitted, considerably reducing the energy input of the overall process. A non-mechanical, electrical effect contributing to the intensification of the extraction process was also demonstrated. The formation of hydrogen peroxide during HVED treatment was quantified, but it did not seem to degrade the phenolic compounds, which retained a high radical-scavenging capacity.

As for the studies conducted on grape pomace, the simultaneous variation of several operating parameters allowed the aqueous and hydroethanolic extraction of phenolic compounds to be optimized by response surface methodology (RSM). Moving from an aqueous to a hydroethanolic medium clearly improved the solid-liquid extraction of phenolic compounds from grape pomace, and the use of ASE further increased the phenolic compound yield by up to three times compared with the optimum obtained with a hydroethanolic solvent.
