31 |
NSIBM : un solveur parallèle de Navier-Stokes avec raffinement automatique basé sur la méthode des frontières immergées / NSIBM : a parallel Navier-Stokes solver with automatic mesh refinement based on immersed boundary method
Durrenberger, Daniel 18 December 2015 (has links)
Cette thèse, intitulée NSIBM : un solveur parallèle de Navier-Stokes avec raffinement automatique basé sur la méthode des frontières immergées, a été effectuée au sein du laboratoire iCube, département de mécanique, à Strasbourg, dans le quartier de l'Orangerie, sous la direction du professeur Yannick Hoarau. L'essentiel du travail effectué consiste en le développement d'un programme capable de résoudre numériquement l'équation de Navier-Stokes qui régit des fluides en mouvement. Une attention particulière a été portée à la production de maillages conformes aux géométries proposées et à leur génération. Les moyens mis en œuvre ici pour gérer l'éternel problème de la finesse du maillage opposée au trop grand nombre de cellules sont multiples : le raffinement, la parallélisation et les frontières immergées. Dans un premier temps, j'ai conçu un générateur de maillage en deux et trois dimensions en y intégrant la possibilité de diviser des cellules, et cela de manière automatique, par des critères géométriques, numériques ou physiques. Il permet également de supprimer des cellules, de manière à ne pas mailler le vide ou les parties solides de la géométrie. Dans un deuxième temps, j'ai rendu ce code parallèle en lui donnant la capacité d'utiliser plusieurs processeurs, afin de calculer plus vite et donc d'utiliser davantage de mailles. Cette étape fait appel à deux technologies : Metis, qui partage équitablement les mailles sur le nombre choisi de processeurs et OpenMPI, qui est l'interface de communication entre ces processeurs. Enfin, la méthode des frontières immergées a été introduite au code pour gérer les bords non verticaux ou horizontaux dans un maillage cartésien, c'est-à-dire formé de rectangles ou de pavés droits. Elle consiste à donner un caractère hybride à une cellule traversée par une frontière par l'introduction d'un terme numérique de forçage simulant la présence de la paroi. Ce travail de développement a ensuite été mis à l'épreuve et validé dans une série de cas tests en deux comme en trois dimensions. Des exemples de maillages complexes générés facilement sont donnés. / This thesis, entitled NSIBM: a parallel Navier-Stokes solver with automatic mesh refinement based on immersed boundary method, has been conducted within the iCube laboratory dedicated to mechanics and located in Strasbourg. It has been supervised by Professor Yannick Hoarau. This work mainly deals with coding a program able to numerically solve the Navier-Stokes equations that govern moving fluids. Particular attention was paid to the production of meshes that suit given geometries and to their generation. The means used here to handle the eternal problem of mesh fineness opposed to too many cells are several: refinement, parallelization and the immersed boundary method. Initially, I designed a two- and three-dimensional mesh generator that includes the possibility of dividing cells automatically, according to geometrical, numerical or physical criteria. It also allows cells to be removed where there is no point in keeping them. Secondly, I parallelized the program by giving it the ability to use multiple processors in order to calculate faster and therefore use bigger meshes. This step uses two available libraries: Metis, which gives an optimal mesh partition, and OpenMPI, which deals with communication between nodes. Finally, the immersed boundary method has been implemented to handle non-vertical or non-horizontal edges in a Cartesian grid.
Its principle is to confer a hybrid status on a cell crossed by a boundary by adding a numerical force term simulating the presence of that boundary. This development work was then tested and validated in a series of test cases in two and three dimensions. Examples of easily generated complex meshes are given.
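The abstract assigns Metis the job of balancing cells across processors and OpenMPI the job of communication between them. As a purely illustrative sketch (not the thesis's code; the toy cell-connectivity arrays below are invented), a balanced partition of a cell graph can be obtained with the METIS 5 C API as follows:

#include <stdio.h>
#include <metis.h>

int main(void)
{
    /* Toy mesh: 4 cells in a row; each cell is a graph vertex and each
     * shared face is an edge, stored in CSR form (xadj/adjncy).        */
    idx_t nvtxs = 4, ncon = 1, nparts = 2, objval;
    idx_t xadj[]   = {0, 1, 3, 5, 6};
    idx_t adjncy[] = {1, 0, 2, 1, 3, 2};
    idx_t part[4];

    /* Ask METIS for a balanced 2-way partition minimizing the edge cut. */
    int status = METIS_PartGraphKway(&nvtxs, &ncon, xadj, adjncy,
                                     NULL, NULL, NULL,   /* no weights */
                                     &nparts, NULL, NULL, NULL,
                                     &objval, part);
    if (status != METIS_OK)
        return 1;

    for (idx_t c = 0; c < nvtxs; ++c)
        printf("cell %d -> MPI rank %d\n", (int)c, (int)part[c]);
    return 0;
}

Each cell is thereby assigned to one rank, which owns that cell's unknowns and exchanges interface data with neighbouring ranks through MPI.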
|
32 |
Implication de la protéine SG1 dans le maintien des épigénomes chez Arabidopsis thaliana / Involvement of SG1 protein in maintaining the epigenomes in Arabidopsis thaliana
Deremetz, Aurélie 03 December 2015 (has links)
La chromatine est le support de l'information génétique et sa structure, ainsi que son activité transcriptionnelle, peuvent être modulées par des modifications épigénétiques. Le maintien des marques répressives telles que la méthylation de l'ADN et des histones, hors du corps des gènes, est nécessaire pour le bon développement de la plante. Chez Arabidopsis thaliana, le mutant sg1 présente des défauts développementaux sévères caractéristiques de mutants affectés dans des mécanismes épigénétiques. Nous avons montré que le phénotype de sg1 est causé par une hyperméthylation CHG et H3K9me2 dans de nombreux gènes. En effet, SG1 contrôle la transcription de l'histone déméthylase IBM1 et les modifications de l'épigénome observées chez sg1 sont dues à une dérégulation de IBM1. Nous avons identifié sept protéines partenaires de SG1, dont certaines se lient aux marques chromatiniennes. Nous avons réalisé un crible suppresseur qui a permis d'identifier FPA, une protéine régulant la polyadénylation de certains transcrits, comme acteur impliqué dans le contrôle des cibles de SG1, dont IBM1. Nos résultats montrent que le complexe SG1 régule la transcription de ses cibles en influençant, par un mécanisme encore inconnu, le choix du site de polyadénylation, en lien avec les marques chromatiniennes présentes aux locus cibles. D'autre part, certaines épimutations induites par la mutation sg1 peuvent être maintenues pendant plusieurs générations. Pour rechercher un lien entre méthylation des gènes et conséquences phénotypiques, nous avons caractérisé des épimutations liées à un défaut de développement de la fleur et identifié un certain nombre de gènes candidats potentiellement responsables du phénotype. Les résultats obtenus au cours de ma thèse ont contribué à préciser le rôle joué par le complexe SG1 et à comprendre le lien entre celui-ci et les marques épigénétiques. / Chromatin is known to contain the genetic information, and its structure and transcriptional state can be regulated by epigenetic modifications. Repressive marks such as DNA and histone methylation need to be kept away from gene bodies to enable the proper development of the plant. In Arabidopsis thaliana, sg1 mutants show a range of severe developmental defects similar to those observed in mutants affected in epigenetic pathways. We have shown that the sg1 mutant phenotype is caused by an increase of CHG and H3K9me2 methylation in many gene bodies. Indeed, SG1 regulates the transcription of the histone demethylase IBM1, and the impairment observed in sg1 mutant epigenomes is caused by IBM1 misregulation. We found seven proteins interacting with SG1, among which some partners are able to bind chromatin marks. Through a suppressor screen we identified FPA, already known to regulate the polyadenylation of some transcripts, as a player involved in the regulation of SG1 targets, including IBM1. Our results show that the SG1 complex regulates the transcription of its target genes by affecting polyadenylation site choice, in a way that remains to be determined, in a chromatin-mark-dependent manner. We also found that some of the sg1-induced epimutations can be maintained through several generations. To investigate the link between gene body methylation and phenotypic consequences, we characterized epimutations related to a defect in floral development and identified some candidate genes potentially responsible for the floral phenotype. Thus, our results contributed to clarifying the role of SG1 and to understanding its connection with epigenetic marks.
|
33 |
Augmenting MPI Programming Process with Cognitive Computing
Kazilas, Panagiotis January 2019 (has links)
Cognitive Computing is a new and quickly advancing technology. In the last decade Cognitive Computing has been used to assist researchers in their endeavors in many different scientific fields such as health and medicine, education, marketing, psychology and financial services. On the other hand, parallel programming is a more complex concept than sequential programming. The additional complexity of parallel programming is introduced by its nature, which requires the implementation of more complex algorithms and introduces additional concepts to developers, namely communication between the processes that execute the parallel program (distributed-memory systems) and their synchronization (shared-memory systems). As a result of this additional complexity, many novice developers are reserved in their attempts to implement parallel programs. The objective of this research project was to investigate whether we can assist the parallel programming process through cognitive computing solutions. In order to achieve our objective, the MPI Assistant, a Q&A system, has been developed and a case study has been carried out to determine our application's efficiency in our attempt to assist parallel programming developers. The case study showed that our MPI Assistant system indeed helped developers reduce the time they spend developing their solutions, but did not improve the quality of the program or its efficiency, as these improvements require features that are out of this research project's scope. However, the case study had a limited number of participants, which may affect our results' reliability. As a next step in our attempt to determine whether cognitive computing technologies are able to assist developers in their parallel programming development, we moved on to investigate whether cognitive solutions can extract better and more complete responses than the responses we created manually for the MPI Assistant. We experimented with two different approaches to the problem: one where we manually created responses for the MPI Assistant, and one where we investigated whether cognitive solutions can automatically extract better and more complete responses. We compared the quality of the latter, automatic responses with the quality of the former, manually created ones.
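For context, a minimal sketch of the kind of MPI boilerplate the abstract alludes to -- process identification, point-to-point communication and explicit synchronization -- is given below. It is illustrative only and is not taken from the thesis or from the MPI Assistant's response base:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes?   */

    if (size >= 2) {
        int token;
        if (rank == 0) {
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", token);
        }
    }

    MPI_Barrier(MPI_COMM_WORLD);            /* simple synchronization */
    MPI_Finalize();
    return 0;
}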
|
34 |
Interconnection between BPM and BI products / Propojení produktů BPM a BI
Zikmund, Martin January 2008 (has links)
Interconnection between the various types of IT systems used in an enterprise is crucial these days. Most companies use many different kinds of applications in the daily running of the enterprise, which leads to the necessity of sharing data across those applications so that all employees can make the right decisions based on correct information. In my diploma thesis I deal with the interconnection of two systems -- Business Process Management (BPM) and Business Intelligence (BI). Both systems belong to the group of top IT systems with a big influence on ongoing business and on right decision making at all levels, from operational to strategic. My paper contains a theoretical as well as a practical part of the solution for interconnecting BI and BPM systems. The first part presents and describes the basic concepts and technologies used in the process of integrating BI and BPM. At the beginning there is a short introduction to BPM, BI and SOA. The following part includes an analysis of three major ways of interconnecting BI and BPM systems. The last part of the theoretical section presents two products: IBM FileNet P8 as a representative of BPM systems and IBM Cognos 8 BI as a representative of BI systems. The second part deals with a practical example of real integration between BI and BPM systems. Its first part gives a simple description of the scenario -- the business case. After that there is a detailed depiction of two different kinds of integration of BI and BPM. An analysis of benefits, advantages and further possibilities concludes the work.
|
35 |
A real time multitasking kernel for the IBM personal computer
Ju, Szewei, 1960- January 1988 (has links)
The purpose of this study is to design a simple, efficient, single-user multitasking kernel for real-time applications on the IBM Personal Computer. Since a real-time application consists of many tasks whose order of execution cannot be predetermined, it is almost impossible to write a monolithic block of code that can meet the response time of all the tasks. With multitasking, each task is assigned a priority based on the urgency of its response time. The kernel uses a priority-based preemptive scheduling strategy to select a new task to run, so the highest-priority task always gets to run when it is ready. The Basic Input/Output System of the PC is rewritten to be reentrant so that it can be shared by multiple tasks.
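As a rough illustration of the scheduling policy described above (a sketch only, not the kernel's actual code; the task structure and names are hypothetical), selecting the highest-priority ready task can look like this in C:

#include <stdio.h>
#include <stddef.h>

enum task_state { READY, BLOCKED, RUNNING };

struct task {
    int             priority;   /* larger value = more urgent response time */
    enum task_state state;
};

/* Return the index of the highest-priority READY task, or -1 if none is
 * ready (the kernel would then idle until an interrupt wakes a task up). */
static int pick_next(const struct task tasks[], size_t ntasks)
{
    int best = -1;
    for (size_t i = 0; i < ntasks; ++i)
        if (tasks[i].state == READY &&
            (best < 0 || tasks[i].priority > tasks[best].priority))
            best = (int)i;
    return best;
}

int main(void)
{
    struct task tasks[] = { {1, READY}, {5, BLOCKED}, {3, READY} };
    printf("next task: %d\n", pick_next(tasks, 3));   /* prints 2 */
    return 0;
}

A preemptive kernel would call such a selection routine whenever a task becomes ready or blocks, switching contexts if the chosen task differs from the one currently running.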
|
36 |
Analýza CSR společnosti T-Mobile Czech Republic, a.s. / Analysis of CSR activities of T-Mobile Czech Republic, a.s.
Ježková, Ivana January 2010 (has links)
The Master's thesis deals with the question of corporate social responsibility. The thesis is divided into two main parts. The theoretical and methodological part explains the main terms related to CSR, introduces important organizations oriented towards CSR and describes standards of corporate social responsibility that can be followed. The first part is also supplemented with interesting data gained in public opinion research on the theme of CSR and its public perception. In the practical part, the CSR strategies of two companies are analyzed: T-Mobile Czech Republic, a.s. and IBM Czech Republic, spol. s r.o. Individual activities are classified into the three basic pillars of corporate social responsibility. The approaches of these two companies are compared, and the conclusion of the thesis recommends steps to make the corporate social responsibility of the two companies more effective.
|
37 |
Empirické porovnání komerčních systémů dobývání znalostí z databází / An Empirical Comparison of Commercial Data Mining Tools
Faruzel, Petr January 2009 (has links)
The presented work "An Empirical Comparison of Commercial Data Mining Tools" deals with data mining tools from the world's leading providers of statistical software. The aim of this work is to compare the commercial packages IBM SPSS Modeler and SAS Enterprise Miner in terms of their specification and utility against a chosen set of evaluation criteria. I pursue this goal through a detailed analysis of selected features of the surveyed software packages as well as through their application to real data. The comparison is founded on 29 component criteria which reflect the user's requirements regarding functionality, usability and flexibility of the system. The pivotal part of the comparative process is based on applying the surveyed data mining tools to data concerning meningoencephalitis, which allows their performance to be evaluated on both small and large data. The quality of the developed data models and the duration of their derivation are reported for six comparable data mining techniques for classification. Small data favor IBM SPSS Modeler: although it produces slightly less accurate models, their development times are much shorter. Increasing the amount of data changes the situation in favor of the competition. SAS Enterprise Miner achieves better results when analyzing large data: considerably more accurate models are accompanied by slightly shorter development times. The functionality of the surveyed data mining tools is comparable, whereas their usability and flexibility differ. IBM SPSS Modeler offers apparently better usability and learnability, while users of SAS Enterprise Miner have a slightly more flexible data mining tool at hand.
|
38 |
A " visible CPU" using a Z80-based microprocessor system via elementary microinstructions.January 1982 (has links)
by Lai Kin-wing. / Typescript (photocopy) / Includes bibliographical references / Thesis (M.Ph.)--Chinese University of Hong Kong, 1982
|
39 |
Využití sociálních sítí v marketingové strategii firmy / The use of the social networks in the marketing strategy
Čapek, Jan January 2011 (has links)
The first part of the thesis deals with the definition of social networks, their historical development and the basic principles of their operation. The second part focuses on the benefits and risks connected with the use of social networks and contains an analysis of trends in marketing practice. The next part is devoted to the most used social networks and describes their functional elements, which companies can use to achieve their marketing objectives. The practical part of the thesis begins with an introduction of the company IBM, its marketing strategy and an evaluation of its activities within social networks. The main objective of this thesis is to propose steps that will help IBM avoid the mistakes which are currently being made. These steps mainly include the layout design and the use of the most frequently used functionalities to allow more effective utilization of social networks in the marketing strategy. Although Facebook is the key element for the application of the proposed steps, attention is also devoted to Twitter and the newly established social network Google+.
|
40 |
Design and implementation of a simulator for a local area network utilizing an IBM PC/AT or compatible computer /
Midgley, Christian G. January 1988 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 1988. / Includes bibliographical references (leaves [111]-[112]).
|