The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Apprentissage pour le contrôle de plateformes parallèles à large échelle / Learning to control large-scale parallel platforms

Reis, Valentin 28 September 2018 (has links)
Providing the computational infrastructure needed to solve the complex problems arising in modern society is a strategic challenge. Organisations usually address this problem by building extreme-scale parallel and distributed platforms. High Performance Computing (HPC) vendors race for more computing power and storage capacity, leading to sophisticated, specific Petascale platforms, soon to be Exascale platforms. These systems are centrally managed using dedicated software solutions called Resource and Job Management Systems (RJMS). A crucial problem addressed by this software layer is the job scheduling problem, where the RJMS chooses when and on which resources computational tasks will be executed. This manuscript provides ways to address this scheduling problem. No two platforms are identical: the infrastructure, the user behavior and the organization's goals all change from one system to the other. We therefore argue that scheduling policies should adapt to the system's behavior. In this manuscript, we provide multiple ways to achieve this adaptivity. Through an experimental approach, we study various tradeoffs between the complexity of the approach, the potential gain, and the risks taken.
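As a rough illustration of the decision an RJMS faces (choosing when and where each task runs), here is a minimal greedy list-scheduling sketch in Python. It is not the adaptive policy developed in the thesis; the job list, runtimes and node count are made up for the example.

```python
import heapq

def greedy_schedule(jobs, num_nodes):
    """Assign each (job_id, runtime) to the node that becomes free earliest.

    A toy illustration of the decision an RJMS makes (when and where to run
    each task), not the adaptive policies studied in the thesis.
    """
    # Min-heap of (time node becomes free, node_id)
    nodes = [(0.0, n) for n in range(num_nodes)]
    heapq.heapify(nodes)
    schedule = []
    for job_id, runtime in jobs:
        free_at, node = heapq.heappop(nodes)      # earliest available node
        schedule.append((job_id, node, free_at))  # job starts when node frees up
        heapq.heappush(nodes, (free_at + runtime, node))
    return schedule

print(greedy_schedule([("j1", 3.0), ("j2", 1.0), ("j3", 2.0)], num_nodes=2))
```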
2

Robin Hoods and Good Samaritans: The Role of Patients in Health Care Distribution

Hardwig, John 01 February 1987 (has links)
There are good reasons - both medical and moral - for wanting to redistribute health care resources, and American hospitals and physicians are already involved in the practice of redistribution. However, such redistribution compromises both patient autonomy and the fiduciary relationship essential to medicine. These important values would be most completely preserved by a system in which patients themselves would be the agents of redistribution, by sharing their medical resources. Consequently, we should see whether patients would be willing to share before we resort to surreptitiously redistributing their resources or denying medical care to some who want and need it. We should change our health care payment systems to allow patients to donate their medical benefits to those in need.
3

Integrated Scheduling and Information Support System for Transit Maintenance Departments

Lopez Alvarado, Paula Andrea 25 March 2005 (has links)
The projected increase in population in the United States, and particularly in the state of Florida, shows a clear need for improvement in mass transportation systems. To provide outstanding service to riders, a well-maintained fleet that ensures safety for riders and other people on the streets is imperative. This research presents an information support system that assists maintenance managers in reviewing and analyzing data and evaluating alternatives in order to make better decisions that maximize operational efficiency in transportation organizations. A system consisting of a mathematical scheduling model that interacts with a forecasting model and repair time standards has been designed to allocate resources in maintenance departments. The output from the mathematical models provides the data required for the database to work. Although the literature presents several studies in the fields of maintenance scheduling and time standards, it stops short of combining these approaches. In this research, mathematical methods are used to forecast the occurrence of repair jobs in order to react to increases in service demand. Furthermore, an integer programming scheduling model that uses data from both the developed time standards and the forecasting model is presented. The information resulting from the models is entered into a database to create the information support system for transit organizations. The database provides the scenarios that facilitate optimizing the allocation of jobs in the facility and determines the best workforce for each required task. Information was obtained from observations at three transit facilities in the Central Florida area; the model developed is tested in their setting using historical data on the maintenance jobs currently performed.
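To make the combination of forecasting and scheduling concrete, a minimal Python sketch follows. It pairs a simple moving-average forecast with a greedy least-loaded assignment; the thesis itself uses an integer programming model and developed repair time standards, and the job names, standard hours and history values below are hypothetical.

```python
from collections import defaultdict

def moving_average_forecast(history, window=3):
    """Forecast next period's repair-job count as a simple moving average.

    Stand-in for the thesis's forecasting model (details not given here)."""
    return sum(history[-window:]) / min(window, len(history))

def assign_jobs(jobs, mechanics):
    """Greedy stand-in for the integer-programming scheduler: give each
    repair job (with its standard repair time) to the least-loaded mechanic."""
    load = defaultdict(float)
    plan = {}
    for job_id, std_hours in sorted(jobs, key=lambda j: -j[1]):  # longest first
        mech = min(mechanics, key=lambda x: load[x])
        plan[job_id] = mech
        load[mech] += std_hours
    return plan, dict(load)

history = [42, 45, 51, 48, 53]                # repair jobs per month (hypothetical)
print(moving_average_forecast(history))        # ~50.7 jobs expected next month
print(assign_jobs([("brake", 4.0), ("engine", 8.0), ("tire", 1.5)], ["A", "B"]))
```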
4

Bioética e direito à saúde: reflexões sobre o compartilhamento democrático das tecnologias médicas avançadas

Rocha, Renata Oliveira da January 2013 (has links)
The fundamental objective of this study is to assess the democratic sharing of the population's access to the health goods and services that result from scientific progress. In post-modernity, the right to health presents dilemmas that the law, faced with the current paradigmatic crisis, cannot answer without adapting to the new reality and to new actors and, evidently, without using new tools. Scientific progress in medicine, while calling for reflection on the limits that should be imposed on scientific experiments with human beings, especially in view of the still unknown dangers of this practice, also makes clear the need to create means so that this progress exists solely for the good of humanity. In reality, however, what we face is the exclusion of the less favored, the "vulnerable," from access to the benefits of advanced medical technologies, notably because of the high costs that accompany them. In such cases, bioethics is the legitimate and pertinent tool, able to offer the theoretical framework of resource-allocation criteria needed to solve the problem in keeping with social justice. The State, in this task, has the duty to implement public policies, with broad popular participation, that include advanced medical technologies in the public health service when they are indispensable for maintaining life and human dignity. The judicialization of the right to health is one consequence of public policies that have not been properly implemented; it constitutes a legitimate practice, delivering justice in the concrete case when the State denies care in situations in which treatment is indispensable for maintaining the patient's life and dignity.
5

Alocação de recursos em saúde: quando a realidade e os direitos fundamentais se chocam

Lemos, Maria Elisa Villas-Bôas Pinheiro de January 2009 (has links)
This study addresses the contrast between the legal-constitutional discourse of health protection and the difficulties in making that right effective. These are challenges that knock ever more frequently at the doors of the judiciary, demanding answers that the law is not always equipped to give, whether because of ideological obstacles (such as resistance to the immediate applicability and effectiveness of social rights) or factual ones (such as material scarcity), with notable consequences for the management of limited resources in the face of unlimited and tendentially growing needs. In this field, the discussion focuses on the mechanisms capable of contributing to a better solution of this impasse, emphasizing in particular the importance of rationalizing judicial enforcement and of understanding and analyzing the ethical processes of allocation, both at the level of macro-allocation of resources, carried out in the sphere of public policy, and at the level of individual micro-allocation. As a logical premise, the study evaluates the evolution and characteristics of human and fundamental rights, within which the right to health is situated, and of public policies in the area, as well as the interpretation given to programmatic norms of social rights in light of the new paradigms of post-positivism. The arguments most commonly raised against the justiciability of the right to health are identified and confronted, analyzing its place within an existential minimum, inseparable from human dignity itself, but one that, on the other hand, finds limits in the reserve of what is really possible, a factor that cannot be disregarded even when urgency is alleged in preliminary injunctions, which calls for the discussion of coherent and balanced guidelines for weighing such claims. To these requirements is added the need for critical knowledge of ethical criteria to guide thinking on the matter, for which a focused study of bioethics and the theory of justice proves pertinent, grounding the evaluation of parameters of reasonableness, equality and equity in the allocation of scarce health resources. The aim is thereby to pursue the greatest possible degree of effectiveness of this fundamental right, respecting and expanding the condition of human dignity, as well as ensuring that the constitutional text is given concrete effect, without disregarding the contingencies of reality.
6

Politiques polyvalentes et efficientes d'allocation de ressources pour les systèmes parallèles / Multi-Purpose Efficient Resource Allocation for Parallel Systems

Mendonca, Fernando 23 May 2017 (has links)
The field of parallel supercomputing has been changing rapidly in recent years. The falling cost of the parts needed to build machines with multicore CPUs and accelerators such as GPUs is of particular interest to us. This scenario has allowed the expansion of large parallel systems, with machines far apart from each other, sometimes even located on different continents. The crucial problem is thus how to use these resources efficiently. In this work, we first consider the efficient allocation of tasks suited to CPUs and GPUs on heterogeneous platforms. To that end, we implement a tool called SWDUAL, which executes the Smith-Waterman algorithm simultaneously on CPUs and GPUs, choosing which tasks are better suited to one or the other. Experiments show that SWDUAL gives better results than similar approaches available in the literature. Second, we study a new online method for scheduling independent tasks of different sizes on processors. We propose a new technique that optimizes the stretch metric by detecting when a significant number of small jobs is waiting while a big job executes. The big job is then redirected to a separate set of machines dedicated to running redirected big jobs. Our experimental results show that this method outperforms the standard policy and in many cases approaches the performance of the preemptive policy, which can be considered a lower bound. Next, we study constraints applied to the backfilling algorithm combined with the FCFS policy: contiguity, which tries to keep jobs close together and reduce fragmentation in the schedule, and basic locality, which aims to keep each job as much as possible inside groups of processors called clusters. Experimental results show that the benefits of these constraints outweigh the possible decrease in the number of backfilled jobs due to reduced fragmentation. Finally, we propose an additional constraint for the backfilling algorithm, called full locality, in which the scheduler models the platform topology as a fat tree and uses this model to assign jobs to regions of the platform where the communication cost between processors is reduced. Our experimental campaign shows that full locality outperforms basic backfilling and all the previously proposed constraints.
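A small Python sketch of two ideas from this abstract: the stretch metric (time in system divided by processing time) and the rule of redirecting a big job once small jobs pile up. The threshold and the numeric values are assumptions for illustration, not parameters from the thesis.

```python
def stretch(release, completion, runtime):
    """Stretch of a job: time spent in the system divided by its processing time."""
    return (completion - release) / runtime

def should_redirect_big_job(waiting_small_jobs, threshold=10):
    """Toy version of the redirection rule sketched above: move a large job to
    the dedicated machines once 'too many' small jobs are queued behind it.
    The threshold is an assumption, not a value from the thesis."""
    return len(waiting_small_jobs) >= threshold

print(stretch(release=0.0, completion=12.0, runtime=3.0))            # 4.0
print(should_redirect_big_job(waiting_small_jobs=list(range(12))))   # True
```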
7

DEPENDABLE CLOUD RESOURCES FOR BIG-DATA BATCH PROCESSING & STREAMING FRAMEWORKS

Bara M Abusalah (10692924) 07 May 2021 (has links)
Anyone examining cloud computing systems over the last few years will observe a trend: new Big Data frameworks emerge every single year. Since Hadoop was developed in 2007, new frameworks have followed, such as Spark, Storm, Heron, Apex, Flink, Samza, Kafka, etc. Each framework is developed to target and achieve certain objectives better than other frameworks do. However, a few common functionalities and aspects are shared between these frameworks. One vital aspect all of them strive for is better reliability and faster recovery time in case of failures. Despite all the advances in making datacenters dependable, failures still happen. This is particularly onerous for long-running "big data" applications, where partial failures can lead to significant losses and lengthy recomputations. It is also crucial for streaming systems, where events are processed and monitored online in real time, and any delay in data delivery causes a major inconvenience to users.

Another observation is that some reliability implementations are redundant between different frameworks. Big data processing frameworks like Hadoop MapReduce include fault tolerance mechanisms, but these are commonly targeted at specific system/failure models and are often redundant between frameworks. Encapsulating these implementations into one shared layer would benefit more than one framework without the burden of re-implementing the same reliability approach in each one.

These observations motivated us to solve the problem by presenting two systems: Guardian and Warden. Guardian is tailored towards batch-processing big data systems, while Warden targets stream-processing systems. Both are robust, RMS-based, generic, multi-framework, flexible, customizable, low-overhead systems that allow users to run their applications with individually configurable fault tolerance granularity and degree, with only minor changes to their implementation.

Most reliability approaches apply one rigid fault tolerance technique targeted at one system at a time. It is more challenging to provide a reliability approach that is pluggable into multiple Big Data frameworks at once and can achieve low overheads comparable with single-framework approaches, yet remains flexible and customizable so users can tailor it to their objectives. Genericity is attained by providing an interface that can be used in different applications from different frameworks, in any part of the application code. Low overhead is achieved by providing faster application finish times with and without failures. Customizability is fulfilled by letting users choose between two fault tolerance guarantees (crash failures / Byzantine failures) and, for streaming systems, combining this with two delivery semantics (exactly once / at most once).

In other words, this thesis proposes the paradigm of dependable resources: big data processing frameworks are typically built on top of resource management systems (RMSs), and providing fault tolerance support at the level of such an RMS yields generic fault tolerance mechanisms, which can be provided with low overhead by leveraging constraints on resources. To the best of our knowledge, such an approach has never been tried on multiple big data batch processing and streaming frameworks before.

We demonstrate the benefits of Guardian by evaluating batch processing frameworks such as Hadoop, Tez, Spark and Pig on a prototype of Guardian running on Amazon EC2, improving completion time by around 68% in the presence of failures while maintaining around 6% overhead. We have also built a prototype of Warden on the Flink and Samza (with Kafka) streaming frameworks. Our evaluations of Warden highlight the effectiveness of our approach both with and without failures, compared to other fault tolerance techniques (such as checkpointing).
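A conceptual Python sketch of the idea of factoring crash-failure handling out of individual frameworks into one shared retry layer. This is not Guardian's or Warden's actual RMS-based mechanism; the task, failure simulation and retry limit are invented for the example.

```python
import random

def run_with_retries(task, max_attempts=3):
    """Minimal illustration of crash-failure tolerance in a shared layer:
    re-run a task on (simulated) failure instead of reimplementing recovery
    inside every framework. Guardian/Warden's real mechanism is RMS-based
    and far richer; this is only a conceptual sketch."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except RuntimeError as err:            # stand-in for a detected crash
            print(f"attempt {attempt} failed: {err}")
    raise RuntimeError("task failed after all retries")

def flaky_task():
    # Simulated work that crashes about half the time.
    if random.random() < 0.5:
        raise RuntimeError("simulated node crash")
    return "result"

print(run_with_retries(flaky_task))
```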
8

Performance and Cost Optimization for Distributed Cloud-native Systems

Ashraf Y Mahgoub (13169517) 28 July 2022 (has links)
First, NoSQL data-stores provide a set of features demanded by high performance computing (HPC) applications, such as scalability, availability and schema flexibility. HPC applications, such as metagenomics and other big data systems, need to store and analyze huge volumes of semi-structured data. Such applications often rely on NoSQL-based datastores, and optimizing these databases is a challenging endeavor, with over 50 configuration parameters in Cassandra alone. As the application executes, database workloads can change rapidly over time (e.g. from read-heavy to write-heavy), and a system tuned for one phase of the workload becomes suboptimal when the workload changes.
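A minimal Python sketch of the underlying idea: classify the current workload phase from observed read/write counts and switch configuration presets accordingly. The preset names and threshold are placeholders, not real Cassandra parameters or values from the thesis.

```python
def classify_workload(reads, writes, threshold=0.7):
    """Label the current phase from observed operation counts."""
    total = reads + writes
    if total == 0:
        return "idle"
    read_ratio = reads / total
    if read_ratio >= threshold:
        return "read-heavy"
    if read_ratio <= 1 - threshold:
        return "write-heavy"
    return "mixed"

# Hypothetical presets: placeholders, not actual Cassandra parameter values.
PRESETS = {
    "read-heavy": {"cache": "large", "compaction": "read-optimized"},
    "write-heavy": {"cache": "small", "compaction": "write-optimized"},
    "mixed": {"cache": "medium", "compaction": "balanced"},
    "idle": {},
}

phase = classify_workload(reads=9200, writes=800)
print(phase, PRESETS[phase])    # read-heavy {...}
```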
9

Pour une allocation équitable des ressources en GMF

Provost, Line 03 1900 (has links)
Objective: To evaluate the "burden" involved in the clinical management of people living with HIV/AIDS, in order to adjust the allocation of resources to family medicine groups (FMG). Methodology: A comparative analysis of the FMG at Clinique médicale l'Actuel, the Montréal FMGs and FMGs throughout Québec, identifying differences in care consumption profiles for the calendar years 2006 to 2008 and in the costs of service use for 2005. Results: In 2008, seventy-eight percent (78%) of the clientele registered with the FMG at Clinique médicale l'Actuel was considered vulnerable, compared to twenty-eight percent (28%) at other Montréal FMGs, a trend observed throughout Québec. The average number of visits per registered vulnerable individual was 7.57 at the Actuel FMG, while the Montréal average was 3.37 and the Québec average 3.47. In 2005, the average cost of a medical visit at the Actuel FMG was $203.93, compared to costs varying from $132.14 to $149.53 for the comparison units. Conclusion: The intensity of resource use at the FMG of the Clinique médicale l'Actuel (number of vulnerable individuals, number of visits and costs) suggests that the clinical management of people living with HIV/AIDS is a much heavier burden than that of an average citizen, or even than that of most other categories of vulnerability. In order to treat all FMGs fairly and equitably, registration should be adjusted to take into account the "burden" of this clientele and to place more value on the case management of people with complex clinical presentations.
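A short worked computation of the relative burden implied by the figures quoted in the abstract above (visit and cost averages taken directly from it):

```python
# Figures taken from the abstract above.
actuel_visits, montreal_visits, quebec_visits = 7.57, 3.37, 3.47
actuel_cost, other_cost_range = 203.93, (132.14, 149.53)

print(f"visit ratio vs Montreal: {actuel_visits / montreal_visits:.2f}x")   # ~2.25x
print(f"visit ratio vs Quebec:   {actuel_visits / quebec_visits:.2f}x")     # ~2.18x
print(f"cost ratio vs range:     {actuel_cost / other_cost_range[1]:.2f}-"
      f"{actuel_cost / other_cost_range[0]:.2f}x")                          # ~1.36-1.54x
```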
10

Bachelor thesis in Business Administration: A qualitative investigation of recruitment freezes; How can they be managed and what are the consequences when they are implemented?

Johnsson, Björn, Ericson, Valentina January 2009 (has links)
No description available.
