101

La tension entre l’accessibilité et l’intelligibilité du droit : le cas du droit administratif et du droit du travail en France / The tension between the accessibility and the intelligibility of the law: the case of administrative law and labour law in France

Hernot, Kévin 04 1900
No description available.
102

Traitement de maquettes numériques pour la préparation de modèles de simulation en conception de produits à l'aide de techniques d'intelligence artificielle / A priori evaluation of simulation model preparation processes using artificial intelligence techniques

Danglade, Florence 07 December 2015
Controlling the well-known triptych of cost, quality and time during the different phases of the Product Development Process (PDP) is an everlasting challenge for industry. Among the numerous issues to be addressed, developing new methods and tools to adapt the models used all along the PDP to the various needs is certainly one of the most challenging and promising areas of improvement. This is particularly true for the adaptation of CAD (Computer-Aided Design) models to CAE (Computer-Aided Engineering) applications. Today, even though methods and tools exist, such a preparation phase still requires deep expert knowledge and a huge amount of time when the Digital Mock-Up (DMU) comprises several hundred thousand parts. Being able to estimate a priori the impact of a DMU preparation process on the simulation results would therefore help identify the best process right from the beginning and ensure better control of preparation processes and costs. This thesis addresses this difficult problem using Artificial Intelligence (AI) techniques that learn and predict behaviour from carefully selected examples. The main idea is to use examples of DMU preparation processes as inputs to learning algorithms in order to configure estimators of process performance; once configured, these estimators can be applied a priori to a new case, predicting the impact of a preparation process without having to perform it. To reach this objective, a method was developed to build a representative database of examples, the most relevant input and output variables were identified, and the learning models and their control parameters were tuned. The performance of a preparation process is assessed by criteria such as preparation costs, analysis costs and the errors induced on the analysis results by the simplification of the CAD models; these criteria are the outputs of the learning algorithms. The first challenge of the proposed approach is to extract and select the most relevant input variables from the original and prepared 3D models, completed with data characterizing the simulation case. Another challenge is to configure learning models able to assess the quality of a process with good accuracy despite the limited number of available examples and data (for a new case, the only known data are those characterizing the original CAD model and the simulation case). In the end, the process-performance estimator helps analysts select CAD model preparation operations; it does not exempt them from running the simulation, but it yields a better-quality prepared model faster. The rules linking the output variables to the input ones are obtained with AI techniques such as neural-network and decision-tree classifiers. The proposed approach is illustrated and validated on industrial examples in the context of CFD (Computational Fluid Dynamics) simulations.
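
To make the learning setup concrete, here is a minimal sketch of the kind of estimator described above: a decision-tree classifier that predicts a quality class for a preparation process from a few input features. The feature names, quality labels and data below are hypothetical illustrations, not values from the thesis.

    # Minimal sketch: a decision-tree estimator of preparation-process quality.
    # Feature names, quality labels and data are hypothetical, not from the thesis.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 200
    X = np.column_stack([
        rng.integers(10, 5000, n).astype(float),  # hypothetical: part count in the DMU
        rng.uniform(0.0, 0.8, n),                 # hypothetical: fraction of details removed
        rng.uniform(1.0, 50.0, n),                # hypothetical: target mesh size (mm)
    ])
    y = rng.integers(0, 3, n)                     # hypothetical quality class: 0=poor .. 2=good

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

In the setting the abstract describes, the labels would come from preparation processes that were actually performed, whose costs and induced analysis errors are therefore known.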
103

Traitement des objets 3D et images par les méthodes numériques sur graphes / 3D object and image processing by numerical methods on graphs

El Sayed, Abdul Rahman 24 October 2018
Skin detection consists in detecting the pixels corresponding to human skin in a colour image. Faces are an important category of stimulus because of the wealth of information they convey: before recognizing a person, it is essential to locate and recognize their face. Most security and biometrics applications rely on the detection of skin regions, for example face detection, filtering of adult 3D content and gesture recognition. In addition, saliency detection on 3D meshes is an important pre-processing phase for many computer-vision applications. 3D segmentation based on salient regions has been widely used in applications such as 3D shape matching, object alignment, 3D point-cloud smoothing, web image search, content-based image indexing, video segmentation, and face detection and recognition. Skin detection is a very difficult task for reasons generally related to the variability of the shape and colour to be detected (different hues from one person to another, arbitrary orientations and sizes, lighting conditions), especially for web images captured under varying lighting. Several families of approaches to skin detection are known: approaches based on geometry and feature extraction, motion-based approaches (background subtraction, difference between two consecutive frames, optical-flow computation) and colour-based approaches. In this thesis, we propose numerical optimization methods for the detection of skin-coloured regions and salient regions on 3D meshes and 3D point clouds using a weighted graph. Building on these methods, we propose 3D face-detection approaches using linear programming and data mining. We also adapted the proposed methods to the problems of 3D point-cloud simplification and 3D object matching. Finally, we show the robustness and efficiency of the proposed methods through various experimental results, as well as their stability with respect to noise.
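
As a minimal illustration of one ingredient mentioned above, the sketch below builds a weighted graph over a 3D point cloud using k nearest neighbours and Gaussian edge weights. The construction and its parameters are illustrative assumptions, not the thesis's exact formulation.

    # Minimal sketch: weighted k-nearest-neighbour graph over a 3D point cloud.
    # Gaussian weights on Euclidean distance; k and sigma are illustrative choices.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    points = rng.uniform(0.0, 1.0, size=(500, 3))   # stand-in for a scanned point cloud

    k, sigma = 8, 0.1
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)        # first neighbour is the point itself

    # Sparse adjacency as (i, j, w) triples with w = exp(-(d / sigma)^2).
    edges = [(i, int(j), float(np.exp(-(d / sigma) ** 2)))
             for i in range(len(points))
             for d, j in zip(dists[i, 1:], idx[i, 1:])]
    print(len(edges), "weighted edges; first edge (i, j, w):", edges[0])

Graph-based methods of the kind the abstract mentions then operate on such weighted adjacency structures, for instance to propagate labels or compute saliency.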
104

Traitement conjoint de la géométrie et de la radiance d'objets 3D numérisés / Joint treatment of geometry and radiance for 3D model digitisation

Vanhoey, Kenneth 18 February 2014
Over the last few decades, the computer graphics and vision communities have contributed methods for digitizing, processing and rendering 3D objects. Demand for these technologies is growing in the cultural sector, notably for archiving, remote study and restoration of cultural heritage artefacts such as statues, caves and buildings. Besides the geometry, the photometry can be digitized at varying levels of complexity: simple textures (2D), light fields (4D), SV-BRDF (6D), etc. In this thesis we present concrete solutions for constructing and processing surface light fields represented by hemispherical radiance functions attached to the surface. First, we tackle the reconstruction phase: defining these functions from photographic acquisitions taken from several viewpoints in real-world, on-site conditions, where the sampling may be unstructured, sparse and noisy. We propose a robust reconstruction process that generates a surface light field whose quality varies predictably, from artefact-free to excellent, depending on the acquisition conditions. Second, a mesh-simplification algorithm reduces the memory and computational cost of these sometimes heavy models; it is guided by a new metric that jointly measures the degradation of geometry and radiance. Finally, we propose a GPU-compatible interpolation algorithm for radiance functions that provides a smooth, natural visualisation with little sensitivity to the spatial density of the functions. This visualisation is particularly beneficial when the model is simplified.
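
A hedged sketch of the kind of joint criterion described above: a candidate edge-collapse cost that mixes a geometric error with a radiance difference through a weight lam. The linear combination and all values are hypothetical stand-ins; the metric developed in the thesis is more elaborate.

    # Hedged sketch: joint geometry + radiance cost for a candidate edge collapse.
    # The linear mix with weight lam is a hypothetical stand-in for the thesis metric.
    import numpy as np

    def collapse_cost(p_removed, p_kept, rad_removed, rad_kept, lam=0.5):
        # p_*   : (3,) vertex positions
        # rad_* : (n,) radiance samples over the hemisphere at each vertex
        geometric_error = np.linalg.norm(p_removed - p_kept)
        radiance_error = np.linalg.norm(rad_removed - rad_kept) / np.sqrt(rad_kept.size)
        return (1.0 - lam) * geometric_error + lam * radiance_error

    rng = np.random.default_rng(1)
    cost = collapse_cost(rng.normal(size=3), rng.normal(size=3),
                         rng.uniform(size=16), rng.uniform(size=16))
    print(f"candidate collapse cost: {cost:.3f}")

A simplifier would collapse the cheapest edges first, so radiance-rich regions survive longer than a purely geometric criterion would allow.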
105

Nouveau concept simplifié d’antennes reconfigurables utilisant les couplages interéléments : Mise en œuvre d’un réseau hybride / New simplified concept of reconfigurable antennas using the inter-element couplings : Implementation of a hybrid network

Oueslati, Aymen 17 December 2015
This thesis investigates a new reconfigurable antenna concept offering a good trade-off between performance, complexity and cost. The concept is called 'hybrid' because it combines the advantages of thinned (lacunary) arrays and parasitic-element antennas. This hybridisation is an alternative to the complexity of conventional antenna arrays, meeting the requirements of a modular, generic and reconfigurable architecture. The main advantage of the concept is an antenna architecture that reduces the complexity of the beam-forming network (BFN), since only a small number of radiating elements need to be fed, while also addressing the matching problem (active VSWR) of the excited elements. This is made possible by the parasitic elements, which manage the spread of couplings across the antenna; the active-VSWR problem is thus solved at the antenna level, avoiding additional components in the BFN. This work details the principle of the hybrid concept and evaluates its potential. The elements required for a proof of concept are then defined, with emphasis on the importance of experimental characterisation. The performance of a reconfigurable hybrid antenna prototype is finally presented in order to validate the developments and conclude on this innovative solution.
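
For context, the sketch below computes the textbook array factor of a thinned linear array with only a few excited elements. It deliberately ignores mutual coupling and parasitic elements, which are the heart of the hybrid concept; the frequency, spacing and thinning pattern are illustrative assumptions.

    # Minimal sketch: textbook array factor of a thinned (lacunary) linear array.
    # Mutual coupling and parasitic elements -- central to the thesis -- are ignored.
    import numpy as np

    c = 3.0e8
    f = 10.0e9                      # illustrative frequency: 10 GHz
    lam = c / f
    k = 2.0 * np.pi / lam

    spacing = 0.5 * lam             # half-wavelength element grid
    excited = np.array([0, 2, 3, 7, 8, 12, 15])   # hypothetical thinning of a 16-slot grid
    positions = excited * spacing

    theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
    steer = 0.0                     # broadside beam, uniform excitation
    phase = 1j * k * np.outer(np.sin(theta) - np.sin(steer), positions)
    af = np.abs(np.exp(phase).sum(axis=1))
    af_db = 20.0 * np.log10(af / af.max() + 1e-12)
    print("main-beam direction (deg):", np.degrees(theta[af_db.argmax()]))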
106

Systèmes d'équations différentielles linéaires singulièrement perturbées et développements asymptotiques combinés / Systems of singularly pertubed linear differential equations and composite asymptotic expansions

Hulek, Charlotte 12 June 2014
In this thesis we prove a uniform simplification theorem for second-order, singularly perturbed ordinary differential equations in a full neighbourhood of a degenerate point, called a turning point. This is an analytic version of a formal result due to Hanson and Russell, which generalizes a well-known theorem of Sibuya. To treat this problem we use the Gevrey composite asymptotic expansions introduced by Fruchard and Schäfke. In a first part we recall the main definitions and theorems of this recent theory and establish three general results, which are then used in the second part of the manuscript to prove the announced analytic reduction theorem. Finally, we consider singularly perturbed ordinary differential equations of order greater than two with a turning point and prove an analytic reduction theorem for them.
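
For orientation (standard background, not a formula from the thesis): the classical model of a turning point is the rescaled Airy equation

    \varepsilon^2 \, y''(x) = x \, y(x), \qquad 0 < \varepsilon \ll 1,

whose coefficient of y vanishes at x = 0, where WKB-type approximations break down. Theorems of Sibuya type concern second-order equations of the form \varepsilon^2 y'' = f(x, \varepsilon)\, y near such points and provide analytic normal forms there.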
107

Power and narrative in project management : lessons learned in recognising the importance of phronesis

Rogers, Michael David January 2014
A component part of modern project management practice is the ‘lessons learned’ activity that is designed to transfer experience and best practice from one project to another, thus improving the practice of project management. The departure point for this thesis is: If we are learning lessons from our experiences in project management, then why are we not better at managing projects? It is widely cited in most project management literature that 50–70% of all projects fail for one reason or another, a figure that has steadfastly refused to improve over many years. My contention is that the current rational approach to understanding lessons learned in project management, one entrenched in the if–then causality of first-order systems thinking where the nature of movement is a ‘corrective repetition of the past in order to realise an optimal future state’ (Stacey 2011: 301), does not reflect the actual everyday experience of organisational life. I see this as an experience of changing priorities, competing initiatives, unrealistic timescales, evaporation of resources, non-rational decisions based on power relations between actors in the organisations we find ourselves in, and every other manner of challenge that presents itself in modern large commercial organisations. I propose a move away from what I see as the current reductionist view of lessons learned, with its emphasis on objective observation, to one of involved subjective understanding. This is an understanding rooted in the particular experience of the individual acting into the social, an act that necessarily changes both the individual and the social. My contention is that a narrative approach to sense making as first-order abstractions in the activity of lessons learned within project management is what is required if we are to better learn from our experiences. This narrative approach, which I have termed ‘thick simplification’, supports learning by enabling the reader of the lessons learned account to situate the ‘lesson learned’ within their own experience, through treating the lessons learned as a potential future understanding. This requires a different view of what is going on between people in organisations – one that challenges the current reliance on detached process and recognises the importance of embedded phronesis, the Aristotelian virtue of practical judgement. It is an approach that necessarily ‘focuses attention directly on patterns of human relating, and asks what kind of power relations, ideology and communication they reflect’ (Stacey 2007: 266).
108

Simplificação e praticabilidade no direito tributário / Simplification and practicality in tax law

Saad, Sergio Sydionir 07 April 2014
Nowadays, due to various factors, it is becoming increasingly impractical for the public administration to guarantee tax collection and enforcement. Simplifying rules, created in the name of practicability, are a compromise solution that guarantees the taxation of all, but without the unreasonable cost of an administrative apparatus that investigates each concrete case individually. Failing to evaluate each case individually when applying tax law may, however, offend the principles of legal certainty, legality, equality and ability to pay, among others. This dissertation analyses solutions that meet this demand for practicability without harming the individual justice guaranteed by constitutional principles. Among the simplification techniques addressed, presumptions and fictions stand out. Simplifying rules, as the object of study, are identified within the universe of tax rules and their extra-fiscal purpose is verified. Practicability is studied and a concept is proposed that identifies it as having the character of a principle and relates it to the principle of efficiency. As a principle, practicability is confronted with legal certainty, legality, equality, ability to pay, tax justice, the constitutional allocation of taxing powers, proportionality, reasonableness, neutrality and the prohibition of confiscation, identifying the legal limits imposed on its use. In the case studies, these limits are examined and compared with the most recent positions in the case law.
109

Design Simplification by Analogical Reasoning

Balazs, Marton E. 09 February 2000
Ever since artifacts have been produced, improving them has been a common human activity. Improving an artifact means modifying it so that it is easier to produce, easier to use, easier to fix, easier to maintain, and so on; in all of these cases, 'easier' means that fewer resources are required for those processes. While 'resources' is a general measure, ultimately expressible as some cost (such as time or money), we believe that at the core of many improvements lies the notion of reduction of complexity – in other words, simplification. This dissertation presents our research on performing design simplification using analogical reasoning. We first define the simplification problem as the problem of reducing the complexity of an artifact from a given point of view, and propose that a point of view from which the complexity of an artifact can be measured consists of a context, an aspect and a measure. Next, we describe an approach to solving simplification problems by goal-directed analogical reasoning, as well as our implementation of this approach. Finally, we present some experimental results obtained with the system. The research presented in this dissertation is significant because it sits at the intersection of a number of important, active research areas: analogical reasoning, functional representation, functional reasoning, simplification, and the general area of AI in design.
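
As a toy illustration of the retrieval step in analogical reasoning, the sketch below selects the most similar past simplification case by cosine similarity over feature vectors. The cases, features and suggestions are invented for illustration; the goal-directed mechanism described above is considerably richer.

    # Toy sketch: retrieval step of analogical reasoning over past design cases.
    # Cases, features and the similarity measure are invented for illustration.
    import numpy as np

    # Each past case: (name, feature vector, simplification that worked).
    # Features could encode e.g. part count, symmetry, function class.
    cases = [
        ("bracket",  np.array([0.9, 0.1, 0.3]), "merge coaxial holes"),
        ("housing",  np.array([0.2, 0.8, 0.5]), "remove internal ribs"),
        ("manifold", np.array([0.4, 0.4, 0.9]), "replace fillets by chamfers"),
    ]

    def retrieve(query, cases):
        # Return the past case most similar to the query (cosine similarity).
        def cos(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        return max(cases, key=lambda c: cos(query, c[1]))

    name, _, suggestion = retrieve(np.array([0.85, 0.2, 0.25]), cases)
    print(f"most analogous case: {name}; candidate simplification: {suggestion}")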
110

Identificação dos desperdícios de um serviço de emergência com a utilização da metodologia Lean Thinking / Identifying waste in an emergency department using the Lean Thinking methodology

Faveri, Fabiano de 27 September 2013
In the pursuit of overall improvement in health care, organizations in different countries have been adopting Lean Thinking. The method is a systematic approach to identifying and eliminating waste in production processes, while focusing on building in quality and delivering to the customer only what the customer regards as value. OBJECTIVE: to identify the different types of waste, and suggestions for improvement, in the emergency department of a private hospital in Caxias do Sul, Rio Grande do Sul, Brazil. METHOD: a case study carried out in the emergency department of a hospital in Caxias do Sul. The sample consisted of 14 professionals working in the emergency department, plus 68 patient-care flows. Data were collected by observing the workflows and by conducting a focus group; quantitative data were analysed with descriptive statistics and qualitative data through content analysis. RESULTS: of the eight types of waste addressed by the Lean Thinking methodology, seven were identified in the department: waiting, unnecessary transport, unnecessary motion, unnecessary inventory, inappropriate processing, defects and waste of human potential. CONCLUSION: applying Lean Thinking in the emergency department made it easier to determine what actually adds value for the service user. The methodology undoubtedly facilitates the visualisation of work processes and patient flows, enabling a critical analysis of every step in the search for waste reduction.
