About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Metrological Reliability of Transcranial Magnetic Stimulation

IAM PALATNIK DE SOUSA 29 September 2016 (has links)
A study of the current status of the metrological reliability of Transcranial Magnetic Stimulation (TMS) is presented. The matter of safety is approached in three major aspects: the safety and performance of TMS devices; the safety regarding exposure limits for patients, staff, and the general public; and the safety of the therapeutic protocol and treatment parameters. Proposals for a harmonized reporting framework and the basis for a possible future TMS safety and performance technical standard are also presented.
The results of simulations and measurements of the magnetic flux densities emitted by two brands of TMS devices are reported, together with the corresponding calculations of safe distances for staff exposure, using the methods promulgated in the guidelines of the International Commission on Non-Ionizing Radiation Protection (ICNIRP). These distances are compared with previous estimates found in the literature. The development of both the simulation routines and the measurement system is described, including possible future applications in other studies and metrological aspects of measurement uncertainty.
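The safe-distance calculation mentioned above can be illustrated with a small sketch. It assumes, purely for illustration, that the stray field of a TMS coil decays with the cube of the distance, as for a magnetic dipole; the coil field value and the exposure limit below are made-up placeholders, not measurements from the thesis or actual ICNIRP reference levels.

```python
# Hedged sketch: estimate a safe operator distance from a point-dipole
# decay model B(r) = B0 * (r0 / r)**3. All numbers are illustrative
# placeholders, not values from the thesis or real ICNIRP limits.

def safe_distance(b0_tesla, r0_m, limit_tesla):
    """Smallest r at which the modeled field drops below the limit."""
    return r0_m * (b0_tesla / limit_tesla) ** (1.0 / 3.0)

# Example: 1.5 T modeled at 5 cm from the coil, 0.1 mT exposure limit.
r_safe = safe_distance(1.5, 0.05, 1e-4)
print(f"safe distance = {r_safe:.2f} m")  # → safe distance = 1.23 m
```

In practice the decay near a real coil is not exactly cubic, which is why the thesis relies on measurements and simulations rather than a closed-form model.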

Design and Implementation of a Novel Single-Phase Switched Reluctance Motor Drive System

Staley, Amanda Martin 22 August 2001 (has links)
Single-phase switched reluctance machines (SRMs) have a special place in emerging high-volume, low-cost, low-performance appliance applications and in high-speed, low-power motor drives in various industrial applications. Single-phase SRMs have a number of drawbacks: low power density, since only 50% of the windings are utilized at any time, and a lack of self-starting capability unless it is built into the machine, most often with permanent magnets or sometimes with special rotor configurations or additional mechanisms. Many of these approaches are expensive or complicate the manufacturing process. To overcome these disadvantages, a method involving interpoles and windings is discussed in this research. In addition, a novel converter topology requiring only a single switch and a single diode is realized. This research tests the concept and feasibility of this new single-phase SRM topology and converter in one-quadrant operation. The converter electronics and a simple minimum-component, minimum-cost analog controller are designed and implemented. The entire system is simulated and evaluated for its advantages and disadvantages. Simple testing without load is performed. The system has many possibilities for development: owing to its lightweight, compact design and efficient, variable high-speed operation, it might find applications in pumps, fans, and drills. / Master of Science

Design and Implementation of a MAC protocol for Wireless Distributed Computing

Bera, Soumava 28 June 2011 (has links)
The idea of wireless distributed computing (WDC) is rapidly gaining recognition owing to its promising potential in military, public-safety, and commercial applications. The concept entails distributing a computationally intensive task assigned to one radio device among its neighboring peer radio devices. The added processing power of multiple radios can be harnessed to significantly reduce the time needed to obtain the results of the original complex task. Since wireless distributed computing depends on a radio device forming a network with its peers, it is imperative to have a medium access control (MAC) protocol for such networks that can schedule channel access by multiple radios, ensure reliable data transfer, incorporate rate adaptation, and handle link failures. This thesis elaborates on the design and implementation of such a MAC protocol for WDC, employed in a practical network of software-configurable radio devices. It also brings to light the design and implementation constraints and challenges faced in this endeavor and puts forward viable solutions. / Master of Science
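The time saving that motivates WDC can be sketched with an idealized model: splitting a task across n radios divides the compute time by n but adds per-peer communication overhead. The rates and overheads below are illustrative placeholders, not figures from the thesis.

```python
# Hedged sketch: idealized completion time for wireless distributed
# computing. A task of `work` units runs on one radio in work/rate
# seconds; splitting it evenly across n radios adds a per-peer
# communication overhead. All numbers are illustrative placeholders.

def completion_time(work, rate, n_radios, comm_overhead):
    """Time to finish when the task is split evenly across n radios."""
    return work / (rate * n_radios) + comm_overhead * n_radios

t1 = completion_time(100.0, 1.0, 1, 0.5)   # single radio
t4 = completion_time(100.0, 1.0, 4, 0.5)   # four cooperating radios
print(t1, t4)  # → 100.5 27.0
```

The model also shows why a MAC protocol matters: past some n, the overhead term dominates and adding peers stops helping.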

Potential-Based Routing In Wireless Sensor Networks

Praveen Kumar, M 03 1900 (has links)
Recent advances in VLSI technology and wireless communication have enabled the development of tiny, low-cost sensor nodes that communicate over short distances. These sensor nodes, which combine sensing, data processing, and wireless communication capabilities, suggest the idea of sensor networks based on the collaborative effort of a large number of sensor nodes. Sensor networks hold promise for numerous applications such as intrusion detection, weather monitoring, security and tactical surveillance, distributed computing, and disaster management. Several new protocols and algorithms have been proposed in the recent past to realize these applications. In this thesis, we consider the problem of routing in Wireless Sensor Networks (WSNs). Routing is a challenging problem in WSNs due to the inherent characteristics that distinguish these networks from others. Several routing algorithms have been proposed for WSNs, each considering a specific network performance objective such as long network lifetime (Chang and Tassiulas, 2004), end-to-end delay guarantees (T. He et al., 2003), and data fusion (Razvan Cristescu et al., 2005). In this thesis, we utilize the potential-based routing paradigm to develop routing algorithms for different performance objectives of interest in WSNs. The basic idea behind the proposed approach is to assign a scalar called the potential to every sensor node in the network. Data is then forwarded to the neighbor with the highest potential. Potentials cause the data to flow along certain paths. By defining potential fields appropriately, one can cause data to flow along preferred paths, so that a given performance objective is achieved. We have demonstrated the usefulness of this approach by considering three performance objectives and defining potentials appropriately in each case.
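The forwarding rule described above is simple enough to sketch. The potential function below (negative hop distance to the sink) is a made-up placeholder; the thesis defines its own potentials for each performance objective.

```python
# Hedged sketch of potential-based forwarding: each node carries a scalar
# potential, and a packet at a node moves to the neighbor with the highest
# potential. The graph and potential values are illustrative placeholders.

def route(neighbors, potential, src, sink, max_hops=100):
    """Follow the highest-potential neighbor until the sink is reached."""
    path = [src]
    node = src
    for _ in range(max_hops):
        if node == sink:
            return path
        node = max(neighbors[node], key=lambda n: potential[n])
        path.append(node)
    raise RuntimeError("no route found")

# Tiny example: potentials decrease with hop distance from the sink D.
neighbors = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
potential = {"A": -2, "B": -1, "C": -1.5, "D": 0}
print(route(neighbors, potential, "A", "D"))  # → ['A', 'B', 'D']
```

With this hop-count potential the scheme reduces to shortest-path routing; the thesis's contribution is choosing potentials that instead optimize lifetime, delivery ratio, or aggregation cost.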
The performance objectives that we have considered are (i) maximizing the time to network partition, (ii) maximizing the packet delivery ratio, and (iii) data fusion. In an operational sensor network, sensor nodes' energy levels gradually deplete, leading eventually to network partition. A natural objective is to route packets in such a way that the time to network partition is maximized. We have developed a potential function for this objective. We analyzed simple network cases and used the insight to develop a potential function applicable to any network. Simulation results showed that considerable improvements in time to network partition can be obtained compared with popular approaches such as maximum-lifetime routing and shortest-hop-count routing. In the next step, we designed a potential function that leads to routes with high packet delivery ratios. We proposed a "channel-state-aware" potential definition for a simple 2-relay network and performed a Markov-chain-based analysis to obtain the packet delivery ratio. Considerable improvement was observed compared with a channel-state-oblivious policy. This motivated us to define a channel-state-dependent potential function for a general network. Simulation results showed that for a relatively slowly changing wireless network, our approach can provide up to 20% better performance than the commonly used shortest-hop-count routing. Finally, we considered the problem of correlated data gathering in sensor networks. The routing approach followed in the literature is to construct a spanning tree rooted at the sink. Every node in the tree aggregates its own data with the data from its children in order to reduce the number of transmitted bits. Because of this, the total energy cost of the data collection task is a function of the underlying tree structure.
Noting that potential-based routing schemes also result in a tree structure, we present a potential definition that results in the minimum-energy-cost tree under some special conditions. Specifically, we consider a scenario in which sensor nodes' measurements are quantized to K values. The task at the sink is to construct a histogram of the measurements of all sensor nodes. Sensor nodes do not send their measurements directly to the sink. Instead, each node constructs a temporary histogram from its own data and the data from its children and forwards it to its parent node in the tree. We present a potential definition that results in the minimum-energy-cost tree under some conditions on the sensor nodes' measurements. We include both the transmission energy cost and the energy cost associated with the aggregation process.
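The in-network histogram aggregation described above amounts to a bottom-up pass over the tree. The tree, the measurements, and K below are illustrative placeholders, not data from the thesis.

```python
# Hedged sketch of in-network histogram aggregation: each node merges its
# own quantized measurement (one of K values) with its children's
# histograms and forwards a single K-bin histogram to its parent, instead
# of forwarding every raw measurement.

def aggregate(tree, measurement, node, k):
    """Return the K-bin histogram for the subtree rooted at `node`."""
    hist = [0] * k
    hist[measurement[node]] += 1          # the node's own reading
    for child in tree.get(node, []):      # merge children's histograms
        child_hist = aggregate(tree, measurement, child, k)
        hist = [a + b for a, b in zip(hist, child_hist)]
    return hist

# Toy network: sink S with children A and B; A has child C. K = 3 bins.
tree = {"S": ["A", "B"], "A": ["C"]}
measurement = {"S": 0, "A": 1, "B": 1, "C": 2}
print(aggregate(tree, measurement, "S", 3))  # → [1, 2, 1]
```

Each link carries at most K counters regardless of subtree size, which is why the energy cost depends on the tree structure rather than on the number of raw readings.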

”We need the specific knowledge, how it is achieved is of less interest”: An Interview Study on Recognition of Prior Learning

Weinsjö, Anna, Lindh, Anders January 2016 (has links)
This study aims to increase knowledge about the ways in which education providers within Swedish higher vocational education work with recognition of prior learning (RPL). The method used is semi-structured qualitative interviews with seven people working at different providers of higher vocational education programmes. Starting from earlier research on RPL and from different perspectives on knowledge, competence, and learning, the main findings are that the informants see prior learning as an opportunity, but that they differ in how they make it visible. An ever-present concern for the informants is that subjectivity affects reliability, which in turn affects the legal certainty of their work. The study highlights the challenge of working with recognition of prior learning: the complexity is considerable, which shows in the interpretation of central concepts, in the constant presence of subjectivity, and in the informants' demand for clear guidelines and generalizable methods.

Experimental Characterization of a Transcranial Magnetic Stimulator

CARLOS VINICIUS NASCIMENTO BARBOSA 20 February 2019 (has links)
Transcranial magnetic stimulation (TMS) is a noninvasive technique used in the treatment of disorders of neuropsychiatric origin, such as mood disorders and auditory hallucinations.
By means of a technique known as neuromodulation, cerebral bioelectrical activity is stimulated by electrical currents induced by the emission of an alternating magnetic field in an anatomical region established beforehand according to defined therapeutic protocols, using electromagnetic coils coupled to the TMS device, available in ring, figure-of-eight, double-cone, and other designs. Ensuring treatment efficacy and the safety of both patients and device operators depends on the spatial distribution of the magnetic flux density emitted in the vicinity of the TMS equipment, which in turn depends on the design of the induction coil employed. A multichannel magnetic measurement system (SMM) was developed to evaluate the distribution of the magnetic flux density generated by magnetic stimulation devices and in their vicinity. With the SMM, magnetic mapping along an XY plane was performed under figure-of-eight and double-cone coils, including a study of their responses when driven at different output intensities and distances. The results make it possible to characterize the decay of the field magnitude, for the figure-of-eight and double-cone coil models, toward the region closest to the position of the operator's hand, and to identify, in a preliminary way, the distribution of the magnetic flux density in more superficial regions and the penetration depth of the stimulus intended for the therapeutic effect.

ADC: Reliable Broadcast Protocols Experimentation and Evaluation Environment

Barcelos, Patricia Pitthan de Araujo January 1996 (has links)
A recent trend in computing systems is to distribute computation among several physical processors. This leads to two types of systems: tightly coupled systems and loosely coupled systems. This work focuses on computing systems classified as loosely coupled, or distributed systems, as they are commonly known. According to [BAB 86], a distributed system can be defined as a set of autonomous processors that share no memory, have no access to global clocks, and communicate only by message exchange. The inherent requirements of distributed systems include reliability and availability. These requirements have led to a growing interest in fault-tolerance techniques, whose goal is to keep the distributed system consistent despite failures. A fault-tolerance technique widely used in distributed systems is reliable broadcast. Reliable broadcast is a software redundancy technique in which a processor disseminates a value to the other processors of a distributed system that is subject to failures [BAB 85]. Because it is a basic communication technique, several fault-tolerance procedures are based on reliable broadcast. This work describes the implementation of a support environment for distributed systems called the Reliable Broadcast Protocols Experimentation and Evaluation Environment (ADC). In this environment, reliable broadcast is used to obtain agreement among all failure-free members of the system. This agreement, known as consensus, is obtained through consensus algorithms, which aim to introduce the degree of reliability required by distributed systems.
The ADC was developed on SUN workstations (SunOS) using the heterogeneous network operating system HetNOS [BAA 93], developed at UFRGS. The environment was implemented based on a study of reliable broadcast protocols [BAR 94]. With the ADC it is possible to simulate the execution of reliable broadcast protocols by applying models proposed for them. Results are extracted from this execution, on which an analysis can be carried out. This analysis is based mainly on the parameters of performance, reliability, and complexity. Both the implementation of the ADC and the analysis of the proposed model were carried out with the support of some of the reliable broadcast protocols available in the literature. The main goal of this environment is experimentation, that is, verifying the relation between theory and practice in distributed systems under a software redundancy technique, reliable broadcast. The environment makes it possible to determine parameters such as the number of broadcast messages exchanged between processes, the number of retransmission messages sent, and the total number of messages emitted during the processing of the model. These parameters yield a consistent analysis of reliable broadcast protocols.
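The message counts that such an environment collects are easy to illustrate with a toy model. The sketch below is a naive flooding-style reliable broadcast over a fault-free, in-memory "network"; the process count and counters are illustrative placeholders, not the ADC's actual protocols.

```python
# Hedged sketch: a toy reliable broadcast by flooding. Every process that
# delivers the value for the first time relays it to all other processes,
# so delivery survives the loss of individual relays. A counter tracks the
# number of messages sent, as an ADC-style analysis would.

def reliable_broadcast(n, source):
    """Flood a value from `source` to n processes; return (delivered, sent)."""
    delivered = {source}
    sent = 0
    queue = [(source, p) for p in range(n) if p != source]
    while queue:
        _, dst = queue.pop()
        sent += 1
        if dst not in delivered:
            delivered.add(dst)           # first delivery: relay onward
            queue.extend((dst, p) for p in range(n) if p != dst)
    return delivered, sent

delivered, sent = reliable_broadcast(4, source=0)
print(sorted(delivered), sent)  # → [0, 1, 2, 3] 12
```

Even this toy shows the quadratic message cost of flooding, n·(n−1) messages for n processes, which is exactly the kind of parameter the environment is meant to measure for real protocols.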

Price comparison sites: Can you trust them?

Castenbäck, Peter, Wennerström, Albin January 2007 (has links)
Problem statement: Is the information on price comparison sites reliable and independent of retailers who may have an interest in presenting biased information? Purpose: The purpose is, by means of an exploratory study using both quantitative and qualitative methods, to clarify whether the information on price comparison sites is reliable and thus also independent of retailers who may have an interest in presenting biased information. Method: Our approach comprised both a quantitative and a qualitative method. The quantitative method was based on two separate exploratory cross-sectional studies, and the qualitative method consisted of seven structured interviews. Our approach resembles an inductive method, since we started from collected primary data and then related the results to general theories. Objects of study: We chose to study the three price comparison sites that consider themselves independent in their information: Compricer, Pricerunner, and Prisjakt. Theory: Based on our study, we chose theories on Internet marketing, e-commerce, the marketing mix, and finally Porter's five forces model. These theories form the basis of our analysis. Conclusions: Our conclusion was that the information on the three price comparison sites is not fully reliable; in total, the price was wrong for about one sixth of the products. We found that Pricerunner is not independent and that Prisjakt is independent, while the study did not clarify Compricer's situation.

Last Planner System – Areas of Application and Implementation Challenges

Porwal, Vishal December 2010 (has links)
In recent years, projects have increasingly used the Last Planner System (LPS) in building construction. However, project managers still struggle to figure out how the LPS can be applied on their specific projects. One main reason for this struggle is that explicit instructions for systematically applying LPS are not available. This thesis offers practitioners and researchers an account of LPS implementation challenges and an indication of how LPS can be applied. It qualitatively aggregates the results of 26 test-case projects of LPS applications to show researchers and practitioners why LPS was applied, what benefits were realized, and what challenges were found during implementation. Senior and mid-level managers in the AEC industry were surveyed to assess the implementation challenges they encountered. The main findings of this analysis are that practitioners have used LPS (1) to make plans more reliable, (2) to obtain a smooth work flow, and (3) to improve productivity. The survey findings imply that improvements can be made in LPS implementation strategies that will facilitate LPS adoption by the industry. The findings of this thesis suggest that further research on the integration of LPS into the work and business processes of project teams is needed to further the widespread use of LPS throughout the building industry.
