  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
411

Identifiering av slöseri inom NCC Folkboende / Identification of waste in production of NCC Folkboende

Prelog, Josefin, Huang, Sandra January 2017 (has links)
The construction industry is booming and more housing is being built now than ever before; competition drives down both production times and production costs for a project. Waste in the construction industry contributes to longer production times and higher production costs, and previous studies indicate that waste can amount to 30-35% of a project's production cost. NCC builds a concept house, NCC Folkboende, a cast-in-place concept intended to give an efficient construction process at a lower production cost. The concept originates from NCC in Umeå, and its production time and production cost are based on Umeå's estimated hours. These hours are difficult for other districts to match, which makes it interesting to follow the present study to find possible causes. Lean Production is a methodology that originated in the automotive industry; by focusing on minimizing waste, time and materials can be saved. Lean identifies eight types of waste: overproduction, waiting, rework, untapped creativity, inventory, overprocessing, motion, and transportation. To identify waste in this study, the Activity Sampling method was used. The method involves making many instantaneous observations to measure how time is used in relation to the work; with enough observations, the result becomes reliable over a longer period of work. A comparison between the original estimate and the hours actually worked shows a certain difference. Furthermore, waste did not have a large enough impact to constitute one of the major causes during the frame erection work. Based on the study, two improvements can be recommended: training for everyone involved before production begins, and an experience bank that registers which supplementary works are performed, in order to build broader knowledge of them.
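The Activity Sampling method described above reduces to estimating, from many instantaneous observations, what share of working time each activity consumes, and how many observations are needed for a given precision. A minimal sketch; the category names, margin of error, and confidence level are illustrative assumptions, not values from the thesis:

```python
import math

def activity_sampling_summary(observations, margin=0.05, z=1.96):
    """Summarize instantaneous work-sampling observations: the share of
    time per activity, a 95% confidence half-width for that share, and
    the number of observations needed to estimate it to +/- margin."""
    total = len(observations)
    summary = {}
    for cat in sorted(set(observations)):
        p = observations.count(cat) / total
        # normal-approximation confidence half-width for a proportion
        half_width = z * math.sqrt(p * (1 - p) / total)
        # classic work-sampling sample-size formula n = z^2 p(1-p)/e^2
        n_required = math.ceil(z ** 2 * p * (1 - p) / margin ** 2)
        summary[cat] = {"share": p, "ci_half_width": half_width,
                        "n_required": n_required}
    return summary

# 200 hypothetical snap observations of a crew: productive work,
# waiting, and internal transport
obs = ["work"] * 120 + ["waiting"] * 50 + ["transport"] * 30
for cat, stats in activity_sampling_summary(obs).items():
    print(cat, round(stats["share"], 2), stats["n_required"])
```

With 60% of observations in "work", pinning that share down to ±5 percentage points at 95% confidence requires 369 observations, which is why the method relies on a large observation count.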
412

Modelagem de dados de resposta ao item sob efeito de speededness / Modeling of Item Response Data under Effect of Speededness

Joelson da Cruz Campos 08 April 2016 (has links)
In tests where a considerable number of examinees do not have enough time to answer all of the items, we observe what is called the speededness effect. Using the unidimensional Item Response Theory (IRT) model on speeded tests can lead to erroneous interpretations, since that model assumes the respondents have enough time to answer every item. In this work, we develop a Bayesian analysis of the three-dimensional IRT model proposed by Wollack and Cohen (2005), considering a dependence structure between the prior distributions of the latent traits, which we model using copulas. We propose an MCMC algorithm for estimating the model and present a simulation study comparing our approach with the analysis of Bazan et al. (2010), in which independent prior distributions were assumed for the latent traits. Finally, we perform a sensitivity analysis of the model and present an application to real data from a subtest of the EGRA, called Nonsense Words, administered in Peru in 2007. In this subtest, students are assessed orally by reading, sequentially, 50 nonsense words in 60 seconds, which characterizes the presence of the speededness effect.
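The copula idea can be illustrated with a small, stdlib-only sketch: a Gaussian copula links one latent trait with a standard-normal margin to a second, positive trait with an exponential margin via the probability-integral transform. The margins and trait names here are purely illustrative and are not Wollack and Cohen's actual parameterization:

```python
import math
import random

def gaussian_copula_pairs(n, rho, seed=0):
    """Draw n dependent latent-trait pairs through a Gaussian copula:
    theta keeps a standard-normal margin, while a second positive trait
    gets an Exp(1) margin. rho is the copula (normal-scale) correlation."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        u2 = 0.5 * (1.0 + math.erf(z2 / math.sqrt(2.0)))  # Phi(z2)
        gamma = -math.log(1.0 - u2)  # inverse CDF of Exp(1)
        pairs.append((z1, gamma))
    return pairs

pairs = gaussian_copula_pairs(20000, rho=-0.6)
mean_t = sum(t for t, _ in pairs) / len(pairs)
mean_g = sum(g for _, g in pairs) / len(pairs)
cov = sum((t - mean_t) * (g - mean_g) for t, g in pairs) / len(pairs)
print(cov < 0)  # dependence survives the change of margins: True
```

The point of the copula is visible in the last line: the negative dependence imposed on the normal scale carries through to the transformed margins, which independent priors cannot capture.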
413

DEVELOPMENT AND APPLICATION OF METHODS FOR STABLE WATER ISOTOPE SAMPLING FROM A LOW GRADIENT CANAL

Unknown Date (has links)
Stable isotopes of water are used as tracers for characterizing surface water/groundwater interactions. Gaps in the sampling protocol for these tracers in low gradient canals limit their use in studies of canal-groundwater exchanges. Several sampling methods were developed to determine the temporal and spatial isotopic variation in a canal. The influence of a flow control gate on isotopic composition, and the sensitivity of isotope mixing calculations to the choice of sampling method, were also evaluated. There was little variability in the isotopic composition of the canal along a cross section perpendicular to flow. Some variation occurred monthly and seasonally. The greatest variability occurred between the upstream and downstream sides of the flow control gates when the gates were closed. Mixing calculations were not sensitive to the choice of sampling method. This study sheds light on isotope sampling methods in canals for canal-groundwater interaction studies. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2019. / FAU Electronic Theses and Dissertations Collection
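The isotope mixing calculations mentioned above are typically two-endmember mass balances on delta values. A minimal sketch; the delta-18O endmember values are hypothetical, not data from this thesis:

```python
def mixing_fraction(delta_sample, delta_a, delta_b):
    """Two-endmember mixing: the fraction of endmember A in a sample,
    f_A = (d_sample - d_B) / (d_A - d_B), with deltas in per mil."""
    return (delta_sample - delta_b) / (delta_a - delta_b)

# hypothetical delta-18O values: canal water vs. groundwater endmembers
f_canal = mixing_fraction(delta_sample=-2.5, delta_a=-1.0, delta_b=-4.0)
print(f_canal)  # 0.5
```

The sensitivity analysis in the study amounts to asking how much f_A moves when delta_sample is taken with different sampling methods.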
414

Novel Approaches for the Efficient Sampling and Detection of Listeria monocytogenes and Brochothrix thermosphacta on Food Contact Surfaces

Clemons, Jessica Anne 01 December 2010 (has links)
The primary step in the microbiological assessment of highly dynamic and complex food processing conditions is environmental sampling. The objectives of this study were to: (1) compare the efficacy of four sampling devices, the Microbial-Vac system (MV), cellulose sponge (SP), polyester swab (SW) and composite tissue (CT), for the recovery of Listeria monocytogenes and Brochothrix thermosphacta on five surfaces, and (2) determine whether there was a significant difference between the recovery of low (10 CFU/900 cm2) and high (100 CFU/900 cm2) L. monocytogenes inoculum levels using the sampling devices in a simulated food processing environment. The surfaces used for this study were stainless steel (SS), polyethylene cutting board (CB), polyurethane conveyor belt (PB), open hinge flat top conveyor belt (FT) and mesh conveyor belt (MB). Food contact surfaces were inoculated with L. monocytogenes to obtain a final cell population of 10 (low) or 100 (high) CFU/900 cm2. An average cell density of 10,000 CFU/25 cm2 was used for inoculating B. thermosphacta on each of the surfaces. Inoculated surfaces were dried and held for two hours at 4°C, then sampled and processed for detection. Because L. monocytogenes is a "zero tolerance" pathogen in ready-to-eat foods, the qualitative analysis included an enrichment step to detect presence/absence in the sample. In comparison, B. thermosphacta was directly plated in order to quantify the recovery capability of each device. Results indicated that, for recovery of 100 CFU/900 cm2 L. monocytogenes, there was no difference among devices on SS, CB or PB surfaces (p>0.05). However, a significant difference was detected at 10 CFU/900 cm2 on SS between MV and CT, 62.97% and 17.34% respectively (p=0.0086). Results for FT indicated that MV was superior to SP and SW (p=0.0004) for detection of both high and low L. monocytogenes levels. There was no difference in the quantitative recovery of B. thermosphacta on PB and SS; however, there was a difference (p=0.0371) among devices on CB, indicating that MV was superior to SP and CT. The swab recovered 3.25 log CFU/25 cm2 from flat top belts, significantly lower (p=0.0259) than the MV and SP devices at 4.29 and 4.12 log CFU/25 cm2 respectively.
415

Multiband LNA Design and RF-Sampling Front-Ends for Flexible Wireless Receivers

Andersson, Stefan January 2006 (has links)
The wireless market is developing very fast today, with a steadily increasing number of users all around the world. The growing number of users and the constant need for higher and higher data rates have led to an increasing number of emerging wireless communication standards. As a result, there is a huge demand for flexible and low-cost radio architectures for portable applications. Moving towards multistandard radio, a high level of integration becomes a necessity and can only be accomplished by new, improved radio architectures and full utilization of technology scaling. Modern nanometer CMOS technologies have the required performance for making high-performance RF circuits together with advanced digital signal processing, which is necessary for the development of low-cost, highly integrated multistandard radios. The ultimate solution for the future is a software-defined radio, where a single hardware platform can be reconfigured by software to handle any standard. Direct analog-to-digital conversion could be used for that purpose, but is not yet feasible due to the extremely tough requirements it would put on the analog-to-digital converter (ADC). Meanwhile, the goal is to create radios that are as flexible as possible with today's technology. The key to success is an RF front-end architecture that is flexible enough without putting too tough requirements on the ADC. One of the key components in such a radio front-end is a multiband multistandard low-noise amplifier (LNA). The LNA must be capable of handling several carrier frequencies within a large bandwidth, so it is not possible to optimize the circuit performance for just one frequency band as can be done for a single-application LNA. Two different circuit topologies suitable for multiband multistandard LNAs have been investigated, implemented, and measured.
These two LNA topologies are: (i) wideband LNAs that cover all the frequency bands of interest, and (ii) narrowband LNAs that are tunable over a wide range of frequency bands. Before analog-to-digital conversion, the RF signal has to be downconverted to a frequency manageable by the analog-to-digital converter. Recently, the concept of direct sampling of the RF signal and discrete-time signal processing before analog-to-digital conversion has drawn a lot of attention. Today's CMOS technologies demonstrate very high speeds, making the RF-sampling technique appealing in the context of multistandard operation at GHz frequencies. In this thesis, the concept of RF sampling and decimation is used to implement a flexible RF front-end, where the RF signal is sampled and downconverted to baseband frequency. A discrete-time switched-capacitor filter is used for filtering and decimation in order to decrease the sample rate from a value close to the carrier frequency to a value suitable for analog-to-digital conversion. To demonstrate the feasibility of this approach, an RF-sampling front-end primarily intended for WLAN has been implemented in a 0.13 μm CMOS process.
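The filtering-and-decimation step can be illustrated with a first-order discrete-time sketch: averaging m consecutive samples implements a moving-average (first-order sinc) response, and keeping one output per block decimates the sample rate by m. This is a behavioural toy model, not the switched-capacitor circuit of the thesis:

```python
def decimating_average(samples, m):
    """First-order sinc (moving-average) filtering combined with
    decimation: average m consecutive samples and emit one output
    per block of m inputs, reducing the sample rate by m."""
    return [sum(samples[i:i + m]) / m
            for i in range(0, len(samples) - m + 1, m)]

x = [1, 3, 5, 7, 2, 4, 6, 8]
print(decimating_average(x, 2))  # [2.0, 6.0, 3.0, 7.0]
```

Cascading such stages is the standard way to bring a near-carrier-rate sample stream down to a rate an ADC can handle.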
416

Science-centric sampling approaches of geo-physical environments for realistic robot navigation

Parker, Lonnie Thomas 20 June 2012 (has links)
The objective of this research effort is to provide a methodology for assessing the effectiveness of sampling techniques used to gather different types of geo-physical information by a robotic agent. We focus on assessing how well unique real-time sampling strategies acquire information that is otherwise too dangerous or costly to collect by human scientists. Traditional sampling strategies and informed search techniques provide the underlying structure for a navigating robotic surveyor whose goal is to collect samples that yield an accurate representation of the measured phenomena under realistic constraints. These sampling strategies are alternative improvements that provide greater information gain than current sampling technology allows. The contributions of this work include the following: 1) A method for estimating spatially distributed phenomena, using a partial sample set of information, that shows improvement over a more traditional estimation method. 2) A method for sampling these phenomena in the form of a navigation scheme for a mobile robotic survey system. 3) A method of ranking and comparing different navigation algorithms relative to one another based on performance (reconstruction error) and resource (distance) constraints. We introduce a specific class of navigation algorithms as example sampling strategies to demonstrate how our methodology allows different robot navigation options to be contrasted and the most practical strategy selected.
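Contribution 3, ranking navigation algorithms under performance and resource constraints, can be sketched in a few lines. The "feasible under the distance budget, then sort by reconstruction error" rule and the example numbers below are illustrative assumptions, not the thesis's actual ranking method:

```python
import math

def rmse(truth, estimate):
    """Reconstruction error between the true field and its estimate."""
    return math.sqrt(sum((t - e) ** 2 for t, e in zip(truth, estimate))
                     / len(truth))

def rank_strategies(runs, distance_budget):
    """Drop runs that exceed the distance budget, then order the rest
    by reconstruction error (lower is better)."""
    feasible = [(name, err) for name, (err, dist) in runs.items()
                if dist <= distance_budget]
    return sorted(feasible, key=lambda pair: pair[1])

print(round(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]), 3))  # 0.577

# hypothetical (rmse, distance travelled) results for three strategies
runs = {"raster": (0.9, 120.0), "informed": (0.4, 95.0),
        "greedy": (0.3, 160.0)}
print(rank_strategies(runs, distance_budget=130.0))
# [('informed', 0.4), ('raster', 0.9)]
```

Note how "greedy" wins on error alone but is eliminated by the resource constraint, which is exactly the trade-off the ranking methodology is meant to expose.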
417

Double Sampling Third Order Elliptic Function Low Pass Filter

Cheng, Mao-Yung 01 September 2011 (has links)
Most discrete-time filters use switched-capacitor structures, but switched-capacitor circuits have a limited sampling rate and high power consumption. In this paper we use a switched-current structure to increase the sampling rate and reduce power consumption, composing a double-sampling third-order low-pass filter with a Class-AB structure. Two integrator types are used: modified backward Euler and modified forward Euler integrators, realized by applying double-sampling technology to the backward Euler and forward Euler integrators. Compared with other circuits, this circuit has a low supply voltage, low power consumption, and a high sampling speed. We employ HSPICE and MATLAB to simulate and design the circuit, which is implemented in a TSMC 0.35 μm process. The supply voltage is 1.8 V, the cut-off frequency is 3.6 MHz, the sampling frequency is 72 MHz, and the power consumption is 1.303 mW.
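The backward and forward Euler integrators named above obey the difference equations y[n] = y[n-1] + T·x[n] and y[n] = y[n-1] + T·x[n-1] respectively. A signal-level behavioural sketch only, not the switched-current circuit implementation:

```python
def backward_euler(x, t=1.0):
    """Backward Euler integrator: y[n] = y[n-1] + T * x[n]."""
    y, out = 0.0, []
    for xn in x:
        y = y + t * xn       # current input enters immediately
        out.append(y)
    return out

def forward_euler(x, t=1.0):
    """Forward Euler integrator: y[n] = y[n-1] + T * x[n-1]."""
    y, prev, out = 0.0, 0.0, []
    for xn in x:
        y = y + t * prev     # previous input enters with one-step delay
        out.append(y)
        prev = xn
    return out

print(backward_euler([1, 1, 1]))  # [1.0, 2.0, 3.0]
print(forward_euler([1, 1, 1]))   # [0.0, 1.0, 2.0]
```

The one-sample difference between the two responses is what distinguishes the z-domain transfer functions the filter is built from.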
418

Reduced Area Discrete-Time Down-Sampling Filter Embedded With Windowed Integration Samplers

Raviprakash, Karthik August 2010 (has links)
Developing a flexible receiver that can be reconfigured to multiple standards is the key to solving the problem of embedding numerous and ever-changing functionalities in mobile handsets. The difficulty of efficiently reconfiguring the analog blocks of a receiver chain for multiple standards calls for moving the ADC as close to the antenna as possible, so that most of the processing is done in the DSP. Different standards are sampled at different frequencies, and programmable anti-aliasing filtering is needed here. Windowed integration samplers have an inherent sinc filtering that creates nulls at multiples of fs. The attenuation provided by the sinc filtering for a bandwidth B is directly proportional to the sampling frequency fs, and, in order to meet the anti-aliasing specifications, a high sampling rate is needed. ADCs operating at such a high oversampling rate dissipate power to no good use. Hence, there is a need for a programmable discrete-time down-sampling circuit with high inherent anti-aliasing capability. Currently existing topologies use large numbers of switches and capacitors, which occupy a lot of die area. A novel technique for reducing die area in a discrete-time sinc2 ↓2 filter for charge sampling is proposed. An SNR comparison of the conventional and proposed topologies reveals that the new technique saves 25 percent of the die area occupied by the sampling capacitors of the filter. The proposed idea is also extended to implement higher down-sampling factors, and a greater percentage of area is saved as the down-sampling factor is increased. The proposed filter also has the topological advantage over previously reported works of allowing designers to use active integration to charge the capacitance, which is critical for obtaining high linearity. A novel technique to implement a discrete-time sinc3 ↓2 filter for windowed integration samplers is also proposed. The topology reduces the idle time of the integration capacitors at the expense of a small complexity overhead in the clock generation, thereby saving 33 percent of the die area on the capacitors compared to the currently existing topology. Circuit-level simulations in a 45 nm CMOS technology show good agreement with the predicted behaviour obtained from the analysis.
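The inherent sinc filtering of a windowed integration sampler, with nulls at multiples of fs, can be checked numerically. This evaluates only the ideal magnitude response of integration over one full period 1/fs; the 100 MHz rate is an arbitrary example, not a figure from the thesis:

```python
import math

def sinc_magnitude(f, fs):
    """Magnitude response of an ideal windowed integration sampler that
    integrates over one full period 1/fs: |sin(pi f/fs) / (pi f/fs)|.
    Aliases at multiples of fs land exactly on the nulls."""
    if f == 0:
        return 1.0
    x = math.pi * f / fs
    return abs(math.sin(x) / x)

fs = 100e6  # arbitrary example sampling rate
print(sinc_magnitude(0.0, fs))           # 1.0 (no attenuation at DC)
print(sinc_magnitude(fs, fs) < 1e-9)     # alias at fs is nulled: True
print(round(sinc_magnitude(0.1 * fs, fs), 3))  # in-band droop at 0.1*fs
```

The abstract's point falls out directly: for a fixed signal bandwidth B, the stopband attenuation near the nulls improves as fs grows, which is why meeting anti-aliasing specs with sinc filtering alone forces a high oversampling rate.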
419

Empirical Evaluations of Different Strategies for Classification with Skewed Class Distribution

Ling, Shih-Shiung 09 August 2004 (has links)
Existing classification analysis techniques (e.g., decision tree induction) generally exhibit satisfactory classification effectiveness when dealing with data with a non-skewed class distribution. However, real-world applications (e.g., churn prediction and fraud detection) often involve highly skewed data in decision outcomes. Such a highly skewed class distribution problem, if not properly addressed, imperils the resulting learning effectiveness. In this study, we empirically evaluate three different approaches, namely the under-sampling, the over-sampling and the multi-classifier committee approaches, for addressing classification with a highly skewed class distribution. Due to its popularity, C4.5 is selected as the underlying classification analysis technique. Based on 10 highly skewed class distribution datasets, our empirical evaluations suggest that the multi-classifier committee approach generally outperformed the under-sampling and over-sampling approaches, using the recall rate, precision rate and F1-measure as the evaluation criteria. Furthermore, for applications aiming at a high recall rate, use of the over-sampling approach is suggested. On the other hand, if the precision rate is the primary concern, adoption of the classification model induced directly from the original datasets is recommended.
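Of the approaches compared above, random under-sampling is the simplest to sketch, together with the F1-measure used as an evaluation criterion. The toy dataset and the precision/recall numbers are illustrative, not results from the study:

```python
import random

def undersample(data, minority_label=1, seed=42):
    """Random under-sampling: keep every minority example and an
    equal-sized random subset of the majority class, so the training
    set handed to the classifier (e.g. C4.5) is balanced."""
    rng = random.Random(seed)
    minority = [d for d in data if d["y"] == minority_label]
    majority = [d for d in data if d["y"] != minority_label]
    return minority + rng.sample(majority, k=len(minority))

def f1(precision, recall):
    """F1-measure: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

data = [{"y": 1}] * 10 + [{"y": 0}] * 90  # 10% minority class
balanced = undersample(data)
print(len(balanced))           # 20
print(round(f1(0.5, 0.8), 3))  # 0.615
```

Over-sampling is the mirror image (replicate minority examples instead of discarding majority ones), and the multi-classifier committee trains several classifiers on different balanced subsets and votes.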
420

Performance verification of personal aerosol sampling devices

Luecke, Steven T. 01 January 2003 (has links)
International standards establish criteria for size-selective aerosol sampling for industrial hygiene, and commercially available aerosol samplers are designed to conform to these criteria. This study uses semi-monodispersed aerosols generated in a vertically aligned test chamber to compare the performance of three commercially available respirable dust samplers, one of which can, in addition, simultaneously sample the thoracic and inhalable dust fractions. Comparison methods are used to calculate a theoretical fractional value based on the appropriate sampling conventions from the total dust concentration and size distribution of the test materials. Performance verification of actual samplers can then be conducted by comparing observed results to the theoretical value. Results show that the design of the test chamber and the use of fused aluminum oxide are appropriate for conducting simplified performance verification tests for inhalable and respirable dust samplers. This study showed that the TSI RespiCon followed the inhalable and respirable conventions closely, but results for the thoracic fraction required the use of a correction factor. The SKC aluminum cyclone tended to undersample the respirable fraction, while the BGI CAS4 cyclone and the TSI RespiCon appear to most closely follow the convention. Improved selection of test material and characterization of particle sizes are recommended to further develop this method of performance verification.
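The size-selective sampling conventions referred to above are defined as curves over aerodynamic diameter. A sketch of the two most commonly used curves, using the EN 481 / ISO 7708 parameterizations as commonly cited (verify against the standards before relying on these expressions):

```python
import math

def inhalable(d):
    """Inhalable convention: fraction of airborne particles of
    aerodynamic diameter d (um) entering the nose and mouth,
    E_I = 0.5 * (1 + exp(-0.06 d)), defined for d <= 100 um."""
    return 0.5 * (1.0 + math.exp(-0.06 * d))

def respirable(d, median=4.25, gsd=1.5):
    """Respirable convention: the inhalable fraction multiplied by the
    complement of a cumulative lognormal with a 50% cut at 4.25 um
    and a geometric standard deviation of 1.5."""
    z = math.log(d / median) / math.log(gsd)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # lognormal CDF
    return inhalable(d) * (1.0 - cdf)

print(round(inhalable(10.0), 3))   # ~0.774
print(round(respirable(4.25), 3))  # half the inhalable value at the cut
```

The "theoretical fractional value" in the study is obtained by integrating curves like these against the measured size distribution of the test dust, giving the target against which each sampler's observed concentration is compared.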
