111

Fatores motivadores e limitadores do alisamento de resultados (income smoothing) pelas empresas listadas na Bovespa / Motivating and limiting factors of income smoothing by companies listed on the Bovespa

Carlin, Diego de Oliveira 31 August 2009
One form of earnings management that is constantly addressed in the literature is income smoothing, a reduction in the variability of reported earnings that makes them more stable over time. The aim of this study is to identify factors capable of influencing this practice, related to market, contractual, and political and regulatory motivations, as well as factors that may limit it. To this end, the occurrence of income smoothing was investigated in a sample of 141 companies listed on the São Paulo Stock Exchange (BOVESPA) over an eight-year analysis period (2000 to 2007). Smoothing firms were identified using the method proposed by Eckel (1981), based on a comparison between the coefficients of variability of the company's gross revenue and net income. Following identification, univariate analysis using the ANOVA test was employed to verify the existence of significant differences in the study variables between the groups of smoothing and non-smoothing firms.
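To make the identification step concrete, here is a minimal sketch of an Eckel-style classification from yearly gross revenue and net income series; the example data, the use of period-to-period changes, and the threshold of 1 are illustrative assumptions rather than the study's exact implementation.

```python
import numpy as np

def coefficient_of_variation(series: np.ndarray) -> float:
    """CV of the period-to-period changes of a series."""
    changes = np.diff(series)
    return np.std(changes, ddof=1) / abs(np.mean(changes))

def eckel_index(net_income: np.ndarray, gross_revenue: np.ndarray) -> float:
    """Eckel-style index: CV of income changes over CV of revenue changes."""
    return coefficient_of_variation(net_income) / coefficient_of_variation(gross_revenue)

# Hypothetical 8-year series (2000-2007) for one firm
revenue = np.array([100.0, 112.0, 95.0, 130.0, 141.0, 128.0, 155.0, 170.0])
income  = np.array([ 10.0,  10.5, 10.2,  11.0,  11.3,  11.1,  11.8,  12.2])

index = eckel_index(income, revenue)
print(f"Eckel index = {index:.2f} -> {'smoother' if index < 1 else 'non-smoother'}")
```

A ratio below 1 means reported income varies less than revenue, which is the usual reading of the index as evidence of smoothing.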
112

Optimization of Goods Incoming Process

Golovatova, Alona, Zhou, Jinshan January 2010
Increasing interest in optimizing the goods incoming process has paralleled the rise of product diversity and of advanced warehouse management based on logistics support systems. Companies are now commonly required to reengineer their business processes, starting with the goods incoming operation, in order to significantly reduce total operating costs and respond quickly to the end consumer. Previous academic research has provided many alternatives for attaining these results; nevertheless, a large gap still exists between theoretical research and practical operations. The purpose of this paper is to bridge this gap at the operational level. We introduce a theoretical framework organized around earlier studies, recent findings, and the established literature on the goods incoming process, flow management, the seven wastes, and logistics support systems. The goods incoming process of the online fashion retailer Nelly.com was mapped from goods receiving, packing and sorting, and warehousing through to data input. As a result, intermittent material and information flows were identified. A sensitivity analysis was then performed to expose the wastes in the process through precise timing of each detailed activity. Once the weaknesses and opportunities within Nelly's goods incoming process had been identified, several heuristic solutions were proposed. In general, flows should be smoothed and accelerated, particularly the material and information flows related to the goods incoming process, and interruptions and miscommunication should be avoided to streamline the whole operation. We also infer that the wastes within every sub-process must be made visible; techniques such as scheduled delivery, cross-docking, goods classification, and improved logistics support systems were therefore proposed to eliminate them. Business process reengineering should be conducted as a next step to reallocate some resources or operations. Finally, we simulated an expected goods incoming process based on Nelly's status quo and the heuristic suggestions, and future research issues are presented at the end to extend the vision to related domains.
113

Změna stavu zásob a její význam v hospodářském cyklu / Net inventory investment and its importance in business cycle

Kučera, Lukáš January 2011
The cyclical component of aggregate net inventory investment in the Czech Republic between the first quarter of 1996 and the fourth quarter of 2010 can be described by the production smoothing model, which holds that it is more advantageous for firms to absorb shocks to the demand for their products through inventories than to adjust their production continuously. This conclusion rests on the finding that the cyclical component of net inventory investment is negatively correlated with the cyclical component of final sales, while at the same time the variability of the cyclical component of final sales is higher than that of production (GDP). Although it cannot be expected that every firm in the economy behaves according to this model, the analysis suggests that adjustment costs are among the most important factors firms consider in their optimization problem.
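A minimal sketch of the kind of check described above, assuming quarterly series are available and that an HP filter is used to extract the cyclical components (the filter choice, placeholder series, and variable names are assumptions for illustration, not necessarily those used in the thesis):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

def cyclical(series: pd.Series, lamb: float = 1600.0) -> pd.Series:
    """Return the cyclical component from an HP filter (lambda=1600 for quarterly data)."""
    cycle, _trend = hpfilter(series, lamb=lamb)
    return cycle

# Hypothetical quarterly series, 1996Q1-2010Q4 (60 observations)
idx = pd.period_range("1996Q1", "2010Q4", freq="Q")
rng = np.random.default_rng(0)
gdp = pd.Series(np.linspace(100, 200, len(idx)) + rng.normal(0, 2, len(idx)), index=idx)
sales = gdp + rng.normal(0, 3, len(idx))   # final sales, more volatile than GDP
inventories = gdp - sales                  # net inventory investment = GDP - final sales

c_inv, c_sales, c_gdp = cyclical(inventories), cyclical(sales), cyclical(gdp)

# Production smoothing implications:
print("corr(inventory cycle, sales cycle):", np.corrcoef(c_inv, c_sales)[0, 1])  # expected < 0
print("var(sales cycle) > var(GDP cycle):", c_sales.var() > c_gdp.var())         # expected True
```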
114

Uma comparação entre modelos de previsão de preços do boi gordo paulista / A comparison of São Paulo's live cattle price forecasting models

Vitor Bianchi Lanzetta 23 February 2018
This study compared the predictive performance of neural network and exponential smoothing forecasting models, using daily prices of live cattle futures (BM&FBOVESPA) from January 2010 to December 2015. The results show that relatively complex models such as neural networks do not necessarily perform better than simpler ones, that the relative ranking of the models changes as the fit measures and/or forecast horizons vary, and that there are advantages to combining several models.
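For illustration only, a minimal comparison of the two model families on a daily price series might look like the sketch below; the placeholder data, lag structure, network size, and error metric are assumptions, not the study's actual setup.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical daily price series (placeholder random walk)
rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(0, 0.5, 1500))
train, test = prices[:-30], prices[-30:]          # hold out the last 30 days

# Exponential smoothing (Holt's linear trend)
es = ExponentialSmoothing(train, trend="add").fit()
es_forecast = es.forecast(len(test))

# Neural network on lagged values (autoregressive framing)
lags = 5
X = np.column_stack([train[i:len(train) - lags + i] for i in range(lags)])
y = train[lags:]
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

history = list(train[-lags:])
nn_forecast = []
for _ in range(len(test)):                        # recursive multi-step forecast
    pred = nn.predict(np.array(history[-lags:]).reshape(1, -1))[0]
    nn_forecast.append(pred)
    history.append(pred)

print("MAE exponential smoothing:", mean_absolute_error(test, es_forecast))
print("MAE neural network       :", mean_absolute_error(test, nn_forecast))
```

On a near random-walk series like this placeholder, the simpler smoothing model often matches or beats the network, which is the kind of outcome the abstract reports.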
115

Filtro de difusão anisotrópica anômala como método de melhoramento de imagens de ressonância magnética nuclear ponderada em difusão / Anisotropic anomalous filter as image enhancement method to nuclear magnetic resonance diffusion weighted imaging

Senra Filho, Antonio Carlos da Silva 25 July 2013
Smoothing methods based on diffusion processes are often used as a preliminary step in image-processing pipelines. Although anomalous diffusion is a well-known physical process, it has not been applied to image smoothing in the way classical diffusion has. This dissertation proposes, implements, and evaluates anomalous diffusion filters, both isotropic and anisotropic, as an enhancement method for diffusion-weighted images (DWI) and diffusion tensor images (DTI) in magnetic resonance imaging (MRI), generalizing isotropic and anisotropic diffusion in image processing with the concept of anomalous diffusion. The two-dimensional diffusion equations were implemented computationally and applied to MRI data to evaluate their potential as enhancement filters, using DTI acquisitions from healthy volunteers as the image set. The study verified that methods based on anomalous diffusion improve DWI and DTI image quality as measured by the signal-to-noise ratio (SNR) and the structural similarity index (SSIM), and optimal parameters were determined for the different images and situations evaluated as a function of the filters' control parameters, in particular the anomalous parameter, called q. The results indicate not only an improvement in the quality of the processed DWI and DTI images, but also a possible reduction in the number of repetitions in the MRI acquisition sequence for a predetermined SNR.
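As context for the kind of filter being generalized, the sketch below implements a classical Perona-Malik anisotropic diffusion step on a 2-D image; the anomalous (q-dependent) variant studied in the dissertation is not reproduced here, and the parameter values are illustrative only.

```python
import numpy as np

def anisotropic_diffusion(img: np.ndarray, n_iter: int = 20,
                          kappa: float = 30.0, dt: float = 0.15) -> np.ndarray:
    """Classical Perona-Malik anisotropic diffusion (edge-preserving smoothing)."""
    out = img.astype(float).copy()
    for _ in range(n_iter):
        # finite-difference gradients toward the four neighbours
        north = np.roll(out, -1, axis=0) - out
        south = np.roll(out,  1, axis=0) - out
        east  = np.roll(out, -1, axis=1) - out
        west  = np.roll(out,  1, axis=1) - out
        # edge-stopping conductance: small where gradients are large
        c = lambda g: np.exp(-(g / kappa) ** 2)
        out += dt * (c(north) * north + c(south) * south +
                     c(east) * east + c(west) * west)
    return out

# usage on a noisy synthetic image
noisy = np.zeros((64, 64)); noisy[20:44, 20:44] = 1.0
noisy += np.random.default_rng(0).normal(0, 0.2, noisy.shape)
smoothed = anisotropic_diffusion(noisy)
```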
116

Etude thermohydraulique expérimentale et numérique d'une boucle d'hélium supercritique en convection forcée soumise à des pulses périodiques de chaleur / Experimental and numerical thermohydraulics study of a forced flow supercritical helium loop under periodical heat loads

Lagier, Benjamin 11 March 2014
Future experimental fusion reactors such as ITER or JT-60SA will produce thermonuclear fusion reactions in plasmas at several million degrees. Confinement of the reaction in the centre of the chamber is achieved by very intense magnetic fields generated by superconducting magnets. These coils have to be cooled down to 4.4 K by a forced flow of supercritical helium. The cyclic operation of the machines produces pulsed thermal loads which will have to be absorbed by refrigerators drawing several megawatts of electrical power. The HELIOS experiment, built at CEA Grenoble, is a scaled-down model of the helium distribution system of the tokamak JT-60SA, composed of a saturated helium bath and a supercritical helium loop. The thesis work presented here explores HELIOS's capabilities for experimental and numerical investigation of three heat-load smoothing strategies: the use of the saturated helium bath as an open thermal buffer, variation of the rotation speed of the cold circulator, and bypassing of the heated section. The EcosimPro model developed here accounts well for the transient coupling between the energy deposition and the pressure and temperature rise in the circulation loop, as well as for the coupling between the circulation loop and the saturated bath, as observed during the experiments. Advanced controls were tested numerically and then validated experimentally to improve the stability of the refrigerator and to optimize the refrigeration power.
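As a toy illustration of the first strategy, the open thermal buffer, the lumped-parameter sketch below shows how a heat-capacity buffer spreads a pulsed load over time before it reaches the refrigerator; the capacities, gains, and pulse shape are invented for illustration and are unrelated to the HELIOS hardware or the EcosimPro model.

```python
import numpy as np

# Toy lumped model: a thermal buffer of heat capacity C absorbs pulsed loads,
# while the refrigerator extracts heat in proportion to the buffer's temperature rise.
dt, t_end = 0.1, 600.0                                    # time step and horizon, s
time = np.arange(0.0, t_end, dt)
q_pulse = np.where((time % 120.0) < 30.0, 500.0, 50.0)    # W, pulsed heat load

C = 2.0e4        # J/K, buffer heat capacity (illustrative value)
k = 25.0         # W/K, refrigerator response gain (illustrative value)
dT = 0.0         # K, buffer temperature rise above nominal
q_refrig = np.zeros_like(time)

for i, q in enumerate(q_pulse):
    q_refrig[i] = k * dT                   # heat extracted by the refrigerator
    dT += (q - q_refrig[i]) * dt / C       # buffer integrates the mismatch

print("peak applied load     :", q_pulse.max(), "W")
print("peak refrigerator load:", round(q_refrig.max(), 1), "W")   # noticeably smoothed
```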
117

MRI-based active shape model of the human proximal femur using fiducial and secondary landmarks and its validation

Zhang, Xiaoliu 01 May 2018
Osteoporosis, associated with reduced bone mineral density and structural degeneration, greatly increases the risk of fragility fracture. Magnetic resonance imaging (MRI) has been applied to central skeletal sites including the proximal femur due to its non-ionizing radiation. A major challenge of volumetric bone imaging of the hip is the selection of regions of interest (ROIs) for computation of regional bone measurements. To address this issue, an MRI-based active shape model (ASM) of the human proximal femur is applied to automatically generate ROIs. The challenge in developing the ASM for a complex three-dimensional (3-D) shape lies in determining a large number of anatomically consistent landmarks for a set of training shapes. This thesis proposes a new method of generating the proximal femur ASM, where two types of landmarks, namely fiducial and secondary landmarks, are used. The method consists of—(1) segmentation of the proximal femur bone volume, (2) smoothing the bone surface, (3) drawing fiducial landmark lines on training shapes, (4) drawing secondary landmarks on a reference shape, (5) landmark mesh generation on the reference shape using both fiducial and secondary landmarks, (6) generation of secondary landmarks on other training shapes using the correspondence of fiducial landmarks and an elastic deformation of the landmark mesh, (7) computation of the active shape model. A proximal femur ASM has been developed using hip MR scans of 45 post-menopausal women. The results of secondary landmark generation were visually satisfactory, and no topology violation or notable geometric distortion artifacts were observed. Performance of the method was examined in terms of shape representation errors in a leave-one-out test. The mean and standard deviation of leave-one-out shape representation errors were 0.34mm and 0.09mm respectively. The experimental results suggest that the framework of fiducial and secondary landmarks allows reliable computation of statistical shape models for complex 3-D anatomic structures.
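To make the shape-model and validation steps concrete, the sketch below fits a PCA-based point distribution model to pre-aligned landmark vectors and computes a leave-one-out shape representation error; the alignment, the number of retained modes, and the random placeholder data are simplifying assumptions, and the thesis's fiducial/secondary landmark generation and mesh deformation steps are not reproduced here.

```python
import numpy as np

def fit_asm(shapes: np.ndarray, n_modes: int):
    """PCA-based point distribution model. shapes: (n_samples, n_landmarks*3), assumed pre-aligned."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]                       # mean shape and principal modes

def representation_error(shape: np.ndarray, mean: np.ndarray, modes: np.ndarray) -> float:
    """Mean distance between a shape and its projection onto the model subspace."""
    b = modes @ (shape - mean)                      # mode coefficients
    recon = mean + modes.T @ b
    return np.mean(np.linalg.norm((shape - recon).reshape(-1, 3), axis=1))

# Hypothetical data: 45 pre-aligned training shapes, 1000 landmarks each
rng = np.random.default_rng(0)
shapes = rng.normal(size=(45, 1000 * 3))

# Leave-one-out test of shape representation error
errors = []
for i in range(len(shapes)):
    train = np.delete(shapes, i, axis=0)
    mean, modes = fit_asm(train, n_modes=10)
    errors.append(representation_error(shapes[i], mean, modes))
print(f"leave-one-out error: {np.mean(errors):.2f} +/- {np.std(errors):.2f}")
```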
118

Differential item functioning procedures for polytomous items when examinee sample sizes are small

Wood, Scott William 01 May 2011
As part of test score validity, differential item functioning (DIF) is a quantitative characteristic used to evaluate potential item bias. In applications where a small number of examinees take a test, statistical power of DIF detection methods may be affected. Researchers have proposed modifications to DIF detection methods to account for small focal group examinee sizes for the case when items are dichotomously scored. These methods, however, have not been applied to polytomously scored items. Simulated polytomous item response strings were used to study the Type I error rates and statistical power of three popular DIF detection methods (Mantel test/Cox's β, Liu-Agresti statistic, HW3) and three modifications proposed for contingency tables (empirical Bayesian, randomization, log-linear smoothing). The simulation considered two small sample size conditions, the case with 40 reference group and 40 focal group examinees and the case with 400 reference group and 40 focal group examinees. In order to compare statistical power rates, it was necessary to calculate the Type I error rates for the DIF detection methods and their modifications. Under most simulation conditions, the unmodified, randomization-based, and log-linear smoothing-based Mantel and Liu-Agresti tests yielded Type I error rates around 5%. The HW3 statistic was found to yield higher Type I error rates than expected for the 40 reference group examinees case, rendering power calculations for these cases meaningless. Results from the simulation suggested that the unmodified Mantel and Liu-Agresti tests yielded the highest statistical power rates for the pervasive-constant and pervasive-convergent patterns of DIF, as compared to other DIF method alternatives. Power rates improved by several percentage points if log-linear smoothing methods were applied to the contingency tables prior to using the Mantel or Liu-Agresti tests. Power rates did not improve if Bayesian methods or randomization tests were applied to the contingency tables prior to using the Mantel or Liu-Agresti tests. ANOVA tests showed that statistical power was higher when 400 reference examinees were used versus 40 reference examinees, when impact was present among examinees versus when impact was not present, and when the studied item was excluded from the anchor test versus when the studied item was included in the anchor test. Statistical power rates were generally too low to merit practical use of these methods in isolation, at least under the conditions of this study.
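For reference, a condensed sketch of the unmodified Mantel test for a polytomous item, computed from per-stratum focal and reference category counts, is given below; the formulas follow the standard Mantel construction for ordered categories, but the counts are invented and details (such as continuity corrections or stratum pooling) may differ from the simulation's actual implementation.

```python
import numpy as np
from scipy.stats import chi2

def mantel_polytomous(focal: np.ndarray, reference: np.ndarray, scores: np.ndarray) -> float:
    """Mantel chi-square p-value for a polytomous item.
    focal, reference: (n_strata, n_categories) counts per matching stratum; scores: category scores."""
    T, E, V = 0.0, 0.0, 0.0
    for f_k, r_k in zip(focal, reference):
        m_k = f_k + r_k                      # total counts per category in this stratum
        n_f, n_r, n = f_k.sum(), r_k.sum(), (f_k + r_k).sum()
        if n < 2:
            continue
        T += scores @ f_k                    # observed focal score sum
        E += n_f / n * (scores @ m_k)        # expectation under no DIF
        V += (n_f * n_r) / (n ** 2 * (n - 1)) * (n * (scores ** 2 @ m_k) - (scores @ m_k) ** 2)
    stat = (T - E) ** 2 / V
    return 1.0 - chi2.cdf(stat, df=1)        # p-value, 1 degree of freedom

# Hypothetical counts: 3 ability strata, 4 score categories (0-3)
focal     = np.array([[10, 15, 10, 5], [8, 12, 14, 6], [4, 10, 15, 11]])
reference = np.array([[12, 14, 9, 5], [7, 13, 13, 7], [5, 9, 16, 10]])
print("Mantel p-value:", mantel_polytomous(focal, reference, scores=np.arange(4.0)))
```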
119

Connected Autonomous Vehicles: Capacity Analysis, Trajectory Optimization, and Speed Harmonization

Ghiasi, Amir 06 July 2018
Emerging connected and autonomous vehicle (CAV) technologies provide an opportunity to improve highway capacity and reduce the adverse impacts of stop-and-go traffic. To realize the potential benefits of CAV technologies, this study provides insightful methodological and managerial tools at the microscopic and macroscopic traffic scales. At the macroscopic scale, this dissertation proposes an analytical method to formulate highway capacity for a mixed traffic environment where a portion of vehicles are CAVs and the remainder are human-driven vehicles (HVs). The proposed analytical mixed traffic highway capacity model is based on a Markov chain representation of the spatial distribution of heterogeneous and stochastic headways. This model captures not only the full spectrum of CAV market penetration rates but also all possible values of CAV platooning intensity, which largely affects the spatial distribution of different headway types. Numerical experiments verify that this analytical model accurately quantifies the corresponding mixed traffic capacity at various settings. The analytical model allows for examination of the impact of different CAV technology scenarios on mixed traffic capacity. We identify necessary and sufficient conditions for the mixed traffic capacity to increase (or decrease) with CAV market penetration rate and platooning intensity. These theoretical results caution scholars not to take CAVs for granted as a sure means of increasing highway capacity, but rather to quantitatively analyze the actual headway settings before drawing any qualitative conclusion. At the microscopic scale, this study develops innovative control strategies to smooth highway traffic using CAV technologies. First, it formulates a simplified traffic smoothing model for guiding the movements of CAVs on a general one-lane highway segment. The proposed simplified model is able to control the overall smoothness of a platoon of CAVs and approximately optimize traffic performance in terms of fuel efficiency and driving comfort. The elegant theoretical properties of the general objective function and the associated constraints provide an efficient analytical algorithm for solving this problem to the exact optimum. Numerical examples reveal that this exact algorithm has an efficient computational performance and a satisfactory solution quality. This trajectory-based traffic smoothing concept is then extended to develop a joint trajectory and signal optimization problem. This problem simultaneously solves for the optimal CAV trajectory function shape and the signal timing plan to minimize travel time delay and fuel consumption. The proposed algorithm simplifies the vehicle trajectory and fuel consumption functions, which leads to an efficient optimization model that provides exact solutions. Numerical experiments reveal that this algorithm is applicable to any signalized crossing point, including intersections and work zones. Further, the model is tested with various traffic conditions and roadway geometries. These control approaches are then extended to a mixed traffic environment with HVs, connected vehicles (CVs), and CAVs by proposing a CAV-based speed harmonization algorithm. This algorithm develops an innovative traffic prediction model to estimate the real-time status of downstream traffic using traffic sensor data and information provided by CVs and CAVs.
With this prediction, the algorithm controls the upstream CAVs so that they smoothly hedge against the backward deceleration waves and gradually merge into the downstream traffic at a reasonable speed. This model addresses the full spectrum of CV and CAV market penetration rates and various traffic conditions. Numerical experiments are performed to assess the algorithm's performance under different traffic conditions and CV and CAV market penetration rates. The results show significant improvements in damping traffic oscillations and reducing fuel consumption.
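As a stylized illustration of the capacity question (not the dissertation's Markov-chain model), the sketch below computes lane capacity from the expected headway under purely random CAV/HV ordering; platooning intensity, which skews the pair probabilities, is omitted, and the headway values are illustrative assumptions.

```python
def mixed_capacity(p_cav: float, h_aa: float = 0.9, h_ah: float = 1.5,
                   h_ha: float = 1.8, h_hh: float = 1.8) -> float:
    """Lane capacity (veh/h) from the expected headway under random CAV/HV mixing.
    h_xy = time headway (s) of a vehicle of type x following a vehicle of type y
    (a = automated, h = human-driven); values are illustrative, not calibrated."""
    p_hv = 1.0 - p_cav
    expected_headway = (p_cav * p_cav * h_aa + p_cav * p_hv * h_ah +
                        p_hv * p_cav * h_ha + p_hv * p_hv * h_hh)
    return 3600.0 / expected_headway

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"CAV share {p:.0%}: ~{mixed_capacity(p):.0f} veh/h")
```

Because only the CAV-following-CAV headway is short, the capacity gain is small at low penetration rates and grows roughly with the share of CAV-behind-CAV pairs, which is the kind of sensitivity the abstract cautions about.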
120

Analysis of Printed Electronic Adhesion, Electrical, Mechanical, and Thermal Performance for Resilient Hybrid Electronics

Neff, Clayton 13 November 2018
Today's state-of-the-art additive manufacturing (AM) systems have the ability to fabricate multi-material devices with novel capabilities that were previously constrained by traditional manufacturing. AM machines fuse or deposit material in an additive fashion only where necessary, thus unlocking the advantages of mass customization, no part-specific tooling, near-arbitrary geometric complexity, and reduced lead times and cost. The combination of the conductive-ink micro-dispensing AM process with hybrid manufacturing processes, including laser machining, CNC machining, and pick & place, enables the fabrication of printed electronics. Printed electronics exploit the integration of AM with hybrid processes and allow embedded and/or conformal electronic systems to be fabricated, which overcomes previously limited multi-functionality, decreases the form factor, and enhances performance. However, AM processes are still emerging technologies and lack qualification and standardization, which limits widespread application, especially in harsh environments (e.g., the defense and industrial sectors). This dissertation explores three topics of electronics integration into AM that address the path toward qualification and standardization, evaluating the performance and repeatable fabrication of printed electronics for resilience when subjected to harsh environments. These topics are: (1) the effect of smoothing processes to improve the as-printed surface finish of AM components, with mechanical and electrical characterization, which highlights the lack of qualification and standardization within AM printed electronics and paves the way for the remaining topics of the dissertation; (2) harsh environmental testing (i.e. mechanical shock, thermal cycling, die shear strength) and initiation of a foundation for qualification of printed electronic components to demonstrate survivability in harsh environments; and (3) the development of standardized methods to evaluate the adhesion of conductive inks while also analyzing the effect of surface treatments on the adhesive failure mode of conductive inks. The first topic addresses the as-printed surface roughness that arises from individually fusing lines in AM extrusion processes, which creates semi-continuous components. In this work, the impact of surface smoothing on mechanical properties and electrical performance was measured. For the mechanical study, surface roughness was decreased with vapor smoothing by 70% while maintaining dimensional accuracy and increasing the hermetic seal to overcome the inherent porosity; however, there was little impact on the mechanical properties. For the electrical study, a vapor smoothing and a thermal smoothing process reduced the surface roughness of extruded substrates by 90% and 80%, while also reducing measured dissipative losses by up to 24% and 40% at 7 GHz, respectively. The second topic addresses the survivability of printed electronic components under harsh environmental conditions by adapting test methods and conducting preliminary evaluation of multi-material AM components to initialize qualification procedures. A few of the material sets show resilience to high-G impacts up to 20,000 G and to thermal cycling at extreme temperatures (-55 to 125 °C). It was also found that matching coefficients of thermal expansion is an important consideration for multi-material printed electronics, and that adhesion of the conductive ink is a prerequisite for antenna survivability in harsh environments.
The final topic addresses the development of semi-quantitative and quantitative measurements for standardizing adhesion testing of conductive inks while also evaluating the effect of surface treatments. Without standard adhesion measurements of conductive inks, comparisons between materials or references to application requirements cannot be made, which limits the adoption of printed electronics. The semi-quantitative method evolved from manual cross-hatch scratch testing by designing, printing, and testing a semi-automated tool, coined the scratch adhesion tester (SAT). By performing cross-hatch scratch testing with a semi-automated device, the SAT bypasses operator-to-operator variance and allows more repeatable and finer analysis and comparison across labs. Alternatively, single lap shear testing permits quantitative adhesion measurements by providing a numerical value for the nominal interfacial shear strength of a coating, while also showing that surface treatments can improve adhesion and alter the adhesive (i.e. delamination) failure mode of conductive inks.
