261 |
Controle de Informações Gerenciais na Relação Fábricas/Montadoras e Fornecedores / Control of Management Information in the Relationship between Manufacturers/Assembly Plants and Suppliers. Delgado, Marco Antônio Gonçalves, 21 August 2007 (has links)
This study focuses mainly on explaining the practices of large assembly companies in the
control of information, purchases, development of relationships with suppliers and logistical
raw material operations in the manufacture of vehicles. The subject is justified by the major
challenge of identifying the best processes and of improving and discussing the technologies
that contribute to the productivity of the businesses and add value to them.
Theoretically, it is based on works of distinguished authors who have studied the subject,
analyzing knowledge as applied to the most current information technology. This has been
an ideal source for the collection of necessary data in the development of this research
project. The research followed exploratory and investigative objectives using the case study
method. This methodology permitted the comparison of the reality of the company with the
theoretical knowledge from the specialized literature. During
this comparative study, it was perceived that the same concepts are applied, with the
appropriate technology, to the internal processes of each company in relation to the culture,
tradition, geographic location, origin, and other factors. It was concluded that the
development of information technology is a very important factor in the creation of new work
processes directed toward improved efficiency and effectiveness. The choice of objectives,
the theoretical framework, and the research within the companies provided a glimpse of the
predominant practices and a comparison of the different applications of work methodologies
in the internal and external processes in the companies.
|
262 |
Serviço ao cliente: uma avaliação do nível de serviço logístico oferecido por uma empresa de bens de consumo / Customer Service: an assessment of the level of logistics service offered to key customers in a consumer goods company. Rodrigues, Raphael Rossi, 17 February 2011 (has links)
This survey was conducted in a multinational company that operates in the sector of non-durable
consumer goods in the international context through the licensing of its brands.
Companies that have the right to produce and distribute licensed products are named bottlers.
The aim of this study was to measure, for 13 bottlers, the level of development of logistics
Customer Service activities for Key Account clients. To this end, a diagnostic tool was developed
based on the available literature on the subject. This diagnostic tool was used in three rounds
of assessment in order to check the evolution of the bottlers' maturity level of logistics
Customer Service activities. A series of logistics Customer Service activities was also
identified and discussed, based on a visit to a multinational food company considered a
benchmark in these activities. The diagnostic results show that there was an evolution of the maturity
level of logistics Customer Service activities over time; however, this trend was not uniform,
resulting in a higher variance when comparing the levels of development of all bottlers in the
first assessment with the levels obtained in the third and final evaluation.
|
263 |
Deriving System Vulnerabilities Using Log Analytics. Higbee, Matthew Somers, 01 November 2015 (has links)
System Administrators use many of the same tactics that are implemented by hackers to validate the security of their systems, such as port scanning and vulnerability scanning. Port scanning is slow and can be highly inaccurate. After a scan is complete, its results must be cross-checked with a vulnerability database to discover whether any vulnerabilities are present. While these techniques are useful, they have severe limitations. System Administrators have full access to all of their machines; they should not have to rely exclusively on port scanning from the outside to check for vulnerabilities when they have this level of access. This thesis introduces a novel concept for replacing port scanning with a Log File Inventory Management System. This system automatically builds an accurate system inventory from existing log files. The inventory is then cross-checked against a database of known vulnerabilities in real time, resulting in faster and more accurate vulnerability reporting than traditional port scanning methods.
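The core idea, building an inventory from logs and cross-checking it against a vulnerability database, can be sketched in a few lines. The log format, regular expression, service names, and CVE identifiers below are all invented for illustration; a real system would parse its own log sources and query a feed such as the NVD.

```python
import re

# Hypothetical log lines and a toy vulnerability database: the entries
# below are invented for illustration, not real data.
LOG_LINES = [
    "Oct  1 12:00:01 host sshd[312]: OpenSSH_6.6.1 listening on port 22",
    "Oct  1 12:00:05 host httpd[410]: Apache/2.4.10 configured",
    "Oct  1 12:00:09 host kernel: eth0: link up",
]

VULN_DB = {
    ("OpenSSH", "6.6.1"): ["CVE-EXAMPLE-0001"],
    ("Apache", "2.4.10"): ["CVE-EXAMPLE-0002", "CVE-EXAMPLE-0003"],
}

# Pull (service, version) pairs out of service banners in the logs.
SERVICE_RE = re.compile(r"(OpenSSH|Apache)[_/]([0-9][0-9.]*[0-9])")

def build_inventory(lines):
    """Build a (service, version) inventory from existing log lines."""
    inventory = set()
    for line in lines:
        match = SERVICE_RE.search(line)
        if match:
            inventory.add((match.group(1), match.group(2)))
    return inventory

def report_vulnerabilities(inventory, vuln_db):
    """Cross-check the inventory against the known-vulnerability database."""
    return {svc: vuln_db[svc] for svc in sorted(inventory) if svc in vuln_db}
```

In a real deployment the regular expression would be replaced by per-source parsers and the cross-check would run continuously as new log lines arrive, which is what gives the approach its real-time advantage over periodic port scans.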
|
264 |
Development of a portable aerosol collector and spectrometer (PACS). Cai, Changjie, 01 May 2018 (has links)
The overall goal of this doctoral dissertation is to develop a prototype instrument, a Portable Aerosol Collector and Spectrometer (PACS), that can continuously measure aerosol size distributions by number, surface area and mass concentrations over a wide size range (from 10 nm to 10 µm) while also collecting particles with impactor and diffusion stages for post-sampling chemical analyses.
To achieve the goal, in the first study, we designed, built and tested the PACS hardware. The PACS consists of a six-stage particle size selector, a valve system, a water condensation particle counter to measure number concentrations and a photometer to measure mass concentrations. The valve system diverts airflow to pass sequentially through upstream stages of the selector to the detectors. The stages of the selector include three impactor and two diffusion stages, which resolve particles by size and collect particles for chemical analysis. Particle penetration by size was measured through each stage to determine actual performance and account for particle losses. The measured d50 of each stage (aerodynamic diameter for impactor stages and geometric diameter for diffusion stages) was similar to the design. The pressure drop of each stage was sufficiently low to permit its operation with portable air pumps.
In the second study, we developed a multi-modal log-normal (MMLN) fitting algorithm to leverage the multi-metric, low-resolution data from one sequence of PACS measurements to estimate aerosol size distributions of number, surface area, and mass concentration in near-real-time. The algorithm uses a grid-search process and a constrained linear least-square (CLLS) solver to find a tri-mode (ultrafine, fine, and coarse), log-normal distribution that best fits the input data. We refined the algorithm to obtain accurate and precise size distributions for four aerosols typical of diverse environments: clean background, urban and freeway, coal power plant, and marine surface. Sensitivity studies were conducted to explore the influence of unknown particle density and shape factor on algorithm output. An adaptive process that refined the ranges and step sizes of the grid-search reduced the computation time to fit a single size distribution in near-real-time. Assuming standard density spheres, the aerosol size distributions fit well with the normalized mean bias (NMB) of -4.9% to 3.5%, normalized mean error (NME) of 3.3% to 27.6%, and R2 values of 0.90 to 1.00. The fitted number and mass concentration biases were within ± 10% regardless of uncertainties in density and shape. With this algorithm, the PACS is able to estimate aerosol size distributions by number, surface area, and mass concentrations from 10 nm to 10 µm in near-real-time.
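The grid-search-plus-constrained-least-squares idea behind the MMLN algorithm can be sketched as follows (an illustrative reconstruction, not the dissertation's code; the bin edges, GMD grid, and fixed GSD of 1.8 are assumptions): for each candidate triple of mode geometric mean diameters, a bin-wise design matrix of log-normal mode fractions is built, and non-negative least squares recovers the mode number concentrations.

```python
import itertools

import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

# Size bin edges spanning 10 nm .. 10 um (diameters in nm), 12 bins.
EDGES = np.logspace(1, 4, 13)

def mode_fraction(edges, gmd, gsd):
    """Fraction of one log-normal mode's particle count in each size bin."""
    cdf = norm.cdf(np.log(edges), loc=np.log(gmd), scale=np.log(gsd))
    return np.diff(cdf)

def fit_mmln(measured, edges, gmd_grid, gsd=1.8):
    """Grid-search the three mode GMDs; NNLS gives the mode concentrations."""
    best_resid, best_fit = np.inf, None
    for gmds in itertools.product(gmd_grid, repeat=3):
        design = np.column_stack([mode_fraction(edges, g, gsd) for g in gmds])
        conc, resid = nnls(design, measured)
        if resid < best_resid:
            best_resid, best_fit = resid, (gmds, conc)
    return best_fit

# Synthetic "measurement": ultrafine, fine and coarse modes.
TRUE_GMDS = (30.0, 200.0, 3000.0)          # nm
TRUE_N = np.array([1000.0, 500.0, 10.0])   # number concentration per mode
A = np.column_stack([mode_fraction(EDGES, g, 1.8) for g in TRUE_GMDS])
measured = A @ TRUE_N

gmds, conc = fit_mmln(measured, EDGES,
                      gmd_grid=[30.0, 100.0, 200.0, 1000.0, 3000.0])
```

The adaptive refinement described in the abstract would correspond to shrinking `gmd_grid` (and a companion GSD grid) around the best combination between passes, which is what keeps the search fast enough for near-real-time use.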
In the third study, we developed a new algorithm, the mass distribution by composition and size (MDCS) algorithm, to estimate the mass size distribution of various particle compositions. We then compared the PACS for measuring multi-mode aerosols to three reference instruments: a scanning mobility particle sizer (SMPS), an aerodynamic particle sizer (APS), and a nano micro-orifice uniform deposit impactor (nanoMOUDI). We used inductively coupled plasma mass spectrometry to measure, by element, the mass of particles collected on the PACS and nanoMOUDI stages. For the three-mode aerosol, the aerosol size distributions in three metrics measured with the PACS agreed well with those measured with the SMPS/APS: number concentration, bias = 9.4% and R2 = 0.96; surface area, bias = 17.8%, R2 = 0.77; mass, bias = -2.2%, R2 = 0.94. Agreement was considerably poorer for the two-mode aerosol, especially for surface area and mass concentrations. Compared to the nanoMOUDI, for the three-mode aerosol, the PACS estimated the mass median diameters (MMDs) of the coarse mode well but overestimated the MMDs of the ultrafine and fine modes; it also overestimated the mass concentrations of the ultrafine and fine modes while underestimating that of the coarse mode. This work provides insight into a novel way to simultaneously assess airborne aerosol size, composition, and concentration by number, surface area, and mass using cost-effective handheld technologies.
|
265 |
Differential item functioning procedures for polytomous items when examinee sample sizes are small. Wood, Scott William, 01 May 2011 (has links)
As part of test score validity, differential item functioning (DIF) is a quantitative characteristic used to evaluate potential item bias. In applications where a small number of examinees take a test, statistical power of DIF detection methods may be affected. Researchers have proposed modifications to DIF detection methods to account for small focal group examinee sizes for the case when items are dichotomously scored. These methods, however, have not been applied to polytomously scored items.
Simulated polytomous item response strings were used to study the Type I error rates and statistical power of three popular DIF detection methods (Mantel test/Cox's β, Liu-Agresti statistic, HW3) and three modifications proposed for contingency tables (empirical Bayesian, randomization, log-linear smoothing). The simulation considered two small sample size conditions, the case with 40 reference group and 40 focal group examinees and the case with 400 reference group and 40 focal group examinees.
In order to compare statistical power rates, it was necessary to calculate the Type I error rates for the DIF detection methods and their modifications. Under most simulation conditions, the unmodified, randomization-based, and log-linear smoothing-based Mantel and Liu-Agresti tests yielded Type I error rates around 5%. The HW3 statistic was found to yield higher Type I error rates than expected for the 40 reference group examinees case, rendering power calculations for these cases meaningless. Results from the simulation suggested that the unmodified Mantel and Liu-Agresti tests yielded the highest statistical power rates for the pervasive-constant and pervasive-convergent patterns of DIF, as compared to other DIF method alternatives. Power rates improved by several percentage points if log-linear smoothing methods were applied to the contingency tables prior to using the Mantel or Liu-Agresti tests. Power rates did not improve if Bayesian methods or randomization tests were applied to the contingency tables prior to using the Mantel or Liu-Agresti tests. ANOVA tests showed that statistical power was higher when 400 reference examinees were used versus 40 reference examinees, when impact was present among examinees versus when impact was not present, and when the studied item was excluded from the anchor test versus when the studied item was included in the anchor test. Statistical power rates were generally too low to merit practical use of these methods in isolation, at least under the conditions of this study.
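The Mantel test at the center of these comparisons has a compact contingency-table form that can be sketched directly (a generic textbook implementation, not the study's simulation code): each level of the matching score contributes a 2×C table of reference and focal counts over the C item score categories, and the summed focal score is compared with its conditional expectation to form a one-degree-of-freedom chi-square statistic.

```python
import numpy as np
from scipy.stats import chi2

def mantel_test(strata, scores):
    """Mantel test for an ordinal (polytomous) item.

    strata: iterable of 2xC count tables, one per matching-score level;
            row 0 = reference group, row 1 = focal group.
    scores: the C item score values (e.g. 0..C-1).
    Returns the chi-square statistic (1 df) and its p-value.
    """
    y = np.asarray(scores, dtype=float)
    F = E = V = 0.0  # focal score sum, its expectation, its variance
    for table in strata:
        t = np.asarray(table, dtype=float)
        m = t.sum(axis=0)                 # category totals
        n_ref, n_foc = t.sum(axis=1)
        n = n_ref + n_foc
        if n <= 1:
            continue                      # stratum carries no information
        F += y @ t[1]
        E += n_foc / n * (y @ m)
        V += (n_ref * n_foc / (n * n * (n - 1.0))
              * (n * ((y * y) @ m) - (y @ m) ** 2))
    stat = (F - E) ** 2 / V
    return stat, chi2.sf(stat, df=1)
```

With identical reference and focal rows in every stratum the statistic is zero, while a table whose focal group piles up at high scores yields a large, significant statistic; the small-sample modifications studied in the thesis (empirical Bayes, randomization, log-linear smoothing) would adjust the tables or the reference distribution before this computation.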
|
266 |
A neural fuzzy approach for well log and hydrocyclone data interpretation. Wong, Kok W., January 1999 (has links)
A novel data analysis approach that is automatic, self-learning and self-explained, and which provides accurate and reliable results is reported. The data analysis tool is capable of performing multivariate non-parametric regression analysis, as well as quantitative inferential analysis using predictive learning. Statistical approaches such as multiple regression or discriminant analysis are usually used to perform this kind of analysis. However, they lack universal capabilities and their success in any particular application is directly affected by the problem complexity. The approach employs the use of Artificial Neural Networks (ANNs) and Fuzzy Logic to perform the data analysis. The features of these two techniques are the means by which the developed data analysis approach has the ability to perform self-learning as well as allowing user interaction in the learning process. Further, they offer a means by which rules may be generated to assist human understanding of the learned analysis model, and so enable an analyst to include external knowledge. Two problems in the resource industry have been used to illustrate the proposed method, as these applications contain non-linearity in the data that is unknown and difficult to derive. They are well log data analysis in petroleum exploration and hydrocyclone data analysis in mineral processing. This research also explores how this proposed data analysis approach could enhance the analysis process for problems of this type.
|
267 |
Design of an Analog VLSI Cochlea. Shiraishi, Hisako, January 2003 (has links)
The cochlea is an organ which extracts frequency information from the input sound wave. It also produces nerve signals, which are further analysed by the brain and ultimately lead to perception of the sound. An existing model of the cochlea by Fragnière is first analysed by simulation. This passive model is found to have the properties that the living cochlea does in terms of the frequency response. An analog VLSI circuit implementation of this cochlear model in CMOS weak inversion is proposed, using log-domain filters in the current domain. It is fabricated on a chip and a measurement of a basilar membrane section is performed. The measurement shows reasonable agreement with the model. However, the circuit is found to have a problem related to transistor mismatch, causing different behaviour in identical circuit blocks. An active cochlear model is proposed to overcome this problem. The model incorporates the effect of the outer hair cells in the living cochlea, which control the quality factor of the basilar membrane filters. The outer hair cells are incorporated as an extra voltage source in series with the basilar membrane resonator. Its value saturates as the input signal becomes larger, making the behaviour closer to that of a passive model. The simulation results show this nonlinear phenomenon, which is also seen in the living cochlea. The contribution of this thesis is summarised as follows: a) the first CMOS weak inversion current domain basilar membrane resonator is designed and fabricated, and b) the first active two-dimensional cochlear model for analog VLSI implementation is developed.
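The filter-bank view of the basilar membrane model can be sketched behaviourally in software (a simplified parallel bank of second-order band-pass sections, not the transistor-level log-domain circuit, and without the transmission-line coupling of the actual model; the sampling rate, section count, best-frequency range, and quality factor are assumptions): a pure tone then produces its largest output in the section tuned nearest to its frequency.

```python
import numpy as np
from scipy.signal import lfilter

FS = 44100.0  # sampling rate in Hz (an assumption of this sketch)

def bm_section(f0, q=5.0, fs=FS):
    """One basilar-membrane section as a unit-peak-gain band-pass biquad
    (RBJ 'constant 0 dB peak gain' design) tuned to best frequency f0."""
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([alpha, 0.0, -alpha])
    a = np.array([1.0 + alpha, -2.0 * np.cos(w0), 1.0 - alpha])
    return b / a[0], a / a[0]

def cochlea_peaks(signal, n_sections=30):
    """Peak |output| of each section; best frequencies run base -> apex."""
    bfs = np.geomspace(8000.0, 100.0, n_sections)
    peaks = np.array([np.max(np.abs(lfilter(*bm_section(f0), signal)))
                      for f0 in bfs])
    return bfs, peaks

# A 1 kHz tone excites the place along the bank tuned nearest to 1 kHz.
t = np.arange(int(0.1 * FS)) / FS
bfs, peaks = cochlea_peaks(np.sin(2.0 * np.pi * 1000.0 * t))
best_place = bfs[np.argmax(peaks)]
```

The active model described above would correspond to making `q` signal-dependent, shrinking toward the passive value as the section's output grows.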
|
268 |
Auto-transformations et géométrie des variétés de Calabi-Yau / Self-maps and geometry of Calabi-Yau varieties. Dedieu, Thomas, 04 November 2008 (has links) (PDF)
This thesis consists of two parts. In the first, I prove that if certain universal Severi varieties, which parametrize the nodal curves of fixed degree and genus on a K3 surface, are irreducible, then a generic projective K3 surface admits no rational endomorphism of degree > 1. I also establish a number of numerical constraints satisfied by such endomorphisms. Voisin modified the Kobayashi pseudo-volume form by introducing holomorphic K-correspondences. In the second part, I study a logarithmic version of this pseudo-volume form. I associate an intrinsic logarithmic pseudo-volume form to every pair (X, D) consisting of a complex manifold and a normal crossings divisor whose positive part is reduced. I prove that it is generically non-degenerate when X is projective and K_X + D is ample, and that it vanishes for a large class of pairs with trivial logarithmic canonical bundle.
|
269 |
Programación Binivel aplicada a la distribución centralizada de recursos / Bilevel programming applied to the centralized distribution of resources. Olivares Aguila, Jessica, 12 May 2011 (has links)
This thesis studies the Resource Distribution Problem in a Centralized Government System, modeled as a bilevel programming problem. The upper-level objective function minimizes dissatisfaction among the states, while the lower-level objective function maximizes the benefit of each state. To find feasible solutions to this problem, an algorithm adapted from the Nelder-Mead method is proposed, which obtains results similar to those previously reported in the literature. In addition, an adaptation of the constrained Hooke-Jeeves method with a barrier penalty is proposed, which obtains the best results among all the methods, and a Scatter Search heuristic is implemented that also yields good results. A computational study compares the three methods on three instances of different sizes. / Keywords: resource distribution, bilevel programming, Nelder-Mead, Hooke-Jeeves, Scatter Search.
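The bilevel structure can be illustrated with a toy instance solved by Nelder-Mead (the budget, benefit functions, and dissatisfaction measure below are all invented for illustration and are not the thesis model): the lower level solves each state's benefit maximization for a given share, and the upper level searches over the shares.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

B = 10.0                            # total budget (illustrative)
NEEDS = np.array([1.0, 2.0, 3.0])   # heterogeneous state "needs"

def state_benefit(share, need):
    """Lower level: the state chooses spending s in [0, share] maximizing
    sqrt(s) * need - 0.1 * s (concave, so the bounded scalar solve is exact)."""
    if share <= 0.0:
        return 0.0
    res = minimize_scalar(lambda s: -(np.sqrt(s) * need - 0.1 * s),
                          bounds=(0.0, share), method="bounded")
    return -res.fun

def dissatisfaction(x):
    """Upper level: x holds two shares, the third is the budget remainder;
    dissatisfaction is the spread of achieved benefits, and negative shares
    are discouraged with a barrier-style penalty."""
    shares = np.array([x[0], x[1], B - x[0] - x[1]])
    if np.any(shares < 0.0):
        return 1e6 + float(np.sum(np.minimum(shares, 0.0) ** 2))
    benefits = [state_benefit(s, n) for s, n in zip(shares, NEEDS)]
    return float(np.var(benefits))

result = minimize(dissatisfaction, x0=[B / 3.0, B / 3.0],
                  method="Nelder-Mead")
```

Each upper-level evaluation triggers the lower-level solves, which is exactly why derivative-free methods such as Nelder-Mead, Hooke-Jeeves, and Scatter Search are attractive here: the leader's objective is only available through these nested optimizations.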
|
270 |
Técnicas de programación Binivel para el diseño de redes / Bilevel programming techniques for network design. Vital Soto, Alejandro, 12 May 2012 (has links)
This thesis studies the Continuous Network Design Problem (CNDP) and models it as a bilevel mathematical programming problem. The upper level is defined as the sum of the total travel times and the investment costs of the capacity increments on the network links, while the lower level addresses the user flow equilibrium, modeled as a minimization problem. Three derivative-free techniques are used to obtain feasible solutions to the problem: 1) the Nelder-Mead method adapted to the CNDP, 2) a Scatter Search (SS) metaheuristic, and 3) the Nelder-Mead algorithm used within SS as a solution-improvement method. The proposed methods match the results found in the literature, and the SS method with Nelder-Mead improves on previously reported results for two test instances of the problem. / Keywords: CNDP, bilevel programming, Nelder-Mead, Scatter Search.
|