321 |
[en] AN ALGORITHM FOR CURVE RECONSTRUCTION FROM SPARSE POINTS / [pt] UM ALGORITMO PARA RECONSTRUÇÃO DE CURVAS A PARTIR DE PONTOS ESPARSOS. Ferreira, Cristiane Azevedo 23 January 2004
Curve and surface reconstruction from sparse data has been recognized as an important problem in computer graphics. Unstructured data points (i.e., a set of points with no knowledge of connectivity or proximity), together with the presence of noise, make this problem quite difficult. Several techniques have been proposed to solve it: some are based on Delaunay triangulation, others on iso-surface reconstruction (e.g., Marching Cubes) or on advancing-front techniques. Our algorithm consists of four main steps. In the first step, a clustering procedure groups the sample points according to their spatial location. This procedure obtains a spatial structure for the points by uniformly subdividing the plane into rectangular cells of equal size and classifying them into two categories: full (when the cell contains sample points) or empty (otherwise). At this stage, a data structure is built so that the set of sample points belonging to a given cell can be queried. The second step pre-processes the points through the Moving Least Squares (MLS) method. Its objective is not only to reduce the noise in the data, but also to adapt the point density to the desired level of detail, by adding or removing points from the initial set. The third step builds the skeleton of the set of full cells; this skeleton is a digital approximation of the curve to be reconstructed. It is obtained by a topological thinning algorithm, implemented so that the number of points in each cell is taken into account: the cells with the fewest points are always removed first. In the last step, the curve is finally reconstructed: the skeleton obtained in the third step is used to construct a piecewise-linear approximation of the curve, where each vertex is obtained from the MLS projection of the midpoint of the corresponding skeleton cell.
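A minimal sketch of the MLS projection step in the plane, assuming a Gaussian-weighted local line fit around the query point (the kernel width h, the synthetic data, and the fitting model are illustrative assumptions, not the thesis's exact operator):

```python
import numpy as np

def mls_project(q, points, h):
    """Project q onto a line fitted to nearby samples by weighted least
    squares: a simplified planar Moving Least Squares projection."""
    w = np.exp(-np.sum((points - q) ** 2, axis=1) / h**2)  # Gaussian weights
    c = (w[:, None] * points).sum(axis=0) / w.sum()        # weighted centroid
    X = points - c
    cov = (w[:, None] * X).T @ X                           # weighted covariance
    _, eigvecs = np.linalg.eigh(cov)
    d = eigvecs[:, -1]                  # dominant eigenvector = local direction
    return c + np.dot(q - c, d) * d     # orthogonal projection onto the line

# Noisy samples of a circular arc; project a point near the curve onto it.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, np.pi, 200)
pts = np.c_[np.cos(t), np.sin(t)] + 0.02 * rng.normal(size=(200, 2))
print(mls_project(np.array([0.72, 0.75]), pts, h=0.3))
```

Each skeleton-cell midpoint would be projected this way to become a vertex of the piecewise-linear curve.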
|
322 |
L’adoption des innovations technologiques par les clients et son impact sur la relation client - Cas de la banque mobile - / The adoption of technological innovations by customers and its impact on customer relations - Case of mobile banking. Cheikho, Avin 04 November 2015
In recent years, mobile technologies have created highly competitive market conditions. Facing this new environment, banks have launched mobile banking, a technological innovation in the banking sector, as a new opportunity to seize. This study raises a question at the heart of the main problems faced by the banking sector, which invests heavily in ICT and aims to develop long-term relationships with its customers. To produce added value from technological investments, banks must first ensure that customers adopt these services, and must then ensure the survival of these services (continuity of use) by developing durable and profitable customer relationships. This means that understanding customer behavior involves two phases: the "adoption" phase and the "post-adoption" phase. The thesis aims, first, to explore the factors influencing customers' adoption of mobile banking and, second, to formulate an explanatory framework for how these factors establish and improve relations between banks and their customers. The analysis of data collected through a questionnaire administered face to face to 282 respondents identifies three customer segments: non-users, users, and adopters. The explanatory analysis, carried out with the PLS method, highlights the important role played by four factors in both phases: perceived usefulness, perceived risk, perceived security, and expected effort.
|
323 |
Automatic Pain Assessment from Infants’ Crying Sounds. Pai, Chih-Yun 01 November 2016
Crying is the means infants use to express their emotional state. It gives parents and nurses a criterion for understanding an infant's physiological state. Many researchers have analyzed infants' crying sounds to diagnose specific diseases or to determine the reasons for crying. This thesis presents an automatic crying-level assessment system that classifies infants' crying sounds, recorded under realistic conditions in the Neonatal Intensive Care Unit (NICU), as whimpering or vigorous crying. To analyze the crying signal, Welch's method and Linear Predictive Coding (LPC) are used to extract spectral features; the average and standard deviation of the frequency signal and the maximum power spectral density are the other spectral features used in classification. Three state-of-the-art classifiers, namely K-nearest Neighbors, Random Forests, and Least Squares Support Vector Machine, are tested in this work. The highest accuracy in classifying whimpering and vigorous crying on the clean dataset is 90%, obtained with samples taken 10 seconds before and 5 seconds after scoring and with K-nearest Neighbors as the classifier.
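A hedged sketch of such a feature-extraction and classification pipeline, assuming SciPy/scikit-learn and synthetic stand-in signals (the thesis's recordings, sampling rate, and exact LPC order are not specified here):

```python
import numpy as np
from scipy.signal import welch
from scipy.linalg import solve_toeplitz
from sklearn.neighbors import KNeighborsClassifier

def lpc_coeffs(sig, order=8):
    """LPC by the autocorrelation method (solve the Toeplitz normal equations)."""
    r = np.correlate(sig, sig, mode="full")[len(sig) - 1:][:order + 1]
    return solve_toeplitz((r[:-1], r[:-1]), r[1:])

def features(sig, fs=16000):
    f, pxx = welch(sig, fs=fs, nperseg=1024)   # Welch power spectral density
    return np.r_[lpc_coeffs(sig), pxx.mean(), pxx.std(), pxx.max()]

# Stand-in "recordings": low-amplitude noise for whimpers, high for vigorous cries.
rng = np.random.default_rng(1)
sigs = [a * rng.normal(size=16000) for a in (0.2, 0.25, 1.0, 1.2)]
labels = [0, 0, 1, 1]                          # 0 = whimper, 1 = vigorous
X = np.array([features(s) for s in sigs])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(clf.predict(X[2:3]))                     # -> [1]
```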
|
324 |
Improved measure of orbital stability of rhythmic motions. Khazenifard, Amirhosein 30 November 2017
Rhythmic motion is ubiquitous in nature and technology. Various motions of organisms, like the beating heart and walking, require stable periodic execution. The stability of the rhythmic execution of human movement can be altered by neurological or orthopedic impairment. In robotics, the successful development of legged robots depends heavily on the stability of the controlled limit cycle. An accurate measure of the stability of rhythmic execution is therefore critical to assessing tasks like walking in human locomotion. Floquet multipliers have been widely used to assess the stability of a periodic motion. The conventional approach to extracting the Floquet multipliers from actual data relies on the least squares method. We devise a new way to measure the Floquet multipliers with reduced bias and estimate orbital stability more accurately. We show that the conventional measure of orbital stability is biased in the presence of noise, which is inevitable in every experiment and observation. Compared with the previous method, the new method substantially reduces this bias, providing an acceptable estimate of orbital stability with fewer cycles, even under different noise distributions or higher or lower noise levels. The new method can provide an unbiased estimate of orbital stability within a reasonably small number of cycles. This is important for experiments with human subjects or the clinical evaluation of patients, which require effective assessment of locomotor stability when planning rehabilitation programs. / Graduate / 2018-11-22
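As a rough illustration of the conventional estimator the thesis improves on, a sketch of fitting a (scalar) Floquet multiplier to once-per-cycle Poincaré-section data by least squares; the dynamics, noise level, and fixed point are synthetic assumptions:

```python
import numpy as np

# Synthetic cycle-to-cycle data: x_{k+1} - x* = m (x_k - x*) + process noise,
# where m is the Floquet multiplier of the limit cycle (|m| < 1: stable).
rng = np.random.default_rng(2)
m_true, x_star = 0.6, 1.0
x = np.empty(200)
x[0] = 1.5
for k in range(199):
    x[k + 1] = x_star + m_true * (x[k] - x_star) + 0.05 * rng.normal()

# Conventional least-squares estimate: regress deviations at k+1 on those at k.
u, v = x[:-1] - x_star, x[1:] - x_star
m_hat = np.linalg.lstsq(u[:, None], v, rcond=None)[0][0]
print(f"estimated multiplier: {m_hat:.3f}")
```

With measurement noise added to the observations of x themselves, this regression becomes biased (an errors-in-variables effect), which is the bias the thesis's method is designed to reduce.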
|
325 |
Magnetic Rendering: Magnetic Field Control for Haptic Interaction. Zhang, Qi January 2015
As a solution for mid-air haptic actuation with strong and continuous tactile force, Magnetic Rendering is presented: an intuitive haptic display method that applies an electromagnet array to produce a magnetic field in mid-air, where the force field is felt as a magnetic repulsive force exerted on the hand through attached magnet discs. The magnetic field is generated by a specifically designed electromagnet array driven by direct current. By attaching small magnet discs to the hand, the tactile sensation can be perceived by the user. This method provides a strong tactile force on multiple points covering the user's hand and avoids cumbersome wired attachments, so it is suitable for a co-located visual and haptic display. In my work, the detailed design of the electromagnet array for haptic rendering is introduced, modelled, and tested using Finite Element Method simulations. The model is characterized mathematically, and three methods for controlling the magnetic field are applied accordingly: direct control, system identification, and adaptive control. The performance of the simulated model is evaluated in terms of magnetic field distribution, force strength, operating distance, and force stiffness. The control algorithms are implemented and tested on a 3-by-3 and a 15-by-15 model, respectively. Simulations are performed on the 15-by-15 model to generate a haptic human face, which results in a smooth force field and accurate force exertion on the control points.
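Under the assumption that the field at each control point is a linear superposition of per-coil contributions (an assumption of this sketch; the thesis obtains its field model from FEM simulation), direct control can be cast as a least-squares current allocation:

```python
import numpy as np

# Field model b = A @ i: column j of A holds coil j's unit-current contribution
# to the field at each control point. A is random here as a stand-in for FEM data.
rng = np.random.default_rng(3)
n_points, n_coils = 9, 225              # e.g., 9 control points, a 15-by-15 array
A = rng.normal(size=(n_points, n_coils))
b_target = np.full(n_points, 5e-3)      # desired flux density at each point (T)

# Minimum-norm least-squares allocation of coil currents.
i_opt, *_ = np.linalg.lstsq(A, b_target, rcond=None)
print(np.allclose(A @ i_opt, b_target))  # underdetermined system: target reachable
```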
|
326 |
Výnosnost zemědělské půdy v závislosti na vybraných faktorech - ekonometrický model / The Productivity of Farmland depending on Chosen Elements. Partynglová, Soňa January 2010
This thesis focuses on analyzing the factors that influence wheat yields. It is divided into three parts. The first part introduces the problem of wheat cultivation. The second concerns the methodology of building econometric models, and the third solves the problem as a whole. Because the data file is large, I reduce it by factor analysis. I then estimate a suitable econometric model by applying different econometric methods. This model shows the influence of technological, soil, and climatic factors on wheat yields. Finally, I compare the observed values with the predicted ones by soil texture, climate, and crop year.
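A sketch of the reduce-then-regress idea, assuming scikit-learn and synthetic stand-ins for the thesis's technological, soil, and climatic variables:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

# 120 synthetic observations of 12 correlated explanatory variables and yield.
rng = np.random.default_rng(4)
latent = rng.normal(size=(120, 3))                     # 3 underlying factors
X = latent @ rng.normal(size=(3, 12)) + 0.1 * rng.normal(size=(120, 12))
yield_t = latent @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=120)

# Factor analysis reduces the variables; regression then explains the yield.
scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)
model = LinearRegression().fit(scores, yield_t)
print(round(model.score(scores, yield_t), 3))          # R^2 of the reduced model
```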
|
327 |
Metóda najmenších štvorcov genetickým algoritmom / Least squares method using genetic algorithm. Holec, Matúš January 2011
This thesis describes the design and implementation of a genetic algorithm for the approximation of non-linear mathematical functions using the least squares method. One objective of this work is to describe the theoretical basics of genetic algorithms. The second objective is to create a program that could be used by scientific institutions to approximate empirically measured data. Besides the theoretical description of the subject, the text mainly deals with the design of the genetic algorithm and of the whole application solving the given problem. A specific part of the assignment is that the developed application has to support the approximation of points by various non-linear mathematical functions on several different intervals, and must then ensure that the resulting function is continuous across all the intervals. This functionality is not offered by any available software.
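A minimal real-coded genetic algorithm for one least-squares fit, assuming a single interval and a fixed model y = a·e^(bx) (the thesis's multi-interval continuity constraint and richer function families are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 2.0, 50)
y = 2.0 * np.exp(1.3 * x) + 0.1 * rng.normal(size=50)   # synthetic measurements

def sse(p):                       # fitness: sum of squared residuals (minimize)
    return np.sum((y - p[0] * np.exp(p[1] * x)) ** 2)

pop = rng.uniform(-3.0, 3.0, size=(60, 2))              # initial population (a, b)
for _ in range(300):
    fit = np.array([sse(p) for p in pop])
    parents = pop[np.argsort(fit)[:20]]                 # truncation selection
    pairs = parents[rng.integers(0, 20, size=(60, 2))]  # random parent pairs
    pop = pairs.mean(axis=1)                            # arithmetic crossover
    pop += rng.normal(scale=0.05, size=pop.shape)       # Gaussian mutation
    pop[0] = parents[0]                                 # elitism: keep the best
print(f"a = {pop[0][0]:.3f}, b = {pop[0][1]:.3f}")      # should approach 2.0, 1.3
```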
|
328 |
Utilizing the Technology Acceptance Model to Assess Employee Adoption of Information Systems Security Measures. Jones, Cynthia 16 September 2009
Companies are increasing their investment in technologies to enable better access to information and to gain a competitive advantage. Global competition is driving companies to reduce costs and enhance productivity, increasing their dependence on information technology. Information is a key asset within an organization and needs to be protected. Expanded connectivity and greater interdependence between companies and consumers have increased the damage potential of a security breach to a company's information systems. Improper and unauthorized use of computer systems can create a devastating financial loss, even to the point of putting the organization out of business. It is critically important to understand what causes users to understand, accept, and follow the organization's information systems security measures so that companies can realize the benefits of their technological investments. In the past several years, computer security breaches have stemmed from insider misuse and abuse of information systems and from non-compliance with information systems security measures.
The purpose of this study was to address the factors that affect employee acceptance of information systems security measures.
The Technology Acceptance Model was extended and served as the theoretical framework for this study to examine the factors that affect employee adoption of information systems security measures. The research model included three independent dimensions: perceived ease of use, perceived usefulness, and subjective norm. These constructs were hypothesized to predict intention to use information systems security measures, with management support moderating the effect of subjective norm. Five hypotheses were posited. A questionnaire was developed to collect data from employees across multiple industry segments to test these hypotheses. Partial least squares statistical methodology was used to analyze the data and test the hypotheses. The results of the statistical analysis supported three of the five hypotheses, with subjective norm and management support showing the strongest effect on intention to use information systems security measures.
Few studies have used TAM to study acceptance of systems in a mandatory environment or to specifically examine employee acceptance of computer information systems security measures; this study therefore adds to the body of knowledge. Further, it provides important information for senior management and security professionals across multiple industries regarding the need to develop security policies and processes, to communicate them effectively throughout the organization, and to design these measures so as to promote their use by employees.
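The study's analysis uses PLS path modeling; as a rough illustration of the partial least squares idea only (regression flavor, via scikit-learn, with synthetic Likert-style items rather than the study's instrument):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Three predictor constructs (perceived ease of use, perceived usefulness,
# subjective norm), each measured by three noisy questionnaire items.
rng = np.random.default_rng(6)
n = 150
constructs = rng.normal(size=(n, 3))
items = np.repeat(constructs, 3, axis=1) + 0.5 * rng.normal(size=(n, 9))
intention = constructs @ np.array([0.2, 0.4, 0.6]) + 0.3 * rng.normal(size=n)

pls = PLSRegression(n_components=3).fit(items, intention)
print(round(pls.score(items, intention), 3))   # variance in intention explained
```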
|
329 |
Métodos locais de integração explícito e implícito aplicados ao método de elementos finitos de alta ordem / Explicit and implicit local integration methods applied to the high-order finite element method. Furlan, Felipe Adolvando Correia 07 July 2011
Advisor: Marco Lucio Bittencourt / Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Previous issue date: 2011 / Abstract: This work presents explicit and implicit local time-integration algorithms applied to the high-order finite element method, based on the eigen-decomposition of the elemental mass and stiffness matrices. The solution procedure is performed element by element over the mesh, and the results are smoothed on the element boundaries using least-squares approximation. The central difference and Newmark methods were considered for developing the element-by-element solution procedures. For the local explicit algorithm, the solutions were observed to converge to the global solutions obtained with the consistent mass matrix. The local implicit algorithm required subiterations to achieve convergence. Two- and three-dimensional examples of linear and nonlinear elasticity are presented. The results show appropriate accuracy for problems with analytical solutions; larger examples were also solved with satisfactory results. / Mestrado / Mecanica dos Sólidos e Projeto Mecanico / Mestre em Engenharia Mecânica
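A toy sketch of the eigen-decomposition idea for one element, assuming illustrative 2-DOF mass and stiffness matrices: the generalized eigenproblem decouples M·ü + K·u = f into scalar modal equations, each advanced by central differences:

```python
import numpy as np
from scipy.linalg import eigh

M = np.array([[2.0, 0.0], [0.0, 1.0]])      # illustrative element mass matrix
K = np.array([[6.0, -2.0], [-2.0, 4.0]])    # illustrative element stiffness
lam, Phi = eigh(K, M)        # K Phi = M Phi diag(lam), with Phi.T M Phi = I

f = np.array([1.0, 0.0])                    # constant nodal load
dt = 0.01                                   # stable: dt < 2 / sqrt(max(lam))
q_prev = np.zeros(2)                        # modal displacement at t - dt
q = np.zeros(2)                             # modal displacement at t
for _ in range(1000):
    # Decoupled equations q_i'' + lam_i q_i = (Phi.T f)_i, central differences
    q_prev, q = q, 2 * q - q_prev + dt**2 * (Phi.T @ f - lam * q)
u = Phi @ q                                 # back to nodal displacements
print(u)
```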
|
330 |
Ajuste de curvas por polinômios com foco no currículo do ensino médio / Curve fitting by polynomials with a focus on the high school curriculum. Santos, Alessandro Silva, 1973- 27 August 2018
Advisor: Lúcio Tunes dos Santos / Master's thesis (professional) - Universidade Estadual de Campinas, Instituto de Matemática Estatística e Computação Científica
Previous issue date: 2015 / Abstract: This work grew out of a proposal to develop curve fitting in a way that enriches the study of functions in the middle and high school curriculum. The student is introduced to the topic starting from its historical aspect, moving through curve interpolation, with a focus on polynomial interpolation, and on to the least squares method, covering linear regression as well as models such as the exponential fit. The dissertation also describes a tool of great importance in numerical computing: the concept of fitting error, its definition, and how to estimate it. In polynomial interpolation, the student, while developing the Lagrange form, is encouraged to work through the operations, the factored form, and the advantages of the method, such as the number and difficulty of the operations it requires. Inverse interpolation and curve interpolation complete that chapter, which draws on problem situations whenever possible. The least squares method leads the student to choose a fitting function and to determine it from the concept of error minimization. Polynomials of degree one (linear regression) and degree two are covered because of their importance in the high school curriculum. Exploring concepts such as logarithms and exponentials, the linear fitting of exponential models is proposed, using problem situations from Biomathematics, an area currently in evidence: modeling population growth dynamics. In this way the student encounters forecasting tools that are useful in important areas such as public health (modeling epidemics and the development of pathogens), public policy planning (modeling the growth and distribution of the population), and the behavior of the economy (as in forecasts of future interest rates). So that this work can serve teachers in a practical and interesting way, the final chapter offers suggested problems in spreadsheet form, making them easy to reproduce and apply in the classroom. / Mestrado / Matemática em Rede Nacional - PROFMAT / Mestre em Matemática em Rede Nacional - PROFMAT
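A short sketch of the linearized exponential fit the dissertation proposes for population models: taking logs turns y = a·e^(bx) into the linear problem ln y = ln a + b·x, solved by least squares (the data here is a synthetic growth series):

```python
import numpy as np

x = np.arange(10, dtype=float)                   # e.g., years of observation
rng = np.random.default_rng(7)
y = 100.0 * np.exp(0.12 * x + 0.02 * rng.normal(size=10))  # noisy population

A = np.c_[np.ones_like(x), x]                    # design matrix [1, x]
(ln_a, b), *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
print(f"a = {np.exp(ln_a):.1f}, b = {b:.3f}")    # estimates of 100 and 0.12
```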
|