411

Metóda najmenších štvorcov genetickým algoritmom / Least squares method using genetic algorithm

Holec, Matúš January 2011 (has links)
This thesis describes the design and implementation of a genetic algorithm for the approximation of non-linear mathematical functions using the least squares method. One objective of this work is to describe the theoretical basics of genetic algorithms. The second objective is to create a program that could potentially be used by scientific institutions to approximate empirically measured data. Besides the theoretical treatment of the subject, the text mainly deals with the design of the genetic algorithm and of the whole application solving the given problem. A specific part of the assignment is that the developed application has to support the approximation of points by various non-linear mathematical functions over several different intervals, and it then has to ensure that the resulting functions are continuous across all the intervals. This functionality is not offered by any available software.
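To make the approach concrete, here is a minimal sketch of a real-coded genetic algorithm minimizing a sum of squared residuals; the model function (a·e^(bx) + c), population size, blend crossover, and Gaussian mutation are illustrative choices, not necessarily those made in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(params, x):
    # Illustrative non-linear model: a * exp(b * x) + c
    a, b, c = params
    return a * np.exp(b * x) + c

def sse(params, x, y):
    # Least-squares fitness: sum of squared residuals (lower is better)
    r = model(params, x) - y
    return np.sum(r * r)

def ga_fit(x, y, pop_size=60, gens=200, sigma=0.1):
    pop = rng.uniform(-1, 1, size=(pop_size, 3))
    for _ in range(gens):
        fit = np.array([sse(p, x, y) for p in pop])
        order = np.argsort(fit)
        elite = pop[order[: pop_size // 2]]            # selection: keep the best half
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        alpha = rng.random((pop_size, 1))
        children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
        children += rng.normal(0, sigma, children.shape)                # Gaussian mutation
        pop = children
        pop[0] = elite[0]                              # elitism: carry over the best individual
    fit = np.array([sse(p, x, y) for p in pop])
    return pop[np.argmin(fit)]

x = np.linspace(0, 2, 50)
y = 1.5 * np.exp(0.8 * x) + 0.3 + rng.normal(0, 0.05, x.size)
print(ga_fit(x, y))  # should approach (1.5, 0.8, 0.3)
```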
412

Utilizing the Technology Acceptance Model to Assess Employee Adoption of Information Systems Security Measures

Jones, Cynthia 16 September 2009 (has links)
Companies are increasing their investment in technologies to enable better access to information and to gain a competitive advantage. Global competition is driving companies to reduce costs and enhance productivity, increasing their dependence on information technology. Information is a key asset within an organization and needs to be protected. Expanded connectivity and greater interdependence between companies and consumers has increased the damage potential of a security breach to a company's information systems. Improper, unauthorized use of computer systems can create devastating financial losses, even to the point of putting the organization out of business. It is critically important to understand what causes users to understand, accept, and follow the organization's information systems security measures so that companies can realize the benefits of their technological investments. In the past several years, computer security breaches have stemmed from insider misuse and abuse of information systems and from non-compliance with information systems security measures. The purpose of this study was to address the factors that affect employee acceptance of information systems security measures. The Technology Acceptance Model (TAM) was extended and served as the theoretical framework for examining these factors. The research model included three independent dimensions: perceived ease of use, perceived usefulness, and subjective norm. These constructs were hypothesized to predict intention to use information systems security measures, with management support moderating the effect of subjective norm. Five hypotheses were posited. A questionnaire was developed to collect data from employees across multiple industry segments to test these hypotheses. Partial least squares statistical methodology was used to analyze the data and test the hypotheses. The results of the statistical analysis supported three of the five hypotheses, with subjective norm and management support showing the strongest effect on intention to use information systems security measures. Few studies have used TAM to study acceptance of systems in a mandatory environment or to specifically examine employee acceptance of computer information systems security measures. This study therefore adds to the body of knowledge. Further, it provides important information for senior management and security professionals across multiple industries regarding the need to develop security policies and processes, to communicate them effectively throughout the organization, and to design these measures to promote their use by employees.
413

Métodos locais de integração explícito e implícito aplicados ao método de elementos finitos de alta ordem / Explicit and implicit integration local methods applied to the high-order finite element method

Furlan, Felipe Adolvando Correia 07 July 2011 (has links)
Orientador: Marco Lucio Bittencourt / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Abstract: This work presents explicit and implicit local integration algorithms applied to the high-order finite element method, based on the eigenvalue decomposition of the elemental mass and stiffness matrices. The solution procedure is performed for each element of the mesh and the results are smoothed on the boundary of the elements using the least-squares approximation. The central difference and Newmark methods were considered for developing the element-by-element solution procedures. For the local explicit algorithm, it was observed that the solutions converge to the global solutions obtained with the consistent mass matrix. The local implicit algorithm required subiterations to achieve convergence. Two-dimensional and three-dimensional examples of linear and non-linear elasticity are presented. The results showed appropriate accuracy for problems with an analytical solution. Larger examples are also presented, with satisfactory results. / Mestrado / Mecanica dos Sólidos e Projeto Mecanico / Mestre em Engenharia Mecânica
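For context, below is a generic textbook sketch of the central difference scheme named above, applied to a linear system M·ẍ + K·x = f(t) with a diagonal (lumped) mass matrix; the element-by-element eigen-decomposition strategy of the thesis is not reproduced here.

```python
import numpy as np

def central_difference(M, K, f, x0, v0, dt, steps):
    """Explicit central-difference time integration of M x'' + K x = f(t).

    Generic sketch; assumes M is diagonal (lumped) so solving with it is
    trivial. Conditionally stable: dt must stay below the critical step.
    """
    Minv = 1.0 / np.diag(M)
    # Fictitious previous step from the initial conditions
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * Minv * (f(0.0) - K @ x0)
    x = x0.copy()
    out = [x0.copy()]
    for n in range(1, steps + 1):
        a = Minv * (f(n * dt) - K @ x)           # acceleration at the current step
        x_next = 2.0 * x - x_prev + dt**2 * a    # central difference update
        x_prev, x = x, x_next
        out.append(x.copy())
    return np.array(out)

# Two-DOF example: oscillates with the natural frequencies of (M, K)
M = np.diag([1.0, 1.0])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
hist = central_difference(M, K, lambda t: np.zeros(2),
                          np.array([1.0, 0.0]), np.zeros(2), dt=0.01, steps=1000)
print(hist[-1])
```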
414

Ajuste de curvas por polinômios com foco no currículo do ensino médio / Curve fitting with polynomials with a focus on the high school curriculum

Santos, Alessandro Silva, 1973- 27 August 2018 (has links)
Orientador: Lúcio Tunes dos Santos / Dissertação (mestrado profissional) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: This work stems from a proposal to develop curve fitting with an approach that seeks to enrich the study of functions in the primary and secondary school curriculum. The student is introduced to the topic starting from its historical background, moving through curve interpolation, with a focus on polynomial interpolation and the least squares method, which covers linear regression as well as models such as the exponential fit. The dissertation also describes a tool of great importance in numerical computation: the concept of fitting error, its definition, and how it is estimated. In polynomial interpolation, the student, while developing the Lagrange form, is encouraged to work on the operations, the factored form, and the interpretation of the advantages of the method, such as the number and difficulty of the operations it requires. Inverse interpolation and curve interpolation complete that chapter, which uses problem situations whenever possible. The least squares method encourages the student to choose the fitting function and to determine it from the concept of minimizing the error. Polynomials of degree one (linear regression) and degree two are covered because of their importance in the high school curriculum. Also exploring concepts such as logarithms and exponentials, the linear fitting of exponential models is proposed, using problem situations from an area currently in evidence, Biomathematics: modeling population growth dynamics. The student thus encounters forecasting tools that are useful in important areas such as public health (modeling epidemics and the development of pathogens), public policy planning (modeling the growth and distribution of the population), and the behavior of the economy (as in forecasting future interest rates). So that this work can serve as a practical and interesting aid to teachers, the final chapter offers suggested problems in worksheet form, making them easy to reproduce and apply in the classroom. / Mestrado / Matemática em Rede Nacional - PROFMAT / Mestre em Matemática em Rede Nacional - PROFMAT
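A short sketch of the linearized exponential fit described above: for the model y = a·e^(bx), taking logarithms gives ln y = ln a + b·x, which reduces the problem to ordinary linear least squares (the data below is synthetic, for illustration only).

```python
import numpy as np

# Synthetic data from y = 2 * exp(0.5 x) with a little noise (illustrative values)
rng = np.random.default_rng(1)
x = np.linspace(0, 4, 30)
y = 2.0 * np.exp(0.5 * x) * (1 + rng.normal(0, 0.02, x.size))

# Linearize: ln(y) = ln(a) + b*x, then solve the degree-1 least-squares problem
b, ln_a = np.polyfit(x, np.log(y), 1)   # polyfit returns [slope, intercept]
a = np.exp(ln_a)
print(f"a = {a:.3f}, b = {b:.3f}")      # expected near a = 2, b = 0.5
```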
415

Regularization Techniques for Linear Least-Squares Problems

Suliman, Mohamed Abdalla Elhag 04 1900 (has links)
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task, the most prominent among these being linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed, which allow, in one way or another, the incorporation of further prior information into the problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix, so the new modified model is expected to provide a more stable solution when used to estimate the original signal by minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit-variance (standard) and independent and identically distributed (i.i.d.) entries. The second proposed COPRA method deals with discrete ill-posed problems in which the singular values of the linear transformation matrix decay very fast to a significantly small value. For both proposed algorithms, the regularization parameter is obtained as the solution of a non-linear characteristic equation. We provide a detailed study of the general properties of these functions and address the existence and uniqueness of the root. To demonstrate the performance of the derivations, the first proposed COPRA method is applied to estimate different signals with various characteristics, while the second is applied to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the two proposed methods outperform a set of benchmark regularization algorithms in most cases. In addition, the algorithms are shown to have the lowest run time.
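For background, the basic regularized least-squares estimate referred to above has the closed form x̂ = (AᵀA + γI)⁻¹Aᵀy. A minimal NumPy sketch follows; choosing γ well is exactly the problem COPRA addresses, and here it is simply swept by hand.

```python
import numpy as np

def regularized_ls(A, y, gamma):
    """Tikhonov/ridge solution of min ||A x - y||^2 + gamma * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))            # i.i.d. Gaussian model matrix
x_true = rng.standard_normal(20)
y = A @ x_true + 0.5 * rng.standard_normal(100)

# Sweep gamma by hand and report the resulting estimation MSE
for gamma in (0.0, 1.0, 10.0):
    x_hat = regularized_ls(A, y, gamma)
    mse = np.mean((x_hat - x_true) ** 2)
    print(f"gamma={gamma:5.1f}  MSE={mse:.4f}")
```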
416

Signály s omezeným spektrem, jejich vlastnosti a možnosti jejich extrapolace / Bandlimited signals, their properties and extrapolation capabilities

Mihálik, Ondrej January 2019 (has links)
The work is concerned with band-limited signal extrapolation using a truncated series of prolate spheroidal wave functions. Our aim is to investigate the extent to which it is possible to extrapolate a signal from its samples taken in a finite interval. It is often believed that this extrapolation method depends on computing definite integrals. We show an alternative approach using the least squares method and compare it with methods based on numerical integration. We also consider their performance in the presence of noise and the possibility of using these algorithms for real-time data processing. Finally, all proposed algorithms are tested on real data from a microphone array so that their performance can be compared.
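A rough sketch of the least-squares variant mentioned above, using the discrete prolate spheroidal sequences available in SciPy as the basis (the discrete analogue of the prolate spheroidal wave functions); the window length, time-bandwidth product, and number of series terms are illustrative assumptions.

```python
import numpy as np
from scipy.signal.windows import dpss

# Observe the first 200 of 400 samples of a band-limited signal
N, N_obs = 400, 200
t = np.arange(N)
signal = np.cos(2 * np.pi * 0.005 * t) + 0.5 * np.sin(2 * np.pi * 0.015 * t)

# DPSS basis on the full interval: K terms, time-bandwidth product NW
# (both signal frequencies lie inside the band W = NW / N = 0.02)
K, NW = 12, 8
basis = dpss(N, NW, Kmax=K)                   # shape (K, N)

# Fit the series coefficients by least squares on the observed part only...
coef, *_ = np.linalg.lstsq(basis[:, :N_obs].T, signal[:N_obs], rcond=None)

# ...then evaluate the truncated series on the whole interval to extrapolate
extrapolated = basis.T @ coef
err = np.linalg.norm(extrapolated[N_obs:] - signal[N_obs:])
print(f"extrapolation error on the unseen half: {err:.3f}")
```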
417

On the MSE Performance and Optimization of Regularized Problems

Alrashdi, Ayed 11 1900 (has links)
The amount of data that is measured, transmitted/received, and stored has dramatically increased in recent years, so today we are in the world of big data. Fortunately, in many applications we can take advantage of structures and patterns in the data to overcome the curse of dimensionality. The best-known structures include sparsity, low-rankness, and block sparsity, and they arise in a wide range of applications such as machine learning, medical imaging, signal processing, social networks, and computer vision. This has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function, which gives rise to an interest in regularized inverse problems, where the process of reconstructing the structured signal is modeled as a regularized problem. This thesis focuses on finding the optimal regularization parameter for such problems as ridge regression, LASSO, square-root LASSO, and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT), which has recently been used to precisely predict performance errors.
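As a toy illustration of the tuning problem, the following scikit-learn snippet recovers a sparse vector with LASSO and sweeps the regularization parameter to find the MSE-minimizing value by brute force; this is only possible in simulation, where the true signal is known, whereas the CGMT framework predicts the optimum analytically.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 200, 80, 8                           # measurements, dimension, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.05 * rng.standard_normal(n)

# Brute-force sweep of the regularization parameter, scoring by true MSE
best_alpha, best_mse = None, np.inf
for alpha in np.logspace(-4, 0, 30):
    x_hat = Lasso(alpha=alpha, fit_intercept=False).fit(A, y).coef_
    mse = np.mean((x_hat - x_true) ** 2)
    if mse < best_mse:
        best_alpha, best_mse = alpha, mse
print(f"best alpha = {best_alpha:.4f}, MSE = {best_mse:.5f}")
```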
418

Samonastavitelná regulace elektrického motoru / Self-tuning control of electric motor

Havlíček, Jiří January 2017 (has links)
The diploma thesis deals with self-tuning PSD controllers. The parameters of the model are obtained by a non-recursive (batch) least squares method. With the assistance of Matlab/Simulink, the individual variants of the PSD controller are compared on a second-order system. In the thesis, a simulation of self-tuning cascade control of the current and speed loops of a PMSM is created. The following part of the thesis covers the implementation of the individual algorithms on the dSPACE platform for a real PMSM.
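A compact sketch of the batch (non-recursive) least-squares identification step described above, estimating a discrete second-order ARX model y[k] = a1·y[k-1] + a2·y[k-2] + b1·u[k-1] + b2·u[k-2] from recorded input/output data; the model structure and coefficients are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated second-order discrete system (illustrative, stable coefficients)
a1, a2, b1, b2 = 1.5, -0.7, 0.1, 0.05
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = (a1 * y[k-1] + a2 * y[k-2] + b1 * u[k-1] + b2 * u[k-2]
            + 0.01 * rng.standard_normal())

# Batch least squares: stack the regressors and solve Phi @ theta = y
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)  # should be close to [1.5, -0.7, 0.1, 0.05]
```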
419

Trialability, perceived risk and complexity of understanding as determinants of cloud computing services adoption

Etsebeth, Eugene Everard 16 February 2013 (has links)
In 2011, one-third of South African organisations did not intend to adopt cloud computing services because IT decision-makers lacked understanding of the related concepts and benefits (Goldstuck, 2011). This research develops a media-oriented model to examine the adoption of these services in South Africa. The model uses the technology acceptance model (TAM) and innovation diffusion theory (IDT) to develop variables that are considered determinants of adoption, including trialability, complexity of understanding, perceived risk, perceived ease of use and perceived usefulness. An electronic survey was sent to 107 IT decision-makers; over 80% of the respondents were C-suite executives. The Partial Least Squares (PLS) method, a second-generation technique superior to ordinary regression models, was chosen to depict and test the proposed model. The data analysis included evaluating and modifying the model, assessing the new measurement model, testing the hypotheses of the model structure and presenting the structural model. The research found that media, experts and word of mouth mitigate perceived risks, including bandwidth, connectivity and power. Furthermore, trialability and perceived usefulness were affected by social influence and in turn influenced adoption. The results enable service providers and marketers to develop product roadmaps and pinpoint media messages. / Dissertation (MBA)--University of Pretoria, 2012. / Gordon Institute of Business Science (GIBS) / unrestricted
420

Automates cellulaires, fonctions booléennes et dessins combinatoires / Cellular automata, boolean functions and combinatorial designs

Mariot, Luca 09 March 2018 (has links)
The goal of this thesis is the investigation of Cellular Automata (CA) from the perspective of Boolean functions and combinatorial designs. Besides its theoretical interest, this research finds its motivation in cryptography, since Boolean functions and combinatorial designs are used to construct Pseudorandom Number Generators (PRNG) and Secret Sharing Schemes (SSS). The results presented in the thesis are developed along three research lines, organized as follows. The first line considers the use of heuristic optimization algorithms to search for Boolean functions with good cryptographic properties, to be used as local rules in CA-based PRNGs. The main motivation is to improve Wolfram's generator based on rule 30, which has been shown to be vulnerable to two cryptanalytic attacks. The second line deals with vectorial Boolean functions induced by CA global rules; its first contribution considers the period of preimages of spatially periodic configurations in surjective CA, and it analyzes the cryptographic properties of CA global rules. The third line focuses on the combinatorial designs generated by CA, specifically considering Orthogonal Latin Squares (OLS), which are equivalent to SSS. In particular, an algebraic characterization of OLS generated by linear CA is given, and heuristic algorithms are used to build OLS based on nonlinear CA.
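For illustration, a Wolfram-style generator as mentioned above: iterate an elementary CA under rule 30 with periodic boundaries and emit the central cell as the pseudorandom bit stream (a construction known to be weak, shown only to make the setup concrete).

```python
import numpy as np

def ca_step(state, rule=30):
    """One synchronous update of an elementary CA with periodic boundaries."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right            # 3-cell neighborhood as a 3-bit index
    table = (rule >> np.arange(8)) & 1            # Wolfram rule number -> lookup table
    return table[idx]

def rule30_bits(seed, n_bits):
    """Wolfram-style PRNG: emit the central cell of each configuration."""
    state = seed.copy()
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        bits[i] = state[len(state) // 2]
        state = ca_step(state)
    return bits

seed = np.zeros(101, dtype=np.uint8)
seed[50] = 1                                      # classic single-cell seed
print(rule30_bits(seed, 32))
```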
