About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

Utilizing the Technology Acceptance Model to Assess Employee Adoption of Information Systems Security Measures

Jones, Cynthia 16 September 2009 (has links)
Companies are increasing their investment in technologies to enable better access to information and to gain a competitive advantage. Global competition is driving companies to reduce costs and enhance productivity, increasing their dependence on information technology. Information is a key asset within an organization and needs to be protected. Expanded connectivity and greater interdependence between companies and consumers have increased the damage potential of a security breach to a company's information systems. Improper or unauthorized use of computer systems can create devastating financial losses, even to the point of putting the organization out of business. It is therefore critically important to understand what leads users to understand, accept, and follow the organization's information systems security measures so that companies can realize the benefits of their technological investments. In the past several years, many computer security breaches have stemmed from insider misuse and abuse of information systems and from non-compliance with information systems security measures. The purpose of this study was to examine the factors that affect employee acceptance of information systems security measures. The Technology Acceptance Model (TAM) was extended and served as the theoretical framework for examining the factors that affect employee adoption of information systems security measures. The research model included three independent dimensions: perceived ease of use, perceived usefulness, and subjective norm. These constructs were hypothesized to predict intention to use information systems security measures, with management support acting as a moderator through its effect on subjective norm. Five hypotheses were posited. A questionnaire was developed to collect data from employees across multiple industry segments to test these hypotheses, and partial least squares (PLS) statistical methodology was used to analyze the data. The results supported three of the five hypotheses, with subjective norm and management support showing the strongest effects on intention to use information systems security measures. Few studies have used TAM to study acceptance of systems in a mandatory environment or to specifically examine employee acceptance of information systems security measures; this study therefore adds to the body of knowledge. It also provides important information for senior management and security professionals across multiple industries regarding the need to develop security policies and processes, communicate them effectively throughout the organization, and design these measures to promote their use by employees.
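As a rough illustration of the kind of analysis described above, the sketch below relates simulated TAM construct scores (perceived usefulness, perceived ease of use, subjective norm) to intention to use via scikit-learn's PLS regression. This is a simplified stand-in for the PLS-based structural equation modeling actually used in the study; the data, coefficients, and sample size are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical sketch only: the study used PLS-based structural equation modeling
# on survey responses. Here, PLS regression relates simulated construct scores to
# intention to use security measures -- a simplification, not the author's model.
rng = np.random.default_rng(42)
n = 200                                           # simulated respondents
pu, peou, sn = rng.normal(4, 1, (3, n))           # Likert-like construct scores
intention = 0.2 * pu + 0.1 * peou + 0.5 * sn + rng.normal(0, 0.5, n)

X = np.column_stack([pu, peou, sn])
pls = PLSRegression(n_components=2).fit(X, intention)
r2 = pls.score(X, intention)                      # variance in intention explained
print("coefficients:", pls.coef_.ravel(), "R^2:", round(r2, 3))
```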
462

Factors Related to the Selection of Information Sources: A Study of Ramkhamhaeng University Regional Campuses Graduate Students

Angchun, Peemasak 08 1900 (has links)
This study assessed students' satisfaction with Ramkhamhaeng University regional library services (RURLs) and the perceived quality of information retrieved from other information sources. In particular, it investigated factors relating to regional campus students' selection of information sources to meet their information needs. The researcher applied the principle of least effort and Simon's satisficing theory: the former predicts that selection is governed by students' perceived source accessibility, whereas the latter explains the selection and use of the information retrieved without considering whether that information is optimal. The study employed a web-based survey to collect data from 188 respondents. Convenience and ease of use were the top two variables relating to respondents' selection and use of information sources, and the Internet had the highest mean for convenience. A multiple linear regression model across all four regional campuses (RURCs) showed that the four independent variables (convenience, ease of use, availability, and familiarity) explained 69% of the total variance in the frequency of use of information sources. Convenience and ease of use increased respondents' perceived source accessibility and explained more of the variance in frequency of use than availability and familiarity did. These findings imply that respondents' selection of information sources at the RURCs was governed by the principle of least effort. Libraries could consider one-stop services in the design of the Web portal, making it user-friendly and convenient to access; ideally, students could use one card to check out materials from any library in the resource-sharing network.
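A minimal sketch of the kind of multiple linear regression reported above, using simulated survey scores (not the study's data) for the four predictors and statsmodels for the fit:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical regression in the spirit of the study's model: frequency of use
# of an information source regressed on convenience, ease of use, availability,
# and familiarity. Data are simulated; the real survey reported R^2 of about 0.69.
rng = np.random.default_rng(7)
n = 188                                           # matches the number of respondents
convenience, ease, availability, familiarity = rng.normal(3.5, 0.8, (4, n))
frequency = (0.45 * convenience + 0.35 * ease + 0.15 * availability
             + 0.10 * familiarity + rng.normal(0, 0.6, n))

X = sm.add_constant(np.column_stack([convenience, ease, availability, familiarity]))
model = sm.OLS(frequency, X).fit()
print("R^2:", round(model.rsquared, 3))           # share of variance explained
```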
463

Métodos locais de integração explícito e implícito aplicados ao método de elementos finitos de alta ordem / Explicit and implicit integration local methods applied to the high-order finite element method

Furlan, Felipe Adolvando Correia 07 July 2011 (has links)
Advisor: Marco Lucio Bittencourt / Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Abstract: This work presents explicit and implicit local integration algorithms applied to the high-order finite element method, based on the eigenvalue decomposition of the elemental mass and stiffness matrices. The solution procedure is performed for each element of the mesh and the results are smoothed on the boundary of the elements using a least-squares approximation. The central difference and Newmark methods were considered for developing the element-by-element solution procedures. For the local explicit algorithm, it was observed that the solutions converge to the global solutions obtained with the consistent mass matrix. The local implicit algorithm required subiterations to achieve convergence. Two- and three-dimensional examples of linear and non-linear elasticity are presented. The results showed appropriate accuracy for problems with analytical solutions, and larger examples are also presented with satisfactory results. / Master's degree in Mechanical Engineering, Solid Mechanics and Mechanical Design
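For readers unfamiliar with the time integrators mentioned above, the sketch below shows the standard central-difference scheme for a small linear system M u'' + K u = f(t). It is only the textbook building block, not the element-by-element, eigenvector-decomposition algorithm developed in the dissertation, and the matrices are hypothetical.

```python
import numpy as np

# Minimal central-difference integrator for M u'' + K u = f(t), applied to a
# small hypothetical 2-DOF system. This is the basic explicit scheme the
# dissertation builds on, not its local element-by-element algorithm.
M = np.array([[2.0, 0.0], [0.0, 1.0]])       # mass matrix (hypothetical)
K = np.array([[6.0, -2.0], [-2.0, 4.0]])     # stiffness matrix (hypothetical)
f = lambda t: np.array([0.0, 10.0])          # constant external load

dt, n_steps = 1e-3, 5000
u = np.zeros(2)                               # initial displacement
v = np.zeros(2)                               # initial velocity
a = np.linalg.solve(M, f(0.0) - K @ u)        # initial acceleration
u_prev = u - dt * v + 0.5 * dt**2 * a         # fictitious previous step u_{-1}

for i in range(n_steps):
    t = i * dt
    a = np.linalg.solve(M, f(t) - K @ u)
    u_next = 2.0 * u - u_prev + dt**2 * a     # central-difference update
    u_prev, u = u, u_next

print("displacement after", n_steps * dt, "s:", u)
```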
464

Ajuste de curvas por polinômios com foco no currículo do ensino médio / Curve fitting polynomials focusing on high school curriculum

Santos, Alessandro Silva, 1973- 27 August 2018 (has links)
Advisor: Lúcio Tunes dos Santos / Professional master's thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: This work grew out of a proposal to develop curve fitting with an approach that seeks to enrich the study of functions in the primary and secondary school curriculum. The student is introduced to the topic starting from the historical background of curve fitting, moving through curve interpolation, with a focus on polynomial interpolation and the method of least squares, in which linear regression and models such as the exponential fit are presented. The dissertation also describes a tool of great importance in numerical computation: the concept of fitting error, its definition, and how it is estimated. In polynomial interpolation, by developing the Lagrange form, the student is encouraged to work on the operations, the factored form, and an interpretation of the advantages of the method, such as the number and difficulty of the operations it requires. Inverse interpolation and curve interpolation complete that chapter, which uses problem situations whenever possible. The method of least squares leads the student to choose the fitting function and to determine it from the concept of error minimization. Polynomials of degree one (linear regression) and degree two are covered because of their importance in the high school curriculum. Also exploring concepts such as logarithms and exponentials, a linear fit of exponential models is proposed, using problem situations from an area currently in the spotlight, Biomathematics, to model population growth dynamics. In this way the student encounters forecasting tools that are useful in important areas such as public health (modeling epidemics and the development of pathogens), public policy planning (modeling population growth and distribution), and the behavior of the economy (as in forecasts of future interest rates). So that this work can serve as a practical and engaging aid for teachers, the final chapter offers suggested problems in spreadsheet form, making them easy to reproduce and apply in the classroom. / Professional master's degree, Matemática em Rede Nacional - PROFMAT
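The linear fit of an exponential model mentioned above can be illustrated with a short computation: taking logarithms turns P(t) = P0*exp(r t) into a straight line, which is then fitted by least squares. The population figures below are invented for illustration and are not from the dissertation.

```python
import numpy as np

# Least-squares fit of an exponential growth model P(t) = P0 * exp(r t)
# by linearization: ln P = ln P0 + r t, so a degree-1 polynomial fit on
# (t, ln P) recovers r and ln P0. The data are made up for illustration.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])               # years
P = np.array([100.0, 132.0, 171.0, 228.0, 300.0, 393.0])   # population

r, ln_P0 = np.polyfit(t, np.log(P), 1)     # slope and intercept of the linearized model
P0 = np.exp(ln_P0)
P_hat = P0 * np.exp(r * t)

# Fitting error: root-mean-square of the residuals, one simple way to
# quantify the "fitting error" concept discussed in the text.
rmse = np.sqrt(np.mean((P - P_hat) ** 2))
print(f"P0 = {P0:.1f}, r = {r:.3f}, RMSE = {rmse:.2f}")
```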
465

DETERMINACIÓN DE COMUNIDADES FITOPLACTÓNICAS MEDIANTE ESPECTROSCOPÍA VISIBLE Y SU RELACIÓN CON LOS RECUENTOS POR MICROSCOPIA DE EPIFLUORESCENCIA / Determination of phytoplankton communities by visible spectroscopy and its relation to epifluorescence microscopy counts

MARTÍNEZ GUIJARRO, MARÍA REMEDIOS 11 February 2010 (has links)
Phytoplankton is one of the organic constituents of natural waters, and its assessment is important for evaluating the ecological status of aquatic ecosystems, including coastal and transitional waters. Anthropogenic nutrient enrichment and alterations in the food chain, including the reduction of phytoplankton consumers, produce a dramatic increase in phytoplankton stocks. This has caused significant changes in the nutrient cycles of coastal areas, in water quality, in biodiversity, and in the overall state of the ecosystem. Characterizing phytoplankton communities in aquatic ecosystems by epifluorescence microscopy counts is a task that is costly in time, materials, and highly qualified personnel. The aim of this work, without intending to replace microscope counts but rather to complement them, is to develop a spectrophotometric technique that reduces these costs by measuring absorption spectra of the samples in the visible range. To carry out this work, samples were taken in five zones of the Mediterranean coast of Spain. These zones correspond to aquatic ecosystems influenced by both continental waters and the Mediterranean Sea, that is, coastal zones influenced by continental waters (continental plumes) and continental zones influenced by marine waters (estuaries). The samples collected show a salinity gradient, depending on the greater or lesser continental influence and also on the lower-salinity surface layer that lies over the denser saline waters. These samples with different salinities also show qualitative and quantitative differences in phytoplankton composition. / Martínez Guijarro, MR. (2010). DETERMINACIÓN DE COMUNIDADES FITOPLACTÓNICAS MEDIANTE ESPECTROSCOPÍA VISIBLE Y SU RELACIÓN CON LOS RECUENTOS POR MICROSCOPIA DE EPIFLUORESCENCIA [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7106 / Palancia
466

Chinese cross-listing corporations performance study - focus on U.S. and Mainland China markets

Jing, Chu January 2013 (has links)
The purpose of this paper is to investigate the impact of cross-listing on companies' performance, in both the short term and the long term. In the short-run study, the sample consists of six companies cross-listed on the NYSE and the Chinese market. In the pre-cross-listing period, the abnormal returns are mostly positive and remain stable; the cumulative abnormal returns are close to 0 and the differences among them are very small. On the cross-listing day, however, all the companies' abnormal returns decline, and after that day the abnormal returns still fluctuate around 0 while most of them are negative, and the differences among each company's cumulative abnormal returns become large. In the long-run study, using a multiple regression of 99 Chinese companies listed in the U.S. markets from 2007 to 2012, there is a significant positive relationship between total asset turnover and cross-listing at the 5% significance level, and a significant negative relationship between market value and cross-listing at the 10% significance level; return on equity and return on assets are both positively related to cross-listing, but not significantly.
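For context, abnormal returns in such event studies are typically computed from a market model estimated over a pre-event window and then accumulated over the event window. The sketch below shows that standard calculation on synthetic returns; it does not reproduce the thesis's sample, estimation windows, or results.

```python
import numpy as np

# Hypothetical market-model event study: AR_t = R_t - (alpha + beta * Rm_t),
# and CAR is the running sum of ARs over the event window. All returns are
# simulated for illustration only.
rng = np.random.default_rng(0)

Rm = rng.normal(0.0005, 0.01, 250)                   # market returns, estimation window
R  = 0.0002 + 1.2 * Rm + rng.normal(0, 0.005, 250)   # stock returns, same window

beta, alpha = np.polyfit(Rm, R, 1)                   # OLS market model (slope, intercept)

Rm_event = rng.normal(0.0005, 0.01, 21)              # event window, e.g. days [-10, +10]
R_event  = 0.0002 + 1.2 * Rm_event + rng.normal(0, 0.005, 21)

AR  = R_event - (alpha + beta * Rm_event)            # abnormal returns
CAR = np.cumsum(AR)                                  # cumulative abnormal returns
print("CAR over the event window:", CAR[-1])
```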
467

Regularization Techniques for Linear Least-Squares Problems

Suliman, Mohamed Abdalla Elhag 04 1900 (has links)
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Over the years, several optimization criteria have been used to achieve this task; the most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings, and alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix, so that the modified model is expected to provide a better, more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian with zero-mean, unit-variance (standard), independent and identically distributed (i.i.d.) entries. The second proposed COPRA method deals with discrete ill-posed problems in which the singular values of the linear transformation matrix decay very rapidly to significantly small values. For both proposed algorithms, the regularization parameter is obtained as the solution of a non-linear characteristic equation. We provide a detailed study of the general properties of these functions and address the existence and uniqueness of the root. To demonstrate the performance of the derivations, the first proposed COPRA method is applied to estimate different signals with various characteristics, while the second is applied to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the two proposed methods outperform a set of benchmark regularization algorithms in most cases; in addition, the algorithms are shown to have the lowest run time.
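As a baseline illustration of the problem the thesis addresses, the sketch below computes the regularized least-squares solution on a synthetic i.i.d. Gaussian model and sweeps the regularization parameter to find the value minimizing the MSE against the known signal (an oracle sweep, possible only because the true signal is known here). It is not an implementation of the proposed COPRA algorithms.

```python
import numpy as np

# Basic regularized least-squares (Tikhonov/ridge) on a synthetic Gaussian model,
# sweeping the regularization parameter and picking the one with smallest MSE
# against the known signal. Illustrative baseline only, not COPRA.
rng = np.random.default_rng(1)
n, m = 50, 80
A = rng.normal(0, 1 / np.sqrt(m), (m, n))     # i.i.d. Gaussian measurement matrix
x = rng.normal(0, 1, n)                        # unknown signal
y = A @ x + rng.normal(0, 0.1, m)              # noisy measurements

def rls(A, y, gamma):
    # x_hat = (A^T A + gamma I)^{-1} A^T y
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(p), A.T @ y)

gammas = np.logspace(-4, 1, 50)
mse = [np.mean((rls(A, y, g) - x) ** 2) for g in gammas]
best = gammas[int(np.argmin(mse))]
print(f"oracle regularization parameter ~ {best:.4f}, MSE = {min(mse):.4e}")
```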
468

Least and Inflationary Fixed-Point Logics: A Comparison of Expressive Strength

Dalglish, Steven Jack William January 2020 (has links)
No description available.
469

Signály s omezeným spektrem, jejich vlastnosti a možnosti jejich extrapolace / Bandlimited signals, their properties and extrapolation capabilities

Mihálik, Ondrej January 2019 (has links)
The work is concerned with band-limited signal extrapolation using a truncated series of prolate spheroidal wave functions. Our aim is to investigate the extent to which it is possible to extrapolate a signal from its samples taken in a finite interval. It is often believed that this extrapolation method depends on computing definite integrals; we show an alternative approach using the least-squares method and compare it with methods based on numerical integration. We also consider their performance in the presence of noise and the possibility of using these algorithms for real-time data processing. Finally, all proposed algorithms are tested using real data from a microphone array, so that their performance can be compared.
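A minimal sketch of the least-squares idea: the expansion coefficients of a truncated series are estimated directly from noisy samples in a finite window by solving a linear least-squares problem instead of evaluating definite integrals. Discrete prolate spheroidal (Slepian) sequences from SciPy serve here as a stand-in basis for the continuous prolate spheroidal wave functions, the extrapolation step beyond the window is not shown, and the signal and noise level are made up.

```python
import numpy as np
from scipy.signal.windows import dpss

# Truncated-series representation of a band-limited signal on a finite window,
# with expansion coefficients estimated by ordinary least squares from noisy
# samples. DPSS (Slepian) sequences are a discrete stand-in for the continuous
# prolate spheroidal wave functions discussed in the thesis.
M, NW, K = 256, 4, 7                              # window length, time-bandwidth, #terms
t = np.arange(M) / M
x_true = np.cos(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)   # in-band signal
x = x_true + 0.1 * np.random.default_rng(2).normal(size=M)             # noisy samples

basis = dpss(M, NW, Kmax=K).T                     # (M, K) design matrix of DPSS terms
coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)
x_hat = basis @ coeffs                            # truncated-series reconstruction

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```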
470

On the MSE Performance and Optimization of Regularized Problems

Alrashdi, Ayed 11 1900 (has links)
The amount of data measured, transmitted/received, and stored in recent years has increased dramatically; today, we live in the world of big data. Fortunately, in many applications we can take advantage of possible structures and patterns in the data to overcome the curse of dimensionality. The best-known structures include sparsity, low-rankness, and block sparsity, and they arise in a wide range of applications such as machine learning, medical imaging, signal processing, social networks, and computer vision. This has also led to specific interest in recovering signals from noisy compressed measurements (the compressed sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured, and the structure can be captured by a regularizer function. This gives rise to interest in regularized inverse problems, where the process of reconstructing the structured signal is modeled as a regularized problem. This thesis focuses on finding the optimal regularization parameter for such problems as ridge regression, LASSO, square-root LASSO, and low-rank generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT), which has recently been used to precisely predict performance errors.
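As a simple illustration of the tuning problem studied in the thesis, the sketch below sweeps the LASSO regularization parameter on synthetic sparse-recovery data and picks the value minimizing the MSE against the known signal. It uses scikit-learn's LASSO solver and an oracle sweep over the parameter grid; it does not implement the CGMT-based analytical tuning.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Tuning the LASSO regularization parameter to minimize the mean-squared error
# of the recovered sparse signal on synthetic data. Illustration only; not the
# thesis's CGMT-based prediction of the optimal parameter.
rng = np.random.default_rng(3)
n, m, k = 200, 100, 10                          # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)   # sparse signal
A = rng.normal(0, 1 / np.sqrt(m), (m, n))                   # Gaussian sensing matrix
y = A @ x + rng.normal(0, 0.05, m)                           # noisy measurements

alphas = np.logspace(-4, 0, 40)
mses = []
for a in alphas:
    x_hat = Lasso(alpha=a, max_iter=20000).fit(A, y).coef_
    mses.append(np.mean((x_hat - x) ** 2))

best = alphas[int(np.argmin(mses))]
print(f"best alpha ~ {best:.4f}, MSE = {min(mses):.4e}")
```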
