961

Algorithm aversion in scenarios with acquisition and forfeiture framing

Strömstedt, Björn January 2021 (has links)
Humankind is becoming increasingly dependent on algorithms in everyday life. Algorithmic decision support has existed since the advent of computers but is becoming more sophisticated with elements of Artificial Intelligence (AI). Though decision support systems outperform humans in many areas, e.g. in forecasting tasks, the willingness to trust and use algorithmic decision support is lower than for a corresponding human. Many factors behind this algorithm aversion have been investigated, but there is a gap in research on the effects of scenario characteristics. Results provided by this study showed that people prefer recommendations from a human expert over algorithmic decision support. This was also reflected in the self-perceived likelihood of keeping a choice when the decision support recommended the other option: the likelihood was lower for the group with a human expert as decision support. The results also showed that decision support, regardless of type, is trusted more by the user in an acquisition-framed scenario than in a forfeiture-framed one. However, very limited support was found for the hypothesized interaction between decision support and scenario type, where algorithm aversion was expected to be stronger for forfeiture than for acquisition scenarios. Moreover, the results showed that, independent of the experimental manipulations, participants with a positive general attitude towards AI had higher trust in algorithmic decision support. Together, these new results may be valuable for future research into algorithm aversion but must also be extended and replicated using different scenarios and situations.
962

Real-time visualization of 3D atmospheric data using OpenSpace

Lundqvist, William January 2021 (has links)
Visualization is an important tool for presenting data to humans in an easy-to-understand manner. New radar technology in development that can gather 3D atmospheric data opens up possibilities for visualizing that data with 3D visualization tools such as OpenSpace, an open-source tool for visualization of the cosmos and universe. Different rendering methods inside OpenSpace are evaluated to determine which is most suitable for visualizing atmospheric 3D data. The data comes as HDF5 files containing a list of beams with samples scattered along them; an algorithm is implemented to transform the beam data into a 3D volume that OpenSpace can render. Tests are implemented to determine which parameter of the algorithm affects CPU execution time the most: the algorithm is executed 100 times with different combinations of parameters, and a complexity analysis of the algorithm is carried out. The results show that the height of the volume affects execution time the most at larger sizes; at small sizes, the differences between the dimensions are insignificant. The combination of height and smoothing slowed execution down by a larger margin than width and smoothing or depth and smoothing. By implementing volumetric ray casting and point clouds as rendering methods, the results showed that both can visualize the data in real time. Volumetric ray casting rendered with a clearer result than point clouds and is therefore the preferred method for rendering atmospheric 3D volumes, provided certain criteria are met.
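The beam-to-volume step the abstract describes can be pictured with a short NumPy sketch. Everything concrete below — nearest-voxel splatting, averaging duplicate hits, the argument names — is an illustrative assumption, not the thesis's actual algorithm (which also involves a smoothing parameter):

```python
import numpy as np

def beams_to_volume(points, values, dims=(64, 64, 64)):
    """Resample scattered beam samples onto a regular 3D grid.

    points: (N, 3) array of sample positions along the radar beams.
    values: (N,) array of measured values at those positions.
    dims:   (width, height, depth) of the output volume.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    # Map every sample position to its nearest voxel index.
    idx = ((points - lo) / (hi - lo + 1e-9) * (np.array(dims) - 1)).astype(int)
    volume = np.zeros(dims)
    counts = np.zeros(dims)
    for (i, j, k), v in zip(idx, values):
        volume[i, j, k] += v
        counts[i, j, k] += 1
    # Average samples falling in the same voxel; empty voxels stay zero.
    np.divide(volume, counts, out=volume, where=counts > 0)
    return volume
```

A smoothing pass (e.g. a Gaussian filter over the volume) would follow this step, which is consistent with the abstract's observation that height combined with smoothing dominates execution time.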
963

Assessing HTTP Security Header implementations: A study of Swedish government agencies’ first line of defense against XSS and client-side supply chain attacks

Johnson, Ludwig, Mårtensson, Lukas January 2021 (has links)
Background. Security on the web is a fundamental requirement as the web becomes a bigger part of society and more information than ever is shared over it. However, as recent incidents have shown, even Swedish government agencies have had issues with their website security. One such example is when a client-side supply chain used by several governmental websites was hacked and malicious JavaScript was subsequently found on those websites. This study is therefore aimed at assessing the security of Swedish government agencies’ first line of defense against attacks like XSS and client-side supply chain attacks. Objectives. The main objective of the thesis is to assess the first line of defense, namely HTTP security headers, of Swedish government agency websites. In addition, statistics on which HTTP security headers are actually used by Swedish government agencies today were gathered for comparison with similar studies. Methods. To fulfill the objectives of the thesis, a scan of all Swedish government agency websites, found in Myndighetsregistret, was completed and an algorithm was developed to assess the implementation of the security features. To facilitate tunable assessments for different types of websites, the algorithm has granular weights that can be assigned to each test, making it more generalizable. Results. The results show a low overall implementation rate of the various HTTP security headers among the Swedish government agency websites. However, compared to similar studies, the adoption of all security features is higher among the websites tested in this thesis. Conclusions. Previous tools and studies mostly checked whether a header was implemented or not. Our algorithm also assesses the strength of the security header implementation. According to our results, there is a significant difference between a security header merely being implemented and being implemented well enough to provide adequate security. Traditional tools for testing HTTP security headers may therefore be inefficient and misleading.
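As an illustration of the weighted-scoring idea, here is a minimal Python sketch. The specific header set, weights, and pass conditions are invented for illustration; the thesis's algorithm grades implementation strength with far more granular tests:

```python
import requests

# Illustrative weights and checks -- the thesis assigns tunable,
# granular weights per test; these particular values are assumptions.
CHECKS = {
    "Strict-Transport-Security": (3, lambda v: "max-age" in v.lower()),
    "Content-Security-Policy":   (3, lambda v: "default-src" in v.lower()),
    "X-Content-Type-Options":    (1, lambda v: v.strip().lower() == "nosniff"),
    "X-Frame-Options":           (1, lambda v: v.strip().upper() in ("DENY", "SAMEORIGIN")),
    "Referrer-Policy":           (1, lambda v: v.strip() != ""),
}

def score_headers(url):
    """Score a site's response headers: a header only earns its weight
    if it is both present and set to a minimally safe value."""
    headers = requests.get(url, timeout=10).headers  # case-insensitive dict
    score = sum(weight
                for name, (weight, ok) in CHECKS.items()
                if name in headers and ok(headers[name]))
    return score, sum(weight for weight, _ in CHECKS.values())
```

Scoring the configured value rather than mere presence is exactly the distinction the conclusions draw between a header being implemented and being implemented well.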
964

An Investigation of Radiation Treatment Plan Quality and Time Constraints: Factors Affecting Optimization for RayStation's Collapsed Cone Algorithm

DeLuca, Enrico Donald January 2021 (has links)
No description available.
965

Algorithm-Based Meta-Analysis Reveals the Mechanistic Interaction of the Tumor Suppressor LIMD1 With Non-Small-Cell Lung Carcinoma

Wang, Ling, Sparks-Wallace, Ayrianna, Casteel, Jared L., Howell, Mary E.A., Ning, Shunbin 31 March 2021 (has links)
Non-small-cell lung carcinoma (NSCLC) is the major type of lung cancer, which is among the leading causes of cancer-related deaths worldwide. LIMD1 was previously identified as a tumor suppressor in lung cancer, but its detailed interaction with NSCLC remains unclear. In this study, we have carried out multiple genome-wide bioinformatic analyses for a comprehensive understanding of LIMD1 in NSCLC, using various online algorithm platforms built on mega-databases derived from both clinical and cell line samples. Our results indicate that LIMD1 expression is significantly downregulated at both the mRNA and protein levels in both lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC), with a considerable contribution from its promoter methylation rather than from gene mutations; the LIMD1 gene is mutated only at a low rate in NSCLC (0.712%). We have further identified LIMD1-associated molecular signatures in NSCLC, including its natural antisense long non-coding RNA LIMD1-AS1 and a pool of membrane trafficking regulators. We have also identified a subgroup of tumor-infiltrating lymphocytes, especially neutrophils, whose tumor infiltration levels significantly correlate with LIMD1 level in both LUAD and LUSC. However, a significant correlation of LIMD1 with a subset of immune regulatory molecules, such as IL6R and TAP1, was only found in LUAD. Regarding clinical outcomes, LIMD1 expression level significantly correlates with survival only in LUAD (p < 0.1) patients. These findings indicate that LIMD1 plays a survival role in LUAD patients, at least in part by acting as an immune regulatory protein. To further understand the mechanisms underlying the tumor-suppressing function of LIMD1 in NSCLC, we show that LIMD1 downregulation remarkably correlates with the deregulation of multiple pathways that play decisive roles in the oncogenesis of NSCLC, especially those mediated by EGFR, KRAS, PIK3CA, Keap1, and p63 in both LUAD and LUSC, and those mediated by p53 and CDKN2A only in LUAD. This study discloses that LIMD1 can serve as a survival prognostic marker for LUAD patients and provides mechanistic insights into the interaction of LIMD1 with NSCLC, offering valuable information for clinical applications.
966

Applying an Analytical Approach to Shop-Floor Scheduling: A Case Study

Swinehart, Kerry, Yasin, Mahmoud, Guimaraes, Eduardo 01 January 1996 (has links)
In light of the complex and dynamic factors that exist in a typical production facility, manual development of an optimal shop-floor schedule is computationally impractical. This paper discusses the effective use of a heuristic algorithm approach to shop-floor scheduling at the TRW Rack and Pinion Division (RPD) plant in Rogersville, Tennessee. The study documents the introduction of FAST, a computerised scheduling system that employs the Genetic Optimisation Algorithm. Results demonstrate a real potential advantage of using this system for shop-floor scheduling, thus facilitating TRW's journey of continuous improvement.
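For readers unfamiliar with genetic optimisation applied to scheduling, a toy Python sketch of the general idea follows. It is not FAST: the single-machine tardiness objective, order crossover, and parameter choices are assumptions made purely for illustration:

```python
import random

def ga_schedule(proc_times, due_dates, pop_size=50, generations=200):
    """Evolve a job sequence minimizing total tardiness on one machine."""
    n = len(proc_times)

    def tardiness(seq):
        t = total = 0
        for job in seq:
            t += proc_times[job]
            total += max(0, t - due_dates[job])
        return total

    def order_crossover(a, b):
        # Keep a random slice of parent a, fill the rest in parent b's order.
        i, j = sorted(random.sample(range(n), 2))
        child = a[i:j]
        child += [g for g in b if g not in child]
        return child

    population = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=tardiness)
        survivors = population[: pop_size // 2]      # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = order_crossover(a, b)
            if random.random() < 0.2:                # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(population, key=tardiness)
```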
967

Parallel Sparse Matrix-Matrix Multiplication: A Scalable Solution With 1D Algorithm

Hoque, Mohammad Asadul, Raju, Md Rezaul Karim, Tymczak, Christopher John, Vrinceanu, Daniel, Chilakamarri, Kiran 01 January 2015 (has links)
This paper presents a novel implementation of parallel sparse matrix-matrix multiplication using distributed memory systems on heterogeneous hardware architecture. The proposed algorithm is expected to be linearly scalable up to several thousand processors for matrices with dimensions over 10⁶ (a million). Our approach to parallelism is based on 1D decomposition and works for both structured and unstructured sparse matrices. The storage mechanism is based on distributed hash lists, which reduces the latency for accessing and modifying an element of the product matrix while reducing the overall time to merge the partial results computed by the processors. Theoretically, the time and space complexity of our algorithm is linearly proportional to the total number of non-zero elements in the product matrix C. The results of the performance evaluation show that the algorithm scales much better for sparse matrices with larger dimensions. The speedup achieved with our algorithm is much better than that of other existing 1D algorithms: we have been able to achieve about a 500-times speedup with only 672 processors. We also identified the impact of hardware architecture on scalability.
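The row-wise kernel behind a 1D decomposition can be sketched in a few lines of Python, with an ordinary dict standing in for the paper's distributed hash lists. This serial sketch reflects the general scheme only, not the authors' implementation:

```python
def spgemm_rows(A_rows, B_rows):
    """Row-wise sparse product C = A * B.

    A_rows, B_rows: dicts mapping row index -> {col: value}.
    In a 1D scheme each processor owns a contiguous block of A's rows
    and computes the matching rows of C independently; here a plain
    dict plays the role of the distributed hash list accumulator.
    """
    C_rows = {}
    for i, a_row in A_rows.items():
        acc = {}                      # hash accumulator for row i of C
        for k, a_ik in a_row.items():
            for j, b_kj in B_rows.get(k, {}).items():
                acc[j] = acc.get(j, 0) + a_ik * b_kj
        if acc:
            C_rows[i] = acc
    return C_rows
```

Hashing the accumulator keeps both time and space proportional to the non-zeros actually produced in C, which matches the complexity claim in the abstract.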
968

Algorithm for the Management of Osteoporosis

Hamdy, Ronald C., Baim, Sanford, Broy, Susan B., Lewiecki, E. M., Morgan, Sarah L., Tanner, S. B., Williamson, Howard F. 01 October 2010 (has links)
Osteoporosis is a common skeletal disease that weakens bones and increases the risk of fractures. It affects about one half of women over the age of 60, and one third of older men. With appropriate care, osteoporosis can be prevented; and when present, it can be easily diagnosed and managed. Unfortunately, many patients with osteoporosis are not recognized or treated, even after sustaining a low-trauma fracture. Even when treatment is initiated, patients may not take medication correctly, regularly, or for a sufficient amount of time to receive the benefit of fracture risk reduction. Efforts to improve compliance and treatment outcomes include longer dosing intervals and parenteral administration. Clinical practice guidelines for the prevention and treatment of osteoporosis have been developed by the National Osteoporosis Foundation (NOF) but may not be fully utilized by clinicians who must deal with numerous healthcare priorities. We present an algorithm to help streamline the work of busy clinicians so they can efficiently provide state-of-the-art care to patients with osteoporosis.
969

A Study on Integrated Transportation and Facility Location Problem

Oyewole, Gbeminiyi John January 2019 (has links)
The focus of this thesis is the development and solution of problems that simultaneously involve planning the location of facilities and transportation decisions from such facilities to consumers. These have been termed integrated distribution planning problems, with practical application in logistics and manufacturing. The integration involves different planning horizons of short, medium and long term, and sub-optimal decisions are likely when the planning horizons are considered separately. Two categories of problems were considered under the integrated distribution models: the Step-Fixed Charge Location and Transportation Problem (SFCLTP) and the Fixed Charge Solid Location and Transportation Problem (FCSLTP). In these models, the facility location problem is considered a strategic, long-term decision. The short- to medium-term decisions considered are the Step-Fixed Charge Transportation Problem (SFCTP) and the Fixed Charge Solid Transportation Problem (FCSTP). Both SFCTP and FCSTP are extensions of the classical transportation problem, requiring a trade-off between fixed and variable costs along the transportation routes to minimize total transportation costs.

Linearization and subsequent local improvement search techniques were developed to solve the SFCLTP. The first search technique involved the development of a hands-on solution, including a numerical example: linearization was employed as the primal solution, after which structured perturbation logic was developed to improve on the initial solution. The second search technique also utilized the linearization principle as a base solution, together with heuristics to construct transportation problems; the resulting transportation problems were solved to arrive at a solution competitive in effectiveness (solution value) with those obtainable from standard solvers such as CPLEX.

The FCSLTP is formulated and solved using the CPLEX commercial optimization suite. A Lagrange Relaxation Heuristic (LRH) and a Hybrid Genetic Algorithm (HGA) solution of the FCSLTP are presented as alternatives, and comparative studies between the FCSTP and FCSLTP formulations are also presented. The LRH is demonstrated with a numerical example and extended in an attempt to generate improved upper bounds. CPLEX generated better lower and upper bounds than the extended LRH; however, its solution time increased exponentially with problem size. The FCSTP was recommended as a possible starting solution for solving the FCSLTP, owing to its lower solution time and feasible solution generation, as illustrated through experimentation. The HGA developed integrates cost relaxation, a greedy heuristic and a modified stepping stone method into the GA framework to further explore the solution search space. Comparative studies testing the HGA against the classical Lagrange heuristics and CPLEX suggest that the performance of the HGA is competitive with that obtainable from a commercial solver such as CPLEX. / Thesis (PhD)--University of Pretoria, 2019. / Industrial and Systems Engineering / PhD / Unrestricted
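The fixed-charge trade-off that the transportation subproblems build on is easy to state in code. Below is a minimal sketch of the plain fixed charge transportation objective; the data layout and names are illustrative, and the thesis's step-fixed and solid variants add further structure on top of this:

```python
def fctp_cost(flow, unit_cost, fixed_cost):
    """Fixed charge transportation objective: every route (i, j) that
    carries positive flow pays its fixed charge on top of the per-unit
    variable cost -- the trade-off the heuristics must balance.

    flow, unit_cost, fixed_cost: dicts keyed by route tuple (i, j).
    """
    total = 0.0
    for route, x in flow.items():
        if x > 0:
            total += unit_cost[route] * x + fixed_cost[route]
    return total
```

The discontinuity at zero flow is what makes the problem non-linear and motivates the linearization used as the primal solution in the thesis.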
970

Branch-free optimization opportunities for modern hardware (Förgreningsfria optimeringsmöjligheter för modern hårdvara)

Lindfors, Mikael, Leuku Rudander, Max January 2020 (has links)
Traditionally, algorithmic analysis focuses on time complexity, where all individual instructions are assumed to take equal, "constant" time. But with modern hardware it is possible both to reorder instructions (out-of-order execution) and to execute them in parallel (pipelining), which changes the conditions for what can be regarded as effective code. The report identifies bitwise operations and conditional moves as two possible branch-free techniques. The possibilities of achieving improved performance through these techniques are investigated by experiments comparing branch-free code with traditional code. The experiments are performed on both simple routines and sorting algorithms to evaluate whether the techniques lead to a difference in execution time. The results suggest that certain gains in execution time can be found under the right conditions. The type of data that the routines work with, the branch predictor's ability to make qualified guesses, and the cost of the extra instructions that branch-free code entails are crucial factors. In addition, it is crucial that the compiler actually succeeds in performing the optimizations that conditional moves rely on.
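The two techniques the report names can be contrasted in a short sketch. Python is used here for brevity; the selection trick is the classic bitwise-mask idiom, and whether a compiler emits a conditional move for the branchy version is, as the report notes, up to the compiler:

```python
def min_branchy(x, y):
    # Conventional version: the comparison becomes a conditional
    # branch that the CPU's branch predictor must guess.
    return x if x < y else y

def min_branchfree(x, y):
    # Bitwise version: (x < y) is 0 or 1, so -(x < y) is an
    # all-zeros or all-ones mask that selects y or x without
    # branching. In compiled code the same effect can come from
    # a conditional move (cmov) instruction.
    mask = -(x < y)
    return y ^ ((x ^ y) & mask)
```

With unpredictable data the branchy version suffers mispredictions, while the branch-free version always executes the same instructions, which is the trade-off the experiments measure.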
