651

Secure Coding Practice in Java: Automatic Detection, Repair, and Vulnerability Demonstration

Zhang, Ying 12 October 2023 (has links)
The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely is challenging for developers who lack cybersecurity training. Prior studies show that many developers use APIs insecurely, thereby introducing vulnerabilities in their software. Despite the availability of various tools designed to identify insecure API usage, their effectiveness in helping developers with secure coding practices remains unclear. This dissertation focuses on two main objectives: (1) exploring the strengths and weaknesses of existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities. Our research started with investigating the effectiveness of current tools in helping developers adopt secure coding practices. We systematically explored the strengths and weaknesses of existing automated tools for detecting API-related vulnerabilities. Through comprehensive analysis, we observed that most existing tools merely report misuses without suggesting any customized fixes. Moreover, developers often rejected tool-generated vulnerability reports due to their concerns about the correctness of detection and the exploitability of the reported issues. To address these limitations, the second work proposed SEADER, an example-based approach to detect and repair security-API misuses. Given an exemplar ⟨insecure, secure⟩ code pair, SEADER compares the snippets to infer an API-misuse template and the corresponding fixing edit. Based on the inferred information, given a program, SEADER performs inter-procedural static analysis to search for security-API misuses and to propose customized fixes. The third work leverages ChatGPT-4.0 to automatically generate security test cases. These test cases can demonstrate how vulnerable API usage facilitates supply chain attacks on specific software applications. By running such test cases during software development and maintenance, developers can gain more relevant information about exposed vulnerabilities, and may better create secure-by-design and secure-by-default software. / Doctor of Philosophy / The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely can be challenging, especially for developers who are not trained in cybersecurity. Prior work shows that many developers use APIs insecurely, consequently introducing vulnerabilities in their software. Despite the availability of various tools designed to identify insecure API usage, it is still unclear how well they help developers with secure coding practices. This dissertation focuses on (1) exploring the strengths and weaknesses of existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities. We first systematically evaluated the strengths and weaknesses of existing automated API-related vulnerability detection tools. We observed that most existing tools merely report misuses without suggesting any customized fixes. Additionally, developers often reject tool-generated vulnerability reports due to their concerns about the correctness of detection and whether the reported vulnerabilities are truly exploitable.
To address the limitations found in our study, the second work proposed a novel example-based approach, SEADER, to detect and repair insecure API usage. The third work leverages ChatGPT-4.0 to automatically generate security test cases and to demonstrate how vulnerable API usage facilitates supply chain attacks on given software applications.
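To make the example-based idea concrete, the following is a minimal, hypothetical illustration of the kind of ⟨insecure, secure⟩ exemplar pair such a tool could compare. The class and method names are ours, not SEADER's; only the standard javax.crypto.Cipher API is real. The only difference between the two variants is the transformation string, which is exactly the kind of localized edit from which an API-misuse template and fix could be inferred.

    import javax.crypto.Cipher;

    public class CipherExemplar {
        // Insecure variant: DES is a broken cipher and ECB mode leaks plaintext patterns.
        static Cipher insecureCipher() throws Exception {
            return Cipher.getInstance("DES/ECB/PKCS5Padding");
        }

        // Secure variant: AES in an authenticated mode (GCM) with no extra padding.
        static Cipher secureCipher() throws Exception {
            return Cipher.getInstance("AES/GCM/NoPadding");
        }
    }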
652

Self-Assembled Multilayered Dielectric Spectral Filters

Chandran, Ashwin 11 January 2002 (has links)
Thin film optical filters are made by depositing thin films of optical materials on a substrate in such a way as to produce the required optical and mechanical properties. The Electrostatic Self-Assembly (ESA) process is accomplished by the alternating adsorption of poly-anionic and poly-cationic molecules onto successively oppositely charged surfaces. This technique offers several advantages, such as ease of fabrication, molecular-level uniformity, stable multilayer synthesis, and avoidance of the need for a vacuum environment. The ESA process is an excellent choice for manufacturing optical thin film coatings due to its capability to incorporate multiple properties into films at the molecular level and because it is a fast and inexpensive process. The ESA process, as a method for manufacturing optical thin film filters, is investigated in detail in this thesis. A specific design was created and analyzed using TFCalc, a commercial thin film design software package. A sensitivity analysis detailing the changes in filter response to errors in thickness and refractive index produced by the ESA process was performed. The results showed that, with a high level of quality control, highly reliable and accurate optical thin films can be made by the ESA process. / Master of Science
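As a rough indication of how strongly the reflectance of such a filter depends on the refractive indices, the response of an ideal quarter-wave stack at its design wavelength has a simple closed form. The sketch below uses the standard textbook admittance argument (each quarter-wave layer transforms the admittance Y into n²/Y) for an air | (HL)^N | substrate stack; it is a generic illustration with hypothetical index values, not a design or result taken from the thesis.

    public class QuarterWaveStack {
        // Reflectance of an ideal quarter-wave stack air | (HL)^N | substrate at the
        // design wavelength. Each quarter-wave layer maps the admittance Y -> n*n/Y,
        // so N high/low pairs give Y = (nH/nL)^(2N) * nS seen from the incident medium.
        static double reflectance(double n0, double nH, double nL, double nS, int pairs) {
            double y = Math.pow(nH / nL, 2.0 * pairs) * nS; // equivalent admittance of the coated substrate
            double r = (n0 - y) / (n0 + y);                 // amplitude reflection coefficient
            return r * r;                                   // reflectance
        }

        public static void main(String[] args) {
            // Hypothetical indices: nH ~ 2.30, nL ~ 1.45, glass substrate 1.52, 10 pairs.
            double nominal = reflectance(1.0, 2.30, 1.45, 1.52, 10);
            double perturbed = reflectance(1.0, 2.30 * 0.99, 1.45, 1.52, 10); // 1% error in nH
            System.out.printf("R nominal = %.4f, R with 1%% nH error = %.4f%n", nominal, perturbed);
        }
    }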
653

Static and dynamic job-shop scheduling using rolling-horizon approaches and the Shifting Bottleneck Procedure

Ghoniem, Ahmed 10 July 2003 (has links)
Over the last decade, the semiconductor industry has witnessed a steady increase in its complexity based on improvements in manufacturing processes and equipment. Progress in the technology used is no longer the key to success, however. In fact, semiconductor technology has reached such a high level of complexity that improvements appear at a slow pace. Moreover, the diffusion of technology among competitors shows that traditional approaches based on technological advances and innovations are not sufficient to remain competitive. A recent crisis in the semiconductor field in the summer of 2001 made it even clearer that optimizing the operational control of semiconductor wafer fabrication facilities is a vital key to success. Operations research-oriented studies have been carried out to this end for the last five years. None of them, however, suggests a comprehensive model and solution to the operational control problem of a semiconductor manufacturing facility. Two main approaches, namely mathematical programming and dispatching rules, have been explored in the literature so far, either partially or entirely dealing with this problem. Adapting the Shifting Bottleneck (SB) procedure is a third approach that has motivated many studies. Most research focuses on optimizing a certain objective function under idealized conditions and thus does not take into consideration system disruptions such as machine breakdowns. While many papers address adaptations of the SB procedure, the problem of re-scheduling jobs dynamically to take disruptions and local disturbances (machine breakdowns, maintenance, etc.) into consideration offers interesting perspectives for research. Dealing with local disturbances in a production environment and analyzing their impact on scheduling policies is a complex issue. It becomes even more complex in the semiconductor industry because of the numerous inherent constraints to take into account. The problem addressed in this thesis consists of studying dynamic scheduling in a job-shop environment where local disturbances occur. This research focuses on scheduling a large job shop and developing re-scheduling policies for when local disturbances occur. The re-scheduling can be applied to the whole production horizon considered in the instance, or to a restricted period T that becomes a decision variable of the problem. The length of the restricted re-scheduling horizon T can significantly influence the overall results; its impact on overall performance is studied. Future extensions can be made to include constraints that arise in the semiconductor industry, such as the presence of parallel and batching machines, reentrant flows, and the lot dedication problem. The theoretical results developed through this research will be applied to data sets to study their efficiency. We hope this methodology will bring useful insights into dealing effectively with local disturbances in production environments. / Master of Science
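As a purely illustrative sketch of the restricted re-scheduling horizon described above (the class and field names are ours, not from the thesis): when a disturbance occurs at time t, only operations scheduled to start within the window [t, t + T] are released for re-scheduling, while the rest of the schedule is kept frozen; T is then the decision variable whose length is studied.

    import java.util.List;
    import java.util.stream.Collectors;

    public class RollingHorizonRepair {
        // An operation of a job with its currently scheduled start time and duration.
        record Operation(String job, String machine, double start, double duration) {}

        // Operations whose scheduled start falls inside the restricted horizon [t, t + T]
        // are the only ones released for re-scheduling after a disturbance at time t.
        static List<Operation> releasedForRescheduling(List<Operation> schedule, double t, double T) {
            return schedule.stream()
                    .filter(op -> op.start() >= t && op.start() < t + T)
                    .collect(Collectors.toList());
        }
    }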
654

Enhancing SAT-based Formal Verification Methods using Global Learning

Arora, Rajat 25 May 2004 (has links)
With the advances in VLSI and System-On-Chip (SOC) technology, the complexity of hardware systems has increased manifold. Today, 70% of the design cost is spent in verifying these intricate systems. The two most widely used formal methods for design verification are Equivalence Checking and Model Checking. Equivalence Checking requires that the implementation circuit be exactly equivalent to the specification circuit (golden model). In other words, for each possible input pattern, the implementation circuit should yield the same outputs as the specification circuit. Model Checking, on the other hand, checks whether the design holds certain properties, which in turn are indispensable for the proper functionality of the design. The complexities of both Equivalence Checking and Model Checking are exponential in the circuit size. In this thesis, we first propose a novel technique to improve SAT-based Combinational Equivalence Checking (CEC) and Bounded Model Checking (BMC). The idea is to perform a low-cost preprocessing that statically induces global signal relationships into the original CNF formula of the circuit under verification, and hence reduces the complexity of the SAT instance. This efficient and effective preprocessing quickly builds the implication graph for the circuit under verification, yielding a large set of logic implications composed of direct, indirect, and extended backward implications. These two-node implications (spanning time-frame boundaries) are converted into two-literal clauses and added to the original CNF database. The added clauses constrain the search space of the SAT-solver engine and provide correlation among the different variables, which enhances Boolean Constraint Propagation (BCP). Experimental results on large and difficult ISCAS'85, ISCAS'89 (full scan), and ITC'99 (full scan) CEC instances and ISCAS'89 BMC instances show that our approach is independent of the state-of-the-art SAT solver used, and that the added clauses help to achieve more than an order of magnitude speedup over the conventional approach. Also, comparison with Hyper-Resolution [Bacchus 03] suggests that our technique is much more powerful, yielding non-trivial clauses that significantly reduce the complexity of the SAT instance. Secondly, we propose a novel global learning technique that helps to identify highly non-trivial relationships among signals in the circuit netlist, thereby boosting the power of the existing implication engine. We call this new class of implications 'extended forward implications', and show their effectiveness through the additional untestable faults they help to identify. Thirdly, we propose a suite of lemmas and theorems to formalize global learning. We show through implementation that these theorems help to significantly simplify a generic CNF formula (from Formal Verification, Artificial Intelligence, etc.) by identifying the necessary assignments, equivalent signals, complementary signals, and other non-trivial implication relationships among its variables. We further illustrate through experimental results that the CNF formula simplification obtained using our tool surpasses the simplification obtained using other preprocessors. / Master of Science
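As a small illustration of the clause-generation step described above (variable numbering and names are ours): a learned two-node implication a → b is logically equivalent to the two-literal clause (¬a ∨ b), which can be appended directly to the CNF database. The sketch below uses the DIMACS convention, where a negative integer denotes a negated variable.

    import java.util.ArrayList;
    import java.util.List;

    public class ImplicationClauses {
        // Each implication {a, b} encodes "literal a implies literal b".
        // a -> b is equivalent to (!a OR b), i.e. the clause {-a, b} in DIMACS form.
        static List<int[]> toTwoLiteralClauses(List<int[]> implications) {
            List<int[]> clauses = new ArrayList<>();
            for (int[] imp : implications) {
                clauses.add(new int[] { -imp[0], imp[1] });
            }
            return clauses;
        }
    }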
655

An efficiency rating tool for process-level VHDL behavioral models

Wicks, John A. 06 June 2008 (has links)
Due to the great complexity of the VHDL models that are created today, the amount of processing time required to simulate these models and the amount of labor required to develop them have become critical issues. The processing time required to simulate a model can be directly influenced by the efficient use of VHDL concepts in creating the model. This dissertation presents an approach to aiding the modeler in the development of more efficient VHDL models by measuring the simulation efficiency of process-level VHDL behavioral models. Research into determining which VHDL constructs and modeling styles are most efficient is presented. The development and use of a tool that parses VHDL behavioral models and reports the efficiency of the code as a numerical efficiency rating is also presented. / Ph. D.
656

Visualization of movement in static media : How instructions can be designed to convey human movements in static media.

Pernell, Liam January 2024 (has links)
This is a bachelor thesis in information design with a focus on informative illustration at Mälardalen University. The thesis was carried out in collaboration with Klättercentret Västerås and investigates how to visualize human movements in static media to facilitate the understanding of bouldering techniques for beginners. By applying various visualization techniques, the thesis aims to develop effective instructional materials that enable users to understand and perform complex movements without seeing any actual movement. The methods included the use of a think-aloud protocol and semi-structured interviews to collect data from test subjects. Two main visualization techniques were tested: multiple static-simultaneous pictures and overlapping multiples. In addition to these, movement-indicating arrows and movement lines were used as clarifying additions. The test subjects studied the instructions and then performed the movements on a climbing wall. Their performance and feedback were documented and analyzed to assess the comprehensibility and usability of the material. The instructional material was developed in an iterative process, in which the feedback and performance from each user test were used to refine the material. The results show that multiple static-simultaneous pictures were most effective in conveying complex movements. Users found these visualizations easier to understand and follow compared to overlapping multiples. The results also indicate that visualizations with clear visual cues, such as arrows and color markings, reduced cognitive load and facilitated comprehension.
657

An applicable methodology for stress analysis of lightweight welded structures

Back, Elias January 2024 (has links)
This thesis work intended to verify an analysis method for welded, thin-walled geometries. Guidelines for stress evaluation in welded structures exist and are standardised, but they are often verified for structures with greater plate thicknesses, such as those found in the offshore industry. Thinner structures are commonly analysed using the hotspot method, but questions remain as to whether the method is valid and can provide conservative results in thin-walled geometries. One goal of the thesis work was to create a test plan to experimentally verify the results given by FE models of welded structures, as well as to investigate the strain gradient close to the weld toe. The plan, as well as two different welded specimens, were designed and manufactured so that future analysis can be performed on them. The hotspot method was also evaluated using FE analysis on geometries where two tubes were welded together in a T-joint with varying diameter, thickness, and applied loads. A total of 13 different models were created using solid elements, in which hotspot stress extrapolation was evaluated using different extrapolation points and evaluation paths. In conclusion, it was found that the method provides a correct extrapolation of the geometric stress when stress extrapolation points at distances of 0.4t and 1.0t from the weld toe are used (t = plate thickness). The analysis also showed that the geometric stress can be harder to separate from the non-linear part of the stress gradient for some profiles with a thickness of 0.89 mm. In some cases, this resulted in a small part of the non-linear stress being included in the extrapolation, which increased the extrapolated hotspot stress. Comparisons between the hotspot stress and geometric parameters showed that stress concentration factors can be derived which reduce the need for time-consuming FE models.
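For reference, the two-point extrapolation mentioned above amounts to a linear fit through the surface stresses read out at 0.4t and 1.0t from the weld toe, evaluated at the toe itself. The sketch below shows that relation; it is the standard hot-spot formula consistent with those read-out points, written as our own illustration rather than code from the thesis.

    public class HotspotExtrapolation {
        // Linear extrapolation of the surface stress to the weld toe from read-out
        // points at 0.4t and 1.0t (t = plate thickness):
        // sigma_hs = sigma(0.4t) + (0.4/0.6) * (sigma(0.4t) - sigma(1.0t))
        //          = 1.67 * sigma(0.4t) - 0.67 * sigma(1.0t)
        static double hotspotStress(double sigmaAt04t, double sigmaAt10t) {
            return 1.67 * sigmaAt04t - 0.67 * sigmaAt10t;
        }
    }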
658

Into the Gates of Troy : A Comparative Study of Antivirus Solutions for the Detection of Trojan Horse Malware.

Hinne, Tom January 2024 (has links)
In the continuously evolving field of malware investigation, a Trojan horse, which appears as innocent software from the user's perspective, represents a significant threat and challenge for antivirus solutions because of its deceptive nature and the various malicious functionalities it provides. This study compares the effectiveness of three free antivirus solutions for Linux systems (DrWeb, ClamAV, ESET NOD32) against a dataset of 1919 Trojan malware samples. The evaluation assesses their detection capabilities, resource usage, and the core functionalities they offer. The results revealed a trade-off between these three aspects: DrWeb achieved the highest detection rate (93.43%) but consumed the most resources and provided the most comprehensive functionality. ClamAV balanced detection and resource usage with less functionality, while ESET NOD32 prioritised low resource usage but showed a lower detection rate than the other engines (80.93%). Interestingly, the results showed that the category of Trojan horse malware and the file format analysed can affect the detection capabilities of the evaluated antiviruses. This suggests that there is no “silver bullet” for Linux systems against Trojans, and further research is needed to thoroughly assess the detection capabilities of antivirus engines and to propose advanced detection methods for robust protection against Trojans on Linux systems.
659

Investigation of the application of UPFC controllers for weak bus systems subjected to fault conditions : an investigation of the behaviour of a UPFC controller : the voltage stability and power transfer capability of the network and the effect of the position of unsymmetrical fault conditions

Jalboub, Mohamed January 2012 (has links)
In order to identify the weakest bus in a power system, so that the Unified Power Flow Controller (UPFC) can be connected there, an investigation of static and dynamic voltage stability is presented. Two stability indices, one static and one dynamic, have been proposed in the thesis. Multi-Input Multi-Output (MIMO) analysis has been used for the dynamic stability analysis. Results based on the Western Systems Coordinating Council (WSCC) 3-machine, 9-bus test system and the IEEE 14-bus Reliability Test System (RTS) show that these indices detect, with a high degree of accuracy, the weakest bus, the weakest line, and the voltage stability margin in the test system before it suffers voltage collapse. Recently, Flexible Alternating Current Transmission Systems (FACTS) have become significant due to the need to strengthen existing power systems. The UPFC has been identified in the literature as the most comprehensive and complex FACTS device that has emerged for the control and optimization of power flow in AC transmission systems. Significant research has been done on the UPFC. However, the extent to which a UPFC connected to the weakest bus can maintain power flows under fault conditions, not only in the line where it is installed but also in adjacent parallel lines, remains to be studied. In the literature, it has normally been assumed that the UPFC is disconnected during a fault period. This investigation shows that fault conditions can affect the UPFC significantly, even if they occur at buses far from it in the power system. This forms the main contribution presented in this thesis. The impact of the UPFC in minimizing the disturbances in voltages, currents, and power flows under fault conditions is investigated. The WSCC 3-machine, 9-bus test system is used to investigate the effect of unsymmetrical fault type and position on the operation of the UPFC controller in accordance with G59 protection, stability, and regulation requirements. Results show that it is necessary to disconnect the UPFC controller from the power system during unsymmetrical fault conditions.
660

Valuation and Hedging of Foreign Exchange Barrier Options

Mertlík, Jakub January 2004 (has links)
The main aim of this thesis is to analyze and empirically test various valuation models and hedging schemes for foreign exchange barrier options, and to assess their robustness with respect to changing market conditions. The purpose of the main empirical section is to gain a detailed understanding of the static and dynamic performance of the analyzed models for barrier option payoffs, mainly under extreme market conditions, where we benchmark the various hedging schemes. As a by-product, we analyze how well some of the model assumptions hold in a real-world setting, as well as the model dependency of the barrier options.
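As an illustration of the simplest kind of valuation model such a benchmarking exercise would include, the sketch below prices a down-and-out FX call by Monte Carlo under a lognormal model with domestic and foreign interest rates (Garman-Kohlhagen-style dynamics). It is a generic, discretely monitored example with hypothetical parameter values, not one of the models or hedging schemes actually tested in the thesis.

    import java.util.Random;

    public class BarrierOptionMC {
        // Monte Carlo price of a down-and-out call on an FX rate under geometric
        // Brownian motion with drift (rd - rf). The barrier is checked at each
        // simulation step, so this is a discretely monitored approximation.
        static double downAndOutCall(double spot, double strike, double barrier,
                                     double rd, double rf, double vol, double maturity,
                                     int steps, int paths, long seed) {
            Random rng = new Random(seed);
            double dt = maturity / steps;
            double drift = (rd - rf - 0.5 * vol * vol) * dt;
            double diffusion = vol * Math.sqrt(dt);
            double payoffSum = 0.0;
            for (int p = 0; p < paths; p++) {
                double s = spot;
                boolean knockedOut = false;
                for (int i = 0; i < steps && !knockedOut; i++) {
                    s *= Math.exp(drift + diffusion * rng.nextGaussian());
                    if (s <= barrier) knockedOut = true;
                }
                if (!knockedOut) payoffSum += Math.max(s - strike, 0.0);
            }
            return Math.exp(-rd * maturity) * payoffSum / paths;
        }

        public static void main(String[] args) {
            // Hypothetical EUR/USD-style inputs: spot 1.10, strike 1.10, barrier 1.00.
            double price = downAndOutCall(1.10, 1.10, 1.00, 0.03, 0.01, 0.10, 1.0, 252, 100_000, 42L);
            System.out.printf("Down-and-out call price ~ %.5f%n", price);
        }
    }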
