181
Directing greybox fuzzing to discover bugs in hardware and software. Canakci, Sadullah. 23 May 2022.
Computer systems are deeply integrated into our daily routines such as online shopping, checking emails, and posting photos on social media platforms. Unfortunately, with the wide range of functionalities and sensitive information stored in computer systems, they have become fruitful targets for attackers. Cybersecurity Ventures estimates that the cost of cyber attacks will reach $10.5 trillion USD annually by 2025. Moreover, data breaches have resulted in the leakage of millions of people’s social security numbers, social media account passwords, and healthcare information. With the increasing complexity and connectivity of computer systems, the intensity and volume of cyber attacks will continue to increase. Attackers will continuously look for bugs in these systems and ways to exploit them to gain unauthorized access or leak sensitive information.
Minimizing bugs in systems is essential to remediate security weaknesses. To this end, researchers proposed a myriad of methods to discover bugs. In the software domain, one prominent method is fuzzing, the process of repeatedly running a program under test with “random” inputs to trigger bugs. Among different variants of fuzzing, greybox fuzzing (GF) has especially seen widespread adoption thanks to its practicality and bug-finding capability. In GF, the fuzzer collects feedback from the program (e.g., code coverage) during its execution and guides the input generation based on the feedback. Due to its success in finding bugs in the software domain, GF has gained traction in the hardware domain as well. Several works adapted GF to the hardware domain by addressing the differences between hardware and software. These works demonstrated that GF can be leveraged to discover bugs in hardware designs such as processors.
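To make the coverage-feedback loop concrete, the sketch below is a schematic Python illustration of the generic GF process described above, not code from any of the fuzzers in this thesis: run_with_coverage stands in for one instrumented execution of the program under test, and the single byte-flip mutator is deliberately minimal.

```python
import random

def run_with_coverage(target, data):
    """Placeholder for an instrumented run: returns the set of
    coverage points (e.g., branch edges) exercised by `data`."""
    return target(data)

def greybox_fuzz(target, seeds, iterations=10_000):
    queue = list(seeds)                 # seed corpus
    global_coverage = set()             # all coverage points seen so far
    crashes = []
    for _ in range(iterations):
        parent = random.choice(queue)
        child = mutate(parent)
        try:
            covered = run_with_coverage(target, child)
        except Exception:
            crashes.append(child)       # a bug-triggering input
            continue
        if covered - global_coverage:   # new coverage -> keep the input
            global_coverage |= covered
            queue.append(child)
        # inputs that exercise nothing new are discarded
    return crashes, queue

def mutate(data: bytes) -> bytes:
    """Flip one random byte (real fuzzers use many mutation operators)."""
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]
```

Each of the three mechanisms summarized below modifies a different part of this loop: TargetFuzz changes how the initial queue (the seed corpus) is built, DirectFuzz changes how fuzzing is steered toward target sites, and ProcessorFuzz changes the coverage signal itself.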
In this thesis, we propose three different fuzzing mechanisms, one for software and two for hardware, to expose bugs in the multiple layers of systems. Each mechanism focuses on different aspects of GF to assist the fuzzing procedure for triggering bugs in hardware and software. The first mechanism, TargetFuzz, focuses on producing an effective seed corpus when fuzzing software. The seed corpus consists of a set of inputs serving as starting points to the fuzzer. We demonstrate that carefully selecting seeds to steer GF towards potentially buggy code regions increases the bug-finding capability of GF. Compared to prior works, TargetFuzz discovered 10 additional bugs and achieved 4.03× speedup, on average, in the total elapsed time for finding bugs.
The second mechanism, DirectFuzz, adapts a specific variant of GF for software fuzzing, namely directed greybox fuzzing (DGF), to the hardware domain. The main use case of DGF in software is patch testing, where the goal is to steer fuzzing towards recently modified code regions. Similar to software, hardware design is an incremental and continuous process. Therefore, it is important to prioritize testing of a new component in a hardware design rather than previously well-tested components. DirectFuzz takes several differences between hardware and software (such as clock sensitivity, concurrent execution of multiple code fragments, and hardware-specific coverage) into account to successfully adapt DGF to the hardware domain. DirectFuzz relies on coverage feedback applicable to a wide range of hardware designs and requires limited design knowledge. While this increases its ease of adoption to many different hardware designs, its effectiveness (i.e., bug-finding success) becomes limited in certain hardware designs such as processors. Overall, compared to a state-of-the-art hardware fuzzer, DirectFuzz covers specified target sites (e.g., modified hardware regions) 2.23× faster.
Our third mechanism, ProcessorFuzz, relies on novel coverage feedback tailored for processors to increase the effectiveness of fuzzing in processors. Specifically, ProcessorFuzz monitors value changes in control and status registers, which form the backbone of a processor. ProcessorFuzz addresses several drawbacks of existing works in processor fuzzing. Specifically, existing works can introduce significant instrumentation overhead, provide misleading guidance, and lack support for widely used hardware languages. ProcessorFuzz revealed 8 new bugs in widely used open-source processors and identified bugs 1.23× faster than a prior work.
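The snippet below is only a toy illustration of the CSR-based coverage idea: it treats each previously unseen (CSR, old value, new value) transition in a simulated trace as a new coverage point. The trace format is hypothetical; the actual ProcessorFuzz tooling and its handling of control and status registers are described in the thesis itself.

```python
def csr_transition_coverage(trace, seen_transitions):
    """Count new coverage points in a trace.

    `trace` is a list of dicts mapping CSR names to their values after
    each executed instruction (a hypothetical format for illustration).
    A coverage point is a (csr, old_value, new_value) transition.
    """
    new_points = 0
    prev = {}
    for snapshot in trace:
        for csr, value in snapshot.items():
            if csr in prev and prev[csr] != value:
                point = (csr, prev[csr], value)
                if point not in seen_transitions:
                    seen_transitions.add(point)
                    new_points += 1
            prev[csr] = value
    return new_points

# Example: two instructions that change mstatus and mcause.
seen = set()
trace = [{"mstatus": 0x8, "mcause": 0x0},
         {"mstatus": 0x0, "mcause": 0x2}]
print(csr_transition_coverage(trace, seen))   # -> 2
```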
182
Real-time auto-test monitoring system. Blixt, Fanny. January 2021.
At Marginalen Bank, there are several microservices containing endpoints that are covered by test automation. The documentation of which microservices and endpoints are covered by automated tests is currently done manually and has proven to contain mistakes. The documentation presents the test coverage for all microservices together and for every individual microservice. Marginalen Bank needs a way to automate this process with a system that can take care of test coverage documentation and present the calculated data. Therefore, the purpose of this research is to find a way to create a real-time auto-test monitoring system that automatically detects and monitors microservices, endpoints, and test automation in order to document and present test automation coverage on a website. The system is required to detect changes and update the documentation daily so that it stays accurate.

The implemented system that detects and documents the test automation coverage is called Test Autobahn. For the system to detect all microservices, a custom hosted service was implemented that registers microservices: every microservice with the custom hosted service installed registers itself with Test Autobahn when deployed on a server. For the system to detect all endpoints of each microservice, a custom middleware was implemented that exposes all endpoints of the microservice it is installed in. To let microservices install these components and get registered, a NuGet package containing the custom hosted service and the custom middleware was created. To detect test automations, custom attribute models were created that are inserted into each test automation project. The custom attributes are placed on every test class and method within a project to mark which microservice and endpoint is being tested by every automated test. The attributes of a project can be read through its assembly. To read the custom attributes within every test automation project, a console application, called Test Autobahn Automation Detector (TAAD), was implemented. TAAD reads the assembly to detect the test automations and sends them to Test Autobahn, which couples the found test automations to the corresponding microservices and endpoints. TAAD is installed and run in the build pipeline in Azure DevOps for each test automation project to register the test automations.

To detect and update the test coverage documentation daily, a Quartz.NET hosted service is used. With Quartz.NET implemented, Test Autobahn can execute a specified job on a schedule. Within the job, Test Autobahn detects microservices and endpoints and calculates the test automation coverage for that detection. The test coverage from the latest detection is presented on the webpage, containing both the test coverage for all microservices together and the test coverage for each microservice. According to the evaluations, the system seems to function as anticipated, and the documentation displays the expected data.
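Test Autobahn implements the tagging with .NET custom attributes read from the assembly. Purely as a language-agnostic illustration of that idea, the Python sketch below marks tests with the microservice and endpoint they cover and collects the result into a registry; the names covers, TEST_COVERAGE, and the example endpoints are invented for this sketch and are not part of the actual system.

```python
# Registry of (microservice, endpoint) pairs claimed by automated tests.
TEST_COVERAGE: list[dict] = []

def covers(microservice: str, endpoint: str):
    """Mark a test as covering a specific microservice endpoint,
    analogous to the custom attributes placed on test classes/methods."""
    def decorator(test_func):
        TEST_COVERAGE.append({
            "test": test_func.__qualname__,
            "microservice": microservice,
            "endpoint": endpoint,
        })
        return test_func
    return decorator

@covers("payments-service", "POST /payments")
def test_create_payment():
    ...

@covers("payments-service", "GET /payments/{id}")
def test_get_payment():
    ...

def report():
    """Roughly what a scanner like TAAD would send after reading a project."""
    for entry in TEST_COVERAGE:
        print(f'{entry["test"]} -> {entry["microservice"]} {entry["endpoint"]}')

report()
```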
183
Reporting the Performance of Confidence Intervals in Statistical Simulation Studies: A Systematic Literature Review. Kabakci, Maside. 08 1900.
Researchers and publishing guidelines recommend reporting confidence intervals (CIs) not just along with null hypothesis significance testing (NHST), but for many other statistics such as effect sizes and reliability coefficients. Although CIs and standard errors (SEs) are closely related, examining standard errors alone in simulation studies is not adequate because we do not always know whether a standard error is small enough. Overly small SEs may lead to an increased probability of Type-I error and CIs with a lower coverage rate than expected. Statistical simulation studies generally examine the magnitude of the empirical standard error, but it is not clear whether they examine the properties of confidence intervals. The present study uses a systematic literature review to examine confidence interval investigation and reporting practices, particularly with respect to coverage and bias as diagnostics, in published statistical simulation studies across eight psychology journals. Results from this review will inform editorial policies and will hopefully encourage researchers to report CIs.
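Coverage is checked in simulation by repeating the data-generating process many times and recording how often the interval captures the true parameter. The sketch below is a minimal, generic example for the nominal 95% t-interval of a mean; it is not taken from any of the reviewed studies, and all parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

def empirical_coverage(true_mean=0.0, sigma=1.0, n=30,
                       reps=10_000, level=0.95, seed=1):
    """Fraction of replications whose t-interval contains the true mean."""
    rng = np.random.default_rng(seed)
    tcrit = stats.t.ppf(1 - (1 - level) / 2, df=n - 1)
    hits = 0
    for _ in range(reps):
        x = rng.normal(true_mean, sigma, size=n)
        se = x.std(ddof=1) / np.sqrt(n)
        lo, hi = x.mean() - tcrit * se, x.mean() + tcrit * se
        hits += lo <= true_mean <= hi
    return hits / reps

print(empirical_coverage())   # close to 0.95 when the model is correct
```

Under a correctly specified model the printed proportion should sit near 0.95; values well below the nominal level are exactly the undercoverage this review is concerned with.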
184
Associations of Medicaid Expansion with Insurance Coverage, Stage at Diagnosis, and Treatment among Patients with Genitourinary Malignant Neoplasms. Michel, Katharine F.; Spaulding, Aleigha; Jemal, Ahmedin; Yabroff, K. R.; Lee, Daniel J.; Han, Xuesong. 19 May 2021.
Importance: Health insurance coverage is associated with improved outcomes in patients with cancer. However, it is unknown whether Medicaid expansion through the Patient Protection and Affordable Care Act (ACA) was associated with improvements in the diagnosis and treatment of patients with genitourinary cancer.

Objective: To assess the association of Medicaid expansion with health insurance status, stage at diagnosis, and receipt of treatment among nonelderly patients with newly diagnosed kidney, bladder, or prostate cancer.

Design, Setting, and Participants: This case-control study included adults aged 18 to 64 years with a new primary diagnosis of kidney, bladder, or prostate cancer, selected from the National Cancer Database from January 1, 2011, to December 31, 2016. Patients in states that expanded Medicaid were the case group, and patients in nonexpansion states were the control group. Data were analyzed from January 2020 to March 2021.

Exposures: State Medicaid expansion status.

Main Outcomes and Measures: Insurance status, stage at diagnosis, and receipt of cancer and stage-specific treatments. Cases and controls were compared with difference-in-difference analyses.

Results: Among a total of 340,552 patients with newly diagnosed genitourinary cancers, 94,033 (27.6%) had kidney cancer, 25,770 (7.6%) had bladder cancer, and 220,749 (64.8%) had prostate cancer. Medicaid expansion was associated with a net decrease in the uninsured rate of 1.1 (95% CI, -1.4 to -0.8) percentage points across all incomes and a net decrease in the low-income population of 4.4 (95% CI, -5.7 to -3.0) percentage points compared with nonexpansion states. Expansion was also associated with a significant shift toward early-stage diagnosis in kidney cancer across all income levels (difference-in-difference, 1.4 [95% CI, 0.1 to 2.6] percentage points) and among individuals with low income (difference-in-difference, 4.6 [95% CI, 0.3 to 9.0] percentage points), and in prostate cancer among individuals with low income (difference-in-difference, 3.0 [95% CI, 0.3 to 5.7] percentage points). Additionally, there was a net increase associated with expansion compared with nonexpansion in receipt of active surveillance for low-risk prostate cancer of 4.1 (95% CI, 2.9 to 5.3) percentage points across incomes and 4.5 (95% CI, 0 to 9.0) percentage points among patients in low-income areas.

Conclusions and Relevance: These findings suggest that Medicaid expansion was associated with decreases in uninsured status, increases in the proportion of kidney and prostate cancer diagnosed at an early stage, and higher rates of active surveillance in the appropriate, low-risk prostate cancer population. Associations were concentrated in populations residing in low-income areas and reinforce the importance of improving access to care for all patients with cancer.
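The headline quantities above are difference-in-difference estimates: the pre-to-post change in expansion states minus the same change in nonexpansion states. The toy calculation below uses invented uninsured rates only to show that arithmetic; it is not the study's adjusted model, which also accounts for covariates and reports 95% CIs.

```python
def difference_in_differences(exp_pre, exp_post, ctrl_pre, ctrl_post):
    """DiD estimate in percentage points:
    (post - pre) in expansion states minus (post - pre) in control states."""
    return (exp_post - exp_pre) - (ctrl_post - ctrl_pre)

# Hypothetical uninsured rates (%) before/after expansion took effect.
did = difference_in_differences(exp_pre=6.0, exp_post=4.3,
                                ctrl_pre=7.0, ctrl_post=6.4)
print(f"Net change associated with expansion: {did:+.1f} percentage points")
# -> -1.1, i.e., a net decrease of 1.1 percentage points
```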
185
Separation of Points and Interval Estimation in Mixed Dose-Response Curves with Selective Component Labeling. Flake, Darl D., II. 01 May 2016.
This dissertation develops, applies, and investigates new methods to improve the analysis of logistic regression mixture models. An interesting dose-response experiment was previously carried out on a mixed population, in which the class membership of only a subset of subjects (survivors) was subsequently labeled. In early analyses of the dataset, challenges with separation of points and asymmetric confidence intervals were encountered. This dissertation extends the previous analyses by characterizing the model in terms of a mixture of penalized (Firth) logistic regressions and developing methods for constructing profile likelihood-based confidence and inverse intervals, and confidence bands, in the context of such a model. The proposed methods are applied to the motivating dataset and another related dataset, resulting in improved inference on model parameters. Additionally, a simulation experiment is carried out to further illustrate the benefits of the proposed methods and to begin to explore better designs for future studies. The penalized model is shown to be less biased than the traditional model, and profile likelihood-based intervals are shown to have better coverage probability than Wald-type intervals. Some limitations, extensions, and alternatives to the proposed methods are discussed.
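The contrast between Wald-type and profile likelihood-based intervals can be illustrated in a far simpler setting than the penalized mixture model studied here. The sketch below compares the two for a single binomial proportion; it only demonstrates the likelihood-ratio inversion idea and makes no attempt to reproduce the dissertation's methodology.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, norm

def wald_ci(k, n, level=0.95):
    p = k / n
    z = norm.ppf(1 - (1 - level) / 2)
    half = z * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

def profile_ci(k, n, level=0.95):
    """Invert the likelihood-ratio test: keep p where
    2*(loglik(p_hat) - loglik(p)) <= the chi-square quantile."""
    def loglik(p):
        return k * np.log(p) + (n - k) * np.log(1 - p)
    p_hat = k / n
    cutoff = chi2.ppf(level, df=1)
    g = lambda p: 2 * (loglik(p_hat) - loglik(p)) - cutoff
    lo = brentq(g, 1e-9, p_hat)          # assumes 0 < k < n
    hi = brentq(g, p_hat, 1 - 1e-9)
    return lo, hi

print(wald_ci(3, 20))     # symmetric around 0.15, lower bound below 0
print(profile_ci(3, 20))  # asymmetric, stays inside (0, 1)
```

For small samples and proportions near the boundary, the Wald interval is symmetric and can spill outside the parameter space, while the profile interval is asymmetric and respects it, which is one intuition for why likelihood-based intervals tend to have better coverage.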
186
Minimizing Aggregate Movements for Interval Coverage. Andrews, Aaron M. 01 May 2014.
We present an efficient algorithm for solving an interval coverage problem. Given n intervals of the same length on a line L and a line segment B on L, we want to move the intervals along L such that every point of B is covered by at least one interval and the sum of the moving distances of all intervals is minimized. As a fundamental computational geometry problem, it has applications in mobile sensor barrier coverage in wireless sensor networks. The previous work gave an O(n²) time algorithm for it. In this thesis, by discovering many interesting observations and developing new algorithmic techniques, we present an O(n log n) time algorithm for this problem. We also show that Ω(n log n) is the lower bound for the time complexity. Therefore, our algorithm is optimal. Further, our observations and algorithmic techniques may be useful for solving other related problems.
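To make the objective concrete, the checker below evaluates a candidate solution to the problem statement: it verifies that the repositioned intervals jointly cover the barrier B and sums the movement distances. It is only a verifier for small hand-built examples, not the O(n log n) algorithm developed in the thesis.

```python
def total_movement(old_lefts, new_lefts):
    """Sum of distances each interval is moved along the line."""
    return sum(abs(a - b) for a, b in zip(old_lefts, new_lefts))

def covers_barrier(new_lefts, length, b_start, b_end, eps=1e-9):
    """Check that the union of [x, x+length] over all new positions
    covers the segment B = [b_start, b_end]."""
    intervals = sorted((x, x + length) for x in new_lefts)
    reach = b_start
    for lo, hi in intervals:
        if lo > reach + eps:          # a gap before B is fully covered
            break
        reach = max(reach, hi)
        if reach >= b_end - eps:
            return True
    return reach >= b_end - eps

# Three unit-length intervals moved to tile B = [0, 3].
old = [-1.0, 0.5, 3.5]
new = [0.0, 1.0, 2.0]
assert covers_barrier(new, length=1.0, b_start=0.0, b_end=3.0)
print(total_movement(old, new))   # -> 3.0
```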
187
Multi-Robot Complete Coverage Using Directional Constraints. Malan, Stefanus. 01 January 2018.
Complete coverage relies on a path planning algorithm that will move one or more robots, including the actuator, sensor, or body of the robot, over the entire environment. Complete coverage of an unknown environment is used in applications like automated vacuum cleaning, carpet cleaning, lawn mowing, chemical or radioactive spill detection and cleanup, and humanitarian de-mining.
The environment is typically decomposed into smaller areas and then assigned to individual robots to cover. The robots typically use the Boustrophedon motion to cover the cells. The location and size of obstacles in the environment are unknown beforehand. An online algorithm using sensor-based coverage with unlimited communication is typically used to plan the path for the robots.
For certain applications, like robotic lawn mowing, a pattern might be desirable over a random irregular pattern for the coverage operation. Assigning directional constraints to the cells can help achieve the desired pattern if the path planning part of the algorithm takes the directional constraints into account.
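As a concrete picture of the motion, the sketch below generates a Boustrophedon (lawn-mower) visit order over an obstacle-free rectangular cell, with the sweep axis acting as a simple stand-in for a directional constraint. It illustrates the pattern only; the cell decomposition, obstacle handling, and multi-robot allocation are as described in the dissertation.

```python
def boustrophedon_path(width, height, sweep_axis="x"):
    """Grid cells of a rectangular region visited in a lawn-mower pattern.

    sweep_axis="x": rows are traversed left/right, alternating direction.
    sweep_axis="y": columns are traversed up/down, alternating direction.
    """
    path = []
    if sweep_axis == "x":
        for y in range(height):
            xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
            path.extend((x, y) for x in xs)
    else:
        for x in range(width):
            ys = range(height) if x % 2 == 0 else range(height - 1, -1, -1)
            path.extend((x, y) for y in ys)
    return path

# A 4x3 cell swept along x: every grid cell is visited exactly once.
path = boustrophedon_path(4, 3, sweep_axis="x")
assert len(path) == len(set(path)) == 12
print(path[:5])   # [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1)]
```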
The goal of this dissertation is to adapt the distributed coverage algorithm with unrestricted communication developed by Rekleitis et al. (2008) so that it can be used to solve the complete coverage problem with directional constraints in unknown environments while minimizing repeat coverage. It is a sensor-based approach that constructs a cellular decomposition while covering the unknown environment.
The new algorithm takes directional constraints into account during the path planning phase. An implementation of the algorithm was evaluated in simulation software and the results from these experiments were compared against experiments conducted by Rekleitis et al. (2008) and with an implementation of their distributed coverage algorithm.
The results of this study confirm that directional constraints can be added to the complete coverage algorithm using multiple robots without any significant impact on performance. The high-level goals of complete coverage were still achieved. The work was evenly distributed between the robots to reduce the time required to cover the cells.
188
(Mis)Diagnosing Silence: A Cultural Criticism of the Virginia Tech News Coverage of Silence as Public Pedagogy. Hao, Richie Neil. 01 January 2009.
On April 16, 2007, Virginia Tech became the site of the deadliest school shooting in U.S. history (Breed, 2007). Because of the tragedy at Virginia Tech, news reports all over the U.S. probed what caused the perpetrator, Seung-Hui Cho, to kill 32 students and faculty members. As the mainstream media talked about the possible causes, they ultimately pointed out that Cho's silence should have been detected as a warning sign of his violent rampage. As a result, the media named Cho a "silent killer." Due to the U.S. media's construction of silence through Cho, I argue that the news coverage helped perpetuate the notion that silence is not only a negative attribute in education, but also a dangerous behavior that can pose a threat to people's safety. Therefore, I ask in this dissertation: how did the news coverage of Virginia Tech serve as a public pedagogy of silence? That is, I argue that the news coverage of Virginia Tech served as what Giroux (1994) calls "public pedagogy," in which the media educate and influence the public about how silence should be understood in the classroom. With the media's construction of silence through Cho, it is timely to address how the meaning of silence has changed pedagogically. Even though numerous scholars have written about silence, very few--if any--frame silence within the performance paradigm, specifically in pedagogy. In this dissertation, I introduce silence as a pedagogical performance by using critical communication pedagogy as a theoretical framework to deconstruct problematic media constructions of silence. Through the use of cultural criticism, I analyze 36 mainstream U.S. mediated texts (e.g., newspapers, magazines, and news transcripts) to understand how the media rhetorically defined and constructed silence as a dangerous behavior. I also use an interview as part of a multi-methodological approach to cultural criticism, adding clarification on how the surveillance of student behaviors, bodies, and pedagogical practices that do not fit the image of safety affects students in university classrooms.
189
Filtering Islam: an analysis of 'the expert on Islam' in Canadian news media. Popowich, Morris. January 2005.
No description available.
190
Contributions to the Shape Synthesis of Directivity-Maximized Dielectric Resonator Antennas. Nassor, Mohammed. 08 August 2023.
Antennas are an important component of wireless ("without wires") communications, regardless of their use. As these systems have become increasingly complex, antenna design requirements have become more demanding. Conventional antenna design consists of selecting some canonical radiator structure described by a handful of key dimensions, and then adjusting these using an optimization algorithm that improves some performance-related objective function that is (during optimization) repeatedly evaluated via a full-wave computational electromagnetics model of the structure. This approach has been employed to great effect in the enormously successful development of wireless communications antenna technology thus far, but is limiting in the sense that the "design space" is restricted to a library of canonical (or regular near-canonical) shapes. As increased design constraints and more complicated placement requirements arise such an approach to antenna design could eventually become a bottleneck. The use of antenna shape synthesis, a process also referred to as inverse design, can widen the "design space", and include such aspects as occupancy and fabrication constraints, the presence of a platform, even weight constraints, and much more. Dielectric resonator antennas (DRAs) hold the promise of lower losses at higher frequencies. This thesis uses a three-dimensional shape optimization algorithm along with a characteristic mode analysis and a genetic algorithm to shape synthesize DRAs. Until now, a limited amount of work on such shape synthesis has been performed for single-feed fixed-beam DRAs. In this thesis we extend this approach by devising and implementing a new shaping methodology for significantly more complex problems, namely directivity-maximized multi-port fixed-beam DRAs, and multi-port DRAs capable of the beam-steering required to satisfy certain spherical coverage constraints, where the location, type and number of feed-ports need not be specified prior to shaping. The approach enables even low-profile enhanced-directivity DRAs to be shape synthesized.
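The shaping loop couples a genetic algorithm with an electromagnetic evaluation of each candidate shape. The sketch below shows only the GA skeleton over a binary voxel representation; the directivity function here is a meaningless placeholder for what would in practice be a full-wave solve plus characteristic mode analysis, and every parameter value is arbitrary.

```python
import random

VOXELS = 64          # candidate shape = one bit per voxel (placeholder size)

def directivity(shape):
    """Placeholder fitness. In practice this would call a full-wave EM
    solver / characteristic mode analysis on the voxelized dielectric."""
    transitions = sum(shape[i] != shape[i - 1] for i in range(1, VOXELS))
    return sum(shape) - 0.05 * transitions

def crossover(a, b):
    cut = random.randrange(1, VOXELS)     # single-point crossover
    return a[:cut] + b[cut:]

def mutate(shape, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in shape]

def evolve(pop_size=40, generations=100):
    pop = [[random.randint(0, 1) for _ in range(VOXELS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=directivity, reverse=True)
        parents = pop[: pop_size // 2]    # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=directivity)

best = evolve()
print(directivity(best))
```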