About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Hardware description language program slicing and way to reduce bounded model checking search overhead

Ou, Jen-Chieh January 2007 (has links)
No description available.
2

Estimating the execution time of Fortran programs on distributed memory, parallel computers

Dunlop, Alistair Neil January 1997 (has links)
No description available.
3

A weighted grid for measuring program robustness

Abdallah, Mohammad Mahmoud Aref January 2012 (has links)
Robustness is a key issue for all programs, especially safety-critical ones. In the literature, program robustness is defined as “the degree to which a system or component can function correctly in the presence of invalid input or stressful environment” (IEEE 1990). A robustness measurement is a value that reflects the robustness degree of a program. In this thesis, a new robustness measurement technique, the Robustness Grid, is introduced. The Robustness Grid measures the robustness degree of programs, C programs in this instance, on a relative scale. It allows programmers to find a program's vulnerable points, repair them, and avoid similar mistakes in the future. The Robustness Grid is a table of language rules, classified into categories with respect to the program's function names, from which the robustness degree is calculated. The Motor Industry Software Reliability Association (MISRA) C language rules, together with the clause program slicing technique, form the basis of the measurement mechanism. In the Robustness Grid, for every MISRA rule, a score is given to a function each time it satisfies or violates the rule. Furthermore, clause program slicing is used to weight every MISRA rule to reflect its importance in the program. The Robustness Grid shows how robust and effective each part of the program is, and helps developers measure and evaluate the robustness degree of each part of a program. Overall, the Robustness Grid is a new technique that measures the robustness of C programs using the MISRA C rules and clause program slicing; it shows the program's robustness degree and the importance of each part of the program. An evaluation of the Robustness Grid shows that it offers measurements not provided before.
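To make the scoring mechanism concrete, here is a minimal sketch of how a per-function grid might tally rule scores weighted by slice-derived importance. The rule identifiers, weights, and scoring scheme below are illustrative assumptions, not the thesis's actual MISRA rule set or weighting.

```python
# Sketch of a Robustness-Grid-style tally. Rule IDs, weights, and the
# +/- scoring convention are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class RuleResult:
    rule_id: str       # e.g. a MISRA C rule identifier
    satisfied: bool    # did the function satisfy the rule?
    weight: float      # importance derived from clause program slicing

@dataclass
class FunctionGrid:
    name: str
    results: list = field(default_factory=list)

    def score(self) -> float:
        # +weight when a rule is satisfied, -weight when violated
        return sum(r.weight if r.satisfied else -r.weight
                   for r in self.results)

def robustness_degree(grids: list) -> float:
    # Relative scale: total weighted score normalised by total weight,
    # giving a value in [-1, 1].
    total_weight = sum(r.weight for g in grids for r in g.results)
    total_score = sum(g.score() for g in grids)
    return total_score / total_weight if total_weight else 0.0

# Toy usage: two functions checked against illustrative rules.
parse = FunctionGrid("parse_input", [
    RuleResult("MISRA-17.7", satisfied=True,  weight=2.0),
    RuleResult("MISRA-21.6", satisfied=False, weight=1.0),
])
main = FunctionGrid("main", [
    RuleResult("MISRA-17.7", satisfied=True, weight=1.5),
])
print(robustness_degree([parse, main]))
```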
4

Implementace mechanismů zajišťujících “RAN Slicing” v simulačním nástroji Network Simulator 3 / Implementation of mechanisms ensuring “RAN Slicing” in the simulation tool Network Simulator 3

Motyčka, Jan January 2021 (has links)
This thesis deals with network slicing technology in 5G networks, focusing mainly on the RAN part. The theoretical part presents the basic principles of 5G network slicing in both the core network and the RAN. The practical part contains a simulation scenario created in the NS-3 simulator with the LENA 5G module. The results of this simulation are presented and discussed, with emphasis on RAN slicing.
5

vizSlice: An Approach for Understanding Slicing Data via Visualization

Kaczka Jennings, Rachel Ania 28 April 2017 (has links)
No description available.
6

An Investigation of Routine Repetitiveness in Open-Source Projects

Arafat, Mohd 13 August 2018 (has links)
No description available.
7

Transformation of round-trip web application to use AJAX

Chu, Jason 19 June 2008 (has links)
AJAX is a web application programming technique that allows portions of a web page to be loaded dynamically, separately from other parts of the page. This gives the user a much smoother experience when viewing the page. The technique also conserves bandwidth by transmitting only new data relevant to the user, keeping all other content on the page unchanged. Migrating a traditional round-trip web application to an AJAX-based one can be difficult because of the many details AJAX requires. In this thesis, an approach is presented to automate the conversion using source transformation and backward slicing techniques. The result is an AJAX-based web page that enhances the user experience and conserves bandwidth. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2008.
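As a rough illustration of why the AJAX style conserves bandwidth, the sketch below contrasts a round-trip endpoint that rebuilds a whole page with a partial-update endpoint that returns only the changed data. Flask, the endpoint names, and the payload are illustrative assumptions; the thesis itself transforms existing application source automatically rather than hand-writing endpoints.

```python
# Toy contrast between round-trip and AJAX-style responses.
# Flask and the endpoints here are hypothetical illustrations.
from flask import Flask, jsonify

app = Flask(__name__)

PAGE_TEMPLATE = """<html><body>
<h1>Inbox</h1><div id="count">{count} unread</div>
</body></html>"""

unread = 3

@app.route("/inbox")
def round_trip():
    # Round-trip style: the entire page is rebuilt and retransmitted
    # even though only the counter may have changed.
    return PAGE_TEMPLATE.format(count=unread)

@app.route("/inbox/count")
def partial_update():
    # AJAX style: send only the data needed to refresh one <div>.
    return jsonify({"count": unread})

if __name__ == "__main__":
    app.run()
```

On the client, an XMLHttpRequest (or fetch) call to the second endpoint replaces only the counter's text, which is roughly the behaviour the thesis's transformation introduces into existing round-trip pages.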
8

Regression test selection by exclusion

Ngah, Amir January 2012 (has links)
This thesis addresses research in the area of regression testing. Software systems change and evolve over time, and each time a system is changed regression tests have to be run to validate these changes. An important issue in regression testing is how best to reuse the existing test cases of the original program for the modified program; one technique for tackling this issue is regression test selection. The aim of this research is to significantly reduce the number of test cases that need to be run after changes have been made. Specifically, this thesis focuses on developing a model for regression test selection using the decomposition slicing technique. Decomposition slicing is capable of identifying the unchanged parts of a system. The model developed in this thesis, Regression Test Selection by Exclusion (ReTSE), is based on decomposition slicing and the exclusion of test cases, and has four main phases: Program Analysis, Comparison, Exclusion, and Optimisation. The validity of the ReTSE model is explored through a number of case studies. The case studies tackle all types of modification, such as changed, deleted, and added statements, covering both single modifications and combinations of them. Applying the proposed model has shown that significant reductions in the number of test cases can be achieved. An evaluation of the model against an existing framework, and a comparison with another model, have also shown promising results. The case studies are limited to relatively small programs; the next step is to apply the model to larger systems with more complex changes to ascertain whether it scales up. While some parts of the model have been automated, tools will be required for the rest when carrying out the larger case studies.
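As a rough sketch of the core idea behind the Exclusion phase, assuming per-test statement coverage and a set of modification-affected statements as inputs (which the thesis derives through its Program Analysis and Comparison phases):

```python
# A test that exercises only statements shown (via decomposition
# slicing) to be unaffected by the modification can be excluded from
# the rerun set. Coverage data and the affected set are hypothetical
# inputs; the actual ReTSE phases involve considerably more machinery.

def select_tests(coverage: dict, affected: set) -> tuple:
    """coverage maps test name -> set of statement ids it executes;
    affected is the set of statements the modification can influence."""
    rerun, excluded = [], []
    for test, stmts in coverage.items():
        if stmts & affected:
            rerun.append(test)      # touches affected code: keep
        else:
            excluded.append(test)   # provably unaffected: exclude
    return rerun, excluded

coverage = {
    "test_login":  {1, 2, 3},
    "test_report": {7, 8, 9},
    "test_export": {3, 9, 12},
}
affected = {8, 9}  # e.g. statements in slices of changed variables
rerun, excluded = select_tests(coverage, affected)
print(rerun)     # ['test_report', 'test_export']
print(excluded)  # ['test_login']
```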
9

Fault Location via Precise Dynamic Slicing

Zhang, Xiangyu January 2006 (has links)
Developing automated techniques for identifying a fault candidate set (i.e., the subset of executed statements that contains the faulty code responsible for a failure during a program run) can greatly reduce the effort of debugging. Over 15 years ago, precise dynamic slicing was proposed to identify a fault candidate set consisting of all executed statements that influence the computation of an incorrect value through a chain of data and/or control dependences. However, the challenge of making precise dynamic slicing practical had not been addressed. This dissertation addresses that challenge and makes precise dynamic slicing useful for debugging realistic applications. First, the cost of computing precise dynamic slices is greatly reduced. Second, innovative ways of using precise dynamic slicing are identified to produce small fault candidate sets. The key cause of the high space and time cost of precise dynamic slicing is the very large size of the dynamic dependence graphs that are constructed and traversed to compute dynamic slices. A novel series of optimizations greatly reduces the size of the dynamic dependence graph, leading to a compact representation that can be rapidly traversed: the average space needed falls from 2 gigabytes to 94 megabytes for dynamic dependence graphs corresponding to executions with average lengths of 130 million instructions, and the precise dynamic slicing time falls from up to 20 minutes for a demand-driven algorithm to 16 seconds. A compression algorithm is developed to further reduce dependence graph sizes; the resulting representation is space-efficient enough that the dynamic execution history of a couple of billion executed instructions can be held in a gigabyte of memory. To further scale precise dynamic slicing to longer program runs, a novel approach is proposed that uses checkpointing/logging to enable collection of the dynamic history of only the relevant window of execution. Classical backward dynamic slicing can often produce fault candidate sets that contain thousands of statements, making the task of identifying faulty code very time consuming for the programmer. Novel techniques are proposed to improve the effectiveness of dynamic slicing for fault location; their merit lies in identifying multiple forms of dynamic slices in a failed run and then intersecting them to produce smaller fault candidate sets. Using these techniques, the fault candidate set size corresponding to the backward dynamic slice is reduced by nearly a factor of 3. A fine-grained statistical pruning technique based on value profiles is also developed, which reduces the sizes of backward dynamic slices by a factor of 2.5. In conclusion, this dissertation greatly reduces the cost of precise dynamic slicing and presents techniques to improve its effectiveness for fault location.
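For readers unfamiliar with the underlying operation, the sketch below shows a backward dynamic slice computed by traversing a toy dynamic dependence graph; the dissertation's contribution is making such graphs compact enough to build and traverse for realistic runs.

```python
# Backward dynamic slicing over a toy dynamic dependence graph.
# Each node is one executed statement instance; edges point from an
# instance to the instances it is data- or control-dependent on.
from collections import deque

deps = {
    "s5#1": ["s3#1", "s4#1"],  # s5's 1st execution used values from s3, s4
    "s4#1": ["s2#1"],
    "s3#1": ["s1#1"],
    "s2#1": [],
    "s1#1": [],
}

def backward_dynamic_slice(graph: dict, criterion: str) -> set:
    """All executed instances that influenced `criterion` through
    chains of dynamic data/control dependences."""
    slice_set, work = set(), deque([criterion])
    while work:
        node = work.popleft()
        if node in slice_set:
            continue
        slice_set.add(node)
        work.extend(graph.get(node, []))
    return slice_set

# Fault candidate set for an incorrect value observed at s5's execution:
print(sorted(backward_dynamic_slice(deps, "s5#1")))
# ['s1#1', 's2#1', 's3#1', 's4#1', 's5#1']
# Intersecting several such slices (as the dissertation proposes)
# further shrinks the candidate set.
```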
10

Effect of Dispersion on SS-WDM Systems

Wongpaibool, Virach 23 September 1998 (has links)
The purpose of this thesis is to investigate the effect of dispersion on a spectrum-sliced WDM (SS-WDM) system, specifically a system employing single-mode optical fiber. System performance is expressed in terms of the receiver sensitivity, defined as the average number of photons per bit N_p required for a given probability of bit error P_e. The receiver sensitivity is expressed in terms of two normalized parameters: the ratio of the optical bandwidth per channel to the bit rate, m = B_0/R_b = B_0*T, and the transmission distance normalized by the dispersion distance, z/L_D. The former represents the effect of the excess beat noise caused by signal fluctuation; the latter represents the effect of dispersion. The excess beat noise can be reduced by increasing the value of m (increasing the optical bandwidth B_0 for a given bit rate R_b). However, a large m implies severe degradation due to dispersion in a system employing single-mode fiber, so there should be an optimum m balancing the two effects. The theoretical results obtained from our analysis confirm this prediction. It is also shown that the optimum m (m_opt) decreases as the normalized distance increases, which suggests that dispersion strongly affects system performance: the increase in excess beat noise is traded against the decrease in the dispersion effect. Additionally, the maximum transmission distance is relatively short compared to that of a laser-based system. This suggests that SS-WDM systems with single-mode fibers are suitable for short-haul systems, such as high-speed local-access networks where the operating bit rate is high but the transmission distance is relatively short. / Master of Science
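For reference, the abstract's two normalized parameters can be written as follows; the definition of the dispersion length L_D shown here is the standard one from fiber-optics texts and is an assumption, since the abstract does not spell it out.

```latex
% Normalized parameters from the abstract. The dispersion-length
% definition is the standard textbook one and is assumed here; the
% abstract itself does not define L_D explicitly.
\[
  m = \frac{B_0}{R_b} = B_0 T, \qquad \frac{z}{L_D},
  \qquad \text{with } L_D = \frac{T_0^2}{\lvert\beta_2\rvert},
\]
% where B_0 is the optical bandwidth per channel, R_b = 1/T the bit
% rate, z the transmission distance, T_0 the pulse width, and
% beta_2 the group-velocity dispersion parameter of the fiber.
```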
