561
A research in SQL injection. January 2005 (has links)
Leung Siu Kuen. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 67-68). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation --- p.1 / Chapter 1.1.1 --- A Story --- p.1 / Chapter 1.2 --- Overview --- p.2 / Chapter 1.2.1 --- Introduction of SQL Injection --- p.4 / Chapter 1.3 --- The importance of SQL Injection --- p.6 / Chapter 1.4 --- Thesis organization --- p.8 / Chapter 2 --- Background --- p.10 / Chapter 2.1 --- Flow of web applications using DBMS --- p.10 / Chapter 2.2 --- Structure of DBMS --- p.12 / Chapter 2.2.1 --- Tables --- p.12 / Chapter 2.2.2 --- Columns --- p.12 / Chapter 2.2.3 --- Rows --- p.12 / Chapter 2.3 --- SQL Syntax --- p.13 / Chapter 2.3.1 --- SELECT --- p.13 / Chapter 2.3.2 --- AND/OR --- p.14 / Chapter 2.3.3 --- INSERT --- p.15 / Chapter 2.3.4 --- UPDATE --- p.16 / Chapter 2.3.5 --- DELETE --- p.17 / Chapter 2.3.6 --- UNION --- p.18 / Chapter 3 --- Details of SQL Injection --- p.20 / Chapter 3.1 --- Basic SELECT Injection --- p.20 / Chapter 3.2 --- Advanced SELECT Injection --- p.23 / Chapter 3.2.1 --- Single Line Comment (--) --- p.23 / Chapter 3.2.2 --- Guessing the number of columns in a table --- p.23 / Chapter 3.2.3 --- Guessing the column name of a table (Easy one) --- p.26 / Chapter 3.2.4 --- Guessing the column name of a table (Difficult one) --- p.27 / Chapter 3.3 --- UPDATE Injection --- p.29 / Chapter 3.4 --- Other Attacks --- p.30 / Chapter 4 --- Current Defenses --- p.32 / Chapter 4.1 --- Causes of SQL Injection attacks --- p.32 / Chapter 4.2 --- Defense Methods --- p.33 / Chapter 4.2.1 --- Defensive Programming --- p.34 / Chapter 4.2.2 --- Hiding the error messages --- p.35 / Chapter 4.2.3 --- Filtering out the dangerous characters --- p.35 / Chapter 4.2.4 --- Using pre-compiled SQL statements --- p.36 / Chapter 4.2.5 --- Checking for tautologies in SQL statements --- p.37 / Chapter 4.2.6 --- Instruction set randomization --- p.38 / Chapter 4.2.7 --- Building the query model --- p.40 / Chapter 5 --- Proposed Solution --- p.43 / Chapter 5.1 --- Introduction --- p.43 / Chapter 5.2 --- Nature of SQL Injection --- p.43 / Chapter 5.3 --- Our proposed system --- p.44 / Chapter 5.3.1 --- Features of the system --- p.44 / Chapter 5.3.2 --- Stage 1 - Checking with current signatures --- p.45 / Chapter 5.3.3 --- Stage 2 - SQL Server Query --- p.45 / Chapter 5.3.4 --- Stage 3 - Error Triggering --- p.46 / Chapter 5.3.5 --- Stage 4 - Alarm --- p.50 / Chapter 5.3.6 --- Stage 5 - Learning --- p.50 / Chapter 5.4 --- Examples --- p.51 / Chapter 5.4.1 --- Defending Basic SELECT Injection --- p.52 / Chapter 5.4.2 --- Defending Advanced SELECT Injection --- p.52 / Chapter 5.4.3 --- Defending UPDATE Injection --- p.57 / Chapter 5.5 --- Comparison --- p.59 / Chapter 6 --- Conclusion --- p.62 / Chapter A --- Commonly used table and column names --- p.64 / Chapter A.1 --- Commonly used table names for system management --- p.64 / Chapter A.2 --- Commonly used column names for password storage --- p.65 / Chapter A.3 --- Commonly used column names for username storage --- p.66 / Bibliography --- p.67
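As a minimal, hedged illustration of what Chapters 3.1 and 4.2.4 above cover (the code is not from the thesis; the users table, column names, and login functions are invented), a tautology-style SELECT injection and its pre-compiled/parameterized defense look roughly like this in Python with SQLite:

import sqlite3

# Set up a toy database. The table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # Chapter 3.1 style: user input is concatenated into the SQL text itself,
    # so a crafted value can change the meaning of the statement.
    query = ("SELECT * FROM users WHERE username = '" + username +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchall()

def login_parameterized(username, password):
    # Chapter 4.2.4 style: a pre-compiled, parameterized statement binds input
    # as data, so the tautology below is matched literally and has no effect.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchall()

# The classic tautology makes the WHERE clause always true.
attack = "' OR '1'='1"
print(login_vulnerable("alice", attack))      # row returned without knowing the password
print(login_parameterized("alice", attack))   # [] -- the attack fails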
562
Efficient XPath query processing in native XML databases. / CUHK electronic theses & dissertations collection. January 2007 (has links)
A general XML index can itself be sizable, leading to low efficiency. To alleviate this predicament, frequently asked queries can be indexed by a database system. They are referred to as views. Answering queries using materialized views is always cheaper than evaluating over the base data. Traditional techniques solve this problem by considering only a single view. We approach this problem by exploiting the potential relationships of multiple views, which can be used together to answer a given query. Experiments show that significant performance gain can be achieved from multiple views. / An XML query can be decomposed into a sequence of structural joins (e.g., parent/child and ancestor/descendant) and content joins. Thus, structural join optimization is a key to improving join-based evaluation. We optimize structural joins with two orthogonal methods: the partition-based method exploits the spatial properties of XML encodings by projecting them on a plane, and the location-based method improves structural joins by accurately pruning all irrelevant nodes, which cannot produce results. / As XML (eXtensible Markup Language) becomes a universal medium for data exchange over the Internet, efficient XML query processing is now the focus of considerable research and development activities. This thesis describes work toward efficient XML query evaluation and optimization in native XML databases. / XML indexes are widely studied to evaluate XML queries and in particular to accelerate join-based approaches. Index-based approaches outperform join-based approaches (e.g., holistic twig join) if the queries match the index. Existing XML indexes can only support a small set of XML queries because of the varieties in XML query representations. An XML query may involve the child axis only, both the child axis and branches, or additionally the descendant-or-self axis but only at the query root. We propose novel indexes to efficiently support a much wider range of XML queries (with /, //, [], *). / Tang, Nan. / "December 2007." / Advisers: Kam-Fei Wong; Jeffrey Xu Yu. / Source: Dissertation Abstracts International, Volume: 69-08, Section: B, page: 4861. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 152-163). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
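As a rough illustration of the structural-join idea mentioned above (a sketch assuming a standard region/interval labelling of elements, not the partition-based or location-based methods of the thesis), the query //a//b can be answered by joining the "a" and "b" node lists:

from collections import namedtuple

# Each element carries a (start, end, level) region label from a preorder
# traversal; "a is an ancestor of d" is then just interval containment.
Node = namedtuple("Node", "tag start end level")

def ancestor_descendant_join(ancestors, descendants):
    """Naive nested-loop structural join returning (ancestor, descendant) pairs."""
    return [(a, d) for a in ancestors for d in descendants
            if a.start < d.start and d.end < a.end]

# Toy document <a><b/><c><b/></c></a>, labelled by preorder numbering.
a1 = Node("a", 1, 8, 1)
b1 = Node("b", 2, 3, 2)
c1 = Node("c", 4, 7, 2)
b2 = Node("b", 5, 6, 3)

# //a//b : join the "a" node list with the "b" node list.
print(ancestor_descendant_join([a1], [b1, b2]))          # both b elements match

# //a/b : parent/child additionally requires the levels to differ by exactly one.
print([(a, d) for (a, d) in ancestor_descendant_join([a1], [b1, b2])
       if d.level == a.level + 1])                        # only the first b matches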
563
Feasibility of GNU/Linux as the OS for a PC-based medical product / Feasibility of GNU/Linux as the operating system for a personal computer-based medical product. Lustbader, Steven B. (Steven Benjamin), 1980-. January 2003 (has links)
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, June 2003. / Includes bibliographical references (leaves 20-21). / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Linux has become a viable alternative to Windows in recent years. This investigation looks at the feasibility of porting the software for a PC-based medical device to Linux. Using an open-source operating system frees developers from the constraints imposed by relying on a single company for the development platform. Several porting methods are considered. The port method chosen allows development on the Windows version to continue while simultaneously testing on Linux, without creating separate versions of the software. Differences in the way the software interacts with the operating system and with the hardware have to be addressed. A Linux environment was created in which to run the software and determine how to reconcile these differences. No major hurdles to using Linux exist, so it appears to be a viable platform on which to conduct future development. / by Steven B. Lustbader. / M.Eng. and S.B.
564
A LISP interpreter : scanner and parser. Bosserman, David Clarence. January 2010 (has links)
Typescript, etc. / Digitized by Kansas Correctional Industries
565
A PILOT language system. Walker, Larry Tristan. January 2010 (has links)
Typescript, etc. / Digitized by Kansas Correctional Industries
566
On the adaptability of multipass Pascal compilers to variants of (Pascal) P-code machine architectures. Litteken, Mark A. January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
567
Exploiting AND/OR Parallelism in Prolog. Shah, Bankim. 01 January 1991 (has links)
Logic programming languages have generated increasing interest over the last few years, and languages like Prolog are being explored for different applications. Prolog is inherently parallel, and attempts are being made to exploit this inherent parallelism. There are two kinds of parallelism in Prolog: OR parallelism and AND parallelism. OR parallelism is relatively easy to exploit, while AND parallelism poses interesting issues, one of the main ones being dependencies between literals.
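To make the dependency issue concrete, the following sketch (purely illustrative; it is not the execution model, ordering rules, or architecture proposed in the thesis, and the clause is invented) groups the body literals of a clause so that groups sharing no unbound variables could run AND-parallel, while several clauses matching the same goal would supply OR-parallelism:

# A clause body, e.g. grandparent(X, Z) :- parent(X, Y), parent(Y, Z), likes(W, coffee).
# Tokens starting with an uppercase letter stand for logic variables.
def independent_groups(literals, bound_vars=frozenset()):
    """Group body literals so that literals in different groups share no unbound
    variables; such groups are candidates for AND-parallel execution."""
    groups = []  # each entry: (set of unbound variables, list of literals)
    for name, args in literals:
        unbound = {a for a in args if a[0].isupper() and a not in bound_vars}
        touched = [g for g in groups if g[0] & unbound]
        merged_vars, merged_lits = set(unbound), [(name, args)]
        for g in touched:                 # merge every existing group this literal touches
            merged_vars |= g[0]
            merged_lits = g[1] + merged_lits
            groups.remove(g)
        groups.append((merged_vars, merged_lits))
    return [lits for _, lits in groups]

body = [("parent", ("X", "Y")), ("parent", ("Y", "Z")), ("likes", ("W", "coffee"))]
print(independent_groups(body, bound_vars=frozenset({"X", "Z"})))
# -> the two parent/2 literals share the unbound variable Y and stay in one group
#    (their execution must be ordered), while likes(W, coffee) forms its own group
#    and could run AND-parallel with them.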
It is important to exploit the AND parallelism available in the language structure, as failing to do so results in a substantial loss of parallelism. Any system trying to make use of either or both kinds of parallelism also needs fast unification, since unification time strongly affects the overall execution time.
A new architecture design is presented in this thesis that exploits both kinds of parallelism. The architecture efficiently implements some of the key concepts in Conery's approach to parallel execution [5]. It has a memory hierarchy that uses associative memory; associative memories provide fast lookup and hence quick response times. Along with the memory hierarchy, execution algorithms and rules for ordering literals are presented; the ordering rules help determine the order of execution.
Response time is analyzed for different configurations of the architecture, from sequential execution with one processor to multiple processing units each having multiple processors. A benchmark program, "query," is used to obtain results, and the map coloring problem is also solved on the different configurations so that the results can be compared.
To obtain results, the goals and subgoals are assigned to different processors by creating a tree; these assignments and the transfer of goals are simulated by hand.
The total time includes the time needed to move goals back and forth between processors. It is calculated in cycles under assumptions about memory response time, communication time, the number of messages that can be sent on the bus at a given instant, and so on. The results show that the architecture efficiently exploits the AND and OR parallelism available in Prolog. The total times for the different configurations are then compared and conclusions are drawn.
568
Measurement Techniques for Noise Figure and Gain of Bipolar Transistors. Jung, Wayne Kan. 02 June 1993 (has links)
First, the concepts of reflection coefficients, s-parameters, the Smith chart, noise figure, and available power gain will be introduced. This lays the foundation for the presentation of techniques for measuring the noise figure and gain of high-speed bipolar junction transistors. Noise sources in a bipolar junction transistor and an equivalent circuit including these noise sources will be presented. The process of determining the noise parameters of a transistor will also be discussed. A Pascal program and several TEKSPICE scripts are developed to calculate the stability, available power gain, and noise figure circles. Finally, these circles are plotted on a Smith chart to give a clear view of how a transistor's performance changes with source impedance.
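For context, the kind of calculation such a program performs can be sketched with the textbook two-port noise relation F = Fmin + 4*rn*|Gs - Gopt|^2 / ((1 - |Gs|^2) * |1 + Gopt|^2), with rn = Rn/Z0. The Python below (rather than the thesis's Pascal/TEKSPICE, and with made-up example parameter values) evaluates it for one source reflection coefficient; sweeping Gs over the Smith chart traces out the constant-noise-figure circles mentioned above:

import cmath
import math

def noise_factor(gamma_s, f_min, r_n, gamma_opt, z0=50.0):
    """Linear noise factor F for a given source reflection coefficient gamma_s,
    from the two-port noise parameters (Fmin, Rn, Gamma_opt)."""
    rn = r_n / z0                                   # normalized noise resistance
    num = 4.0 * rn * abs(gamma_s - gamma_opt) ** 2
    den = (1.0 - abs(gamma_s) ** 2) * abs(1.0 + gamma_opt) ** 2
    return f_min + num / den

# Made-up example noise parameters (not measured values from the thesis).
f_min = 10 ** (1.2 / 10)                            # Fmin = 1.2 dB as a linear factor
gamma_opt = 0.4 * cmath.exp(1j * math.radians(60))  # optimum source reflection coefficient
f = noise_factor(gamma_s=0.2 + 0.1j, f_min=f_min, r_n=8.0, gamma_opt=gamma_opt)
print(f"F = {f:.3f} ({10 * math.log10(f):.2f} dB)")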
569
Annotation-Enabled Interpretation and Analysis of Time-Series Data. Venugopal, Niveditha. 07 November 2018 (has links)
As we continue to produce large amounts of time-series data, the need for data analysis is growing rapidly to help gain insights from this data. These insights form the foundation of data-driven decisions in various aspects of life. Data annotations are information about the data, such as comments, errors, and provenance, which provide context to the underlying data and aid meaningful analysis in domains such as scientific research, genomics, and ECG analysis. Storing such annotations in the database along with the data makes them available to support analysis. In this thesis, I propose a user-friendly technique for Annotation-Enabled Analysis through which a user can employ annotations to query and analyze data without prior knowledge of the database schema or any database programming language. The proposed technique receives the request for analysis as a high-level specification that hides the details of the schema, joins, etc.; it then parses the specification, validates the input, and converts it into SQL. This SQL query can be executed in a relational database and the result returned to the user. I evaluate this technique using real-world data from a building-data platform containing data about Portland State University buildings, such as room temperature, air volume, and CO2 level. This data is annotated with information such as class schedules, power outages, and control modes (for example, day or night mode). I test my technique with three increasingly sophisticated levels of use cases drawn from this building science domain: (1) retrieve data with include or exclude annotation selection; (2) correlate data with include or exclude annotation selection; (3) align data based on include annotation selection to support aggregation over multiple periods. I evaluate the technique with two kinds of tests: (1) to validate correctness, I generate synthetic datasets for which I know the expected result of these annotation-enabled analyses and compare the expected results with the results generated by my technique; (2) I evaluate the performance of the queries generated by this service, with respect to execution time in the database, by comparing them with alternative SQL translations that I developed.
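A minimal sketch of the translation step for the first use case, retrieving data while excluding annotated intervals (the specification format, table names, and column names below are invented for illustration; the thesis's actual specification language and schema are not shown in this abstract):

# Spec format, table names, and column names here are invented for illustration.
def to_sql(spec):
    """Translate a tiny 'retrieve with exclude-annotation selection' spec into SQL."""
    sql = (
        "SELECT t.ts, t.value\n"
        f"FROM {spec['stream']} AS t\n"
        f"WHERE t.ts BETWEEN '{spec['start']}' AND '{spec['end']}'"
    )
    for label in spec.get("exclude_annotations", []):
        # Drop readings that fall inside any annotated interval carrying this label.
        sql += (
            f"\n  AND NOT EXISTS (SELECT 1 FROM annotations a"
            f" WHERE a.label = '{label}'"
            f" AND t.ts BETWEEN a.start_ts AND a.end_ts)"
        )
    return sql

spec = {
    "stream": "room_temperature",
    "start": "2018-01-08", "end": "2018-01-12",
    "exclude_annotations": ["power outage"],
}
print(to_sql(spec))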
570
Common subexpression detection in dataflow programs. Jones, Philip E. C. (Philip Ewan Crossley). January 1989 (has links) (PDF)
Processed. Bibliography: leaves 123-124.