21.
Computerized system of architectural specifications : with a database link to drawings. Suddarth, Jane. January 1983.
The architecture profession, like many other design professions, is being revolutionized through the use and development of computers. Both computer hardware and software are reaching high levels of development in the area of graphics. Yet even as graphic systems become more sophisticated, there is currently no linkage of textual information with graphics. Architectural projects consist of both text (specifications) and graphics (working drawings). Consequently, this creative project develops a computer software system (a series of programs) for linking and unifying text information (specifications) with working drawings. / Department of Architecture
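A minimal modern sketch of the text-to-drawings link the project describes, using a relational join table. The schema, section codes, and sheet numbers below are hypothetical illustrations, not details from the project:

```python
import sqlite3

# Hypothetical schema: spec sections linked to drawing sheets many-to-many.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE spec_section (id INTEGER PRIMARY KEY, csi_code TEXT, title TEXT);
    CREATE TABLE drawing      (id INTEGER PRIMARY KEY, sheet TEXT, title TEXT);
    CREATE TABLE spec_drawing_link (
        spec_id    INTEGER REFERENCES spec_section(id),
        drawing_id INTEGER REFERENCES drawing(id)
    );
""")
conn.execute("INSERT INTO spec_section VALUES (1, '08 11 00', 'Metal Doors and Frames')")
conn.execute("INSERT INTO drawing VALUES (1, 'A-501', 'Door Schedule and Details')")
conn.execute("INSERT INTO spec_drawing_link VALUES (1, 1)")

# Query: which drawing sheets does spec section 1 appear on?
rows = conn.execute("""
    SELECT d.sheet FROM drawing d
    JOIN spec_drawing_link l ON l.drawing_id = d.id
    WHERE l.spec_id = 1
""").fetchall()
print(rows)  # [('A-501',)]
```

The join table is what gives the "unifying" behaviour: either side (a specification section or a working drawing) can be used as the entry point for lookups.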
22.
A study of the utilization of educational specifications in the designing process of new educational facilities. Benson, Dennis McLean. January 1973.
The purpose of the study was to describe how educational specifications are utilized by architects in the design process for conventional, systems, and design-build methods of construction for educational facilities. Both a review of the related literature and interviews with architects experienced in utilizing educational specifications for designing educational facilities served as data for the study.
23.
Designing Software from Formal Specifications. MacDonald, Anthony John. Unknown Date.
This thesis investigates the process of designing software from formal specifications, in particular specifications expressed in the Z notation. The initial phases of software design have a significant impact on software quality, and the transition from formal specification to design is not clearly understood. There is often no visible or obvious connection between the specification and the finished design. It is possible to add traceability with either verification or refinement, but I wish to understand and guide the design process. Investigating the design of software from formal specifications highlighted possible relationships between parts of the specification and parts of the design. A design strategy is introduced that combines software architectural styles and formal specifications to influence the generated design. The design process is architecturally specific, but a template for instantiating the design process to a chosen architectural style is presented. Specializations of the template are presented for the ADT-based architectural style and the event-based architectural style. These specializations of the template produce an architecturally-constrained, specification-influenced design process. Providing such a design process enables the software designer to produce better quality software: the constrained design process allows the designer to focus on the difficult aspects of design, namely understanding the problem, choosing the best abstractions, and finding a suitable solution.
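As a rough illustration of the ADT-based style (using the classic Z birthday-book example from the literature, not anything taken from the thesis), a Z state schema can map onto a class: the schema's state becomes private data, each operation schema becomes a method, and the predicates become assertions:

```python
class BirthdayBook:
    """Sketch of the Z birthday book: state schema
    known : P NAME; birthday : NAME -+> DATE, with invariant
    known = dom birthday. Illustrative only, not the thesis's template."""

    def __init__(self):
        self._birthday = {}          # partial function NAME -> DATE

    @property
    def known(self):
        # Invariant known = dom birthday holds by construction.
        return set(self._birthday)

    def add_birthday(self, name, date):
        # Precondition from the AddBirthday schema: name not in known.
        assert name not in self.known
        self._birthday[name] = date

    def find_birthday(self, name):
        # Precondition from FindBirthday: name in known.
        assert name in self.known
        return self._birthday[name]

book = BirthdayBook()
book.add_birthday("Ada", "Dec 10")
print(book.find_birthday("Ada"))  # Dec 10
```

The point of the sketch is the correspondence itself: each piece of the design is traceable back to a named part of the specification, which is the kind of connection the thesis seeks to make systematic.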
24.
Ephedra: a C to Java migration environment. Martin, Johannes. 30 October 2018.
The Internet has grown in popularity in recent years and has thus gained importance for many businesses, which need to offer their products and services through their Web sites. To present not only static content but also interactive services, the logic behind these services needs to be programmed.
Various approaches for programming Web services exist. The Java programming language can be used to implement Web services that run both on Internet clients and servers, either exclusively or in interaction with each other. The Java programming language is standardised across computing platforms and has matured over the past few years, and is therefore a popular choice for the implementation of Web services.
The amount of available and well-tested Java source code is still small compared to other programming languages. Rather than taking the risks and costs of redeveloping program libraries, it is often preferable to move the core logic of existing solutions to Java and then integrate it into Java programs that present the services in a Web interface.
In this Ph.D. dissertation, we survey and evaluate a selection of current approaches to the migration of source code to Java. To narrow the scope of the dissertation to a reasonable limit, we focus on the C and C++ programming languages as the source languages. Many mature programs and program libraries exist in these languages.
The survey of current migration approaches reveals a number of their restrictions and disadvantages in the context of moving program libraries to Java and integrating them with Java programs. Using the experiences from this survey, we established a number of goals for an improved migration approach and developed the Ephedra approach by closely following these goals. To show the practicality of this approach, we implemented an automated tool that performs the migration according to the Ephedra approach and evaluated the migration process and its result with respect to the goals we established using selected case studies.
Ephedra provides a high degree of automation for the migration process while letting the software engineer make decisions where multiple choices are possible. A central problem in the migration from C to Java is the transformation of C pointers to Java references. Ephedra provides two different strategies for this transformation and explains their applicability to subject systems. The code resulting from a migration with Ephedra is maintainable and functionally equivalent to the original code, save for some well-documented exceptions. Performance trade-offs are analysed and evaluated in the light of the intended subject systems. / Graduate
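One common strategy for migrating C pointer arithmetic to a reference-based language is to model a pointer as an (array, index) pair. The sketch below illustrates that idea in Python for brevity; it is not claimed to be either of Ephedra's actual strategies.

```python
class Ptr:
    """Models a C pointer as a (backing store, index) pair, a common
    way to preserve pointer arithmetic in a reference-based language.
    Illustrative sketch only; not Ephedra's actual scheme."""

    def __init__(self, store, index=0):
        self.store, self.index = store, index

    def deref(self):                 # *p
        return self.store[self.index]

    def assign(self, value):         # *p = value
        self.store[self.index] = value

    def add(self, n):                # p + n  (new pointer, same store)
        return Ptr(self.store, self.index + n)

buf = [10, 20, 30]
p = Ptr(buf)
p.add(2).assign(99)                  # the C idiom *(p + 2) = 99
print(buf)  # [10, 20, 99]
```

Because the store is shared between derived pointers, writes through one pointer are visible through another, mirroring aliasing behaviour in C.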
25.
A methodology for analyzing hardware description language specifications of legacy designs. Costi, Claudio. 07 June 2018.
In order to increase productivity, methodologies based on the reuse of previously designed components are exploited by the Integrated Circuit (IC) design community. However, designers are often faced with the problem of reusing a legacy design whose behavior is unclear due to missing documentation and the complexity of the design. In this dissertation, a methodology to assist designers in retrieving the original intent of a design from its Hardware Description Language (HDL) specification is described. The methodology is based on code analysis techniques that produce different views of HDL code. These views represent the behavior of a design in more abstract terms than the HDL code itself. / Graduate
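As a toy illustration of what an abstract "view" of HDL code might look like, the sketch below extracts a signal dependency map from simple VHDL-style concurrent assignments. It only hints at the idea; the dissertation's actual analyses are far richer.

```python
import re

def signal_dependencies(hdl_text):
    """Toy 'view' extraction: map each assigned signal to the signals
    it depends on, for simple concurrent assignments (lhs <= expr;).
    Illustrative sketch, not the dissertation's methodology."""
    keywords = {"and", "or", "not", "xor"}
    deps = {}
    for lhs, rhs in re.findall(r"(\w+)\s*<=\s*([^;]+);", hdl_text):
        # Keep identifiers on the right-hand side, dropping operators.
        deps[lhs] = sorted(set(re.findall(r"\b[a-z_]\w*\b", rhs)) - keywords)
    return deps

src = "y <= a and b; z <= y or not c;"
print(signal_dependencies(src))  # {'y': ['a', 'b'], 'z': ['c', 'y']}
```

Even this crude map is more abstract than the HDL text: it answers "what drives z?" without the reader tracing the source by hand.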
26.
Using computer vision to categorize tyres and estimate the number of visible tyres in tyre stockpile images. Eastwood, Grant. January 2017.
Pressures from environmental agencies contribute to the challenges associated with the disposal of waste tyres, particularly in South Africa. Recycling of waste tyres in South Africa is in its infancy resulting in the historically undocumented and uncontrolled existence of waste tyre stockpiles across the country. The remote and distant locations of such stockpiles typically complicate the logistics associated with the collection, transport and storage of waste tyres prior to entering the recycling process. In order to optimize the logistics associated with the collection of waste tyres from stockpiles, useful information about such stockpiles would include estimates of the types of tyres as well as the quantity of specific tyre types found in particular stockpiles. This research proposes the use of computer vision for categorizing individual tyres and estimating the number of visible tyres in tyre stockpile images to support the logistics in tyre recycling efforts. The study begins with a broad review of image processing and computer vision algorithms for categorization and counting objects in images. The bag of visual words (BoVW) model for categorization is tested on two small data sets of tread tyre images using a random sub-sampling holdout method. The categorization results are evaluated using performance metrics for multiclass classifiers, namely the average accuracy, precision, and recall. The results indicated that corner-based local feature detectors combined with speeded up robust features (SURF) descriptors in a BoVW model provide moderately accurate categorization of tyres based on tread images. Two feature extraction methods for extracting features for use in training neural networks (NNs) for tyre count estimations in tyre stockpiles are proposed. The two feature extraction methods are used to describe images in terms of feature vectors that can be used as input for NNs. 
The first feature extraction method uses the BoVW model with histograms of oriented gradients (HOG) features collected from overlapping sub-images to create a visual vocabulary and describe the images in terms of their visual word occurrence histogram. The second feature extraction method uses the image gradient magnitude, gradient orientation, and the orientations of edges detected using the Canny edge detector. A concatenated histogram is constructed from individual histograms of gradient orientations and gradient magnitudes. The histograms are then used to train NNs using backpropagation to approximate functions from the feature vectors describing the images to scalar count estimations. The accuracy of visible object count predictions is evaluated using NN evaluation techniques to determine the accuracy of the predictions and the generalization ability of the fitted model. The count estimation experiments using the two feature extraction methods as input to NNs showed that fairly accurate count estimations can be obtained and that the fitted model could generalize fairly well to unseen images.
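As a rough illustration of the second feature extraction method, the sketch below builds a concatenated gradient orientation/magnitude histogram from a small synthetic image. The bin count, the normalisation, and the omission of the Canny edge step are simplifying assumptions, not details from the thesis.

```python
import math

def gradient_feature(img, n_bins=8):
    """Concatenated orientation + magnitude histogram from finite-difference
    gradients. Simplified sketch; the thesis also uses Canny edge
    orientations, omitted here."""
    h, w = len(img), len(img[0])
    grads = []
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            grads.append((math.hypot(gx, gy), math.atan2(gy, gx) % (2 * math.pi)))
    max_mag = max(m for m, _ in grads) or 1.0
    orient_hist = [0.0] * n_bins
    mag_hist = [0.0] * n_bins
    for mag, theta in grads:
        # Orientation bins weighted by magnitude; magnitude bins by count.
        orient_hist[min(int(theta / (2 * math.pi) * n_bins), n_bins - 1)] += mag
        mag_hist[min(int(mag / max_mag * n_bins), n_bins - 1)] += 1
    return orient_hist + mag_hist     # feature vector, input to the NN

img = [[(x * y) % 7 for x in range(8)] for y in range(8)]
feat = gradient_feature(img)
print(len(feat))  # 16
```

The fixed-length vector is the point: however large the stockpile image, the NN always receives the same number of inputs.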
27.
The Design of Specifications for the Development of Broadcast English Materials in Academic Listening/Speaking Courses. Barlow, Amy. 09 August 2010.
ESL students in academic listening/speaking courses often listen to long audio recordings similar to a lecture or other academic passages. When listening to these passages, students can lose interest, which impedes their learning of new strategies for understanding academic language. Students in the Level Four Listening/Speaking classes at Brigham Young University's English Language Center (ELC), under the previous curriculum, experienced this challenge: all of the passages were available only in audio and were long in duration, so the students lost interest and improved little in their listening skills. Under the new curriculum at the ELC, students in Academic Levels A and B practice listening using both audio and video. When only audio recordings are used, the students are observed to lose interest and they do not practice the strategies. In order to build student interest, broadcast news video clips can be used. These clips offer a multi-sensory experience for the students and can vary in length. They also expose the students to language and content that they will encounter in university settings, thus providing them with an authentic experience. In order to create cohesive and coherent materials using video clips, specifications for these materials needed to be designed and developed. The designed specifications set out ten steps for developers to follow in order to create these materials. As a part of these specifications, two sample sections were created. Because of the context, the specifications focus on the use of broadcast news clips; however, they can easily be adapted for use in other contexts as well. The developed sample sections were piloted in order to assess the usefulness of the specifications. Feedback was received from my Project Chair, the listening/speaking coordinator, the students who participated in the pilot, and the other teacher who participated in the pilot.
Using this feedback, revisions were made to the specifications and the sample sections.
28.
A computer simulation model to predict airport capacity enhancements. Nunna, Vijay Bhushan G. 22 October 2009.
The ever-increasing demand on the air transportation system is causing considerable congestion and delays, leading to large monetary losses and passenger inconvenience. This has prompted the development of many analysis tools to help identify where improvements to the airport system could enhance airport capacity.
The Center for Transportation Research at Virginia Tech, in line with the FAA's Capacity Enhancement Plan, is developing strategies to alleviate the airport congestion problem by developing a model (REDIM) to design and optimally locate high-speed exit taxiways. The objective of this research is to develop a computer simulation model to predict the airport capacity enhancements due to these high-speed exit taxiways, as well as to other changes in operational procedures, aircraft characteristics, airport environmental conditions, etc.
RUNSIM (RUNway Simulation Model), a discrete-event simulation model, was developed using the SIMSCRIPT II.5 language. The model simulates dual operations on a single runway, with capabilities for simulating FAA-standard and REDIM-designed high-speed exits, variable in-trail separations, different aircraft mixes and weights, arrival rates and patterns, etc. It currently has a 30-aircraft database for performing the simulation. Its output includes such global statistics as total arrival and departure delays, weighted average runway occupancy time (ROT) and its standard deviation, an aircraft exit assignment table, and arrival and departure event lists. It can perform multiple iterations in a single application, which helps in performing statistical analyses on the results for better inference. / Master of Science
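The discrete-event structure described above can be illustrated with a toy arrivals-only, single-runway simulation (RUNSIM itself is written in SIMSCRIPT II.5 and models far more: departures, exit assignment, aircraft mixes). Every parameter value below is assumed for illustration, not taken from the thesis.

```python
import heapq
import random

def simulate_runway(n_aircraft=20, mean_interarrival=90.0, rot=50.0, seed=1):
    """Toy discrete-event simulation: aircraft arrive at random times
    and must wait if the runway is still occupied (runway occupancy
    time `rot`). Returns average arrival delay in seconds."""
    rng = random.Random(seed)
    events = []                                # min-heap of (time, id)
    t = 0.0
    for i in range(n_aircraft):
        t += rng.expovariate(1.0 / mean_interarrival)
        heapq.heappush(events, (t, i))
    runway_free_at = 0.0
    total_delay = 0.0
    while events:
        arrival, _ = heapq.heappop(events)
        start = max(arrival, runway_free_at)   # hold if runway occupied
        total_delay += start - arrival
        runway_free_at = start + rot           # runway occupied for ROT
    return total_delay / n_aircraft

print(round(simulate_runway(), 1))
```

Shortening `rot`, which is precisely what a well-placed high-speed exit does, directly reduces the average delay the model reports, which is the mechanism behind the capacity enhancements studied here.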
29.
The formal specification of the Tees Confidentiality Model. Howitt, Anthony. January 2008.
This thesis reports an investigation into authorisation models, as used in identity and access management. It proposes new versions of an authorisation model, the Tees Confidentiality Model (TCM), and presents formal specifications in B, together with verifications and implementations of the key concepts using Spec Explorer, Spec# and LINQ. After introducing the concepts of authorisation and formal models, a formal specification in B of Role Based Access Control (RBAC) is presented. The concepts in RBAC have heavily influenced authorisation over the last two decades, and most of the research has concerned their continued development. A complete re-working of the ANSI RBAC Standard is developed in B, which highlights errors and deficiencies in the ANSI Standard and confirms that B is a suitable method for the specification of access control. A formal specification of the TCM in B is then developed. The TCM supports authorisation by multiple concepts, with no extra emphasis given to Role (as in RBAC). The conceptual framework of Reference Model and Functional Specification used in the ANSI RBAC Standard is used to structure the TCM formal model. Several improvements to the original TCM are present in the formal specification, notably a simplified treatment of collections. This new variation is called TCM2, to distinguish it from the original model. Following this, a further B formal specification of a TCM reduced to its essential fundamental components (referred to as TCM3) was produced; Spec Explorer was used to animate this specification and as a step towards implementation. An implementation of TCM3 using LINQ and SQL is then presented, and the original motivating healthcare scenario is used as an illustration. Finally, classes to implement the versions of the TCM models developed in the thesis are designed and implemented. These classes enable the TCM to be implemented in any authorisation scenario.
Throughout the thesis, model explorations, animations, and implementations are illustrated by SQL, C# and Spec# code fragments. These fragments show the correspondence of the B specification to the model design and implementation, and the effectiveness of using formal specification to produce robust code.
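The idea of authorisation by multiple concepts, with no privileged status for Role, can be sketched as follows. The concept names, rule shape, and healthcare-flavoured example are illustrative assumptions, not the TCM's actual structures.

```python
# Sketch: permissions may be granted by any concept a user holds
# (role, team, relationship, ...), with none treated specially.

def permitted(user_attrs, rules, resource, action):
    """A request is permitted if any rule matches some attribute the
    user holds, whatever concept that attribute belongs to."""
    for (concept, value), grants in rules.items():
        if user_attrs.get(concept) == value and (resource, action) in grants:
            return True
    return False

rules = {
    ("role", "clinician"):       {("record", "read")},
    ("team", "cardiology"):      {("record", "annotate")},
    ("relationship", "patient"): {("record", "read")},
}
alice = {"role": "clinician", "team": "cardiology"}
print(permitted(alice, rules, "record", "annotate"))  # True
print(permitted(alice, rules, "record", "delete"))    # False
```

In a pure RBAC system only the `("role", ...)` rules would exist; here the team membership grants a permission the role alone would not, which is the generalisation the TCM makes.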
30.
Filtering considerations when telemetering shock and vibration data. Walter, Patrick L. October 2001.
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The accurate measurement of shock and vibration data via flight telemetry is necessary to validate structural models, indicate off-nominal system performance, and/or generate environmental qualification criteria for airborne systems. Digital telemetry systems require anti-aliasing filters designed into them. If not properly selected and located, these filters can distort recorded time histories and modify their spectral content. This paper provides filter design guidance to optimize the quality of recorded flight structural dynamics data. It is based on the anticipated end use of the data. Examples of filtered shock data are included.
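The hazard the paper addresses can be seen with a small folding calculation: without an anti-aliasing filter, a signal component above the Nyquist frequency appears at a folded frequency inside the measurement band, corrupting the recorded spectrum. The numbers below are illustrative, not taken from the paper.

```python
def alias_frequency(f_signal, f_sample):
    """Apparent frequency of a tone sampled without anti-alias
    filtering: the component folds into the band [0, f_sample / 2]."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# An illustrative 900 Hz vibration component sampled at 1000 samples/s
# masquerades as a 100 Hz component in the telemetered data:
print(alias_frequency(900.0, 1000.0))  # 100.0
```

Once folded, the 100 Hz artifact is indistinguishable from a genuine 100 Hz structural response, which is why the filtering must happen in the analog chain before digitisation.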