401 |
Full-Text Aggregation: An Examination of Metadata Accuracy and the Implications for Resource Sharing / Cummings, Joel January 2003 (has links)
The author conducted a study comparing two lists of full-text content available in Academic Search Full-Text Elite. EBSCO provided the lists to the University College of the Fraser Valley (UCFV). The study was conducted to verify the accuracy of the claims of full-text content, because staff and library users at UCFV depend on this database as part of the library's journal collection. Interlibrary loan staff routinely used a printed list of Academic Search Full-Text Elite to check whether a journal was available at UCFV in electronic form; an accurate supplemental list of the library's electronic journals was therefore essential for cost-conscious interlibrary loan staff. The study found inaccuracies in the coverage of 57 percent of the journals sampled.
|
402 |
Towards Improving Conceptual Modeling: An Examination of Common Errors and Their Underlying Reasons / Currim, Sabah January 2008 (has links)
Databases are a critical part of information technology. Following a rigorous methodology in the database lifecycle ensures the development of an effective and efficient database. Conceptual data modeling is a critical stage in the database lifecycle. However, modeling is hard and error prone. An error can have multiple causes. Finding the reasons behind errors helps explain why an error was made and thus facilitates corrective action to prevent recurrence of that type of error in the future. We examine what errors are made during conceptual data modeling and why. In particular, this research looks at expertise-related reasons behind errors. We use a theoretical approach, grounded in work from educational psychology, followed by a survey study to validate the model. Our research approach includes the following steps: (1) measure expertise level, (2) classify kinds of errors made, (3) evaluate significance of errors, (4) predict types of errors that will be made based on expertise level, and (5) evaluate significance of each expertise level. Hypothesis testing revealed which aspects of expertise influence different types of errors. Once we better understand why expertise-related errors are made, future research can design tailored training to eliminate them.
|
403 |
Static Conflict Analysis of Transaction Programs / Zhang, Connie January 2000 (has links)
Transaction programs are comprised of read and write operations issued against the database. In a shared database system, one transaction program conflicts with another if it reads or writes data that another transaction program has written. This thesis presents a semi-automatic technique for pairwise static conflict analysis of embedded transaction programs. The analysis predicts whether a given pair of programs will conflict when executed against the database. There are several potential applications of this technique, the most obvious being transaction concurrency control in systems where it is not necessary to support arbitrary, dynamic queries and updates. By analyzing transactions in such systems before the transactions are run, it is possible to reduce or eliminate the need for locking or other dynamic concurrency control schemes.
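The read/write-set overlap test at the heart of such a conflict analysis can be sketched as follows. This is a minimal illustration only: the function and example names are invented here, and the thesis's technique works statically on embedded transaction programs rather than on sets supplied at runtime.

```python
# Hypothetical sketch: two transaction programs conflict if one reads or
# writes data items that the other writes (read-write, write-read, or
# write-write overlap). Sets stand in for statically predicted access sets.

def conflicts(t1_reads, t1_writes, t2_reads, t2_writes):
    """Pairwise conflict test over predicted read/write sets."""
    return bool(
        t1_reads & t2_writes or   # t1 reads what t2 writes
        t1_writes & t2_reads or   # t2 reads what t1 writes
        t1_writes & t2_writes     # both write the same item
    )

# Example access sets, as a static analysis might predict them
# from the embedded queries and updates (illustrative names).
deposit = ({"account"}, {"account"})
audit = ({"account", "ledger"}, set())
report = ({"ledger"}, set())

print(conflicts(*deposit, *audit))   # True: deposit writes what audit reads
print(conflicts(*audit, *report))    # False: both programs only read
```

Pairs shown not to conflict could then be run without locking, which is the concurrency-control application the abstract describes.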
|
404 |
Aligning global and local aspects of a national information programme for health : developing a critical and socio-technical appreciation / Harrop, Stephen Nicholas January 2010 (has links)
Written by a full-time clinician, this thesis explores an example of 'Big IT' in healthcare, the National Programme for IT in the United Kingdom National Health Service. It is unique in exploring the interaction between people and information technology in the healthcare workplace, from an engaged standpoint within one of the National Programme's implementation sites, in order to provide a critical and socio-technical appreciation.
|
405 |
The Usage of Smartphone and PDA Based Electronic Drug Databases Among Pharmacists / Bluder, Steven January 2012 (has links)
Class of 2012 Abstract / Specific Aims: To assess how pharmacists' use of PDA/smartphone-based electronic drug databases has changed over time. The working hypothesis is that use of PDA/smartphone-based electronic drug databases has increased over time.
Methods: A systematic review of the literature on the usage of PDA/smartphone-based electronic drug databases among pharmacists.
Main Results: Since 2006, the percentage of pharmacists using PDA/smartphone-based electronic drug databases has increased. Conclusions: The usage of smartphone- and PDA-based electronic drug databases among pharmacists has increased since 2006 (p<0.05). Easier and cheaper access to the technology has likely made these products available to more pharmacists.
|
406 |
Sample entropy and random forests: a methodology for anomaly-based intrusion detection and classification of low-bandwidth malware attacks / Hyla, Bret M. 09 1900 (has links)
Sample entropy examines changes in the normal distribution of network traffic to identify anomalies. Normalized information examines the overall probability distribution in a data set. Random Forests is a supervised learning algorithm that is efficient at classifying highly imbalanced data. Anomalies are exceedingly rare compared to the overall volume of network traffic. The combination of these methods enables low-bandwidth anomalies to be identified easily in high-bandwidth network traffic. Using only low-dimensional network information allows for near real-time identification of anomalies. The data were drawn from the 1999 DARPA intrusion detection evaluation data set. The experiments compare a baseline f-score to the observed entropy and normalized information of the network. Anomalies that are disguised in network flow analysis were detected. Random Forests proved capable of classifying anomalies using the sample entropy and normalized information. Our experiment divided the data set into five-minute time slices and found that the sample entropy and normalized information metrics were successful in classifying bad traffic with a recall of .99 and an f-score of .50, which was 185% better than our baseline.
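The two entropy metrics the abstract names can be sketched as follows. This is an illustrative reconstruction, not the thesis's code: the assumption is Shannon entropy computed over a categorical traffic feature (for example, destination ports seen in a five-minute slice), with normalized information obtained by scaling entropy to [0, 1] by the maximum possible entropy for the observed alphabet.

```python
# Sketch (assumed definitions): Shannon entropy of the empirical
# distribution of a traffic feature, and a normalized variant in [0, 1].
import math
from collections import Counter

def sample_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def normalized_information(values):
    """Entropy scaled by the maximum possible entropy log2(k) for k symbols."""
    k = len(set(values))
    if k <= 1:
        return 0.0
    return sample_entropy(values) / math.log2(k)

# A low-bandwidth port scan touches many distinct ports, so even a small
# probe volume shifts the entropy of the slice sharply upward.
normal = [80] * 95 + [443] * 5          # traffic concentrated on two ports
scan = list(range(1, 101))              # one probe per port
print(round(sample_entropy(normal), 3))  # low entropy
print(round(sample_entropy(scan), 3))    # maximal: log2(100) bits
```

Per-slice features like these could then be fed to a Random Forests classifier, which is the pipeline the abstract describes.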
|
407 |
Experimentation in a collaborative planning environment / Smith, Diane M. 06 1900 (links)
Research Enterprise (FIRE) system at NPS has facilitated rapid advancement of TW scope and capabilities as well as delivered a significantly improved final product to NETWARCOM.
|
408 |
Factores asociados al uso regular de fuentes de información en estudiantes de medicina de cuatro ciudades del Perú / Mejía, Christian R., Valladares Garrido, Mario J., Luyo Rivas, Aldo, Valladares Garrido, Danai, Talledo-Ulfe, Lincolth, Vilela Estrada, Martín A., Araujo Chumacero, Mary M., Red GIS Perú 31 July 2015 (has links)
Objectives. To determine the factors associated with regular use of sources of information by medical students in four cities of Peru. Materials and methods. In this cross-sectional study, medical students were surveyed in four cities of Peru, gathering information on the use of 14 sources of information and other educational and computer variables. Frequent use of an information source was defined as accessing it at least once a week. P values were obtained from generalized linear models adjusted for each respondent's site. Results. 2,300 students were surveyed; the median age was 21 years and 53% were women. Having received training in the use of information sources increased use of twelve of the databases consulted, but not of SciELO (p=0.053) or the university library (p=0.509). When adjusting for ownership of a laptop/netbook, these associations remained. After also adjusting for ownership of a smartphone, the association with the BVS Peru database was lost (p=0.067); the same occurred after the final adjustment, for whether the respondent had carried out any research activity. Conclusions. Frequent use of sources of information is associated with having received training, conducting research, and use of information and communication technologies. This should be taken into account in training programs and continuous improvement in undergraduate and graduate education. / christian.mejia.md@gmail.com / Article
|
409 |
A Unifying Version Model for Objects and Schema in Object-Oriented Database System / Shin, Dongil 08 1900 (has links)
A number of different versioning models have been proposed. The research in this area can be divided into two categories: object versioning and schema versioning. In this dissertation, both problem domains are considered as a single unit. This dissertation describes a unifying version model (UVM) for maintaining changes to both objects and schema. The UVM handles schema versioning operations by using object versioning techniques. The result is that the UVM allows the OODBMS to be much smaller than previous systems. Also, programmers need to know only one set of versioning operations, thus reducing the learning time by half. This dissertation shows that the UVM is a simple but semantically sound and powerful version model for both objects and schema.
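The unifying idea, a single derivation operation serving both objects and schema, can be sketched roughly as follows. The class and field names here are hypothetical illustrations, not taken from the dissertation.

```python
# Illustrative sketch: schema versions are stored as ordinary versioned
# objects, so one set of versioning operations covers both domains.

class Versioned:
    """A node in a version-derivation tree."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []

    def derive(self, new_state):
        """Create and return a new version derived from this one."""
        child = Versioned(new_state, parent=self)
        self.children.append(child)
        return child

# The same operation versions an object ...
emp_v1 = Versioned({"name": "Ada", "dept": "R&D"})
emp_v2 = emp_v1.derive({"name": "Ada", "dept": "Sales"})

# ... and versions the schema, since the schema is just another object.
schema_v1 = Versioned({"Employee": ["name", "dept"]})
schema_v2 = schema_v1.derive({"Employee": ["name", "dept", "salary"]})

print(emp_v2.parent is emp_v1)        # True
print(schema_v2.parent is schema_v1)  # True
```

Because both kinds of history live in the same structure, only one derivation operation has to be implemented and learned, which is the economy the abstract claims.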
|
410 |
Analysis and Experimental Comparison of Graph Databases / Kolomičenko, Vojtěch January 2013 (has links)
In recent years a new type of NoSQL database, the graph database (GDB), has gained significant popularity due to the increasing need to process and store data in the form of a graph. The objective of this thesis is to research the possibilities and limitations of GDBs and to conduct an experimental comparison of selected GDB implementations. For this purpose, the requirements of a universal GDB benchmark have been formulated and an extensible benchmarking tool, named BlueBench, has been developed.
|