41 |
Information Technology Adoption by Small Business Owners / Ragab, Soha Elaskalani 01 January 2016 (has links)
Small business owners need effective strategies to increase profitability. One such strategy is the adoption of information technology (IT). The purpose of this multiple case study was to explore the strategies used by small business enterprise (SBE) owners to implement IT solutions for increased profitability within 3 years of opening their business. The population consisted of 3 small business owners in Orange County, Southern California, who were profitable by their third year of business. The conceptual framework for this study was based upon general systems theory. Data for this study were collected through semistructured interviews and a review of company documents. Transcript review and member checking were included for validity and reliability purposes. Methodological triangulation, achieved through analysis of business plans, financial documents, and profitability trends documents, allowed identification of 4 emergent themes: essential strategies small business owners use to implement IT solutions for increased profitability, the essential relationship between network orientation and successful IT implementation, the relationship between IT consultants and successful implementation, and the relationship between internal IT resources and successful implementation. The findings from this study could impact social change because when SBEs are profitable, SBE owners will contribute to the affluence of their workers, communities, local economies, and society.
|
42 |
Strategies to Lower Information Technology Employee Voluntary Turnover / Velez, Nelson 01 January 2019 (has links)
For information technology (IT) professionals, the average rate of voluntary employee turnover is approximately 21.5%, and turnover occurs in fewer than 5 years. The purpose of this single case study was to explore strategies IT outsourcing business executives used to retain key IT employees in the New England region of the United States. Three IT business leaders from a single organization were selected to participate because they had implemented strategies to retain key IT employees. Herzberg's 2-factor theory of motivation was used as the conceptual framework for this doctoral study. Data were collected using semistructured interviews and a review of company policies and personnel handbooks. Clarke and Braun's thematic analysis was used for data analysis, including assembling the data, creating codes from the data, compiling codes to generate themes, and interpreting and presenting themes. Member checking and triangulation processes helped increase study validity and reliability. Three themes emerged from the study: building personal relationships, creating a positive company culture, and investing in employee training. The findings of this study may help IT leaders increase employee retention by focusing on work relationships, company culture, and employee training. Findings may contribute to social change by helping IT leaders become civically engaged and address issues of public concern by increasing community volunteering, participating in charitable activities and philanthropy, and becoming politically active through petitioning and collaborating with local authorities.
|
43 |
An Effectual Approach for the Development of Novel Applications on Digital Platforms / Malgonde, Onkar Shamrao 30 June 2018 (has links)
The development of novel software applications on digital platforms differs from traditional software development and presents unique challenges to the software development manager and team. Application producers must achieve application-platform match, application-market match, value propositions exceeding the platform's core value propositions, and novelty. These desired properties support a new vision of the software development team as entrepreneurs with the goal of developing novel applications on digital platforms. Digital platforms are characterized by an uncertain, risky, and resource-constrained environment, where existing approaches—plan-driven, ad hoc, and controlled-flexible—have limited applicability. Building on the theoretical basis of the theory of effectuation from the entrepreneurship domain, this dissertation proposes an effectual approach to software development. Preliminary studies are conducted to provide prima facie evidence of effectual thinking in software development teams. Pilot interviews at local organizations are also conducted to augment the approach. Finally, two case studies are conducted to validate the approach. We find conclusive evidence for the efficacy of effectual software development in developing novel applications on digital platforms. We also find that novel ideas are identified, honed, and incorporated in the application using effectual thinking. This study contributes to the information systems literature by proposing and validating an effectual approach to software development. It contributes to the entrepreneurship literature by illustrating the role of planning and visionary approaches in effectuation settings. It also contributes to practitioners by highlighting the theoretical underpinnings of existing approaches and of the effectual approach, which allows software development teams to incorporate effectual thinking and develop novel software applications. Finally, we conclude with a discussion of the theoretical contributions of this study, its limitations, and future research avenues.
|
44 |
View maintenance in nested relations and object-relational databases / Liu, Jixue January 2000 (has links)
A materialized view is a derived data collection stored in a database. When the source data for a materialized view is updated, the materialized view also needs to be updated. The process of updating a materialized view in response to changes in the source data is called view maintenance. There are two methods for maintaining a materialized view: recomputation and incremental computation. Recomputation computes the new view instance from scratch using the updated source data. Incremental computation, on the other hand, computes the new view instance by using the update to the source data, the old view instance, and possibly some source data. Incremental computation is widely accepted as a less expensive method of maintaining a view when the size of the update to the source data is small in relation to the size of the source data. / Thesis (PhD)--University of South Australia, 2000
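As an illustrative sketch, not drawn from the thesis itself, the contrast between recomputation and incremental computation can be shown for a simple aggregate view; the table, view, and function names below are hypothetical.

```python
# Illustrative sketch only: maintaining a materialized SUM view over a source table.
# Table, view, and function names are hypothetical, not taken from the thesis.

source = [("widgets", 10), ("gadgets", 5), ("widgets", 3)]  # (product, qty) rows

def recompute_view(rows):
    """Recomputation: rebuild the whole view from the updated source data."""
    view = {}
    for product, qty in rows:
        view[product] = view.get(product, 0) + qty
    return view

def apply_delta(view, inserted=(), deleted=()):
    """Incremental computation: fold only the update (delta) into the old view."""
    for product, qty in inserted:
        view[product] = view.get(product, 0) + qty
    for product, qty in deleted:
        view[product] = view.get(product, 0) - qty
    return view

view = recompute_view(source)                          # initial materialization
view = apply_delta(view, inserted=[("widgets", 2)])    # cheap: touches only the delta
assert view == recompute_view(source + [("widgets", 2)])
```

When the update is small relative to the source, folding it into the old view instance in this way is far cheaper than rebuilding the view, which is the intuition behind the incremental computation described in the abstract.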
|
45 |
Development of a functional prototype of an environmental risk assessment parameter database on the World-Wide Web / Potter, Nathan Kent 06 August 1997
The goal of the project was to develop a functional prototype of an environmental
risk assessment parameter database on the World-Wide Web. The ability to develop a
consolidated environmental database has become possible due to the phenomenal growth
of the Internet and the World-Wide Web over the past few years. A large number of
environmental resources do currently exist; however, with the large volume of
information available, access, management, reliability, and retrievability have become
increasingly difficult.
To illustrate the prototype database, a practical environmental concern and the
tools necessary to evaluate and characterize that concern were needed. Uranium (²³⁸U)
daughters leaching from abandoned mill tailing piles at three abandoned uranium mines
in southwestern Colorado were chosen to demonstrate the database concept. The
RESRAD environmental pathway modeling code served as the evaluation and
characterization tool. Due to the size and complexity of RESRAD, a single radionuclide
release rate equation was isolated as a controllable component of the code. The equation
was a small part of the water pathway factor and examined the rate at which
radionuclides absorbed in soil were leached by infiltrating water. This serves as the
source term for groundwater contamination and directly applies to the ²³⁸U progeny
leaching from mill tailing piles scenario. Parameters selected from the equation dealt with the background data that directly influenced the mobility of contaminants in the environment. Environmental data for the three Colorado sites were gathered and interpreted. Probability Density Functions (PDFs) were developed for input parameters, and the results were then incorporated into the web site. / Graduation date: 1998
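As a hedged illustration, not reproduced from the thesis or the RESRAD code, the kind of leach-rate calculation that such parameters feed can be sketched with a generic first-order soil-leaching form; the parameter values below are placeholders, not data from the Colorado sites.

```python
# Hedged sketch of a generic first-order leach-rate calculation of the kind used in
# water-pathway soil-leaching models; not the exact RESRAD equation, and all
# parameter values are placeholders rather than data from the Colorado sites.

def leach_rate_constant(infiltration_m_per_yr, moisture_fraction, thickness_m,
                        bulk_density_g_cm3, kd_cm3_g):
    """Annual leach-rate constant (1/yr) for radionuclides sorbed in a contaminated layer."""
    retardation = 1.0 + (bulk_density_g_cm3 * kd_cm3_g) / moisture_fraction
    return infiltration_m_per_yr / (moisture_fraction * thickness_m * retardation)

# Placeholder inputs: 0.2 m/yr infiltration, 30% moisture, 2 m thick tailings layer,
# 1.6 g/cm^3 bulk density, and a distribution coefficient Kd of 50 cm^3/g.
print(leach_rate_constant(0.2, 0.3, 2.0, 1.6, 50.0))
```

The distribution coefficient Kd is exactly the sort of background parameter, governing contaminant mobility, that the prototype database was built to serve to such a calculation.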
|
46 |
Join-order optimization with Cartesian products / Vance, Bennet 01 1900 (has links) (PDF)
Ph.D. / Computer Science and Engineering / Join-order optimization plays a central role in the processing of relational database queries. This dissertation presents two new algorithms for join-order optimization: a deterministic, exhaustive-search algorithm, and a stochastic algorithm that is based on the deterministic one. The deterministic algorithm achieves new complexity bounds for exhaustive search in join-order optimization, and in timing tests, both algorithms are shown to run many times faster than their predecessors. In addition, these new, fast algorithms search a larger space of join orders than is customary in join-order optimization. Not only do they consider all the so-called bushy join orders, rather than just the left-deep ones, but, more unusually, they also consider all join orders that contain Cartesian products. The novel construction of these algorithms enables them to search a space including Cartesian products without paying the performance penalty that is conventionally associated with such a search.
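As a simplified sketch, and not the dissertation's algorithm, the idea of exhaustively searching bushy join orders that may contain Cartesian products can be illustrated with a dynamic program over subsets of relations; the cardinalities, selectivities, and cost model below are toy assumptions.

```python
# Simplified sketch of exhaustive bushy join-order search over relation subsets,
# allowing plans that contain Cartesian products. The cardinalities, selectivities,
# and cost model are toy assumptions, not the dissertation's algorithm.
from itertools import combinations

cardinality = {"A": 1000, "B": 100, "C": 10}                   # toy base-table sizes
selectivity = {frozenset("AB"): 0.01, frozenset("BC"): 0.1}    # absent pair => Cartesian product

def join_size(left_size, right_size, left_rels, right_rels):
    sel = 1.0
    for l in left_rels:
        for r in right_rels:
            sel *= selectivity.get(frozenset({l, r}), 1.0)     # 1.0 means pure cross product
    return left_size * right_size * sel

# best[S] = (result size, cheapest cost, plan string) for the subset S of relations
best = {frozenset({r}): (cardinality[r], 0.0, r) for r in cardinality}

rels = sorted(cardinality)
for k in range(2, len(rels) + 1):
    for subset in map(frozenset, combinations(rels, k)):
        for m in range(1, k):                                  # every split => bushy shapes allowed
            for left in map(frozenset, combinations(sorted(subset), m)):
                right = subset - left
                lsize, lcost, lplan = best[left]
                rsize, rcost, rplan = best[right]
                size = join_size(lsize, rsize, left, right)
                cost = lcost + rcost + size                    # toy cost: sum of intermediate sizes
                if subset not in best or cost < best[subset][1]:
                    best[subset] = (size, cost, f"({lplan} JOIN {rplan})")

print(best[frozenset(rels)])   # cheapest plan over all three relations, cross products included
```

Because every split of every subset is considered, the search covers left-deep, right-deep, and bushy shapes alike, and pairs with no join predicate simply fall through to a cross product rather than being pruned away.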
|
47 |
The Completeness Problem of Ordered Relational Databases / Jiang, Wei January 2010 (has links)
Support of order in query processing is a crucial component in
relational database systems, not only because the output of a
query is often required to be sorted in a specific order, but also
because employing order properties can significantly reduce the
query execution cost. Therefore, finding an effective approach to
answer queries over ordered data is important to the efficiency of
query processing in relational databases.
In this dissertation, an ordered relational database model is
proposed, which captures both data tuples of relations and tuple
ordering in relations. Based on this conceptual model, ordered
relational queries are formally defined in a two-sorted first-order calculus, which serves as a yardstick for evaluating the
expressive power of other ordered query representations.
The primary purpose of this dissertation is to investigate the
expressive power of different ordered query representations.
Particularly, the completeness problem of ordered relational
algebras is studied with respect to the first-order calculus:
does there exist an ordered algebra such that any first-order expressible ordered
relational query can be expressed by a finite sequence of ordered
operations? The significance of studying the completeness problem
of ordered relational algebras is that a complete ordered relational
algebra would make it possible to implement a finite set of ordered
operators expressing all first-order expressible ordered queries in
relational databases.
The dissertation then focuses on the completeness problem of
ordered conjunctive queries. This investigation is performed in an
incremental manner: first, ordered conjunctive queries with
data-decided order are considered; then, ordered conjunctive queries
with t-decided order are studied; finally, the completeness problem
for general ordered conjunctive queries is explored. The completeness
theorem of ordered algebras is proven for all three classes of ordered
conjunctive queries.
Although this ordered relational database model is only
conceptual, and ordered operators are not implemented in this
dissertation, we do prove that a complete set of ordered operators
exists to retrieve all first-order expressible ordered queries in
the three classes of ordered conjunctive queries. This research
sheds light on the possibility of implementing a complete set of
ordered operators in relational databases to solve the performance
problem of order-relevant queries.
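As a purely illustrative sketch, not the dissertation's formal algebra, the conceptual model described above can be pictured as data tuples paired with an explicit tuple ordering, together with operators that define the order of their output; the names and the single operator below are hypothetical.

```python
# Purely illustrative: an "ordered relation" as data tuples plus an explicit tuple
# ordering, with an order-preserving selection operator. The names and operator
# are hypothetical and are not the dissertation's formal ordered algebra.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class OrderedRelation:
    tuples: List[Tuple]        # the data component of the relation
    ordering: List[int]        # the order component: positions into `tuples`

    def in_order(self) -> List[Tuple]:
        return [self.tuples[i] for i in self.ordering]

def ordered_select(rel: OrderedRelation, pred: Callable[[Tuple], bool]) -> OrderedRelation:
    """Selection that keeps the input's tuple ordering on the qualifying tuples."""
    kept = [i for i in rel.ordering if pred(rel.tuples[i])]
    remap = {old: new for new, old in enumerate(kept)}
    return OrderedRelation([rel.tuples[i] for i in kept], [remap[i] for i in kept])

emp = OrderedRelation([("alice", 90), ("bob", 70), ("carol", 80)], ordering=[1, 2, 0])
print(ordered_select(emp, lambda t: t[1] >= 80).in_order())   # [('carol', 80), ('alice', 90)]
```

A complete ordered algebra in the dissertation's sense would supply a finite set of such order-aware operators sufficient to express every first-order expressible ordered query, not just the selection shown here.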
|
48 |
Use of incident databases for cause and consequence analysis and national estimates / Obidullah, A.S.M. 25 April 2007 (has links)
Many incidents have occurred because industries have ignored past incidents or failed to
learn lessons from the past. Incident databases provide an effective option for managing
large amounts of information about the past incidents. Analysis of data stored in
existing databases can lead to useful conclusions and to a reduction of chemical incidents
and their consequences. An incident database is a knowledge-based system that
can give insight into the situations that led to incidents. Effective analysis of data
from a database can yield information that helps reduce future
incidents: the cause of an incident, the critical equipment involved, the type of chemical released, and the
type of injury and victim. In this research, the Hazardous Substances Emergency Events
Surveillance (HSEES) database has been analyzed, focusing on manufacturing events in
Texas from 1993 to 2004.
Between thirteen and sixteen states have participated in the HSEES incident reporting
system, and it does not include all near-miss incidents. Petroleum-related incidents
are also excluded from the HSEES system. Studies show that HSEES covers only 37%
of all incidents in the US. This scaling ratio was used to estimate the total universe size.
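As a simple worked illustration of the scaling described above, where the 37% coverage figure comes from the abstract and the observed count is only a placeholder, a national estimate can be obtained by dividing the observed incident count by the coverage ratio.

```python
# Worked illustration of the scaling described above. Only the 37% coverage ratio
# comes from the abstract; the observed incident count is a placeholder.
observed_incidents = 1000              # hypothetical count of incidents recorded in HSEES
coverage_ratio = 0.37                  # HSEES is reported to cover about 37% of US incidents
estimated_total = observed_incidents / coverage_ratio
print(round(estimated_total))          # about 2703 incidents nationwide
```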
|
49 |
The development and testing of the academic information system survey / Plummer, Lionel. January 2008 (has links)
Thesis (M.L.A.)--University of Texas at Arlington, 2008.
|
50 |
Distributed information systems design through software teams / Durrett, John Randall, January 1999 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 1999. / Vita. Includes bibliographical references (leaves 97-103). Available also in a digital version from Dissertation Abstracts.
|