
    • An Analysis of Charging Practices and their Impact on Battery Degradation in North American Electric Vehicles Built Between 2010-2020

      Ferrier, Douglas William Edward (Indiana State University, 2022-05)
      Electric vehicles (EVs) are emerging as a component of the global solution to combat climate change. However, in North America, particularly in the United States and Canada, the transition away from internal combustion engines (ICE) has been slow. North America faces unique challenges due to its geographical size and population in comparison to other continents. The good news is that EV adoption is increasing within North America. Along with increased EV adoption, governments and public companies are constructing charging infrastructure to support increased consumer EV purchases. Despite increased adoption, many future and current owners throughout North American society have concerns about an electric vehicle's key feature: the battery. Many EV owners are concerned about the battery's State of Health (SOH) – how to keep batteries healthy and use best practices to keep their range at maximum capacity. SOH is influenced by five key factors: (1) temperature, (2) charge/discharge rate, (3) charge/discharge depth, (4) cyclic charging, and (5) ending State of Charge (SOC). This study primarily focuses on data centered around charging. This dissertation examines data generated by everyday EV users and uses it to predict how charging habits affect batteries over time. Charging effects include decreasing battery SOH and capacity degradation. Lowering the SOH reduces the battery's viability for continuous use; at approximately 70% SOH the battery is typically deemed End of Life (EoL). The overall range of the EV is affected by capacity degradation; as batteries degrade, the total km (or miles) available decreases. This study uses regression analysis to examine relationships and predictors of SOH, temperature, levels of charging, and SOC. The data collected and analyzed are used to determine best practices for consumers charging batteries at home and abroad. There were two methods for analyzing the data: (1) using EV-generated data (SOH, charger type) saved in CSV files via a smartphone application, and (2) analyzing consumed energy in a large dataset using a segmentation process based on equivalent SOC differences between two points in time. The current study makes use of one of the largest datasets of "real-world" data ever collected from EVs in the United States and Canada, with over one million lines. Eighteen EV models are compared for the amount of degradation over one year. A discussion of how these findings affect EV owners' usage of models from 2010-2020 is included. Multiple recommendations for future studies are provided.
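      The segmentation idea in the second method can be illustrated with a short, hypothetical sketch: it estimates usable capacity from the energy consumed across segments that span an equivalent SOC difference, so a declining estimate over time indicates degradation. The file and column names (timestamp, soc_pct, cumulative energy_kwh) are assumptions, not the dissertation's actual data schema.

        # Minimal sketch of SOC-difference segmentation; column names are hypothetical.
        import pandas as pd

        logs = pd.read_csv("ev_telemetry.csv").sort_values("timestamp")  # timestamp, soc_pct, energy_kwh (cumulative)
        SOC_DELTA = 50.0                         # compare segments spanning an equivalent 50% SOC drop

        segments = []
        start = logs.iloc[0]
        for _, row in logs.iterrows():
            drop = start["soc_pct"] - row["soc_pct"]
            if drop >= SOC_DELTA:
                energy_used = row["energy_kwh"] - start["energy_kwh"]
                # Scale consumed energy to a full 0-100% swing to approximate usable capacity.
                segments.append({"timestamp": row["timestamp"],
                                 "est_capacity_kwh": energy_used * 100.0 / drop})
                start = row

        capacity = pd.DataFrame(segments)
        print(capacity.head())                   # a downward trend over a year suggests capacity degradation
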
    • A STATIC CODE ANALYSIS AND PATTERN RECOGNITION ALGORITHM-DRIVEN, QUANTITATIVE, MATHEMATICAL MODEL-ORIENTED RISK ASSESSMENT FRAMEWORK OF CLOUD-BASED HEALTH INFORMATION APPLICATIONS

      Park, Dennis B. (Indiana State University, 2021-07)
      According to a survey, the healthcare industry is one of the least cloud-adopting industries. The low adoption reflects the healthcare industry's ongoing concerns about the security of the cloud. Business applications, according to another survey, are among the most vulnerable components of business information systems. Many risk assessment frameworks available today, particularly for health information applications, require significant customization before they can be used. This study created a new framework to assess cloud risks specifically for health information applications, utilizing data-driven risk assessment methodologies to avoid surveys, interviews, and meetings for data collection. For the feasibility study, open-source application code was chosen from over 190 million GitHub repositories using a decision tree method, while purposive sampling was used to choose a simulated patient information database from the healthcare industry. Using these methods, the researcher identified security warnings and suspected privacy violations and subsequently converted them into quantitative measures to calculate the risks of the cloud-based health information application and database. The significance of this study lies in the collection of data directly from applications and databases with a quantitative approach for risk calculation.
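      As a loose illustration of how static-analysis findings might be turned into a quantitative risk measure, the sketch below computes a severity-weighted score normalized per thousand lines of code. The severity categories, weights, and normalization are assumptions for illustration only, not the framework's actual model.

        # Hypothetical severity-weighted risk score from static-analysis warnings.
        SEVERITY_WEIGHT = {"critical": 10, "high": 7, "medium": 4, "low": 1}   # assumed weights

        def risk_score(warnings, loc):
            """warnings: list of dicts like {"severity": "high"}; loc: lines of code scanned."""
            weighted = sum(SEVERITY_WEIGHT.get(w["severity"], 0) for w in warnings)
            return 1000.0 * weighted / max(loc, 1)     # normalize per 1,000 lines of code

        findings = [{"severity": "high"}, {"severity": "medium"}, {"severity": "low"}]
        print(f"risk per KLOC: {risk_score(findings, loc=25_000):.2f}")
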
    • ORGANIZATIONAL CULTURE: AN EXAMINATION OF THE ROLE OF LEADERSHIP IN NEUTRALIZING THE NEGATIVE EFFECTS OF TECHNOSTRESS DURING OPERATIONAL SHIFTS

      Kirkland, David P. (Indiana State University, 2021-12)
      COVID-19 created a support problem for public universities across the United States and required that IT departments and professionals alter how they performed in 2020, and perhaps beyond. IT professionals tasked with safeguarding large amounts of data were required to shift to a teleworking posture to continue offering a similar level of service as previously expected. In addition to the technological shift that organizations experienced because of COVID-19, leadership challenges also impacted IT departments across the United States. The rapid shift of operational duties has the propensity to increase technology-related stress, owing to employees' perceptions of whether they can be successful in their roles. The purpose of this quantitative, non-experimental, correlational pilot study was to examine the relationship between technostress, job satisfaction, burnout, and the demographic characteristics of age, gender, and years of experience of IT professionals working in higher education. This pilot study included a convenience sample of IT professionals from a single public university in the United States, and an online survey was administered to discover the impact operational shifts have on levels of technostress, job satisfaction, and job burnout. To be considered, the respondent had to meet specific criteria: (a) be an adult of at least 18 years of age, (b) work as an IT professional within the university, and (c) work for a minimum of one year as an IT professional. A sample of 116 potential respondents was emailed to request participation in the study. There were 46 survey submissions received (roughly 40% of likely respondents). Of those surveys received, there were 31 completed cases (approximately 27%), which were analyzed using multiple linear regression. Results of this study suggested there was no predictive relationship of technostress on job satisfaction. However, results did show decreased job satisfaction for demographic characteristics such as age. Additionally, there was no overall predictive relationship of technostress on job burnout; however, results suggest that, compared with people over 55, people between 35 and 44 experienced increased burnout overall.
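      A minimal sketch of the kind of multiple linear regression described (technostress and demographic characteristics predicting job satisfaction) is shown below; the survey file and column names are hypothetical, not the study's instrument.

        # Hypothetical multiple linear regression on survey data.
        import pandas as pd
        import statsmodels.formula.api as smf

        survey = pd.read_csv("it_survey.csv")    # technostress, job_satisfaction, age_group, gender, years_exp (assumed)
        model = smf.ols("job_satisfaction ~ technostress + C(age_group) + C(gender) + years_exp",
                        data=survey).fit()
        print(model.summary())                   # coefficient p-values indicate predictive relationships
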
    • LEVERAGING MACHINE LEARNING TO IDENTIFY QUALITY ISSUES IN THE MEDICAID CLAIM ADJUDICATION PROCESS

      Hoseini, Cyrus (Indiana State University, 2020-12)
      Medicaid is the largest health insurance program in the U.S. It provides health coverage to over 68 million individuals, costs the nation over $600 billion a year, and is subject to improper payments (fraud, waste, and abuse) and inaccurate payments (claims processed erroneously). Medicaid programs partially use Fee-For-Service (FFS) to provide coverage to beneficiaries by adjudicating claims and leveraging traditional inferential statistics to verify the quality of adjudicated claims. These quality methods only provide an interval estimate of the quality errors and are incapable of detecting most claim adjudication errors, representing potentially millions of dollars in opportunity costs. This dissertation studied a method of applying supervised learning to detect erroneous payments in the entire population of adjudicated claims in each Medicaid Management Information System (MMIS), focusing on two specific claim types: inpatient and outpatient. A synthesized source of adjudicated claims generated by the Centers for Medicare & Medicaid Services (CMS) was used to create the original dataset. Quality reports from California FFS Medicaid were used to extract the underlying statistical pattern of claim adjudication errors in each Medicaid FFS and to label the data, utilizing goodness-of-fit and Anderson-Darling tests. Principal Component Analysis (PCA) and business knowledge were applied for dimensionality reduction, resulting in the selection of sixteen (16) features for the outpatient and nineteen (19) features for the inpatient claims models. Ten (10) supervised learning algorithms were trained and tested on the labeled data: Decision Tree with two configurations (Entropy and Gini), Random Forest with two configurations (Entropy and Gini), Naïve Bayes, K Nearest Neighbor, Logistic Regression, Neural Network, Discriminant Analysis, and Gradient Boosting. Five (5)-fold cross-validation and event-based sampling were applied during the training process (with oversampling using the SMOTE method and stratification within oversampling). The prediction power (Gini importance) of the selected features was measured using the Mean Decrease in Impurity (MDI) method across three algorithms. A one-way ANOVA and Tukey and Fisher LSD pairwise comparisons were conducted. Results show that Claim Payment Amount significantly outperforms the remaining features in prediction power (highest mean F-value for Gini importance at the α = 0.05 significance level) for both claim types. Finally, all algorithms' recall and F1-scores were measured for both claim types (inpatient and outpatient), with and without oversampling. A one-way ANOVA and Tukey and Fisher LSD pairwise comparisons were conducted. The results show a statistically significant difference in the algorithms' performance in detecting quality issues in the outpatient and inpatient claims. Gradient Boosting and Decision Tree (with various configurations and sampling strategies) outperform the rest of the algorithms in recall and F1-measure on both datasets. Logistic Regression showed better recall on the outpatient than the inpatient data, and Naïve Bayes performs considerably better in recall and F1-score on outpatient data. Medicaid FFS programs and consultants, Medicaid administrators, and researchers could use this study to develop machine learning models to detect quality issues in Medicaid FFS claim datasets at scale, potentially saving millions of dollars.
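      To make the pipeline shape concrete, here is a hedged sketch of one arm of the approach described: SMOTE oversampling applied inside cross-validation, a Gini decision tree, recall/F1 scoring, and MDI (Gini) feature importances. It assumes the scikit-learn and imbalanced-learn libraries and hypothetical feature names; it is not the dissertation's actual code or data.

        # Sketch: SMOTE inside CV, Gini decision tree, recall/F1, MDI importances.
        import pandas as pd
        from imblearn.over_sampling import SMOTE
        from imblearn.pipeline import Pipeline
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_validate

        claims = pd.read_csv("outpatient_claims.csv")        # hypothetical labeled claims
        X = claims.drop(columns=["adjudication_error"])      # e.g., claim_payment_amount, ...
        y = claims["adjudication_error"]                     # 1 = erroneous payment, 0 = correct

        pipe = Pipeline([("smote", SMOTE(random_state=0)),
                         ("tree", DecisionTreeClassifier(criterion="gini", random_state=0))])
        scores = cross_validate(pipe, X, y, cv=5, scoring=["recall", "f1"])
        print(scores["test_recall"].mean(), scores["test_f1"].mean())

        pipe.fit(X, y)                                       # MDI (Gini) importances from the fitted tree
        print(dict(zip(X.columns, pipe.named_steps["tree"].feature_importances_)))
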
    • THE PERCEIVED EFFECTIVENESS OF THE SCALED AGILE FRAMEWORK® IN SOFTWARE DEVELOPMENT ORGANIZATIONS

      Carilli, James F. (Indiana State University, 2021-05)
      Software development projects experience very high failure rates. Due to the high cost of project failure, coupled with studies that found failure rates are closely tied to the software development method used, the purpose of this mixed-methods exploratory case study was to examine the extent of perceived effectiveness of the Scaled Agile Framework (SAFe®) in software development organizations, using Complex Adaptive Systems as a lens to guide the study. This research focused on the extent of perceived effectiveness of the Scaled Agile Framework® on organizational outcomes, team management, stakeholder and customer management, management of emerging requirements, and overall organizational agility. Three organizations participated, from the retail, government, and logistics industries. Each organization transitioned from the Waterfall method to SAFe®. In all three cases, the participants reported the transition to SAFe® helped improve strategic alignment, facilitate business/IT coordination, increase speed of delivery, improve software quality, and reduce rework by applying Lean-Agile principles, resulting in lower overall costs and reduced risk. Principal challenges included the need for change management and training to help assimilate the new structure, roles, and responsibilities. Another significant challenge cited was the transition from project management measures (e.g., cost, scope, schedule, earned value) to SAFe® measures of throughput (i.e., working software) and value (i.e., prioritized features based on business value). Interactions with “non-SAFe®” organizations were cited as a concern for dependencies on other teams that could result in schedule and priority misalignment.
    • JOB SATISFACTION AND TURNOVER INTENTION IN HIGHER EDUCATION: A STUDY OF INFORMATION TECHNOLOGY PROFESSIONALS IN THE CALIFORNIA STATE UNIVERSITY SYSTEM

      Banks, Brooke Ferrier (Indiana State University, 2019-08)
      The purpose of this study was to survey perceived job satisfaction and turnover intention of information technology professionals in the California State University (CSU) system. Employee satisfaction facets (work, pay, opportunities for promotion, supervision, and coworkers), overall satisfaction, and turnover intention were measured. Further, the study identified whether there was a significant difference in perceived job satisfaction or turnover intention based on years of service in the CSU system, gender, or campus in the CSU system. The study also examined the uniqueness of information technology professionals at campuses in the CSU system. This study utilized a mixed-methods design with two distinct phases. The quantitative phase of the study involved participants responding to an online survey. An invitation was sent to 622 information technology professionals at six campuses in the CSU system with a request to complete the survey. A total of 59 information technology employees responded, for a response rate of 9.49%. The quantitative results support earlier studies that report a negative correlation between overall job satisfaction and turnover intention. Of the five facets of job satisfaction, mean satisfaction with opportunities for promotion was the lowest. The qualitative phase followed the quantitative phase and involved interviewing information technology managers from the CSU system, using a semi-structured interview protocol, to gain additional clarity about the data gathered in the quantitative phase. The managers did not perceive a difference between the job satisfaction of information technology professionals and that of other professionals. The majority of managers reported viewing turnover positively, but suggested that their view of turnover is highly situational, depending on whether the turnover is of a high or low performer. The culture of turnover intention among information technology professionals was described as somewhat different from that of other professionals, given the ease of skills transfer and the market demand for information technology professionals.
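      The core quantitative relationship reported (a negative correlation between overall job satisfaction and turnover intention) could be computed as in the brief sketch below; the correlation type, data file, and column names are hypothetical illustrations, not the study's analysis.

        # Hypothetical correlation between overall satisfaction and turnover intention.
        import pandas as pd
        from scipy import stats

        jss = pd.read_csv("csu_it_survey.csv")   # overall_satisfaction, turnover_intention (assumed columns)
        r, p = stats.pearsonr(jss["overall_satisfaction"], jss["turnover_intention"])
        print(f"r = {r:.2f}, p = {p:.4f}")       # a negative r replicates the earlier findings
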
    • THE IMPLEMENTATION OF A SYSTEMATIC METHOD FOR IDENTIFYING THE ROOT CAUSES OF MANUFACTURING NEGATIVE PROCESS YIELD IN A NORTH AMERICAN FASTENER MANUFACTURING FACILITY

      Venable, Bradley J. (Indiana State University, 2019-05)
      Negative process yield is an unexplained expense that has a significant impact on profits. As with most other manufacturing processes, there are multiple processing operations that a finished good can be exposed to over the course of the entire process. Negative process yield can occur during these known operations, but without accurate reporting and data collection it cannot be properly accounted for. To investigate this unexplained phenomenon properly, an understanding of what could possibly cause it must be achieved. In the case of the North American fastener manufacturing plant, missing pieces that are not accounted for through the normal scrap reporting process are often referred to as “negative process yield”. Loss of material from the various processes has been identified as a problem for the North American fastener manufacturing plant. The set of tools that assists in the identification and steady elimination of waste, also referred to as lean principles and techniques, was used during the study. This study examined and analyzed the data collected on the unexplained loss of production pieces throughout the production process and the financial implications or effects on margins or profits. The Taguchi Design of Experiments (DOE) method was used for the experiment. Roy stated that the main focus of the application of DOE is to improve quality. The definition of quality varies widely depending on the application, but it must be defined before any experimental technique can produce meaningful results. Taguchi offers a generalized definition for quality of performance. He regards performance as the major component of product or process quality. Reduced variation results in less scrap, fewer product rejections, and fewer warranty returns, consequently reducing costs, improving profits, and improving customer satisfaction. The results of the study indicated that the form tool was the most significant factor and that the levels or grades of material used for those tools could affect the machining process. There was also an indication that the drilling operation needed attention and that the grade of tool material used, either carbide or high-speed steel, should be seriously considered based on the application and the desired speeds and feeds of the drill, depending on the substrate material of the fastener.
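      As a hedged illustration of how main effects from a Taguchi-style experiment might be checked, the sketch below fits a simple ANOVA over two candidate factors; the factor names, data file, and response variable are invented for illustration and do not reproduce the dissertation's orthogonal array or runs.

        # Hypothetical main-effects ANOVA for a Taguchi-style experiment.
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        runs = pd.read_csv("doe_runs.csv")       # form_tool_grade, drill_grade, pieces_lost (assumed columns)
        model = smf.ols("pieces_lost ~ C(form_tool_grade) + C(drill_grade)", data=runs).fit()
        print(sm.stats.anova_lm(model, typ=2))   # main-effect F-tests identify the most significant factor
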
    • PREOPERATIVE PREDICTORS FOR 30-DAY POSTOPERATIVE EMERGENCY DEPARTMENT VISIT AFTER A BARIATRIC SURGERY

      Bhandari, Pawan (Indiana State University, 2022-05)
      One of the critical performance or quality outcome measures widely used by US hospitals is the 30-day emergency department (ED) visit after a surgical procedure. Such ED visits add millions of dollars each year as a cost burden to US healthcare. This study aimed to identify key predictors, known before the patient's surgery, that contribute to undesirable ED visits within 30 days of a bariatric surgical procedure. The study was conducted in three phases. The first phase of the study engaged a panel of experts to narrow down important preoperative factors for patients undergoing bariatric surgery in the form of a Delphi study. The second phase of the study included quantitative data analysis, which utilized the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program Participant Use Data File of the year 2019 to identify statistically significant preoperative factors that can contribute to the likelihood of patients returning to the emergency department within 30 days of bariatric surgery. There were N = 193,774 cases with complete information from 868 MBSAQIP-accredited bariatric surgery centers across the United States in the Data File, among which 15,533 (8% of the total cases) visited an ED without needing admission as inpatients. The analysis also examined the feasibility of developing a predictive model with only statistically significant factors and checked whether the model had an acceptable fit. The third phase of the study reengaged the same panel of experts from the first phase to validate the findings from the second phase and to document the subject matter experts' perceptions regarding the model developed and the overall findings. Out of 33 preoperative variables, only 9 were selected in the first phase of the study with the help of the panel of experts. Of the 9 chosen variables, 8, i.e., Pre-Op GERD requiring medication, Number of Hypertensive Medications, Pre-Op BMI closest to bariatric surgery, Highest Recorded Pre-Op BMI, Pre-Op vein thrombosis requiring therapy, Pre-Op diabetes mellitus, Pre-Op history of COPD, and Pre-Op Steroid/Immunosuppressant Use for Chronic Condition, significantly contributed to the likelihood of patients coming back to the ED within 30 days of bariatric surgery. The study's second phase also yielded a predictive model using only the statistically significant and weighted variables, and each predictor exhibited statistical significance. In the third phase, the panel of experts weighed in with mostly positive feedback, deeming the study clinically and operationally valuable for the bariatric patient population. The practical implication of this study is that MBSAQIP centers can use the model to estimate a patient's likelihood of returning to the ED after a bariatric surgical procedure. Based on the set criteria, if the patient has a higher chance of returning to the ED, the care team can intervene during and in the first few days or weeks after discharge to prevent potential postoperative ED visits within 30 days of bariatric surgery.
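      The abstract does not name the modeling technique, but one common way to build this kind of binary-outcome predictive model is logistic regression; the sketch below is only an illustration of that idea, with a hypothetical data file and column names loosely mirroring the predictors listed.

        # Hypothetical logistic regression for 30-day postoperative ED-visit risk.
        import pandas as pd
        import statsmodels.formula.api as smf

        cases = pd.read_csv("mbsaqip_preop.csv")
        formula = ("ed_visit_30d ~ preop_gerd_med + n_htn_meds + preop_bmi_closest + preop_bmi_highest + "
                   "preop_vte_therapy + preop_diabetes + preop_copd + preop_steroid_use")
        model = smf.logit(formula, data=cases).fit()
        print(model.summary())
        cases["p_ed_return"] = model.predict(cases)   # flag high-probability patients for early follow-up
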
    • DETERMINING HOW TEAM COMMUNICATION AND PROJECT OUTCOMES EVOLVE IN A MIDWESTERN HEALTHCARE COMPANY: A CASE STUDY APPROACH

      Ash, Gregory J. (Indiana State University, 2021-05)
      The purpose of the study was to determine the perspectives of the Business Operations team and the IT team on project team communication and interaction, and their awareness of the critical business processes and systems needed to ensure project success. For companies to remain competitive, a catalyst for growth through software development projects is required. Impediments to successful software project outcomes include inefficient communication, one-sided team communication, siloed team interaction, lack of business knowledge, lack of information sharing, and insufficient training resources. This study was a Convergent Parallel Mixed Methods Design with survey questions for the quantitative component and open-ended questions for the qualitative component. The mixed-methods research study included an in-depth review of the Business Operations teams' and IT teams' perspectives concerning team communication and interaction. An Independent Samples T-test was conducted to understand the differences between the Business Operations team and the IT team regarding General Communication, Team Communication, Team Interaction, and Project Outcomes. The t-test results indicated a difference in perspective between the groups on General Communication but no difference in perspective on Team Communication, Team Interaction, or Project Outcomes. The themes emerging from the qualitative component indicated an opportunity for training to acquire the knowledge and skills required to understand the underlying business processes and to facilitate software project discussions.
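      A minimal sketch of the independent-samples t-test described is shown below, comparing the two groups on one construct; the data file, column names, and the Welch's variant are assumptions for illustration.

        # Hypothetical independent-samples t-test comparing two teams on one construct.
        import pandas as pd
        from scipy import stats

        responses = pd.read_csv("team_survey.csv")        # team ("business_ops"/"it"), general_communication
        ops = responses.loc[responses["team"] == "business_ops", "general_communication"]
        it = responses.loc[responses["team"] == "it", "general_communication"]
        t, p = stats.ttest_ind(ops, it, equal_var=False)  # Welch's variant; repeat per construct
        print(f"t = {t:.3f}, p = {p:.4f}")
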
    • TECHNOLOGY ATTRIBUTES, ORGANIZATIONAL LEARNING ATTRIBUTES, SERVICE ATTRIBUTES, AND ELECTRONIC HEALTH RECORD IMPLEMENTATION SUCCESS

      Rangarajan, Anuradha (Indiana State University, 2020-07)
      The Electronic Health Record (EHR) is a technology innovation that has the potential to offer valuable benefits to the healthcare industry, such as improved quality of patient care and safety, optimization of healthcare workflow processes, and availability of electronic data for clinical research. The implementation success of EHR is therefore significant to the healthcare industry in the United States and around the world. Prior studies in the research literature have considered the impact of technology attributes, organizational learning attributes, and service attributes on information technology implementations in various other domains, based on theories such as the Theory of Reasoned Action (TRA), the Theory of Planned Behavior (TPB), and the Technology Acceptance Model (TAM), but none have considered their association with implementation success in a comprehensive manner within a single study pertaining to the healthcare domain, as this study does. Hence, this study addresses an essential research gap. The approach used by this study, conducting the research on the basis of a multi-factor research model (including the aforementioned attributes), is consistent with the general method used by academic researchers, whereby the ability of a unique and selective list of factors to predict certain outcomes is leveraged. The data for this research study were collected using a questionnaire survey instrument based on the Likert scale. Structural Equation Modeling (SEM) was used for data analysis due to the presence of latent variables in the research model. The results of the statistical analyses support the hypotheses, confirming positive associations between technology attributes (ease of use, result demonstrability, performance expectancy), organizational learning attributes (organizational learning capability, organizational absorptive capacity), service attributes (service-dominant orientation), and EHR implementation success. The results of this study are of importance to both academicians and practitioners.
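      As a hedged sketch of what an SEM of this general shape might look like in code, the example below uses the semopy package with a lavaan-style model description; the indicator names, the single-indicator treatment of the service attribute and the outcome, and the data file are all assumptions, not the study's actual measurement model.

        # Hypothetical structural equation model with latent technology and learning attributes.
        import pandas as pd
        from semopy import Model

        desc = """
        tech =~ ease_of_use + result_demo + perf_expect
        learning =~ learn_capability + absorptive_capacity
        ehr_success ~ tech + learning + service_dominant
        """
        data = pd.read_csv("ehr_survey.csv")     # hypothetical Likert-scale items
        model = Model(desc)
        model.fit(data)
        print(model.inspect())                   # path estimates and p-values
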
    • AN ANALYSIS OF OPERATOR EYE BEHAVIOR WHEN MONITORING SIMULATED, PETROCHEMICAL MANUFACTURING, SUPERVISORY CONTROL AND DATA ACQUISITION ALERTS AND WARNINGS WITH BACKGROUND NOISE

      Huenerfauth, Angela (Indiana State University, 2019-05)
      There are a number of potential distractions for operators when viewing plant status-monitoring information in a petrochemical plant: background noises of other employees speaking, sounds of manufacturing equipment and processes, and other ambient noises such as HVAC and building operation noise. When the monitoring equipment for a chemical or petrochemical plant is not designed to take into account that operators can be distracted by this noise, there is a potential safety hazard for the people in that work environment. Alarms can be missed and fundamental information can be overlooked. SCADA is “a type of control system that collects and displays data and allows users to manipulate and control the system from a distant location” (Koffskey, 2010). SCADA is used in various industries such as energy, food and beverage, manufacturing, oil and gas, power, recycling, transportation, and water and wastewater (Gould, 2017). In the petrochemical industry, the status of individual instruments is monitored by one or a few supervisors at a central command station with a SCADA screen. The focus of this research is to determine whether there is a difference in user eye behavior (Time to First Fixation, Fixation Frequency per AOI, Gaze Duration Mean, and Gaze Percentage per AOI) between no (minimal) ambient noise and ambient noise conditions when viewing alerts/warnings on a petrochemical manufacturing SCADA user interface. One hundred participants (with a science, engineering, or manufacturing background) were asked to watch two sets of simulated SCADA prototypes (half with petrochemical manufacturing noise and half without) while wearing a set of eye-tracking glasses. The Wilcoxon Rank Sum Test determined that there was a statistically significant difference between the data sets (for three of the four dependent variables), demonstrating that sound has a statistically significant distracting effect on operators watching a petrochemical SCADA user interface.
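      A minimal sketch of the Wilcoxon rank-sum comparison described, run separately for each eye-tracking metric across the noise and no-noise conditions, is shown below; the data file and column names are hypothetical.

        # Hypothetical per-metric Wilcoxon rank-sum tests for noise vs. quiet conditions.
        import pandas as pd
        from scipy import stats

        gaze = pd.read_csv("eye_tracking.csv")   # condition ("noise"/"quiet") plus the four metrics (assumed)
        for metric in ["time_to_first_fixation", "fixation_frequency", "gaze_duration_mean", "gaze_percentage"]:
            noise = gaze.loc[gaze["condition"] == "noise", metric]
            quiet = gaze.loc[gaze["condition"] == "quiet", metric]
            stat, p = stats.ranksums(noise, quiet)
            print(f"{metric}: W = {stat:.3f}, p = {p:.4f}")
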
    • A QUALITATIVE STUDY OF INFORMATION TECHNOLOGY PROJECT PORTFOLIO MANAGEMENT IN HIGHER EDUCATION

      Miller, L. Andy (Indiana State University, 2020-05)
      This study was conducted to understand the variables, processes, organizational structures, and governance structures that are important and/or that support higher education decision makers in their selection and prioritization of information technology (IT) projects into their universities’ portfolios. IT project portfolio management (PPM) is comprised of many different activities, and the selection and prioritization of projects are just two interconnected activities amongst many. Research has suggested that these PPM activities are both important and beneficial; but there is a dearth of research on the subject specifically within higher education IT environments, and some higher education organizations struggle in this area. This study follows recent recommendations from other researchers to perform practice-based research on IT PPM. Research streams and standards bodies have long espoused the ideals of strategic IT PPM, where organizational strategy is perceived as a driver that strongly guides the practical activities and operations of IT PPM. However, there is a growing recognition that there is room for practice-based research because those ideals of strategic IT PPM are often not aligned with actual IT PPM practices and outcomes, and because IT PPM in practice often results in a bottom-up means for affecting strategy. This study used a qualitative research design, and included a practice-based exploratory multiple-case study focused on project selection and prioritization activities as they occur within real world higher education IT settings at eight universities in the California State University system. Each university acted as an individual case within the multiple-case study. Interviews were conducted with 27 subjects across these eight universities, and a breadth of other evidence was collected including documentation, physical artifacts, and archival records. Converging lines of data were developed through triangulation and corroboration of all the evidence, and this formed the informational basis for each case. Results from each case were reported independently, and a cross case synthesis was conducted to aggregate findings across all eight cases. In addition to questions about the mechanics of project selection and prioritization, the interviews also included questions that were designed to compare and contrast perceptions of technical and non-technical stakeholders. Twelve themes emerged as issues of importance including objectivity, formality, flexibility, alignment with the strategic plan, the difficulty for small projects to compete with large/enterprise projects, senior leadership involvement, transparency in decision-making, transparency in PPM mechanics, the need for consultation and responsiveness, capacity planning, governing bodies’ makeup and their representation of campus stakeholders, and satisfaction with the IT organization (and with its project management office). Technical and non-technical subjects’ perceptions were aligned throughout most of the twelve themes, but there were indeed areas where opinions differed.
    • TECHNOLOGY THREAT AVOIDANCE FACTORS AS PREDICTORS OF RISKY CYBERSECURITY BEHAVIOR WITHIN THE ENTERPRISE

      Gillam, Andrew R. (Indiana State University, 2019-05)
      Recent research of information technology (IT) end-user cybersecurity-related risky behaviors has focused on items such as IT user decision-making, impulsiveness, and internet use as predictors of human cyber vulnerability. Theories that guide user behavioral intent, such as protection motivation theory (PMT, introduced by Rogers, 1975) and technology threat avoidance theory (TTAT, introduced by Liang and Xue, 2009), have not been widely investigated as antecedents of risky cybersecurity behavior (RScB). This dissertation describes exploratory research that analyzed and evaluated PMT/TTAT factors as predictors of RScB by enterprise IT users. This work uniquely contributes to the literature by investigating associations between accepted behavioral motivation models and RScB. Findings are intended to provide human resource development (HRD) practitioners and researchers innovative techniques to identify factors that may compel enterprise IT users to avoid risky cybersecurity behaviors in the workplace. Findings, based on survey responses by 184 working professionals in the United States, were largely consistent with previous TTAT-focused works. New insights arose regarding the predictive impact of perceived cost as a predictor of RScB (p = .003), with small-to-medium effect sizes. Predictability was further leveraged using discriminant analysis to predict RScB category membership derived from k-means clustering. Significant outcomes were noted with practical utility. An overarching goal of this study was to more fully inform the HRD community of scholar-practitioners of the urgent need to design, deliver, implement, and evaluate initiatives that could be utilized to diminish inappropriate and costly cybersecurity behaviors in various workplace environments.
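      The clustering-then-discriminant-analysis step can be illustrated with the hedged sketch below: k-means derives RScB category membership, and a linear discriminant analysis then predicts that membership from PMT/TTAT factor scores. The number of clusters, the factor and item names, and the data file are assumptions rather than the dissertation's actual design.

        # Hypothetical k-means clustering of RScB items followed by discriminant analysis.
        import pandas as pd
        from sklearn.cluster import KMeans
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        df = pd.read_csv("ttat_survey.csv")      # PMT/TTAT factor scores plus RScB items (assumed)
        rscb_items = df[["rscb_1", "rscb_2", "rscb_3"]]
        df["rscb_group"] = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(rscb_items)

        factors = df[["perceived_threat", "perceived_cost", "self_efficacy", "avoidance_motivation"]]
        lda = LinearDiscriminantAnalysis()
        print(cross_val_score(lda, factors, df["rscb_group"], cv=5).mean())   # classification accuracy
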
    • DETERMINING THE INFLUENCE OF THE NETWORK TIME PROTOCOL (NTP) ON THE DOMAIN NAME SERVICE SECURITY EXTENSION (DNSSEC) PROTOCOL

      Cold, Sherman J. (Indiana State University, 2015-04)
      Recent hacking events against Sony Entertainment, Target, Home Depot, and bank Automated Teller Machines (ATMs) foster a growing perception that the Internet is an insecure environment. While Internet Privacy Concerns (IPCs) continue to grow out of a general concern for personal privacy, the availability of inexpensive Internet-capable mobile devices expands the Internet of Things (IoT), a network of everyday items embedded with the ability to connect and exchange data. Domain Name Services (DNS) has been an integral part of the Internet for name resolution since the beginning. Domain Name Services has several documented vulnerabilities; for example, cache poisoning. The solution adopted by the Internet Engineering Task Force (IETF) to strengthen DNS is DNS Security Extensions (DNSSEC). DNS Security Extensions adds support for cryptographically signed name resolution responses. The cryptography used by DNSSEC is based on a Public Key Infrastructure (PKI). Some researchers have suggested that the time stamp used in the public certificate of the name resolution response influences DNSSEC's vulnerability to a Man-in-the-Middle (MiTM) attack. This quantitative study determined the efficacy of using the default relative Unix epoch time stamp versus an absolute time stamp provided by the Network Time Protocol (NTP). Both a two-proportion test and Fisher's exact test were used on a large sample to show that there is statistically significantly better security behavior when using NTP absolute time instead of the traditional relative Unix epoch time with DNSSEC.
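      The two tests named can be illustrated with the short sketch below, applied to counts of compromised resolutions under each timestamp scheme; the counts and sample sizes are invented, and the statsmodels/scipy packages are assumed.

        # Hypothetical two-proportion z-test and Fisher's exact test on invented counts.
        from statsmodels.stats.proportion import proportions_ztest
        from scipy.stats import fisher_exact

        compromised = [42, 11]                   # relative Unix epoch time vs. NTP absolute time (invented)
        trials = [5000, 5000]

        z, p = proportions_ztest(compromised, trials)
        print(f"two-proportion z = {z:.3f}, p = {p:.4f}")

        table = [[42, 5000 - 42], [11, 5000 - 11]]
        odds_ratio, p_exact = fisher_exact(table)
        print(f"Fisher's exact p = {p_exact:.4f}")
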
    • GENERIC ORGANIZATIONAL STRATEGY INTEGRATION IMPACTS ON PROFIT MARGIN RATIO AND INVENTORY TURNOVER IN PUBLICALLY TRADED OKLAHOMA MANUFACTURING ORGANIZATIONS

      Bell, Christopher (Indiana State University, 2015-05)
      This study sought to determine if and to what extent strategy integration was related to the financial indicators profit margin ratio and inventory turnover for publicly traded manufacturing organizations in Oklahoma. Current strategy theory states that the more thoroughly an organization adopts a given strategy, the greater the effect will be on these financial indicators. Hence the need to more fully understand the extent and rates at which strategy integration affects these indicators. This study looked at perceived strategy integration scores for publicly traded Oklahoma manufacturing organizations taken from June to August 2014 and financial indicators from 2012 and 2013. The perceived strategy integration scores were obtained via survey, while the financial indicators were calculated using Form 10-K filings from the United States Securities and Exchange Commission (US SEC or SEC). Reliable financial information is not publicly available for many private organizations, so they were excluded from the study. Summary analysis of the data indicated that strategies were not in use in equal proportions, with Niche Differentiation being by far the most popular. Market focus appeared to be an indicator of inventory turnover standard deviation, with Broad focus and Combination strategy groups having lower standard deviations, while product focus appeared to indicate profit margin ratio range, with Low Cost strategies having lower profit margins. After performing additional analysis, it was found that performance-enhancing technologies and other complicating factors may have had a larger impact than previously believed. A correlation could not be established for most strategies. For the Niche Low Cost strategy, a relationship was found in which profit margins decreased 1.634% for each 1-point increase in perceived strategy integration score. For the Broad Differentiation strategy, it was found that inventory turns increased 0.7006 turns for every 1-point increase in perceived strategy integration score. No other strategies were found to have correlation coefficients that were statistically different from zero. However, anecdotal evidence was found in support of several of Porter's other theories.
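      The per-strategy relationships reported (a change in a financial indicator per 1-point increase in perceived strategy integration score) are the kind of slopes a simple linear regression produces; the sketch below illustrates that idea with a hypothetical data file and column names, not the study's data.

        # Hypothetical simple regression of a financial indicator on integration score, per strategy group.
        import pandas as pd
        import statsmodels.formula.api as smf

        firms = pd.read_csv("strategy_scores.csv")   # strategy, integration_score, profit_margin, inventory_turns
        niche_low_cost = firms[firms["strategy"] == "niche_low_cost"]
        fit = smf.ols("profit_margin ~ integration_score", data=niche_low_cost).fit()
        print(fit.params["integration_score"], fit.pvalues["integration_score"])   # slope per 1-point increase
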
    • A MODEL OF PACKET LOSS CAUSED BY INTERFERENCE BETWEEN THE BLUETOOTH LOW ENERGY COMPONENT OF AN IOS WEARABLE BODY AREA NETWORK AND RESIDENTIAL MICROWAVE OVENS

      Barge, William C. (Indiana State University, 2015-05)
      Cardiovascular diseases are the leading cause of death in the United States. Advances in wireless technology have made possible the remote monitoring of a patient's heart sensors as part of a body area network. Previous studies have suggested that stray wireless transmissions in the industrial, scientific, and medical (ISM) band cause interference resulting in packet loss in Bluetooth piconets. This study investigates the impact that wireless transmissions from residential microwave ovens have on the Bluetooth Low Energy (BLE) component of the body area network. Using a systematic data collection approach, two variables were manipulated. The distance between the microwave oven and the BLE piconet was varied from 0.5 meter to 5.0 meters in one-half-meter increments. At each distance, the power level of the microwave oven was varied from the lowest power setting to the highest power setting. The two variables that were collected were the microwave interference generated by channel and the packet loss by channel. The results suggest that more packet loss is due to the microwave oven's power level than to the distance, that the interference caused by the microwave oven affects all BLE channels equally, and that packet loss by channel is a good predictor of microwave oven interference. The significance of this study lies in providing beneficial information to the medical and digital communication industries concerning the causes of and solutions to disruptions in Bluetooth-enabled body area network devices in a very common situation. The results of this study may lend support for improvements and widespread use of body area network medical systems, which may have the benefit of better monitoring, more data, and reduced fatalities due to misdiagnosed heart conditions.
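      A hedged sketch of how the relative contributions of power level and distance to packet loss might be compared is shown below, using an ordinary least squares model; the data file, column names, and model form are assumptions, not the dissertation's analysis.

        # Hypothetical regression of packet loss on oven power level and distance.
        import pandas as pd
        import statsmodels.formula.api as smf

        trials = pd.read_csv("ble_trials.csv")   # power_level, distance_m, channel, packets_lost_pct (assumed)
        fit = smf.ols("packets_lost_pct ~ power_level + distance_m", data=trials).fit()
        print(fit.summary())                     # compare coefficients to see which factor drives loss
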
    • INVESTIGATING TRANSFORMATIONAL LEADERSHIP AND RETURN ON EQUITY OUTCOMES IN SMALL AND MEDIUM-SIZED COMMUNITY BANKS

      Dean, Jason C. (Indiana State University, 2014-12)
      The focus of this investigation is small financial institutions based in the United States and insured by the Federal Deposit Insurance Corporation (FDIC). Without a viable market for community banks, individual and small-business investors and borrowers, particularly in rural areas, will have fewer financing options and may be forced to accept the terms dictated by larger financial institutions (Walser & Anderlik, 2004). Additionally, within the literature there is minimal empirical evidence informing community banks of appropriate initiatives that may be implemented to unleash human expertise through training and development interventions related to successful leadership styles. As competition in the banking industry continues to increase, community banks may be compelled to utilize Human Resource Development (HRD) initiatives and interventions, similar to the propositions mentioned herein, to enhance their overall competitiveness and survivability. The primary purpose of this investigation was to identify the leadership styles of branch managers and financial performance outcomes as measured by the ROE framework at both small and medium-sized community banks. Second, this investigation sought to add to previous research by providing HRD scholars and practitioners with new strategies for evaluating the impact of HRD initiatives and interventions within a banking industry context. Third, through this investigation it was determined that particular styles and/or sets of leadership dimensions are common at successful small and medium-sized community banking institutions.
    • DEVELOPMENT OF A METHODOLOGY FOR EVALUATING QUALITY CHARACTERISTICS OF FUSED DEPOSITION MODELING

      Winston Sealy, Dominique (Indiana State University, 2014-12)
      Additive Manufacturing rapid reproductive systems are gaining popularity within the manufacturing industry. One of the many benefits of such systems has been the exploration of building practical sacrificial patterns for investment-cast metals. Methods such as Castform and Quickcast have been developed for selective laser sintering and stereolithography apparatus technologies, respectively. Research has demonstrated significant cost savings when additive manufacturing rapid reproductive systems are utilized for customized or small-batch production of sacrificial patterns. The purpose of this study was to develop a methodology for evaluating quality characteristics of Fused Deposition Modeling. Since Fused Deposition Modeling has been demonstrated by a number of experimental studies to be a viable alternative to wax sacrificial patterns, this study explored the effects of wall thickness and raster resolution on quality characteristics such as diametric accuracy, cylindricity, and concentricity. The results of the study indicated that raster resolution had no effect on the measured quality characteristics; however, the ANOVA and Kruskal-Wallis tests showed statistical significance (α = 0.05) for the effect of wall thickness on the cylindricity of a small diameter (0.5”) and the concentricity of two cylindrical features of diameters 0.5” and 1”. Moreover, the main contributions of this study involved the development of an accurate and robust design-of-experiments methodology. In addition, implications and recommendations for practice were also discussed.
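      A minimal sketch of the one-way ANOVA and Kruskal-Wallis comparisons described, grouping one quality characteristic by wall thickness, is shown below; the data file, column names, and measurements are hypothetical.

        # Hypothetical one-way ANOVA and Kruskal-Wallis test on cylindricity by wall thickness.
        import pandas as pd
        from scipy import stats

        parts = pd.read_csv("fdm_parts.csv")     # wall_thickness, cylindricity (assumed columns)
        groups = [g["cylindricity"].values for _, g in parts.groupby("wall_thickness")]
        print(stats.f_oneway(*groups))           # parametric one-way ANOVA
        print(stats.kruskal(*groups))            # non-parametric counterpart
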
    • ANALYSIS OF IPV6 READINESS OF END-USER ENTERPRISES IN THE NORTH CAROLINA EASTERN REGION

      Pickard, John (Indiana State University, 2014-12)
      On February 3rd, 2011, the Internet Assigned Numbers Authority (IANA) allocated the last five /8 blocks of IPv4 addresses, one to each of the five Regional Internet Registries (RIRs). Since that event, four of the five RIRs have depleted their IPv4 allocations and began operating under final IPv4 address depletion policies. The exhaustion of the IPv4 address pools maintained by the registries means that IPv4 is now a legacy protocol and that all future Internet growth will be over IPv6. This exhaustion also means that organizations must take action to accommodate IPv6 adoption or risk compromising business agility and continuity – especially those organizations with public-facing content that rely on the Internet. Yet, anecdotal evidence and recent published studies indicate that few organizations have moved to adopt IPv6. The evidence suggests a low sense of urgency and a lack of understanding among organizational leaders regarding the potential consequences that IPv4 exhaustion will have on their organization's business model. An understanding of IPv6 adoption readiness within organizations is needed so that programs can be established to raise organizational decision makers' awareness of the risks of not having an IPv6 strategy and to inspire them to take action. This study achieved this objective by investigating the IPv6 readiness of enterprise organizations located in eastern North Carolina through a survey sent to the senior IT decision makers of 463 end-user enterprise organizations. IPv6 readiness was measured across five facets of organizational IPv6 preparedness: training, high-level planning, assessment of the current environment, IPv6 policy, and IPv6 deployment. Statistical analyses identified the significant technology adoption factors associated with IPv6 readiness as measured on a six-stage Guttman scale, ranging from simply “aware” of IPv6 to general IPv6 deployment. Results revealed that the majority of organizations have made little to no preparation toward IPv6 adoption and do not see IPv6 adoption as an urgent issue. Further, it was found that the factors most significantly associated with low levels of IPv6 readiness were a lack of perceived advantages of IPv6 and a lack of perceived pressure from industry partners and customers to adopt IPv6. Based on the findings of this study, a recommended approach to developing an effective IPv6 strategy, as well as a framework for IPv6 adoption planning, is presented for organizational leaders and IT decision makers to use as a guide toward a successful IPv6 transition.
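      The six-stage Guttman-style scale can be illustrated with the small sketch below, which scores an organization by the highest consecutive stage reached, starting from basic awareness and moving through the five preparedness facets; the stage labels and the boolean facet representation are simplifying assumptions.

        # Hypothetical scoring of a six-stage Guttman-style IPv6 readiness scale.
        STAGES = ["aware", "training", "high_level_planning", "assessment", "ipv6_policy", "deployment"]

        def readiness_stage(org):
            """org: dict mapping facet name -> bool; returns the highest consecutive stage reached."""
            stage = 0
            for facet in STAGES:
                if org.get(facet):
                    stage += 1
                else:
                    break
            return stage

        print(readiness_stage({"aware": True, "training": True, "high_level_planning": False}))  # -> 2
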
    • Analytical Modeling and Feasibility Study for Adoption of Renewable Energy Sources in a Single Family Dwelling

      Linn, Jimmy B. (Indiana State University, 2014-12)
      In the last four or five decades, increased political and social pressure has been placed on commercial and residential consumers to reduce consumption of fossil fuels and invest in alternative methods of energy production for electricity, heating, and cooling. Commercially, wind turbines and photovoltaic energy production equipment have sprung up all over the country. Other forms of energy production, such as hydroelectric and geothermal energy production facilities, have also been built. During this time, however, very few residential ‘green energy’ investments have been made. Only in recent years have residential homeowners begun to ‘wet their feet’ with ‘green’ energy equipment. Cost has been the major factor. Of late, though, costs have been coming down and efficiency has been going up, leading homeowners to sense that alternative energy may now be entering the realm of economic feasibility. Unfortunately, homeowners have had no reliable or credible tools to assess the economic viability of such systems. The purpose of this research is to develop a tool to assess the potential of alternative energy sources and test it statistically by surveying subjects in five different ‘green’ energy categories. Since atmospheric (air-to-air) heat pumps have been around for many years and represent a mature heating and cooling technology, upgrading older inefficient HVAC equipment to new high-efficiency atmospheric heat pumps is the category used to baseline the experiment. Ground-source heat pumps and direct solar heating systems were modeled and compared to the baseline. Wind energy and photovoltaic energy production systems were modeled, surveyed, and compared to using only grid-supplied electricity. Results show that in four of the five cases tested, the less mature ‘green’ energy equipment (photovoltaic solar, direct solar, and ground-source heat pump equipment) is in general not economically viable without tax rebates to significantly lower the net investment. Setback rules and environmental and aesthetic ordinances against siting them in those counties severely restrict the population of wind energy devices, so an effective test of this category using the model could not be done. The model performed well with the baseline data. Performance of the model with ground-source heat pumps was reasonable, but improvements in the model reflecting differing features of ground-source heat pumps need to be made. Performance of the model with photovoltaic energy production equipment was also good. Extending the test population to all fifty states and extending the utility-bill test range from one year to five years would provide much more useful data to test and improve the model. Although the model development and testing done in this work represent only a small contribution to bridging a large gap in consumer confidence in green energy products, they represent a big step into an area that very few have attempted to venture into.
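      The kind of economic-feasibility comparison described can be illustrated with the short, hypothetical sketch below, which computes a simple payback period and a discounted net-savings figure for a single upgrade; all figures, the discount rate, and the formulas chosen are assumptions, not the dissertation's model.

        # Hypothetical simple payback and discounted net savings for one 'green' energy upgrade.
        def simple_payback_years(net_investment, annual_energy_savings):
            return net_investment / annual_energy_savings

        def net_savings(net_investment, annual_energy_savings, years, discount_rate=0.04):
            # Present value of the savings stream minus the up-front cost.
            pv = sum(annual_energy_savings / (1 + discount_rate) ** t for t in range(1, years + 1))
            return pv - net_investment

        cost_after_rebates = 14_000              # e.g., a photovoltaic system net of tax rebates (invented)
        savings_per_year = 1_100
        print(simple_payback_years(cost_after_rebates, savings_per_year))
        print(net_savings(cost_after_rebates, savings_per_year, years=20))
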