Introduction

Research publications are a crucial part of the scientific process. Writing proficiencies are consequently essential for the effective communication of research outputs (Kotz et al., 2013). Mastery of these skills facilitates the methodical articulation of research methodologies, empirical findings, and theoretical contributions (Zain et al., 2011). Lucid exposition, structural coherence, and rhetorical precision are critical determinants of manuscript acceptance in high-impact journals, as these attributes enhance comprehensibility, reproducibility, and the cumulative advancement of scientific literature (Balch et al., 2018). Despite its epistemic significance, research writing remains a formidable challenge for many scholars, particularly those contending with linguistic barriers, disciplinary writing conventions, cognitive overload, and the absence of immediate formative feedback (Aitchison & Lee, 2006; Drubin & Kellogg, 2012; Lim & Phua, 2019). Therefore, researchers frequently seek external interventions to augment their writing capabilities at various stages of the academic publishing process (Echanique & Portillo, 2020). Some scholars may necessitate conceptual scaffolding to structure complex datasets and theoretical frameworks, whereas others—particularly non-native English speakers—may require lexical refinement and syntactic optimization (Hanauer et al., 2018; Smirnova et al., 2021).

Given these challenges, many researchers opt for writing assistance. Common examples include engaging manuscript editing services (Zakaria, 2022), enrolling in writing workshops and courses (Wortman-Wunder & Wefes, 2020), and participating in writing groups (Colombo & Carlino, 2015). In recent years, academic writing tools powered by artificial intelligence (AI) have become an increasingly pivotal enabler of automated textual refinement and linguistic optimization (Khalifa & Albadawy, 2024). AI is the computer simulation of human intelligence processes to accomplish various tasks, such as playing chess (Pilueta et al., 2022), generating artwork (Garcia, 2024), classifying diseases (Maaliw et al., 2022), and more. Gayed et al. (2022) noted that natural language processing (NLP) is a major area in the recent acceleration of AI research and development. This assertion is supported by Nazari et al. (2021), who listed automated written corrective feedback, automated essay scoring, and automated writing evaluation as some of the computer-based applications that facilitate writing processes. From an NLP-centric perspective, Dale and Viethen (2021) remarked that automated writing assistance traditionally consists of three distinct features that may help researchers address weaknesses in their writing: style-checking, grammar-checking, and spell-checking. They also noted some of the popular commercially available tools for writing assistance, including Grammarly, Wordtune, ProWritingAid, WriteSonic, Gramara, and Copysmith.

Recently, a new AI tool powered by a large language model (LLM) called ChatGPT has attracted the attention of the scientific community (Bozkurt et al., 2024; Li et al., 2024; Miller et al., 2025; Pigg, 2024). Introduced in November 2022 by OpenAI, this generative AI system was initially predicated on GPT-3.5, with subsequent versions leveraging the GPT-4 architecture. Unlike existing writing tools that are conventionally capable of checking styles, grammar, and spelling in texts that have already been written (Dale & Viethen, 2021), ChatGPT possesses generative capabilities that enable it to produce syntactically coherent and semantically plausible content across various computational linguistics applications (e.g., chatbots, language summarization, sentiment analysis, and question-answering systems). Albeit not a common practice, it is technically feasible for researchers to incorporate AI-assisted text generation into their manuscript development workflows (Fecher et al., 2025). While ChatGPT lacks hermeneutic comprehension and does not exhibit the domain-specific inferential reasoning characteristic of human researchers, it nonetheless constitutes a valuable assistive mechanism in scholarly composition. For instance, it can be employed for automated draft generation in structured sections of research articles or for condensing extensive literature reviews into thematically organized narratives. Given its recent inception and evolving applications in academic discourse, ongoing empirical investigations are imperative to ascertain its functional efficacy, ethical ramifications, and broader acceptance trajectories within research writing.

Considering the rapid development of AI and the viability of ChatGPT as an academic writing tool, investigating whether researchers intend to adopt this software, as well as the factors that influence their decision, is of scholarly and societal relevance. This inquiry addresses salient gaps in AI, research writing, and AWS literature. First, while NLP is recognized as the most prolific subfield of AI research, and writing has been the predominant linguistic skill examined between 1990 and 2020 (Liang et al., 2021), most empirical inquiries have been contextualized within essay writing and intelligent tutoring systems (ITS). Although essay writing and research writing are both forms of academic writing, they have different purposes, structures, and audiences. Meanwhile, while an ITS provides personalized instruction and feedback, AWS primarily facilitates textual production and refinement. This distinction likely explains why previous studies have disproportionately recruited students (Gayed et al., 2022) and educators (Wilson et al., 2021), while researchers remain underrepresented in the literature despite the concerns expressed by the scientific community (e.g., Nakazawa et al., 2022). While recent studies have examined researchers' awareness, perceptions, and attitudinal dispositions toward ChatGPT (Abdelhafiz et al., 2024), there remains a paucity of research explicating the cognitive, technological, and contextual contingencies shaping their adoption decisions. Addressing this gap is critical for academic progress, as the findings of this study hold substantive implications for a spectrum of academic and institutional stakeholders, including researchers, funding bodies, scholarly publishers, policy architects, and the broader scientific community.

Theoretical Foundations

Technology Acceptance Model

First proposed by Davis (1989), the Technology Acceptance Model (TAM) is a theoretical framework that elucidates the cognitive and behavioral mechanisms underlying technology adoption. TAM has been widely employed to model the acceptance (or rejection) of a wide range of technologies, including computer systems (Garcia, 2023), devices (Zheng & Li, 2020), digital services (Al-Ghaith et al., 2010), social media (Al-Qaysi et al., 2021), and other information and communication technologies (Muriithi et al., 2016). The applicability of TAM has been empirically validated across heterogeneous industrial sectors, including healthcare (Ahn & Park, 2022), agriculture (Siyum et al., 2022), and education (Leem & Sung, 2019). Over the years, scholars have augmented the TAM framework by incorporating exogenous constructs such as trust (Belanche et al., 2012), perceived risk (Kesharwani & Singh Bisht, 2012), and social influence (Beldad & Hegner, 2018). More recent empirical inquiries have further explored moderating variables, including demographic factors and prior technological exposure, in modulating the predictive pathways of TAM's core constructs (e.g., Garcia et al., 2022; Park et al., 2021).

Building on established theoretical trajectories, Davis (1989) conceptualized two primary determinants of technology adoption: perceived ease of use (PEOU) and perceived usefulness (PU). According to a systematic review (Mustafa & Garcia, 2021), these two constructs remained the most powerful determinants of technology acceptance despite new factors added to TAM. PEOU refers to the degree to which an individual perceives a technological system as intuitively navigable and minimally effort-intensive. It can be influenced by several design and contextual parameters, including usability heuristics, interface ergonomics, system compatibility with users' cognitive schemas, and the availability of technical documentation or instructional scaffolding. Meanwhile, PU refers to the extent to which an individual believes that a given technology enhances task efficiency and overall productivity. It can be influenced by several factors, including relevance, compatibility, social influence, and the outcome of using technology. Extensive empirical validation has substantiated the causal interrelationship between PEOU and PU, wherein PEOU significantly predicts PU, and both constructs collectively determine an individual's behavioral intention (BI) to engage with the technology (e.g., Al-Qaysi et al., 2021; Garcia, 2023). Applying these theoretical postulations to the present study, researchers are more likely to adopt ChatGPT when they perceive it as functionally advantageous and intrinsically user-friendly. Grounded in these premises, this study posits the following hypotheses:

H1a. PEOU will positively influence PU in the context of ChatGPT.

H1b. PEOU will positively influence BI when writing research papers using ChatGPT.

H1c. PU will positively influence BI when writing research papers using ChatGPT.

Task Technology Fit

Task-technology fit (TTF) is a theoretical construct that delineates the degree of alignment between a specific technological system and the task it is designed to facilitate (Goodhue & Thompson, 1995). Within this framework, tasks are operationalized as the procedural and cognitive activities undertaken by users to transform inputs into intended outputs. Building upon rational choice principles, Dishaw and Strong (1999) argued that experienced and discerning users inherently favor technologies that afford superior task efficacy. For instance, individuals may adopt word processing applications due to their intrinsic compatibility with document creation and editorial workflows. Empirical investigations corroborate the assertion that task-technology congruence exerts a substantial influence on both task efficiency and output precision (e.g., Jeyaraj, 2022; Roth et al., 2023; Wang et al., 2022). Although TTF has undergone multiple theoretical refinements and empirical validations, its original formulation remains extensive and methodologically complex, rendering it challenging to operationalize within a single empirical study (Goodhue & Thompson, 1995). Despite being less theoretically mature than TAM, the conceptual primacy of technological compatibility in adoption decisions underscores its significance as a determinant of user engagement with novel systems (Yen et al., 2010). Rather than supplanting TAM, numerous studies have proposed an integrative synthesis of TAM and TTF to leverage their synergistic explanatory strengths and construct a more comprehensive adoption framework (Alkhwaldi & Abdulmuhsin, 2022; Liqin & Mengmeng, 2016).

In the basic TTF model, several constructs were proposed: task characteristics (TASK), technology characteristics (TECH), and task-technology fit (FIT). TASK is the nature of the task at hand and the knowledge and skills required to perform it, while TECH refers to the features of the technology that is being evaluated for a specific task. Both TASK and TECH are significant predictors of FIT – or the degree of compatibility between the technology and the task. In the technology-to-performance chain conceptualized by Goodhue and Thompson (1995), additional constructs—including precursors of utilization, performance impact metrics, utilization rates, and individual characteristics—were postulated as auxiliary determinants. However, for methodological parsimony and model tractability, this study follows the empirical approach of Yen et al. (2010), focusing exclusively on TASK, TECH, and FIT. Several exclusions were also made in the operationalization of TTF within this study. First, the individual characteristics construct was omitted, as prior research suggests that its predictive influence on FIT is marginal (Goodhue & Thompson, 1995). Meanwhile, the utilization construct was removed based on the premise that ChatGPT is not primarily considered a research writing tool. Finally, the tool functionality construct proposed by Dishaw and Strong (1999) in their TAM-TTF extension was not incorporated as it conceptually overlaps with TECH as delineated in the original TTF model. Grounded in these empirical justifications, this study advances the following hypotheses:

H2a. TECH of ChatGPT will positively influence FIT in terms of research writing.

H2b. TASK of research writing will positively influence FIT in the context of ChatGPT.

Trust in Specific Technology

The degree to which individuals rely on automation is contingent upon their trust in technological systems (Lee & See, 2004). Conceptually framed as the willingness to depend on an external entity in the presence of uncertainty, trust constitutes a foundational construct in social psychology and human-machine interaction paradigms. Within the domain of human-automation partnerships, trust serves as a decisive determinant of technology utilization. When users perceive an automation system as reliable, predictable, and aligned with their cognitive and task-based expectations, they exhibit a greater propensity to engage with it. Conversely, diminished trust engenders skepticism and hesitancy, attenuating adoption intent. Empirical validations substantiate the pivotal role of trust in shaping technology acceptance behaviors, with multiple studies incorporating trust as an explanatory mechanism within technology adoption frameworks (e.g., Belanche et al., 2012; Ghazizadeh et al., 2012; Kiran & Verbeek, 2010). This relationship extends to AI-driven systems, wherein trust calibration mechanisms influence the perceived reliability and ethical legitimacy of intelligent agents (Choung et al., 2022). The salience of trust in human-technology interactions has also prompted its integration into various theoretical models, including TAM (e.g., Wu et al., 2011) and TTF (e.g., Wang et al., 2021). Despite the proliferation of trust-related inquiries in information systems research, Mcknight et al. (2011) observed that scholarly discourse predominantly centers on interpersonal trust rather than trust in technological artifacts. Thus, they proposed the Trust in Specific Technology (TST) framework to delineate a structured trust development process within human-technology interactions.

The TST model postulates a causal hierarchy of constructs that directly and indirectly shape trusting beliefs (TB) in a specific technology. Foremost among these is the propensity to trust general technology, operationalized through trusting stance (TS) and faith in general technology (FGT). These constructs underscore the predisposition of users to extend trust toward novel technological entities, contingent upon their prior experiences with familiar systems. Consistent with the trust transfer principle (Stewart, 2003), users extrapolate pre-existing trust from known technologies onto emergent systems when they perceive contextual or functional congruence. For instance, researchers who trust Grammarly as a writing aid may transfer this trust to ChatGPT, assuming it operates within a similar domain of linguistic assistance. However, TS and FGT were excluded from this study, as they evaluate generalized trust in technology rather than trust specific to ChatGPT. Furthermore, in an era of ubiquitous technological reliance, trust in digital infrastructures is often presupposed, rendering these constructs less instrumental in differentiating adoption behaviors. Given these considerations, this study focuses exclusively on institution-based trust mechanisms, particularly situational normality (SN) and structural assurance (SA). SN reflects the extent to which users perceive technology as an established and socially sanctioned norm within a given context. The applicability of SN in analyzing ChatGPT is particularly salient, given the ongoing discourse within the scientific community regarding its ethical and epistemological implications (Nakazawa et al., 2022). Meanwhile, SA denotes the degree to which users' confidence in technology is reinforced by the presence of supporting infrastructures, including legal, contractual, and regulatory frameworks. Users are more likely to trust AI-driven applications when institutional safeguards exist to govern their responsible deployment. Therefore, this study proposes these additional hypotheses:

H3a. SA surrounding researchers' use of ChatGPT will positively affect TB.

H3b. SN surrounding researchers' use of ChatGPT will positively affect TB.

Integrated Model of TAM, TTF, and TST

Drawing upon empirical substantiation and theoretical rigor, this study purposefully synthesized TAM, TTF, and TST as the foundational framework for examining the adoption intentions of ChatGPT in research writing. From a theoretical perspective, combining these models provides a more comprehensive framework for understanding the complex nature of technology acceptance, particularly in the context of AI-driven academic writing tools (van Niekerk et al., 2025). Beyond their individual explanatory contributions, the constructs of these models exhibit conceptual interdependencies. For instance, when a technology effectively supports the demands of a task (FIT), users develop greater trust (TB) in its reliability and performance, as they gain confidence that the tool can successfully assist their academic writing needs. This trust is particularly crucial in AI applications (Omrani et al., 2022), where concerns about accuracy, bias, and ethical implications often shape adoption decisions. Similarly, users are more likely to adopt (BI) technology when its characteristics match the characteristics of the task (FIT) because a high task-technology fit reduces effort, enhances efficiency, and improves performance. If a tool does not align with users' needs, adoption may be hindered regardless of its general usability. Additionally, users are also more likely to use technology (BI) when they trust (TB) the reliability, integrity, and ability of technology because trust minimizes perceived risk and uncertainty, which are significant barriers to technology adoption (Marikyan et al., 2023). Researchers are more likely to integrate ChatGPT into their academic workflows when they perceive it as a credible, transparent, and dependable writing assistant. Based on these theoretical interrelations, this study posits the following hypotheses:

H4a. FIT will positively affect TB when writing research papers using ChatGPT.

H4b. FIT will positively affect BI when writing research papers using ChatGPT.

H4c. TB will positively affect BI when writing research papers using ChatGPT.

The predictors of TTF may also potentially influence the constructs of TAM and TST. For instance, when the features of a technology (TECH) reduce the complexity of a task and are integrated into an intuitive interface, it can increase how users perceive the usefulness (PU) and ease of use (PEOU) of the technology because reducing cognitive load and simplifying interaction makes the tool more accessible and efficient. Users are more likely to view ChatGPT as beneficial when it streamlines research writing without requiring extensive learning or technical expertise (Shahzad et al., 2024). Meanwhile, the more frequently a task is performed (TASK), the more likely it is to be considered a normal part of the work process (SN). Repeated exposure fosters familiarity and reinforces the perception that using the technology is a standard practice (Choi, 2020). When researchers frequently engage in academic writing, the use of ChatGPT may become socially normalized. In the same light, the more critical a task is (TASK), the more likely an infrastructure is needed to handle the importance and sensitivity of the task and provide accurate results (SA). Higher-stakes tasks require reliable support systems to ensure quality, precision, and security. In academic writing, where accuracy and credibility are essential (Acut et al., 2025; Garcia, 2024), researchers may seek additional safeguards—such as plagiarism detection, citation validation, and content accuracy checks—before fully relying on ChatGPT for manuscript preparation. Accordingly, the study proposes the following hypotheses:

H4d. TECH will have a positive and significant influence on ChatGPT's PEOU.
H4e. TECH will have a positive and significant influence on ChatGPT's PU.
H4f. TASK has a positive influence on the SN surrounding researchers' use of ChatGPT.
H4g. TASK has a positive influence on the SA surrounding researchers' use of ChatGPT.

The proposed integrated model with the corresponding hypothesized paths is presented in Figure 1, which is composed of nine constructs operationally defined in Table 1.

Methods

This study is a cross-sectional investigation using a structural equation modeling (SEM) approach to build a theoretical framework that explicates researchers' intention to adopt ChatGPT in academic writing workflows. SEM represents a robust multivariate statistical technique capable of simultaneously estimating relationships among multiple latent constructs. It differs from other modeling approaches by measuring path coefficients for both direct and indirect effects on pre-assumed causal relationships (Fan et al., 2016). This methodological choice is justified by SEM's capacity to model latent variables, accommodate measurement error, and empirically validate complex theoretical propositions within a unified analytical framework. Following the methodological precedent used by Garcia (2023), this study adheres to a systematic three-stage modeling approach. The initial phase involved constructing an integrated model contextualized within a rigorous empirical setting. Constructs derived from TAM, TTF, and TST were operationalized, and causal linkages were delineated based on an extensive literature review. The second phase was dedicated to the development of a measurement instrument to assess nine latent constructs (BI, PEOU, PU, TASK, TECH, FIT, SA, SN, and TB). Confirmatory factor analysis was conducted to assess construct validity, reliability, and factor loading adequacy within the measurement model. The final phase entailed iterative model modifications to ensure that construct-level adjustments remained theoretically and empirically substantiated. As emphasized by Garcia (2023), incremental refinement of construct definitions and interrelationships is crucial to mitigating spurious effects and preserving model parsimony. All methodological procedures adhered to the ethical research protocols established by the affiliated institution, with strict compliance to the ethical tenets articulated in the Declaration of Helsinki.

Table 1. Operational definitions of the constructs.
Technology Acceptance Model – Adapted from Davis (1989)
Perceived Usefulness (PU): The degree to which researchers believe that using ChatGPT in writing manuscripts would enhance their performance as researchers.
Perceived Ease of Use (PEOU): The degree to which researchers believe that using ChatGPT to assist them in their research writing would be free of effort.
Behavioral Intention (BI): The degree to which researchers believe that they are going to use ChatGPT to assist them in writing manuscripts in the future.
Task Technology Fit – Adapted from Goodhue and Thompson (1995)
Task Characteristics (TASK): The degree to which researchers believe that the defining features of writing manuscripts as a routine task can be completed using ChatGPT.
Technology Characteristics (TECH): The degree to which researchers believe that ChatGPT has the necessary features, capabilities, and attributes relevant to their research writing tasks.
Task-Technology Fit (FIT): The degree to which researchers believe that ChatGPT (i.e., technology) can assist them in writing manuscripts (i.e., tasks) efficiently.
Trust in Specific Technology – Adapted from Mcknight et al. (2011)
Situational Normality (SN): The degree to which researchers believe that utilizing ChatGPT to support them in writing manuscripts is normal and acceptable.
Structural Assurance (SA): The degree to which researchers believe that the success of using ChatGPT is likely because of the structural conditions (e.g., guarantees and support).
Trusting Beliefs (TB): The degree to which researchers believe that ChatGPT has the capability, functionality, or features to assist them in writing manuscripts.

Measurement Items

Table 1 presents the nine constructs adopted from the following frameworks: TAM (Davis, 1989), TTF (Goodhue & Thompson, 1995), and TST (Mcknight et al., 2011). Their definitions and corresponding measurement items in the instrument were contextualized to reflect ChatGPT usage in research writing. External researchers scrutinized the initial questionnaire in terms of format, consistency, relevancy, completeness, and readability using a judgment approach (Garcia, 2023). Their feedback led to minor adjustments, either by adding new items or simplifying existing ones. The revised instrument was then pilot-tested with a convenience sample of researchers to evaluate its reliability and validity and identify other potential problems. Cronbach's alpha coefficient for the whole scale was found to be 0.92, whereas the computed values for individual factors ranged from 0.78 to 0.91. With Cronbach's alpha values exceeding 0.70 for all constructs, the questionnaire exhibits strong internal consistency across individual items and the entire instrument. The final validated questionnaire contained two main sections: (1) demographic information and (2) construct measurement. The first section collected basic characteristics of the respondents, including age, gender, highest educational attainment, career length, academic status, number of publications, research funding sources, and experience in using writing assistant tools as well as ChatGPT. The second section comprised 36 items measuring the nine constructs presented in the proposed integrated research model (Figure 1). All measurement items adopted a five-point Likert scale, with possible responses ranging from 1 (strongly disagree) to 5 (strongly agree). Instead of the "neutral" option, "unsure" was used as the middle point.
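For readers who wish to reproduce this reliability check, Cronbach's alpha can be computed directly from item-level responses. The sketch below is illustrative only: the responses.csv file and the PU1-PU5 column labels are hypothetical stand-ins for the actual dataset, not artifacts released with this study.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical usage: one row per respondent, items on the 1-5 Likert scale.
responses = pd.read_csv("responses.csv")
alpha_pu = cronbach_alpha(responses[["PU1", "PU2", "PU3", "PU4", "PU5"]])
print(f"Cronbach's alpha (PU construct): {alpha_pu:.2f}")
```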

Table 2. Demographic characteristics of the respondents (N = 564).
Gender: Male 374 (66.31%); Female 190 (33.69%)
Age: Less than 25 3 (0.53%); 25–34 147 (26.06%); 35–44 272 (48.23%); 45–54 130 (23.05%); 55–64 12 (2.13%); 65 and over 0 (0.00%)
Highest Educational Attainment: Bachelor 5 (0.89%); Master 371 (65.78%); Doctorate 188 (33.33%)
Country: Philippines 235 (41.67%); India 97 (17.20%); Portugal 76 (13.48%); Finland 43 (7.62%); Egypt 31 (5.50%); Mexico 24 (4.26%); Malaysia 15 (2.66%); Taiwan 13 (2.30%); South Africa 13 (2.30%); United Arab Emirates 9 (1.60%); Singapore 8 (1.42%)
Academic Status: Graduate Student 73 (12.94%); Post-Doctoral Researcher 156 (27.66%); Faculty Researcher 323 (57.27%); Independent Researcher 12 (2.13%); Not applicable 0 (0.00%)
Usual Funding Sources: Personal Fund 21 (3.72%); Government Grants 83 (14.72%); Private Foundations 31 (5.50%); Philanthropy 2 (0.35%); University or Institution Funding 250 (44.33%); Crowdfunding 0 (0.00%); Industry Sponsorship 12 (2.13%); International Organizations 5 (0.89%); I couldn't get funding for my research 55 (9.75%); I don't need money to do research 105 (18.62%)
Type of Institution: Public 241 (42.73%); Private 323 (57.27%)
Number of Publications: Less than 10 116 (20.57%); 10–20 187 (33.16%); 21–50 255 (45.21%); 51–100 5 (0.89%); More than 100 1 (0.18%)
Utilization of Writing Assistant Tools: Yes 498 (88.30%); No 66 (11.70%)
ChatGPT Experience: Yes 530 (93.97%); No 34 (6.03%)

Sample and Data Collection

The target population comprised academics, scientists, and researchers who were still actively engaged in research activities in any discipline at the time of the study. Potential participants were recruited using convenience and chain referral non-probability sampling techniques. As affirmed by Garcia (2023), it is acceptable to enlist a non-probability sample when the aim is to examine the hypothesized theoretical assumptions. The self-administered questionnaire was hosted online using Google Forms from November 28, 2023, to January 9, 2024, and was distributed on various social media networks (e.g., Facebook and LinkedIn). Research colleagues and previous co-authors were contacted to request the dissemination of the online questionnaire to their professional networks and respective institutions. A total of 564 researchers from 12 countries participated in the survey (see Table 2), of whom most were from the Philippines (n = 235, 41.67%), India (n = 97, 17.20%), Portugal (n = 76, 13.48%), and Finland (n = 43, 7.62%). Most respondents were male (n = 374, 66.31%) and aged 35 to 44 years (n = 272, 48.23%, mean = 37.02, standard deviation = 8.42). They were mostly faculty researchers (n = 323, 57.27%) with a master's degree (n = 371, 65.78%) working in private institutions (n = 323, 57.27%), and their number of publications ranged from 21 to 50 papers (n = 255, 45.21%). Institution funding was the most common source of financial support (n = 250, 44.33%), followed by unfunded research (n = 105, 18.62%). Most respondents used writing assistant tools (n = 498, 88.30%) in their research activities and had experience in using ChatGPT (n = 530, 93.97%).

Table 3. Recommended threshold values for the goodness-of-fit measures (Schermelleh-Engel et al., 2003).
Chi-square/Degree of Freedom (χ2/df): good fit 0 ≤ χ2/df ≤ 2; acceptable fit 2 < χ2/df ≤ 3
Goodness of Fit Index (GFI): good fit .95 ≤ GFI ≤ 1.00; acceptable fit .90 ≤ GFI < .95
Adjusted Goodness-of-Fit Index (AGFI): good fit .90 ≤ AGFI ≤ 1.00; acceptable fit .85 ≤ AGFI < .90
Normed Fit Index (NFI): good fit .95 ≤ NFI ≤ 1.00; acceptable fit .90 ≤ NFI < .95
Non-Normed Fit Index (NNFI): good fit .97 ≤ NNFI ≤ 1.00; acceptable fit .95 ≤ NNFI < .97
Comparative Fit Index (CFI): good fit .97 ≤ CFI ≤ 1.00; acceptable fit .95 ≤ CFI < .97
Root Mean Square Error of Approximation (RMSEA): good fit 0 ≤ RMSEA ≤ .05; acceptable fit .05 < RMSEA ≤ .08

Data Analysis

The collected data were analyzed and reported using descriptive statistics in IBM SPSS Statistics 22 and SEM in IBM SPSS Amos 22. The SEM methodology was implemented via a multistage analytical strategy to empirically validate the proposed integrated model (Garcia, 2023). The first stage involved testing the measurement model to explore the causal relationships between latent variables and measurement items. Confirmatory factor analysis was performed on the measurement model to evaluate construct dimensionality and psychometric soundness; Cronbach's alpha, common method bias, composite reliability, discriminant validity, convergent validity, and factor loadings were analyzed. In the next stage, SEM was conducted to estimate correlation coefficients and standardized path coefficients for each hypothesized relationship. The structural model's empirical adequacy was evaluated using a comprehensive set of goodness-of-fit indices following the benchmark recommendations of Schermelleh-Engel et al. (2003). These indices included the Chi-square/Degree of Freedom (χ2/df), Goodness of Fit Index (GFI), Adjusted Goodness-of-Fit Index (AGFI), Normed Fit Index (NFI), Non-Normed Fit Index (NNFI), Comparative Fit Index (CFI), and Root Mean Square Error of Approximation (RMSEA). The recommended threshold values for these indices are presented in Table 3. Finally, the 14 research hypotheses were tested at a 0.05 level of statistical significance, with each hypothesis either supported or rejected based on the empirical findings.
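Although the analyses reported here were conducted in IBM SPSS Amos (a GUI-driven tool), the same two-stage workflow can be illustrated with open-source software. The sketch below is an assumed illustration using the Python semopy package, not the software used in this study; it specifies only a fragment of the model (the TAM paths H1a–H1c) and the responses.csv dataset is hypothetical. Item labels mirror those in Table 5.

```python
import pandas as pd
from semopy import Model, calc_stats

# A fragment of the integrated model (TAM paths H1a-H1c only, for brevity).
MODEL_DESC = """
PEOU =~ PEOU1 + PEOU2 + PEOU3 + PEOU4
PU =~ PU1 + PU2 + PU3 + PU4
BI =~ BI1 + BI2 + BI3
PU ~ PEOU
BI ~ PEOU + PU
"""

data = pd.read_csv("responses.csv")   # hypothetical item-level dataset
model = Model(MODEL_DESC)
model.fit(data)

print(model.inspect())      # factor loadings, path coefficients, and p-values
print(calc_stats(model).T)  # fit statistics, e.g., chi2/df, CFI, TLI (NNFI), RMSEA
```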

Table 4. Construct reliability (CR), average variance extracted (AVE), average shared variance (ASV), and maximum shared variance (MSV).
Construct CR AVE ASV MSV
Behavioral Intention (BI) 0.834 0.626 0.423 0.322
Perceived Usefulness (PU) 0.833 0.555 0.351 0.290
Perceived Ease of Use (PEOU) 0.849 0.585 0.359 0.301
Task Characteristics (TASK) 0.799 0.571 0.291 0.317
Technology Characteristics (TECH) 0.816 0.525 0.250 0.294
Task-Technology Fit (FIT) 0.834 0.556 0.359 0.310
Situational Normality (SN) 0.895 0.680 0.419 0.343
Structural Assurance (SA) 0.856 0.597 0.397 0.317
Trusting Beliefs (TB) 0.792 0.560 0.322 0.239

Results

The results of the measurement model analysis are presented in Table 4. Regarding reliability, composite reliability (CR) values ranged from 0.792 to 0.895. Each construct exceeded the suggested 0.70 threshold (Hair et al., 2022), indicating that the questionnaire has an acceptable level of internal consistency. Common method bias was assessed using Harman's single-factor test; no risk of bias was detected because the total variance extracted by a single factor did not exceed the 50% threshold (Podsakoff et al., 2003). In terms of convergent validity, average variance extracted (AVE) values ranged from 0.525 to 0.680. All constructs have an AVE greater than 0.50 that is also higher than both the maximum shared variance (MSV) and the average shared variance (ASV). These values are indicative of good convergent validity (Garcia, 2023). Finally, discriminant validity was examined by comparing the AVE with the squared correlation between pairs of constructs. As presented in Table 6, all values were below the square root of the AVE (i.e., the diagonal values in Table 6), indicating compliance with the Fornell and Larcker (1981) criterion. The heterotrait-monotrait ratio of correlations (HTMT) criterion was also used to detect potential discriminant validity issues; Garcia (2023) used the same technique in evaluating productivity software adoption, asserting that the criterion can also be applied to covariance-based SEM. For this study, the HTMT values ranged from 0.207 to 0.795, all below the 0.90 threshold above which poor discriminant validity is indicated.
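To make the link between Tables 4–6 concrete, CR and AVE follow arithmetically from the standardized factor loadings. The minimal sketch below reproduces the reported values for the BI construct from its final loadings in Table 5; only NumPy is required, and the formulas are the standard CR and AVE definitions.

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    error_var = 1 - loadings**2  # error variance of standardized indicators
    return loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return (loadings**2).mean()

bi_loadings = np.array([0.803, 0.812, 0.758])  # final BI loadings (Table 5)
print(f"CR  = {composite_reliability(bi_loadings):.3f}")       # 0.834, as in Table 4
print(f"AVE = {average_variance_extracted(bi_loadings):.3f}")  # 0.626, as in Table 4

# Fornell-Larcker diagonal: sqrt(AVE) must exceed the construct's correlations.
print(f"sqrt(AVE) = {np.sqrt(average_variance_extracted(bi_loadings)):.3f}")  # 0.791, Table 6
```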

Table 5. Descriptive statistics (M ± SD) and factor loadings of the measurement items. Values in parentheses are initial/final factor loadings; "--" indicates an item removed from the final model.
Behavioral Intention (BI): BI1 3.03 ± 1.505 (.771/.803); BI2 3.21 ± 1.412 (.802/.812); BI3 2.79 ± 1.428 (.723/.758)
Perceived Usefulness (PU): PU1 4.42 ± 1.004 (.728/.788); PU2 4.39 ± 1.094 (.719/.743); PU3 4.42 ± 1.092 (.722/.715); PU4 4.44 ± 1.085 (.735/.731); PU5 3.56 ± 1.433 (.493/--)
Perceived Ease of Use (PEOU): PEOU1 4.33 ± 1.093 (.778/.785); PEOU2 4.28 ± 1.064 (.771/.771); PEOU3 4.14 ± 0.985 (.763/.751); PEOU4 4.18 ± 0.998 (.755/.752)
Task Characteristics (TASK): TASK1 2.43 ± 0.952 (.766/.761); TASK2 2.56 ± 1.041 (.763/.711); TASK3 3.39 ± 1.211 (.791/.792)
Technology Characteristics (TECH): TECH1 4.02 ± 0.825 (.722/.721); TECH2 3.98 ± 0.829 (.711/.718); TECH3 4.00 ± 0.795 (.714/.721); TECH4 4.01 ± 0.811 (.718/.739)
Task-Technology Fit (FIT): FIT1 3.55 ± 1.122 (.744/.743); FIT2 3.70 ± 1.096 (.752/.761); FIT3 3.83 ± 1.111 (.764/.734); FIT4 3.50 ± 1.015 (.741/.745)
Situational Normality (SN): SN1 1.99 ± 0.628 (.812/.813); SN2 2.01 ± 0.711 (.825/.829); SN3 2.02 ± 0.714 (.819/.813); SN4 1.74 ± 0.525 (.826/.844)
Structural Assurance (SA): SA1 3.50 ± 1.140 (.785/.791); SA2 3.56 ± 1.118 (.737/.735); SA3 3.45 ± 1.111 (.767/.752); SA4 3.54 ± 1.138 (.796/.811); SA5 2.51 ± 0.825 (.497/--)
Trusting Beliefs (TB): TB1 2.74 ± 0.811 (.781/--); TB2 3.43 ± 1.101 (.768/.771); TB3 3.28 ± 1.114 (.734/.732); TB4 3.30 ± 1.088 (.745/.741)

Descriptive statistics using mean and standard deviation (M ± SD), as well as the initial and final factor loadings, are presented in Table 5. Despite recognizing ChatGPT as useful (4.42 ± 1.004) and easy to use (4.23 ± 0.991), researchers remain uncertain about adopting it for research paper writing (3.01 ± 1.594). Interestingly, while researchers perceive ChatGPT as a good fit for their research writing activities (3.65 ± 0.895), they do not believe that research writing should require or depend on AI software (2.79 ± 0.781). They are also not comfortable using ChatGPT to write research papers, and they believe it is not normal for researchers to do so (1.94 ± 0.699). Meanwhile, confirmatory factor analysis indicated that all retained items loaded significantly on their respective constructs at the 0.05 level of significance. However, the items PU5 (Using ChatGPT would be useful for my job), SA5 (I feel safe using ChatGPT because it has legal measures in place), and TB1 (ChatGPT is a very reliable artificial intelligence software) did not meet the recommended indicator retention criteria. Therefore, the model was modified by removing these indicators to strengthen the model's fit.

Table 6. Discriminant validity assessment (Fornell-Larcker criterion).
BI PU PEOU TASK TECH FIT SN SA TB
BI 0.791
PU 0.572 0.745
PEOU 0.439 0.721 0.765
TASK 0.234 0.672 0.223 0.755
TECH 0.648 0.665 0.295 0.534 0.725
FIT 0.762 0.445 0.239 0.681 0.694 0.746
SN 0.547 0.323 0.332 0.712 0.558 0.329 0.825
SA 0.195 0.295 0.194 0.345 0.356 0.165 0.266 0.773
TB 0.668 0.533 0.211 0.533 0.453 0.246 0.357 0.536 0.748
Note: BI = Behavioral Intention, PU = Perceived Usefulness, PEOU = Perceived Ease of Use, TASK = Task Characteristics, TECH = Technology Characteristics, FIT = Task-Technology Fit, SN = Situational Normality, SA = Structural Assurance, and TB = Trusting Beliefs. Diagonal values represent the square root of the AVE.

As can be seen in Table 6, there were inter-construct correlations (ICC) greater than the 0.60 threshold value. A high ICC between constructs could be a sign of a high degree of similarity between the indicators that are supposed to measure different constructs. When the model is over-parameterized, it can lead to a problem of multicollinearity. This condition is worth investigating because it can cause several problems, including unstable and unreliable estimates of regression coefficients and difficulty in estimating the model, making it harder to determine the true effect of an independent variable on the dependent variable. A supplementary test was therefore conducted using the variance inflation factor (VIF) to determine whether multicollinearity was present, and the tolerance values for each construct were also computed. Garcia (2023) noted that multicollinearity exists when the tolerance values of individual constructs are less than 0.10 or when the VIF values are greater than 10. No multicollinearity was detected in this dataset because the lowest tolerance value was 0.26, and the highest VIF was 5.67.
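As a hedged illustration, VIF and tolerance diagnostics of this kind can be reproduced with statsmodels on construct-level scores. The construct_scores.csv file and its column layout below are assumptions for illustration, not artifacts of this study.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical construct-level scores (e.g., mean of each construct's retained items).
scores = pd.read_csv("construct_scores.csv")  # columns: PU, PEOU, TASK, TECH, ...
X = sm.add_constant(scores)                   # VIF computation requires an intercept column

for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    # Multicollinearity is flagged when VIF > 10 or tolerance < 0.10 (Garcia, 2023).
    print(f"{name}: VIF = {vif:.2f}, tolerance = {1 / vif:.2f}")
```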

Table 7. Summary of hypothesis testing results.
H# Structural Paths Path Coefficients p-value Empirical Evidence
H1a PEOU → PU .635 .001 Supported
H1b PEOU → BI .453 .053 Rejected
H1c PU → BI .446 .051 Rejected
H2a TECH → FIT .373 .047 Supported
H2b TASK → FIT .308 .055 Rejected
H3a SA → TB .148 .081 Rejected
H3b SN → TB .596 .002 Supported
H4a FIT → TB .331 .013 Supported
H4b FIT → BI .453 .002 Supported
H4c TB → BI .721 .000 Supported
H4d TECH → PEOU .245 .046 Supported
H4e TECH → PU .251 .046 Supported
H4f TASK → SN .153 .067 Rejected
H4g TASK → SA .156 .072 Rejected
Note: BI = Behavioral Intention, PU = Perceived Usefulness, PEOU = Perceived Ease of Use, TASK = Task Characteristics, TECH = Technology Characteristics, FIT = Task-Technology Fit, SN = Situational Normality, SA = Structural Assurance, and TB = Trusting Beliefs.

After confirming that the measurement model (confirmatory factor model) was satisfactorily adequate, this study conducted SEM analysis to test the research hypotheses and verify the causal relationships between the constructs of TAM, TTF, and TST. This analysis allows the modeling of complex relationships among variables and the estimation of their direct and indirect effects. The overall structural model fit was evaluated using a set of commonly used fit indices, with the recommended values for good and acceptable fit (Schermelleh-Engel et al., 2003) presented in Table 3. The fit indices for the final model indicated either an acceptable or good structural model fit: χ2/df = 1.79; GFI = 0.91; AGFI = 0.87; NFI = 0.91; NNFI = 0.94; CFI = 0.95; and RMSEA = 0.07. Taken together, these indices suggest a relatively good match between the observed sample data and both the measurement and structural models, indicating that the model is a good representation of the relationships among the variables.
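The threshold bands in Table 3 lend themselves to a simple mechanical check. The helper below encodes those cutoffs and classifies an index value; it is a sketch of the evaluation logic rather than part of the reported analysis.

```python
# Cutoffs from Table 3 (Schermelleh-Engel et al., 2003).
# Each tuple is (good bound, acceptable bound); "direction" says which way is better.
THRESHOLDS = {
    "chi2/df": (2.00, 3.00, "lower"),
    "GFI":     (0.95, 0.90, "higher"),
    "AGFI":    (0.90, 0.85, "higher"),
    "NFI":     (0.95, 0.90, "higher"),
    "NNFI":    (0.97, 0.95, "higher"),
    "CFI":     (0.97, 0.95, "higher"),
    "RMSEA":   (0.05, 0.08, "lower"),
}

def classify(index: str, value: float) -> str:
    """Classify a fit-index value as good, acceptable, or poor per Table 3."""
    good, acceptable, direction = THRESHOLDS[index]
    if direction == "lower":
        return "good" if value <= good else "acceptable" if value <= acceptable else "poor"
    return "good" if value >= good else "acceptable" if value >= acceptable else "poor"

print(classify("chi2/df", 1.79))  # good
print(classify("RMSEA", 0.07))    # acceptable
```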

Finally, the summary of the hypothesis testing results is presented in Table 7. The proposed research model explained 74% of the variance in researchers' intention to use ChatGPT for research writing. The BI construct was jointly predicted by FIT (β = .453, p = .002) and TB (β = .721, p < .001) but not by PEOU (β = .453, p = .053) or PU (β = .446, p = .051). Thus, H4b and H4c were accepted, while H1b and H1c were rejected. This finding indicates that the TAM constructs were not as useful as the TST and TTF constructs in explaining the authors' acceptance of ChatGPT. From the TAM perspective, only the relationship between PEOU and PU, as indicated in H1a, was supported (β = .635, p = .001). Meanwhile, the FIT construct was influenced by TECH (β = .373, p = .047) but not by TASK (β = .308, p = .055), supporting H2a and rejecting H2b, respectively. TECH also significantly influenced the TAM variables PEOU (H4d; β = .245, p = .046) and PU (H4e; β = .251, p = .046). Conversely, TASK failed to influence the TST variables SN (H4f; β = .153, p = .067) and SA (H4g; β = .156, p = .072). For the TST constructs, SN (H3b; β = .596, p = .002) was found to influence TB, unlike SA (H3a; β = .148, p = .081), which did not. Overall, of the 14 proposed hypotheses, only eight were supported. From a theoretical perspective, the intention of authors to write research papers using ChatGPT was influenced by a different set of constructs compared to other technologies (e.g., Beldad & Hegner, 2018; Garcia et al., 2022; Jeyaraj, 2022; Leem & Sung, 2019). Figure 2 presents the final model for evaluating the factors affecting researchers' intention to adopt ChatGPT for research writing.

Discussion

Research writing is an indispensable cognitive and epistemic competency for researchers, academics, or professionals operating within disciplines necessitating the systematic articulation and dissemination of empirical findings (Kotz et al., 2013). Mastery of academic writing conventions significantly enhances the probability of manuscript acceptance in high-impact journals, as clarity, logical cohesion, and structural organization serve as fundamental evaluative criteria in editorial and peer-review processes. Despite the centrality of proficient research writing, many scholars encounter substantive challenges that necessitate reliance on external writing assistance. AI writing tools are increasingly becoming a popular option, with ChatGPT emerging as a focal point of interest within the scientific community (Bin-Nashwan et al., 2023; Cheng, 2023; Malik et al., 2024). However, the extent to which researchers exhibit a proclivity to integrate ChatGPT into their writing workflows remains indeterminate, as does the constellation of cognitive, technological, and contextual determinants influencing this adoption trajectory. Despite ongoing academic discourse on the epistemic legitimacy and ethical ramifications of AI writing tools (Garcia, 2024), prior investigations have predominantly focused on teachers and students as primary user groups. This participant selection bias has resulted in a notable empirical void regarding researchers' adoption patterns, thereby obstructing the formulation of a theoretically robust framework capable of explicating the decision-making heuristics underpinning researchers' engagement with ChatGPT in scholarly writing. Addressing this gap in empirical inquiry holds profound implications for academic and institutional stakeholders. Providing this empirical evidence could inform researchers, funding bodies, academic publishers, policy architects, and broader scientific communities, each of whom harbors vested epistemic and operational interests in the evolving landscape of AI-mediated knowledge production.

Ambivalence Toward ChatGPT in Research Writing

New technologies are either resisted or accepted by their target users. However, diverging from conventional technology acceptance paradigms, this study identified a pronounced ambivalence among researchers regarding their intention to use ChatGPT for scholarly writing. As a relatively new technology, it is plausible that researchers exhibit limited operational fluency in leveraging the computational affordances of generative AI tools. Familiarity—conceptualized as prior exposure and experiential interaction with a technological system—has been empirically established as a determinant of both PEOU and PU, exerting a significant influence on users' post-adoption behaviors (Choi, 2020). Researchers' propensity to integrate ChatGPT into their writing workflows is likely to increase as they gain functional literacy in its algorithmic mechanisms, output generation, and epistemic constraints. This experiential familiarity could subsequently engender positive affective attitudes, which have been recognized as pivotal antecedents of BI in previous empirical inquiries (e.g., Hussein, 2017; Luan & Teo, 2011).

Simultaneously, skepticism toward ChatGPT's epistemic reliability and algorithmic precision emerged as a critical barrier to adoption. Given that validity and integrity constitute the cornerstone principles of scientific inquiry, researchers exhibit hesitancy in utilizing a system that lacks deterministic reasoning and is inherently probabilistic in nature. This hesitancy is further reinforced by a phenomenon that can be characterized as AI shaming (Acut et al., 2025)—a form of social stigmatization whereby academics fear being perceived as less rigorous, original, or ethical for relying on generative AI tools. Such stigma may be amplified by institutional norms, peer judgment, or uncertainties around authorship and academic integrity. Beyond these socio-institutional factors, concerns surrounding intellectual ownership and creative authenticity may likewise complicate researchers' willingness to adopt ChatGPT. For many scholars, writing is not merely a mechanical process but an epistemic and expressive act. The involvement of generative AI in ideation or articulation raises anxieties about diluting one's scholarly voice or compromising the originality of intellectual contributions. Parallel to this is a growing unease about potential cognitive offloading and skill atrophy. Some researchers speculate that routine reliance on AI tools may erode essential academic competencies over time, such as critical thinking, argument construction, and nuanced scholarly writing (Gustilo et al., 2024).

From an information systems perspective, reliability and accuracy are quintessential attributes of high-functioning technological infrastructures (Zhou et al., 2022). Concerns regarding bias propagation further compound this reluctance—not only in academic writing but also across broader educational contexts (Bozkurt et al., 2024). In machine learning research, systematic deviations from ground-truth values are extensively documented, underscoring algorithmic biases embedded within data-driven models (Akter et al., 2022; Kumagai et al., 2022). Within an academic context, bias in research outputs has profound ramifications, including epistemic distortions, reputational damage to scholars, and a potential erosion of public trust in scientific institutions. This resistance is not merely technical but also psychological, often shaped by affective responses such as mistrust, anxiety, or a perceived loss of agency in the research process. Researchers' hesitation to adopt ChatGPT thus underscores the need for a rigorous examination of the latent cognitive and socio-technical factors shaping their uncertainty.

Rethinking the Relevance of TAM Constructs

Intriguingly, despite perceived usefulness (PU) and perceived ease of use (PEOU) being among the most robust predictors of behavioral intention (BI) across a multitude of technology adoption frameworks (e.g., Garcia, 2023)—including the use of ChatGPT in academic writing by doctoral students (Zou & Huang, 2023)—the present study found these canonical TAM constructs to be statistically insignificant in the context of ChatGPT acceptance. Prior research posits that participant intelligence levels may attenuate the association between PU, PEOU, and BI (Yarbrough & Smith, 2007). However, this explanation should be interpreted with epistemological caution, as intelligence is a multidimensional construct that cannot be linearly mapped onto technology adoption behaviors. Researchers may exhibit cognitive dispositions that prioritize exploratory engagement over immediate functional utility, shifting their focus toward theoretical or conceptual affordances of ChatGPT rather than its pragmatic applications. Contextualizing this assertion within the present study implicitly assumes that researchers possess above-average intelligence relative to the general population. While 99.11% of the study's participants hold master's or doctoral degrees, it remains methodologically problematic to equate educational attainment with cognitive ability in the domain of technology acceptance. The observed insignificance of PU and PEOU warrants further empirical scrutiny to ascertain the cognitive, epistemic, and contextual contingencies influencing researchers' AI adoption behaviors. Nevertheless, PEOU retains its role as a critical antecedent of PU within the ChatGPT adoption framework (e.g., Garcia, 2023; Zhou et al., 2022; Zou & Huang, 2023). This finding reaffirms that perceptions of usability shape perceived instrumental benefits, albeit without directly translating into adoption intent in the case of generative AI in research writing.

Task-Technology Fit and Trust as Key Determinants of Adoption

Rather than the conventional TAM-derived constructs, FIT and TB emerged as the proximal determinants of BI. The statistically significant association between FIT and BI suggests that researchers prioritize task congruence over generic usability metrics such as PEOU and PU. Fundamentally, they assess the alignment between their research writing exigencies and the functional affordances of the technology. Given that most participants have prior exposure to ChatGPT, they may possess a refined awareness of its intrinsic constraints in fulfilling the cognitive and epistemic rigor demanded by scholarly writing. This discernment likely stems from an appraisal of ChatGPT's proficiencies and deficiencies in executing complex academic writing tasks, including the generation of logically coherent arguments, disciplinary lexicon integration, and adherence to domain-specific rhetorical conventions (Cheng, 2023). Concomitantly, the significant association between TB and BI suggests that researchers exhibit a propensity to adopt ChatGPT for manuscript composition contingent upon their trust in both the system and its generative output. This finding corroborates assertions that skepticism toward AI-driven text generation originates from apprehensions regarding content fidelity, algorithmic opacity, and latent biases embedded in generative outputs. As trust constitutes a foundational social-cognitive construct in human-automation dyads (Lee & See, 2004), researchers need to perceive ChatGPT as epistemically reliable and intellectually robust before assimilating it into their academic workflows. Notably, TB exhibits greater explanatory power than FIT in predicting BI, which implies that researchers prioritize trust calibration over mere task-technology alignment.

Given the paramount influence of TB on BI, it is particularly salient that SN exerts a direct and substantive impact on TB. As posited by TST, researchers' trust propensity toward ChatGPT is significantly shaped by normative influences within their scholarly communities. Social endorsement functions as a critical heuristic mechanism in trust formation, as academics rely on peer validation and institutional precedent to determine the legitimacy of AI-assisted manuscript composition. Given that AI-driven writing assistance remains an emergent and contentious phenomenon, researchers may be reluctant to trust ChatGPT if its usage does not align with entrenched disciplinary norms and scholarly conventions. This aligns with recent findings indicating that electronic word-of-mouth communication exerts a tangible influence on the acceptance and institutional legitimation of ChatGPT (Bin-Nashwan et al., 2023). From a normative and ethical standpoint, it is imperative to delineate how researchers envisage the role of ChatGPT within the scholarly writing process. Certain researchers may perceive ChatGPT as a linguistic augmentation tool restricted to syntactic refinement, stylistic enhancement, and orthographic rectification—akin to conventional writing assistive technologies (Dale & Viethen, 2021). Conversely, others may regard the use of AI-generated text in manuscript authorship as an epistemic and ethical transgression. ChatGPT itself has acknowledged the risk of unethical applications, including academic misconduct and unauthorized assistance in scholarly outputs (King & ChatGPT, 2023). Given these epistemological tensions, this study underscores the necessity for rigorous scholarly interrogation into the implications of generative AI in research writing. The findings substantiate the imperative for multidimensional analyses of AI adoption patterns, institutional regulations, and ethical boundaries.

Theoretical and Practical Implications

From a theoretical standpoint, this study furnishes novel empirical substantiation to the extensive corpus of scholarship on technology acceptance paradigms by empirically validating an integrated tripartite model encompassing TAM, TTF, and TST. Prior technology adoption frameworks predominantly conjoin TAM and TTF (Mustafa & Garcia, 2021) or incorporate trust as an ancillary construct within these theoretical schemas (Wang et al., 2021; Wu et al., 2011). This study represents one of the few empirical inquiries to synthesize three established theoretical models, which marks a pioneering effort in integrating TST alongside TAM and TTF within the domain of generative AI-assisted scholarly research writing.

Interestingly, the constructs of TTF and TST exhibited greater explanatory power than the conventional TAM-based determinants of technology acceptance. This divergence underscores the need for further epistemological interrogation into the convergence, demarcation, and theoretical complementarities among these models. Nevertheless, the resultant structural model provides a robust analytical framework for explicating the cognitive heuristics underlying researchers' adoption proclivities toward ChatGPT. From a pragmatic standpoint, the empirical findings yield actionable insights with direct operational, institutional, and policy-level ramifications. First, a granular understanding of the determinants influencing researchers' acceptance of ChatGPT can inform the iterative development and optimization of AI-driven manuscript composition systems. Possessing this knowledge ensures these tools are bespoke to the comprehensive exigencies of the academic milieu. Enhancing the linguistic, structural, and epistemic affordances of such technologies may catalyze greater assimilation into research workflows. Second, by illuminating the salience of SN and TB, this study underscores the imperative for institutional and technological interventions that foster a normative and epistemically trustworthy ecosystem for AI integration in academic knowledge production. Institutions and software architects must devise strategic implementations that normalize AI adoption, mitigate cognitive resistance, and fortify researchers' confidence in the ethical and intellectual legitimacy of AI-mediated writing assistance. Furthermore, funding agencies and regulatory bodies may leverage these findings to calibrate resource allocation, refine policy frameworks, and articulate governance mechanisms that delineate ethical best practices for AI deployment in scholarly inquiry. Publishing companies and academic gatekeeping institutions could similarly adapt editorial policies and peer-review protocols to accommodate the growing entanglement of generative AI in research communication.

In sum, this theoretically anchored investigation advances the discourse on AI-mediated epistemic labor by offering empirical elucidation of the socio-technical determinants shaping AI adoption trajectories in academic settings. By bridging conceptual rigor with applied significance, this study lays the groundwork for a more informed, ethically grounded, and academically synergistic integration of AI-driven language models in research ecosystems.

Limitations and Future Directions

This study is not without its methodological and conceptual constraints, which warrant further scholarly inquiry. First, because this research employed a cross-sectional design, a longitudinal approach is strongly recommended, as technology acceptance is inherently an iterative and dynamic process rather than a singular event. Researchers' perceptions may undergo substantial recalibration in response to advancements in AI-generated content, shifts in regulatory paradigms, and evolving academic epistemologies.

Second, while this study deliberately focused on researchers (a demographic underrepresented in AI and AWS literature), future investigations could broaden the scope by incorporating educators and students to explore ChatGPT adoption across the educational continuum. Such an expansion would be particularly pertinent given ongoing pedagogical discourses surrounding AI-mediated learning and the growing institutional restrictions on generative AI tools due to concerns over academic integrity and epistemic reliability.

Third, subsequent research should critically interrogate the epistemological, ethical, and integrity-related implications of ChatGPT and analogous LLMs. Potential areas include misinformation analysis, particularly regarding AI-generated outputs that produce synthetic but factually erroneous claims, hallucinatory citations, or data misrepresentations presented as legitimate references (Acut et al., 2025). Additionally, the legal, institutional, and governance dimensions of ChatGPT's adoption (spanning authorship attribution, algorithmic opacity, systemic biases, and compliance with academic regulatory structures) require rigorous scrutiny, as these factors have profound ramifications for AI integration in research and pedagogical contexts.
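As one hypothetical illustration of the citation-integrity work proposed above, the sketch below queries Crossref's public REST API to check whether a cited title resolves to a real bibliographic record. The reference string is invented, and the matching heuristic is deliberately naive; a serious pipeline would add fuzzy title matching, author and year verification, and handling of genuine works absent from Crossref.

import requests

def crossref_top_match(reference: str) -> dict:
    # Query Crossref for the closest bibliographic match to a reference.
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": reference, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else {}

claimed = "A fabricated study of AI writing adoption (2024)"  # invented citation
match = crossref_top_match(claimed)
print("Claimed reference:", claimed)
print("Best Crossref match:", (match.get("title") or ["<none>"])[0])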

Fourth, cross-cultural, inter-institutional, and jurisdictional comparisons would yield valuable insights into differential adoption trajectories and ethical concerns across diverse academic ecosystems. Variability in digital infrastructures, AI literacy levels, institutional policies, and scholarly writing conventions across geopolitical contexts may substantially modulate adoption rates and normative perceptions (Akpan et al., 2024). Future research should deploy context-sensitive analytical models or statistical weighting techniques to control for these asymmetries, ensuring a more ecologically valid and globally representative analysis.
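To make the suggested weighting techniques concrete, the sketch below computes simple post-stratification weights by country. The target shares, column names, and file name are hypothetical placeholders; a real analysis would derive targets from actual population figures for the academic ecosystems being compared.

import pandas as pd

# Hypothetical respondent data: one row per respondent, with a
# 'country' column and Likert item responses (e.g., BI1, BI2, BI3).
df = pd.read_csv("responses.csv")

# Illustrative target shares (fractions of the reference population).
TARGET_SHARE = {"Philippines": 0.20, "Japan": 0.30, "United States": 0.50}

sample_share = df["country"].value_counts(normalize=True)

# Post-stratification weight: target share divided by sample share.
# Countries missing from TARGET_SHARE receive zero weight here.
df["weight"] = df["country"].map(
    lambda c: TARGET_SHARE.get(c, 0.0) / sample_share[c]
)

# Weighted composite mean for behavioral intention.
bi = df[["BI1", "BI2", "BI3"]].mean(axis=1)
print((bi * df["weight"]).sum() / df["weight"].sum())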

Finally, synergistic integrations with complementary AI-powered research tools merit further exploration. Future studies could examine how ChatGPT interoperates with other computational writing assistants, algorithmic plagiarism detection systems, and generative AI-enhanced adaptive learning technologies. Such an inquiry would illuminate the multifaceted affordances of AI in scholarly communication, thereby advancing discourse on the convergence of human cognition and machine intelligence in knowledge production.

Conclusion

Research writing constitutes an indispensable facet of the scientific enterprise and serves as a foundational skill for the precise dissemination of scholarly contributions. Given its inherently meticulous and cognitively demanding nature, many researchers contemplate leveraging computational assistance in manuscript composition to optimize temporal efficiency and mitigate cognitive load (e.g., through the automation of syntactic structuring and stylistic refinement). Such tools also augment the linguistic integrity of manuscripts by systematically identifying lexical, grammatical, and orthographic deviations. Considering these exigencies, it is unsurprising that the scientific community has exhibited considerable interest in ChatGPT, particularly given its superior computational linguistics capabilities relative to conventional manuscript-enhancement software.

Building upon the escalating prominence of LLMs in scholarly communication, this study undertakes a theoretical exposition and empirical validation of an integrative conceptual framework underpinned by TAM, TTF, and TST. The SEM approach was adopted to analyze data collected from 564 researchers in 12 countries and to determine the factors influencing scholars' propensity to employ AI-assisted manuscript composition tools. The findings underscore that TTF and TST exert a more pronounced influence on adoption intention than the classical constructs of TAM: researchers ascribe greater weight to the congruence between technological affordances (ChatGPT) and task exigencies (academic writing), and to their cognitive trust schemas, than to the perceived instrumental benefits and usability of the system. Moreover, trust in the technology emerged as a principal determinant, surpassing even task-technology alignment, with normative perceptions of ChatGPT's acceptability in academic writing serving as a pivotal antecedent of trust formation. Overall, this study advances the discourse on AI-driven scholarly writing by offering empirical insights into the interplay between task-technology alignment, trust dynamics, and adoption behavior.

Appendix A. Survey Questionnaire

SECTION 1: Demographic Information

What is your age? (years)
____________________________________

What is your gender?

  • 〇 Male
  • 〇 Female
  • 〇 Prefer not to say

What is your highest educational attainment?

  • 〇 Bachelor
  • 〇 Masters
  • 〇 Doctorate

In which country do you live?
____________________________________

What is your academic status?

  • 〇 Graduate Student
  • 〇 Post-Doctoral Researcher
  • 〇 Faculty Researcher
  • 〇 Independent Researcher
  • 〇 Not applicable

How long have you been engaged in research? (years)
____________________________________

Where does your research funding usually come from?

  • 〇 Personal Fund
  • 〇 Government Grants
  • 〇 Private Foundations
  • 〇 Philanthropy
  • 〇 University or Institution Funding
  • 〇 Crowdfunding
  • 〇 Industry Sponsorship
  • 〇 International Organizations
  • 〇 I couldn't get funding for my research
  • 〇 I don't need money to do research

To which sector does your institution belong?

  • 〇 Public
  • 〇 Private

How many publications do you have?
____________________________________

Do you use writing assistant tools (e.g., Grammarly) in your work?

  • 〇 Yes
  • 〇 No

Do you have experience in using ChatGPT for any writing purposes?

  • 〇 Yes
  • 〇 No

SECTION 2: TAM, TTF, and TST

TECHNOLOGY ACCEPTANCE MODEL (TAM)

TAM - Behavioral Intention

BI1: Assuming I had access to a productivity tool, I intend to use it.

BI2: Given that I had access to a productivity tool, I predict that I would use it.

BI3: I plan to use ChatGPT for my research writing in the future.

TAM - Perceived Usefulness

PU1: Using ChatGPT would enable me to write papers more quickly.

PU2: Using ChatGPT would enhance my job performance as a researcher.

PU3: Using ChatGPT would make research writing easier.

PU4: Using ChatGPT would increase my research productivity.

PU5: Using ChatGPT would be useful for my job.

TAM - Perceived Ease of Use

PEOU1: Learning to operate ChatGPT would be easy for me.

PEOU2: I would find it easy to get ChatGPT to do what I want it to do.

PEOU3: My interaction with ChatGPT would be clear and understandable.

PEOU4: It would be easy for me to become skillful at using ChatGPT.

TASK-TECHNOLOGY FIT (TTF)

TTF - Technology Characteristics

TECH1: ChatGPT offers me the ability and support to write research papers.

TECH2: ChatGPT has features that would help me in research writing.

TECH3: ChatGPT provides human-like content suited for my research.

TECH4: ChatGPT is easily accessible at any time or place.

TTF - Task Characteristics

TASK1: Research writing is a task that requires the features of ChatGPT.

TASK2: Research writing is a task dependent on writing tools like ChatGPT.

TASK3: Research writing is a task that would benefit from using ChatGPT.

TTF - Task-Technology Fit

FIT1: ChatGPT would be a good writing tool for research.

FIT2: ChatGPT would be suitable for my research writing activities.

FIT3: ChatGPT would fit well in my research workflow.

FIT4: ChatGPT would allow me to write research papers efficiently.

TRUST IN SPECIFIC TECHNOLOGY (TST)

TST - Situational Normality

SN1: I am comfortable writing research papers using ChatGPT.

SN2: I am confident that the right things will happen when I use ChatGPT.

SN3: I am convinced that everything is fine even when I use ChatGPT.

SN4: I believe it is normal for researchers to use ChatGPT.

TST - Structural Assurance

SA1: I feel safe using ChatGPT because it can be used in a controlled environment.

SA2: I feel safe using ChatGPT because it was developed by a research organization.

SA3: I feel safe using ChatGPT because it does not collect any personal information.

SA4: I feel safe using ChatGPT because it has safety and security features.

SA5: I feel safe using ChatGPT because it has legal measures in place.

TST - Trusting Beliefs

TB1: ChatGPT is a very reliable artificial intelligence software.

TB2: ChatGPT provides sufficient responses to my requests.

TB3: ChatGPT is dependable when it comes to generating content.

TB4: ChatGPT can assist me in writing research papers.
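For readers who wish to work with the instrument computationally, the sketch below maps the Appendix A item codes to their parent constructs and computes composite mean scores. It assumes responses are stored one column per item on a numeric Likert scale (the response anchors are not reproduced in this appendix) and that the file name is a placeholder.

import pandas as pd

# Hypothetical wide-format responses: one column per Appendix A item.
df = pd.read_csv("responses.csv")

CONSTRUCTS = {
    "BI":   ["BI1", "BI2", "BI3"],
    "PU":   ["PU1", "PU2", "PU3", "PU4", "PU5"],
    "PEOU": ["PEOU1", "PEOU2", "PEOU3", "PEOU4"],
    "TECH": ["TECH1", "TECH2", "TECH3", "TECH4"],
    "TASK": ["TASK1", "TASK2", "TASK3"],
    "FIT":  ["FIT1", "FIT2", "FIT3", "FIT4"],
    "SN":   ["SN1", "SN2", "SN3", "SN4"],
    "SA":   ["SA1", "SA2", "SA3", "SA4", "SA5"],
    "TB":   ["TB1", "TB2", "TB3", "TB4"],
}

# Composite mean per construct; SEM works with the raw items, but
# composites are convenient for descriptive statistics.
scores = pd.DataFrame(
    {name: df[items].mean(axis=1) for name, items in CONSTRUCTS.items()}
)
print(scores.describe().round(2))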