2. Review of the Related Literature
Perhaps the best-known scale for characterizing the maturity level of a technology is the technology readiness level (TRL) scale developed by the National Aeronautics and Space Administration (NASA) in the United States. Some university technology transfer offices have begun applying this scale as part of their processes for vetting technologies and making portfolio decisions.
NASA employee Stan Sadin developed the original TRL scale in 1974 as a tool to help program managers estimate the maturity level of technologies being considered for inclusion in space programs (Banke, 2010; Mankins, 2009). In 1995, NASA employee John C. Mankins extended the scale from its original seven ordinal levels to nine and expanded the descriptions for each level (Mankins, 1995; Mankins, 2009). In 2001, after the General Accounting Office (GAO) recommended that the U.S. Department of Defense (DoD) make use of the scale in its programs, the Deputy Under Secretary of Defense for Science and Technology issued a memorandum endorsing the use of the TRL scale in new DoD programs. Guidance for using the scale to assess the maturity of technologies was incorporated into the Defense Acquisition Guidebook and detailed in the 2003 DoD Technology Readiness Assessment Deskbook (Nolte, 2008; Nolte & Kruse, 2011; Deputy Under Secretary of Defense for Science and Technology, 2003). In the mid-2000s, the European Space Agency (ESA) adopted a TRL scale that closely followed the NASA TRL scale (European Association of Research and Technology Organisations, 2014).
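For readers unfamiliar with the scale, the nine ordinal levels of the current NASA TRL scale can be summarized in a simple lookup structure. The one-line descriptions below paraphrase the well-known level definitions traceable to Mankins (1995); the representation itself (names, function) is purely an illustrative sketch and not part of any official NASA artifact.

```python
# Illustrative sketch: the nine NASA TRL levels as an ordinal lookup table.
# Descriptions paraphrase the widely published definitions (Mankins, 1995);
# the structure and names are this sketch's own choices.
NASA_TRL = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental critical function proof of concept",
    4: "Component and/or breadboard validation in a laboratory environment",
    5: "Component and/or breadboard validation in a relevant environment",
    6: "System/subsystem model or prototype demonstration in a relevant environment",
    7: "System prototype demonstration in an operational environment",
    8: "Actual system completed and qualified through test and demonstration",
    9: "Actual system proven through successful mission operations",
}

def describe_trl(level: int) -> str:
    """Return the short description for an integer TRL level (1-9)."""
    if level not in NASA_TRL:
        raise ValueError("TRL levels are the ordinal integers 1 through 9")
    return NASA_TRL[level]
```

Note that the levels are ordinal, not interval: the "distance" between TRL 3 and TRL 4 is not defined, which bears directly on the measurement-validity questions discussed later in this section.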
It is not surprising that technology transfer professionals have encountered challenges using the NASA TRL scale. The NASA TRL scale is essentially a typology framework, and putting typologies into practice can be challenging (see Collier, Laporte, & Seawright, 2012). Also, as an agency of the federal government, NASA developed the TRL scale in the context of public sector applications, and the public sector is not motivated by economic profit in the same way as the private sector. Moreover, technology development projects with private sector applications are likely to span a much broader range of technologies than those in any given federal agency. Finally, there is no indication that anyone has ever established the validity and reliability of the NASA TRL scale. Consequently, developing and validating a generalized TRL scale would fill an important knowledge gap and could prove very useful in facilitating and advancing technology transfer research, policy, and practice.
The literature relevant to the topic of assessing technology maturity level is very limited. The construct of technology maturity level itself is difficult to define. Most of the reviewed literature about technology maturity level fails to explicitly define the construct (see, e.g., Albert, 2016; Bhattacharya, Kumar, & Nishad, 2022; Chukhray, Shakhovska, Mrykhina, Bublyk, & Lisovska, 2019; Mankins, 1995; Mankins, 2009; Munteanu, 2012; Juan, Wei, & Xiamei, 2010). In a broad sense, one can think of technology maturity level as the phase of technological progress expressed as a function of some independent variable such as time, dollars spent refining the technology, number of workers utilizing the technology, or number of technical publications about the technology (O’Brien, 1962). Nolte (2008) resorted to analogies and scenarios to try to explain technology maturity level but never provided an exact definition. Nolte’s concept of technology maturity level is inextricably tied to an instrumental definition of technology. Based on the discussion that Nolte offered, it seems reasonable to broadly define technology maturity level as the degree to which a technology is ready to be used as intended to achieve an acceptable end.
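The idea of maturity as "a function of some independent variable" can be made concrete with a conventional S-shaped (logistic) progress curve, in which maturity rises slowly at first, accelerates, and then saturates. This sketch and all of its parameter names are assumptions introduced here for illustration only; no such functional form is prescribed by O’Brien (1962) or the other sources reviewed.

```python
import math

def maturity(effort: float, ceiling: float = 1.0, rate: float = 1.0,
             midpoint: float = 0.0) -> float:
    """Illustrative logistic S-curve: maturity as a function of cumulative
    effort (time, dollars spent, worker-hours, publications, etc.).

    All parameters are assumptions of this sketch, not definitions drawn
    from the literature: `ceiling` is the saturation level, `rate` controls
    how quickly maturity rises, and `midpoint` is the inflection point.
    """
    return ceiling / (1.0 + math.exp(-rate * (effort - midpoint)))
```

The key property for the present discussion is not the specific curve but that maturity is monotone in the chosen independent variable and bounded, which is consistent with the ordinal, phase-like way the reviewed literature treats the construct.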
It is worth noting that Nolte’s conception of technology maturity level has the characteristics of value neutrality, context dependency, and multi-dimensionality (Nolte, 2008). A given technology maturity level is neither “good” nor “bad” in and of itself. One’s assessment of whether a given technology maturity level is acceptable depends entirely on the context in which one is using the technology (Nolte). A technology deemed to be at a given maturity level in one setting may be deemed to be at a different maturity level in another setting. Moreover, a comprehensive assessment of technology maturity level requires an examination from multiple perspectives (Nolte).
The literature suggests that the content domain of technology maturity extends beyond just technical development. The trajectory of a technology is not dictated by technical considerations alone (Stokes, 1997). A comprehensive assessment of technology maturity also needs to capture economic-related performance (Mankins, 2009). Market considerations also greatly influence the development and adoption of technology (Stokes). Adherents of the cultural school of technology studies would likely argue that culture, too, greatly influences the development and adoption of technology.
Technology maturity level seems closely associated with risk as the term is used in everyday language, which is more aptly called uncertainty. According to Blank and Dorf (2012), there are two primary types of uncertainty that commercial endeavors must manage. The first type is invention uncertainty (i.e., technical uncertainty), which is the possibility that the technology cannot be made to work as desired (Blank & Dorf). The second type is market uncertainty, which is the possibility that end users will not adopt the technology even if it can be made to work as desired (Blank & Dorf). Speser (2012) also discussed these differences in kinds of uncertainty and included firm-specific uncertainty as a third type. Speser argued that one could not control market uncertainty, but the lean startup methodology, which has gained widespread acceptance among entrepreneurship practitioners and support organizations, calls this into question. Nolte (2008) argued that there are at least four dimensions of technology maturity level comprising technical, programmatic, developer, and customer viewpoints.
Given that most instances of technology transfer from federal laboratories and universities are intended to occur in the context of private sector commercial endeavors, it is logical to conclude that successful technology transfer also requires managing both technical and market uncertainty. Approaches that address market uncertainty without consideration of technical uncertainty will fail because they unduly raise hopes and make empty promises. They simply will not deliver. Those that address technical uncertainty without consideration of market uncertainty will fail in the market for lack of demand. No one will care. In both cases, the result is an unsuccessful attempt at technology transfer.
The NASA TRL scale is one of several approaches to operationalizing technology maturity level found in the literature. Use of the NASA TRL scale as a measure of the maturity of a technology seems to be gaining traction in the field of technology transfer. Technology transfer professionals in the public sector use the NASA TRL scale (or an adaptation of it) because its use in NASA and DoD programs is Congressionally mandated. Many university technology transfer professionals also use the NASA TRL scale in some capacity to assess the maturity of technologies. This may be because the NASA TRL scale is the best known, or it could be driven by the fact that the federal government funds much of the research conducted at U.S. universities, which are likely taking their cues from the federal government.
The NASA TRL scale is not without its shortcomings. Olechowski, Eppinger, Tomascheck, and Joglekar (2020) investigated the challenges associated with using the NASA TRL scale in practice. Using an exploratory sequential mixed methods design consisting of qualitative semi-structured interviews and an online survey that included a best-worst scaling (BWS) experiment, they identified 15 challenges that practitioners face when using the NASA TRL scale. The participants in the study were predominantly private-sector practitioners from the aerospace, defense and government, and technology industries who had roles related to hardware development and advanced systems engineering (Olechowski et al.). The study found that challenges encountered by practitioners were related to either system complexity, planning and review, or assessment validity. System complexity challenges pertained to incorporating new technologies into highly complex systems (Olechowski et al.). This is likely the situation for most technology transfer endeavors in both the public and private sector. Challenges related to planning and review concerned the integration of NASA TRL assessment outputs with existing organizational processes, particularly those related to planning, review, and decision making (Olechowski et al.). This seems to touch on the concept of absorptive capacity. Assessment validity challenges had to do with the reliability and repeatability of assessments using the NASA TRL scale (Olechowski et al.). This suggests that the NASA TRL scale, as constituted, may be susceptible to idiosyncratic variation, which poses a potential impediment to advancing technology transfer theory and practice.
It is not surprising that private sector practitioners of technology transfer, even in the context of engagements with the federal government, would encounter challenges using the NASA TRL scale. As an agency of the federal government, NASA developed the TRL scale in the context of public sector applications. The public sector is not motivated by economic profit in the same way as the private sector. The NASA TRL scale focuses on technical uncertainty (i.e., invention uncertainty). As such, it likely does not capture important economic and social factors relevant to technology development that are significant considerations for private sector decisions about opportunities to obtain and assimilate technologies from federal laboratories and universities.
Several scholars and practitioners have proposed alternative approaches to address shortcomings of the NASA TRL scale, as well as alternate scales that express the notion of technology maturity level in various contexts or capture other aspects of its content domain (Table 1). Most of these scales also seem to focus on technical uncertainty. In fact, so many alternatives and variants of readiness level scales have been offered, introduced, or adapted for various situations that “readiness level proliferation” has become a problem in the public sector (Nolte & Kruse, 2011). However, few, if any, of these scales appear to have been validated in any scientifically meaningful way.
All of this suggests that there is a significant knowledge gap regarding the measurement and application of technology maturity in technology transfer policy and practice. To address this gap, I undertook an effort to develop a generalized technology readiness level (GTRL) scale that a variety of actors could use in a broad array of applications. This effort included conducting a pilot study of the validity and reliability of the GTRL scale.
Pilot studies are an important and valuable step in the process of empirical analysis, but their purpose is often misunderstood. In social science research, pilot studies can take the form of a trial run of a research protocol in preparation for a larger study or the pre-testing of a research instrument (Van Teijlingen & Hundley, 2010). The goal of a pilot study is to ascertain the feasibility of a methodology being considered or proposed for a larger scale study; it is not intended to test the hypotheses that will be evaluated in the larger scale study (Leon, Davis, & Kraemer, 2011; Van Teijlingen & Hundley). Unfortunately, pilot studies often go unreported because of a publication bias that leads publishers to favor primary research over manuscripts on research methods, theory, and secondary analysis. Sharing lessons learned with respect to research methods is nonetheless important to avoid duplication of effort by researchers (Van Teijlingen & Hundley).
The goal of the pilot study of the validity and reliability of the GTRL scale was to produce data and information to help answer several specific questions and take steps to fill the knowledge gap regarding approaches to properly validate instruments for assessing the maturity of technologies. These questions included: (1) What challenges are likely to be encountered in a larger study to assess the validity and reliability of the GTRL scale and other such instruments? (2) Can the methods for assessing content validity be applied to the GTRL scale? (3) Should participants in a larger validation study of the reliability of the GTRL scale be limited to university and federal laboratory technology transfer professionals? (4) How difficult will it be to recruit study participants? (5) How should a larger study familiarize participants with the GTRL scale? (6) What factors should be controlled for in a larger validity and reliability study? (7) How viable are asynchronous web-based methods for administering a validity and reliability study? (8) How viable is the approach of presenting marketing summaries of technologies to participants for them to rate? (9) How burdensome will it be for study participants to rate 10 technologies in multiple rounds? (10) How usable will the collected data be? and (11) What modifications, if any, to the GTRL scale should be considered before performing a larger validity and reliability study?
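Question (10) above concerns how usable the collected ratings data will be. One way data of this shape (multiple participants each assigning an ordinal GTRL level to the same set of technologies) could be analyzed for inter-rater reliability is Fleiss' kappa. The pilot study description does not specify an agreement statistic, so the following is offered only as an illustrative sketch of how such an analysis might look.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for inter-rater agreement across multiple raters.

    `ratings` is a list of subjects (e.g., technologies), each a list of the
    category labels (e.g., GTRL levels) assigned by the raters. Every subject
    must be rated by the same number of raters. Illustrative only: the pilot
    study described in the text does not prescribe this statistic.
    """
    n_subjects = len(ratings)
    n_raters = len(ratings[0])
    categories = sorted({label for subject in ratings for label in subject})

    # Per-subject counts of each category.
    counts = [[subject.count(c) for c in categories] for subject in ratings]

    # Observed agreement: per-subject pairwise agreement, averaged.
    p_bar = sum(
        (sum(n * n for n in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects

    # Chance agreement from the marginal category proportions.
    total = n_subjects * n_raters
    p_e = sum((sum(row[j] for row in counts) / total) ** 2
              for j in range(len(categories)))

    return (p_bar - p_e) / (1.0 - p_e)
```

For example, three raters in perfect agreement on every technology yield a kappa of 1.0, while partial agreement yields a value between chance (0) and 1. A caveat worth noting for a larger study: Fleiss' kappa treats GTRL levels as nominal categories, so a weighted statistic sensitive to ordinal distance (e.g., a weighted kappa or Krippendorff's alpha) might be preferable.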