
Adoption and uses of LLMs among U.S. tech workers


Editorial note

Rethink Priorities is an independent, non-partisan, non-profit 501(c)(3) policy think tank that does polling and policy analysis. Rethink Priorities is not funded by any candidate or political party committee and does not poll on behalf of any political candidate or party.

This report is the third in a three-part series on the usage and utility of LLMs. Part 1 focuses on in-depth qualitative interviews with LLM power users and can be found here. Part 2 focuses on the usage of LLMs in the U.S. general population and can be found here.

We thank Open Philanthropy (Open Philanthropy Project LLC) for funding this research report. The views expressed herein are not necessarily endorsed by Open Philanthropy.

Executive summary

This report presents findings from a survey of 1,963 U.S. respondents with software development or programming backgrounds, surveyed in Q1 2025, focusing on how and to what extent they use large language models (LLMs) in their work. The goal was to provide an empirical grounding for current and future assessments of the adoption and impact of AI, and to surface LLMs’ most common use cases, perceived usefulness and productivity benefits, and barriers to adoption.

Key metrics at a glance

1. High adoption

  • 91% of respondents reported having used an LLM for work – almost double our estimate for the U.S. general population.
  • 82% specifically reported using ChatGPT/GPT, with significant usage of Microsoft Copilot (43%), Bard/Gemini (29%), GitHub Copilot (25%), and Claude (16%).

2. Frequency and amount of use

  • Usage frequency varied substantially: 29% used LLMs nearly every day or multiple times a day, while 23% used LLMs only occasionally or less than once a week, and 11% did not use LLMs for work at all.
  • The amount of time spent using LLMs was mostly modest, with half of respondents using LLMs for two hours or less per week, and a further ~11% not using them for work.

3. Typical chat interfaces dominate usage

  • Chat interfaces (83%) remain the primary means of accessing LLMs, followed by LLMs integrated into other software (41%) and code editors (33%).
  • More advanced interactions such as API calls (13%) and local LLMs (7%) were less common – consistent with 50% of respondents still considering themselves ‘typical’ users of LLMs.

4. Use cases and productivity benefits

  • The top three most common use cases were the same as for the general public: Search (72%), Explanation (56%), and Writing (56%), followed by Coding (51%) and Data analysis (45%).
  • Among those who engaged with each use case, a majority rated all use cases as at least very useful, with the highest rated use cases being Language learning/translation (4.21/5), Writing (4.13/5), and Explanation (4.05/5).
  • Within each use case, time savings were positively associated with usefulness ratings, but the highest rated use cases were not those reported as saving the most time per week. Factors besides simple time savings may affect how useful people find each use case, or usefulness ratings may reflect time savings proportional to the amount of time spent on the task.
  • 78% reported that LLMs had unlocked new tasks, with 69% indicating that the new tasks were at least somewhat valuable.
  • Almost three quarters (74%) of respondents reported that LLMs had made them more productive relative to before using LLMs, and 69% expected their productivity would decrease if they could no longer use LLMs.
    • Estimated productivity gains from LLMs were approximately 1.22x, with a drop in productivity (0.88x) expected if no longer able to use them. Those with greater LLM proficiency reported larger gains (1.34x) than those with lower proficiency (1.14x).

5. Barriers to use

  • The top reported barrier was Accuracy at 57%, which is more than double the percentage expressing this concern in the general population (23%).
  • Privacy and security concerns were also raised by nearly half of respondents (47%), and Ethical concerns by about one third (32%).
  • Work policies were reported as a barrier by 25% of respondents.

Conclusions

These findings highlight the rapid and widespread uptake of LLMs among technically proficient professionals, to a greater extent than among the general population. While high adoption rates suggest that these tools are now a standard part of the software development landscape, the relatively low intensity of use for many (with half using LLMs two hours or less per week), and modest expected productivity decreases if no longer able to use them, suggest the role of LLMs is not yet central for most tech workers.

LLMs are seen as most useful – and are most used – for general-purpose cognitive tasks such as search, explanation, and writing, rather than more technical or specialized workflows. This suggests that their perceived usefulness lies not only in time savings, but in lowering cognitive effort or providing flexible support for knowledge work. Though various other use cases were not rated quite so highly, LLMs were still rated as very useful by most users for all use cases we asked about, including for coding and task automation.

Users report quite substantial productivity gains of approximately 1.22x, with more proficient users reportedly experiencing greater gains. Still, concerns about accuracy, privacy, and security remain as barriers to wider or deeper adoption. Beyond providing a snapshot of current usage and utility of LLMs, these findings can serve as a benchmark against which the impact of future developments in and wider adoption of LLMs can be compared.

Introduction

In this report we present findings from a survey on LLM usage among U.S. workers with a background in software development/programming (‘tech workers’). The overall goal of the survey was to focus specifically on awareness and use of LLMs in the context of tech work, among those who may be particularly proficient with LLMs or well placed to develop new uses for them. The current findings can help empirically ground assessments of the impact and uptake of AI at work. In addition, by creating a baseline measurement and running similar surveys over time, we can track how LLM adoption and usage patterns evolve. This approach helps identify emerging trends and measure indicators of changes in capabilities, automation, and productivity shifts as these technologies develop.

Sample description

Our sampling strategy focused on reaching technically skilled workers in industries where LLM use was likely. Because this is a niche population, we used multiple platforms with different targeting criteria to find suitable respondents. This approach helped us include professionals with various employment types, technical backgrounds, and industry roles who might be using LLMs in their work. Topline results represent the responses of 1,963 respondents, surveyed between January and March of 2025.

Respondents were recruited from Prolific and from CloudResearch’s Connect, using specific recruitment filters in order to obtain a sample of respondents with relevant backgrounds (details are provided in the appendix section Participant recruitment details). On Prolific, we only recruited participants who reported having computer programming skills and whose function related to data analysis or engineering (a category that explicitly highlighted software engineering). On Connect, we recruited participants reporting specific technical skills (e.g., ML & GenAI engineering, computer science, machine learning), as well as respondents who freelance or do independent work as a software developer.

To better understand the work contexts of our respondents, we asked them to select their work sector within the survey. While our recruitment targeted technical skills and job functions rather than specific industries, this information helps characterize where these technically skilled workers were employed. Approximately half of respondents reported working in IT / Computer Science (53%), with much lower proportions of respondents working in other sectors (e.g., 12% in Business / Management / Finance; 8% in Science / Research).

We also asked a subset of the respondents about their area proficiencies. Half of the subsample reported being proficient in Computer Science (50%), which was the most commonly reported proficiency (see appendix section Area proficiencies of a subsample of the respondents). Regarding AI-specific proficiencies, 17% of the subsample reported being proficient in ML & GenAI engineering and 17% in AI Frameworks & Libraries.

In terms of basic demographics, 70% of respondents were male, 30% were female, and <1% reported another gender identity. The average age in the sample was 35.3 years (SD = 10.2), with a minimum of 18 and a maximum of 79. Most respondents were White or Caucasian (64%). Thirteen percent were Black or African American, 11% were Asian or Asian American, and 10% were Hispanic or Latino.

Almost half of the respondents graduated from college (48%), and one third completed graduate school (33%). A small minority had completed high school or less (3%).

Incidence of LLM use

The vast majority (91%) of the sample of U.S. workers with software development backgrounds reported having ever used an LLM as part of their work, which stems predominantly from having used ChatGPT/GPT (82%) (Figure 1). This high adoption rate among technical professionals contrasts with our previous research on the general public, in which we found overall LLM usage at 47% (with 40% specifically using ChatGPT).

Figure 1: Estimated use of LLMs and other AI tools

Our findings on LLM usage among software development professionals align closely with other surveys of similar populations. For example, the Stack Overflow Developer Survey[1] from 2024 found that ChatGPT was used by 82% of professional developers. The same report found Google’s Gemini usage at 22%, also similar to our results. While they found that 44% used GitHub Copilot compared to our 25%, this discrepancy may stem from us separately including Microsoft Copilot as an option, which was selected by 41% of our respondents. We found higher Claude usage compared to the Stack Overflow survey (16% vs. 8%), potentially due to our survey being conducted later (early 2025 vs. May-June 2024).

Surveys of more general workforce populations show notably lower adoption rates than our targeted sample. Pew Research reported[2] in October 2024 that only 16% of workers are AI users, while Gallup found[3] about three in 10 employees use AI. Crane, Green, & Soto (2025), who collected multiple surveys[4] on workplace AI adoption, reported rates between 20 and 40 percent. However, they noted substantially higher rates in certain occupations like computer programming, with one survey showing usage as high as 97%.

These differences reflect not only the varying populations sampled but also differences in how questions are framed. Our question about LLM usage was intentionally lenient, asking whether respondents had “ever used” an LLM as part of their work, without specifying a timeframe. This approach captures even occasional or experimental use. It’s important to note that our high percentage of LLM usage doesn’t necessarily indicate regular use. To better understand actual integration into work practices, we also asked about usage frequency and amount in a typical week, reported in the following section.

Demographic differences

We also assessed differences in LLM usage between various demographic groups, such as sex, age, racial identity, education, and income.

Men and women did not significantly differ in having ever used LLMs generally, but men were more likely than women to have used GitHub Copilot (28.5% vs. 17.1%) and Grok (12.2% vs. 6.2%).

Comparing three different age groups (18-24, 25-44, 45-64), only one significant difference was observed: 18-24 year olds were more likely to have used GitHub Copilot (31.2%) than 45-64 year olds (18.8%).

No differences were observed between White respondents and respondents with another racial identity.

Respondents with a higher level of formal education (graduated from college, completed graduate school) were more likely to use GitHub Copilot (26.6%) compared to those with a lower educational background (high school or less, some college but no degree; 18.7%), as were those with incomes over $80,000 (29.1% vs. 17.9%).

Frequency and amount of LLM usage

Similarly sized groups of respondents reported using LLMs ‘multiple times a day’, ‘daily or almost every day’, ‘several times per week’, or only occasionally (Figure 2). Usage amounts showed a clearer pattern, with most respondents using LLMs for less than an hour (24%) or 1-2 hours (26%) in a typical week (Figure 3). Specific percentages excluding those who do not use LLMs for work are available in the appendix section LLM usage frequency and amount excluding those who do not use LLMs for work.

Figure 2: Frequency of LLM use

Figure 3: LLM usage amount

LLM interfaces

By far the most common method of interaction was Chat interfaces (83%), followed by Software integration (41%) and code editors (33%). Mobile apps were also quite common (31%) (Figure 4).

A small minority of respondents reported using more complex methods of interacting with LLMs, such as using API calls (13%) and running a local LLM (7%).

Figure 4: Methods of interacting with LLMs

Among those who reported using API calls, we also asked about their token usage in a typical month.[5] Of those respondents, light (20%), moderate (40%), and heavy (32%) usage were most common, with only a handful of respondents reaching very heavy or enterprise-level usage (Figure 5).

Figure 5: Reported API token usage per month

Proficiency with LLMs

Half of the respondents considered themselves typical users (i.e., ‘understand basic use of LLMs’) and another sizable group (30%) considered themselves advanced users (i.e., ‘use LLMs for complex tasks, such as complex workflows’). Only five percent of respondents claimed to have expert proficiency with LLMs (i.e., fine-tune models or set up advanced automations).

Figure 6: Reported proficiency with LLMs among users

Respondents were also asked what techniques they used to improve the performance of LLMs for their specific needs (Figure 7). Almost half of the respondents reported that they use prompt engineering (49%), followed by using an LLM agent (25%), chaining (18%), fine-tuning (13%), and retrieval augmented generation (RAG; 9%). There may be inflation of some advanced use cases owing to people misinterpreting or being liberal in their interpretations of the terms (e.g., referring to a typical chatbot as an agent).

Figure 7: LLM techniques

Uses of LLMs

Use cases

Among those who reported using LLMs for work, the most frequent use case was Search (72%), followed by Explanation (56%) and Writing (56%). These top three use cases are the same as those observed in our report on the US general population. This was, in turn, followed by more job-specific use cases such as Coding (51%) and Data analysis (45%) (Figure 8).

We also specifically asked about more advanced use cases such as Task Automation.[6] Almost one quarter (24%) of workers reported using LLMs for Task Automation. However, open responses describing what tasks had been automated indicated that many respondents were taking a liberal definition of automation, such as writing an email or summarizing a document.[7]

Figure 8: Uses of LLMs

Demographics and use cases

There was only one significant difference between men and women regarding the use cases for LLMs, with men in our sample being more likely to use LLMs for Coding than were women (53.9% vs. 42.9%).

Regarding age, respondents between 45-64 years old were more likely to use LLMs for Writing (63.1%) compared to younger respondents aged 18-24 (45.8%), with respondents aged 25-44 similar to the older group (56.1%). Respondents aged 45-64 were also more likely to Play / Experiment with LLMs (42.6%) compared to those between 18-24 (25.9%), with respondents aged 25-44 in between the two age groups (33.1%).

We did not observe any significant differences between those identifying as White and those of other racial identifications.

Two significant differences were observed between those with low vs. high formal educational attainment. Those with a higher education were more likely to use LLMs for coding (52.9% vs. 42.2%), and less likely to use LLMs for Playing / Experimenting (32.0% vs. 41.9%). Those with higher incomes ($80,000 or more) were more likely to use LLMs for Coding than those with lower incomes (56.3% vs. 41.6%).

Usefulness of each use case

Respondents rated the usefulness of LLMs for each use case that they reported (Figure 9).

LLMs were considered useful for all use cases, with the lowest usefulness rating being 3.82 out of 5 for Problem Solving. This should not be taken to mean that LLMs are useful for anyone who might try them, as usefulness ratings were only given by those who engaged with each use case, who presumably do so because they mostly consider them useful.

The highest rated use cases, each receiving on average above 4 out of 5, were Explanation, Writing, and Language learning/translation.

Figure 9: Usefulness of LLMs for each use case

Time savings per week for each use case

It may be additionally informative to consider time saved per use case, rather than perceived usefulness. For each reported use case, respondents indicated the amount of time saved in a typical week through using an LLM, ranging from none to more than 10 hours per week (Figure 10).

Figure 10: Time savings through LLM use

Task automation reportedly saved the most time, with an average of 192 minutes saved per week (even with the looser understanding of task automation that respondents seemed to apply). This was followed by Data analysis at 184 minutes and Coding at 182 minutes.

Within each use case, those who reported saving more time tended to report the use case as more useful (Figure 11). However, at the level of use cases, average reported usefulness tended to be negatively correlated with time savings (Figure 12).

Figure 11: Correlations between usefulness and time saving ratings within each use case

Figure 12: Comparing usefulness and time savings ratings

Though counterintuitive, there are several possible explanations for this observation. For example, people may report higher usefulness ratings for tasks that they do not typically enjoy or find cognitively taxing, such that even small time savings are perceived as more useful for certain types of tasks. Usefulness ratings could also reflect appreciation for surprising improvements, or for tasks with few existing alternatives for productivity gains. In addition, people were reporting absolute time savings, not proportional time savings. It is possible that high reported time savings per week for tasks such as coding or data analysis represent only a relatively small proportion of the overall time spent on these tasks, making such savings seem less useful.
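The within-use-case association between time savings and usefulness can be illustrated with a rank correlation, which suits ordinal survey responses. This is a minimal sketch with invented response codes and data, not the analysis actually used for this report:

```python
def rankdata(xs):
    """Assign average ranks; ties share the mean of their rank positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over the run of tied values (input is sorted via `order`)
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical ordinal codes for one use case:
# time saved per week (0 = none ... 5 = more than 10 hours)
# and usefulness (1 = not at all useful ... 5 = extremely useful).
time_saved = [0, 1, 1, 2, 3, 4, 5, 2]
usefulness = [3, 3, 4, 4, 4, 5, 5, 3]
print(spearman(time_saved, usefulness))
```

A positive rho within a use case corresponds to the pattern in Figure 11, while the negative association across use cases operates on the use-case averages instead.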

Given our focus on software developers, we also split these results between those who reported working in IT / Computer science and those working in other sectors (see appendix section Time savings amongst those in vs. not in IT / Computer science). The ranking of the use cases in terms of time saved was identical in both groups, but those who worked in IT / Computer science tended to report more time saved for each use case.

New types of tasks

In addition to asking about the usefulness of LLMs in terms of time saving, we asked respondents whether LLMs had unlocked new kinds of tasks for them.

Overall, 78% reported that LLMs had unlocked new tasks, with 69% indicating that the new tasks were at least somewhat valuable.

Figure 13: Have LLMs unlocked new types of tasks for respondents?

Productivity

In addition to asking about specific use cases, we asked in two different ways about general productivity. One group of respondents was asked to indicate to what extent they have become more or less productive due to using LLMs (compared to before using LLMs), while a second group was asked to imagine no longer being able to use LLMs and how this would affect their productivity at work.

Figure 14: Productivity compared with before using LLMs

Almost three quarters (74%) of respondents reported that LLMs had made them more productive relative to before using LLMs, with the most common responses being a 5-10% or 25% increase in productivity (Figure 14). Twenty percent of respondents reported gains as large or larger than 50%. Only 10% of respondents felt they had become less productive, while 15% indicated they were similarly productive as before LLMs.

Among respondents who imagined no longer being able to use LLMs, 69% reported they would be less productive than now, with the most common response – selected by about one third (32%) of people – being a decrease of 5-10% (Figure 15). As with the alternative productivity question, 12% felt they would be more productive without LLMs, while 19% thought they would be similarly productive.

We also compared these outcomes amongst those working in the IT/Computer Science work sector vs. those who were not (for full plots, see appendix section Productivity among those in vs. not in IT/Computer science). Responses were very similar when framed as how productive respondents would be if they could no longer use LLMs, but those in the IT/Computer Science sector showed a small tendency to be more likely to rate their productivity as having increased since using LLMs (an effect of .17 [.03; .30] in standard deviation units in an ordinal model).

Productivity estimates can also be expressed as approximate proportional increases/decreases by converting the ordinal statements into values (e.g., “About 25% more productive” = 1.25, “Less than half as productive as now” = 0.33).[8] Viewed this way (Figure 16), reported productivity increases appear quite substantial, with an average 1.22x increase in productivity as a result of being able to use LLMs. Respondents also tended to report that they would be marginally less productive (0.88x) if they could no longer use LLMs. Specifically for increases as a result of access to LLMs, we found that users claiming greater proficiency (‘Advanced’ or ‘Expert’) reported significantly greater productivity gains than those with lower proficiency levels. It should be stressed that these productivity changes are perceived, and may not be accurate. For example, a recent, not yet peer-reviewed study reported that several highly experienced developers perceived that access to LLMs had sped up their work, while time measurements pointed to a slight slowdown.[9]
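The conversion from ordinal statements to multipliers can be sketched as follows. The mapping below is hypothetical (the survey’s actual response brackets and midpoint choices may differ; only the two example values above are taken from the report), but it illustrates how ordinal responses become approximate productivity ratios:

```python
from statistics import mean

# Hypothetical mapping from ordinal response options to multipliers.
# Bracketed options use a midpoint; only 1.25 and 0.33 are from the report.
MULTIPLIER = {
    "About 5-10% more productive": 1.075,
    "About 25% more productive": 1.25,
    "About 50% more productive": 1.50,
    "About twice as productive": 2.0,
    "Similarly productive": 1.0,
    "About 5-10% less productive": 0.925,
    "Less than half as productive as now": 0.33,
}

def average_productivity(responses):
    """Convert each ordinal response to a multiplier and average them."""
    return mean(MULTIPLIER[r] for r in responses)

sample = [
    "About 25% more productive",
    "Similarly productive",
    "About 5-10% more productive",
]
print(round(average_productivity(sample), 3))
```

As footnote 8 notes, any such conversion adds noise, since the response options are brackets or approximations rather than exact percentages.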

Figure 15: Productivity if no longer able to use LLMs

Figure 16: More proficient users reported greater productivity gains

Barriers to using LLMs

We asked all respondents, regardless of whether they used LLMs for work, what they perceived to be the main barriers to using LLMs for work (Figure 17).

Accuracy concerns were the main barrier (57%), followed by privacy and security concerns (47%) and ethical concerns (32%). Limited knowledge (18%), environmental concerns (17%), and technical limitations (14%) were the least reported concerns, although they were still selected by sizable proportions of respondents. Eleven percent of the sample reported not facing any barriers to using LLMs.

The percentage of respondents highlighting concerns over accuracy was more than double the estimated percentage expressing this concern in the general population (23%).

Figure 17: Barriers to using LLMs

Limitations

Some key limitations should be kept in mind when drawing inferences from the present results. Of primary importance is that the recruited tech workers are a convenience sample, and so were not drawn randomly from the total population of U.S. tech workers. Furthermore, as we do not have information regarding the makeup of the total U.S. tech population, it is not possible to weight the sample or use other approaches to make it reliably representative of this target population. Hence, it is possible that biases in sampling could produce some level of mis-estimation of our quantities of interest.

As with all surveys, responses also rely on self-reported outcomes. Although we have techniques to capture and exclude respondents engaging in dissimulation, it is also possible that respondents lack insight into certain effects of LLMs, even when they are being completely honest. As a result, we cannot claim that LLMs are very useful for all the use cases identified, only that respondents consistently perceive them to be. Likewise, respondents report having become more productive on average, but this perception could be inaccurate.

Concluding remarks

Software developers and those whose work involves programming have been quick to take up LLMs as a potential tool to increase their productivity. The present findings offer a baseline for understanding current levels of LLM adoption, usefulness, and barriers to adoption, and already paint a picture of changing workflows and perceived productivity gains. As LLMs continue to evolve, future research can track their possible deepening integration into professional workflows, and empirically assess their promised transformative potential.

Appendix

Participant recruitment details

Participants were recruited from two different platforms, using the following screener criteria:

  • Prolific (N = 846)
    • Current country of residence: United States
    • Age: 18-64
    • Employment status: Full-Time
    • Computer programming skills: Yes
    • Work function: Data Analysis or Engineering (e.g., software)
  • CloudResearch’s Connect (N = 1117)
    • Employment status: Full-time, part-time, business owner
    • Freelance/Independent Work: Software developer
    • Technical Skills: ML & GenAI engineering, statistical modeling, computer science, machine learning, software engineering, AI frameworks & libraries, cybersecurity, quantitative research

Work sectors of respondents

Area proficiencies of a subsample of the respondents

A question regarding area proficiencies was added to the questionnaire, approximately one quarter of the way through data collection, to obtain additional information about respondents from Connect.

LLM usage frequency and amount excluding those who do not use LLMs for work

Time savings amongst those in vs. not in IT / Computer science

Productivity among those in vs. not in IT/Computer science

Contributions and acknowledgments

Willem Sleegers, Jamie Elsey, and David Moss contributed to survey design and data collection. Analyses were conducted by Willem Sleegers and Jamie Elsey. Willem Sleegers and Jamie Elsey wrote this report with editing and review from David Moss.

  1. https://survey.stackoverflow.co/2024/
  2. https://www.pewresearch.org/social-trends/2025/02/25/u-s-workers-are-more-worried-than-hopeful-about-future-ai-use-in-the-workplace/
  3. https://www.gallup.com/workplace/651203/workplace-answering-big-questions.aspx
  4. https://www.federalreserve.gov/econres/notes/feds-notes/measuring-ai-uptake-in-the-workplace-20240205.html
  5. Specifically, respondents were asked: “You noted using API calls in your work with LLMs. In a typical month, approximately how many tokens in total do you use through your LLM API access? For reference, here are approximate token counts for different tasks:

    – A typical tweet-length message (280 characters) is around 60-70 tokens

    – A typical email (400 words) is about 800-1000 tokens

    – A long technical document (2000 words) is about 3000-5000 tokens

    – A detailed conversation with 10 back-and-forth exchanges is typically 4000-5000 tokens, assuming moderate length messages

    – Code review for a medium-sized pull request (500 lines) is about 5000-8000 tokens depending on complexity”

  6. Task automation was further described using examples, such as creating multi-step workflows or delegating several steps in a workflow to be performed automatically by an LLM.
  7. From the respondents’ perspective, these might count as ‘automating’ multiple steps in their workflow, i.e. reading an email, checking it for errors, and suggesting improvements, but these would not count as ‘automations’ in our stricter sense.
  8. This necessarily involves adding some noise, as the percentage changes were in brackets or mentioned as approximate. We did not request direct percentages owing to previous experience indicating that such numeric tasks are easily misunderstood by respondents.
  9. Dhanore, Y. M. (2025). The Impact of Generative AI Tools in Open-Source Software Development.