Editorial note: Rethink Priorities is an independent, non-partisan, non-profit 501(c)(3) policy think tank that does polling and policy analysis. Rethink Priorities is not funded by any candidate or political party committee and does not poll on behalf of any political candidate or party. This report is the second in a three-part series on the usage and utility of LLMs. Part 1 focuses on in-depth qualitative interviews with LLM power users and can be found here. Part 3 focuses on the usage of LLMs amongst programmers, and can be found here.
Executive summary
This report presents findings from a survey of 1,370 U.S. adults conducted in November 2024. Nationally representative estimates were generated using multilevel regression and poststratification (MRP) to account for Age, Sex, Racial identity, Household income, Educational attainment, State and Census region, and Political party affiliation. The report assesses LLM awareness and usage, highlighting prevalence, use cases and usefulness, perceived productivity impacts, and barriers to adoption.
Key Metrics at a Glance
1. Growing reach, especially among key demographic groups
- We estimate that 47% of U.S. adults have used any LLM, and 40% have used ChatGPT specifically.
- This represents a substantial increase in use when compared with estimates from earlier in 2024
- Multiple demographic characteristics are associated with having used an LLM: higher educational attainment, younger age, greater household income, and being male
- Other factors, such as engagement with social media or working in particular sectors (such as Business/Management/Finance), were also positively associated with LLM use
2. Modest usage intensity despite high penetration
- We estimate that 56% of US adults do not use LLMs in a typical week, and a further 15% use them for less than an hour
- Over half of LLM users reported 1-2 hours or less of usage in a typical week, for either work or personal purposes.
- Chat interfaces (62%) and LLMs integrated into other software (54%) remain the dominant modes of usage
3. Search, explanation, and writing lead use cases
- Most common use cases were Search (61%), Explanation (42%), and Writing (40%).
- The fourth highest use case was Playing/Experimenting (31%), suggesting users may still be coming to grips with the ways in which they can use LLMs, or using them for recreational purposes
- Use cases varied across demographic groups: men were more likely to report coding or data analysis, and the 25-34 age bracket was especially likely to report coding, amongst other demographic trends.
4. Usefulness is consistently high
- Across all reported use cases, the majority of those engaged in each respective use case found LLMs very or extremely useful.
- Language learning/translation was rated highest in usefulness, followed by problem solving, data analysis, and idea generation
5. Proficiency affects uses and usefulness
- We estimate that approximately one third (34%) of U.S. LLM users would rate themselves as basic users, half (50%) as typical users, and only slim minorities as advanced (13%) or expert users (3%)
- Women and older respondents rated themselves as having lower LLM proficiency than men and younger respondents
- Higher proficiency users were more likely to engage in every use case except simple ‘search’, which they were much less likely to use (0.45x as likely), and they were especially likely to use LLMs for coding (5.8x), data analysis (5.3x), and integration into other software (7.5x)
- Higher proficiency users also tended to rate all use cases as more useful than lower proficiency users
6. Perceived productivity gains were mostly modest or negligible
- In a subsample of respondents using LLMs for work/study, 46% thought LLMs had made them more productive, and 28% thought they were about the same
- This translated to a near-negligible average productivity benefit of approximately 1.03x, though respondents also thought they would be 0.93x as productive if they could no longer use LLMs
7. Barriers to broader use
- Privacy and security concerns (37%) and lack of knowledge (35%) were the most common barriers to LLM use
- Concerns varied across demographic subgroups, with older users particularly likely to cite limited knowledge (55%), and environmental concerns more common amongst 18-24 year olds (though still low, at 14%)
Conclusions
These findings offer a comprehensive picture of how LLMs are currently being used, perceived, and adopted among U.S. adults. While awareness and uptake are high, most users are not engaging intensively with LLMs, and use them mostly for activities such as search, explanation, and writing, with more advanced use cases relatively untapped.
This report establishes a population-level benchmark for LLM usage, serving as a useful reference point as these tools continue to evolve, and to assess the extent to which they become more deeply embedded in everyday life and work.
US population sample
This report is based on the responses of 1,370 US adults, surveyed online in November 2024. Multilevel regression and poststratification (MRP) was used to generate estimates representative of the US adult population, accounting for Age, Sex, Racial identity, Education, Household income, State and Census region, and Political party affiliation.[1]
Incidence of LLM use
Population level estimates
Our estimate of the percentage of people who have ever used any LLM[2] is 47% (Figure 1). This aligns closely with a ‘wisdom of the crowds’ estimate, in which respondents were asked to guess the percentage of US adults who have used an LLM. We estimate that 40% of US adults have tried ChatGPT specifically. These numbers do not represent the percentages of people who are expected to be using LLMs on a regular basis, which is the subject of the usage section below.
Figure 1: Estimated use of LLMs and other AI tools among US adults

Taken at face value, these estimates represent a substantial increase in comparison to those from other sources earlier in 2024 and in 2023. A Pew survey estimated 23% of US adults had ever used ChatGPT as of February 2024, up from 18% in July 2023. A UK-based study of internet users by Ofcom/YouGov, fielded in June 2024, generated estimates closer to those from the current research, with 33% of UK internet users aged 16 and above reportedly having used ChatGPT in the last year.[3]
Figure 2: Growth trajectory of ChatGPT use estimates over time
If the present and former estimates of US adults who have ever used ChatGPT are accurate, then the rate of growth in ChatGPT adoption must have accelerated from February to November 2024, relative to the July 2023 to February 2024 period (Figure 2).
A back-of-the-envelope calculation utilizing US ChatGPT web traffic suggests that the present estimate of 40% would correspond to monthly unique users of ChatGPT representing about 38% of people who have ever used the LLM. This level of retention does not seem unreasonable for a useful tool being used on a monthly basis, though this is speculative.
However, the apparent growth could reflect over-estimation due to biased sampling that cannot be corrected with the use of MRP and the data we have available. Although our MRP methodology generates representative estimates for other outcomes, such as presidential approval, it is possible that samples generated from the online panels we used are more ‘tech savvy’ and inclined to utilize – or at least have tried out – novel technology. The sample is also likely more ‘online’ than the Pew sample, as our respondents must have access to the internet to participate, whereas the Pew panels include those who would not otherwise have access to the internet by providing them with a tablet and wireless connection for completing surveys.
We corrected for some aspects of internet affinity and tech usage by balancing recruitment of the sample to approximate population-level estimates of usage of social media platforms such as YouTube, Reddit, Facebook, and TikTok.[4] Other factors that might affect LLM usage but which are not included in the present survey methodology (or typically those of other pollsters) might include the number of people who work remotely or in particular work sectors. We further examine some of these possible sources of influence in the section on factors associated with LLM use below.
Subgroup breakdowns of LLM use
The subgroup breakdowns of LLM use below focus on ChatGPT (Figure 3) and ‘Any LLM’ (Figure 5). Both follow similar demographic patterns, as ChatGPT makes up the bulk of LLMs used, and most people who have used other LLMs have also used ChatGPT.
There were sizable differences in having used ChatGPT or an LLM depending on both age and education. The youngest age bracket – 18-24 – was more than three times as likely to report having used ChatGPT, and more than twice as likely to report having used any LLM, as those aged 65+. There was also a gap of nearly 20 percentage points between the most and least formally educated for ChatGPT or LLMs generally. We observed a slight tapering off in reports of use at the very highest level of education, likely owing to those who have completed graduate school being more likely than college graduates to sit in the oldest age brackets: the tapering off is minimized when breaking down responses by both education and age, indicating that within age brackets the proportions of those who have used ChatGPT are similar for those who have completed college and those who have completed postgraduate school (Figure 4).
In addition, we observed that respondents with higher household incomes tended to be more likely to report having used ChatGPT/an LLM; a trend that remained when breaking down by both age and by education, though plots are not shown.
Male respondents were more likely than female respondents to report having used an LLM or ChatGPT, by 10 percentage points. There were also some fluctuations by racial identity, with White/Caucasian and Black/African American respondents being less likely to report having used ChatGPT than Asian/Asian American respondents, by approximately 10 percentage points.
ChatGPT
Figure 3: US adult population and subgroup estimates for having used ChatGPT

Figure 4: ChatGPT use broken down by Education and Age

Any LLM
Figure 5: US adult population and subgroup estimates for having used any LLM

Additional factors associated with LLM use
We additionally looked at associations between LLM usage and several factors not included in those variables used for demographic representation.
Social media
Use of each of the social media platforms we assessed was significantly and positively associated with reporting having used an LLM, and those who did not use any such platform were by far the least likely to report LLM use: LLM usage was reported by just 8% of non-social media users vs. 41% for those who had used any of the social media platforms (Figure 6).
Figure 6: Use of all social media platforms was significantly and positively associated
with reporting LLM use

Work sectors and employment
Employment status was strongly associated with reporting having used an LLM, with unweighted estimates rising from 26% among those not currently employed, to 43% among the part-time employed, and 59% among those in full-time employment (Chi-square test of association, p < .001).
Of those who were employed, one third reported some form of remote work. Remote work was also associated with having used LLMs, with reported usage at 66% among those working remotely vs. 50% among those who were not.
Respondents for the survey came from a range of work sectors, with the most commonly reported being Healthcare / Medicine (6% of sample), Customer service (5%), Retail (4%), Manufacturing / production (3%), and Education / Teaching (3%) (Figure 7).
Figure 7: Percentages of respondents who reported different work sectors

Among those who provided any work sector, and restricting to work sectors reported by at least 0.5% of the total sample, we conducted tests of association comparing those who did vs. did not report each particular sector (Figure 8). A handful of work sectors had statistically significant, positive associations with reported LLM usage, namely Business/Management/Finance, Creative Arts/Design, and Sales/Marketing.
It should be noted that these estimates of association are not necessarily representative of workers within these specific industries, and that associations could exist in work sectors other than those highlighted as significant here, owing to the sample being underpowered to detect associations across sectors. For example, those reporting work in Science / Research were substantially more likely to report having used an LLM (86%) than those who did not (54%), but there were only seven respondents reporting this sector. Conversely, those working in Agriculture were less likely (29%) than those who were not (55%), but again with only seven respondents in this sector.
Figure 8: Links between work sectors and LLM usage

Figure 9: Links between LLM usage and internet usage

Frequency of internet usage was also highly positively associated with reporting having used an LLM (Figure 9). Because our sample comes from people using the internet, it contains fewer people who rarely use the internet than estimated for the general population (e.g., 1.1% of respondents reported using the internet less than several times a week, and 7.6% daily or several times a week, versus Pew's general-population estimates of 6.1% and 9.2% respectively). The presence of only internet users in our sample may result in estimates of LLM usage being slightly inflated.
Amount of LLM usage
While relatively high percentages of people reported having ever used an LLM, slightly lower percentages reported using LLMs in a typical week (Figure 10). We estimate that 66% of US adults either do not use any LLM or do not use them for work/study purposes, with a further 10% reporting less than an hour of such usage per week. For personal uses, we estimate that 58% of US adults do not use LLMs, with an additional 17% reporting less than an hour in a typical week.
Combining work and personal uses together, and taking the greatest usage reported, an estimated 56% of US adults were not using LLMs for work or personal use in a typical week, with 15% reporting less than an hour of usage on either.
People reporting using LLMs less than an hour in a typical week could plausibly be referring to sporadic or only very occasional use (including less than weekly). Hence, a more conservative estimate of those using LLMs in a typical week could be closer to approximately 29% of US adults.[5]
Figure 10: Reported amount of LLM usage

Focusing on those who reported having used any LLM, 29% were not using them for work and 12% were not using them for personal use, with declining proportions of people reporting increasing amounts of usage time (Figure 11). Over 50% of LLM users could be captured under a usage bracket of 1-2 hours or less in a typical week for either work or personal usage. This suggests that while the prevalence of usage is remarkably high for a fledgling technology, the amount of use is quite low. However, we also recognize that estimating one's amount of LLM use may be difficult, as usage is often fleeting and can vary considerably depending on the nature of tasks performed and the extent of iteration required.
LLM interfaces
Among users of LLMs in the population, the majority reported interacting with them either via Chat interfaces or through Mobile applications (Figure 12). The top six methods of interacting with LLMs were all ready-made tools, from chat interfaces to browser extensions and built-in AIs on one’s device. There was a sizable drop-off in the percentages of people who reported interacting via API calls, code editors, or command line interfaces.
Figure 11: Amount LLMs are used in different contexts

Figure 12: Methods of interacting with LLMs

Proficiency with LLMs
Figure 13: Reported proficiency with LLMs among users

Among LLM users, the majority considered themselves either beginners or typical users (typical users were defined as ‘understanding basic uses of LLMs’, whereas advanced users ‘use LLMs for complex tasks, such as complex workflows’, and expert users ‘fine tune models or set up advanced automations’) (Figure 13). Respondents may slightly inflate their expertise levels depending on how they interpret these definitions. In the section on use cases below, for example, we highlight how some respondents referred to quite simple uses as automation.
Uses of LLMs
Use cases
By far the greatest usage of LLMs among those who reported usage was Search (61%), followed by Explanation (42%) and Writing (40%). More advanced uses such as Task Automation were much lower (11%) (Figure 14).
It should also be stressed that each use case was self-reported, and actual usage may have been less advanced than the label implies. For example, data analysis was reported by 17% of users, but this could be just asking an LLM to find some numbers related to a topic, or giving it a spreadsheet and asking it to compute an average. Qualitative responses that we collected with respect to automation suggest that truly advanced use cases are probably rare. Several users who reported ‘automation’ were simply asking the LLM or AI-based tool to complete one task which, though it might involve several steps, would not reach the bar of being an automated process. For example:
“I ask it to make me a travel guide for a two day trip, for example. Steps may be finding places to stay or visit, things to do, and where to eat.”
“I use siri, I tell her turn on my lights and she finds the bluetooth lights and the app and turns them on for me”
“Generating recipes and ideas for coworkers and family”
Hence, rather than considering these use cases as strictly involving ‘data analysis’ or ‘automation’, it is better to consider them as representing what typical people might plausibly describe with that label, which is often a low bar. However, a handful of respondents did describe some more properly automated processes:
“At work, our customer service line is now guided by an LLM before they ever reach a human. It lets us know what the call is about, and if they even need to speak with us at all. It probably takes 3 or 4 steps away from us at least.”
“Categorizing data from text based surveys and assigning it to a spreadsheet and database that is shareable. The [number of steps] automated are between 5 and 7.”
Figure 14: Uses of LLMs

Demographics and use cases
Some use cases diverged in popularity depending on demographics. For example, male respondents tended to be more likely to report using LLMs for coding and data analysis, whereas female respondents tended to be more likely to report using LLMs for writing (Figure 15). In addition, older respondents were more likely to report the relatively simple use of search, while 25-34 year-olds were more likely to report coding and content creation (Figure 16).
Figure 15: Use cases broken down by sex

At least some of these differences in age and sex may relate to differences in purported proficiency, which varies by age and sex (Figures 17 & 18) and – as can be seen in the next section – is associated with different use cases.
Figure 16: Use cases broken down by age

Figure 17: Older users were more likely to report having lower levels of LLM proficiency

Figure 18: Male users were more likely to profess advanced proficiency than female users

Proficiency and use cases
We assessed in the raw data the extent to which different proficiency levels might relate to engaging in different use cases, revealing substantial differences depending on proficiency (Figure 19). Respondents professing higher proficiency (Advanced or Expert users) were more likely than those claiming lower proficiency (Basic or Typical users) to report every use case, with the exception of ‘search’, which was substantially less likely among higher proficiency users. Notably, several of the most complex uses were especially likely to be reported by the higher proficiency users (Figure 20).
Figure 19: User proficiency and probability of use cases

Figure 20: Relative likelihood of use cases for higher vs. lower proficiency users

Usefulness of each use case
Across use cases, very few respondents indicated finding LLMs ‘not at all useful’ (Figure 21). It should be noted that this does not strictly mean that LLMs are useful for all these use cases on average, as usefulness ratings were only provided by those who engage in each use, and we would assume people who don’t find them useful for that use case do not continue to engage in it.
The use case receiving the highest overall usefulness rating was language learning/translation, whereas using LLMs for play/entertainment/simple chat purposes was rated as least useful. For every use case, over half of the people reporting that use gave ratings of very or extremely useful (Figures 21 & 22).
Figure 21: Reported usefulness of LLMs for those who report each use case

Figure 22: Average usefulness of different use cases

Usefulness tended to be higher among those with higher proficiency, and more complex tasks (coding, automation, and analysis) tended to rise to the top in usefulness amongst these higher proficiency users (Figure 23).
Figure 23: Those with higher proficiency levels found LLMs more useful across use cases

Productivity
Respondents who reported using LLMs for work or study were randomly assigned one of two questions. One asked them how productive they think they are now, relative to before having had access to LLMs. The other asked how productive they would be if they could no longer use LLMs. For both versions of the question, the most common response was that productivity would be about the same.
Responses suggest that although very few users feel LLMs have made them doubly as productive as they used to be (only 5 of 175 respondents), somewhat more may feel they are reliant on LLMs, and would be half as productive as now if they could no longer use them (35 of 173 selected this response) (Figure 24).
Figure 24: Reported productivity from LLMs among those using them for work/study

These productivity effects can also be expressed as approximate proportional increases/decreases by converting the ordinal statements into values (e.g., “About 25% more productive” = 1.25, “Less than half as productive as now” = 0.33).[6] Looking at estimates of productivity in this way (Figure 25), productivity increases produced by being able to use LLMs were, on average, 1.03x relative to before using LLMs, with the confidence interval crossing over no productivity gains. For how productivity would change if the respondent could no longer use LLMs, respondents on average expected a small drop in productivity (0.93x), with the CI only just excluding no change.
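The conversion from ordinal responses to an average multiplier can be sketched as follows. This is a minimal illustration only: apart from the two mapping examples quoted above, the response labels, their numeric values, and the sample responses are hypothetical, not the survey's actual response scale or data.

```python
import random
import statistics

# Hypothetical mapping from ordinal response labels to productivity
# multipliers. Only the 1.25 and 0.33 mappings come from the report;
# the rest are illustrative assumptions.
MULTIPLIERS = {
    "Less than half as productive as now": 0.33,
    "About half as productive as now": 0.50,
    "About 25% less productive": 0.75,
    "About as productive as now": 1.00,
    "About 25% more productive": 1.25,
    "About twice as productive": 2.00,
}

def mean_productivity(responses, n_boot=2000, seed=0):
    """Convert ordinal responses to multipliers and return the mean
    with a bootstrap 95% confidence interval."""
    values = [MULTIPLIERS[r] for r in responses]
    rng = random.Random(seed)
    boots = sorted(
        statistics.fmean(rng.choices(values, k=len(values)))
        for _ in range(n_boot)
    )
    return statistics.fmean(values), (boots[int(0.025 * n_boot)],
                                      boots[int(0.975 * n_boot)])

# Hypothetical responses (not the survey data):
sample = (["About as productive as now"] * 50
          + ["About 25% more productive"] * 30
          + ["About 25% less productive"] * 20)
mean, (lo, hi) = mean_productivity(sample)  # mean = 1.025
```

A CI that straddles 1.0, as in the report's Figure 25, indicates no reliable average gain.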
Figure 25: On average, productivity gains from LLMs were small

We did not observe any reliable differences in productivity depending on user proficiency: higher proficiency users tended to indicate greater productivity gains than lower proficiency users, but were marginally less likely to indicate that they would lose productivity if they could no longer use LLMs (Figure 26). These differences were not statistically significant. Given that proficiency was not actually tested, it is also possible that rating oneself as higher proficiency reflects a more general attitude of confidence or optimism, with these people tending to see themselves as more capable, receiving more benefits, and being better able to deal with losing access to LLMs.
Figure 26: LLM productivity gains were not reliably different depending on proficiency

Barriers to using LLMs
Estimating across the whole US population, the most commonly reported barriers for using LLMs were concerns over privacy and security, followed closely by having limited knowledge of them (Figure 27). The lowest barriers to use were work policies and environmental concerns.
Figure 27: Barriers to using LLMs

Demographics and barriers
We observed some slight demographic trends for some barriers to usage. Older respondents in particular were more likely to report a lack of knowledge as a barrier to using LLMs, and also tended to be more likely to have concerns over privacy and security and over transparency (Figure 28). Environmental concerns tended to be more common amongst younger than older respondents.
For education, there was a slight tendency for those of lower educational attainment to be less worried about ethical, privacy, and intellectual property concerns, with a tapering off at the highest level of educational attainment (Figure 29).
Figure 28: Barriers to LLM usage, broken down by age

Figure 29: Barriers to LLM usage, broken down by education

Concluding remarks
U.S. adults have rapidly become aware of LLMs, with nearly half reporting they’ve tried one, and many seeing them as useful tools for simple tasks such as search, writing, and explanation. Most reported usage remains relatively light, though a subset of more proficient users appear to utilize LLMs for a more expansive range of tasks, and find them more useful.
This report offers a population-level picture of LLM awareness, usage patterns, and perceived impacts—establishing a benchmark for future study. As LLMs continue to evolve, future research can assess whether they become more embedded in daily workflows and whether more advanced or transformative use cases take hold beyond a minority of users.
Appendix
MRP-based estimation of presidential approval

We frequently utilize a ‘canary variable’ when generating estimates intended to be representative of the general population. This involves using our methodology to generate representative estimates of another variable for which there are many other recent and independent estimates. A good example of this is Presidential Approval, which is measured by numerous pollsters and aggregated (formerly by FiveThirtyEight and now by The Silver Bulletin). Our estimates fell close to the averages and within the margins of error based on other polls, suggesting that our methodology can produce accurate estimates – assuming that the aggregate of other pollsters is a reliable signal of the population sentiment.
Model incorporating additional confounds
MRP is relatively more restricted in the information it can incorporate into estimates[7], but we can add some of the above items and make weighted estimates. We used a weighted model that included Internet Frequency and Reddit use crossed with Age, alongside more typical demographic variables of Household income, Racial identity, Political party identification, Education, Region, Urban vs. Rural vs. Suburban, and Sex. Based on this model, the mean estimate for the percentage of US adults who have ever used an LLM drops from 47% to 44%, and the estimate for the percentage of US adults who have ever used ChatGPT drops from 40% to 36%. However, this likely just represents a difference in how MRP and weighting generate estimates – when we ran the weighted model without these additional potential confounds, it generated similarly lower estimates. Typically, we consider MRP to be more accurate and more useful in allowing better subgroup estimates.

Estimation of ChatGPT users based upon third party information
It may be worth cross-referencing our estimates of LLM usage with those from other sources to further assess their plausibility. To do this, we focus primarily on ChatGPT, as this is the most recognized and popular single entity, as opposed to trying to focus on use of any kind of LLM. We also included Claude for reference. Firstly, we looked at some freely available metrics from the web analytics platform SimilarWeb. According to SimilarWeb, there were over 300 million unique visitors to ChatGPT in October 2024, and 10 million to Claude. We also have the proportion of web traffic that comes from the US. This proportion is the percentage of traffic overall as opposed to the percentage of unique visitors (which could matter if the US proportion of unique visitors differs from its proportion of overall traffic), and people may also use these LLMs directly via desktop or web apps; with those caveats, we can get a rough estimate of the number of unique monthly US visitors to each site around October 2024.[8] In turn, we can then see what percentage of the US adult population this would be (where the US adult population is 258.3 million). The estimate comes out to around 17.70% of the US population.
Table 1: Estimated US monthly users of ChatGPT
| Oct 2024 | ChatGPT | Claude | Relative |
|---|---|---|---|
| % of traffic from USA | 14.58% | 23.98% | |
| Unique visitors (M) | 313.60 | 10.18 | |
| Estimated unique US visitors (M) | 45.72 | 2.44 | 18.73 |
| % of US population | 17.70% | 0.95% | |
| Estimated unique US visitors (M) with 15% correction for non-adults | 38.86 | 2.07 | 18.73 |
| % of US adults | 15.05% | 0.80% | |
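The arithmetic behind Table 1 can be reproduced directly. A minimal sketch, using the SimilarWeb figures cited above and an assumed 15% correction for under-18 users:

```python
US_ADULT_POP_M = 258.3  # US adult population, millions

def monthly_us_adult_share(unique_visitors_m, us_traffic_share,
                           under18_correction=0.15):
    """Back-of-envelope share of US adults who are monthly unique
    visitors: global uniques, scaled by the US share of traffic,
    then reduced by an assumed share of under-18 users."""
    us_visitors_m = unique_visitors_m * us_traffic_share
    us_adult_visitors_m = us_visitors_m * (1 - under18_correction)
    return us_adult_visitors_m / US_ADULT_POP_M

chatgpt = monthly_us_adult_share(313.60, 0.1458)  # ~15.05% of US adults
claude = monthly_us_adult_share(10.18, 0.2398)    # ~0.80% of US adults
```

The ChatGPT/Claude ratio of roughly 18.7x matches the "Relative" column, since the under-18 correction cancels out of the ratio.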
We have also considered a correction to account for some percentage of users being aged under 18. Demographic information from SimilarWeb for ChatGPT traffic across age brackets is shown in Table 2. Based on the pattern of usage, we estimate that possibly as much as 15% of ChatGPT users might be aged under 18, although we think this probably overstates the case: Approximately 20% of the US population is under 18, but of these a sizable proportion are much younger, and ChatGPT requires users aged 13-18 to receive parental permission. While we therefore think that this 15% correction could be high, we err on the side of generating conservative estimates of adult ChatGPT usage. This percentage does somewhat naturally fit the trajectory of usage across ages, and it seems likely that many school students would have at least tried GPT to help with school work or to play with a new technology. Including this 15% correction, we end up with 15.05% of US adults as at least monthly users of ChatGPT.
Table 2: Age trends in ChatGPT web traffic
| | Under 18 | 18-24 | 25-34 | 35-44 | 45-54 | 55-64 | 65+ |
|---|---|---|---|---|---|---|---|
| SimilarWeb %ages | NA | 25.43% | 31.18% | 18.81% | 12.34% | 7.47% | 4.77% |
| Adding 15% under 18 | 15.00% | 21.62% | 26.50% | 15.99% | 10.49% | 6.35% | 4.05% |
We can compare this with our estimates for the percentage of US adults who have ever used ChatGPT. Our estimates suggested that 40% of US adults had ever tried ChatGPT, which is 2.66x greater than the October estimate for monthly unique US users generated in this section. Phrased differently, monthly unique users would be around 38% of the people who have ever used ChatGPT. This number does not seem entirely out of the question, although it is difficult to think of a reference class for this kind of novel tool (for example, we assume that the vast majority of people who have tried a useful tool such as Google search are monthly or more frequent users, but retention across different software tools likely varies greatly and also depends on the intent of the user in testing out the software in the first place).
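The comparison in the paragraph above is simple arithmetic on the two estimates:

```python
ever_used = 0.40        # MRP estimate: share of US adults who have ever used ChatGPT
monthly_users = 0.1505  # Table 1 estimate: monthly unique US adult users

ratio = ever_used / monthly_users      # ~2.66x as many ever-users as monthly users
retention = monthly_users / ever_used  # ~0.38: monthly users as share of ever-users
```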
In the numbers above we also included Claude as a comparison, because in our estimates, having ever used ChatGPT was about 13 times more common than having ever used Claude. At first glance this seemed slightly surprising, as most people in our sphere who have tried ChatGPT have also tried Claude. However, this is very likely due to the generally greater awareness of LLM models in our local environment, and in particular the fact that Anthropic is especially well known in the EA community. The relative traffic for ChatGPT and Claude supports the idea of a large difference in use between the two.
Social media usage in the sample
The raw sample showed moderate differences relative to a population estimate of social media usage.[9] One of the strongest correlates of LLM use (Reddit) was slightly under-represented, as was YouTube.

Perceived prevalence of LLM use
Along with directly estimating LLM use from self-report, we asked respondents to estimate the percentage of US adults who have used an LLM. This kind of ‘third person’ or ‘wisdom of the crowds’ estimation has been suggested as an alternative means of gaining insight into population-level phenomena: it leverages people’s knowledge of others in a way that might compensate for possible biases in sampling,[10] and has also been proposed as a validity check on other population-level estimates.[11]

Estimates based on this approach align very closely with our MRP estimate of LLM use, with a population mean and median of 48% (vs. 47% from our MRP model), and the most common responses falling in the 33%-54% range. However, this alignment might partly be an artifact of high uncertainty in the population about how many others have used an LLM, with the average estimate simply gravitating toward 50%.
Such third-person estimates may also be biased, as people likely have more awareness of people similar to them. Indeed, those who reported having used an LLM also gave higher estimates for the percentage of the general population that had used one.
Contributions and acknowledgments

Jamie Elsey, Willem Sleegers, and David Moss contributed to survey design and data collection. Jamie Elsey conducted analyses and generated visualizations, and wrote this report with editing and review from David Moss.
- MRP is a method for generating accurate estimates for a target population using data from a sample that may not be fully representative. It models the relationship between individual characteristics and the outcome of interest, then combines these estimates with known population demographics to produce predictions for the full population and its subgroups. The appendix includes results from an alternative model using weighting, which produced slightly lower percentage estimates for some outcomes. ↑
- ‘Any LLM’ included GPT/ChatGPT, Claude, Llama, Bard/Gemini, Grok, Microsoft Copilot, GitHub Copilot, and ‘other LLM’ if the open comment response indicated an LLM. ↑
- The survey was conducted using a YouGov online panel and so may simply reflect the typical weighting for this panel, which is treated as representative of the population. Report documentation states: “The survey was carried out online. The figures have been weighted and are representative of all UK adults (aged 16+)”, while the tables refer to “All Online UK Adults 16+”. Whether for all online UK adults 16+ specifically, or UK adults 16+ generally, the numbers are likely close to representative of UK adults 16+ overall, given that the Ofcom report also notes that 96% of UK adults have internet access at home. ↑
- Use of certain platforms, especially Reddit, appears strongly related to having tried out an LLM, and caused severe bias in a previous sample we collected. ↑
- 100% – (56% + 15%) = 29%, where 56% and 15% are the estimated percentages of non-use and less than one hour of use, respectively. ↑
- This necessarily involves adding some noise, as the percentage changes were in brackets or mentioned as approximate. We did not request direct percentages owing to previous experience indicating that such numeric tasks are easily misunderstood by respondents. ↑
- A fully valid MRP model requires either a complete cross-tabulation of how any new feature maps on to all the existing demographics, or extensive additional modelling to generate a predicted crosstabulation. ↑
- An alternative estimate from eMarketer (whose source and reasoning we are unsure of) projected that ChatGPT would have 67.7 million monthly US users by the end of 2024, substantially greater than the estimate presented in the table (https://perma.cc/232V-5P39). ↑
- Based on a report by Jeffrey Gottfried of Pew Research Center, ‘Americans’ Social Media Use’. ↑
- For an applied example, see Zheng, T., Salganik, M. J., & Gelman, A. (2006). How many people do you know in prison? Using overdispersion in count data to estimate social structure in networks. Journal of the American Statistical Association, 101(474), 409-423. ↑
- Prelec, D. (2004). A Bayesian truth serum for subjective data. Science, 306(5695), 462-466. ↑
