AI – What do real people actually do and think, and is there some pushback from those aged under 25?
- Steve King

- Mar 26
- 7 min read

AI slop is everywhere. AI ‘article’ slop is equally prevalent. Even when people are going to point out drawbacks in a particular element of AI usage, they seem obliged to start their commentary with ‘We know that AI is going to change the world’ (order bias, anyone? If every argument starts by stressing the importance of AI, any less positive commentary in the same article is bound to get lost). Much of the commentary is led by people and organisations that have invested a lot of time, money and reputation in the success of particular AI initiatives and, as such, can hardly be seen as unbiased commentators.

Shoppercentric’s WindowOn data (December 2025) gives us a glimpse into what real people are thinking and how they are behaving, with no agenda (or crystal ball predictions). It suggests conscious ‘usage’ is growing but is not yet universally embraced, that evangelists have a very specific age and social class profile, and that those aged under 25 may be less enthused than the generation before them. It also shows that concerns over autonomous vehicles and image creation are likely to need addressing (as is accuracy for any decisions that can have significant consequences if the wrong ones are made).
There is no question that AI usage is increasing, with the majority saying they have used it at some point
In terms of overall usage, 51% of those responding to the WindowOn survey said they had used AI as part of their day-to-day life. This is an increase from 31% in the December 2023 version of the survey.
Personal use is the most common, with 39% of all respondents using AI for this purpose, compared with 20% using it at work and 15% for study/training. Perhaps not surprisingly, there is significant variation in usage by age, with the general rule being that younger people are more likely to be users both at home and at work (see Figure 1).

It is interesting to note that usage peaks among those aged 25-34 and that there is a sharp decrease in work usage among those aged 45 or older. Whilst there is no difference by gender, there is a marked difference by social grade, with 60% of ABC1 respondents having ever used AI compared with 42% of C2DE (only 7% of those classified as C2DE say they have used it for work).
Amongst those that do use AI, just over a quarter use it daily and a further third use it every few days.
Whilst similar proportions of men and women say they have ever used AI, there is a difference in frequency of use: 67% of male users use AI at least every few days, compared with only 47% of female users.
The difference by social class is exacerbated if we look at the frequency data. Among those classified as ABC1 who have used AI, 70% say they use it at least every few days. In contrast, only 48% of C2DE users report that frequency.
As with overall usage, frequency of use tends to tail off when we look at the older age groups.
There is polarisation across the board in terms of being excited or concerned about specific usage areas for AI (with large proportions neither excited nor concerned).
WindowOn looked at a number of different things that AI could be used to help with and asked respondents whether they were excited or concerned about it being used in this way. There were very similar levels of ‘top box excitement’ for all elements (c.10% very excited), whilst there was more variance in terms of being very concerned (ratings ranging from 11% to 29%).
As can be seen in Figure 2, concern tends to outweigh excitement for most factors.

The greatest disparity (the lowest net excitement score) is for driverless vehicles. Whilst Waymo may be stepping up trials in London with a view to introducing driverless passenger services in the autumn, the WindowOn data suggests they may need to address some customer concerns first. Provision of legal advice receives the second lowest net excitement score, which suggests that the perceived level of impact if things ‘go wrong’ could be a factor here.
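The article leans on ‘net excitement’ scores throughout. A minimal sketch of the likely calculation, assuming it is simply the percentage excited minus the percentage concerned (neutrals ignored); the figures below are illustrative, not the actual WindowOn data:

```python
def net_excitement(pct_excited: float, pct_concerned: float) -> float:
    """Net excitement: excited share minus concerned share, in percentage points.

    Assumes the common top-two-box net score convention; respondents who are
    neither excited nor concerned do not enter the calculation.
    """
    return pct_excited - pct_concerned


# Illustrative example: 25% excited, 40% concerned, 35% neutral
score = net_excitement(25.0, 40.0)
print(score)  # -15.0: concern outweighs excitement
```

On this convention, a factor like driverless vehicles would score deeply negative, while internet-adjacent uses (search, travel advice) would come out marginally positive.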
The only four areas where there was more excitement than concern were those that can be seen as building upon functionality already associated with the internet: providing answers to online searches, travel advice, product recommendations and, marginally, helping children/students. This is still a significant step away from embracing AI agents (agentic AI) that not only recommend but actually remove the human from the decision-making. Indeed, the fact that excitement for these lower-functionality assistance services is muted may be an indicator that the rise of ‘decision-making’ AI agents is still very much in the ‘monitor and plan for the worst’ phase, rather than requiring significant action from current retailers/brands at this point. In an article in January 2026, Grocery Trader refers to 3% of UK shoppers using AI for grocery shopping, so whilst the potential is there for greater disruption, it is not yet a huge threat to established shopping journeys.
Comparing these findings with the general ‘buzz’ around AI suggests that the impact on jobs may be overestimated in public discourse, whilst public concerns over driverless vehicles and AI generation of images and videos are underestimated
We thought it would be interesting to look at what an LLM would predict for the likely level of concern/excitement around these factors. This is not designed as a test of the LLM’s accuracy (we only used one LLM, gave no contextual information, etc.) but as a way of examining how our data compares with the general level of discussion around this topic. In effect, we are using the LLM as a proxy for the general internet chatter around sentiment towards AI services. Figure 3 shows the question we asked and the LLM’s response.

In a number of areas, its prediction is pretty accurate – the top three factors for respondents are within the top four predicted factors (and predicted to be net positive).
Where we are asking for professional advice (financial, legal, medical etc) the LLM correctly says views are likely to be net negative.
However, there are some areas where it varies from the actual responses significantly.
It overestimates concern about the impact on ‘my job’: the LLM suggested this would be the element with the highest level of concern, whereas in reality it came mid-table (sixth of the twelve factors examined).
Based on the general noise relating to the impact of AI on employment this is a perfectly understandable position for the LLM to take. However, rightly or wrongly, our respondents were more concerned about other elements. Indeed, nearly half (49%) said they were neither excited nor concerned about the impact AI would have on their job (which was the highest level of neutrality seen for any of the statements). Interestingly, those who have used AI in their job were more likely to be excited about its impact than concerned (with a Net Excitement score of +39).
The LLM underestimates concern over driverless vehicles (it suggested people would be relatively neutral, whereas this elicited the greatest level of concern) and over the use of AI to generate images and video (it predicted net positive sentiment, whereas the reality was greater negativity).
To a lesser extent, it also failed to predict that legal advice would be the professional service generating the most concern.
Whilst younger people are generally more excited by AI than older ones, there are some interesting nuances within this. In particular, there does appear to be some pushback from those under 25.
As illustrated in Figure 4, those aged 18-24 are almost as likely to say they have ever used AI as those aged 25-34. However, their frequency of use is lower (57% using every few days, compared with 72% of those aged 25-34). We also examined the core group of AI advocates who were excited about all 12 of the factors examined. Whilst a quarter of those aged 25-34 were very or quite excited about all of the factors, just under half that proportion of 18-24s were equally enthused.

Whilst more examination is needed in this area, there is a strong suggestion that those aged 25-34 are the core AI advocates, whilst those in the younger age category are noticeably less evangelical.
In Conclusion
Usage of AI is not universal (and this is self-reported usage; there are probably a number of usages of which individuals are not even aware), with some very specific groups of advocates.
Whilst it is not surprising that usage declines among older respondents, it is perhaps a little surprising that ‘older’ in this context means 45+. It is also worth monitoring the attitudes and behaviours of the younger cohort and how these develop over time (will this be a lasting rejection, or will it simply transform into general acceptance and usage?).
There should be a concern that distinct differences in usage by social class could exacerbate social inequality, in much the same way that access to internet services was a concern in the past. However, there also needs to be recognition that not all jobs are office-based or largely digital in nature, and the impact of AI outside these settings may be overestimated by commentators whose own experience lies largely within them.
Concern over driverless vehicles should be taken seriously: just because something can be done doesn’t mean it should be, or indeed that driverless services will be widely accepted even when available. Equally, concern over AI-led legal advice could impact that sector if firms embrace automated advice procedures too quickly or without taking customers with them.
There has been a lot in the news recently about the use of deepfake AI technology (whether that’s inappropriate images from Grok or Hollywood pushing back on copyright issues raised by Seedance 2.0). Despite viral trends in producing cartoon representations of yourself and your hobbies (or yourself as a doll, etc.), there is a broad level of concern amongst the public at large over the use of AI to generate images and videos.
Steve King
Head of Experience Research
Assumptions cost money. Understanding behaviour makes it.
