The image above is AI-generated and does not depict a Morrisons store or any real individual. This post is not about Morrisons specifically, but about the wider use and evolution of facial recognition and in-store surveillance technology across retail.
You can’t help but notice, as you walk into Morrisons in Reading, the facial recognition screen and camera system spotting, highlighting and logging you on the way in. My first thought was - so what?
Supermarkets have had CCTV for decades. Theft is a real issue, staff abuse is rising and no one seriously expects modern retail to operate on blind trust and crossed fingers. Cameras, in themselves, aren’t controversial; they’re part of the furniture. But then the questions start - not alarm bells, just questions. What, exactly, is happening to that data?
Is the system doing something relatively simple: scanning faces in real time, checking them against a known list of previous offenders and then discarding everyone else immediately? If so, the interaction is fleeting, functional and largely invisible - a digital equivalent of a security guard clocking your face and moving on. Or is something more layered going on?
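For what it’s worth, that first model is simple enough to sketch. The snippet below is purely illustrative - it isn’t based on any knowledge of Morrisons’ actual system, and the watchlist format, function names and matching threshold are all invented - but it shows what “check against a list, then discard” means in practice.

```python
# Purely illustrative sketch of a "match-and-discard" check. Not based on any
# real retailer's system; the watchlist format and the 0.6 threshold are
# assumptions invented for this example.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (closer to 1.0 = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_visitor(embedding: np.ndarray,
                  watchlist: dict[str, np.ndarray],
                  threshold: float = 0.6) -> str | None:
    """Return a watchlist entry ID if the face matches, otherwise None.

    In the transient model, this comparison is the only thing the embedding
    is used for: a non-match returns None and nothing about the visitor is kept.
    """
    for entry_id, known_embedding in watchlist.items():
        if cosine_similarity(embedding, known_embedding) >= threshold:
            return entry_id   # match: alert staff, log the event
    return None               # no match: discard the embedding, keep no record
```

Whether a deployed system actually works that way - and whether anything is retained on the non-match path - is exactly the question.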
Is my presence being logged - arrival time, loiter time, frequency of visits - and quietly stitched together with other data Morrisons already holds: my Morrisons 'More' card, my transaction history, my product preferences? Not to catch me doing something wrong but to understand me doing something normal.
At that point, the technology stops being purely defensive and starts becoming analytical. That isn’t inherently sinister. I realise that retail has always analysed behaviour: footfall counters, heat maps, basket analysis, promotions tied to past purchases and so on. Facial recognition simply lowers the friction. The difference is that you no longer opt in with a card or a barcode scan; your face becomes the identifier.
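To show how low that friction is, here’s a deliberately hypothetical sketch. None of the table or column names reflect anything real; the point is only that once a stable face-derived identifier exists, joining it to visit logs and loyalty data is ordinary database work.

```python
# Thought-experiment only: invented tables showing how a face-derived ID could,
# in principle, link a camera feed to loyalty data. Not a claim about what any
# retailer actually does.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE visits  (face_id TEXT, entered_at TEXT, dwell_minutes REAL);
    CREATE TABLE loyalty (face_id TEXT, card_number TEXT, top_category TEXT);
""")

# The step from "security footage" to "behavioural profile" is a single join:
profile = conn.execute("""
    SELECT l.card_number,
           COUNT(*)             AS visits,
           AVG(v.dwell_minutes) AS avg_dwell_minutes,
           l.top_category
    FROM visits v
    JOIN loyalty l USING (face_id)
    GROUP BY l.card_number
""").fetchall()
```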
And that’s where curiosity turns into something more legitimate. Facial data isn’t just another data point. Under UK GDPR, biometric data used for identification sits in a special category for a reason. You can change a password, you can cancel a loyalty card, but you can’t easily change your face unless you’re John Travolta or Nicolas Cage.
So the reasonable questions follow naturally:
- Is facial data being processed at all, or merely analysed transiently?
- Is any of it stored and, if so, for how long?
- Is it linked, even indirectly, to other customer data?
- And crucially, what problem is this technology actually being used to solve? Security? Loss prevention? Staff safety? Or insight, optimisation and behavioural modelling?
I appreciate that none of those are illegitimate aims, but they are very different purposes, with very different implications for transparency and proportionality. What’s interesting is not that people notice these systems; it’s that when they do, their instinct isn’t outrage so much as uncertainty - a sense that something meaningful is happening just out of view, without an obvious explanation. And that’s where the conversation really begins.
From observer to gatekeeper
There’s another aspect to this technology that’s worth exploring: the gatekeeper question. At what point does a system designed to observe quietly begin to decide? Today, the cameras may be there to deter theft or alert staff to genuine risk. Tomorrow, the same infrastructure could just as easily shift from monitoring to permissioning - from observing who enters to deciding who may enter.
That transition doesn’t require a dramatic policy change. It’s incremental. A tweak to a ruleset. A broader definition of “risk”. A new category quietly added to a watchlist. Once a system exists that can identify individuals in real time at the door, the technical leap from alert to deny is not a large one. That isn’t an accusation; it’s a systems observation.
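To make that observation concrete, here’s an entirely hypothetical sketch of such a ruleset. The categories and actions are invented; what matters is how little has to change for the same camera to become a gate.

```python
# Hypothetical ruleset for what happens when the door camera matches someone.
# Invented categories and actions, for illustration only.

RULES = {
    "known_offender":  "alert_security",
    "banned_customer": "alert_security",
    # The leap from observer to gatekeeper can be one edited line:
    # "flagged_commentator": "deny_entry",
}

def on_match(category: str) -> str:
    """Return the action to take for a matched watchlist category."""
    return RULES.get(category, "no_action")
```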
Suppose a customer writes a blog post questioning facial recognition in supermarkets. It gains a bit of traction. It’s noticed internally. Could that ever feed into a “be on the lookout for” mindset? Not because the individual has stolen anything, but because they’re now seen as potentially problematic, disruptive or simply unhelpful?
Fast forward a few weeks. The same customer walks into the store. The system flags them, not as a criminal but as someone who’s 'on a list'. A security colleague is quietly alerted. A polite conversation follows. 'Sorry, you’re not welcome in this store' - no accusation, no appeal, no obvious explanation required or given.
This isn’t a claim that such things are happening. It’s a thought experiment about power asymmetry. When identification systems operate invisibly, the person being identified has no way of knowing whether they’ve been flagged, why, by whom, or how to challenge it. And that’s where my discomfort creeps in - not because the technology exists, but because its boundaries aren’t always visible.
Most people are comfortable with rules when they’re clear, bounded and accountable. What unsettles them (and me) is when systems quietly move from watching behaviour to judging individuals, especially when those judgments happen out of sight. That is why transparency matters so much to me: not because retailers can’t be trusted (I love Morrisons), but because trust isn’t static. It has to be reinforced as capabilities grow.
If facial recognition systems are limited strictly to loss prevention, with clear thresholds, deletion rules and no linkage to opinion, commentary or lawful behaviour, then saying so openly strengthens confidence. If there are hard lines that will never be crossed, articulating them matters, because once technology exists that can act as a gatekeeper, the question people naturally ask isn’t “why did you install it?” It’s “what stops it being used differently later?” That isn’t cynicism; it’s systems literacy.
In a world where access decisions can be made in milliseconds by tools we never see, curiosity shouldn’t be mistaken for suspicion.
GDPR NOTE:
Under UK GDPR, facial recognition data counts as biometric data used for identification, which is classed as special category personal data. That puts it in a higher-risk bracket than standard CCTV footage, browsing history, or loyalty card data. In plain terms, it’s treated as sensitive because it’s uniquely tied to who you are. You can reset a password. You can cancel a card. You can’t reset your face. That doesn’t mean facial recognition is banned. It means organisations must meet a higher bar.
They must be able to show:
- a lawful basis for processing
- a specific purpose (not “just in case”)
- necessity and proportionality
- clear retention limits
- strong safeguards against misuse

Crucially, they must also carry out a Data Protection Impact Assessment (DPIA) before deploying such systems: essentially a formal exercise in asking “should we be doing this, and what could go wrong?”
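As a closing illustration, here’s a minimal sketch of what one of those requirements - a hard retention limit - could look like in practice. The table name, the 31-day figure and the approach are all assumptions chosen for this example, not a description of any real deployment.

```python
# Illustrative retention-limit sketch: delete watchlist-match records once they
# are older than the limit. All names and the 31-day figure are assumptions.

import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 31  # a real limit would come from the DPIA, not a guess

def purge_expired_matches(conn: sqlite3.Connection) -> int:
    """Remove match records older than the retention limit; return the count."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    deleted = conn.execute("DELETE FROM match_events WHERE matched_at < ?", (cutoff,))
    conn.commit()
    return deleted.rowcount

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE match_events (matched_at TEXT)")
    print(purge_expired_matches(conn), "expired match records removed")
```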