1. Overview of the Crime Survey for England and Wales screener and victimisation module redesign
The Qualitative and Data Collection Methodology (QDCM) team at the Office for National Statistics (ONS) were asked to redesign and conduct cognitive testing of the screener and victimisation modules of the Crime Survey for England and Wales (CSEW), which measure the incidence and prevalence of crime. This work forms part of the CSEW Transformation programme (PDF, 558KB).
This report describes the second of three pieces of Discovery research we have conducted towards redesigning the screener and victimisation modules. Part 1: Discovery research on the redesign of multi-mode questions was published in May 2023. Discovery Part 2 was conducted in 2023.
Part 2 involved qualitative “mental models” interviews and “user journeys” desk research. Interviews aimed to understand participants’ mental models of crime experienced in the last 12 months, including how they conceptualised different offence types, the terminology they used and how they recalled incidents. We concluded that participants’ mental models varied, including how they articulated and ordered their experiences, and which details they could recall.
The user journeys methodology involved inputting various crime scenarios through the existing CSEW screener and victimisation modules. This was to understand the potential experience of respondents completing the survey.
The potential difficulties we identified informed the Discovery Part 3: Redesign of the screener module, including the order and wording of questions. The aim was to provide a good respondent experience in an online self-completion mode, while meeting complex data requirements. The user journey findings will also be used to inform redesign of the victimisation module.
2. Redesign of the Crime Survey for England and Wales screener and victimisation modules
2.1 Reasons for undertaking the redesign
The Crime Survey for England and Wales (CSEW) is a victimisation survey that asks people aged 16 years and over about their experience of crime in the 12 months prior to interview.
In 2022, a multi-modal, longitudinal panel design was introduced, with Wave 1 conducted face to face and subsequent waves conducted annually by telephone.
In May 2022, the Centre for Crime and Justice (CCJ) at the Office for National Statistics (ONS) conducted a public consultation on the redesign of the CSEW. CCJ sought responses from stakeholders on proposals to:
- implement a longitudinal panel design
- develop a multi-modal survey
- improve screener questions
- review CSEW offence coding
Concerns were raised about mode effects, data comparability, and the effectiveness of an online mode compared with interviewer modes in capturing complex crimes: that is, multiple victimisation, including repeat incidents and multi-feature incidents (MFIs), which involve more than one offence type. However, respondents to the consultation acknowledged the potential benefits of improving screener questions and implementing a multi-modal instrument and panel design for data quality and sample representativeness.
The Qualitative and Data Collection Methodology (QDCM) team at the ONS were asked by CCJ to redesign the screener and victimisation modules for a potential online version of the CSEW at Wave 2 onwards, as part of the transformation programme. These modules measure the incidence and prevalence of crime, which comprise the CSEW’s main estimates or headline measures, and collect further details on the nature and costs of crime.
The redesign sought to address issues with the existing CSEW questions and adapt them for an online, self-completion mode, as identified in our Discovery Part 1 methodology and previous work by Verian (formerly Kantar Public, the CSEW fieldwork contractor): Re-design of Crime Survey for England and Wales (CSEW) Core Questions for Online Collection (2018) and Research on Transforming the Crime – Work Package A (PDF, 1.89MB) (2022).
Our findings from Discovery Part 1 concluded that while an online mode might accurately capture crime incidence experienced by participants with simple profiles (no, or one, experience of crime), potential challenges were identified for those with more complex profiles. However, it was agreed that the redesign would attempt to capture all experiences of crime, as per the existing survey, and the feasibility of an online mode would be assessed.
The screener module currently includes 31 questions, each of which asks whether a respondent has experienced a particular incident in the last year. Screener questions are grouped into “traditional” crimes, and fraud and computer misuse. Traditional crimes include:
- household crimes (against the main residence)
- vehicle crimes (against the vehicle they or someone in their household owns)
- personal crimes (against the person and their property away from home)
Fraud and computer misuse include:
- having personal information used without permission
- being deceived out of money or goods
- interference with computers or other devices
Incidents identified in the screener questions are followed up with a victimisation module (for up to six incidents per respondent). There are different victimisation modules for traditional and fraud incidents. These modules ask for further details, for example, about when and where the incident occurred, who did it and what happened.
Office-based coders then assess whether the incident amounted to an offence and, if so, which offence code should be assigned. Coders use a coding manual in which the classifications are largely aligned with the Home Office Crime Recording Rules for frontline officers and staff (HOCR).
If a respondent has experienced more than one incident of a particular crime and considers any of them to be “similar”, this is classed as a series and only the most recent incident is asked about in the victimisation module. Currently, both traditional and fraud screeners are asked before the questions to identify series crime.
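As an informal illustration, the victim-form rules described above (one form per single incident, one form per series asking about the most recent incident, and a cap of six forms per respondent) could be sketched as follows. This is not the actual CSEW instrument logic, and all data shapes and field names are hypothetical:

```python
# Illustrative sketch only: NOT the real CSEW instrument logic.
# Field names ("similar_to_others", "date") are hypothetical.
# It mirrors the rules described above: "similar" repeat incidents of a
# screener are grouped into a series, for which only the most recent
# incident is asked about, and at most six victim forms are generated.

MAX_VICTIM_FORMS = 6

def build_victim_forms(incidents):
    """Generate victim forms from screener responses for one crime type."""
    series = [i for i in incidents if i["similar_to_others"]]
    singles = [i for i in incidents if not i["similar_to_others"]]

    forms = []
    if series:
        # A series generates one form, asking about the most recent
        # incident; all incidents still count towards incidence rates.
        most_recent = max(series, key=lambda i: i["date"])
        forms.append({"type": "series", "ask_about": most_recent,
                      "incidence_count": len(series)})
    for incident in singles:
        forms.append({"type": "single", "ask_about": incident,
                      "incidence_count": 1})

    return forms[:MAX_VICTIM_FORMS]
```

Under this sketch, three “similar” incidents plus one separate incident of the same crime would yield two victim forms, while all four incidents would contribute to the incidence rate.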
2.2 Purpose of this research
This report describes the qualitative methodology adopted in our Discovery Part 2 research, including mental models and user journeys. These research methods are in line with the Government Analysis Function’s three levels of Respondent Centred Design Framework (RCDF) (2023).
The RCDF is “a core set of methods and approaches to keep the respondent at the heart of what we develop” (Wilson and Dickinson, Respondent Centred Surveys (2021), page 43). Analysis of mental models enables us to better understand how respondents conceptualise crime, how they recall their experiences, and the terminology they use. This provided evidence to inform our question redesign, with the aim of reducing respondent burden, enhancing usability and increasing participation in the survey.
The recommendations outlined in this report derive from the mental models research. Not all suggestions have been implemented in the redesign process because of wider considerations and priorities for the redesign.
Since completing Discovery Part 2, we have redesigned the screener module – see our Discovery Part 3: Redesign of the screener module methodology. Redesign of the victimisation module will take place at a later stage of development and will depend on how successful the redesigned screener is in testing.
3. Research aims and methods
This research aimed to:
- understand how people think about and organically communicate their experience of crime (including attempted crime), particularly in relation to the different types of multiple victimisation
- examine the order in which participants speak about their crime experiences – for example, by severity (in law), impact on the victim, or how recent the crime was – to determine the optimal screener question order
- understand how participants think about the 12-month reference period and recall details of crime they have experienced
- explore how likely participants would be to answer survey questions about their experience of crime by different collection modes
3.1 Methodology
3.1.1 Sample design
We used a purposive sampling approach, which is a selective, non-probability method: participants were selected because they had characteristics relevant to the research topic. Purposive sampling does not aim to be statistically representative; instead, it aims to include the relevant characteristics.
Researchers relied on their judgement to target groups and demographics related to the Crime Survey for England and Wales (CSEW) screener module topics and questions. Participant eligibility requirements included:
- having experienced a crime in the last 12 months
- living in England or Wales
- being aged 16 years or over
We aimed to achieve a sample of varied crime types and demographics, including age, sex, and ethnicity.
Prospective participants were screened during recruitment to identify what type of crime(s) they had experienced and whether they had experienced single, multiple, repeat or series crime. We aimed to sample participants with experience of:
- burglary and attempted burglary, with or without criminal damage and at a current or previous address (in the last 12 months)
- vehicle and bicycle-related crimes, such as theft and damage
- personal crimes, including theft, attempted theft, and deliberate damage to belongings
- sexual assault
- physical assault
- threats and harassment
- fraud and computer misuse
We also aimed to understand how participants with experience of these crimes as part of ongoing domestic abuse or repeated hate crime recalled and conceptualised them, for example, holistically or by individual incidents.
3.1.2 Recruitment
We collected basic information about participants’ experiences during recruitment, asking them to categorise their experience by crime type but not to provide any detail. Participants were informed about the purpose of the research, without priming them to consider their experiences in any depth before the research interview. To minimise the potential influence of CSEW questions, terminology or definitions of crime on participants, we screened them for previous participation in the CSEW to reduce the chance of interviewing former respondents.
We used multiple recruitment sources to capture diverse crime types. Participants were recruited through support charities, victim organisations for people who have experienced crime, and Neighbourhood Watch. We also used our database of previous Qualitative and Data Collection Methodology (QDCM) research participants who were happy to be recontacted for future projects.
We anticipated that victims of certain crimes, such as bike thefts, may be less likely to access support through a charity, so we also attempted to recruit participants by leaving leaflets in public places, including gyms and libraries. However, this method was unsuccessful in recruiting participants.
The recruitment methods used may have affected our sample composition and impacted research findings. For example, Neighbourhood Watch members were well-represented; they may have reported or discussed their experience of crime with other people before the research interview, potentially making their accounts less spontaneous. Our sample composition can be found in Appendix 2.
3.1.3 Research method
Interview conduct
In August 2023, we conducted 28 semi-structured interviews, in person or via Microsoft Teams. Interviews were conducted by pairs of researchers, with one researcher (the “interviewer”) leading the interview using a topic guide that included probes to explore the research questions (see Appendix 3). A second researcher (the “observer”) recorded detailed notes in a structured observation sheet (Appendix 4). We offered participants the option of having only an interviewer present, but this was never requested.
For interviews conducted online, only the interviewer appeared on camera, to minimise the impact of the observer’s presence on participants. Participants were told before the interview that an observer was also on the call. Observers did not interject during interviews but were given the opportunity to ask questions or seek clarification at the end.
Online interviews were video recorded with participant consent, and participants were given the option to turn their camera off. In-person interviews were audio recorded only.
The interviewer and observer met to debrief, review the observation sheet and discuss the main findings following each interview. This process aimed to maximise the accuracy and quality of data and encouraged reflexivity to reduce potential researcher bias among those involved in the project. The observation sheets were used in the initial stage of data analysis.
In line with Office for National Statistics (ONS) qualitative research practices, participants were given an e-voucher or postal voucher of £50 as a token of appreciation for their time and effort.
Interview structure and content
Each interview started with a standard introduction which reiterated the purpose of the research and our safeguarding and confidentiality procedures. Each participant was then asked a very open question to describe any crimes they had experienced in the last 12 months. This was intended to obtain organic, unprompted thoughts without probes to avoid leading participants, imposing assumptions, or using language or terminology that might influence how they describe their experience. The only intervention by the interviewer during this section was to encourage further thoughts.
We did not aim to test specific CSEW questions or the survey’s existing design. However, once the participant had finished answering the open question, we asked questions more closely related to the content and design of the existing CSEW (see Appendix 3). For example, we asked whether they considered any of their crime experiences to be “related”, and how precisely they could remember when an incident happened.
Ethics and well-being
As the research covered sensitive topics and the participants were victims of crime, they were potentially vulnerable, for example to having difficult emotions triggered. Some interviews were conducted away from ONS offices, including in participants’ homes. To ensure the safety and well-being of both participants and researchers, and to manage the personal information collected appropriately, we:
- completed a National Statistician’s Data Ethics Committee application
- created interviewer safety guidance
- completed a risk assessment
- consolidated support resources for our participants
- held workshops with the team on interviewer safety and well-being, and how to support participants during challenging or sensitive interviews
- provided the team with access to additional training, such as the Safeguarding Adults and Children via Civil Service Learning
- completed a Data Protection Impact Assessment (DPIA)
- anonymised data from the point of collection by allocating participant numbers and ensuring data (such as observation sheets) were not attached to names or locations
3.1.4 Qualitative data analysis
The analysis of the mental models research was split into two strands:
- delivery of high-level findings
- further analysis
High-level findings
Our high-level findings aimed to provide an initial indication of participants’ mental models relating to our research aims (Section 3: Research aims and methods) and any themes across characteristics, such as the type of crime experienced. These findings were intended to give stakeholders in the Centre for Crime and Justice (CCJ) early insights and enable further elements of the overall work programme, such as the redesign of the screener module, to begin in accordance with our timeline.
To generate the high-level findings, we conducted a rapid analysis, informed by this Introduction to rapid qualitative evaluation. This method relied primarily on interview observation notes rather than the recordings and transcripts. A charting framework spreadsheet was created in Excel, and structured according to the topic guide and observation sheets. Themes were explored methodically across the different research aims and, once individual team members had completed their analyses, the emergent themes were discussed in a collaborative session to quality assure the findings.
Further analysis
We identified that further analysis was needed to add breadth and depth to the high-level findings and answer emerging research questions as our question redesign proceeded. The main aims of this analysis were to:
- further explore complex crime profiles such as multiple and repeat victimisation, including domestic abuse and hate crime to understand if and how these were thought of as individual incidents
- explore mental models within offence types, predominantly threats, harassment, assault and criminal damage
- explore fraud and computer misuse as a distinct crime type
- explore the recollection of when incidents occurred inside and outside of the reference period, and the recollection of other incident details
To address these aims, we referred to interview transcripts which allowed us to:
- quality assure our high-level findings and add necessary context that may have been missing from observer notes
- note the terminology used by participants to describe their experiences
- use real-life crime personas to create user journeys and pass them through the CSEW questions as a further analysis method (see Section 5: User journeys)
Some of the high-level themes that emerged were complex and required even deeper analysis. In these cases, we used cognitive mapping, which is the visual representation of a mental model for a given concept (see Cognitive Maps, Mind Maps, and Concept Maps: Definitions, Gibbons (2019)). We chose this method because it allowed participants’ mental models to be combined across cases and compared visually, meaning we could draw conclusions efficiently.
To quality assure our cognitive maps, we devised a “buddy” pairing system for researcher triangulation. Researchers working on different research aims met to talk through their work, compare methodological ideas, check for biases and omissions, and avoid duplication to maximise the reliability and validity of our findings.
This further analysis provided valuable insights to inform the screener module redesign, such as the order of the questions, how series crime and multi-feature incidents (MFIs) can be identified, which terminology could be used, and which areas needed further testing.
4. Findings
This section outlines findings from the initial open question, followed by findings from more specific probes about multiple victimisation, specific offence types and recall. We have combined spontaneous and elicited findings on the same theme to draw insights for the screener redesign. Design recommendations are made throughout.
4.1 General experiences of crime in the last 12 months
4.1.1 Experiences mentioned spontaneously
Initially, participants were asked a very open question to understand how they recall their experiences of crime organically, without further probing. When asked “Can you tell me about any crimes you have experienced over the last 12 months?” (the Crime Survey for England and Wales (CSEW) reference period), participants disclosed a range of experiences, including things that the CSEW would classify:
- as a crime, such as burglary, criminal damage, robbery, fraud, sexual offences, threats and harassment, theft of possessions, and vehicle- or bicycle-related thefts or damage
- as an attempted crime, such as attempted snatch thefts, vehicle crime and arson
- as out of scope, for example, crimes experienced more than 12 months ago, experiences not asked about in the CSEW, and crimes (or potential crimes) they had witnessed, including drink-driving and business-related crime at commercial premises
4.1.2 Experiences mentioned only after probing
As the interview progressed, participants were asked probing questions to understand if they had experienced other crimes in the last 12 months and if so, why they had not mentioned them at the initial open question. They disclosed additional experiences, which would likely be coded as a crime by the CSEW, for example:
- fraud (they were not sure if paying for something in advance and not receiving it constituted a crime)
- parts being removed from a car, without proof that they were stolen
- being shouted at in the street (making them feel slightly, but not very, unsafe)
- verbal racial harassment or abuse
Crimes experienced more than 12 months ago that were impactful for the participant, such as child abuse, were also mentioned at this point.
Participants also recalled experiences they were unsure constituted a crime, for example:
- finding footprints outside their home, suggesting someone had trespassed (the police had passed this on to their community team; the CSEW only captures trespassing inside homes, under burglary)
- their dog being attacked by another person’s dog
Personal experiences that participants considered crimes but might be out of scope on the CSEW were also disclosed as the interview progressed, for example:
- medical negligence
- being involved in a road rage incident with items thrown at their car (although this may be coded as an “Other threat” or vehicle damage depending on the answers the participant would give in the victimisation module)
Participants mentioned experiences they had witnessed or were aware had happened to someone else, for example:
- domestic violence
- someone being threatened with a knife
- anti-social behaviour
- suspecting someone of driving under the influence of drugs
- speeding or other minor driving “offences”
- seeing people avoiding fares
It should be noted that participants may have disclosed such experiences because our probes did not explicitly ask them to exclude things that happened to other people. This differs from the existing CSEW screeners, which ask focused questions.
Participants explained that they only mentioned certain crime experiences after probing, rather than the open question, because they:
- experienced the crime more than 12 months ago
- were not the direct victim, and did not think we were interested in witnessed crimes (which is correct) or in crimes against companies that affected them (such as cyberattacks and thefts), which we would want to capture
Redesign recommendation
Consider clarifying the screener module preamble so respondents know that – unless otherwise stated – witnessed and heard about incidents should be excluded. For example, reminding them to: "Only tell us about incidents that have happened to you personally and DO NOT include things that have happened to other people".
Participants’ exclusion of these experiences at the initial open question aligns with CSEW requirements. However, other experiences that the CSEW would want to capture were not mentioned until probing, and further consideration should be given to ensuring they are captured. Participants did not mention these incidents at the open question because they:
- forgot about them until probed
- did not consider them to be crimes
- thought they were “normal” events, or that it was their own fault
This last finding related to fraud and sexual harassment. It could indicate potential social desirability effects, and a need for CSEW questions to explain that respondents should include incidents even if they think they are “normal” or that they were in some way responsible for what happened.
Redesign recommendation
Our open question was broad to reduce leading or biasing our participants. However, our findings reiterate the need for screeners to be specific. Consideration should be given to adding or rewording preambles, screener questions and check questions to prevent out-of-scope experiences being taken through to victim forms during offence coding.
4.1.3 Discrepancies between recruitment screener and interview
Participants were “screened” before interview to indicate the crimes they had experienced (see Appendix 1). Some participants, in response to the initial open question, reported experiences that were consistent with the options they had selected at the recruitment screener. However, there were also instances where the options reported at the screener and the things mentioned during open probing were less aligned, for example:
- sexual offences, fraud, and threats and harassment were not mentioned at interview despite being selected at screener
- experiences were disclosed at interview that had not been selected at the screener
- threats and harassment were reported by participants at the screener when “assault” would have been more appropriate
- having no personal experience of crime (for example, selecting “Other” at the screener and discussing anti-social behaviour, or reporting crimes they had witnessed, which would be out of scope – however, this might be because of our open screener wording, see Appendix 1)
- selecting “Other” for a crime despite applicable answer options listed
- selecting criminal damage, but discussing vehicle damage at interview
There were participants who mentioned experiences during the interview that they had not selected in the recruitment screener, such as domestic abuse, and others they considered “something you deal with”, for example:
- computer hacking, phishing and other fraud “attempts”
- incidents of intimidation, and threats and harassment (including racial abuse)
The recruitment screener did not ask a set of detailed questions about behaviours, as the CSEW does. Instead, there was a checklist of offence types with example behaviours underneath. The discrepancy between the details given at our screener and at interview emphasises the importance of well-designed survey questions. We only want the CSEW to collect experiences that are relevant to the data requirements, and for them to be recorded at the appropriate questions, especially in the absence of interviewer moderation.
Redesign recommendation
Retain the use of screeners that refer to behaviours or features of offences rather than offence labels, which might be subject to interpretation. Word them to ensure they work without interviewer moderation.
4.2 Multi-feature incidents and series crime
4.2.1 Multi-feature incidents
A multi-feature incident (MFI) involves more than one offence type happening at the same time. To conform with Home Office Crime Recording Rules (HOCR), specifically the principal crime rule, only one of the offences should be coded and included in the survey estimates. The existing screeners ask respondents to only report one part of an MFI, at the first applicable screener, by including the phrase “apart from anything you have already mentioned” at the subsequent screeners.
This means that respondents who have experienced MFIs cannot report all the offences they have experienced in the screener module; some may only be identified in the “incident checklist” questions in the victimisation module. For example, if a respondent has experienced an incident involving a burglary and an assault, they should report this incident at the burglary screener only (as this is the first screener they are presented with), with the assault recorded in the victimisation module for the burglary incident. In some cases, this offence may be more salient to the respondent or of a higher offence coding priority.
Field interviewers sometimes clarify details with respondents to prevent both, or all, elements of an MFI being reported at screeners and asked about in separate victim forms. If not uncovered and rectified during the victimisation module or offence coding, this would result in double counting (that is, both or all offences being coded rather than just the principal offence). A further complexity is that some offences comprise two or more features covered in separate CSEW screeners – for example, robbery involves the use or threat of force at the time of a theft.
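The principal crime rule described above reduces to selecting the single highest-priority offence type from an MFI. A minimal sketch, assuming a hypothetical priority table; the real priorities come from the CSEW offence coding manual, and the only ordering stated in this report is that arson is the highest-priority offence code:

```python
# Sketch of the HOCR principal crime rule: when one incident involves
# several offence types, only the highest-priority offence is coded.
# The priority values below are ILLUSTRATIVE, not the real coding
# manual values, except that arson is the highest-priority code.

OFFENCE_PRIORITY = {
    "arson": 1,            # highest-priority offence code
    "robbery": 2,
    "burglary": 3,
    "assault": 4,
    "theft": 5,
    "criminal damage": 6,
}

def principal_offence(mfi_offence_types):
    """Return the single offence type to code for a multi-feature incident."""
    return min(mfi_offence_types, key=lambda o: OFFENCE_PRIORITY[o])
```

Under this illustrative table, an incident involving both a theft and a burglary would be coded as a burglary only.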
In our mental models research, there were participants who spontaneously described MFIs at the open question. For example, a participant described an incident of assault, being threatened with a weapon and having their personal possessions stolen as a robbery (see Section 4.3: Mental models of specific offence types). There was also an incident of burglary where a participant had their house broken into, possessions stolen and a door damaged.
However, only one element of an MFI might be mentioned spontaneously. For example, a participant reported the theft of bikes from their garage but did not mention damage to the garage door until probed. Although we cannot say if this was because some offence types were more impactful or more salient to the participant than others, our findings suggest that participants can recall elements of an MFI when asked directly.
Redesign recommendation
Consider how to apply the concept of an MFI to the CSEW. For example, allowing or encouraging respondents to report all elements of an MFI at different screener questions and reviewing the need for the incident checklist in the victimisation module. A new set of questions to establish the experience of MFIs in the screener module could be added to better align with respondents' mental models.
4.2.2 Series crime
The current CSEW treats repeat incidence of a screener as a series when the respondent reports that two or more of the incidents were “similar”:
You mentioned [X number] incidents of [X screener]. Were any of these very similar incidents, where the same thing was done under the same circumstances and probably by the same people?
If a respondent experiences a combination of single incidents and a series of the same crime, a victim form will be generated for the series and for each single incident (up to a maximum of six). The most recent incident in the series will be asked about in the victim form, but all will be included in the incidence rates (subject to them occurring within the 12-month reference period). Incident dates and the series pattern (whether incidents occurred before, after, or in between separate incidents) establish the order in which victim forms should be asked (or, if there are more than six forms, excluded).
In our mental models research, there were participants who, at the open question, spontaneously mentioned crimes they had repeatedly experienced, for example vehicle damage, fraud, attempted theft, theft of personal possessions, and threats. When probed, these participants said the incidents were “related”. When asked what “related” incidents meant, participants defined them as crimes committed by the same perpetrator under similar circumstances, for example where there was a “connection” between incidents or they happened at the same time.
A participant recalled repeat incidents of theft and separate criminal damage to work machines. When probed, the participant did not think any incidents were connected. While we note that the CSEW does not aim to capture business-related crime, this suggests there will also be CSEW respondents who can identify separate incidents.
Despite participants seemingly being able to distinguish between series and separate incidents, consideration will be given to the treatment of repeat incidence for the self-completion mode. One potential change could be to use the word “related” in questions.
Redesign recommendation
Test how the word "related" is understood in the context of the redesigned specification, rather than the existing "similar".
4.3 Mental models of specific offence types
Exploring how participants organically described and defined their experiences allowed us to compare their mental models with the existing CSEW screener questions, including terminology used. This meant we could consider how best to break down and order the offence types covered.
Our findings combine coverage of broad offence types, such as “criminal damage” and “theft”, regardless of whether these related to the home, a vehicle, or personal possessions (which is how the current CSEW screeners divide them). We also cover experiences that are not currently explicitly covered by unique screeners, such as domestic abuse and hate crime.
4.3.1 Criminal damage
Participants’ experiences of criminal damage varied. Some took place at the same time as other crime types such as burglary and theft elsewhere from property (that is, they were MFIs). Other, non-MFI, experiences included criminal damage to the home and garden. There were instances that would be out of scope on the CSEW, such as criminal damage in the neighbourhood, including graffiti, and at the workplace.
When describing their experiences of criminal damage, participants used alternative words and phrases such as:
- “broke into my home”
- “damaged the doorknob”
- “someone had forced the lock and gone in”
- “broke the lock”
- “graffiti”
- [a] “machine was smashed” [at participant’s workplace]
Redesign recommendation
Continue to use the word "damage" rather than "criminal damage" in screeners. It will not be possible to include all examples of criminal damage in the questions. However, we will consider words or guidance that encourage reporting of specific types of damage, such as graffiti to a respondent's residence where relevant.
Arson
The Home Office Crime Recording Rules (HOCR), on which the CSEW offence coding manual is based, define arson as:
“any deliberate damage by fire to something belonging to the respondent or their household”
Despite being the highest priority offence code, arson is not specifically asked about in the CSEW screener questions. Instead, incidents of arson are captured by questions about criminal and deliberate damage in the household, vehicle and personal crimes sections of the screener questions.
Experience of arson among our sample was limited. The word was used spontaneously by a participant describing their experiences at the open question. Although the participant understood the basic definition of arson, they were not always consistent in the terms they used throughout the interview. For example, they described their personal experience as both “attempted arson” and “arson”, but later in the interview said they “didn't think” they had experienced attempted arson. They said the arson could have been an attempted crime because there were:
“…multiple burn marks on the [machinery]...so someone had tried to set fire to it, suppose at that point it was arson...but not very good arson.”
It should be noted that these experiences of arson would have been out of the CSEW’s scope as they related to work machinery or someone else’s property. However, the finding highlights potential difficulty with identifying attempted versus actual arson and informs our redesign accordingly.
4.3.2 Threats, harassment and assault
The CSEW screener module includes questions about experiences of:
- sexual assault
- assault (deliberate hitting, kicking, or using force or violence)
- assaults by household members aged over 16 years (deliberate hitting, kicking, or using force or violence)
- threats or intimidation
This section reports findings on these related topics, as well as stalking and harassment, which are currently captured by specific modules after the screener and victimisation modules and reported as separate outputs to the survey’s main estimates.
The complexity of separating attempted assault, assault, threats and harassment was identified in research by Verian: Redesign of Crime Survey for England and Wales (CSEW) Core Questions for Online Collection (2018) and Research on Transforming the Crime Survey for England and Wales (PDF, 1.89MB) (2022). Changes to the threats screener question in the Telephone-operated CSEW (TCSEW) during the coronavirus (COVID-19) pandemic highlighted further problems (see Discovery Part 3: Redesign of the screener module for further information).
Participants reported a range of experiences of threats, harassment and assault, for example:
- verbal racial abuse in the form of threats (see Section 4.3: Changes to coverage of attempted crime in screeners)
- domestic abuse in the form of repeated threats and assault (see Section 4.3: Changes to coverage of attempted crime in screeners)
- robbery (discussed in the following section)
- neighbour disputes (for example, being verbally threatened and intimidated)
- a parking dispute (being threatened over allocated work parking)
Our further analysis used cognitive mapping to explore how, if at all, participants differentiated or conflated these incidents. We found that participants used terms such as “threatening” and “intimidating” interchangeably to describe experiences that might be counted as “threats” (a single incident) or, potentially, “harassment” (a repeated incident) in the CSEW. The term “intimidation” is not currently included in the threats screener question, but it is used in the CSEW’s Harassment module (introduced in 2022).
There was also an example of uncertainty about whether to define someone making a participant feel uncomfortable on a train as stalking or harassment, as “stalking seems a stretch”. These examples suggest “stalking” and “harassment” might not always be associated with repeated incidents as per the HOCR definition. Although these are not currently captured in the screener module, our redesigned questions will need to ensure specific wording is used to capture experiences that could amount to threats and intimidation.
Redesign recommendation
Consider adding the word "intimidation" into the threats screener question.
Further information about stalking and harassment can be found in our Discovery Part 3: Redesign of the screener module report (Section 4.6: Changes to personal crime screeners).
Participants recognised that assault involved force, describing it as “involving violence”, “getting beat” and “they hit me”. They could differentiate between a threat and an assault, but the term “attempted assault” was not used spontaneously. It is not clear if participants would have defined a threat with a weapon as an attempted assault, as per the CSEW offence coding manual.
Redesign recommendation
Consider including guidance in the assault or threats screener that encourages respondents to include incidents when weapons were used (an attempted assault, not a threat) and even if they were not injured (an assault).
The coding manual defines a theft with force or attempted force as a robbery. In the current CSEW there is no robbery screener; a robbery would probably be recorded as an assault, something being stolen, or a theft (at whichever screener was applicable first), and established as a robbery in the victim form.
Participants used the term “robbed” spontaneously, but only sometimes in accordance with the CSEW definition. It was used appropriately to describe a snatch theft of a watch and being assaulted at the same time. However, it was also used colloquially to describe a stealth theft where their mobile phone was taken from their pocket without their knowledge and without threat or violence.
Redesign recommendation
Because these terms are used interchangeably, we would not recommend introducing the term "robbery" in the screener questions. Instead, we recommend asking about theft and assault separately and establishing whether they happened together in the same incident at a separate question (either in the screener or victimisation module).
4.3.3 Domestic abuse
In the current CSEW, the preferred measure of domestic abuse is produced from a separate computer-assisted self-completion (CASI) module. This produces a prevalence measure. The same is true for sexual victimisation and stalking.
This domestic abuse module covers a much broader set of behaviours someone may experience than the main face-to-face crime survey (including controlling and coercive behaviour). Currently, if the same experiences are reported in both the main screener section and the self-completion module, double counting is not a concern as both outputs are published independently.
For this research, our sample included participants who had experienced domestic abuse. This section will explore the experiences they described and consider the implications for the design of the CSEW screener module, particularly in the context of self-completion online mode rather than interviewer-administration.
As domestic abuse is not a criminal offence, there is no single screener or offence code within the current CSEW that aims to capture it. Rather, experiences of domestic abuse should be captured by relevant crime screeners: not only those on violence, threats or sexual assault, but also others such as criminal damage, theft and fraud.
There is one screener designed to capture incidents of violence against the respondent by a household member, if they have not already been mentioned at a previous assault screener. The closest measures for CSEW headline crime are outputs published for violent incidents by the victim’s relationship to the perpetrator, defined as “other household members, acquaintances, and strangers”. The term “domestic abuse” is not currently used within the screener module.
The term “domestic abuse” was used organically by participants who had experienced it. We do not know if there were other participants who had experienced domestic abuse but did not recognise or report it as such.
There were participants who mentioned experiences of domestic abuse behaviours that are not covered by the existing CSEW screener module and primary outputs, including “lovebombing” and “gaslighting.” These behaviours are covered by the definition of controlling and coercive behaviour (CCB), the only crime exclusive to domestic abuse.
There were participants who organically recalled “live” criminal cases first when they had also experienced other offence types not related to domestic abuse. This was because the cases were still going through legal proceedings. One of the criminal cases included multiple different crime types, including some repeat incidents committed by the same perpetrator.
Participants’ ordering of incidents into criminal cases (and/or those reported to the police) and civil cases suggests domestic abuse was organically considered holistically. There was evidence that it was hard for participants to disentangle incidents and recall whether they happened in the last 12 months (see Section 4.5: Recall of the timing and other details of experiences). This applies both to CCB and to offence types covered by the screener module. Participants also found it traumatising to “relive things” during court trials.
This suggests that some CSEW respondents could have difficulty breaking down their holistic domestic abuse experiences into components, as required by the screener and victimisation module questions, including:
- separating incidents between different screeners
- recalling whether individual incidents included one screener type only or were MFIs
- accurately counting repeat incidents of the same offence type
Redesign recommendation
Create a new approach to counting and recording incidents of crime, for example, by allowing respondents to identify incidents that happened at the same time in the screener module.
However, when probed, participants could recall specific types of crime experienced as part of domestic abuse, for example:
- physical assault, including having things thrown at them
- fraud and computer hacking, including being “defrauded” by having contracts taken out in their name
- vehicle damage, such as their car being keyed by the perpetrator
- criminal damage, such as breaking things around the house
- sexual offences
- burglary and theft
- threats and “harassment”, including the perpetrator using family members to gain indirect contact with them and stalking
This shows that experiences of domestic abuse were not limited to household violence. While respondents may find it difficult to recall exactly how many times an offence type happened, they might recall the offence type itself.
Redesign recommendation
To reduce respondent burden and shorten the length of the overall survey online, consider introducing a capped rule for series crime. For example, if a respondent reports more than a certain number of incidents at a screener, these would be treated as a series crime and a victim form would only be generated for the most recent incident.
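A minimal sketch of this capped rule, assuming an illustrative threshold; the threshold value and function name are assumptions for illustration, not CSEW specification:

```python
SERIES_CAP = 5  # assumed threshold; the actual cap would need to be decided

def victim_forms_for_screener(incident_count):
    """Illustrative sketch: below the cap, each incident would get its own
    victim form; at or above it, the incidents would be treated as a series
    and a single form generated for the most recent incident."""
    if incident_count >= SERIES_CAP:
        return 1  # series crime: one victim form for the most recent incident
    return incident_count
```

Under this rule, a respondent reporting three incidents at a screener would complete three victim forms, while one reporting seven would complete only one.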
The civil cases participants mentioned included breaking non-molestation orders and undertakings, and witness tampering. These would not be in scope of the CSEW screeners or primary outputs (unless these amounted to an offence type, for example, threats).
Redesign recommendation
Domestic abuse encompasses multiple offence types, and those who experience it may not realise they do or use the term themselves. Therefore, continue to avoid referring to "domestic abuse" within the screener questions, introductions and guidance.
However, add guidance across the screener module to include incidents committed by people the respondent knows and those they do not, to encourage them to report all crime incidents that relate to domestic abuse at all relevant screeners.
4.3.4 Hate crime
Hate crime is defined by the Crown Prosecution Service (CPS) as:
“hostility or prejudice, based on a person’s disability, race, religion, sexual orientation or transgender identity”
“Any criminal offence” can constitute hate crime. Similarly to domestic abuse, hate crime is collected indirectly through experiences reported at the CSEW screener questions; there is no “hate crime” screener. The victim’s perception of the perpetrator’s motivation is asked in the victimisation module.
Our mental models sample included participants who had written “hate crime” on their recruitment screener and/or who mentioned in the interviews that they had experienced or witnessed friends’ experiences of “racial slurs”, “racial abuse”, racial “slander”, shouting or verbal abuse. In these instances, we would expect either:
- the threats or assault screener to be answered “yes”, if the incident reached the threshold to be recorded as such
- no offence to be reported if they were not threatened or assaulted
Redesign recommendation
Consider the need to add guidance to encourage respondents to include crimes motivated by protected characteristics in preambles or at specific screeners.
Participants who experienced these types of incidents varied in whether they considered them crimes, even when their experiences were similar to each other.
Those who considered it to be a crime generally understood it consistently with part of the CPS definition as being against, for example, “somebody from a protected characteristic that’s targeting that characteristic”. They mentioned their experiences organically, without probing, and there were participants within this group who had reported their experiences to the police (although nothing had been done). However, it is not certain whether they also understood that the behaviour they experienced needed to amount to an offence to count as a hate crime.
Those who were not sure if it was a crime did not disclose their experiences organically. Rather, they mentioned them in response to probing questions such as, “Is there anything else that you’d like to mention but you were unsure whether it was a crime?”. Reasons these participants gave for not mentioning their experiences of hate crime sooner were:
- they thought it was “part of life”
- they thought it was “something you just deal with” rather than something to report to the police – they did not think they would be taken seriously
- not experiencing the hate crime personally (which would be out of scope on the CSEW)
Redesign recommendation
Add guidance across all screener sections to include incidents committed by people the respondent knows and those they do not to encourage them to report all crime types, including those that would fall into the definition of hate crime.
4.3.5 Fraud and computer misuse
Currently, the CSEW divides fraud and computer misuse into five types:
- Having personal information or account details used, or an attempt made to use them, to obtain money or buy goods or services
- Being tricked or deceived out of money or goods, in person, by telephone or online
- Someone trying to trick or deceive a respondent out of money or goods, in person, by telephone or online
- Personal information or details being accessed or used without permission
- A computer or other internet-enabled device being infected or interfered with, for example, by a virus
Although the term “fraud” is not used in the CSEW questions, it was widely used by participants who had experienced it. Participants also described experiences using words such as “scam” or “fraudation” [sic]. These experiences included:
- being called and asked if they would like to switch phone provider
- being called by someone impersonating their mobile network provider who claimed they could reduce their bill if they shared their one-time code (which they did)
- losing cryptocurrency after being scammed online
- paying in advance for a rental product that never arrived
The latter two examples were only recalled when participants were probed about whether there was anything else they wanted to mention but were not sure was a crime. In the case of sharing a one-time code, the participant said they did not consider this a crime at the time because it was their own “stupidity”. Other participants expressed similar emotions about experiencing fraud. This included feeling:
- shame
- embarrassed or “silly”
- like it was their “fault”
Redesign recommendation
To encourage participants to report their experiences, consider adding guidance to include fraud incidents, even if they thought it was their fault.
Experiences where financial loss had been incurred were considered a crime by participants. However, others described their experience(s) as “wrong but not a crime”, when there was no monetary loss or they did not respond to a phishing attempt. Phishing is an attempt to steal someone’s personal information or to trick someone into sending money by clicking on a link to “malicious websites...[that] may contain malware” (National Cyber Security Centre, Phishing attacks, 2024). Participants considered phishing attempts to be “so common” and something “everyone gets”.
If a respondent received a phishing email, mailshot or cold call but did not respond to or engage with the communication, such as by clicking a link or providing further details, they would not be considered the Specific Intended Victim (SIV). The CSEW does not aim to capture experiences where the participant was not the SIV.
Redesign recommendation
Clarify fraud guidance so it is clear to the respondent what should be included, for example: "Please DO NOT include phishing you did not respond to, such as emails or phone calls".
Consider introducing an SIV check question to ask if the participant responded to any communication. This would reduce the number of non-SIVs who would need to complete a victim form.
Another example of fraud that was only mentioned when probed was a participant’s personal pension information being exposed in a cyberattack at work. Although the participant did not think this was relevant, this should be captured on the CSEW because it involved their personal information rather than information related to their work.
Redesign recommendation
Consider adding guidance to encourage respondents to report incidents where they lost personal information from cyberattacks or data breaches in their workplace.
In addition to the specific terms listed previously, participants also used a range of other terms to describe fraud, regardless of whether they had experienced it, for example:
- “scam”, including “emails” and “telephone calls, texts and WhatsApp messages”
- “phishing emails and calls”, “bogus” [calls] and “fraudulent cold calling”
- “attempted fraud”
- “tricked” or “con”
- “fake”
- “hacked”
- “identity fraud”
Our sample also included experiences of fraud as part of ongoing domestic abuse. The following terms were used in these cases:
- “economic abuse” to describe a partner withdrawing money from a joint account without permission
- being “defrauded” by having direct debits set up in their name
In these cases, the perpetrator was known because the fraud was part of a wider crime experience. However, our sample also included participants who were uncertain who the exact perpetrator was, for example, those who had engaged with cold calls and where rental products had not shown up. Instead, they could recall the company the perpetrator was impersonating, for example, “O2” or the participant’s phone insurance provider. The term “scammers” was used spontaneously to describe those responsible for the fraud.
4.3.6 Burglary, theft and robbery
The CSEW screener module includes questions about various types of burglary and theft from the home, of or from vehicles and of other personal property. There are several offence codes that may apply, requiring additional detail, which is collected in the victimisation module.
A range of experiences of theft were disclosed. There were participants who had experienced “burglary”, which they described as theft from their homes. Burglary is defined in the coding manual as entry to the home without permission, regardless of whether anything is stolen, there is an attempt to steal, or damage is caused. We only had participants in our sample who reported entry to their homes with theft. Therefore, we cannot say if participants would use the term “burglary” to describe entry alone.
Redesign recommendation
To ensure the CSEW screener questions capture these incidents, introduce a screener question on entry to the home without permission where no theft occurred, separate from the theft and criminal damage screeners. As per the existing CSEW, do not use the term "burglary" in the screeners; include specific behaviours instead.
When probed on details, participants used the term “broke into” to describe perpetrators gaining entry to their homes. The subsequent theft of possessions was described using phrases such as “properties stolen” and the perpetrator got “away with lots of valuables”.
Participants recounted experiences that, according to the CSEW definition (theft with use of, or threat of, force), would likely amount to robbery. For example, a participant described their experience as a “hit and run” in which they were “robbed outside a club”. This involved being assaulted, having a gun pointed at them and having “handed over some of my things”, meaning money and possessions.
There were also participants who described experiences using terms that would be inconsistent with the CSEW’s definition of offences. For example, using “robbery” or being “robbed” to describe incidents where no force or threat was involved, such as:
- the theft of children’s bicycles from a porch or garage (this would be coded as a “bicycle theft” in the CSEW Coding Manual (2019))
- knowingly having a wallet stolen from a bag (which would be coded as a “snatch” theft – a term also used by the participant)
- a mobile phone being stolen from the participant’s person without their knowledge (which would be coded as a “stealth theft”)
Redesign recommendation
Avoid using the terms "robbery" and "robbed" in screener questions.
There were also participants with other experiences of theft in our sample. For example, sports equipment being unknowingly “stolen” from behind the participant in a park, and a participant’s gift for a colleague being “nicked” in the workplace. These examples should both be captured by “other theft” in the CSEW because the items were not on their person at the time.
Redesign recommendation
Consider highlighting that "from the person" means something they were wearing or carrying, and "other theft" means anything not already captured by screeners on theft from homes, vehicles or the person and is not limited to "a cloakroom, office, car or anywhere else they left it".
There were participants who experienced theft of bicycles. This included bicycles being stolen from sheds and garages. These participants spoke of the perpetrator having “forced the lock” and their garages being “broken into”. There were also experiences of bicycles being stolen from elsewhere on their property and away from their home. To describe these thefts, participants used the words:
- “pinched”
- “stolen”
- “theft”
- “took”
There were participants who had experienced the theft of possessions or car parts (for example, a catalytic converter) from vehicles.
4.4 Attempted crime
The topic of attempted crime was explored. In the coding manual, there are offences that have separate codes for their “actual” and “attempted” versions. In the current CSEW, some of the “attempted” offences are not asked about in a screener.
There are also some offences with nuanced definitions. This means that some experiences reported in answer to an “actual” screener may turn out to be an attempted one (or the other way around), once further details have been asked in the victim form. For example, an incident recorded at the threat screener would be coded as an attempted assault if the incident description in the victimisation module indicated that a weapon had been brandished, even if it did not strike the victim.
Participants considered various incidents they had experienced to be “attempted crime”. There were participants who used the word “attempted” spontaneously, for example, to describe attempted arson (see Section 4.3: Mental models of specific offence types), and others who used the term when probed about repeat incidence of fraud. Other participants used “tried to” rather than “attempted”, for example, “tried to rob” or “tried to grab my bags”.
Redesign recommendation
As per the existing CSEW, use the term "tried to" in the screener questions. For example, "Since 1st [MONTH, YEAR], has anyone got into, or tried to get into, your home WITHOUT permission?"
Regardless of whether participants had used the term spontaneously at the initial open question, later in the interview we asked them if they had experienced any attempted crimes. Their understanding was mixed. There were participants who had experienced actual crime, but not attempted crime, who thought attempted crime was “something that shouldn’t be taken seriously”. Others were unclear on the definition and needed clarification from the interviewer.
There were participants who reported experiences where they had witnessed, or heard about, the attempted victimisation of others. This would not be in scope of the CSEW.
Redesign recommendation
Ensure questions and guidance are clearly worded to prevent misreporting, for example:
"Only tell us about incidents that have happened to you personally and DO NOT include things that have happened to other people."
This wording is different for home-based crime screener questions to encourage respondents to include experiences of household members, which are in scope.
4.5 Recall of the timing and other details of experiences
4.5.1 Recalling details of an incident
When a CSEW screener question is answered “yes”, further details of the incident are asked about in a victimisation form. This includes details of what happened and where, who did it, the impact on the victim and interactions with the criminal justice system.
In the mental models interviews, participants’ open descriptions of their crime experiences included various spontaneous details. We probed for further details covered by CSEW questions and how easily participants could recall them.
Participants varied in their ability to recall details about their experience of crime. Details they could recall with some ease included:
- location (where it happened or where they were going)
- what the participant was doing at the time of the crime
- what was stolen or lost, even if they got it back
- damage such as burn marks, smashed windows or evidence such as footprints
- details of the perpetrator, for example, their ethnicity or clothing
- the financial impact, such as an approximate cost incurred of crime experienced
- how it made participants feel
Participants could recall details with some ease if:
- the crime was “significant”, for example, when they had “direct interaction” with a neighbour during a series of threats
- they had reported the crime (to Action Fraud or the police)
- the crime was recent
- there was lasting physical evidence (such as arising from criminal damage or burglary)
- they had documents they could refer to
Examples of such documents included:
- police reports and records or court documents
- diaries, calendars and notes
- social media, text messages and emails
Details that were difficult for participants to recall, or not known in the first place, included:
- exact costs of loss and damage
- specific details of the perpetrator, particularly if they were not clearly visible
Reasons for participants not knowing details included incidents not being visible to them, either wholly or partly, for example, fraud taking place online, incidents where they were not present, and incidents that took place at night.
Although participants collectively reported these details as easy or difficult to recall, we do not know whether other participants in different circumstances would hold counterviews. This is because we are reporting what was said spontaneously; probing of the details participants could and could not recall was not conducted exhaustively.
4.5.2 Recalling when an incident happened
The CSEW screeners ask about experiences in the 12 months prior to the month of interview, including the dates of incidents. This section discusses how well participants could recall when incidents occurred.
Participants were mixed in how well they could recall the dates and time of day of crimes they had experienced. There were participants who said they could recall these with ease. This appeared to be associated with how impactful the incident was perceived to be (for both actual and attempted crime) – the more impactful, the easier to recall. However, there were participants who reported “blocking [experiences] out” if they were particularly sensitive, so could not recall exactly when it occurred.
Other factors included participants having made formal reports or having discussed their crime experiences, for example, with family and friends or a support charity. This assisted more precise recall of when incidents happened, regardless of how impactful they were perceived to be.
Other participants found recalling the date more difficult, for example, there were participants who could not recall:
- precise dates, but could recall or estimate the month of the incident
- the month (or estimate it), but could recall the time of year, such as “maybe October, November time” or “wintertime”
Exact recall was more difficult for participants who had experienced multiple, repeat and series crimes, for example:
- those who experienced domestic abuse remembered the order, how long they had been experiencing the abuse, and particular episodes, such as having a device hacked, but not necessarily the date(s) of specific incidents
- those who experienced repeated fraud had become used to it occurring
Participants used a variety of methods to help them recall when incidents took place, with varying ability to pinpoint a date, for example:
- the time of year, including recalling the weather and whether it was light or dark in the mornings
- notable holidays and events, such as Christmas, Valentine’s Day, bank holidays or sporting events
- other memorable things that happened to them, such as moving house or changing jobs
- working or commuting patterns to indicate whether it was a weekday or not
Participants also used, or said they could use, the following recall aids:
- police reports and records or court documents
- insurance documents
- diaries, calendars and notes
- pictures and videos from personal cameras, CCTV and video doorbells
- social media, text messages and emails
- recalling conversations had with friends or family about the incident
Redesign recommendation
Consider adding an accessible timeline tool to assist respondents in recalling their experiences, and asking them to have relevant documents, such as crime reports, to hand while they complete the questionnaire.
Those who were uncertain of the date, and who experienced incidents roughly 12 months before interview, did not have recall methods or aids that could help them determine whether the incident fell inside or outside the reference period. However, these methods and aids did enable participants to give an indication of when the incident may have happened.
Redesign recommendation
Asking for the exact date of an incident may not fit with participants' mental models, so consider adding an alternative option, such as asking for the month in a follow-up question if the participant is unsure.
4.5.3 Forward telescoping
Forward telescoping is when a respondent reports that an incident took place more recently than it did. In the CSEW, which has a reference period of 12 months, a respondent may erroneously include an event that happened 14 months ago. This could lead to overestimation of the prevalence and/or incidence of crime.
In workshops run by Verian (2018), CSEW interviewers reported that respondents forward telescoped and found it difficult to limit their recall of incidents to the last 12 months.
Although our mental models probes asked about crimes experienced in the last 12 months, we allowed participants to discuss experiences that fell outside this time period to explore when forward telescoping may be an issue. Various participants mentioned incidents outside of the 12-month reference period, for example:
- historical crimes, months or even years outside of the reference period
- incidents that happened just outside the 12-month reference period, for example, 13 months before the interview, although this could have been because these incidents were in scope when they completed the recruitment screener questionnaire, but not by the time of interview
- crimes experienced during their childhood, which participants queried whether we wanted to know about
Redesign recommendation
Many screeners in the current CSEW include the reference period in the question stem, but some say "And in that time..." instead. Consider including the reference period in every screener and frequency question to minimise the reporting of out-of-scope experiences.
There were participants who spontaneously recalled incidents outside of the reference period when asked what crimes they had experienced in the last 12 months, with no prompting from the interviewer. This was for several reasons, such as:
- these incidents were salient to the participant (such as sexual offences), or had a significant impact on their lives (such as causing lasting injury)
- they were unsure of the date of the incident (and therefore whether it was outside the reference period)
- these incidents were the same crime type as crimes that occurred within the reference period
- their experiences were part of a long-term series of crime (such as domestic abuse) and they found it difficult to pinpoint the start and recall whether incidents happened in the last 12 months (see also Section 4.3: Mental models of specific offence types)
Redesign recommendation
Consider the following approaches to minimise incidents outside the reference period generating a victim form, while still allowing respondents to disclose out-of-scope incidents that are important to them.
Option 1
If we offered respondents a "best estimate" question with a list of months in the reference period, consider introducing an "earlier than this" answer option. This would allow respondents to include incidents that took place "earlier than" the start of the reference period but would not generate a victim form (as suggested by Verian, 2018).
Option 2
Ask if the incident occurred before, during, or after the first month of the reference period, then ask for the specific month (as proposed in the National Crime Victimisation Survey (PDF, 3.48MB), 2022).
Option 3
As per the existing CSEW, allow respondents to input a date before the reference period in the screener module, but do not generate a victim form, to reduce respondent burden. These figures would continue to be excluded from the estimates.
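The routing behind these options could be sketched in code. The following is a minimal, hypothetical Python sketch, assuming a 12-month reference period ending at the interview date and a month-level "best estimate" answer; the function and variable names are illustrative, not part of the CSEW instrument:

```python
from datetime import date
from typing import Optional

REFERENCE_MONTHS = 12  # CSEW reference period: the 12 months before interview

def months_between(earlier: date, later: date) -> int:
    # Whole months between two dates, ignoring day-of-month detail
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def generates_victim_form(incident_month: Optional[date], interview_date: date) -> bool:
    # incident_month is None when the respondent picks an "earlier than
    # this" answer option (Option 1): the disclosure is recorded, but no
    # victim form is generated and the incident is excluded from estimates
    if incident_month is None:
        return False
    return months_between(incident_month, interview_date) < REFERENCE_MONTHS
```

For an interview on 15 June 2024, an incident dated January 2024 would generate a victim form, while one dated April 2023 (14 months earlier), or answered as "earlier than this", would not.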
There were also participants who were aware of the reference period, for example, they described incidents that took place longer ago as “not in the last year” or “long before this”, realising they were not in scope.
4.5.4 Order in which experiences were reported in interviews
We wanted to explore the order in which participants reported their crime experiences in interviews to consider whether the existing order of the CSEW screeners meets respondent needs, for example, in relation to how they recall their experiences of multiple victimisation and the perceived salience and impact of different types of crime.
There were participants with multiple crime experiences who mentioned them in chronological order, from older to more recent, or in reverse order. The first incident they recalled depended on:
- if crimes were still ongoing
- which crimes had the biggest monetary loss
- which crimes had the biggest emotional impact
Experiences of threats, harassment and assault were sometimes only mentioned after other incidents that had not affected the participant personally. For example, knowing someone who drink-drives was considered more serious, and bothered the participant more, than being threatened personally.
An experience that a participant termed as being potentially “stalking or harassment” (reported in Section 4.3: Mental models of specific offence types) was mentioned after the theft and attempted theft of their phone. When probed, they said it was possibly because they considered stalking and harassment a common occurrence as a woman.
Experiences of fraud were mentioned later on or after traditional crimes when the participant had experienced multiple crime types in the last 12 months.
We cannot draw clear conclusions about how to order the screener questions to better match mental models, other than that fraud experiences seem to be considered after those of traditional crimes (which is the current order). There seem to be no consistent, clear patterns in how the salience or impact of experiences, the order in which they occurred, or the priority given to offences in the Home Office Counting Rules (HOCR), related to how participants recalled them.
4.6 Views on reporting crime experiences in this research and on a survey
Participants’ feelings towards discussing their experiences of crimes with researchers varied. There were participants who felt heard, safe and proud during their interviews, and those who felt like they were making a difference. Sadness and anxiety were also expressed at recollecting experiences of crime and that the interview had been a “little bit personal”.
We cannot be sure whether such feelings would apply to responding to a crime survey, though some of them were echoed when participants were asked about their thoughts on doing so in different modes. They were not told further details of the CSEW’s content or how it would be administered, so we acknowledge that their actual behaviour may differ from the hypothetical opinions presented here. Many of the views are familiar to survey researchers, and many of the issues are already addressed in existing survey designs, including on the CSEW. However, this probing provided some potentially useful insights for the CSEW Transformation redesign that could be researched further.
4.6.1 Online surveys
Opinions of online surveys varied. Those who preferred online surveys to face-to-face and telephone surveys reasoned that they can be completed at a participant’s own pace, which may make it easier to recall details and provide extra anonymity over other modes; online was considered “less intrusive”. However, others felt that talking was easier than writing details down or that online surveys, in general, were overwhelming.
To take part in an online survey, participants stated they would need to know:
- what the study was for
- that there was a reason for their involvement
- how useful their information would be to the research
Participants spoke of conditions they would expect around the content of the questionnaire itself, for example:
- that they would take part if it was not as “extensive” as the mental models interview
- that it was dependent on how the questions were written (with a preference for an open text box to express themselves)
- that the survey would need to be engaging to complete
- they would need a written confidentiality agreement with information regarding who would handle their data
4.6.2 Telephone surveys
Opinions towards completing telephone surveys also varied. There were participants who favoured telephone surveys over filling in an online questionnaire as they felt, for example, an interviewer asking them “questions about what was lost” could help them remember more details about their experiences. A telephone survey was also seen to provide greater anonymity than face to face.
There were participants who felt telephone surveys had less anonymity than online and would be more cautious of what they said. Other concerns mentioned were:
- that the participant would not know who was calling them
- logistical issues around the participant’s availability
Again, participation in the survey would be dependent on the survey length, participants’ time, and the questions themselves. If the questions were “scripted”, they were concerned about not building a good relationship with the interviewer. Participants wanted to ensure the survey would be beneficial, and they wanted to know who was collecting the information.
4.6.3 Face-to-face surveys
Face-to-face surveys were seen as good for interaction. There were participants who said that face to face would feel safer emotionally, as they liked to “have somebody there” when talking about their experiences.
However, face-to-face surveys were seen as less desirable by participants who would “rather put pen to paper” (write down their answers), and by those who felt they were unnecessary because of the logistical difficulties (compared with telephone mode).
Another concern was contracting germs or illnesses from interviewers.
Privacy and confidentiality were also a concern as participants were worried that their address would be known, and they were “protective” over who enters their home. It was also viewed as an “awkward” experience, likened to a “police interrogation”.
Completion of a face-to-face survey was dependent on the following:
- the survey’s length and if the participant had time
- if it was more practical and provided more information than other modes
- if they were informed about the “benefits” of the survey, specifically if they would receive a monetary voucher and information on where to seek further support
- if they had a non-judgmental interviewer
4.6.4 Sensitivity of question content
There were participants who were willing to answer survey questions about any of their experiences. Others were reluctant to disclose sensitive crimes, such as sexual assault, in any mode because it was deemed too “personal”.
Those who would discuss such experiences said they preferred “talking” or felt “safer” with an interviewer present, whereas others felt it was “too personal” to share this information face to face.
Some reluctance was expressed to share anything on a survey that could compromise official processes, such as court proceedings.
5. User journeys
5.1 Overview
User journeys are a desk-based research activity, completed as part of a Respondent Centred Design (RCD) approach. Our user journeys work involved passing various crime experience scenarios through the current version of the Crime Survey for England and Wales (CSEW) questionnaire to understand the potential survey experience of respondents.
This work focused on the screener and victimisation modules, but there were also findings related to questions outside of these sections. We used the existing questions for the interviewer-led modes and focused on how they would work for a self-completion module.
The user journeys encompassed a range of crime experiences, from “simple” cases involving single incidents, to more “complex” cases of multiple victimisation, including repeat incidents and multi-feature incidents (MFIs). Traditional crimes, fraud and computer misuse, and actual and attempted crimes, were used in the scenarios.
The work we undertook also included elements of “user flows” (we use the term user journeys to cover both collectively). Previous literature outlines the use of user journeys and flows in user research, highlighting the differences and benefits of each approach (see User journeys versus user flows, Kaplan, 2023).
User journeys aim to:
- understand the experience of a user across many points of interaction
- contextualise actions with information about users’ emotions and thoughts
- analyse and optimise user experience
User journeys focus on a user’s high-level, holistic experience, whereas user flows describe separate, discrete interactions that make up a user pathway. This helped us to identify “sticking points” respondents may experience in the CSEW questions.
Once we had established our crime profiles (see below), we used user journeys to:
- imagine ourselves in a respondent’s position to anticipate their potential behaviour and reactions as they answer questions
- determine how easy or difficult it might be for respondents to complete the existing questions
- explore what potential problems may arise with question wordings and how questions could be interpreted
- evaluate potential causes of response error
- investigate how respondents may respond to the self-completion survey, without interviewer assistance
We also utilised user flows to dive more deeply into specific user journeys. For example, we explored pathways that participants with experiences of domestic abuse may take through the CSEW. This highlighted where respondents might interact with the survey differently from those without experiences of domestic abuse, such as how they complete the screeners and answer specific questions in the victimisation modules.
5.2 Creating user scenarios
The range of potential crime experience scenarios is very wide, given the number of different screener offences that could be experienced in any combination over a 12-month period. These could be experienced repeatedly, in a series and/or as separate incidents, or as part of an MFI. A few examples of user scenarios, from simple to more complex, include:
- a single incident of physical assault
- an incident of physical assault and threat at the same time, plus a separate incident of physical assault (repeat victimisation)
- two incidents of physical assault and threat at the same time, plus a separate incident of fraud
- an incident of physical assault and threat at the same time, plus two separate incidents of physical assault (series crime)
We did not set out to methodically check every question and route. To create a manageable number of realistic and plausible crime scenarios, we used the following three sources.
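The breadth of this combination space can be illustrated with a short, purely hypothetical Python sketch; the offence types and repetition patterns listed are examples, not the full screener set:

```python
from itertools import product

# Example offence types and repetition patterns; illustrative only
offence_types = ["physical_assault", "threat", "burglary", "fraud"]
patterns = ["single_incident", "repeat_victimisation", "series", "multi_feature_incident"]

# Even single-offence scenarios multiply quickly, before considering
# combinations of different offences over a 12-month period
single_offence_scenarios = list(product(offence_types, patterns))
print(len(single_offence_scenarios))  # 16 scenarios from 4 types x 4 patterns
```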
5.2.1 Mental models research
We used data from our mental models interviews to create user scenarios. This was to ensure some journeys were as close to real respondent experiences as possible. For example, the crime scenarios included different:
- respondent demographics, such as age, sex and ethnicity
- timings and dates of the crimes
- examples of multiple victimisation
- details of the crime
Examples of details of the crime include:
- location
- offender
- weapons and injuries
- type of threat
- method of entry for burglary
- what was stolen and where from
- cost of crime
- damage to property
- respondent responses to fraud attempts and accounts or personal details involved
5.2.2 Hypothetical and partly hypothetical scenarios
We created some hypothetical scenarios, as well as scenarios that were both part-hypothetical and part-informed by the mental models data. This was to cover a wide range of user journeys, including potential scenarios where we did not have any mental models data, and to ensure some scenarios had some basis in real respondent experiences. Where mental models data was used, scenarios were informed by:
- crime frequency
- crimes committed at previous and current addresses
- crimes against the household, vehicles and the person
- a mixture of perpetrators and circumstances, such as location and time
5.2.3 Multiple victimisation data
The user journeys were also informed by data drawn from the CSEW 2019 to 2020 dataset on multiple victimisation and components of crimes (further information can be found in Discovery Part 3: Redesign of the screener module). From our analysis of this dataset, we concluded that we should include experiences of multiple offence types, including fraud. This is because when multiple victimisation includes fraud, the unrelated traditional offences are most often threats and sexual offences, and less often theft of personal property and robbery.
The dataset also showed that examples of multiple victimisation included separate incidents of a sexual offence and common assault, of a sexual offence and wounding, and of a sexual offence and threats. We also saw that assault incidents commonly involve injury (especially common assault) and threats of violence (especially serious wounding).
5.3 Overall findings and considerations for redesign
This section describes some high-level findings. Most relate to individual questions within the victimisation modules, but some relate to the screeners.
5.3.1 General
Our user journeys work highlighted that the ordering of the victimisation module questions requires further consideration. At times, the questions may not make sense to respondents and could be considered insensitive. This is particularly true for the questions that ask if any other offence occurred at the same time as the screener question they answered “yes” to.
For example, respondents who have experienced sexual assault are asked if any property was stolen before asking if the incident involved assault, which may be considered more serious or impactful by the victim. Currently, these questions are asked after respondents have had the opportunity to describe the incident to the interviewer, which may mitigate this insensitivity in interviewer modes. However, further consideration should be given to prevent questions being repeated unnecessarily or insensitively ordered.
Another finding from our user journeys work that requires further thought is the victimisation module questions that could be considered victim-blaming. Currently, there are questions that ask the respondent about their alcohol consumption at the time of the incident and whether the respondent believed anyone other than the offender(s), including themselves, was responsible for what happened. In self-completion mode, where answer options would be presented rather than coded from a list by the interviewer (as occurs at many open questions), the wording of the question stem and options need to be carefully designed so they do not imply victim-blaming.
Lists of answer options might also need to be minimised for self-completion mode to reduce respondent burden and the risk of measurement errors, including satisficing. This applies both to open questions, which have many lengthy response options for an interviewer to code, and to closed questions where show cards are presented or response options are read out by the interviewer.
We also found questions that covered the same or similar topic, despite the respondent providing an answer during a previous question. This could seem repetitive or unnecessary to respondents. For example:
- respondents are asked if the crime occurred inside, immediately outside or near the home, even if they answer “no” to the crime occurring within a 15-minute walk of their home or “yes” to any household crimes, such as burglary
- respondents are asked if they were aware of the crime taking place, even if they previously answered “no” to being aware of a theft
- the question about whether the police know who committed the crime is asked even if respondents have already indicated that they themselves know who the perpetrator(s) are and have reported the crime to the police
- if respondents answer “no” to reporting the incident to the police or Action Fraud, they are asked if they have received a crime reference number
- when respondents answer “yes” to criminal damage to the home, the survey still asks if any property was damaged
At least some of the apparent repetition might be because some questions include guidance for the interviewer to “ask or record” at their discretion. Therefore, they can code accordingly if the answer is obvious from answers given earlier in the interview (for example, at the open text description). These issues will need to be considered for self-completion mode with the question wording and routing amended accordingly.
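One way to handle this in self-completion mode would be to infer answers from earlier responses and skip the questions they imply. The sketch below is a hypothetical illustration of that approach; the question identifiers and inference rules are assumptions for demonstration, not actual CSEW routing:

```python
# Hypothetical sketch: derive answers from earlier responses so that
# self-completion routing can skip redundant follow-up questions

def infer_answers(responses: dict) -> dict:
    """Pre-fill follow-up answers that earlier responses already imply."""
    inferred = dict(responses)
    # If criminal damage to the home was reported, property was damaged
    if responses.get("criminal_damage_home") == "yes":
        inferred.setdefault("property_damaged", "yes")
    # Illustrative rule: if the respondent knows the offender and reported
    # the crime, treat "do the police know who did it?" as answered
    if responses.get("knows_offender") == "yes" and responses.get("reported_to_police") == "yes":
        inferred.setdefault("police_know_offender", "yes")
    return inferred

def questions_to_ask(all_questions: list, responses: dict) -> list:
    """Only ask questions whose answer is not already known or inferred."""
    known = infer_answers(responses)
    return [q for q in all_questions if q not in known]
```

For example, a respondent who has already reported criminal damage to the home, knows the offender, and has reported the crime would only be asked the remaining questions, rather than repeating those whose answers are implied.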
5.3.2 Fraud and computer misuse
We found that the fraud and computer misuse screeners were not mutually exclusive, which may make it unclear which one the respondent should answer “yes” to.
We also found that if respondents experienced more than one fraud offence type, they are presented with a question which asks if they believe the crimes are “related”. If so, they would need to adjust the “how many times” count for the lower priority one. If they experienced more than one incident of any fraud offence type, another question asks if they are “similar”. This is asked regardless of whether they have answered “yes” or “no” to the “related” question.
These questions are complex, and answers may depend on respondents’ understanding of these potentially subjective and overlapping terms. For self-completion mode, the method of identifying related fraud types and fraud series will need to be approached differently to reduce respondent burden and potential error.
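As one possible approach, the count adjustment could be automated rather than asked of the respondent. The following Python sketch is a hypothetical illustration, assuming fraud offence types have a priority order and that "related" incidents should only be counted once, against the higher-priority type; the type names and priority order are invented for illustration, not taken from the Home Office Counting Rules:

```python
# Invented priority order, highest first; illustrative only
FRAUD_PRIORITY = ["bank_fraud", "advance_fee_fraud", "other_fraud"]

def adjusted_counts(counts: dict, related: bool) -> dict:
    """If two reported fraud types are 'related', deduct the overlap from
    the lower-priority type so incidents are not double counted.
    For simplicity, only the two highest-priority types are adjusted."""
    if not related or len(counts) < 2:
        return dict(counts)
    ordered = sorted(counts, key=FRAUD_PRIORITY.index)
    higher, lower = ordered[0], ordered[1]
    overlap = min(counts[higher], counts[lower])
    result = dict(counts)
    result[lower] = counts[lower] - overlap
    return result
```

Under this sketch, a respondent reporting two related incidents of "bank fraud" and three of "advance fee fraud" would have the lower-priority count reduced to one, with the overlap counted only against the higher-priority type.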
There are questions in the fraud module that appear to assume that perpetrators are unknown or a stranger. For example, “Did the person or people who did this contact you first or did you make contact first?” and “As far as you are aware, was the person or people who did it acting on behalf of a company or organisation that is still contactable now?”. This overlooks fraud where the perpetrator was known to the victim (such as part of domestic abuse). These question wordings may need amending, or additional questions adding, to ensure they are inclusive of different situations.
5.4 Potential future user journeys work
Future user journeys work could:
- create and test more scenarios with a further variety of compositions
- further explore multi-feature series (MF series), which are MFIs made up of the same combination of screeners occurring on more than one occasion, if the respondent says that they are related
- explore whether user journey maps, which show “a user’s journey through a service over time”, would be a beneficial research tool to elaborate on user journeys, with particular emphasis on domestic abuse and hate crime incidents (see Design in Government)
- run the same and new profiles through the redesigned survey
7. Cite this methodology
Office for National Statistics (ONS), released 8 August 2025, ONS website, methodology, Crime Survey for England and Wales Transformation – Discovery Part 2: mental models research