Evaluation of travellers processing through a GBA+ lens:
Appendix E: Evaluation methodology and data limitations

The CBSA Risk-Based Audit and Evaluation Plan for Fiscal Year (FY) 2018-2023 identified the Traveller Facilitation and Compliance Program as a priority for evaluation. The original evaluation scope included:

This scope was endorsed by the Performance Measurement and Evaluation Committee (PMEC) on .

The evaluation plan, which included the evaluation questions, was developed based on the evaluation scope and the model of the travellers continuum, in consultation with two groups:

Evaluation questions

Consultations with key stakeholders and a review of key documents during the planning stage helped refine the evaluation questions, ensuring that the evaluation would provide useful information for decision-making. The following evaluation questions focused on assessing how the Agency, its activities, and its outcomes impact diverse groups of travellers:

  1. To what extent does the scenario-based targeting (SBT) approach consider impacts on travellers, through a GBA+ lens, when targeting them?
  2. To what extent were GBA+ variables considered in traveller inspections at ports of entry in Canada between FY 2014-2015 and FY 2019-2020?
  3. To what extent does the Traveller Program consider the development and achievement of its outputs and outcomes through the GBA+ lens?
    1. To what extent is the Traveller Program developing communication products/delivering outreach from a GBA+ perspective?
    2. To what extent are admissible travellers satisfied with border processing (i.e., professionalism, courteousness, timeliness, and quality and service standards) from a GBA+ perspective?
    3. To what extent is the Traveller Program effective in processing admissible travellers according to established legislation and policies? Are admissible travellers subjected to minimal necessary intervention?
  4. How does the Traveller Program consider GBA+ variables between and within target population groups? Are diverse groups treated equitably by the Program?

Data collection methods/sources

Multiple data collection methods and sources were used, including:

  • document review
  • HR and operational data
  • semi-structured interviews with internal program stakeholders
  • survey data

The evidence that was collected based on the above-mentioned methods and sources was compiled and analyzed as a whole. The common themes that emerged from multiple lines of evidence contributed to the development of preliminary evaluation findings. These findings, alongside the evidence that informed them, were presented to the Working Group and the Evaluation Advisory Committee for review and input. The feedback from these consultations was incorporated, where relevant, into the final evaluation report and recommendations.

Document review

The document review took place throughout the evaluation project, from the planning to the reporting phases. It was used to inform the evaluation scope, plan, and questions. Over 100 documents were reviewed, including internal CBSA program documents. Documents were reviewed systematically and, where appropriate, evidence was compiled.

Operational data

The CBSA Program Evaluation Division (PED) collected and analyzed operational data from a variety of internal IT systems.

HR data

The analysis of HR data included participation data for the Diversity and Race Relations and Preventing Racial Profiling at the Frontline training courses, provided by the Human Resources Branch, as of .

Operational data

The analysis of operational data included data from the tracking sheets from the NTC, and the following CBSA IT systems:

  • COGNOS (CMRS)
  • SPPH
  • ICES

The operational data was provided to the CBSA PED by the CBSA's Strategic Policy Branch (SPB), the National Targeting Centre (NTC), and the Information, Science and Technology Branch (ISTB). The time period for the data originating from the CBSA's IT systems and the NTC varied due to availability and reliability. The time periods for each data source are as follows:

Table 13: Time periods for the operational data received

Operational data | Time period
NTC – Scenario performance data | FY 2014-2015 to FY 2020-2021
NTC – Accumulated tracking sheets | FY 2015-2016 to FY 2019-2020
COGNOS (CMRS) | FY 2015-2016 to FY 2020-2021
SPPH | FY 2014-2015 to FY 2020-2021
ICES – Seizures | FY 2018-2019 to FY 2020-2021
ICES – Personal searches | FY 2015-2016 to FY 2019-2020

A note on calculating the resultant rate for customs examinations

Calculating an accurate resultant rate is challenging due to the lack of data integration between the IT system that captures whether a customs exam occurred (i.e., the Secondary Processing Passage History, SPPH) and the system that records enforcement actions such as seizures (ICES). Each customs exam in SPPH could have one or more "resultant" records in ICES. Currently, this issue cannot be resolved.

When conducting a GBA+, the resultant rate can be extremely inflated for small subpopulations, which might give a false indication of the level of risk posed by these groups (for example, gender categories that are not male or female, or countries with a low percentage of travel volumes or customs exams, such as [*]). Because a single passage with a customs exam can have more than one resultant enforcement action, the rate can even exceed 100%. This is illustrated by the unknown or unspecified gender category, and by citizens of [*], who have a resultant rate greater than 100%.
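The inflation described above can be sketched with hypothetical counts (the group names and figures below are illustrative, not CBSA data): dividing the number of resultant records by the number of exams overstates the hit rate whenever one exam produces several resultant records, and the distortion is largest for small subpopulations.

```python
# Illustrative sketch (hypothetical data): why a resultant rate can exceed 100%.
# Each customs exam (one SPPH passage) may map to multiple "resultant" records
# in ICES (e.g. a seizure plus an arrest), so resultant records / exams is not
# a true proportion of resultant exams.

def resultant_rate(exams, resultants):
    """Resultant records per customs exam, as a percentage, by group."""
    rates = {}
    for group, n_exams in exams.items():
        n_res = resultants.get(group, 0)
        rates[group] = round(100 * n_res / n_exams, 1)
    return rates

# Hypothetical counts: group_B is small, and one of its two exams
# produced three resultant records (e.g. seizure + AMPS + arrest).
exams = {"group_A": 1000, "group_B": 2}
resultants = {"group_A": 150, "group_B": 3}

print(resultant_rate(exams, resultants))
# {'group_A': 15.0, 'group_B': 150.0}
```

Here group_B shows a 150% resultant rate even though only one of its two exams was actually resultant, which is the pattern flagged above for very small subpopulations.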

Table 14: Resultant rates by gender

Gender | Customs exam | Total resultant (seizures, AMPS, searches and arrests) | Resultant rate
Female | [*] | [*] | [*]
Male | [*] | [*] | [*]
Unknown or unspecified | [*] | [*] | [*]
Source: SPPH, ICES Data, FY 2018-2019 to 2020-2021.

Table 15: Resultant rates by citizenship

Citizenship | Customs exam | Total resultant (seizures, AMPS, searches and arrests) | Resultant rate
[*] | [*] | [*] | [*]
[*] | [*] | [*] | [*]
Source: SPPH, ICES Data, FY 2018-2019 to 2020-2021.

Semi-structured interviews with Government of Canada stakeholders

Interviews were conducted via teleconference with 15 internal program representatives.

Interviewees received semi-structured interview guides in advance, which included an outline of key issues and questions for discussion. Most interviews took place in . The interview data was then compiled and analyzed, and emerging themes were identified.

Survey

CBSA PED administered a survey to BSOs, superintendents and NTC targeting officers who had worked in the travellers stream within the last two years. The survey was launched on and concluded on . Survey responses were received from 922 frontline officers (footnote 35), a 20% response rate, and from 20 targeting officers, a 38% response rate. The structure and design of the survey ensured that respondents answered only the questions that were relevant to their roles and responsibilities with respect to the Traveller Continuum. The survey was designed in consultation with representatives of the Travellers Branch, the Intelligence and Enforcement Branch, the GBA+ Centre of Responsibility, the Indigenous Affairs Secretariat, the LGBTQ2+ Advisory Committee, and with Regional Directors General.
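The reported counts and rounded response rates imply the approximate sizes of the two surveyed populations, which is relevant to the small-subpopulation limitation discussed later. A minimal sketch (function name is ours, and results are approximate because the reported rates are rounded):

```python
# Back-of-envelope check: approximate number of officers surveyed,
# recovered from the reported response counts and rounded response rates.

def implied_population(responses, response_rate):
    """Approximate number surveyed, given responses and a (rounded) rate."""
    return round(responses / response_rate)

# Figures reported above: 922 frontline officers at a 20% response rate,
# and 20 NTC targeting officers at a 38% response rate.
print(implied_population(922, 0.20))  # 4610
print(implied_population(20, 0.38))   # 53
```

The NTC targeting officer population of roughly 50 is why even a 38% response rate yields only 20 responses, limiting how finely those results can be disaggregated.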

Key limitations identified during data collection and analysis

Some limitations and challenges were identified during the data collection and analysis process.

Table 16: Limitations and mitigation strategies

Operational data

Limitation
  • Inability to conduct a comprehensive GBA+ of the entire traveller continuum.
  • The level of time, effort, and expertise required to obtain certain data for which no sufficient reporting tools are in place.
  • There were a number of evaluation indicators for which data were not available. As a result, the evaluation was unable to comment on these areas.
  • Prior to , referral data for the highway mode did not distinguish between referral types. Data was only available for referral and release decisions.
  • DSO request for only three years of ICES data in the air mode.
  • The Travellers Branch monitored rover referrals and results for about a year, after the issuance of the PIK roving note in . Data for rover referrals improved after .
  • Lack of standard definitions for certain data elements and/or data dictionaries.
Mitigation strategy

The evaluation continued to seek stakeholder feedback on preliminary findings. Through this process, more data became available to the evaluators. However, there were several evaluation indicators for which data were not available, and the evaluation was unable to comment on these areas.

Throughout the report there are suggestions of areas where stakeholders could conduct further investigation. Evaluation Recommendation 3 addresses this concern by recommending the creation of an action plan, which includes options to:

  • increase standardization and accessibility in data dictionaries, business definition, and mapping
  • assess the agency's current resource capacity for data analytics in the commercial stream (e.g. subject matter expertise, data fluency, and analytical and technical competency)

Evaluation survey

Limitation
  • Small population of NTC targeting officers
  • The survey excluded any respondents who were not currently BSOs, superintendents, or NTC targeting officers, even if they had recently held those positions
  • Low response rate for certain subpopulations
  • Response biases / non-response biases
Mitigation strategy

The evaluation presented results on the national level, and did not disaggregate responses by region, mode, or demographic characteristics. At the national level, a response rate of 20% was achieved.

In recognition of potential response biases and the 80% non-response rate, the evaluation team used survey data in conjunction with other lines of evidence and clearly indicated the proportions and absolute numbers of respondents specific to the results presented. The evaluation team was aware of possible over-represented and under-represented groups among those who responded or did not respond to the survey.

Interviews

Limitation

Only 15 formal interviews were conducted with internal program representatives.

Mitigation strategy

Interview data is only presented alongside other lines of evidence. The evaluation team also mitigated this issue through regular and frequent consultations with program subject matter experts.

Complaints data

Limitation

Complaints data was available from only three airports, between and , and only in the air mode.

Mitigation strategy

Complaints data is only presented alongside other lines of evidence. Absolute numbers and proportions are clearly identified when these data are presented.

COVID-19

Limitation
  • The evaluation was put on hold for two months as a result of the pandemic
  • The pandemic prevented the evaluation team from conducting field observations at ports of entry
  • Travellers stream trends changed dramatically following the implementation of COVID-19 travel restrictions ( to present)
Mitigation strategy

The evaluation has identified the overall impact of COVID-19 on volumes and enforcement activities.

The lack of field research was supplemented with open-text and closed-ended survey response data from 942 respondents (BSOs, superintendents, and NTC targeting officers). Regional representatives were also included on the evaluation working group and the EAC.
