AI literacy gap threatens UK laboratory workforce competence
UK laboratory professionals are increasingly using artificial intelligence tools without adequate understanding of their functionality or associated risks, according to a new industry report examining AI implementation in scientific research environments.
Widespread adoption outpaces technical comprehension
The report, presented by LAB Innovations in collaboration with the Institute of Science and Technology (IST), highlights a concerning disconnect between AI tool adoption and user competence. An Oxford University Press study referenced in the report found that more than 75 per cent of researchers now use some form of AI tool in their research, yet barely a quarter report having a good understanding of AI tools in general.
Dr Marie Oldfield, AI Lead at the IST, and Joan Ward, Deputy Chair and Finance Officer of the IST, identify outdated Quality Assurance Agency Subject Benchmark Statements and inadequate school curricula as fundamental barriers preventing British scientists from developing the foundational knowledge required to understand AI technology effectively.
The authors note that training programmes frequently fail to address specific, current ethical and technical questions relevant to AI professionals and laboratory personnel. Data privacy represents a particular concern, as without rigorous standards, regulatory and legal frameworks can be easily breached when working with large datasets and personal information.
Funding shortages impede workforce development
For a decade, the IST has petitioned successive UK governments for budget allocation and protected time to educate technical staff, without significant change materialising. Although poor-quality AI tool implementation creates substantial long-term financial costs, current funding remains insufficient to build the structural foundations required for a well-trained and confident workforce.
The report emphasises that without AI literacy, laboratory personnel and their work become vulnerable to mistakes that ultimately undermine trust in the technology they have been directed to use. The literacy gap extends beyond practical skills to encompass a fundamental shift in mindset: new attention-grabbing tools often consume funding that could deliver greater return on investment by automating administrative work and freeing scientists’ time and energy.
Accreditation programmes offer structured development pathway
The IST’s Registered Technician in AI (RTechAI) accreditation programme provides scientists with a deeper understanding of how to question, interpret and responsibly use AI in their work. The former Institute for Apprenticeships and Technical Education, now Skills England, has adopted a more strategic approach for young people, covering machine learning and AI in real laboratory applications in Level 3 and Level 5 qualifications.
However, older staff risk being left behind. The authors argue that continuous professional development and structured accreditation should become standard throughout career paths, as they are in the medical and engineering professions. Because both participation in such development and employer funding for it remain voluntary, its perceived importance is diminished.
Risk of workforce exclusion
The report warns that failure to address the AI skills gap could result in large portions of the workforce, disproportionately women and older personnel, being excluded from the artificial intelligence transition. The IST urges the laboratory sector to invest in literacy, training and professional accreditation to ensure AI strengthens rather than destabilises essential scientific work.
Maya Carlyle, Principal AI Engineer at the National Physical Laboratory, suggests the future of AI in technical settings may diverge significantly from current large language model approaches, potentially favouring smaller specialist models trained to excel at specific problems. Duncan Lugton, Head of Policy and Impact at the Institution of Chemical Engineers, addresses complementary cybersecurity concerns, noting that 43 per cent of all UK businesses and nearly three-quarters of large firms experienced cyber-attacks in the past year, with AI tools accelerating these risks.
The report concludes that responsible AI implementation demands rigorous attention to security, accuracy and reproducibility, with scientific rigour and robust cybersecurity underpinning any adoption strategy.