Managing risk and harm in research ethically

- Wits University

Risks and harm in research must be limited and mitigated to protect participants’ well-being and dignity.

Ethical research demands that participants are not harmed and that their rights and dignity are respected.

The Wits Research Integrity Office, in partnership with the Southern African Research & Innovation Management Association (SARIMA) and the Carnegie Council for Ethics in International Affairs, hosted a virtual conference on 5 June 2023 to mark the 3rd Annual Global Ethics Day, themed Risk and Harm.

The concept of ‘ethics’ is living and fluid, with communities of practice constantly considering what ethical research entails and how research ethics can adapt to the changing needs of researchers and global societies. This ensures the sustainability of research practices, builds trust with participants and broader society, and benefits humanity.

Eleni Flack-Davison, Head: Wits Research Integrity Office, legal advisor and research compliance manager in the Wits Research Office, noted that the annual Global Ethics Day allows researchers to deepen their knowledge of risk and harm mitigation.

“This online Global Ethics Day comes after the Cape Town Statement was published at the 7th World Conference on Research Integrity [WCRI] in 2022. Our practice demands our consistent fostering of integrity.”

Professor Lynn Morris, Deputy Vice-Chancellor: Research and Innovation at Wits, said that research is critical to answering increasingly complex questions, but “it is important to pay attention to how we do that.”

“In clinical research studies, participants literally give life and body in aid of research and innovation. We need to make sure there is no potential for exploitation. Of course, we have research ethics committees and ethics clearance protocols but we try and go beyond box-ticking; we need to keep talking about this, keep innovating,” said Morris.

Flack-Davison introduced the local and international guest speakers.

Building public trust

Adjunct Associate Professor Zoë Hammatt, from the University of Hawaii School of Medicine, spoke about some of the risks to research ethics and integrity. She highlighted the trustworthiness of research as essential. She referred to the 2010 WCRI Singapore Statement on Research Integrity, which prizes honesty in all aspects of research, accountability in the conduct of research, professional courtesy and fairness in working with others, and good stewardship of research on behalf of others.

Hammatt highlighted responsible data management practices as a way to ensure research integrity, drawing on the example of archaeological studies conducted in connection with commercial construction projects in Hawaii. By meticulously following legal and ethical requirements and securing approvals, collecting and treating the data in accordance with established protocols and cultural practice, communicating findings, and mentoring team members, the researchers modelled exemplary behaviour in fostering integrity.

Hammatt emphasised that cultural sensitivity is vital to ensuring trust in similar contexts. In the archaeology case study, the researchers engaged with community members and held educational talks with children to increase their understanding of preserving historical assets that form an integral part of their culture.

Risks and harm in non-medical research

Professor Jasper Knight, of the Wits Human Research Ethics Committee (Non-Medical), critically examined the relationship between risk and harm and its relevance to non-medical research.

“Risk refers to the likelihood of a negative outcome, which can arise in numerous ways. It could stem from the conduct of a researcher, the interactions between researchers and participants, or the collection and processing of data. In the social sciences, this makes risk difficult to calculate,” he said.

Knight suggested implementing a risk management strategy which entails applying logical and systematic methods for identifying and managing risks.

He considered whether a ‘no-risk’ study exists. “We assume that there is a risk if a study is being done. What, however, is the cause?”

To mitigate risk and harm, risk should therefore be managed throughout the lifecycle of a research study.

Managing emotional distress in research contexts

Dr Preven Naidoo and Associate Professor Katherine Bain, of the Wits Department of Psychology, noted that research participants might include trauma survivors. Researchers themselves may also experience distress and be exposed to secondary trauma throughout their research. It is thus imperative to manage distress to create a more compassionate and resilient research community.

Emotional distress, however, is a complex phenomenon, and researchers should identify and acknowledge healthy emotional expression. “Negative feelings are not an indication of psychological harm. Many participants say that despite their initial discomfort, they have benefitted from research,” said Bain.

Naidoo said that a researcher should validate the suffering of a research participant and that finding meaning is key to psychological strength. “We need to move beyond notions of categorical sensitivity and vulnerability. These often violate principles of autonomy, justice and non-maleficence,” he said, the last referring to the obligation not to harm the participant.

Risk and harm in the Artificial Intelligence and ChatGPT era

Dr Martin Bekker, computational social scientist in the School of Electrical and Information Engineering at Wits, spoke about the potential for risk and harm at a time when Artificial Intelligence (AI) could ostensibly accomplish any intellectual task that humans can perform.

This so-called human-competitive intelligence had hitherto been somewhat theoretical, but the advent of advanced large language models such as ChatGPT has introduced new classes of societal risks and ethical considerations. He discussed a call among AI ethicists to ‘pause’ AI development for six months.

Among other concerns, Bekker explained how some large language models have been built on intellectual property infringement, trained using labour exploitation and human rights violations, and have come with alarming environmental costs.

Bekker noted several use cases for ChatGPT and large language models, but said these ought to be used with an appreciation for their many flaws and accompanied by a wider drive towards AI literacy.

Other topics discussed on the day included biocontainment research in times of uncertainty, animal research ethics, child research ethics, the implications of vicarious liability under the Protection of Personal Information Act (POPIA), institutional biosafety in academic and research settings, and the role of the Wits Institutional Biosafety Committee.

Click the link for the programme and complete recording of the conference: