The use of AI in mental health practice holds much promise in both clinical and administrative aspects, but clinicians must safeguard against ethical perils.
Related articles: Mental Health Ethics in the Digital Era, Ethical Decision-Making in Mental Health, Ethical Use of Social Media in Mental Health Practice.
Jump to section
- Introduction
- Understanding AI in the mental health landscape
- The ethical promise of AI
- The ethical perils: Bias, opacity, and accountability
- Informed consent and transparency in AI-assisted care
- Professional competence and oversight
- Emerging standards, policies, and global guidance
- Toward ethical futures in AI and mental health practice
- Conclusion
- Key takeaways
- Questions therapists often ask
- References
Introduction
Artificial intelligence (AI) and other emerging technologies are transforming the landscape of mental health care. From digital chatbots and predictive analytics to virtual reality therapies and wearable devices, these innovations promise new pathways to accessibility, efficiency, and personalised care. Yet, alongside these opportunities come profound ethical questions: How do we protect privacy, prevent bias, and ensure that technology supports rather than supplants human connection?
In this third and final article of our series (the previous articles covered ethics in the digital age and ethical use of social media), we examine both the promise and the peril of AI in clinical and organisational contexts, guiding you to make thoughtful, ethically grounded decisions about how such tools are adopted and explained to clients.
After identifying how AI is currently being used in mental health contexts, we help you evaluate the ethical opportunities and risks presented by AI and related technologies, especially in relation to confidentiality, bias, transparency, and accountability. You will learn to apply ethical principles and standards when integrating AI tools into practice and to formulate strategies for transparent communication with clients about AI-assisted tools. Our overarching principle is that technological progress must always serve, and never replace, the compassionate core of mental health practice.
Understanding AI in the mental health landscape
Artificial intelligence (AI) refers broadly to computational systems capable of tasks that normally require human intelligence – such as recognising patterns, interpreting language, or learning from data to make predictions or recommendations (Russell & Norvig, 2021). While the idea of intelligent machines once belonged to science fiction, AI now shapes daily life – from search engines and voice assistants to healthcare decision-support systems. In mental health care, the rise of AI has been particularly striking, offering new ways to deliver, augment, and evaluate psychological support while simultaneously raising complex ethical questions.
Current uses in mental health: Emerging technologies and integration
Beyond AI algorithms, emerging technologies such as virtual reality (VR), augmented reality (AR), and wearable sensors are expanding therapeutic possibilities. VR exposure therapy has shown strong results for treating phobias, post-traumatic stress disorder (PTSD), and social anxiety by providing immersive environments for gradual desensitisation (Maples-Keller et al., 2017). Wearable devices, from smartwatches to specialised biosensors, can track physiological indicators such as heart rate variability or sleep patterns, enabling real-time monitoring of mental wellbeing (Mohr et al., 2017).
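To make the wearable example concrete, here is a minimal sketch, assuming RR intervals (in milliseconds) exported from a consumer device, of RMSSD, a standard time-domain heart-rate-variability metric; the function and sample values are illustrative only, not a clinical tool.

```python
import math

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences (RMSSD), a common
    time-domain heart-rate-variability metric, computed from RR
    intervals in milliseconds (e.g., as exported by a smartwatch)."""
    if len(rr_intervals_ms) < 2:
        raise ValueError("need at least two RR intervals")
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative values only: RMSSD for a short series of RR intervals (ms)
print(round(rmssd([812, 845, 790, 860, 830]), 1))  # ~49.8 ms
```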
Digital phenotyping – the continuous collection of behavioural and physiological data via smartphones – is also emerging as a research and clinical tool (Onnela & Rauch, 2016). However, as these technologies evolve, so too must the ethical reflection guiding their use. Issues of privacy, consent, bias, and data ownership loom large, particularly when sensitive psychological information is collected and processed by proprietary systems.
AI is already embedded in mental health practice in several ways. Chatbots such as Woebot, Wysa, and Tess use natural language processing to simulate therapeutic dialogue, often based on cognitive-behavioural therapy (CBT) principles (Fulmer et al., 2018). These tools can provide low-intensity emotional support, psychoeducation, or triage between sessions, making mental health resources more accessible and affordable. Similarly, predictive algorithms are being developed to detect risk markers for depression, anxiety, or suicidality through speech analysis, social media content, or wearable data (Jacobson et al., 2020; Inkster et al., 2018).
On the clinical operations side, AI-driven tools assist practitioners by automating note-taking, transcribing sessions, and analysing progress data. Some systems even attempt diagnostic support by comparing patient profiles with large datasets to suggest possible treatment directions, and AI also has a role in training mental health professionals and in creating culturally responsive education programs (Marwala, 2024). Such tools are designed to support – not supplant – the professional’s judgment, potentially freeing up time for human connection and reflective practice.
AI in support of clinicians vs AI as clinician
A crucial ethical distinction lies between AI in support of clinicians and AI as clinician. The former refers to systems designed to augment human expertise – for example, assisting with assessment or treatment planning. The latter envisions AI delivering therapy independently, a scenario that raises fundamental questions about empathy, accountability, and the therapeutic alliance (Blease et al., 2019). Mental health care depends not only on accurate information but also on relational trust and human presence – dimensions that AI cannot authentically replicate.
Safeguarding the human core
The central question for mental health professionals, then, is how to harness innovation while safeguarding the human core of therapeutic practice. AI’s capacity to enhance accessibility and efficiency must be balanced with vigilance against depersonalisation and ethical drift. Practitioners, educators, and organisations have a shared responsibility to ensure that technological adoption strengthens, rather than weakens, the humane and ethical foundations of mental health care.
The ethical promise of AI
Artificial intelligence holds significant promise for advancing mental health care when implemented ethically, competently, and with appropriate human oversight. Grounded in the ethical principle of beneficence – the obligation to promote well-being and act for the good of clients – AI has the potential to enhance access, quality, and consistency in psychological support (Beauchamp & Childress, 2019). When developed responsibly, it can amplify clinicians’ capacity to deliver compassionate, effective care rather than diminish it.
Expanding accessibility
One of the most celebrated benefits of AI in mental health is its potential to increase accessibility. Digital platforms, including AI-driven chatbots and teletherapy tools, can reach individuals who might otherwise face barriers such as cost, stigma, geography, or clinician shortages (Inkster et al., 2018; WHO, 2021). For instance, AI-based self-help systems like Wysa and Woebot offer 24/7 support at low or no cost, often in multiple languages and cultural contexts. These systems can serve as early entry points to care, providing psychoeducation, self-guided cognitive-behavioural strategies, or crisis triage (Fulmer et al., 2018).
In low-resource or rural communities – where mental health professionals are scarce – AI-powered mobile interventions can bridge critical service gaps. Although these tools cannot replace human therapists, they can provide scalable, immediate assistance that complements traditional care models (Tompa, 2025). When responsibly designed and monitored, such innovations align with beneficence by expanding the reach of care without compromising safety.
Improving efficiency and clinician focus
AI’s capacity to automate administrative or repetitive tasks can improve the efficiency of clinical work. Automated transcription, progress tracking, and scheduling systems reduce paperwork, enabling practitioners to devote more time to direct client engagement (Esteva et al., 2019). This reallocation of time enhances the therapeutic relationship, which remains central to effective psychological intervention.
In institutional or organisational contexts, AI can assist with workflow management, outcome measurement, and resource allocation – supporting clinicians and administrators in making data-informed decisions that enhance service quality (Topol, 2019). When properly integrated, these systems uphold both beneficence and non-maleficence by improving care delivery without undermining professional autonomy or compassion.
Personalisation and early detection
Through data analytics and machine learning, AI systems can recognise subtle patterns in language, behaviour, or physiological signals that may indicate early signs of relapse or crisis (Jacobson et al., 2020). This predictive capacity allows for personalised interventions – tailored to a client’s unique presentation and history – enhancing both effectiveness and responsiveness. For example, digital mood-tracking applications can alert users or clinicians to significant shifts in mental state, facilitating timely support before crises escalate (Mohr et al., 2017).
Such precision aligns closely with ethical principles of care and prevention. Yet, it also demands robust validation and continual monitoring to ensure that predictions are reliable, fair, and free from bias.
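To ground the mood-tracking example, here is a minimal sketch of the kind of rolling-baseline alert logic such an application might use, assuming daily self-rated mood scores; the window and threshold are illustrative only, which underlines why robust validation of any real tool is essential.

```python
from statistics import mean, stdev

def mood_alert(scores: list[float], window: int = 14, z_threshold: float = 2.0) -> bool:
    """Flag a significant shift when today's self-reported mood score
    deviates from the rolling baseline by more than z_threshold
    standard deviations. Window and threshold are illustrative, not
    clinically validated."""
    if len(scores) <= window:
        return False  # not enough history to form a baseline
    baseline, today = scores[-(window + 1):-1], scores[-1]
    sd = stdev(baseline)
    if sd == 0:
        return today != mean(baseline)
    return abs(today - mean(baseline)) / sd > z_threshold

history = [6, 7, 6, 6, 7, 5, 6, 7, 6, 6, 5, 6, 7, 6, 2]  # sharp drop on day 15
print(mood_alert(history))  # True: the drop exceeds normal baseline variability
```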
Promoting consistency and evidence-based practice
AI systems can reinforce adherence to evidence-based protocols by standardising aspects of assessment, documentation, and intervention delivery (Blease et al., 2019). Consistency in application helps reduce human error and variability while ensuring that interventions align with best practice guidelines. For training and supervision, AI-enabled feedback tools can support practitioner learning by analysing session data and offering constructive insights.
However, beneficence here depends on the integrity of the underlying models: only validated, secure, and professionally supervised systems can be considered ethically sound. Transparency about limitations and continuous human oversight remain non-negotiable.
Balancing promise with prudence
Ultimately, the ethical promise of AI rests not on technological sophistication alone, but on how responsibly it is used. When guided by beneficence and grounded in competence, AI can act as a force multiplier for empathy, access, and clinical excellence. It is not a replacement for the therapeutic relationship but a set of tools that, under human stewardship, can extend its reach and reliability. As the field evolves, mental health professionals must ensure that innovation serves humanity – not the other way around.
The ethical perils: Bias, opacity, and accountability
The ethical promise of artificial intelligence is inseparable from its perils. When applied to mental health care, AI systems can magnify existing inequities or create new forms of ethical risk. The core principles of justice, autonomy, and non-maleficence require mental health professionals to scrutinise how these tools are built, validated, and deployed (Beauchamp & Childress, 2019). Beneath the polished interface of any intelligent system lies a chain of human choices – about data, design, and deployment – that determine whether technology ultimately serves or harms.
Bias and fairness
AI learns from data generated within human societies, and those data often mirror systemic inequalities. If training datasets over-represent particular linguistic, cultural, or diagnostic profiles, the resulting algorithms can reproduce bias in subtle but consequential ways (O’Neil, 2016; Obermeyer et al., 2019). For example, sentiment-analysis models may misinterpret expressions of distress from culturally diverse clients, or predictive risk tools might underestimate the needs of under-represented populations. In the therapeutic context, such distortions threaten the ethical principle of justice, which demands fair and equitable treatment for all clients (Gebru et al., 2021).
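One concrete way to probe such distortions is a subgroup audit. The sketch below, using hypothetical data, compares false-negative rates of a binary risk screen across demographic groups; a markedly higher rate for one group means its members are disproportionately missed by the tool.

```python
from collections import defaultdict

def false_negative_rates(records):
    """Per-group false-negative rate for a binary risk screen: the
    share of truly at-risk clients the tool failed to flag. Each
    record is (group, actual_at_risk, predicted_at_risk)."""
    missed, at_risk = defaultdict(int), defaultdict(int)
    for group, actual, predicted in records:
        if actual:
            at_risk[group] += 1
            if not predicted:
                missed[group] += 1
    return {g: missed[g] / at_risk[g] for g in at_risk}

# Hypothetical audit data: (demographic group, actual risk, tool's flag)
audit = [("A", True, True), ("A", True, True), ("A", True, False),
         ("B", True, False), ("B", True, False), ("B", True, True)]
print(false_negative_rates(audit))  # {'A': 0.33, 'B': 0.67}: group B under-flagged
```

An audit like this is only a starting point: equitable performance also requires examining false positives, calibration, and the representativeness of the training data itself.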
Opacity and explainability
Many AI systems operate as “black boxes,” using complex neural-network architectures whose inner logic cannot easily be explained – even by their developers (Burrell, 2016). For practitioners, this raises the question: Can one ethically rely on a recommendation that cannot be interpreted or justified? In clinical settings, decisions must be transparent enough for professionals to articulate their rationale to clients and colleagues. Explainability is a prerequisite for informed consent and accountability (European Commission, 2019).
Accountability gaps
When harm arises from an AI-assisted decision – misclassification, breach of privacy, or inappropriate triage – responsibility can become diffused. Is it the developer who built the system, the clinician who used it, or the organisation that deployed it? Without explicit governance structures, accountability risks evaporating into a digital ether (Floridi et al., 2018).
Ethical frameworks in mental health must therefore delineate clear chains of responsibility: practitioners for their use, organisations for oversight and validation, and developers for transparent, safe design. Accountability is not optional – it is the moral anchor that maintains trust between clinician and client.
Data privacy and surveillance
AI systems rely on large datasets, often containing sensitive personal information. When mental health data – among the most intimate forms of personal information – are stored or shared across platforms, risks of misuse, unauthorised access, or re-identification increase (WHO, 2021). Ethical data stewardship requires informed consent, data minimisation, and clear communication about how data are processed and stored (Australian Psychological Society, 2023b).
Psychological and relational risks
Beyond technical risks lie subtler psychological ones. Overreliance on AI may erode core therapeutic qualities – empathy, presence, and spontaneity (Ritvo, 2025). If clinicians begin to trust algorithms more than their relational intuition, the therapeutic alliance could weaken. Ethical vigilance therefore involves attending not only to technical accuracy but also to the human experience of care. AI should augment, not replace, the empathic engagement at the heart of psychological practice.
Informed consent and transparency in AI-assisted care
Artificial intelligence has introduced new layers of complexity to the traditional concept of informed consent in mental health practice. In conventional settings, consent focuses on ensuring that clients understand the nature, purpose, and potential risks of a therapeutic process. In AI-assisted care, however, the boundaries of that process widen: algorithms may analyse client data, assist in assessment, or influence treatment recommendations. Consequently, ethical practice demands that consent be expanded and continuously reaffirmed to reflect these new dimensions (WHO, 2021).
The expanded meaning of informed consent
Clients have a right to know when, how, and why AI is used in their care. This includes disclosure of whether automated tools contribute to assessment, record-keeping, or therapeutic interventions (APS, 2023b). Ethical consent processes should explicitly address how data are collected, stored, and processed; what algorithms do with those data; and what risks might arise, such as data breaches or misinterpretation. Practitioners should explain these systems in plain, accessible language, avoiding technical jargon, and treat consent as an ongoing dialogue, not a one-time signature (Beauchamp & Childress, 2019).
Supporting client autonomy
The ethical principle of autonomy requires that clients maintain meaningful control over their participation. They should be free to opt out of AI-based tools or request human-only alternatives wherever feasible, without fear of reduced quality of care. While some institutional systems may make AI integration unavoidable (e.g., digital recordkeeping), clinicians can still provide clear explanations of how automation functions within the broader therapeutic ecosystem.
Informed consent also includes empowering clients to understand the limitations of AI: that predictive models are probabilistic, not definitive. This transparency helps preserve client agency (European Commission, 2019).
Disclosure and transparency essentials
Transparency requires clarity not only about what technology is being used, but also how it is used. Practitioners should inform clients if AI contributes to diagnostic suggestions, note-taking, or progress analysis, and clarify what data are shared with external vendors (Floridi et al., 2018). This includes details about encryption, anonymisation, and data retention policies.
Where external platforms or cloud-based services are used, clients must be told who controls the data, how long it will be retained, and what rights they have to access or delete their information (APS, 2023b). In the absence of such clarity, consent cannot be genuinely informed.
Maintaining trust in the digital therapeutic alliance
In the digital era, informed consent and transparency are not bureaucratic checkboxes but relational practices. Clients who understand how AI functions are more likely to feel respected and safe, even amid uncertainty. By embedding openness, dialogue, and client choice into technological integration, mental health professionals uphold the foundational ethics of respect, beneficence, and integrity – ensuring that technology serves as a partner in healing, not a silent authority.
Professional competence and oversight
As artificial intelligence and other emerging technologies become increasingly integrated into mental health practice, ethical competence must evolve to meet new professional realities. The traditional expectation that practitioners maintain up-to-date clinical skills now extends to the tools and systems they use. The ethical principle of competence – anchored in both professional standards and duty of care – requires that mental health professionals understand, evaluate, and critically monitor AI-assisted tools to ensure they are used responsibly (APS, 2023a, 2023b, 2024; Beauchamp & Childress, 2019).
Evolving definitions of competence
Competence in the digital era encompasses more than clinical expertise; it includes technological literacy and ethical discernment. Practitioners who incorporate AI into their work are ethically obliged to understand at least the basic functioning, data sources, and limitations of the systems they employ (APA, 2024). Using a tool without adequate understanding risks not only technical errors but ethical violations – particularly if the system’s outputs are taken at face value without critical scrutiny. The question that every professional should ask before adopting any AI system is: “Do I understand this technology well enough to ethically justify its use?” If not, continued training is de rigueur.
The learning obligation
The ethical obligation to learn applies at both individual and organisational levels. Clinicians should pursue continuing professional development (CPD/OPD) opportunities focusing on digital ethics, AI literacy, and data governance. Similarly, organisations must provide structured training, ensuring that staff understand how technological tools intersect with privacy law, informed consent, and cultural safety (WHO, 2021).
Competence also involves recognising when AI is not appropriate. For example, if a system’s training data exclude certain cultural or linguistic groups, its use may inadvertently produce inequitable outcomes. Responsible practitioners must therefore evaluate whether a tool’s design aligns with the needs and values of their client base (Gebru et al., 2021).
Supervision and oversight
AI must always operate under human supervision, never as an autonomous decision-maker. Ethical integration means using AI to support – not replace – clinical judgment. Practitioners retain ultimate accountability for how algorithmic insights are interpreted and applied (European Commission, 2019).
Regular review of AI tools in supervision, peer consultation, or ethics committees helps maintain oversight of system performance, bias, and unintended consequences (such as “automation bias” on the professional’s part). Such forums can also foster collective learning, encouraging professionals to share both benefits and pitfalls encountered in digital practice (Floridi et al., 2018).
Ethical documentation and transparency
Competent and transparent documentation forms part of ethical oversight. Practitioners should record the rationale, scope and purpose, client consent, and any limitations of adopted AI tools (see the sketch below); this supports ethical accountability and protects both clients and practitioners should questions later arise about decision-making. Ultimately, professional competence in AI-assisted practice is not about mastering algorithms but about stewardship – using technology wisely, transparently, and under vigilant human judgment.
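The sketch below illustrates one way such a documentation record might be structured; the fields and the tool name are hypothetical, offered as a starting point rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    """Illustrative fields for documenting AI use in a client's file,
    mirroring the elements named above; not a prescribed schema."""
    tool_name: str
    purpose: str            # scope and purpose of use
    rationale: str          # why this tool, for this client
    consent_obtained: bool  # client consented after plain-language explanation
    consent_date: date | None
    known_limitations: list[str] = field(default_factory=list)

record = AIToolRecord(
    tool_name="SessionScribe (hypothetical)",
    purpose="Automated session transcription",
    rationale="Reduce note-taking time; increase in-session presence",
    consent_obtained=True,
    consent_date=date(2025, 11, 11),
    known_limitations=["May mis-transcribe accented speech",
                       "Cloud storage involved"],
)
```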
Emerging standards, policies, and global guidance
As AI becomes more embedded in mental health care, ethical practice cannot rely on individual discretion alone. Professionals increasingly look to emerging international frameworks and professional guidelines to ensure responsible and transparent integration of technology. These evolving standards emphasise the shared responsibility of clinicians, organisations, developers, and policymakers in safeguarding client welfare.
Professional associations
Professional bodies are taking a proactive stance in shaping ethical expectations for AI use in mental health. The American Psychological Association (APA) and National Association of Social Workers (NASW) have begun developing guidelines emphasising practitioner competence, transparency, and accountability in technology-assisted care (APA, 2024; NASW, 2025). The British Psychological Society (BPS, 2023) and British Association for Counselling and Psychotherapy (BACP, 2018) advocate the “human-in-the-loop” principle – asserting that humans must always retain oversight of AI systems, particularly in clinical decision-making.
Regulatory frameworks
At the policy level, the European Union Artificial Intelligence Act represents the world’s most comprehensive regulatory framework for AI to date. It classifies AI systems used in health and mental health contexts as “high-risk”, requiring stringent testing, documentation, and human supervision before deployment. The Act mandates transparency, data protection, and accountability measures – key safeguards for sensitive domains such as psychological care (European Parliament, 2024).
Globally, the World Health Organization (WHO) has issued ethical principles for AI in health, including autonomy, safety, transparency, accountability, and sustainability (WHO, 2021).
Organisational policies and local governance
While international guidance provides the scaffolding, implementation depends on local governance. Mental health organisations are encouraged to establish internal AI ethics policies that include regular audits of bias and security, clear informed-consent procedures, documentation standards for AI-assisted decisions, and mechanisms for client feedback and grievance redress (Floridi et al., 2018). These protocols ensure that AI use aligns not only with external laws but with the organisation’s own ethical culture.
Ethical foresight and advocacy
Ethical foresight means anticipating tomorrow’s challenges today. Mental health professionals should engage in policy dialogue and advocacy, contributing their expertise to the formation of new standards rather than waiting to be governed by them. As frontline users of technology, clinicians can help ensure that regulatory and ethical frameworks remain grounded in real-world therapeutic practice.
Toward ethical futures in AI and mental health practice
Artificial intelligence and emerging technologies are reshaping what it means to care, to connect, and to act ethically in the helping professions. The task for mental health practitioners is not to resist this evolution but to shape it deliberately, ensuring that technology enhances rather than eclipses the human dimensions of practice. Ethical futures are not technological inevitabilities – they are moral choices, shaped through reflection, collaboration, and courage.
From compliance to ethical imagination
The future of AI in mental health will depend less on technical capacity than on ethical imagination – the ability to foresee consequences, empathise with diverse experiences, and innovate with integrity. While compliance with professional standards and laws remains essential, ethical excellence requires going beyond minimal requirements to envision the kind of digital care that embodies compassion, justice, and dignity (Beauchamp & Childress, 2019).
This means approaching technology with curiosity and caution in equal measure. Practitioners who ask, “Should we?” as often as “Can we?” become agents of ethical foresight, not just users of digital tools. In this sense, ethics is not a boundary around innovation but its guiding light.
Collaborative ethics: Shared responsibility for AI
Ethical futures depend on collective stewardship. No single professional or organisation can manage the complexity of AI alone. Clinicians, researchers, developers, educators, and policymakers must collaborate to ensure that systems are transparent, fair, and human-centred (Floridi et al., 2018). When collaboration includes such cross-disciplinary dialogue, technology becomes not merely efficient but wise.
Equity, inclusion, and the global perspective
An ethical AI future must also be inclusive. Without deliberate attention to diversity, AI risks entrenching global inequities in access, representation, and quality of care (O’Neil, 2016; WHO, 2021). Practitioners and policymakers should advocate for culturally sensitive datasets, equitable digital access, and fair representation in system design. Ethical leadership in this area means amplifying marginalised voices in the conversation about technology and mental health, ensuring that digital innovation reflects – not distorts – the world’s psychological diversity.
Humanity at the centre
Ultimately, the defining challenge of the AI era is to preserve the human core of therapeutic practice. Machines may simulate understanding, but only humans can empathise, sit with uncertainty, and co-create meaning. Ethical futures in mental health therefore require continuous reflection on a simple, enduring question:
“How do we remain fully human while working with intelligent machines?”
If practitioners can sustain that inquiry – with humility, compassion, and courage – then AI will not diminish the soul of mental health care but deepen its reach and relevance. Ethical progress, like healing itself, is not a product of automation, but of intentional, mindful relationship.
Conclusion
The integration of artificial intelligence into mental health care represents both a profound opportunity and a continuing ethical challenge. As we have noted throughout this series, AI can enhance accessibility, personalisation, and efficiency, but only when guided by professional competence, reflective oversight, and a steadfast commitment to human-centred ethics.
For mental health professionals, the key task is not simply to master new technologies but to reimagine ethical practice in a rapidly evolving digital landscape. By remaining curious yet cautious, informed yet humble, practitioners can ensure that technology amplifies – rather than replaces – the empathy, insight, and integrity at the heart of therapeutic work.
Key takeaways
- The use of AI in the mental health professions holds promise in terms of improved efficiency, early detection, and standardisation of aspects of assessment, documentation, and intervention delivery.
- However, there are ethical perils in the areas of bias, opacity, accountability, data privacy, and informed consent.
- Competence means not only clinical expertise but also adequate understanding of the AI systems used.
- Practitioners must pay attention to regulatory frameworks and organisational policies, and also regularly review AI tools in supervision, peer consultation, and ethics committees.
- AI is best used in mental health within a framework of collaborative ethics that keeps humanity at the centre of therapeutic practice.
Questions therapists often ask
Q: How do I decide whether an AI tool is supporting my work or quietly replacing clinical judgment?
A: Look at who holds the final decision-making power. If the tool informs, organises, or flags patterns while you retain responsibility for interpretation and action, it’s in a support role. The moment you feel nudged to defer to an algorithm because it seems “more objective,” that’s a red flag. AI should sharpen your thinking, not outsource it.
Q: What does meaningful informed consent actually look like when AI is involved?
A: It’s more than a line in the paperwork. Clients need a plain-language explanation of what the tool does, what data it uses, and where its limits are. Consent should be revisited, not assumed, especially if tools change or are used in new ways. If you can’t explain it clearly, you probably shouldn’t be using it yet.
Q: How real is the risk of bias in AI tools, and what can I do about it in practice?
A: The risk is very real because AI learns from human-made data, complete with human blind spots. Practically, this means asking who the tool was trained on and who might be missing. If it hasn’t been validated across diverse populations, use it cautiously, monitor outcomes closely, and never treat its outputs as neutral facts.
Q: Where does accountability sit if an AI-assisted decision causes harm?
A: Accountability doesn’t disappear just because software is involved. You remain responsible for how the tool is used and interpreted, organisations are responsible for governance and oversight, and developers are responsible for safe, transparent design. Ethically, you should be able to justify any AI-informed decision as if the tool weren’t there.
Q: How do I stop technology from eroding the therapeutic relationship?
A: By staying alert to subtle shifts in how you practise. If screens, dashboards, or predictions start to command more attention than the person in front of you, something’s off. Use AI to reduce admin and increase presence, not the other way around. The alliance still grows from empathy, attunement, and human judgement – none of which can be automated.
References
- American Psychological Association. (2024). Guidelines for the practice of telepsychology. Retrieved on 11 November 2025 from: https://www.apa.org/about/policy/telepsychology-revisions
- Australian Psychological Society. (2023a). APS code of ethics. APS. Retrieved on 6 November 2025 from: https://psychology.org.au/about-us/what-we-do/ethics-and-practice-standards/aps-code-of-ethics
- Australian Psychological Society. (2023b). ChatGPT & beyond: Harnessing ethical AI for client and clinician care. https://psychology.org.au
- BACP (British Association for Counselling and Psychotherapy). (2018). Ethical framework for the counselling professions. https://www.bacp.co.uk
- Beauchamp, T. L., & Childress, J. F. (2019). Principles of biomedical ethics (8th ed.). Oxford University Press.
- Blease, C., Kaptchuk, T. J., Bernstein, M. H., Mandl, K. D., Halamka, J. D., & DesRoches, C. M. (2019). Artificial intelligence and the future of psychotherapy: Hype, hope, or harm? Journal of Medical Internet Research, 21(8), e16223. https://doi.org/10.2196/16223
- British Psychological Society. (2023). Guidelines for psychologists working with artificial intelligence and digital technologies. https://www.bps.org.uk
- Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512
- Esteva, A., Robicquet, A., Ramsundar, B., Kuleshov, V., DePristo, M., Chou, K., Cui, C., Corrado, G., Thrun, S., & Dean, J. (2019). A guide to deep learning in healthcare. Nature Medicine, 25(1), 24–29. https://doi.org/10.1038/s41591-018-0316-z
- European Commission. (2019). Ethics guidelines for trustworthy AI. Retrieved on 11 November 2025 from: https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf
- European Parliament. (2024). Artificial Intelligence Act. https://artificialintelligenceact.eu
- Floridi, L., Cowan, K., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People – An ethical framework for a good AI society. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
- Fulmer, R., Joerin, A., Gentile, B., Lakerink, L., & Rauws, M. (2018). Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: Randomised controlled trial. JMIR Mental Health, 5(4), e64. https://doi.org/10.2196/mental.9782
- Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92. https://doi.org/10.1145/3458723
- Inkster, B., Sarda, S., & Subramanian, V. (2018). An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: Real-world data evaluation. JMIR mHealth and uHealth, 6(11), e12106. https://doi.org/10.2196/12106
- Jacobson, N. C., Lekkas, D., Price, G., Heinz, M. V., Song, M., O’Malley, A. J., & Barr, P. J. (2020). Flattening the mental health curve: COVID-19 stay-at-home orders are associated with alterations in mental health search behavior in the United States. JMIR Mental Health, 7(6), e19347. https://doi.org/10.2196/19347
- Maples-Keller, J. L., Bunnell, B. E., Kim, S. J., & Rothbaum, B. O. (2017). The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders. Harvard Review of Psychiatry, 25(3), 103–113. https://doi.org/10.1097/HRP.0000000000000138
- Marwala, T. (2024). AI and mental health: A global south perspective. Medium. Retrieved on 11 November 2025 from: https://medium.com/@tshilidzimarwala/ai-and-mental-health-a-global-south-perspective
- Mohr, D. C., Zhang, M., & Schueller, S. M. (2017). Personal sensing: Understanding mental health using ubiquitous sensors and machine learning. Annual Review of Clinical Psychology, 13, 23–47. https://doi.org/10.1146/annurev-clinpsy-032816-044949
- National Association of Social Workers. (2025). NASW standards for technology in social work practice. https://www.socialworkers.org/Practice/NASW-Practice-Standards-Guidelines/Standards-for-Technology-in-Social-Work-Practice
- Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
- O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing.
- Onnela, J. P., & Rauch, S. L. (2016). Harnessing smartphone-based digital phenotyping to enhance behavioural and mental health. Neuropsychopharmacology, 41(7), 1691–1696. https://doi.org/10.1038/npp.2016.7
- Ritvo, E. (2025). AI ChatBot Therapy: Hype, Hope, Risk? Exploring the promise and limits of AI-driven mental health support. Psychology Today. Retrieved on 11 November 2025 from: https://www.psychologytoday.com/ie/blog/on-vitality/202503/ai-chatbot-therapy-hype-hope-risk
- Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.
- Tompa, R. (2025). Gen AI’s potential to transform global medical care – and the ‘tension between the perfect and good’. Stanford Report. Retrieved on 11 November 2025 from: https://news.stanford.edu/stories/2025/03/generative-ai-tools-global-health-care-low-income-countries
- Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books.
- World Health Organization. (2021). Ethics and governance of artificial intelligence for health: WHO guidance. https://www.who.int/publications/i/item/9789240029200