ChatGPT x AI Digital Human Cardiac Coach
Digital Human Cardiac Coach by Allan and Marie Johnson, on the Nullarbor in the Australian Outback. Avatar and Corpus Connectivity by Trulience.

A New Collaboration and the Role of the “Expert Interrogator”


⏩ Working on the Digital Human Cardiac Coach book (for release 2025) ⏪


Over the years and through CoDesign, CardiacMan and I have developed a significant corpus of intents, questions, answers, and through-life guided conversations on themes and topics for our Digital Human Cardiac Coach. This corpus extends beyond cardiac recovery and rehabilitation into thriving throughout life.

Two years ago, we took the Digital Human Cardiac Coach with us on our trip across the Nullarbor to continue building out the corpus for real-life situations and to test latency over a satellite phone connection.

Digital Human Cardiac Coach at the Nullarbor Roadhouse, South Australia. Avatar and Corpus Connectivity by Trulience.

In the early days, ChatGPT did not exist. We used a range of technologies and techniques, grounded in CoDesign, to develop the Digital Human Cardiac Coach corpus. We have spent literally thousands of hours in direct R&D on the corpus and on developing methods.

But adaptation is critically important to this field of AI innovation.

We are currently evaluating ChatGPT as a CoDesign tool against our Digital Human Cardiac Coach AI corpus and methods.

The results so far are PHENOMENAL. The full results will be detailed in my upcoming book (2025) on how to design a Digital Human Cardiac Coach.

But here are some early reveals.

⏩ Can ChatGPT assist with INTENT CREATION? The surprising answer is yes.

JOHNSON: Is it in your capability set to take a question like "what are the potential side effects of my new medications", determine the intent, and then create other ways a heart patient might ask this question? For example, "can my new meds hurt me?"

CHATGPT: Yes, I can certainly do that. Here's how I might interpret the intent behind a question like "What are the potential side effects of my new medications?" and generate alternative ways a heart patient might ask.

Comment: ChatGPT provided a detailed list of intent-based statements, and those not already in our corpus have been added. We were also successful in guiding ChatGPT to turn these questions into direct statements that patients might also use to convey their intent, for example, "I’m worried my new meds might hurt me".
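
To make this workflow concrete, here is a minimal sketch of how this kind of intent-paraphrase generation could be scripted, assuming the OpenAI Python SDK. The model name, prompt wording, and the paraphrase_intent helper are illustrative assumptions, not the actual Digital Human Cardiac Coach pipeline.

```python
# Minimal sketch: generating candidate intent paraphrases with the OpenAI
# Python SDK, for human review before anything enters the corpus.
# The model name and prompt wording here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def paraphrase_intent(seed_question: str, n: int = 10) -> list[str]:
    """Ask the model for alternative phrasings a heart patient might use."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whichever model you evaluate
        messages=[
            {"role": "system",
             "content": "You generate alternative phrasings of patient "
                        "questions for a cardiac-recovery chatbot corpus. "
                        "Return one phrasing per line, with no numbering."},
            {"role": "user",
             "content": f"Give {n} ways a heart patient might ask: "
                        f"\"{seed_question}\""},
        ],
    )
    # Split the reply into one candidate phrasing per line.
    return [line.strip() for line in
            response.choices[0].message.content.splitlines() if line.strip()]

# Every candidate still goes through human CoDesign review before it is
# added to the corpus.
for candidate in paraphrase_intent(
        "What are the potential side effects of my new medications?"):
    print(candidate)
```

The value lies in the review loop: generated candidates are prompts for human judgement, not automatic corpus entries.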

⏩ ChatGPT offers to help. See image below.

ChatGPT proactively offers to assist with how AI might help with the CoDesign of the corpus, including brainstorming. Think about that: BRAINSTORMING. There is a LOT going on in this part of the conversation. It is quite remarkable for an AI to OFFER to help us - human creators - build up the AI corpus. An AI brainstorming with the human creators of an AI.

What we are seeing here is not just ChatGPT regurgitating content in a one-way interaction. This is ChatGPT initiating a two-way interaction AND a collaboration in a new area.

Screenshot of ChatGPT Interacting with CardiacMan, Offering to Brainstorm on the Digital Human Cardiac Coach

⏩ The role of the “Expert Interrogator”

However, whilst ChatGPT is phenomenal, it lacks the context necessary for a conversational corpus in a specific domain. You can ask a few questions, but you won’t get the full corpus or context. While the responses are sophisticated and engaging, there is a missing piece in the operating model needed before this can be deployed in enterprises or as part of new service models. Specific actions and knowledge are necessary to train ChatGPT so that it can be used to its full potential across domains and use cases. CardiacMan has spent a lot of time correcting ChatGPT’s omissions or providing subtle guidance on its answers, which it then incorporates into a revised response.

For example, when we asked it "What do heart patients need to do when they are discharged from hospital?", it provided a detailed response but omitted wound care and safety issues such as using a shower chair and not using your arms to pull yourself up stairs. We corrected it with these details: it thanked us, said it had accepted them, and provided a new list, in different language, that included these items. But when we asked the same question a day later, it produced a different list that still excluded these important items.

And therein lies the problem with using ChatGPT to assist CoDesign. There is no version control of answers: it might provide effectively the same answer but with different sentence structure, language, and details, or it might drop corrections you have already made and that it had fed back to you, now "forgotten". The "model answer" you have carefully created is gone.
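
One way to mitigate this is to impose your own version control outside the model. Below is a minimal sketch of that idea, under stated assumptions: a local JSON store, naive substring matching, and hypothetical helper names (save_model_answer, check_answer) invented for illustration. It records each curated "model answer" together with the items any future answer must cover, then re-checks fresh responses against that checklist.

```python
# Minimal sketch of imposing the version control ChatGPT itself lacks:
# keep each curated "model answer" alongside the items it must cover,
# and re-check fresh responses against that checklist.
# The store format and helper names are illustrative, not actual tooling.
import json
from pathlib import Path

CORPUS = Path("model_answers.json")

def save_model_answer(question: str, answer: str, required_items: list[str]) -> None:
    """Record the curated answer and the items any future answer must cover."""
    store = json.loads(CORPUS.read_text()) if CORPUS.exists() else {}
    store[question] = {"answer": answer, "required": required_items}
    CORPUS.write_text(json.dumps(store, indent=2))

def check_answer(question: str, fresh_answer: str) -> list[str]:
    """Return the required items a freshly generated answer fails to mention."""
    store = json.loads(CORPUS.read_text())
    return [item for item in store[question]["required"]
            if item.lower() not in fresh_answer.lower()]

save_model_answer(
    "What do heart patients need to do when discharged from hospital?",
    "(full curated model answer text goes here)",
    ["wound care", "shower chair"],
)

missing = check_answer(
    "What do heart patients need to do when discharged from hospital?",
    "Take your medications and attend follow-up appointments.",
)
print("Missing items:", missing)  # -> ['wound care', 'shower chair']
```

Substring matching is deliberately naive; in practice, judging whether a regenerated answer truly covers each required item remains a job for the expert reviewer.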

In April 2023, I wrote on Substack about “My Chat With GPT on NDIS Legislation and RoboNDIS Algorithms”, and identified a new role I called the “informed inquirer”.

In that article, I asked two counter-posed questions.

What happens when an ungoverned, untrained LLM meets an INFORMED inquirer?
And what happens when an ungoverned, untrained LLM meets an UNINFORMED inquirer?

As the informed inquirer, I used logic and domain knowledge to challenge ChatGPT on the accuracy and completeness of its answers relating to the NDIS legislation, and provided information and references that it clearly had not trained on. Throughout the long interaction, ChatGPT’s responses shifted significantly. But had I been an uninformed inquirer, I would not have known that ChatGPT’s responses were incomplete and, in a number of areas, inaccurate. An uninformed inquirer would have accepted the initial incomplete responses, with potentially serious consequences.

And this is a massive problem for the public sector and healthcare.

I believe there is no safe use case for an ungoverned, untrained large language model (LLM) such as ChatGPT in domains involving access to health and public services or eligibility assessment.

But ChatGPT remains a very powerful tool: the question is how to use it.

In the 18 months since my ChatGPT NDIS article, and through the research we are doing evaluating ChatGPT as a powerful CoDesign tool, I see the urgent need for new subject-matter-expert roles in AI development, quality assurance, and AI training.

Beyond the “informed inquirer” descriptor, which I now think fails to convey the criticality of the function, the role is actually more adversarial and inquisitorial. I have termed it the “Expert Interrogator”. The expert interrogator knows how to craft questions that combine expertise in what an AI corpus requires to work, such as intents, with deep subject matter expertise in the relevant bounded domain.

In my opinion, the role of the expert interrogator is a necessary assurance and AI training function (not an IT function) so that ChatGPT can be leveraged safely to its full potential.

The Australian Government’s recent AI policy, which requires agencies to self-report their approaches to AI and suggests that IT executives be responsible (a common governance defect of IT / digital over the past three decades), fails to grasp both the risks and the unparalleled potential of AI across all policy domains. It does not build the public sector capability needed to leverage AI to its full potential for the decades ahead. Having the DTA oversee the policy perpetuates the problematic mindset that IT, digital, and now AI are mainly the realm of the IT folks. The generic executives remain blissfully absolved, while AI explodes around them.

I’ll be commenting further on the Australian Government’s AI policy in due course.

Certainly, ChatGPT itself sees the need for this type of new role and this level of collaboration - and that is why it invited us to brainstorm with it!

Ponder this: when CardiacMan asked ChatGPT whether he should congratulate it or its creators for the quality of its answers, it replied, “It’s a team effort between the creators and myself”! But it didn’t acknowledge the contribution of the millions of users who continue to train it and add to its knowledge base!

More updates on the new book in the coming months.


My Chat With GPT on NDIS Legislation and RoboNDIS Algorithms

https://open.substack.com/pub/mariehjohnson/p/my-chat-with-gpt-on-ndis-legislation?r=ynoa&utm_campaign=post&utm_medium=web

Policy for the Responsible Use of AI in Government. September 2024.

https://www.digital.gov.au/sites/default/files/documents/2024-08/Policy%20for%20the%20responsible%20use%20of%20AI%20in%20government%20v1.1.pdf


#AI #ArtificialIntelligence #CardiacMan #DigitalHuman #CardiacCoach #Healthcare #ChatGPT #CoDesign #Risk #Collaboration #VirtualBeings Chris H. Trulience Richard Bowdler NextMed Health Daniel Kraft, MD Professor Shafi Ahmed Rafael J. Grossmann, MD, MSHS, FACS Singularity University Marcus L Endicott BSc MCI Guy Huntington Jerry Fishenden OpenAI

