Why We’re Happy When CareBrain Says ‘No!’

At CareBrain we’ve put a huge amount of effort into ensuring our digital assistant for care professionals is highly accurate and trustworthy. It’s been finely tuned for the specific needs, language, terminology, accents and figures of speech of the sector’s diverse care workforce.

Whether it’s being used by a carer or a care manager, CareBrain, unlike ChatGPT and the like, won’t try to answer something it doesn’t know. Instead, it will politely ask the user to rephrase the question. Or, if the question isn’t care-related, it will remind the user that CareBrain is for care only. It only answers questions from within its own knowledge base.
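For the technically curious, that kind of guardrail can be pictured roughly like the sketch below. Every name, keyword list, message and the toy knowledge base in it is made up for illustration; it is not CareBrain’s actual code.

```python
# A minimal, illustrative sketch of a "refuse unless grounded" flow.
# The names, the toy keyword matching and the sample knowledge base
# are all hypothetical - this is not CareBrain's implementation.

KNOWLEDGE_BASE = {
    "medication records": "Record each administration on the MAR sheet...",
    "moving and handling": "Complete a risk assessment before any transfer...",
}

CARE_TERMS = {"care", "carer", "service user", "medication", "handling"}

def is_care_related(question: str) -> bool:
    # Placeholder relevance check; a real system would use a classifier.
    return any(term in question.lower() for term in CARE_TERMS)

def retrieve(question: str) -> str | None:
    # Placeholder retrieval; a real system would use semantic search.
    for topic, guidance in KNOWLEDGE_BASE.items():
        if topic in question.lower():
            return guidance
    return None

def answer(question: str) -> str:
    if not is_care_related(question):
        return "CareBrain is for care questions only."
    guidance = retrieve(question)
    if guidance is None:
        # Nothing in the knowledge base supports an answer, so say "no".
        return "I'm not sure about that - could you rephrase the question?"
    return guidance

print(answer("What's the capital of France?"))        # care-only reminder
print(answer("Who reviews service user diets?"))      # asked to rephrase
print(answer("How do I update medication records?"))  # grounded answer
```

The key design choice is the order of the checks: relevance first, grounding second, and only then an answer, so a confident-sounding guess is never an option.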

Designed to be as familiar and easy to use as WhatsApp, CareBrain gives our clients a General chat for broader questions about care, and service user chats for questions about individual service users.

Answers to care professionals’ questions might be pulled from our own Best Practice Library – the carefully curated collective wisdom of CareBrain’s care experts, all of whom have extensive experience across the levels and sectors of the industry. Or they might come from your own care company’s specific policies and procedures, or from individual service users’ care plans and related information.

We’ve invested a lot of time and human effort (did I mention the collective wisdom of our own care experts…) as well as innovative technical approaches to ensure accuracy. That includes our unique architecture for managing and validating questions and answers, as well as more technical testing methods, such as ‘LLM as a judge’.
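For anyone wondering what ‘LLM as a judge’ means in practice: one model grades another model’s answers against trusted reference answers. Here’s a rough, hypothetical sketch of the idea – the prompt wording, the stub and the test case are illustrative, not our real pipeline.

```python
# An illustrative "LLM as a judge" sketch. The prompt wording and the
# call_llm() stub are assumptions for illustration only.

JUDGE_PROMPT = """You are grading a care assistant's answer.
Question: {question}
Reference answer: {reference}
Candidate answer: {candidate}
Reply PASS if the candidate is accurate and consistent with the reference,
otherwise reply FAIL with a one-line reason."""

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a hosted model's API.
    # We return a canned verdict so the sketch runs end to end.
    return "PASS"

def judge(question: str, reference: str, candidate: str) -> bool:
    verdict = call_llm(JUDGE_PROMPT.format(
        question=question, reference=reference, candidate=candidate))
    return verdict.strip().upper().startswith("PASS")

def pass_rate(test_cases, generate_answer) -> float:
    # Ask the system under test for an answer to every test question,
    # then have the judge model grade each one against its reference.
    verdicts = [judge(q, ref, generate_answer(q)) for q, ref in test_cases]
    return sum(verdicts) / len(verdicts)

cases = [("How do I update medication records?",
          "Record each administration on the MAR sheet.")]
print(pass_rate(cases, generate_answer=lambda q: "Record it on the MAR sheet."))
```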

For our other products, like the sentiment analysis within our Supervisions module or Care Plan Audits, we are similarly strict with our testing. Even when the QA testers in our tech team assure us that a new feature, or a change to the LLM behind a function, works perfectly, we nod, sigh, and bring in our very own ‘humans in the loop’ to test it thoroughly. Again. And again. As we grow, we are adding more automated testing, which I’m very confident in from a tech perspective. However, we always require human sign-off.
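Put simply, our release gate has two conditions, and the human one can never be skipped. A toy illustration – the names and threshold here are made up:

```python
# Illustrative release gate: automated results alone are never enough.
# The threshold and function name are hypothetical.

REQUIRED_PASS_RATE = 1.0  # assumed: every automated check must pass

def ready_to_release(automated_pass_rate: float, human_signed_off: bool) -> bool:
    # A human reviewer must explicitly approve, whatever the tests say.
    return automated_pass_rate >= REQUIRED_PASS_RATE and human_signed_off

print(ready_to_release(1.0, human_signed_off=False))  # False: no human sign-off
print(ready_to_release(1.0, human_signed_off=True))   # True: both gates cleared
```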

Like all of CareBrain’s products (watch this space…), the technology – even the testing – is carefully designed to meet business needs, complementing rather than replacing humans.

So, why are we happy when the bot says ‘no’? It’s because – unlike most AI services – it isn’t just making something up from a random, untrusted source that the underlying LLM happened to be trained on. When CareBrain does answer, it’s because it has checked its knowledge base, confirmed that the question is relevant, and found trusted information to draw on. And when it can’t do that, it says ‘no’ rather than guessing.

At CareBrain, we will always keep humans in the loop! 
