Effective Sentience in Artificial Intelligence - Whitepaper


pTools has a culture of innovation built over many years, and as part of that essential work we produce a number of whitepapers annually on topics of interest. In the past we have delivered research into Notarization on Blockchain and defined Risk in data as an algorithmic function. More recently we have looked at the impact of AI, specifically in regard to protecting clients' data from access by the underlying AI and protecting client Agent outputs from hallucination caused by the underlying AI. More broadly, this work has touched on the issue of perceived sentience in AI and Agents, and the related impact on the security and integrity of such Agents in working scenarios. Below are links to two posts related to these topics.

Effective Sentience in Artificial Intelligence
https://www.linkedin.com/posts/skinnertom_effective-sentience-in-artificial-intelligence-activity-7432349203718201345-oT4p

“Consideration of Perceived Sentience in AI may seem far from the day-to-day issues of data protection and hallucination prevention that we deal with in our work and in the solutions we provide at pTools Software.

But there is a connection between such issues and the broader working experience of AI. If you consider how your perceptions of sentience in AI are formed, and how these perceptions affect your confidence in AI as you work, you will recognise that the related Risk Mitigation is central to what we do. We’ve all had moments where we realise the AI is simply wrong, and we know how this shatters the illusion, such as it is, of sentience. We’ve also had moments where we inadvertently accept a level of underlying sentience, only to realise later (or not!) how illusory it in fact is. Understanding Risk is essential to successful development and solution deployment, and this is as much the case with AI as it is with any other technology.

Whether it be in the context of data security, hallucination prevention, data integrity, reporting, or audit trails, end-user perception is an essential element of Mitigating Risk in the technology and in the human interactions and outcomes. Importantly, understanding how AI invokes perceptions of sentience is key to understanding how humans work with AI, and how humans can improve the work of AI! You heard it here first: Long live AI, long live HITL (Human in the Loop)!”

To read the Whitepaper, please click here: https://www.linkedin.com/posts/skinnertom_effective-sentience-in-artificial-intelligence-activity-7432349203718201345-oT4p



Risk and Perceived (Functional) Sentience in Artificial Intelligence
https://www.linkedin.com/posts/skinnertom_risk-and-perceived-sentience-in-artificial-activity-7396510545098547200-LA3T

“pTools secures client data within Agents against the underlying general AI, and also mitigates the risk of hallucination from the underlying AI in responses from these Agents. In a broader context, we are interested in the development and risk of related sentience in AI, and the extent to which people perceive that sentience.

A critical element of this is the design of the Agent solution and, to some extent, the risk of the conceit of sentience derived from such design. Additionally, it is increasingly clear that the human, end-user situation and environment must be considered as part of Agent design in order to better understand and further mitigate risk. This is a subject of significant interest and importance that deserves greater attention, and this paper is a contribution to that subject.

The paper considers Risk and Perceived (Functional) Sentience in Artificial Intelligence systems. It explores how Risk (R) and Perceived Sentience are affected by three criteria: overall system configuration and Design (D), the Underlying AI model and data (U), and the Human condition and end-user situation (H). The paper further explores an algorithm for the measurement, scoring, and mitigation of Risk relating to Perceived Sentience in AI systems based on these three criteria.

Sincere thanks to colleagues, clients and partners for their help and support in developing this paper.”
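The kind of scoring the paper describes, where a Risk value R is derived from the three criteria D, U, and H, could be sketched roughly as below. Note that this is an illustrative assumption only: the paper's actual algorithm, weights, and scoring scales are not given here, so the weighted-average combination, the 0-1 scales, and the example weights are all hypothetical choices for illustration.

```python
# Illustrative sketch only. The whitepaper's actual algorithm is not
# reproduced here; the weights, 0-1 scales, and weighted-average
# combination below are assumptions made for the example.

def perceived_sentience_risk(design: float, underlying: float, human: float,
                             weights: tuple = (0.4, 0.35, 0.25)) -> float:
    """Combine the three criteria into a single Risk score R in [0, 1].

    design     (D): risk contributed by overall system configuration and design
    underlying (U): risk contributed by the underlying AI model and data
    human      (H): risk contributed by the human condition / end-user situation

    Each input is scored on a 0-1 scale (0 = no risk, 1 = maximum risk).
    """
    for score in (design, underlying, human):
        if not 0.0 <= score <= 1.0:
            raise ValueError("each criterion must be scored on a 0-1 scale")
    w_d, w_u, w_h = weights
    # Normalised weighted average keeps R on the same 0-1 scale as the inputs.
    return (w_d * design + w_u * underlying + w_h * human) / (w_d + w_u + w_h)


# Example: high design-driven risk, moderate model risk, low end-user risk.
score = perceived_sentience_risk(design=0.8, underlying=0.5, human=0.2)
```

In a scheme like this, a high R would indicate that mitigation is needed in whichever of the three criteria contributes most; the weights would in practice be calibrated per deployment rather than fixed as above.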

To read the Whitepaper, please click here: https://www.linkedin.com/posts/skinnertom_risk-and-perceived-sentience-in-artificial-activity-7396510545098547200-LA3T