is there anything responsible about responsible ai?

- Oct. 11, 2023


This post isn’t about solutions; think of it as an observation. While reading through various articles, policy briefings, research papers, and company mission statements, I couldn’t help but notice a recurring term: responsible AI. In 2023, the hype around large language models is alive and well, but it has been overshadowed by a growing gloomy undertone that centers on the harms that artificial intelligence (AI) can cause. Companies, organizations, and governing bodies are adopting “responsible AI” practices to counteract the risks of deploying AI systems within their operating procedures. The problem is that there is no unified or agreed-upon definition across any sector, whether public, private, or not-for-profit.

The general components of a responsible AI framework are transparency, fairness, inclusivity, privacy, and security. The issue is that each organization claiming to employ responsible AI has its own version, so each of those components takes on a different meaning depending on the organization or company defining it. Isn’t this a bit off? Organizations that claim to operate under a responsible AI framework are writing their own definitions of what responsible AI is. How can we safely interact with the AI these organizations provide when the components of responsible AI cannot be verified by its users?

What does responsible AI mean to us? By us I mean the people who click “Accept all” on cookies, terms & conditions documents, and privacy policies. By us I mean the people whose data powers the algorithms of these organizations. How do we responsibly contribute to this AI framework?
