AI Explainability and Its Immediate Impact on Legal Tech

Regulatory Challenges and the New AI Standard ISO 42001

Tony Porter, former Surveillance Camera Commissioner for the UK Home Office, provided insights into the regulatory challenges surrounding AI transparency. He highlighted the significance of ISO 42001, the international standard for AI management systems, which offers a framework for responsible AI governance. “Regulations are evolving rapidly, but standards like ISO 42001 provide organisations with a structured approach to balancing innovation with accountability,” Porter said.

Chamelio: Transforming Legal Decision-Making with Explainable AI

Alex Zilberman from Chamelio, a legal intelligence platform built exclusively for in-house legal teams, addressed the role of AI in corporate legal operations. Chamelio transforms how in-house legal teams operate through an AI agent that learns from, and applies, the legal knowledge stored in its repository of contracts, policies, compliance documents, corporate records, regulatory filings, and other business-critical legal documents.

“Trust is the number one requirement to build a system that professionals can use,” Zilberman said. “This trust is achieved by providing as much transparency as possible. Our solution allows users to understand where each recommendation comes from, ensuring they can confirm and verify every insight.”

Chamelio avoids the ‘black box’ model by letting legal professionals trace the reasoning behind AI-generated recommendations. For example, when the system encounters contract language it doesn’t recognise, it flags the uncertainty and requests human input instead of guessing. This approach keeps legal professionals in control of important decisions, particularly in novel scenarios such as clauses with no precedent or conflicting legal terms.
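The escalation pattern described above – flag low-confidence output for human review rather than guess – can be sketched in a few lines. This is a generic illustration, not Chamelio’s actual code; the `ClauseReview` type, the threshold value, and the routing labels are all hypothetical.

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff, not a real product value

@dataclass
class ClauseReview:
    clause_text: str
    recommendation: str
    confidence: float
    sources: list = field(default_factory=list)  # documents the advice traces back to

def route_review(review: ClauseReview) -> str:
    """Route a clause to auto-recommendation or to a human lawyer."""
    if review.confidence < CONFIDENCE_THRESHOLD or not review.sources:
        # Low confidence, or no traceable source: escalate instead of guessing
        return "needs_human_review"
    return "auto_recommend"

# An unfamiliar clause with low confidence and no sources gets escalated
unknown = ClauseReview("Bespoke indemnity carve-out...", "unclear", 0.35)
print(route_review(unknown))  # -> needs_human_review
```

The key design point is that the sources list doubles as the transparency mechanism: a recommendation with no traceable sources is never surfaced automatically, which mirrors the “confirm and verify every insight” requirement Zilberman describes.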

Buffers.ai: Explainable AI for Inventory Optimisation

Pini Usha from Buffers.ai shared insights on AI-driven inventory optimisation, a key application in retail. Buffers.ai serves medium-to-large retail and manufacturing brands, including H&M, P&G, and Toshiba, helping retailers – particularly in the fashion industry – tackle inventory optimisation challenges such as forecasting, replenishment, and assortment planning.

Buffers.ai offers a full-SaaS ERP plugin that integrates with systems like SAP and Priority, providing ROI in months. “Transparency is key. If businesses cannot understand how AI predicts demand fluctuations or supply chain risks, they will be hesitant to rely on it,” Usha said.

Buffers.ai integrates explainability tools that allow clients to visualise and adjust AI-driven forecasts, helping ensure alignment with real-time business operations and market trends. For example, when placing a new product with no historical data, the system analyses similar product trends, store characteristics, and local demand signals. If a branch has historically shown strong demand for comparable items, the system might recommend a higher quantity even without any sales history for the new product.
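One common way to implement this kind of cold-start estimate is to blend demand from comparable items, weighted by how similar each one is, and then scale by a per-store demand factor. The sketch below is a simplified illustration of that general technique, not Buffers.ai’s actual model; the function name, weights, and store index are assumptions.

```python
def cold_start_forecast(similar_items, store_index):
    """Estimate demand for a product with no sales history.

    similar_items: list of (weekly_demand, similarity_weight) pairs
                   drawn from comparable products.
    store_index:   per-branch demand multiplier (1.0 = average store).
    """
    total_weight = sum(w for _, w in similar_items)
    if total_weight == 0:
        return 0.0  # nothing comparable: no basis for an estimate
    # Similarity-weighted average of comparable products' demand
    base = sum(d * w for d, w in similar_items) / total_weight
    # Scale up or down for the specific branch
    return base * store_index

# A branch with strong demand for comparable items (index 1.3)
estimate = cold_start_forecast([(120, 0.9), (80, 0.6), (150, 0.3)], 1.3)
```

Because the output is a weighted average of named inputs, a planner can see exactly which comparable products and which store factor drove the number – the kind of visibility into the forecast that Usha argues businesses need before they will rely on it.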

Corsight AI: Facial Recognition in Retail and Law Enforcement

Matan Noga from Corsight AI discussed the role of explainability in facial recognition technology, which is increasingly used in retail for security and to enhance customer experience. Corsight AI specialises in real-world facial recognition, providing its solutions to law enforcement, airports, malls, and retailers.

The company’s technology is used for applications such as watchlist alerting, locating missing persons, and forensic investigations. Corsight AI differentiates itself by focusing on high-speed, real-time recognition that complies with evolving privacy laws and ethical AI guidelines. The company works with government and commercial clients to promote responsible AI adoption, emphasising the importance of explainability in building trust and ensuring ethical use.

ImiSight: AI-Powered Image Intelligence

Daphne Tapia from ImiSight highlighted the importance of explainability in AI-powered image intelligence, particularly in high-stakes applications like border security and environmental monitoring. ImiSight specialises in multi-sensor integration and analysis, utilising AI/ML algorithms to detect changes, anomalies, and objects in sectors like land encroachment, environmental monitoring, and infrastructure maintenance.

“AI explainability means understanding why a specific object or change was detected. We prioritise traceability and transparency to ensure users can trust our system’s outputs,” Tapia said. ImiSight continuously refines its models based on real-world data and user feedback. The company collaborates with regulatory agencies to ensure its AI meets international compliance standards.

Conclusion

The panel underscored the important role of AI explainability in fostering trust, accountability, and ethical use of AI technologies, particularly in retail and other high-stakes industries. By prioritising transparency and human oversight, organisations can ensure AI systems are both effective and trustworthy, aligning with evolving regulatory standards and public expectations.

FAQs

Q: What is ISO 42001?

A: ISO 42001 is the international standard for AI management systems, offering a framework for responsible AI governance.

Q: What is the role of explainability in AI-driven decision-making?

A: Explainability is crucial in AI-driven decision-making, as it enables users to understand the reasoning behind AI-generated recommendations, fostering trust, accountability, and ethical use of AI technologies.

Q: How can AI systems be made more transparent and explainable?

A: AI systems can be made more transparent and explainable by building in traceability – linking each recommendation to the source data and reasoning behind it – and by flagging low-confidence outputs for human review rather than guessing.
