
Andrew Gamino-Cheong

CTO & Co-Founder at Trustible
On the record
Bio

Andrew Gamino-Cheong is the CTO & Co-Founder of Trustible, an AI governance startup helping organizations adopt Responsible AI and comply with emerging AI regulations such as the EU AI Act. Prior to starting Trustible, Andrew was a machine learning engineer and tech lead at FiscalNote, where he developed and deployed AI systems to analyze the policymaking process.

Employment
  • AI in HR: Ensuring Fairness and Avoiding Pitfalls
    Andrew advises, "Any decisions on employee termination or disciplinary actions should never be done by AI." He stresses transparency in data inputs and warns of human bias. Proper AI training for HR personnel is crucial. AI should support, not replace, human judgment in HR decisions.
  • AI Innovation Stifled by Lack of Universal Policies, Says Trustible CTO
    Andrew highlights that the absence of universal AI policies is a major hurdle for risk-averse sectors like healthcare. He notes, "A lack of universal AI policies is a major blocker." He advocates for a risk-based approach, allowing quick adoption of low-risk uses while assessing high-risk cases, with future regulations mirroring cybersecurity practices.
  • Crunchbase's AI Accuracy Claims Met with Skepticism by AI Expert
    Andrew expresses skepticism about Crunchbase's 95% accuracy claim, citing potential data self-selection and reliance on lagging indicators. He warns, "It's a very common problem to have a classification model that performs well on past data, and then performs terribly on predicting new upcoming stuff."
Recent Quotes
  • A lot of use cases of LLMs are limited by data that might be older, and RAG patterns are the most effective way of keeping them up to date without spending millions on fully retraining them. One secret is that a lot of LLM providers would love for users to add RAG pipelines or outright fine-tune their foundational models because it radically shifts a lot of product liability.

    RAGs have been used for this application for years before LLMs even appeared on the public’s radar. Overall, practically any application that requires you to have a tightly controlled dataset will favor using a RAG, as it allows for fewer surprises and much more consistent results across the board.
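The RAG pattern Andrew describes can be illustrated with a minimal sketch: retrieve the documents most relevant to a query, then prepend them to the prompt so the model answers from current, tightly controlled data rather than its (possibly stale) training set. The keyword-overlap retriever and the prompt template below are illustrative placeholders, not Trustible's implementation; a real pipeline would use embedding-based retrieval and an actual LLM call.

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query.

    A production RAG system would use vector embeddings instead;
    keyword overlap keeps this sketch self-contained.
    """
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query, context_docs):
    """Insert the retrieved context ahead of the question, so the
    model is steered to answer from the controlled dataset."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


# Hypothetical mini-corpus standing in for an up-to-date knowledge base.
docs = [
    "The EU AI Act entered into force in August 2024.",
    "Trustible helps organizations comply with AI regulations.",
    "Bananas are rich in potassium.",
]

query = "When did the EU AI Act enter into force?"
prompt = build_prompt(query, retrieve("EU AI Act force", docs))
# The irrelevant banana fact is filtered out before the prompt is built.
```

The consistency Andrew mentions comes from this structure: because the model only sees the retrieved context, changing the corpus changes the answers without any retraining.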

Headshots