Thomas Robinson is a successful and innovative technology executive, advisor and investor helping to bring some of today’s most groundbreaking technologies to market. He is currently the Chief Operating Officer of Domino Data Lab, provider of the leading Enterprise MLOps platform, trusted by over 20% of the Fortune 100. In this role, Robinson is responsible for revenue and go-to-market at Domino, leading Sales, Marketing, Professional Services, Customer Support and Partnerships.
Throughout his career, Robinson has gained an in-depth understanding of the ins and outs of the enterprise tech space, specifically the cloud and data science markets. Robinson can share insight into the technologies, strategies, capabilities and best practices enterprises need to drive better business outcomes. For example, he can offer advice on how organizations can regain control over cloud infrastructure costs while mitigating vendor lock-in, or how breaking down silos increases collaboration and productivity by helping teams share knowledge and build on prior work.
A poorly trained or maintained ML model can provide outputs that are unhelpful or even misleading.
AI and machine learning initiatives have gone from obscure experiments to a de facto priority for enterprises.
Technology can provide the frameworks and instrumentation to monitor data drift and model accuracy, but responsible AI is ultimately practiced through your people and processes.
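As a minimal sketch of what that instrumentation can look like, the snippet below flags data drift with the Population Stability Index (PSI), comparing the distribution a model was trained on against what it sees in production. The function names, sample data and the 0.2 threshold are illustrative assumptions, not any specific platform's API.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples (illustrative)."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def histogram(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(max(int((x - lo) / width), 0), bins - 1)
            counts[idx] += 1
        total = len(sample)
        # Small floor avoids log(0) for empty buckets.
        return [max(c / total, 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

training = [0.1 * i for i in range(100)]             # feature seen at training time
live_stable = [0.1 * i + 0.05 for i in range(100)]   # similar live distribution
live_shifted = [0.1 * i + 5.0 for i in range(100)]   # live distribution has drifted

# A common rule of thumb treats PSI < 0.2 as stable.
print(psi(training, live_stable) < 0.2)   # stable data passes
print(psi(training, live_shifted) > 0.2)  # large shift trips the drift alert
```

In practice a check like this would run on a schedule per feature and page the team that owns the model, which is exactly where the people and processes come in.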
Be wary of assuming that cloud platforms offer the best cost or performance value. Data access and GPU infrastructure are critical for training. Cloud providers tout pay-per-use economics for GPUs, yet costs add up rapidly because training is an iterative process. Similarly, enterprise data lives across many repositories, and the cost of moving AI-scale data into the cloud will raise eyebrows. Consider on-premises training.
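A back-of-the-envelope calculation shows how iteration tips the balance. All prices and run counts below are hypothetical assumptions for illustration, not quotes from any provider.

```python
# Hypothetical rates: an 8-GPU cloud instance vs. a comparable owned server.
CLOUD_GPU_HOURLY = 32.00      # assumed pay-per-use hourly rate
ONPREM_SERVER_COST = 250_000  # assumed up-front purchase price
ONPREM_HOURLY_OPEX = 6.00     # assumed power, cooling and admin per hour

def cloud_cost(training_runs, hours_per_run):
    return training_runs * hours_per_run * CLOUD_GPU_HOURLY

def onprem_cost(training_runs, hours_per_run):
    return ONPREM_SERVER_COST + training_runs * hours_per_run * ONPREM_HOURLY_OPEX

# A single experiment looks cheap in the cloud...
print(cloud_cost(1, 100))  # one 100-hour run
# ...but model development is iterative: repeated runs shift the economics.
for runs in (10, 50, 100, 200):
    cheaper = "cloud" if cloud_cost(runs, 100) < onprem_cost(runs, 100) else "on-prem"
    print(runs, "runs:", cheaper)
```

Under these assumed numbers the crossover lands around a hundred long training runs, which a serious model-development effort can easily accumulate.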
For most enterprises, the need to support another cloud is, at best, only an acquisition or regional expansion away. And since no cloud vendor has an incentive to make it easy to transfer data and workloads between clouds, or to on-prem infrastructure, the result is data silos.
Low-code and no-code tools used for data science and AI workloads have limits. Without a sophisticated understanding of data science and statistics, it’s easy to inject bias into low- and no-code models, making the results more risky than beneficial. When a model becomes business-critical, it needs to be rewritten in code to allow for appropriate monitoring, governance and incremental improvement.