Dane is a Staff Innovation Architect at HackerOne, helping large enterprises and governments successfully leverage bug bounty programs and ethical hacking services to minimize growing cyber risk. Prior to joining HackerOne, Dane worked as a mobile application security specialist, helping organizations use tooling to uncover vulnerabilities in their mobile apps. Dane is passionate about leveraging advancements in cryptography to preserve privacy on the internet. In his free time, Dane enjoys bug bounty hunting and tinkering with EVM-based smart contracts.
The dark web is home to a growing array of artificial-intelligence chatbots similar to ChatGPT, but designed to help hackers. Businesses are on high alert for a glut of AI-generated email fraud and deepfakes.
New generative artificial intelligence tools are making lots of users much more efficient, including, apparently, hackers, who are using ChatGPT too.
We spoke with Dane Sherrets, Solutions Architect at HackerOne, to uncover the intricacies of AI vulnerabilities, particularly in GenAI tools, and to hear more about the potential exploits and creative methods used to bypass security measures. Sherrets offers insights into the effectiveness of OpenAI's updates in curbing political misinformation and discusses the need for transparency and additional measures to ensure the secure and ethical use of GenAI tools, especially in the context of elections.
OpenAI has announced that ahead of the 2024 U.S. elections, it will not allow people to use its tools for political campaigning and lobbying. People also aren't allowed to create chatbots that impersonate candidates and other real people, or chatbots that pretend to be local governments. Personally, I question whether chatbots are as impactful as AI-generated targeted content. I used to work in political campaigns, and I would go to political tech conferences where folks would talk about how we were "data rich and content poor," meaning that even if we had all the granular data in the world about what a particular demographic wants to see in a candidate, it was impossible to create bespoke social media or email copy for them at scale. With AI, that is now possible.
I am also interested in seeing how effective OpenAI will be in policing this activity. Through my experience with AI safety red team testing, I have seen just how creative people can be in tricking AI into circumventing its guardrails. Stress-testing AI with human creativity and ingenuity is one of the few tools we have in the AI safety toolbelt, but finding an issue and fixing it are very different things. If OpenAI is just monitoring accounts to see which ones are violating the policy and banning them, how well does that scale?
It is worth noting that in the Web3 world, bug bounty programs often serve a different function than in more traditional Web2. If a smart contract with $100 million of cryptocurrency locked in it has a critical vulnerability, an attacker could steal or destroy all $100 million. But if a program offers a $1 million bug bounty, it may encourage the attacker to report the issue instead and collect the bounty legally and cleanly.