Alon Yamin is the CEO and co-founder of Copyleaks, a cloud platform focused on plagiarism detection and writing-assessment tools. He has seven years and four months of experience in this role. Prior to Copyleaks, Alon worked as a software developer for Unit 8200, the Israeli Intelligence Corps, where he worked on various intelligence projects, mainly for the Air Force.
Plagiarism-detector maker Copyleaks studied GPT-3.5, OpenAI's previous-generation model.
The first-ever legal framework on artificial intelligence, the AI Act, could set the tone for AI regulation around the world.
Copyleaks co-founder and CEO Alon Yamin discusses the use of artificial intelligence technology within academia and how his company uses AI-powered tools to help identify instances of plagiarism in student submissions. Alon speaks with Tom Keene and Scarlet Fu on Bloomberg Radio. (Source: Bloomberg)
Alon Yamin, co-founder and CEO of Copyleaks, which uses AI for content authentication, told FOX Business that the executive order was fairly comprehensive overall but added, "I think really focusing a bit more on the different types of solutions for different content types and understanding and distinguishing between them is one point that I thought was missing a little bit." "You can’t have the same strategy for detecting AI in video, to detecting AI in music, to detecting AI in photos, to detecting AI in text – each one of these content types is a different character and you can’t have one solution for all," Yamin said. He added that while watermarking was discussed in the executive order, "it’s not a bulletproof strategy."
Although YouTube and Meta will require disclosures, many experts have pointed out that it’s not always easy to detect what’s AI-generated. Still, the moves by Google and Meta are “generally a step in the right direction,” said Alon Yamin, co-founder of Copyleaks, which uses AI to detect AI-generated text. Detecting AI is a bit like antivirus software, Yamin said: even with tools in place, they won’t catch everything. However, scanning text-based transcripts of videos could help, along with adding ways to authenticate videos before they’re uploaded. “It really depends how they’re able to identify people or companies that are not actually stating they are using AI even if they are,” Yamin said. “I think we need to make sure that we have the right tools in place to detect it, and make sure that we’re able to hold people and organizations accountable for spreading generated data without acknowledging it.”