Earley AI Podcast – Episode 55: Staying Secure in a Digital Age with Yuri Dvoinos

Data Privacy, Deepfake Threats, and AI-Powered Scam Detection: Building Digital Defense in an Era of Sophisticated Fraud

 

Guest: Yuri Dvoinos, Chief Innovation Officer at Aura

Host: Seth Earley, CEO at Earley Information Science

Co-host: Chris Featherstone, Sr. Director of AI/Data Product/Program Management at Salesforce

Published on: August 28, 2024

 

In this episode of the Earley AI Podcast we are joined by guest Yuri Dvoinos, the Chief Innovation Officer at Aura, a leading cybersecurity company based in Boston. With an extensive background in information security, AI applications, and data privacy, Yuri brings a wealth of knowledge on how to protect personal information in the digital age.

Yuri joins our hosts, Seth Earley and Chris Featherstone, as he explains the intricacies of data privacy, the adverse effects of phone scams, and the critical role of AI in detecting and thwarting sophisticated fraud attempts, including deepfakes and phishing scams.


Key Takeaways:

  • Data privacy isn't lost—organizations like Aura offer automated deletion from data brokers, masked emails, VPN protection, and identity monitoring to systematically reduce your digital exposure.
  • Phone scams inflict devastating mental health damage beyond financial loss, with seniors particularly vulnerable to trusting voice communications that now use AI-generated scripts mimicking loved ones.
  • AI-powered scam detection analyzes incoming messages across email, SMS, social media, and phone calls using 555+ risk factors, providing real-time risk scores without requiring behavior changes.
  • Deepfakes have progressed to near-perfect voice synthesis from 10-30 second audio samples, with video masks enabling convincing CEO impersonations that compromise entire corporate security systems.
  • Security versus convenience remains the fundamental trade-off—ultimate privacy through masked emails and virtual cards exists but requires accepting workflow complexity most users resist adopting consistently.
  • Verify any suspicious financial request through a different communication channel immediately—if you receive an Instagram message, call the person; if you get an email, send a WhatsApp message.
  • Critical thinking about information sources is now essential as algorithms optimize for emotional engagement over accuracy, making emotionally triggering false content more viral than factual reporting.

 

Insightful Quotes:

"AI is a technology and like any other technology, is a tool and can be used to serve many different purposes. What people don't understand is how we use this technology is what's going to shape the influence of that and the outcome." - Yuri Dvoinos

"We believe that the balanced approach is where we check most of the incoming communication and we check all of the messages that are being sent to you over social media, SMS, email and phone calls. That approach allows you to have almost an augmented reality where AI will tell you the risk score for this communication." - Yuri Dvoinos

"It's the first time throughout human history that we're facing such a problem. It's always been the opposite—if you have seen something, then it has happened. That's not going to be the case anymore, and the implication is that disinformation is a much more alarming threat." - Yuri Dvoinos

Tune in to discover practical strategies for protecting your identity and privacy—and learn why the most effective security measure costs nothing but requires changing one simple habit.


Links:
LinkedIn: https://www.linkedin.com/in/dvoinos/

Website: https://www.aura.com

Twitter: https://x.com/YuriyNos


Ways to Tune In:
Earley AI Podcast: https://www.earley.com/earley-ai-podcast-home
Apple Podcast: https://podcasts.apple.com/podcast/id1586654770
Spotify: https://open.spotify.com/show/5nkcZvVYjHHj6wtBABqLbE?si=73cd5d5fc89f4781
iHeart Radio: https://www.iheart.com/podcast/269-earley-ai-podcast-87108370/
Stitcher: https://www.stitcher.com/show/earley-ai-podcast
Amazon Music: https://music.amazon.com/podcasts/18524b67-09cf-433f-82db-07b6213ad3ba/earley-ai-podcast
Buzzsprout: https://earleyai.buzzsprout.com/ 

 

Podcast Transcript: Data Privacy, AI-Powered Scams, and Deepfake Defense Strategies

Transcript introduction

This transcript captures an in-depth conversation between Seth Earley, Chris Featherstone, and Yuri Dvoinos about the escalating sophistication of digital fraud, exploring AI-powered scam detection systems, the psychological toll of identity theft, deepfake voice and video manipulation, practical privacy protection strategies, and the fundamental shift requiring humans to question what they see for the first time in history.

Transcript

Seth Earley: Good morning, good afternoon, good evening. My name is Seth Earley and welcome to the Earley AI Podcast.

Chris Featherstone: And I'm Chris Featherstone.

Seth Earley: And today we're really excited to introduce our guest to discuss AI and security: how to deal with scams, how to deal with fraud, but also data privacy, which is so, so important and seems so out of control these days, and the role of AI. We talk about the development of risk scoring systems for scam messages, scam approaches and strategies in general, the seasonal variations to scams, the role of AI in detecting and preventing scams, and the importance of personal data privacy and methods to safeguard it. Our guest today is a visionary in the field of cybersecurity and risk assessment, with an impressive background that includes developing a pioneering risk scoring system for scam messages with a focus on visualizing risk levels. Having previously launched digital products, he now serves as Chief Innovation Officer at Aura, a cybersecurity company based in Boston, while he himself is currently based in Dubai. Yuri Dvoinos, welcome to the show. Did I get your name right?

Yuri Dvoinos: You did, and thank you for having me. Silent D: it's spelled with a D, but you don't have to say anything about the D.

Seth Earley: And you are originally from Ukraine, correct?

Yuri Dvoinos: I am Ukrainian, yes. I was born and raised there. I've been living in Dubai for the last 10 years. I moved there a long time ago and somehow I'm still here.

Seth Earley: So we'll get a little bit more into your background in a little bit. But what I'd like to do is talk about what people are not understanding about the role of AI in both data privacy and in scams and trying to prevent fraudulent behaviors. We know they're getting so sophisticated, but what are the things that people don't quite understand? What are the misconceptions that you come across? What do people not understand or appreciate?

Yuri Dvoinos: That is a really great question. I think that there are many misconceptions, and one of the interesting ones, which I frequently get as a question, is: will AI be dangerous to us as a whole, as a human species? And so on and so forth. I think the broader misconception is this: I believe that AI is a technology and, like any other technology, is a tool and can be used to serve many different purposes. What people don't understand is that how we use this technology is what's going to shape its influence and the outcome. And another thing that I think not everyone realizes is that AI has indeed been used for many malicious purposes, specifically for making scams a bit more efficient. That is something that we're trying to tackle right now. And of course, we are using AI for the greater good. So I guess, similar to electricity or the Internet, AI is being widely adopted by everyone and it influences our lives. We probably don't really track that, but it does happen. It is happening.

Seth Earley: And I think one of the things that we talked about in our prep call was that there's this kind of belief that the horse is already out of the gate when it comes to data privacy. One of the things you were talking about is some of the work you've done at Aura, and with OEMs of machines, to protect data, to do things like masking. But is the horse out of the barn already? I get so many notices that my data has been breached, that this or that data source has been breached. I do try to have different passwords for every single site; that's important. For some of the trivial ones I can't really care: is someone going to hack my Cast Magic account? I don't know. But tell me more about your thinking around data privacy, what we have to do now and how we have to take precautions, especially younger people, obviously.

Yuri Dvoinos: Right, yeah, absolutely. I think that the threat is very much real. We hear about data breaches very regularly, and obviously they're not going away, which is why I started my last startup, which was acquired by Aura, called FigLeaf: one app for total privacy. As I realized back then, seven or eight years ago, we don't control our data, we don't own our data anymore. Once you sign up somewhere and give up your data... it's a little bit better right now, as you have some rights to request its deletion, but you're not going to delete all your data from the Internet. The reality is that you still have a very large footprint across the Internet, and then whenever the next breach happens, you don't have much of a choice. So we see people have their identities stolen, and the implication of lack of privacy and lack of privacy controls is that you might be losing your identity. What that means is that people can open credit under your name using what we call a forged identity. If I know enough about your identity, I can pretend to be you, and sometimes it's difficult to differentiate the bad actor from you. That is why the beauty of Aura, which is the company that acquired my startup, is that we're tackling the problem upstream and downstream. We're trying to prevent the lack of privacy, everything that might lead to you losing your identity. But if that has happened to you, we also have stellar identity protection downstream, so we can help you even if you are in that sort of situation.

Seth Earley: You're reminding me that I need to turn the lock back on, what is it, my credit file? Because I had to unlock it. I've been warned over and over because my data has been breached. Then I locked it, and then American Express would not approve something that I was trying to do, so I had to unlock it. Now I have to remember to lock it again. So you're reminding me; I'll do it right after the session. Anyway, this is a very classic challenge.

Yuri Dvoinos: This is security against convenience. Typically the more convenient it gets, the less security you have, and the opposite is also true. It's very universal; it's not just about cybersecurity. If you don't want to have any locks on your house, it's very easy for all your family members to access it, but then, frankly, you don't have any security at all. And if you start bringing in fancy locks and maybe alarm systems, it becomes a little bit of a hassle to maintain them. But I think security is not a zero-to-one thing. It's all about risk management, and I think with a reasonable approach, by having access to good tools and assessing your risk profile, you can limit this risk and lower it, so you are much better off.

Chris Featherstone: Can you... let's double-click on that for a minute, in terms of the zero-to-one approach versus a risk scoring system. Because I don't know that people necessarily understand what that looks like and how to think about it from a rational perspective: how do I apply these techniques and methods to make sure that I am safe? And, to your point, what should my risk profile be? Can you double-click on that for a minute?

Yuri Dvoinos: 100%. That is something we also do as part of our onboarding process: we allow people to understand what their private footprint, or digital footprint, online is, meaning how much data is available on them and their identities online. That goes anywhere from data breaches to data brokers, which we also opt our customers out from. That's another big problem: data brokers collect a lot of personal information on everyone and profile it. All of those tools allow you to understand where you stand. And of course, if you've had any incidents with your identity, then it's very obvious that you need identity protection. But even if you haven't had those incidents, we still argue that you're much better off protecting your most used communication channels. It depends on which channels you're using. If you are a senior person who trusts the phone more: historically and traditionally, we see seniors trusting phone conversations way more than they should. I think the level of phone scams is absolutely peaking; it's on the rise, and we see a lot of financial damage, but we also see a lot of mental health damage. We were interviewing people, and in one unfortunate situation someone's father, a very senior person, made a mistake and believed that someone on a phone call was a customer support representative; it was fraud, and they ended up losing a substantial amount of money. It wasn't the end of the world for that family, but after the father finally realized he had made a mistake, he felt a lot of guilt, and we realized that the mental toll is massive. It's very difficult for some of those folks to recover from those incidents.
So again, you should be better off by protecting your phone conversations. It's easy: we have tools that can give you much more information about how likely a phone call is to be genuine and how likely it is to be a scam. And the opposite is true: if you are younger, you might be trusting Instagram messages much more than you should. So you may want to check the links that you receive in DMs. You may want to install an AI email copilot that will check every single email you receive for gazillions of risk factors, because in our modern world there is zero chance we ourselves will be able to scan every single email against 5,555 criteria. It's just not going to happen. There is this fast pace of life: we're always in a hurry, we're always on the go, we're focused on something, and scammers are also abusing that to some extent. Being mindful of that, of our imperfections, of our disposition to trust, and of the risk being out there, is a good starting point. That is what I always recommend people consider first. Which tools you want to use, and how you want to use them, comes second. But understanding your risk profile, and recognizing that those things happen and you should be mindful of them, is the foundation of how you ensure the safety of your family.

Chris Featherstone: Wow, that's interesting. There's so much there. I had a gentleman who tried to hack my Facebook account, and he said, I'm going to basically take over your account; give me a thousand dollars or else. And I'm like, you mean this? And I deleted it, right? Because, to your point, my comfort level is not in my social media. My comfort level is in my financial institutions and the things that I've built up over time, because of where I sit in my demographic. And so when I did that, I was texting him and I said, hey, I want to understand what you're doing and why. And he's like, come on, you don't really want to understand this. I'm like, no, I want to understand what your motivation is. And most of it was that it was his job. I don't think people get the fact that the people actually doing this are being employed by other groups to go after these accounts, portraying themselves as others, and things like that. So it was really interesting to get his take on what was happening. And then he tried to come back and say, no, I need a thousand dollars for this. I'm like, sorry, buddy, this stuff's deleted. I'm not your profile candidate.

Yuri Dvoinos: Yeah, that is very much true. People who are doing that are part of organized crime. Maybe some of them were forced, or there were questionable circumstances in their lives that somehow got them into it. But I've seen something similar with the pig butchering scam, which is quite common these days. The nature of the scam is that someone, very commonly a female profile, reaches out to middle-aged or senior-aged men, and they very quickly try to jump into a romantic relationship and mix that with an opportunity to invest and make insane returns. Then, as you invest in a non-existent platform that looks legit, the platform shows you that you just made a huge return and you are basically getting rich. So they lure you into investing more and more, but essentially you have just lost all of it, because you will never be able to take that money out.

Chris Featherstone: It's sad. They're targeting our older demographics for a lot of this too, and just taking them for everything they're worth. I know of no fewer than five or six people, directly, whose parents, single, lonely, and older, had their life savings wiped out because of these types of promises. And like you said, there's the mental side of it, not even just the financial side. Taking them for everything.

Yuri Dvoinos: What's interesting about AI, and what people don't understand, is that a lot of times in those chats you wouldn't be chatting with a real, actual person. A lot of times it would be a bot, or a script, or a large language model programmed to chat. It's very interesting how far AI has progressed; you wouldn't notice the difference. And you can also fine-tune the AI model to change, to use a different sentiment, maybe to mimic other conversations. Another thing that's happening: if someone's Instagram account was hacked, an LLM might chat with every single contact, who are all potential victims, but it would change the narrative, the tone, the style for each one. It's very difficult to differentiate, because if you're chatting with your mom and she uses some specific words or language, the LLM picks up that language and mimics it. So now you would be almost certain that you're chatting with your mom. That is where AI is really making things more difficult.

Chris Featherstone: Yeah, yeah. In terms of doing a branded-voice type of scenario, looking at it from that perspective... Because, I mean, can you... yeah, go ahead, Seth.

Seth Earley: What's up? You there? Oh, you can hear me now. Okay. I lost my Internet and had to drop off and come back on my phone, so I missed what you guys were saying. But there's a lot of interesting things there. It sounded like you were starting to talk about the deepfakes, and the pig butchering, for which I get those texts all the time. I start off by saying, how's the scamming business going? And I usually don't get any messages after that. But talk a little bit about how this is done at the device level. Where do you do that interception? Where do you do that intervention?

Chris Featherstone: Seth, you know, it's in terms of all of the fake jobs, gig jobs, and stuff that can go on. Like, to your point, how do you actually dive into that and combat it, and things of that nature?

Seth Earley: Right. And where do you deal with the problem? One of the things that we talked about in the prep call is what happens when you go onto websites, and this is a privacy issue; I know we're switching back and forth between security and privacy, but they're so intertwined. I've been on sites where the granularity is incredibly detailed. They'll say, well, here are our partners, and rather than being able to opt out of all partners and third parties selling my data, you have to go in and select each one, and there are hundreds. It's like, are you kidding me? Or they'll just say, accept our cookies or don't, and if you don't, go away. So there's so much frustration with how organizations are doing it; you really don't have much choice. So where do you protect that? I've had situations where I've had private accounts that didn't have my first and last name on them, just a username, and then over a period of time I start getting things addressed to my name associated with that account, where I never associated my name with that account. Somebody did that integration and correlation where I never gave any permission for it, because that was a completely private account, not under my name. So how do you start addressing those kinds of things, and all of the issues with the data brokers and data sales?

Yuri Dvoinos: There is a lot of activity happening, and your data is definitely being profiled and sold. Of course, we don't necessarily read all the terms and agreements whenever we sign up, nor is that really possible, because sometimes they make them so lengthy, in such complicated and sophisticated language, that it's really difficult to understand what's happening, which is another problem, by the way. But there are multiple ways for you to be a bit more private. I'll briefly mention several pieces of advice that I tend to give folks whenever they ask me this question. First is ultimate privacy. This was the main theme of my startup, FigLeaf, where we envisioned masking across the Internet. What I mean by that is we came up with this idea that if you sign up with a masked email, which is essentially a unique email that you don't use anywhere but on that one website, then it's very difficult to understand who you are, because this email is unique to that website. Even if the website was breached, they can't take this email and apply it to other websites, because it is unique. That is something that Apple is also doing now with its Hide My Email option; we had this before Apple released it, which I'm very proud of. But even with Apple making it very simple, I think a lot of people struggle to adopt this technology. It is the ultimate privacy, but it is very difficult because it's less convenient. There are other things you can do to be a little more private online. One, of course, is to use 2FA, two-factor authentication, for every single important website where you believe you have important data, maybe documents or financial data. We recommend using two-factor authentication, whether through a special authenticator app or through an SMS. That's one thing.
The other thing is that whenever you're on public Wi-Fi, we recommend using a VPN, so that everything you do online is encrypted. There are instances of man-in-the-middle attacks where the Wi-Fi can be listening to everything you type, everything you do, and this is how you might lose some of your private data as well. And this is just free advice, right? Most of these are free, except for the VPN, which I also recommend you select from a reputable company. But if you sign up for a service like Aura, what we can also do for you, in addition to a standard VPN, is delete your data from data brokers. A lot of people don't know they exist, but they profile your data and then they sell it. What we are doing is automating deletion requests on your behalf for most of the largest data brokers out there, so you don't have to do anything. Once you have subscribed to the service, it just happens automatically. It's an ongoing process, and this is how we systematically work to get your digital footprint under control and contain the exposure of your private data.

Seth Earley: Yeah, that's definitely something that's necessary. And I'll personally look into that; I didn't realize you had consumer products. I thought you just did the OEM side, but that's great, so I'll check that out. One of the things I was curious about is that, as you mentioned, Apple offers masking, and with American Express and MasterCard you can use a virtual card and do masking. I haven't used masking because I'm not sure of the level of inconvenience it adds. I'm like, well, I just want to log in with my regular email address; I don't want to have to remember anything extra. And I don't know if that happens automatically; I haven't checked into it. So if you mask your email, do you now have to use that virtual email? Or is that automatic, happening behind the scenes?

Yuri Dvoinos: It is automatic. It does happen behind the scenes. The inconvenience with that setup is that you have to remember for which websites you decided to hide your email and which use your regular one. You need to realize: oh, I need to use this weird Hide My Email thing here. Right.

Seth Earley: I forget which ones I sign in with Apple on versus my regular email address, and that can be confusing. But where it gets you is if you...

Chris Featherstone: Like, I had a charge that I thought was fraudulent, and when I called the merchant, they were using a different card number, and I had no idea what that card number was. I'm like, yeah, of course, that's not my card number. I hadn't realized it was actually a masked number, generated from my original number, used to make that charge. So I was chasing my tail in a loop trying to figure out what the data was.

Seth Earley: Yeah, that's a really good point. It does add complexity; as you say, there's a trade-off between convenience and security. But one of the questions I have is about your algorithm. Most of the email providers have spam filters; Microsoft Office 365 will mark things as spam or phishing or whatever. How is your algorithm different, and how is it complementary to those?

Yuri Dvoinos: Sure. I think the most interesting algorithm we have come up with, which also includes AI, getting back to the topic starter, is based on the belief that we need to balance security and convenience rather than getting completely off the grid, which we find appropriate only for some circumstances, for example where the website is shady and you don't even want to sign up with your actual email. We believe the balanced approach is where we check most of the incoming communication: we check all of the messages that are being sent to you over social media, SMS, email, and phone calls. With that approach, maybe your data was exposed, maybe you will be exposed to a bit more communication, but what we can offer is almost an augmented reality. You will be looking at every single incoming email, let's say, and AI will have already scanned that email and will tell you: okay, the risk score for this communication is X. Now you are far more empowered and informed about the risk happening around you. And the beauty of that is you don't have to change any behavior, anything you are used to; it's pretty much business as usual for most of us, but you are using a lot of powerful technology to stay on high alert. Sometimes people don't realize they're being pressured psychologically, or that there is a sense of urgency someone is applying, and since they don't realize they're chatting with someone who's pretending to be someone they trust, they may bite and make decisions they're going to regret. The good thing about AI is that it will pick this up and immediately tell you that you have to stay on high alert.
And sometimes all it takes is someone telling you that something is shady, right here, right now. Which I think is a really cool balance, because we're not fully addressing it for you, we're just raising awareness. But because it happens in real time, it's very handy, and again, you don't have to change your behavior. So I think this approach is far more balanced, and we're really big believers in it.
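To make the "risk score for this communication" idea concrete, here is a vastly simplified, rule-based sketch of message risk scoring. The factor names, patterns, and weights are invented for illustration; a production system like the one described would use trained AI models over hundreds of signals:

```python
import re

# Hypothetical risk factors and weights, invented for illustration only.
RISK_FACTORS = {
    "urgency":       (re.compile(r"\b(urgent|immediately|act now|within 24 hours)\b", re.I), 30),
    "payment_ask":   (re.compile(r"\b(gift card|wire transfer|bitcoin|bank details)\b", re.I), 35),
    "credential":    (re.compile(r"\b(verify your account|password|login)\b", re.I), 20),
    "impersonation": (re.compile(r"\b(ceo|irs|customer support)\b", re.I), 15),
}

def risk_score(message: str) -> tuple[int, list[str]]:
    """Return a 0-100 risk score and the list of factors that fired."""
    score, hits = 0, []
    for name, (pattern, weight) in RISK_FACTORS.items():
        if pattern.search(message):
            score += weight
            hits.append(name)
    return min(score, 100), hits

score, hits = risk_score("URGENT: verify your account and send a wire transfer within 24 hours")
# A benign message fires no factors and scores 0.
```

The value of such a system is exactly the point made above: it surfaces urgency and payment pressure in real time, before the recipient has committed to anything.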

Seth Earley: One of the things that we talked about was that these things are getting increasingly creative, and it is an arms race. I get a lot of emails from, allegedly, our employees asking to change account numbers, right, to change their direct deposit. And by the way, the preferred scamming bank is Green Dot Bank; anything from Green Dot is problematic. I also had a fraudulent store opened on, I'm blanking on the name of the tool... Salesify? Salsify. Salsify, yeah. Which then ended up depositing a five-thousand-dollar check into a Green Dot account, and I got a debit card in the mail and an email saying there's money in your account. And I'm like, I never applied for this, and by the way, what's this $5,000 doing in here? I didn't understand how they were going to deal with this, but I reached out. It was a QuickBooks debit card, so they must have gotten all the information needed to open this stuff. And I still had problems: there were multiple stores that were opened, and I had to return the money. Then I just got a check in the mail the other day for some refund of 50 bucks, and I'm like, what is going on here? So this is still not resolved, even though the bank said my Social Security number was flagged, and the software company, the e-store, said my number was flagged. And this wasn't even in my company's name. Again, very sophisticated, interesting things, but I don't understand their end game when they're depositing this money. But talk a little bit about where you see this going, especially since you started to talk about deepfakes. What do you envision down the road, and are we going to be able to keep up with this constant battle?
I mean obviously we have to, but it just seems like it's getting more and more challenging every day to, and it's not necessarily something that I did to cause this fraud where I was scammed. There were, this was identity theft and, and scamming other entities. So where do you see this going when, when things get so sophisticated?

Yuri Dvoinos: That's absolutely sad, and I feel sorry that you've been a victim of identity fraud.

Seth Earley: Fortunately, there are no financial implications for me yet, but I'm filing police reports and dealing with it. It's costing a lot of time. Anyway, carry on.

Yuri Dvoinos: Yeah, at the bare minimum it's a tremendous long-term inconvenience, which is probably the most optimistic outcome in your case. And I think your story, just in and of itself, is the reason why Aura exists and why we are here. As we see it, an identity scam is really someone pretending to be you in front of the bank so they can issue credit under your name. But the same dynamic is also happening in other instances. For example, a very common corporate scam, which we now see rolling out into the consumer world as well, since with AI it's getting more cost effective, if you will, is when people pretend to be a legitimate company and send you fake invoices. If someone was listening to your email correspondence, they would know when you potentially expect an invoice, and what they can do is send you an email from an address that looks exactly similar, from the same person, using the same text and style. So it's very difficult for you to recognize this as being a scam. While this is primitive, it is effective; we see that it works, that people are trusting those emails. But with AI progressing so fast, not only do we see this with text, we also see it with audio. What I mean by that is that it is easy right now to create a voice synthesizer that mimics your voice or someone else's voice. It used to be that they would need a sample of you talking for fifteen minutes or half an hour, and the result would still be only somewhat similar. Now they can start mimicking your voice with a much smaller sample, almost a 10 to 30 second sample, and the voice synthesizers are much better. Now think for a minute that you got a phone call, and imagine that the caller ID was spoofed.
So now all of a sudden it says that Chris is calling you, and someone with a voice that is very similar to Chris's voice asks you to open a link he has sent to your email, and it's urgent. And just to make sure that you will not pick up on the imperfections of the voice synthesizer, they would add background noise. They can add street noise or traffic sounds. So maybe you would register those imperfections, but you would think, is this him or is this just the background noise? I can't hear you really well, but it seems like Chris, okay, Chris. And all of a sudden you're opening a malicious link. That is a very classic attack. This angle can also be used over WhatsApp audio messages and across other channels as well. And the ultimate danger of those scams and deepfakes is the video deepfakes, right? We got to the point where you can reproduce the voice, and then, with the right actor, you can create a mask that will make this person look incredibly similar to the person you're trying to mimic. So imagine someone sends you a link: hey, this is, let's say for the sake of the example, this is Chris, and Chris is the CEO of the company. We're in the middle of an urgent meeting, we need your help right now, jump on the Zoom link. You jump on the Zoom link, and you're seeing someone who looks like Chris and talks like Chris, instructing you to do something that will, frankly, not benefit the company. All it takes is to compromise security for one employee, or maybe one of a group of important employees, and your entire security posture is compromised. So that is the ultimate problem that we're facing with deepfakes.

Seth Earley: It's pretty scary, and it's pretty unbelievable, I'll tell you. I had one call once, allegedly from Citizens Bank's fraud department. And the guy's tone was pitch perfect, right? Like he had worked in a fraud department. He sounded very low key, very relaxed, very official sounding. And he said, I'm calling from, you know, check the back of your card, I'm calling from an official number, and that number cannot be spoofed. Right? And that was a lie right there. But he said that, and I looked at the back of the card; it looked like the right thing. Then he started asking me questions, and I was like, wait a minute, no, you're not supposed to ask me that question. He goes, okay, don't worry about it, let me try something else. Turned out he was on the phone with Citizens Bank at the same time, trying to break into my account, trying to impersonate me, and that's why he was asking certain questions. It was so incredibly sophisticated. I remember writing it all out for Citizens, because he finally did get me to do something that I quickly realized was not correct. I immediately called them, and they did fix it. But it was unbelievable. They told me he was on the phone with them while he was on the phone with me, giving them answers that he was trying to get from me. And when he asked certain questions, I said, no, you're not supposed to ask that question, and that's what made me very suspicious. It was very sophisticated, it was very interesting, and I'm on the alert for that sort of stuff.

Chris Featherstone: But don't you think, Yuri, that for every great thing that comes out in technology, there's also a nefarious act using that same technology, and people don't keep that in mind?

Yuri Dvoinos: Yeah, I do think so. I do share that sentiment, although I think it's not just the Internet, it's more universal. Whatever we create, somehow we use it for the greater good, and we also get very creative in completely misusing whatever we have created. But on the phone call side, a very common trick that fools a lot of people is phone number spoofing. For our customers, for example, on certain plans, the family plan or the suite, we offer a service called call protection. What it does is route all unknown calls, meaning calls not from your contacts, through Aura. Then we determine the risk profile for that specific phone number, and we will always rewrite the CNAM. The CNAM is the caller ID name, the parameter that shows up instead of the phone number. That is what people don't understand: instead of the phone number, the convenience of the technology allows a caller to display something called the CNAM, or caller ID, and that can be another phone number; it can even be your own phone number.

Seth Earley: That's so crazy. How can that system be so open to that type of lack of security? It's just mind boggling. When they came up with the Integrated Services Digital Network, the first redesign or revamping of the phone system from the analog days to the digital days, to leave those gaping security holes is just unbelievable. It's mind boggling.

Yuri Dvoinos: But people don't know how common that still is. I think it's getting better, but it's still extremely common. Somehow people keep finding loopholes in the system to continue abusing the CNAM. But for anyone who uses Aura's call protection, that will never happen, because we never allow custom CNAMs. We will always rewrite it either to the original number or to a name that we can verify.
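The rewriting rule Yuri describes here can be sketched in a few lines. This is a hypothetical illustration of the general idea, not Aura's actual implementation; the trusted directory, the risky-number list, and the function name are made-up stand-ins. The core rule is: never display the caller-supplied CNAM; show either a name you can independently verify or the raw number.

```python
# Hypothetical sketch of CNAM rewriting: ignore the display name the caller
# supplies and replace it with something verifiable. The data below is
# illustrative only, not real numbers or a real provider's database.

TRUSTED_DIRECTORY = {"+18005551234": "Citizens Bank"}  # independently verified number -> name
KNOWN_RISKY = {"+19005550199"}                         # numbers previously flagged as scam sources

def rewrite_caller_id(number: str, claimed_cnam: str) -> str:
    """Return the display string shown to the user; the claimed CNAM is never trusted."""
    if number in KNOWN_RISKY:
        # Known-bad numbers get an explicit warning label.
        return f"Suspected scam ({number})"
    verified = TRUSTED_DIRECTORY.get(number)
    if verified is not None:
        # Only names verified against a trusted directory are displayed.
        return verified
    # Otherwise fall back to the raw number, never the caller-supplied name.
    return number

# A spoofed call claiming to be "Chris" is shown as just its number:
print(rewrite_caller_id("+12125550000", "Chris"))       # -> +12125550000
print(rewrite_caller_id("+18005551234", "Fraud Dept"))  # -> Citizens Bank
```

The design point is that the display name is attacker-controlled input, so the safe behavior is to discard it entirely rather than try to validate it.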

Seth Earley: Yeah. And just so people know, on this podcast we don't typically pitch products, but I think this is really a product people should be aware of and take advantage of. So this is not a sales pitch; it's just information and education, because this stuff is certainly stuff I need.

Chris Featherstone: People don't know where to go to get this stuff either. The problem is, if they go out and just do general searches, the solutions they find are very expensive, or they don't know, like you're saying, whether they'll work.

Seth Earley: Yeah, exactly. Will they work? Are they a scam themselves? You know, look at the companies advertising themselves as security companies that are actually scammers, or causing problems and scams.

Yuri Dvoinos: Scams today are so universal that if you're a senior, there's the phone call scam; if you're a younger person, there are the Instagram scams. There are so many different scams. Another interesting thing that we're seeing is scams targeting minors. What we are seeing is a lot of automated accounts that pretend to be female accounts; they target younger boys and trick them into sending intimate photos or videos. That, of course, is a scam, and a lot of times they will then blackmail those boys. We had those very unfortunate incidents out of Canada, in British Columbia, where this went way too far and one child wasn't able to cope with it. In British Columbia they even came up with a ridiculously provocative public campaign to raise awareness, because those scams are very common. And it's a large language model that tricks people behind the scenes.

Seth Earley: Yeah. And more than one child has ended their life. And then the scammer communicated to the child's sister, or somebody else close to them, that they'll ruin their reputation. They're so heartless and so sociopathic in what they're doing. It's mind boggling, the lack of humanity, the damage it's doing, the psychological damage. But even the things you talked about, like the deepfakes of kids in middle school: even though people know it's fake, it is devastatingly traumatic for them. It is horrendous. I agree, the psychological toll is just mind boggling.

Yuri Dvoinos: It's just another example of what we talked about with the mental toll on seniors from phone scams. The same thing happens to minors; it's just a different scam, a different package. But to be honest with you, I feel incredibly grateful and incredibly lucky to be able to drive some of the technology to fight against that. I lost a very close friend of mine, and I know how it feels. We are building something that I truly believe is for the greater good and that can make a difference. Every single day I come to work restless, working hard to come up with things that people will pick up, so that not just our company but society overall can be a little bit more aware. And I love doing podcasts, I love talking about these things, because I'm surprised, and shocked to some extent, that people still don't understand the level of risk around them, what's happening, what's possible, and some very basic foundational protections.

Chris Featherstone: What's refreshing, Yuri, is that amid all of this insecurity, all of this stuff that you're dealing with, you're very optimistic, which is actually refreshing. So thank you. Because it could feel like doomsday for a lot of folks just thinking about how to combat it, and they could be pessimistic. But you're very optimistic, so it's refreshing.

Yuri Dvoinos: No, we're not gonna let it happen. That's the whole thing.

Seth Earley: That's where your energy and your values come through, and it is great to see, and it's great to hear about the things that can be done. You know, we were talking earlier about another topic...

Yuri Dvoinos: While we're thinking about where to steer the conversation, I can give one piece of free practical advice, which doesn't require subscribing to any fancy service or any cybersecurity product per se. It's free; we all can do it. And I love sharing this advice, because a lot of people come back to me and say, hey, thank you, you really rescued me from being part of a scam I somehow got into. Whenever you receive a risky communication, and by risky communication I mean a suspicious link or a financial transaction of sorts, even from your close friends, and especially from your close friends, verify it using another channel of communication. For example, if you have received an Instagram message from your best friend saying, hey dude, I need your help, can you send me X dollars through Venmo, pick up the phone, call the guy and say, hey, I just wanted to double check that you're all right. If you have received an email, send them a WhatsApp message. You know what I mean? It's very difficult to compromise several communication channels at a time, and that is free. Some people keep communicating over the compromised channel, just reinforcing their bias, when in reality what they should have done is just pick up the phone and verify.

Seth Earley: Yeah, pick another channel. If they have compromised this channel, what you're doing is double-checking, verifying, or validating via another channel, which is exactly right. That's wonderful advice. And I know parents get calls late at night from their kids, supposedly in jail and all of that, and there's got to be another way to validate: okay, give me the phone number, what station are you at, let me call back, etc. But it is interesting. Like you say, it is an arms race. And the other thing that we talked about a little bit is the use of deepfakes on social media to move ideological ideas forward. As you said, people can no longer just believe what they see; they have to think critically. And one of the issues there is that the algorithms are optimized for attention and engagement, and engagement happens not necessarily through things that are factually correct, but through things that are emotionally escalating or emotionally triggering. And the things that are emotionally triggering are many times false; they're just not true. And yet those are the things that get shared. So it comes down to reading something before you forward it or share it, researching it, not taking things at face value. We do need, as a society, to become better critical thinkers when we see this stuff, and ask whether there's a nefarious motive behind it, what the motive of the sender is, rather than just blindly taking things and saying, wow, this is an incredible video, this is awful. Well, it could be a deepfake, right? It could be false.

Yuri Dvoinos: Yeah, this is probably the obstacle that is the most painful, as it will require us to rewire ourselves at least a little bit. And I guess we're witnessing that the world is changing, the informational space is changing, and we are one step from not being able to trust what we see. If you think about it, it's the first time throughout human history that we're facing such a problem. It's always been the opposite: if you have seen something, then it has happened. Of course, when you see a Hollywood movie, you know it's a movie, so you don't really apply that. But if you see someone in the news, or you see a video of something happening, then it most certainly has happened. Well, that's not going to be the case anymore. One implication is misinformation: people might be able to spread goofy conspiracy theories, like the world is flat and so on, which some people take with a pinch of humor. But unfortunately, disinformation is a much, much more alarming threat, because once people have discovered that they have this instrument for manipulating our opinions by creating a deepfake, then of course we can face things that haven't happened, news that isn't true. So fact checking is going to be critical for us to move forward and stay realistic.

Seth Earley: Yeah. Yuri, this has been so much fun; I really enjoyed chatting. I do want to ask, what do you do for fun? I know you just got back from vacation, and I know you have a son. Tell me a little bit about what you do in your personal life.

Yuri Dvoinos: Right. I tend to joke that I used to have many hobbies before I had two kids. Now I just spend time with my kids; they are two and four. My oldest is four years old. She's...

Seth Earley: Great ages, man. A boy and a girl?

Yuri Dvoinos: Yes, a boy and a girl; the youngest is a boy. So we're blessed with that pair, and they're great characters. We do spend a whole lot of time together. My wife and I love hiking; we were hiking in Austria this summer, which was very refreshing. Typically I like to do things that help me disconnect, with an element of digital detox. My work involves a lot of screens, for a good reason, since I work in the digital economy. But sailing, where you can't really have a screen because you always need to move around the boat, and hiking, where you don't have a signal, those are perfect for me.

Seth Earley: That's great. That's wonderful. Well, listen, this has been so enjoyable. For people who want to find you: your LinkedIn is under your name, spelled D-V-O-I-N-O-S; the company website is aura.com; and you also have a Twitter handle, spelled U-R-I-Y-N-O-S. We're going to be putting all of that in the show notes. I just want to thank you so much for your time and for being with us today. It's been a lot of fun and very informative.

Yuri Dvoinos: Thank you. It's been a pleasure.

Seth Earley: Great. So that's been our show for today. Thank you for attending the Earley AI Podcast; we'll see you next time. Thanks again to our audience, thank you, Yuri, and of course thank you, Chris and Carolyn, for all the other work. Thanks, guys. Thanks, Yuri.

Meet the Author
Earley Information Science Team

We're passionate about managing data, content, and organizational knowledge. For 25 years, we've supported business outcomes by making information findable, usable, and valuable.