The mobile biometrics revolution of the last several years has helped to drive excitement about a number of modalities, with voice being one of the most prominent. Voice has emerged as a key user interface on smartphones thanks to virtual assistants like Apple’s Siri, and on smart home devices via platforms like Amazon’s Alexa. People have become very comfortable interacting with their devices via speech, and this has opened up new opportunities for voice biometrics technology to be used to identify speakers passively and continuously.
One company benefiting prominently from these trends is Voice Biometrics Group, which has been around since before the mobile biometrics revolution really took off; its team recognized the broad market potential back in 2009. So naturally, the company has been busy lately, and it brings a deep well of expertise to the major trends underway in the voice biometrics market. Voice Biometrics Group CEO Pete Soufleris recently shared some of these insights in an interview with FindBiometrics Director of Digital Content Susan Stover, touching on new regulations like GDPR and BIPA, user consent, the rise of AI and deep learning, and the threat of presentation attacks – not to mention, of course, the voice biometrics solutions that VBG has to offer.
Read our full interview with Pete Soufleris, Founder and CEO, Voice Biometrics Group:
Susan Stover, Director of Digital Content, FindBiometrics: I know Voice Biometrics Group has been a leading provider of voice biometric identification and verification software delivery systems and related consulting services since 2009, but can you give our readers a brief description of your company and its genesis?
Pete Soufleris, Founder and CEO, Voice Biometrics Group: Sure. Thanks Susan, and thanks for inviting me. I’ll start with VBG. We’re a platform-as-a-service provider of voice biometric software. Over the past 10 years we’ve of course developed our own voice biometric engine and APIs, but our platform also includes a highly scalable web server and database, as well as all the administrative controls, reporting, and audit functionality that you would expect. It’s all tightly integrated, very flexible and very developer-friendly. Basically, our clients and partners provide the interface and business logic to their end users and then we provide the voice biometric functionality when and where needed.
This flexibility carries over to our deploy-anywhere model. Today we have production deployments in public clouds like Alibaba, Amazon, and Microsoft Azure. We’re in managed hosting providers like Rackspace and Liquid Web. And we have numerous on-premise locations globally, running on both Windows and Linux servers. So, it’s really a put-it-anywhere-it’s-needed model.
In terms of how VBG got started, six of our team members first began working together in 2005 at a voice biometric start-up called Voice Verified here in the Delaware Valley of Pennsylvania. And although that effort was maybe a little too early for the market, we definitely all saw the promise of voice biometric software, as well as the power of cloud computing.
In 2009, several of us saw an opportunity to start VBG, so we pretty much just went for it. And since then, our core philosophy has continued to gel. Today we all understand that our platform is every bit as important to our clients and partners as the core voice biometric technology itself. Clients, of course, know and expect you to have a great voice biometric engine, but it also has to scale, it has to have full data protection and redundancy, and it has to be rock solid in every manner. So, that’s pretty much what our VBG platform is all about.
FindBiometrics: Like you mentioned, biometric data privacy and protection is a major concern in our industry. How do you see laws and regulations like GDPR impacting companies that collect biometric identifiers?
VBG: I think there’s already been a huge impact on companies that are subject to the GDPR, but then I also think we’re just scratching the surface globally. The GDPR in many ways is very welcome; it brings long-overdue visibility to the topics of personal data privacy and personal data protection. It also mandates significant responsibilities for companies that handle personal data.
And the GDPR gives end users rights to the personal data that they’re providing as they interact with companies. So, I think this is very welcome. But it’s definitely not been easy for companies to comply with GDPR because it’s not just about the capture and storage of personal data; there are a lot of significant legal, technical, operational, and other processes that have to change in order to be considered compliant.
The GDPR also treats biometric technology vendors like us differently. Biometric data is a special category of personal information, so there are additional requirements – making compliance even more difficult.
And then here in the U.S. we don’t have GDPR, but several states have what’s called a Biometric Information Privacy Act, or “BIPA”, on the books, and those laws have many concepts that I would say are similar to the GDPR. The last time I looked into this, there were probably a half dozen or more states considering similar BIPA-like legislation, and I understand there’s also a lot of pending legislation that is GDPR-like. So, it’s just a matter of time before the U.S. has its own version of the GDPR – within a year or two at the most, I would think.
The E.U. and the U.S. aren’t the only ones. Many other countries are addressing these same issues. I don’t think these laws and regulations are a passing thing, they’re becoming a regular part of the way we all have to do business. And for companies who collect biometric data, the only thing we can do is take a proactive stance and not assume anything. It doesn’t matter if you have a direct relationship with end users like our clients have, or are a downstream anonymous service provider like us. Everyone will have to assume some responsibility relative to these requirements.
In our case, end users are completely anonymous to us, but that doesn’t matter. We can’t just assume that our clients and partners are doing the right thing. So in the last year or so, we’ve been very heavily involved in the onboarding process with new clients and partners. We’re working closely with them, asking a lot of questions, a lot of legal and compliance questions that touch on GDPR, BIPA, etcetera. And I think it’s a healthy discussion; it makes the process a little longer, but the good news is that our platform was built with data minimization and compliance already in mind. So we had to do very little to address GDPR, and in fact, at this point, we’ve gone through several GDPR audits with new customers that we’ve onboarded since last year. So, we think we’re doing the right thing. But there’s more for all of us to do, and it’s absolutely a big topic for the whole industry.
FindBiometrics: Right, and user consent and the right to be forgotten also factors into this. In your view, how do vendors handle obtaining end user consent and actively managing their wishes over the length of their relationship? What’s the process for returning data to the end user?
VBG: That’s kind of a loaded question. I guess I’d start by saying, it depends on the situation. We’re seeing maybe three different types of prospects relative to this topic. There are large enterprises; they’re very much on top of the issues and have strong infosec policies. They already know this stuff and they have their own consent management systems in place.
Then there are those who understand these issues but maybe need help managing the consent. They might be smaller or mid-tier companies who just want to get a consent management system, and it’s not as important to them that it be their own homegrown system.
And then the last group is those to whom it either doesn’t apply, or who don’t know. This is an interesting group, because if they don’t know, we need to tell them, and that’s become part of our process. And if it doesn’t apply to them now, that’s fine, but we’re taking the approach that it’s going to apply to everybody, because we think that’s the wave of the future.
We’ve developed a full consent management system that’s built into our platform, so if customers want to use it, they’re welcome to. They can use it directly; we even have variables that allow them to brand it if they want to expose it directly to their end users. Or we have a full set of API methods, which is actually what we used to build our own tool, and customers can have access to those and just build their own system. We don’t charge extra for it; to us it’s just part of doing the right thing. So we just work on this during the onboarding process, if and as needed.
The whole thing with consent really is: it’s not static. You don’t just have an onboarding or registration process, say, “Yes, I agree to have my biometrics collected,” and then be done with it. That’s not the end of it. Consent can change over time as users change their minds. They should be allowed to flip-flop back and forth. And although you hope they don’t necessarily do that, you have to have a system that would allow them to change their mind every month if they wanted: “Yes, I consent. No, I don’t. I’m revoking consent.” So, we allow that, and that’s the first part of your question.
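To make that concrete, here is a minimal sketch of what a grant-and-revoke call against a consent management API of this kind might look like. The host, endpoint, field names, and helper function are hypothetical illustrations for this article, not VBG’s published API.

```python
# Hypothetical sketch only: the host, endpoint path, and field names are
# illustrative assumptions, not VBG's actual API.
import requests

BASE_URL = "https://api.example-voicebio.com/v1"  # placeholder host
API_KEY = "YOUR_API_KEY"                          # issued to the integrating client

def set_consent(subject_id: str, granted: bool) -> dict:
    """Record (or revoke) an end user's biometric consent.

    The integrating application, not the biometrics provider, knows who the
    end user is; only an opaque subject identifier is sent upstream.
    """
    resp = requests.post(
        f"{BASE_URL}/consent/{subject_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"consent": "granted" if granted else "revoked"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Consent is not static: the same call can flip it later if the user changes their mind.
set_consent("subject-12345", granted=True)    # user opts in at enrollment
set_consent("subject-12345", granted=False)   # user revokes consent later
```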
The second part is, “What is it exactly that we’re giving them access to?” With our system, what happens is, a customer application interfaces with us, we never have contact with an end user. So, at the direction of the customer application, speech is being provided by the end user. They’re saying a passphrase or a number string or speaking conversationally, something that we’re going to do voice biometrics with and they essentially send this to us as a recording.
Our API only accepts audio recordings or streams. This touches on a common misconception. Some people think, “Oh, you’re sending a voiceprint around.” We don’t. Voiceprints are only created, modified, or deleted in our platform. A VBG voiceprint only lives in the VBG platform. To get the process started, customers just send us regular audio recordings, not unlike an MP3 that contains a song.
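As a hypothetical illustration of that flow, a client application might submit a plain audio recording for verification roughly like this; the endpoint, parameters, and response shape here are assumptions made for the sketch, not VBG’s actual interface.

```python
# Hypothetical sketch: endpoint, fields, and response shape are assumptions
# used to illustrate "send audio in, get a decision back"; no voiceprint is
# ever returned to the caller.
import requests

BASE_URL = "https://api.example-voicebio.com/v1"  # placeholder host
API_KEY = "YOUR_API_KEY"

def verify_speaker(subject_id: str, wav_path: str) -> bool:
    """Send a plain audio recording for verification against a stored voiceprint."""
    with open(wav_path, "rb") as audio:
        resp = requests.post(
            f"{BASE_URL}/verify/{subject_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": ("sample.wav", audio, "audio/wav")},
            timeout=30,
        )
    resp.raise_for_status()
    result = resp.json()
    # The platform returns only a decision/score; the voiceprint stays server-side.
    return result.get("decision") == "accept"

if verify_speaker("subject-12345", "passphrase_attempt.wav"):
    print("Speaker verified")
```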
In terms of this whole consent management and personal data topic, we provide access to the speech samples that the customer may have sent to us, and that’s it. This is coordinated by the client and partner applications because end users are unknown to us. But, end users didn’t provide us a voiceprint, so we’re not going to send them back a voiceprint. We don’t want to create a new security risk. But if they want original speech samples, we will do that to the extent that we have them stored.
And relative to storage, we generally don’t keep a lot of speech samples in the system. We take the data minimization principle very seriously, we only keep enough samples on hand to do the job we need to do as directed by our customers. And then when we don’t need the data anymore, we get rid of it as quickly as we can. So, we have the tools that allow our clients’ customers to get whatever speech samples they need back to them within the limitations of the data retention policy.
FindBiometrics: I think that’s a critical approach within our industry and that’s a really great way to go about that. Just to change the subject a little bit, we’re seeing AI and deep learning expanding the field of biometric vendors. Definitely something I saw firsthand when we were reporting from Mobile World Congress in Barcelona. With this growing presence in our industry, what would you say customers should look for, and perhaps even look out for when considering biometric vendors?
VBG: Yeah. That’s a great question. There’s absolutely no doubt that deep learning is the biggest thing that has happened in the voice biometric industry in years. It’s everywhere. There’s a tremendous influx of new vendors in the market, many of them are coming directly from higher education institutions. This isn’t surprising because many schools are now offering degrees in data science or have respectable speech labs. So we’re hatching all of these very bright young minds who understand what it means to be involved with big data and what the implications are. We’re also seeing a lot of updated marketing campaigns that highlight the use of AI and deep learning, improved accuracy and the like.
And deep learning projects are certainly now occupying a significant portion of our product road map. So yes, there are new vendors out there, there’s a lot of marketing, and there’s a good reason for it – deep learning techniques hold a lot of promise. We’re very excited about the things that we’ve been able to accomplish with deep learning algorithms, the problems we are solving, and the future of our roadmap.
You asked about recommendations. I still think this is relatively new science, and if you go back three to five years, nobody was talking about this stuff. It’s really been the last few years. But, I think it’s really important to recognize that deep learning algorithms are really only as good as the data that are supplied to them, and the skill and experience of the engineers who are configuring the solutions. And a lot of data is required, data that must be properly coded and organized so that accurate modeling and transfer learning can occur.
So, if you’re considering a vendor, you should select a company that’s been in business for a while, has access to the proper data, and has experience servicing customers similar to you: similar languages, use cases, and usage conditions. The vendor definitely needs to be knowledgeable and have a body of work that makes sense.
The other key is that vendors need to be able to back up their claims. That’s something we’ve always believed in. We provide customers with a trial; we don’t want them to look at a four-color glossy marketing brochure and be sold on, “Hey, buy us.” It doesn’t work that way. People want proof, and I think if somebody’s considering technology from any vendor, that vendor should be able to back up their claims. Published results are great, but they’re no substitute for hands-on proof.
So, I would say listen to everything that’s being said, but find out what kind of experience they have, what kind of data they have, and work with them on a proof of concept before committing to anything long-term.
FindBiometrics: As we both know, the rise of synthetic speech is a very hot topic in our industry. In your opinion, how close are we to practical applications of the results we’re hearing about from research labs? I guess we kind of touched on this, but how close do you think we’re getting?
VBG: Well, we’re definitely getting there. I think it’s a tough question but an important one. There’s clearly a lot of research into synthetic speech, and into synthetic speech detection for vendors like us. But these are still relatively new topics that coincide with the availability of deep learning algorithms. So it’s difficult to pin a time frame on it, and I couldn’t say for certain that there are large-scale synthetic speech attacks right around the corner; I kind of doubt it. But there’s no doubt that someone with access to the right tools, and a good amount of speech from a target user, can potentially create a reasonable voice clone.
So, for now, I think you have to look at it that way. Is it reasonable for a fraudster to inject themselves into an authentication process at the right time and place, with the ability to grab the right content from that user and continue on as that user? I think it would require a whole lot of collusion, or at the very least be the result of friendly fraud, where a spouse, a child, a relative, a neighbor, someone who has significant personal access to the individual, would have the ability to record them. Or the target would have to be a public figure, like a president or somebody who’s frequently speaking, where an attacker could take their audio and try to hack them. It’s that kind of thing. Large-scale synthetic speech attacks seem very unlikely right now.
So I think the key defense is to use a layered or multi-factor approach. You can’t rely on any one authentication factor; you’ve got to use multiple factors. In parallel with a multifactor approach, vendors like us will continue our research efforts, figure out what we can about synthetic speech detection, and build in appropriate countermeasures. VBG is doing this, and so is every other voice biometric company, I would imagine.
FindBiometrics: Great. So, to wrap up with the final question: in your opinion, where do you see the future of voice biometrics and synthetic speech?
VBG: Well, I think in general, anytime you have authentication you’ll have someone else trying to crack it or bypass it. You’ve heard stories about early fingerprint readers being bypassed with Play-Doh impressions, or less sophisticated facial recognition programs being bypassed by photographs. These early negative results sometimes happen. Then these systems evolve, they get better. Vendors develop appropriate countermeasures and move on until the next sophisticated fraud technique arrives, and then you have to research and develop new countermeasures, and so on. It’s a cycle.
I don’t think voice biometric vendors are going to be any different relative to synthetic speech and voice biometrics. I think we’ll continue to be in this weird relationship with fraudsters relative to measures, countermeasures, counter-countermeasures, etcetera.
So, I go back to multifactor authentication techniques. We don’t have one client or partner who uses us as their sole authentication component. Everybody deploys us as part of a multifactor discipline. If you have two, three, four, or more authentication factors, the likelihood of a fraudster circumventing the system becomes statistically very low, whether by synthetic speech or other means.
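As a toy illustration of why stacking factors drives that likelihood down, assume each factor independently has a one percent chance of wrongly accepting a fraudster. The rates are made up and real-world factors are rarely fully independent, but the multiplication shows the intuition.

```python
# Toy illustration: if each factor independently has a 1% chance of wrongly
# accepting a fraudster, stacking factors multiplies that chance down.
# The rates are illustrative assumptions, not measured error rates.
factor_false_accept_rates = [0.01, 0.01, 0.01]  # e.g. voice, PIN/OTP, device check

combined = 1.0
for rate in factor_false_accept_rates:
    combined *= rate

print(f"Combined false-accept probability: {combined:.6f}")  # 0.000001, i.e. about 1 in a million
```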
And that’s really what most fraud and risk people are looking for, at least today. They’re not expecting 100 percent perfection, but they want a very, very low probability that the person coming in is a fraudster. And voice biometrics, when used with other factors, gives people that. I think synthetic speech is here to stay, and I think there’ll be countermeasures, but I think they’ll be very manageable when used within a proper multifactor discipline.
FindBiometrics: Wonderful. Thank you so much for joining me today. It was just such a great conversation. It was so great to hear your insights on all these major topics.
VBG: Well, thanks Susan, for having me.