Bill Lombardi of PowerSphyr breaks down how paradigm-busting technologies such as artificial intelligence can build up your business.
There is a lot of energy and momentum around how artificial intelligence and analytics can be deployed to enhance business operations. But if you’re not a data scientist, it can be challenging to know what is genuinely achievable versus what is merely hype or conjecture.
To help bring greater clarity to this topic, we reached out to Bill Lombardi, the recently named chief information officer of PowerSphyr, a company that provides wireless battery charging and comprehensive battery system solutions. Bill has over twenty years of industry experience designing and implementing integrated advanced analytics and artificial intelligence solutions across industries. Prior to joining PowerSphyr, Bill served as a Global Sales and Delivery Executive for the IBM Watson AI Global Center of Competence, where he led cross-functional client teams through user-centered design and value creation.
“Imagine the quality of customer experience when human agents focus on complex issues and are provisioned for success with intelligent virtual agents of their own that ‘listen’ to the conversation and proactively serve up best candidates for response. This effectively allows humans to use AI to create a panel of agents who become overnight product and process ‘experts’ with accurate and personable responses that enable them to help address the most challenging of customers.”
– Bill Lombardi, PowerSphyr
ANSWERS: How do you see AI playing into predictive business analytics and forecasting, particularly in the near future? How can you use natural language processing and image recognition and those things to help with your business analytics if you’re a business owner?
BILL LOMBARDI: There are a lot of ways, and it touches what I call the entire value chain. If you think of it, most organizations in the marketplace are doing much the same basic activities. They either buy or harvest product in some raw form, and that could even be information. Then there’s some element of manufacturing, assembly or massaging into a finished good or final product. They then market and distribute their finished product, and sell it for profit, right? In today’s digitized marketplace, business now has an efficient opportunity to “sense and respond” to the end-customer experience on a personal level, closing the loop. Historically, validating and responding to the customer experience at an individual level typically happened only when a customer had a complaint. While complaints still exist, companies can now begin to understand the “silent attrition” that occurs when a customer simply defects to a competitor with no notice, as well as respond to those customers who were “delighted” with their experience – opening doors to revenue-generating up-sell and cross-sell opportunities.
When you close the sale and capture that consumer you typically know who they are. They’ve either filled out a warranty card, they happen to be a loyal shopper (if you’re a retailer), or they gave you that information in advance. You can now start to attach attributes of interest to that individual – attributes of behavior with elements that are afforded to you openly by that individual, say on a community website or when they speak out in the public space of social media.
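As an illustrative sketch only (all names and data here are hypothetical, not drawn from PowerSphyr systems), attaching attributes of interest to a known customer as described above might be modeled like this:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    """Accumulates attributes of interest for a known, identified customer."""
    customer_id: str
    attributes: dict = field(default_factory=dict)
    sources: list = field(default_factory=list)

    def attach(self, source: str, **attrs):
        """Attach attributes the individual shared openly, e.g. on a warranty
        card, a community website, or public social media."""
        self.attributes.update(attrs)
        self.sources.append(source)

profile = CustomerProfile("cust-001")
profile.attach("warranty_card", product="wireless charger", purchase_year=2019)
profile.attach("community_site", interest="fast charging")

print(profile.attributes["interest"])  # → fast charging
print(len(profile.sources))            # → 2 (two data sources recorded)
```

The point of the sketch is simply that each new touchpoint enriches the same identified profile, which is what makes individual-level “sense and respond” possible.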
ANSWERS: Do you think companies and organizations are going to find themselves at a point where they’re going to have to have greater dataset sharing in order to push AI development faster?
LOMBARDI: Really, it’s a bit of the Wild West out there right now. You’ve got a lot of well-intended companies looking for that next gold mine. Over the last 10 years or so, just staying within the practical business analytics sphere for consumer demand forecasting and new product innovation, different types of companies started to form. Some of them are very niche-focused, algorithmically driven entities that don’t own their own data. They’re partnering with companies to use their data, or they have access to public information that is consumable in some form.
They’re rapidly testing what I call “for purpose solutions,” such as the capability to automate a contact center. The Pareto Principle for contact centers is quite real – 80% of call volume typically represents 20% of the problem types, and most of these are simple questions with prescribed answer sets, which are turned into Frequently Asked Questions. These 80% of calls are being deflected more and more to an efficient “Virtual Agent” or chatbot, freeing up contact center agents to handle the more complex, nested issues and questions they are best equipped for. In addition, these contact center agents are increasingly being provided with “intelligent” support systems that “listen” to the customer and serve up candidate responses based on analysis of customer history, cross-referencing databases for fraud assessment, etc.
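A minimal sketch of the deflection logic described above (the FAQ entries and the simple keyword match are assumptions standing in for real intent classification, not any particular vendor’s product):

```python
# Hypothetical FAQ set: the high-volume, prescribed-answer questions that make
# up the bulk of call volume under the Pareto split described above.
FAQ_ANSWERS = {
    "reset password": "Visit the account page and click 'Forgot password'.",
    "store hours": "We are open 9am to 6pm, Monday through Saturday.",
    "return policy": "Returns are accepted within 30 days with a receipt.",
}

def route_call(question: str) -> tuple[str, str]:
    """Return (handler, response).

    A simple keyword match stands in for real natural-language intent
    classification: FAQ-type questions are deflected to the virtual agent,
    everything else escalates to a human agent.
    """
    q = question.lower()
    for key, answer in FAQ_ANSWERS.items():
        if key in q:
            return ("virtual_agent", answer)
    return ("human_agent", "Transferring you to a specialist.")

handler, _ = route_call("How do I reset password on my account?")
print(handler)  # → virtual_agent
handler, _ = route_call("My device sparked and damaged my desk")
print(handler)  # → human_agent
```

The design point is the split itself: the prescribed-answer majority never reaches a person, while the complex minority is routed to agents who can focus on it.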
Over my career I have witnessed businesses willing to spend heavily on R&D, Efficient Supply Chain, and Trade and Marketing support to win in the marketing funnel battle, only to fall short when it comes to providing empathic and personal customer service to customers following purchase. Imagine the quality of customer experience when human agents focus on complex issues and are provisioned for success with intelligent virtual agents of their own that “listen” to the conversation and proactively serve up best candidates for response. This effectively allows humans to use AI to create a panel of agents who become overnight product and process “experts” with accurate and personable responses that enable them to help address the most challenging of customers.
ANSWERS: How do you see AI and similar technologies impacting the human capital needs and business strategies of companies in the coming decade?
LOMBARDI: There are two points of view on that. One I’ll call the employee point of view, and the other the industry point of view. When I say industry, I don’t mean necessarily just business leaders, but the drivers of those leaders, which is Wall Street. What does Wall Street represent? In many ways, it represents our workers’ investment in their own futures, with retirement planning and investment strategies. We create this business cycle ourselves as a society and oftentimes are unaware of who’s driving the bus.
The conveniences that citizens gain through advances in technology come in many forms, and sometimes at a greater expense. That greater expense may be the displacement of workers. I would like to think that technology actually improves the quality of work for those who are flexible and willing to come along for the ride. Retraining, staying current on new technologies, remaining competitive, and taking responsibility for your own future and your value in this world of rapid technological advances not only prepares us for the new jobs that are evolving, but also makes us more aware of when technology may not be designed and suited to our best interests.
For example, I had the privilege of working with graduate students at Columbia University School of International and Public Affairs earlier this year. With the Capstone team, we examined five applications of AI in detail (autonomous vehicles, autonomous weapons, financial risk pricing, customer recommendations and healthcare diagnostics), as well as contrasting views of AI regulation across the U.S., EU, Japan and China.
What we saw from this research is that despite AI’s popularity, many people (including policymakers) fundamentally misunderstand and mischaracterize AI’s potential. There is a clear need for vendors to address the consumer perception gap and work together to shape public opinion through transparency and proactive outreach. Now, I’m not saying organizations have to wait to develop and deploy AI advances until the public is fully on board with how it works. However, failing to do so will likely result in lower adoption rates and weaker sales projections communicated to the investment communities.
There needs to be a collective industry effort to coordinate and communicate greater transparency and understanding around the personal data and dimensions collected and how they are used to provide the benefit that end users experience. Why? One of the studies reviewed for the project cited a surprising statistic for the U.S.: if I recall correctly, only about 30% of the U.S. general population was even aware that their personal information (culled from search engine queries, location data, social media, government records, website engagement, purchase behavior, music curation, etc.) is being actively mined to create “for purpose” consumer-level profiles.
With the right dimensions, consumer profiling data can provide very detailed insight into personality attributes and psychological composition, giving marketers a rather intimate look at an individual’s emotive triggers and predispositions to use in targeted communications. In the right hands, companies use this data to enrich and personalize the consumer experience. In the hands of bad actors, however, this data may be used to target and recruit “at risk” individuals into extremist groups, perpetrate fraud, change elections, etc. One result is that this forces “good actors” to address the challenge of creating transparency and establishing trust with their consumer bases.
My concern with recent developments in the commercial AI marketplace is that companies are moving fast, providing agile releases of minimum viable solutions that rely on personal consumer data without fully vetting security and weighing the “business risk” that comes with cybersecurity breaches. In 2018, that changed with the implementation of the European GDPR regulations, which hold businesses accountable for permissible use of customer data and assign power to individuals as stewards of their own data. This is a social experiment that Chief Information Officers and Chief Technology Officers plan to watch carefully.
In contrast to AI, think about the advent of scientific breakthroughs in pharma, and what it takes to bring a drug to market in the United States. My understanding from colleagues in that industry is there is a very robust regulatory review process, including testing, validation, peer reviews, etc., and that it can take up to 10 years to bring a new drug from concept to market. The rationale is to protect the public from unintended consequences.
Now contrast that to the AI scenario where fundamentally anyone, anywhere can design and launch an activated “Virtual Agent” to comb and dialog with users, collect private data for a targeted business purpose, or (in the case of a “bad actor”) to initiate malicious activity. In the U.S. we have governing organizations to target and remove “bad actors” in the drug market; however, it is unclear what is being considered as a plan to protect the American public from “bad actors” online.
I believe there is a public safety need for educating our digital society with guidelines around proper communication decorum on social media, helping people understand how their direct communication (images, chat, etc.) and observable behaviors (i.e. web navigation, keyword search) can fall into the wrong hands. This data can be misused to direct communications that reinforce or change the way people think, feel, buy or even vote.
Given the low turnout rates for the voting public in the U.S., just getting someone out to vote with an emotive message is huge. Now consider if that message is skewed or hinders a population from going to the polls because maybe they believe press reports they’re receiving that could be malicious and fake; maybe it suggests that their candidate has already secured the vote and is elected, and they are led to feel that breaking away from the routine of their day for an hour to go to the polls is just not necessary.
There is a lot for society to think about as we move forward. Until we build that level of trust with the organizations that bring products to market, and with their ability to secure our information, I think we will see a bellwether piece of legislation begin to unfold in the next year. I’m excited to see what changes that will bring.
In our new series, AI Experts, we interview thought leaders from a variety of disciplines — including technology executives, academics, robotics experts and policymakers — on what we might expect as the days race forward towards our AI tomorrow.