Not to minimize what Google had on display, but much like Salesforce at its New York City traveling road show last year, the company gave little more than a passing nod to its core business, except in the context of generative AI, of course.
Google announced a slew of AI enhancements designed to help customers take advantage of the Gemini large language model (LLM) and improve productivity across the platform. It’s a worthy goal, of course, and throughout the main keynote on Day 1 and the Developer Keynote the following day, Google peppered the announcements with a healthy number of demos to illustrate the power of these solutions.
But many of the demos seemed a little too simplistic, even accounting for the fact that they had to be squeezed into a time-limited keynote. They also relied mostly on examples inside the Google ecosystem, when almost every company has much of its data in repositories outside of Google.
Some of the examples actually felt like they could have been done without AI. During an e-commerce demo, for example, the presenter called the vendor to complete an online transaction. It was designed to show off the communications capabilities of a sales bot, but in reality, the step could have been easily completed by the buyer on the website.
That’s not to say that generative AI doesn’t have some powerful use cases, whether it’s writing code, analyzing a corpus of content so it can be queried, or asking questions of log data to understand why a website went down. What’s more, the task- and role-based agents the company introduced to help individual developers, creative folks, employees and others have the potential to take advantage of generative AI in tangible ways.
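To make the log-analysis use case a bit more concrete, here is a minimal sketch of what asking questions of log data can look like with Google’s google-generativeai Python SDK. The model name, placeholder API key and the sample log lines are illustrative assumptions, not anything shown at the event.

```python
# Minimal sketch: asking an LLM a question about log data.
# Assumptions: the google-generativeai SDK is installed, an API key is
# available, and "gemini-1.5-pro" is used as an example model name.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: key supplied by the reader
model = genai.GenerativeModel("gemini-1.5-pro")

# Hypothetical log excerpt standing in for real web server logs.
log_excerpt = """\
2024-04-09T14:02:11Z ERROR upstream timeout connecting to payments-svc
2024-04-09T14:02:12Z WARN  retry 1/3 failed for payments-svc
2024-04-09T14:02:15Z ERROR 502 returned to 1,204 requests in 60s window
"""

# Ask a plain-language question about the logs and print the model's answer.
response = model.generate_content(
    "These are web server logs from an outage. In two sentences, "
    "what is the most likely cause?\n\n" + log_excerpt
)
print(response.text)
```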
But when it comes to building AI tools based on Google’s models, as opposed to consuming the ones Google and other vendors are building for their customers, I couldn’t help feeling that Google was glossing over many of the obstacles that could stand in the way of a successful generative AI implementation. While the company tried to make it sound easy, in reality it’s a huge challenge to implement any advanced technology inside a large organization.
Big change ain’t easy
Much like other technological leaps over the last 15 years, whether mobile, cloud, containerization or marketing automation, generative AI has arrived with lots of promises of potential gains. Yet each of these advancements introduced its own level of complexity, and large companies move more cautiously than we imagine. AI feels like a much bigger lift than Google, or frankly any of the large vendors, is letting on.
What we’ve learned from these previous technology shifts is that they come with a lot of hype and often lead to a ton of disillusionment. Years after these technologies were introduced, we still see large companies that perhaps should be taking advantage of them only dabbling, or sitting out altogether.
There are lots of reasons companies fail to take advantage of technological innovation: organizational inertia; a brittle technology stack that makes it hard to adopt newer solutions; or corporate naysayers, whether in legal, HR, IT or other groups, who, for a variety of reasons including internal politics, shut down even the most well-intentioned initiatives and simply say no to substantive change.
Vineet Jain, CEO at Egnyte, a company that concentrates on storage, governance and security, sees two types of companies: those that have already made a significant shift to the cloud and will have an easier time adopting generative AI, and those that have been slow movers and will likely struggle.