The world blinked, and AI is suddenly everywhere. Organizations are using generative AI (GenAI) to provide round-the-clock customer support, produce personalized shopping recommendations, accelerate software development, create marketing content, and more.
Today, AI is just as impactful and disruptive as the computer was when it was first introduced into the business world. If a business doesn’t adopt AI, it could quickly become irrelevant.
CIOs and CISOs, then, are under tremendous pressure to launch AI initiatives. They’re expected to adopt new AI-based software solutions and give employees access to AI tools. But at the same time, these technology leaders understand that AI introduces new risks. Many are rapidly implementing policies and protections to safeguard sensitive information, prevent shadow AI, and reduce the likelihood of generating misinformation.
While technology leaders try to find the right balance between enabling AI and governing its use, they must modify their process for buying and renewing software. If software purchasing sometimes seems like a game, then AI is changing the rules. Playing by the old rules puts organizations at risk. To reduce that risk, CIOs and CISOs need to step up scrutiny of software agreements, application architectures, and the software vendors themselves.
As software vendors integrate AI into more and more of their products, CIOs and CISOs need to change how they handle software renewals. For software you’ve already purchased, you’ve probably signed a master services agreement (MSA) and agreed to certain terms. But some of those terms are hyperlinked to other documents. When software is updated, vendors might change what’s in those hyperlinked documents, and you likely won’t know unless you continuously review the MSAs and the documents they link to.
The problem is that any new AI capabilities that are integrated into the application might conflict with your AI and data governance policies. In the race to provide you with AI-enhanced software, the vendors of your existing software may be putting your policies — and your security — at risk.
When evaluating SaaS solutions, I am always careful to understand how vendors handle multi-tenancy. Specifically, I want to know what they would do with my data. Will my data be totally separate from everyone else’s?
Now that vendors are incorporating AI into their products, there might be some additional security issues relating to how vendors handle data. First, many vendors are not hosting their own AI services. They might run those services in a public cloud instead. And that cloud vendor might not be adequately protecting data.
Second, software vendors are probably using their customers’ data to train their AI model, regardless of where that model is. Now, it’s true that I want my software vendors to be experts in how AI can benefit their customers within their domains. For example, if a vendor has an AI-enhanced human resources application, I want that vendor to be the best in the world at delivering that kind of software. And the best vendor will be the one that has the most data to train their model.
However, I don’t want them to use my sensitive data for model training. Some vendors might intentionally commingle data from multiple customers to train their models. But at the very least, before I purchase software, I need to be convinced that all my data will be completely anonymized before it is used to train a vendor’s model.
New AI software vendors are popping up all the time — like a constant stream of expansion teams in a pro sports league. Beyond the very large, well-established technology companies that are introducing new AI capabilities, AI startups — focused only on AI-based software or tools — are multiplying rapidly. When OpenAI feels like a well-established player, you know the market is evolving at a ludicrous speed.
I evaluate these new vendors using criteria similar to those I apply to vendors of legacy apps. I want to know, for example, how these vendors are keeping my data protected and separated from their other customers’ data.
But I’m also very concerned about the longevity of these vendors. AI companies are being born and are dying very quickly. Before buying AI software from a startup, I want to know whether the company will even be around in a few years. And what happens to my data if the company is acquired or goes out of business?
The villain in this story is the assumption that some aspects of your IT environment will stay the same. But nothing will stay the same. AI is fundamentally changing everything, including the software applications you’ve already purchased and have been using for years. So, as you renew software and buy new AI-based tools, aspects of your buying process will have to change. These six best practices will be key:
Monitor changing MSAs
In this AI era, software renewals require greater due diligence. First, have a conversation with your core vendors that are implementing AI. Ask them directly: How are you handling my data? Is it being stored in the same environment as your other customers’ data?
When you are renewing licenses for software you’ve already purchased, capture and review the updated terms that are hyperlinked from the MSAs. Make sure newly introduced terms do not conflict with your AI and data governance policies.
With some of your existing apps, it’s possible you’ve never reviewed the agreements. Maybe you started with a very small deployment and decided it wasn’t necessary. If you scale up the deployment, though, and shift to an enterprise agreement, invest the time for that review. You could establish a policy that requires an audit of terms and conditions when a software deployment size reaches a certain threshold.
This work of directly engaging with vendors and reviewing updated MSAs could require significant resources. After all, some organizations use hundreds of SaaS applications. Still, finding a way to stay up to date on evolving agreements will be critical for maintaining security.
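One way to lighten that review burden is to automate the detection step: snapshot the hyperlinked terms documents at renewal time, then periodically re-fetch them and flag any that have changed for legal review. The sketch below is a minimal illustration of that idea, using content fingerprints; the clause text is invented for the example, and a real tool would fetch each vendor's linked terms page and store fingerprints alongside your contract records.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return a stable SHA-256 fingerprint of a terms document."""
    # Normalize whitespace so cosmetic reflows don't trigger false alarms.
    normalized = " ".join(text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def has_changed(stored_fingerprint: str, current_text: str) -> bool:
    """Compare a previously stored fingerprint against the current document."""
    return fingerprint(current_text) != stored_fingerprint

# Illustrative clauses: snapshot at renewal time, re-check later.
baseline = fingerprint("Customer data may be used to improve the service.")
updated = "Customer data may be used to train machine learning models."

print(has_changed(baseline, updated))  # a changed clause is flagged for review
```

A script like this cannot tell you whether a change matters, only that one occurred; the point is to route the small set of changed documents to human reviewers instead of re-reading every agreement on a schedule.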
Ask about data protection
Beyond understanding how data is stored and managed, ask how it is protected. One software vendor might deploy an instance of their application for each customer, and then defend each instance using a full range of app security services.
Another vendor might use one big database for all their customers. This should raise a red flag. However, if they can prove that they are sufficiently encrypting or tokenizing data, ensuring that sensitive information cannot be exposed, you might still be willing to use that software.
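To make the encryption-or-tokenization question concrete when pressing a vendor, it helps to know what acceptable tokenization looks like. The following is a minimal sketch, not any vendor's actual method: identifying fields are replaced with deterministic, non-reversible tokens (here via HMAC-SHA-256) while analytic fields stay usable. The key, field names, and record are all hypothetical; a production system would manage the key in a KMS.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical key; use a KMS in practice

def tokenize(value: str) -> str:
    """Replace a sensitive value with a deterministic, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "tickets_opened": 42}
SENSITIVE_FIELDS = {"name", "email"}

# Only identifying fields are tokenized; analytic fields stay usable for training.
safe_record = {
    k: tokenize(v) if k in SENSITIVE_FIELDS else v
    for k, v in record.items()
}
print(safe_record)
```

Because the tokens are deterministic, the vendor can still join records belonging to the same customer without ever seeing the underlying identifiers; that is the kind of property worth asking a vendor to demonstrate.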
Evaluate vendor priorities
With newer vendors that offer AI tools, assess their strategic priorities to determine whether they are a good, long-term fit for your organization. Let’s say you’ve been using software from a startup, and you’ve had some problems with that product. You then learn that the startup has just raised a new round of financing. Ask them how they plan to spend that money. If the answer is sales and not engineering, look elsewhere.
Compare notes within your community
One way to reduce the workload of reviewing all those MSAs and evaluating new vendors is to compare notes with other technology leaders. Ask your peers: What terms and conditions trends are you seeing across vendors? Are you OK with the changes?
Similarly, look for insights about the AI-only vendors you are considering. Ask other leaders: How confident are you that your data will remain protected if this company goes out of business?
Be prepared to walk away
Measuring the risks introduced by AI-based software can be very difficult. So, whether you are renewing software or evaluating a new application, ask yourself how comfortable you are with a risk you cannot precisely measure.
In some cases, you might decide that the potential risk outweighs the potential value. You could stop using software that has introduced unacceptable risks by incorporating AI capabilities. You could decide not to adopt a new AI app because you are not confident in the security of your data or the future of the vendor.
There is also some middle ground. For example, you could wait for a startup to mature before buying their software. Or you could make them prove the value of their app using a scaled-back selection of features — or a subset of your data.
Consider building instead of buying
Many SaaS vendors with AI-enhanced products want to reach a broad array of customers. Consequently, they design their products for the lowest common denominator across those potential customers.
However, as a CISO, I want software that will serve my organization deeply. For example, I want to use AI tools that can reduce the cognitive load on my employees, enabling them to be more productive and efficient within our organization. Those tools should draw on my data and then help my whole business work better. The problem is that it’s difficult to find off-the-shelf software that will meet my organization’s precise needs.
So, in some cases, it will make more sense to build than to buy. If you can produce capabilities that will benefit your business and deliver competitive value, while minimizing risk, building might be a better path. The right developer platform can help accelerate the creation of those AI apps.
We all feel the pressure to adopt AI and transform our organizations. Software vendors are eager to facilitate that transformation, and they are rapidly integrating new AI capabilities into their products. As a result, those of us responsible for buying software need to stay on our toes: It’s like the rules for software buying are changing. And today, we need to more carefully evaluate how software innovations affect security.
Not all applications will deliver sufficient value for the amount of risk we’re willing to accept. In those instances, building AI apps might be the right approach. The Cloudflare Developer Platform can help streamline the process of building and delivering AI apps. With Cloudflare Workers AI, you can develop and run AI apps at the edge, close to users. Meanwhile, AI Gateway provides visibility into and control over how people are using AI apps.
Even if you are just hoping to maintain the status quo with some enterprise software, recognize that AI capabilities will likely start appearing even in those legacy apps — and that will force you to change your buying process. Preparing for this great AI disruption of software buying will be essential for mitigating risk.
This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.
Learn more about how AI is changing the way business works and how to harness it smartly and safely in the Ensuring safe AI practices: A CISO’s guide on how to create a scalable AI strategy ebook.
Get the ebook!
Mike Hamilton – @mike-hamilton-us
CIO, Cloudflare
After reading this article you will be able to understand:
How AI capabilities — and new contractual terms — are being added to existing software agreements under the radar
6 best practices to consider as you buy or renew software
How to ensure compliance with AI and data governance policies