The proliferation of AI technology – even before companies are certain the best way to use it – calls on CIOs to lay the foundation for a strong response when the C-suite demands action, veteran IT leader Bruce Lee writes.
I did not use artificial intelligence to produce this essay. But the fact that I could have raises a crucial point facing CIOs today: AI is seeping into the tools we use every day, but how we should use it to transform our own companies' core value is a work in progress.
In IT organizations across corporate America (outside of the tech companies themselves), we are comfortable trying out AI-enabled advances in the products we use – say, in the systems we deploy to track help desk tickets, or to augment the work of call center representatives – all while not quite understanding what to do with AI in our own companies' business.
In this article I am sharing four steps CIOs can consider to meet this moment.
Drawing from our collective experience of meeting the cyber challenges of recent years is, I believe, a useful way to frame what might be expected of CIOs in the AI era. Here's why: the costly risks of cyberattacks snuck up on many enterprises. It was not until big, consequential incidents occurred – followed by regulations requiring disclosure and remediation – that companies and their boards of directors took cybersecurity seriously. Without knowing what new risks would emerge, we developed governance, expertise, and ecosystems to mitigate them.
The same kind of approach will help CIOs prepare for AI. It comprises four steps.
Step one: organize your governance like you did for cyber.
As with cyber, boards need to ensure they are holding executives accountable to understand the impacts of AI on their business, operations and customers. Forming an AI committee on the board can help the company focus on these issues while demonstrating the board is executing its responsibility to guide the company’s strategy.
Government oversight is another reason for board attention. We are likely to see regulators issue new rules and frameworks to protect shareholders from losing market value through management inaction on AI risks, much as they did with the impact of cyber incidents. Data, the fuel of AI, needs additional protections. And data sovereignty will feature as it does in cyber. Legal experts have to understand risk, protections, and emerging case law. So, like cyber, AI is not just a tech-and-risk issue but a whole-company governance issue. This framing is even leading some companies to think that extending their cyber policy to cover their AI policy makes sense. Thinking of it this way can engage the right stakeholders at the right level within your company.
Step two: get access to AI adoption expertise.
AI is not all about risk and control. The technology also represents an opportunity for change that can be useful in any part of the business value chain or corporate systems stack that is amenable to prediction as a capability – the essence of generative AI. (Think of an AI summary that presents a call agent with a caller's interaction history, followed by suggested next steps to resolve the caller's issue.)
Most non-tech companies are not in the business of assembling AI hardware, ecosystems or models. But they do have to be in the business of selecting and training AI models and the associated activities. These include reinforcement learning (training machine learning software to make optimal decisions); prompt engineering (crafting the instructions that shape a generative model's responses); data protection (don't give away your data, and don't give the AI what it should not see, even internally); and AI safety (making sure your chatbot can't tell a customer how to make a bomb). And with all these new activities, we still need project and change management to realize the benefits.
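To make two of those activities concrete, here is a minimal sketch of prompt engineering paired with a basic AI-safety guardrail. Everything here is illustrative: `call_model` is a hypothetical stand-in for a real generative-AI API, and the blocked-topic list is a toy placeholder for a real safety layer.

```python
# Illustrative sketch only: `call_model` is a hypothetical stand-in for a
# real generative-AI API call, and BLOCKED_TOPICS is a toy safety list.

BLOCKED_TOPICS = ["make a bomb", "synthesize a weapon"]  # placeholder list

def build_prompt(customer_question: str, interaction_history: str) -> str:
    """Prompt engineering: structure the instructions the model receives."""
    return (
        "You are a customer-support assistant. Answer only questions "
        "about our products, using the history below.\n"
        f"History:\n{interaction_history}\n"
        f"Question: {customer_question}"
    )

def is_safe(text: str) -> bool:
    """AI safety: screen out blocked topics before they reach the model."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def answer(customer_question: str, interaction_history: str, call_model) -> str:
    """Refuse unsafe requests; otherwise send an engineered prompt to the model."""
    if not is_safe(customer_question):
        return "I'm sorry, I can't help with that request."
    return call_model(build_prompt(customer_question, interaction_history))
```

In a real deployment the guardrail would itself be a trained safety model rather than a keyword list, but the shape of the pipeline – screen, construct the prompt, then call the model – is the same.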
All of these needs will require new skills.
Whether you go as far as naming a Chief AI Officer is a company decision, but it is time to start accumulating skills in AI adoption by hiring, training, or partnering. People will look to you, the CIO, to meet this need, as they see AI enablement as a tech responsibility.
Step three: Choose your AI ecosystem.
It's clear that the major AI companies and big tech are in an arms race. So far, that race has been public and shared with researchers, as each new model – along with advances in techniques like chain-of-thought reasoning (in which large language models break problems down into smaller steps), model cascades (an approach that uses multiple machine learning models in sequence, with one feeding the next), and caching (high-speed storage for data subsets to speed access on future requests) – moves what's possible forward.
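Of those techniques, a model cascade is the easiest to sketch: a cheap, fast model handles the easy requests and escalates to a more capable (and costlier) model only when its confidence is low. The model functions and thresholds below are hypothetical placeholders, not real APIs.

```python
# Illustrative model cascade. Both "models" are hypothetical stand-ins
# that return (answer, confidence); a real system would call actual APIs.

def cheap_model(question: str) -> tuple[str, float]:
    """Hypothetical small model: fast, but confident only on short questions."""
    confidence = 0.9 if len(question.split()) <= 5 else 0.3
    return f"cheap answer to: {question}", confidence

def capable_model(question: str) -> tuple[str, float]:
    """Hypothetical large model: slower and costlier, but more confident."""
    return f"capable answer to: {question}", 0.95

def cascade(question: str, threshold: float = 0.8) -> str:
    """Try the cheap model first; escalate if its confidence is too low."""
    answer, confidence = cheap_model(question)
    if confidence >= threshold:
        return answer  # easy case: the cheap model suffices
    answer, _ = capable_model(question)  # escalate the hard case
    return answer
```

The economics mentioned above live in that `threshold` parameter: raise it and quality goes up but so does the bill for the expensive model.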
As we enter 2025, we are seeing the economics of training and model execution begin to influence what stays in the public domain and what is kept private. Private models distilling capabilities into public models mark an important decision point for CIOs: choosing your AI ecosystem.
This is your classic CIO dilemma: which tech of many will you select? It used to be hardware, operating systems, programming languages and databases. More recently it was cloud partners and security services. Now you have to choose an AI ecosystem and its many associated tools.
Many companies may just use AI through the vendor products they already rely on (think Microsoft Copilot), but those that adopt AI for themselves are most likely to see disruption in the application space – for example, using agentic workflows to augment existing applications with new features such as integrated transcription and summarization.
If you find value with AI in the application space, then choosing an ecosystem that is closest to your existing application development stack is most likely the best starting point, be it Microsoft, Google, or Amazon Web Services.
Step four: Determine how best to use AI.
Once we have used a coding copilot, benefited from meeting transcription and summarization, filtered logs for incident root causes, and taken advantage of all the other things vendors have done to boost our productivity with AI, working out exactly what to do with AI in our own company's proprietary application stack still puts many of us in uncharted territory.
Do you leave all the current straight-through IT processing alone and focus only on work done by people? Do you break jobs down into tasks, then work out which tasks could benefit from AI? Does that benefit derive from automation and replacement of the person by AI? Or do you emphasize augmentation by AI, doing all the things you previously wished to do but lacked the resources to pursue?
Let’s say you are an HR leader. One of your tasks is to review comments submitted by staff in the annual engagement survey. Consider two options before you:
First option: Automate the task. You could use AI to summarize the comments, then use sentiment analysis to find the managers who are not driving good engagement, and then use AI tools to create manager improvement plans.
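The automation option can be sketched in a few lines. This toy version scores each manager's survey comments and flags those whose average sentiment falls below a cutoff; the word lists are illustrative stand-ins for the AI sentiment model a real deployment would use.

```python
# Toy sketch of the "automate the task" option. The word lists are
# illustrative; a real system would use a trained AI sentiment model.

POSITIVE = {"supportive", "great", "clear", "helpful"}
NEGATIVE = {"ignored", "unclear", "overworked", "micromanaged"}

def sentiment(comment: str) -> int:
    """Crude sentiment score: positive words minus negative words."""
    words = set(comment.lower().replace(".", "").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def flag_managers(comments_by_manager: dict[str, list[str]],
                  cutoff: float = 0.0) -> list[str]:
    """Return managers whose average comment sentiment falls below the cutoff."""
    flagged = []
    for manager, comments in comments_by_manager.items():
        average = sum(sentiment(c) for c in comments) / len(comments)
        if average < cutoff:
            flagged.append(manager)
    return flagged
```

The point of the sketch is the shape of the workflow, not the scoring: summarize, score, flag, and then hand the flagged list to whatever improvement-plan tooling you choose.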
Second option: Eliminate the traditional engagement survey altogether. Have AI look at team emails, meeting transcripts and other markers of employee input and output. Then derive employee engagement levels from direct observation of the team’s working practices.
If you choose the second option, you are also turning engagement measurement into an ongoing, real-time assessment of your workforce rather than a periodic survey. That process then gets linked to your hiring practices and training. Now you have augmented the HR role into real talent management, using what AI can do for far greater benefit to the company than if you had just automated one task in the engagement survey process.
This step is the real manager’s dilemma and goes way beyond just the CIO’s brief. It amounts to an essential question: is AI for substitution or augmentation?
What’s right for your business? What’s better ultimately for society? And crucially, what safeguards might you have to put in place to be sure you get the result you hope for?
The decision becomes much easier if you have taken the steps above. As with the early days of cyber, you will have organized the stakeholders to create governance, acquired the talent, selected tools and experimented with what's possible. That puts you at the heart of your company's AI journey, alongside the rest of us trying to work out what the AI future will look like.
If you have yet to start, hopefully these thoughts can help you frame what you could do to join us on the journey. One thing we know for sure is that it’s going to get bumpy, so buckle up and take it one step at a time.

Written by Bruce Lee
Bruce Lee is currently an advisor to zScaler, a non-executive director for UK-listed Active Ops, a mentor to technology leadership students at Columbia and Northeastern Universities, and member of the International Advisory Board of the School of Business and Law at The Open University, the world’s largest distance-learning higher education institution. Over a 40-year career in IT, he has held a variety of leadership positions including CIO for Fannie Mae, CIO for the New York Stock Exchange Euronext Group, COO for HSBC Corporate and Investment Banking Americas, CIO of BNP Paribas Americas and, most recently, CTO for Centene, the largest government sector health care payer in the U.S. Bruce’s broad range of IT experience has included problem solving related to business growth, risk mitigation, regulatory compliance, and bringing industry-wide innovation to bear on real-world opportunities.