Sesame Software

Realistic Objectives for AI Projects: Why AI Readiness Depends on Understanding Your Business

  • Writer: Sesame Software
  • 4 days ago
  • 5 min read



AI is not the objective. Understanding your business is.

In my experience, that understanding is what true AI readiness actually looks like.


Recently, I received an email from one of our vendors:

“I meet with customers like yourself every day, and the most common buzzword I hear is AI. Does your business have any AI initiatives for this year and beyond? I'd love to connect with you to discuss how we've incorporated AI into our platform to help customers maximize their ROI.”


The message assumes that adding AI to a product automatically creates value. I haven’t responded—not because AI isn’t useful, but because I don’t view AI as an objective in itself. My objective is to build a sustainable, profitable company that values its customers, employees, and partners.


If I encounter a problem where machine learning or artificial intelligence can genuinely help, I’m happy to explore it. But adopting AI for its own sake rarely produces meaningful outcomes.


A shiny, brand-name toolbox with a thousand tools might look impressive in the garage, but most people will only ever use a small fraction of them. Many AI platforms are sold the same way: high tool density, impressive feature lists, and very little alignment to a specific business outcome. Most organizations don’t need more tools. They need clearer objectives and fewer assumptions.


Tools don’t create value on their own. Clarity does.


When AI Ambition Outpaces Accountability


Much of the AI conversation today is shaped by the pursuit of Artificial General Intelligence (AGI), the idea that machines will think like humans or outperform them. While this makes for compelling headlines, it often introduces a quiet but real cost inside organizations.


Very few businesses want autonomous decision-making without human accountability. Executives are ultimately responsible for outcomes, risk, compliance, and customer trust. Systems that obscure how decisions are made—or remove clear ownership—create governance challenges long before they create value.


When AGI-driven narratives dominate strategy discussions, budgets and attention can drift away from more immediate, solvable problems. The risk isn’t that organizations adopt AI too slowly, but that they allocate resources toward ambition before readiness, and spectacle before substance.


AI delivers the most value when it supports human judgment, not when it attempts to replace it.


AI Readiness Begins with Data That Reflects Reality

Once a team defines what it’s trying to accomplish and why, a foundational question appears almost immediately:

Where will the data come from, and does it accurately represent how the business actually works?


In many organizations, the honest answer is no.


Geographic data offers a simple example. When state and country fields are stored as free text, dozens of variations emerge for the same value: United States, USA, U.S.A., US, U.S., United States of America.


This isn’t an AI problem. It’s a data governance problem.
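The steps above can be sketched in code. The snippet below is a minimal, illustrative example of harmonizing free-text country values against a canonical mapping; the alias table and function names are assumptions for illustration, not part of any specific product.

```python
# Minimal sketch: mapping free-text country entries to a single
# canonical code. The alias table below is illustrative only.
COUNTRY_ALIASES = {
    "united states": "US",
    "usa": "US",
    "u.s.a.": "US",
    "us": "US",
    "u.s.": "US",
    "united states of america": "US",
}

def normalize_country(raw: str) -> str:
    """Return the canonical code for a free-text country value.

    Unknown values are returned trimmed but unchanged, so they can be
    flagged for human review rather than silently guessed at.
    """
    key = raw.strip().lower()
    return COUNTRY_ALIASES.get(key, raw.strip())

records = ["USA", "U.S.", "United States of America", "Canada"]
print([normalize_country(r) for r in records])  # ['US', 'US', 'US', 'Canada']
```

Note that the unknown value ("Canada") passes through untouched: a governance process, not the script, should decide how new variants enter the canonical mapping.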


AI systems can tolerate noise, but they cannot correct systemic semantic errors, missing ground truth, or contradictory business rules. Models inherit the assumptions and structure embedded in the data they consume.


After more than 30 years of building corporate data warehouses, I’ve never worked on a project that didn’t surface surprises in the data. In one case, a client migrating from a legacy financial system to Oracle discovered their data couldn’t be corrected programmatically. Business rules had changed repeatedly over time, documentation was incomplete, and there was no reliable source of truth. The only viable option was manual review and re-entry.


AI can assist with classification, clustering, and anomaly detection. But when historical data reflects inconsistent or undocumented business logic, human judgment is still required to determine what is correct and what should change.


Data Problems Often Reveal Process Problems


In another project, a client discovered that service calls were being scheduled before customers had even signed up. This wasn’t a data quality issue caused by errors or omissions. It was a workaround created because the system couldn’t properly prioritize requests.


The data wasn’t wrong—it was faithfully representing a broken process.

This distinction matters. Sometimes data is messy because people make mistakes. Other times, it is messy because the business has adapted around system limitations. AI doesn’t resolve either problem on its own. In fact, it often exposes them.
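A simple consistency check is often enough to surface this kind of workaround. The sketch below uses hypothetical records and field names to flag service calls dated before the customer's signup; it detects the symptom, while deciding whether the process or the data is at fault remains a human judgment.

```python
from datetime import date

# Hypothetical records: a service call dated before signup may signal
# a process workaround rather than a data-entry error.
records = [
    {"customer": "A", "signup": date(2024, 3, 1), "service_call": date(2024, 3, 5)},
    {"customer": "B", "signup": date(2024, 4, 10), "service_call": date(2024, 4, 2)},
]

# Flag rows that violate the expected ordering of events.
anomalies = [r for r in records if r["service_call"] < r["signup"]]

for r in anomalies:
    print(f"Review: customer {r['customer']} was serviced before signing up")
```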


That exposure is not a failure. It’s a signal.


Why Discovery Creates Value Before AI Ever Does


In the 1990s, business process reengineering became common as organizations adopted off-the-shelf enterprise software. Companies stopped building everything from scratch and benefited from the discipline embedded in standardized systems.


Today, the discovery phase of AI and machine learning initiatives offers a similar opportunity.


You don’t need a trained model to generate value. In many cases, the greatest return comes from examining data quality, lineage, and usage before automation begins. That work surfaces inefficiencies, workarounds, and outdated practices that quietly undermine reporting, operations, and decision-making.


Discovery does not slow innovation. It reduces risk, prevents misallocated investment, and avoids scaling the wrong solution. Organizations that skip this phase often find themselves with expensive pilots, abandoned models, and growing skepticism about AI’s value.


Practical Ways Organizations Build AI Readiness


Most teams improve data readiness through a combination of approaches:

Clean data, clear objectives, and accountable processes create the foundation for meaningful outcomes.
  • Standardizing data after ingestion: Cleaning and harmonizing data once it reaches a central repository can be cost-effective and minimizes disruption to downstream systems.

  • Applying transformations during data movement: Transforming data as it is replicated between systems enforces documentation, improves consistency, and allows teams to address known issues incrementally.

  • Fixing the underlying business processes: This approach delivers the greatest long-term impact and requires the most effort. It involves documenting current practices, defining intended behavior, and reinforcing it over time. Without this step, data issues tend to resurface, regardless of tooling.


Most organizations use a blend of all three, balancing speed, cost, and durability.
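The second approach, transforming data as it moves, can be sketched as a small pipeline that applies named, documented transforms to each row in flight. The function and rule names below are assumptions for illustration; a real replication tool would carry the same idea with its own configuration.

```python
from typing import Callable, Iterable, Iterator

Transform = Callable[[dict], dict]

def replicate(rows: Iterable[dict], transforms: list[Transform]) -> Iterator[dict]:
    """Yield each source row with every registered transform applied in order."""
    for row in rows:
        for t in transforms:
            row = t(row)
        yield row

def trim_fields(row: dict) -> dict:
    # Strip stray whitespace from all string fields.
    return {k: v.strip() if isinstance(v, str) else v for k, v in row.items()}

def normalize_state(row: dict) -> dict:
    # Illustrative alias table; in practice this would be a governed lookup.
    aliases = {"calif.": "CA", "california": "CA"}
    state = row.get("state", "")
    row["state"] = aliases.get(state.lower(), state)
    return row

source = [{"name": " Acme ", "state": "Calif."}]
print(list(replicate(source, [trim_fields, normalize_state])))
# [{'name': 'Acme', 'state': 'CA'}]
```

Because each transform is a named function, the pipeline itself becomes documentation of the rules being enforced, which is part of what makes this approach incremental and auditable.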


The Takeaway


AI is not the destination. AI readiness begins with a clear understanding of the business, supported by data that accurately reflects reality and processes that are intentionally designed.


Modern AI can mask data issues, but it cannot resolve their root causes. Those problems tend to reappear later as trust gaps, compliance risks, or explainability failures.


Adopting AI before fixing data and processes doesn’t create advantage—it accelerates inefficiency at scale. This is not an argument against AI. It is an argument for earning the right to use it.


Teams that invest in flexible, well-governed data foundations are better positioned to adopt AI responsibly, allocate budgets effectively, and deliver outcomes that stand up to scrutiny. Whether or not an AI model is ever deployed, that work creates value on its own.

Evaluate readiness for composable data pipelines with a short checklist designed to highlight quick wins, compliance requirements, and integration touchpoints for an initial pilot.


TL;DR


AI initiatives succeed or fail long before models are deployed. The discovery phase—examining data quality, structure, and business processes—often delivers the greatest return. While modern AI can tolerate noise, it inherits the assumptions and flaws embedded in the data that feeds it. Clean data, clear objectives, and accountable processes create the foundation for meaningful outcomes. AI works best when organizations earn the right to use it.

Written by Rick Banister, CEO of Sesame Software. Sesame Software develops data capture and replication tools that ingest data from SaaS applications and databases into relational databases and data lakes, helping teams build reliable foundations for analytics, reporting, and future initiatives.
