Suppose you want to create the best AI game engine. Or the best new AI-powered marketing recommendation agent. Or an AI personal tutor.
What kinds of things are likely to pop up as you move toward implementation? No matter what you’re doing, it’s likely that your end result will be powerful, innovative, and impressive to your audience. But it doesn’t just happen by magic, even though to so many of us, LLMs seem to be doing magic every time they run.
Behind the scenes, it can feel like one step forward and two steps back as you shepherd new ideas toward reality.
AI does a lot of the work, but it’s not yet fully autonomous in terms of self-design, and that means humans still have to do some of the heavy lifting. Knowing the common challenges can help human disruptors succeed in meeting their goals with AI “summoning.” With that in mind, here’s some of what can trip up AI projects.
Big Ideas Cost Money
In any cost-averse environment, there’s a prevailing idea that decision makers should play it safe and stick to the shallow end of the pool. The problem is that a lot of the best ideas are big ones, and can look like longshots, or “moonshots,” at the outset. So there’s a built-in pull away from exactly the plans that could net the biggest gains in the end.
“What is often overlooked in all the hype around AI is that the models are trained on what has already been created, not what could be created,” writes Tom Green on Medium.
“So, there’s an aversion to risk. Because the fields of Web and UX design have developed best practices that suit the limitations of a screen-based medium, we tend to forget that the promise of AI is that we can create ideas nobody else has seen.”
Among people who recognize this challenge, the common thinking is that you have to push through some of that risk aversion to get to the opportunity zone.
Liability: Bias, Data Privacy and More
In any AI project, you also have those boogeymen: bias, which can skew results, and privacy violations, which can give an initiative a black eye. Engineers are reading the provisions of the European GDPR, scrutinizing SWOT profiles, and trying to figure out how to thread the needle: how to get the right data into the engine to support results without crossing the line into privacy risk.
“AI-powered products rely heavily on user data,” writes Sandesh Subedi at ProCreator. “When companies fail to clearly explain what’s collected and how it’s being used – it can silently erode trust and user experience.”
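One practical starting point is to scrub obvious identifiers out of user text before it ever reaches a model or a log. Here’s a minimal TypeScript sketch of that idea; the PII_PATTERNS table and redactText helper are hypothetical names invented for the example, and real GDPR compliance involves far more than regex filtering.

```typescript
// Minimal sketch: scrub obvious PII from free-form text before it is sent
// to a model or written to a log. The pattern table and helper name are
// illustrative assumptions, not a complete GDPR compliance solution.
const PII_PATTERNS: Record<string, RegExp> = {
  email: /[\w.+-]+@[\w-]+\.[\w.-]+/g,
  phone: /\+?\d[\d\s().-]{7,}\d/g,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/g,
};

function redactText(input: string): string {
  let output = input;
  for (const [label, pattern] of Object.entries(PII_PATTERNS)) {
    output = output.replace(pattern, `[REDACTED_${label.toUpperCase()}]`);
  }
  return output;
}

// The model prompt never sees the raw contact details.
console.log(redactText("Reach Jane at jane.doe@example.com or +1 (555) 010-2030."));
// -> "Reach Jane at [REDACTED_EMAIL] or [REDACTED_PHONE]."
```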
There’s also a high bar for making sure AI tools don’t discriminate against particular classes or groups of people, and since the human behavior and data these systems learn from carry exactly those biases, it’s tricky to root them out completely when running something on a digital AI engine.
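Teams often start by measuring before they mitigate. The sketch below, with hypothetical selectionRates and fourFifthsCheck helpers, shows one common screening heuristic: comparing per-group selection rates against the “four-fifths rule” used in US employment-discrimination guidance. It’s a screening signal, not a verdict on whether a system is fair.

```typescript
// Minimal sketch: compare positive-outcome rates across groups and flag any
// group whose rate falls below 80% of the best group's rate (the
// "four-fifths rule" heuristic). All names here are illustrative.
interface Decision {
  group: string;     // demographic attribute under audit, e.g. "A" or "B"
  approved: boolean; // the model's positive/negative outcome
}

function selectionRates(decisions: Decision[]): Map<string, number> {
  const totals = new Map<string, { approved: number; seen: number }>();
  for (const d of decisions) {
    const t = totals.get(d.group) ?? { approved: 0, seen: 0 };
    t.seen += 1;
    if (d.approved) t.approved += 1;
    totals.set(d.group, t);
  }
  const rates = new Map<string, number>();
  for (const [group, t] of totals) rates.set(group, t.approved / t.seen);
  return rates;
}

function fourFifthsCheck(decisions: Decision[]): string[] {
  const rates = selectionRates(decisions);
  const best = Math.max(...rates.values());
  const flagged: string[] = [];
  for (const [group, rate] of rates) {
    if (rate < 0.8 * best) flagged.push(group); // below 80% of the top rate
  }
  return flagged;
}

// Example audit: group B's ~33% rate is below 80% of group A's 60% rate.
const audit: Decision[] = [
  { group: "A", approved: true }, { group: "A", approved: true },
  { group: "A", approved: true }, { group: "A", approved: false },
  { group: "A", approved: false },
  { group: "B", approved: true }, { group: "B", approved: false },
  { group: "B", approved: false },
];
console.log(fourFifthsCheck(audit)); // -> ["B"]
```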
Interface Design and Changes
For some planners, the million-dollar question is this: how does the interface work?
Does the user access the technology through a browser? Through an app? How does key data funnel into the system? What cloud provisions are in place?
And then there’s the question of controls. Traditional programmers and engineers had to figure out where to put them, how to render them on a screen, and so on. AI designers now also have to decide what the user will control, what the AI will do on its own, and how to explain any black-box aspects of the project.
You can get tips from sources like this post at Vercel, and figure out some of the minutiae of how to lay out tools, but the essential challenge remains.
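One way to make that user-control question concrete is to classify every AI-proposed action by risk and route the risky ones through explicit human approval. The sketch below is illustrative, not a recipe from the Vercel post; the action names and the risk policy are invented for the example.

```typescript
// Minimal sketch of a human-in-the-loop boundary: the AI proposes actions,
// but anything marked high-risk waits for explicit user confirmation.
// The action names and risk policy are illustrative assumptions.
type Risk = "low" | "high";

interface ProposedAction {
  name: string;          // e.g. "draftReply" (low risk) or "sendEmail" (high)
  risk: Risk;
  execute: () => void;
}

async function runWithApproval(
  action: ProposedAction,
  confirm: (name: string) => Promise<boolean>,
): Promise<void> {
  if (action.risk === "high" && !(await confirm(action.name))) {
    console.log(`Skipped ${action.name}: user declined.`);
    return; // the AI never acts unilaterally on high-risk steps
  }
  action.execute();
}

// Usage: a draft runs immediately; sending is gated on a confirmation,
// which a real UI would wire to a dialog rather than auto-approving.
runWithApproval(
  { name: "sendEmail", risk: "high", execute: () => console.log("sent") },
  async (name) => {
    console.log(`Approve ${name}?`);
    return true; // stand-in for the user's actual answer
  },
);
```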
Getting Buy-In
Then there’s the human side of your organization: until we get to those all-AI factories and offices that Sam Altman and others have hinted at, you’re going to have to deal with getting consensus from people. And many of those people do not fully trust AI. Some don’t trust it at all.
And without the right agreements in place, things can go sideways.
“Projects lacking stakeholder trust or clarity of purpose often succumb to scope creep, budget overruns, or quiet stagnation,” writes Abhishek Sharma at Gururo. “Conversely, gaining support for AI initiatives ensures that these endeavors frequently deliver above-average ROI and become catalysts for further innovation.”
The Curse of Competition
Another unfortunate reality is that there are often opposing sets of stakeholders trying to one-up each other in the AI design and implementation process.
We had some discussion of this in a panel at the Imagination in Action event at Stanford this month, where Bing Gordon, Mark Pincus and Nitin Khanna went over some of these considerations.
“Right now, (in) Apple’s app ecosystem, the game industry, everyone’s pitted against each other,” Pincus said. “There is no concept of shared learning or anything. It’s all siloed, and kind of intentionally so.”
That’s another hill to climb on the way to the kind of innovation that can position a person or an enterprise for success.
More Quotes from Imagination in Action at Stanford
“The best interface is no interface. … It is not code that AI is dealing with, it is these interfaces. It’s our own domain specific language. We can instruct AI today to make changes in the gameplay and the game behavior. It is a lot more consistent, repeatable than someone trying to buy code again today.” – Nitin Khanna
“We’ve got to get to the point where the incremental cost of that shot-on-goal is cheap. … when you get to games, especially when you get to 3D games, it’s just so heavy and expensive and slow. And when you start doing it in … these engines, it’s just so slow that it crushes innovation, because the cost is too high to do stupid ideas, and usually the stupid ones are the ones that work.” – Mark Pincus
Running the Race
Those who can get past these kinds of headwinds are likely to see some real value. LLMs are powerful tools, and they can transform our lives in so many different ways that it’s almost always worth experimenting. To the extent that we can incentivize this kind of exploration, we’re better off. Stay tuned.