Los Angeles Wire


AI tools are everywhere, so why do most people still use them like it’s 2015?

May 14, 2026  Twila Rosenbaum

Artificial intelligence now sits inside almost every tool you open, from search engines and office apps to browsers, phones, and creative software. Updates keep adding assistants, copilots, and generators, each one promising to change how work gets done. On paper, adoption looks high. Millions of users already have these features available, often switched on by default, waiting inside menus most people rarely explore. Actual behavior moves more slowly. Many users still write documents line by line, search the web the same way they did years ago, and complete tasks manually, even when the software suggests another option.

The Reality of AI Adoption

The goal was never to replace creativity or talent but to augment it, and that only works when people understand where the new capability fits into what they already do. In this article, we look at why AI tools are everywhere, yet everyday software use still feels stuck in the past. The real problem isn't access to AI; it's adoption. Software vendors are not moving slowly: new AI features appear in updates almost every week, added to tools people already use for writing, coding, design, search, and communication. Access is no longer the barrier. What's missing is the moment when the user actually learns where a new feature fits into their existing workflow.

Most software still expects people to figure that out on their own, which is why tools like WalkMe's Learning Arc focus on teaching features inside the application rather than sending users to separate documentation or training portals. The shift reflects a wider realization across the industry that releasing functionality does not mean people will use it, a problem also raised in debates about AI oversight and usability, where clarity is treated as a strategy in its own right. Most learning still happens outside the tool itself: users are expected to read guides, watch tutorials, or sit through formal sessions much like traditional employee training programs, even though the real difficulty only appears once they are back inside the software, trying to complete a task under time pressure. In practice, people fall back on habits they already trust and ignore features they never had time to explore properly. Innovation keeps moving forward, but user capability moves at a different pace.

Feature Overload Is Making Modern Software Harder to Use

Modern apps are not struggling because they lack capability. They struggle because every update adds another layer on top of what was already there. AI did not replace old interfaces; it stacked on top of them, which means users now face more options, more panels, and more assistants than before. Even discussions about how AI analytics agents need guardrails, not more model size, reflect the same concern: adding intelligence does not automatically make software easier to use. Open almost any tool today and the pattern looks familiar: office software with built-in copilots and sidebars, design tools filled with generators, templates, and prompts, productivity apps with chatbots inside every menu, and platforms that still expect users to learn through guides modeled on traditional employee training. When the interface becomes crowded, people stop experimenting and return to what they already know. More power sounds good in release notes, but in practice it often means more decisions on every screen. That is why usage patterns often lag years behind the technology already available.

Consider the typical office worker: they open a word processor to draft a memo. The interface now includes a sidebar with a copilot, a menu that suggests AI-generated content, and a toolbar with dozens of options. Instead of exploring these, the user types the memo manually, just as they did ten years ago. The AI feature remains unused because the cost of learning it feels higher than the perceived benefit. This phenomenon is not limited to office software. In design tools, users often ignore generative fill features because they are tucked away in submenus. In coding environments, developers stick to manual debugging even when AI-powered suggestions could save time. The pattern is consistent across industries: availability does not equal usage.

People Don’t Resist AI; They Resist Changing How They Work

Most users are not against artificial intelligence. What they resist is changing the way they already know how to work. Once a routine feels reliable, people repeat it without thinking, even when the software offers a faster method. Habit becomes the default, which helps explain why the gap is growing between AI availability and real capability. While most employees are expected to use AI at work, only a minority feel properly trained to do so. Microsoft research shows that 66% of leaders say they wouldn't hire someone without AI skills. Many are learning on their own while job requirements drift toward the skill sets associated with emerging AI-focused roles rather than traditional ones. Learning a new workflow sounds simple until it interrupts real work. Muscle memory takes over, deadlines get closer, and there is rarely enough guidance inside the tool itself to make the new method feel safe to try. The gap between innovation and adoption is mostly human, not technical, which is why the next shift in AI will not come from better models alone.

This human factor is often underestimated by software vendors. They assume that if a feature is powerful enough, users will naturally gravitate toward it. But human cognition prefers the path of least resistance. When faced with a new AI tool, the user must first recognize its existence, then understand its purpose, then learn how to invoke it, and finally integrate it into their workflow. Each of these steps requires mental energy that users are reluctant to spend. Furthermore, the pressure of daily tasks leaves little room for experimentation. Users cannot afford to make mistakes during a live project, so they stick with what works. This risk aversion is rational, but it locks in outdated practices. The result is a productivity paradox: more advanced tools do not lead to more efficient work because the tools are not used.

The Next Wave of AI Will Focus on Teaching, Not Just Automating

The next phase of AI development is starting to move away from adding more features and toward helping users understand the ones already there. Instead of expecting people to read guides or watch tutorials like it's 2015, newer tools are beginning to guide actions directly within the interface, showing step-by-step suggestions as the task progresses. Copilots that recommend the next command, walkthroughs that appear in the middle of a workflow, and interfaces that adapt to how the user works are becoming more common across productivity, design, and development software. This shift is also why more teams are asking questions like how to choose a digital adoption platform, as learning is no longer something that happens before using software, but during it. The tools that stand out will not be the ones with the longest feature lists, but the ones people can actually understand without stopping their work to figure them out.

Several companies are already leading this change. WalkMe, for example, provides in-app guidance that highlights features exactly when they are needed. Microsoft's Copilot integrates directly into Office apps with natural language prompts that reduce the learning curve. Adobe's Sensei offers contextual suggestions within creative tools. These approaches share a common philosophy: instead of bombarding users with endless menus and manuals, they offer timely, minimal instruction that gradually builds competence. This method respects the user's time and cognitive load, making it easier to adopt new workflows. Moreover, it addresses the root cause of non-adoption: the lack of a safe, low-friction learning environment.
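The core mechanic behind this kind of in-app guidance is simple to illustrate. The sketch below is a minimal, hypothetical TypeScript model of a walkthrough engine (not WalkMe's or Microsoft's actual API): each step targets a UI element, carries a short tip, and fires only when the user's current context makes it relevant, skipping features the user has already tried.

```typescript
// Illustrative sketch of contextual in-app guidance.
// All names (Step, Walkthrough, the selectors) are hypothetical.

type Ctx = { screen: string; featuresUsed: Set<string> };

type Step = {
  target: string;               // element to highlight (hypothetical selector)
  message: string;              // short tip shown next to the element
  when: (ctx: Ctx) => boolean;  // is this step relevant right now?
};

class Walkthrough {
  constructor(private steps: Step[]) {}

  // Return the first step that applies to the current context,
  // skipping any feature the user has already engaged with.
  nextStep(ctx: Ctx): Step | null {
    for (const step of this.steps) {
      if (!ctx.featuresUsed.has(step.target) && step.when(ctx)) {
        return step;
      }
    }
    return null; // nothing to teach right now: stay out of the way
  }
}

const tour = new Walkthrough([
  {
    target: "#copilot-sidebar",
    message: "Draft this memo with the copilot sidebar.",
    when: (c) => c.screen === "editor",
  },
  {
    target: "#generative-fill",
    message: "Try generative fill on this selection.",
    when: (c) => c.screen === "canvas",
  },
]);

// A user editing a memo who has never opened the copilot gets that tip.
const hint = tour.nextStep({ screen: "editor", featuresUsed: new Set() });
console.log(hint?.message); // prints "Draft this memo with the copilot sidebar."
```

The design choice worth noting is the `when` predicate combined with the used-features set: guidance appears only at the moment a feature is relevant and disappears once the habit forms, which is the low-friction learning loop the paragraph above describes.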

The implications are profound. If the industry shifts from a feature-centric to a learning-centric model, the benefits of AI could finally reach the majority of users. Productivity gains that have been promised for years might actually materialize. Teams will spend less time wrestling with software and more time producing creative, strategic work. The key is to recognize that AI's true potential is unlocked not by adding more intelligence, but by embedding the teaching of that intelligence into the user's natural flow. As more software vendors embrace this approach, the gap between availability and adoption will narrow. In the near future, we may look back at today's underused AI features as a relic of an era when we assumed that building powerful tools was enough. The real work, as it turns out, is in helping people use them.


Source: TNW | Insights News

