The Individual Contributor Vs. The Enterprise: Why the IC is Winning (for now)

By Paul Shearer, Solution Architecture, VP

Introduction

Artificial Intelligence (AI) in the years 2023 and 2024 (so far) has all been about the individual. The distinction between an Individual Contributor (IC) and the Enterprise in the context of AI utilization illuminates the varying scales and objectives of AI integration across different facets of the workplace. An IC, typically a knowledge worker or creative, leverages AI tools directly to enhance personal productivity and creativity. These tools serve as augmentative aids, amplifying the individual's capacity to generate code, review contracts, or craft engaging content. The focus is on the immediate, tangible improvements in the individual's output, facilitated by a one-on-one interaction with AI technologies.

In contrast, when we refer to the Enterprise, the scope expands to embedding AI within broader systems and processes, aiming for organizational efficiency and automation. The Enterprise's approach involves integrating AI into customer service chatbots, employee self-service portals, and other systemic functions that require not just individual productivity, but also a seamless, accurate, and compliant integration into the company's operational fabric. This distinction underscores the difference in complexity, scale, and the strategic objectives behind AI deployment, from enhancing individual capabilities to transforming enterprise-wide operations.

The Empowered Individual Contributor

The realm of the individual contributor, encompassing knowledge workers and creatives, has been revolutionized by Large Language Models (LLMs). Through use cases ranging from code generation and contract review to content creation and brainstorming, AI has enabled a productivity surge, with some practitioners reporting two to three times their previous output. Such gains highlight a crucial factor for success: human supervision, with AI serving as an augmentative tool rather than an autonomous agent.

Here are a few use case examples:

  • Code Generation: Using AI to write, review, or suggest improvements to code, significantly speeding up development processes.
  • Test Case Generation: Automatically generating test cases for software development, ensuring thorough coverage and identifying potential issues early in the development cycle.
  • Contract Review: Streamlining the process of reviewing legal documents by highlighting key terms and potential issues or suggesting amendments, thereby reducing the workload on legal teams.
  • Content Creation: Aiding in the creation of written content, graphic design, or multimedia content, allowing for quicker turnaround times and ideation processes.
  • Drafting Emails "In Your Voice": Personalizing email communication by adapting to the individual’s writing style, ensuring consistency and saving time on correspondence.
  • Brainstorming and Ideation: Offering innovative ideas, suggestions, and creative directions based on a set of input criteria, enhancing the creative process.
  • Language Translation and Localization: Assisting in translating content into multiple languages and localizing it to fit cultural nuances, broadening the audience reach.
  • Research and Information Gathering: Streamlining the process of gathering information from various sources, summarizing research findings, and keeping up to date with industry trends.
  • Educational Tutoring and Training: Providing personalized learning experiences, tutorials, and training sessions based on the individual's learning pace and style.

The Enterprise Equation

On the flip side, enterprises embarking on AI integration face a different set of challenges. The ambition to leverage AI for customer service chatbots and employee self-service portals runs up against the reality of ensuring accuracy, compliance, and security. The Air Canada incident, in which an AI chatbot fabricated a refund policy that a tribunal later required the airline to honor, exemplifies some of the risks organizations face. This narrative isn't just cautionary; it underscores why, for now, we still need a 'human in the fire control loop,' or at least a method to vet AI-generated content rigorously.

Rethinking Mitigation: A Dual-Strategy Approach

The incident serves as a catalyst for re-evaluating risk mitigation in AI deployments. Traditional strategies like data sanitization and continuous learning, while valuable, fall short in addressing the nuanced demands of enterprise environments. Instead, a more refined approach emerges, focusing on Retrieval-Augmented Generation (RAG) and the deployment of a separate AI, a "Policy Bot."

Harnessing RAG for Factual Grounding

Retrieval-Augmented Generation (RAG) merges the expansive knowledge capacity of language models with the precision of targeted information retrieval to produce responses that are both accurate and contextually rich. To illustrate, consider a more technical and industry-specific scenario: a pharmaceutical researcher inquiring about the latest methodologies in CRISPR gene editing. Given the rapid pace of scientific advancements, an LLM alone might not possess the most current research or specific details, much like a student entering an advanced chemistry exam with general science knowledge but lacking the detailed material covered in the exam.

In this scenario, RAG acts like a student who has brought a set of cheat notes to the exam, but with a twist. Before answering, the system consults a vast, up-to-date collection of scientific publications and research papers stored in a vector database (more about this in a future article) to find the appropriate "cheat notes": the most relevant and current information about CRISPR methodologies. This retrieved data then "augments" the foundational model, allowing it to construct a detailed, informed response that not only covers the foundational aspects of CRISPR technology but also incorporates the latest breakthroughs and techniques, which might not be widely known or could be too recent to have been included in the model's original training data.

This method allows RAG to provide specialized, up-to-the-minute information tailored to the query, ensuring that the researcher receives an answer reflecting the cutting edge of genetic editing research. The "cheat notes" in this case are selected for their direct relevance to the query, enabling the language model to produce responses that a broad, generalized knowledge base could not. Even highly specialized, industry-specific inquiries are thus answered with the depth and specificity required, making RAG an invaluable tool for professionals seeking the latest information in their fields.
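The retrieve-then-augment flow described above can be sketched in a few lines of Python. This is a minimal illustration, not a production system: the toy keyword-overlap scorer stands in for the vector-database similarity search mentioned earlier, and the sample corpus and prompt template are invented for the example.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# A real system would embed documents into a vector database and rank
# them by similarity search; here a toy keyword-overlap score stands in.

def score(query: str, document: str) -> int:
    """Count how many query words also appear in the document."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k most relevant documents (the 'cheat notes')."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

def build_augmented_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the model answers from current facts."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# A tiny invented corpus for demonstration.
corpus = [
    "Prime editing is a newer CRISPR methodology with fewer off-target edits.",
    "Base editing converts single DNA letters without double-strand breaks.",
    "Air Canada operates flights across North America.",
]
prompt = build_augmented_prompt("latest CRISPR editing methodologies", corpus)
```

The resulting prompt contains only the two gene-editing documents, so the language model is grounded in the retrieved "cheat notes" rather than left to improvise from its training data.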

Returning to our Air Canada example: the airline was already using RAG, and RAG by itself proved insufficient in that particular instance. This is where an additional mitigation strategy could have helped.

The Policy Bot: Ensuring Compliance and Accuracy

The concept of a Policy Bot introduces an additional level of scrutiny in the utilization of language models within organizations. Envisioned as a separate language model specifically trained on the intricate details of an organization's policies, guidelines, and regulatory requirements, the Policy Bot acts as a gatekeeper, ensuring that responses generated by other AI systems align with these established standards. When a primary language model generates a response to a query, the Policy Bot reviews this output before it is returned to the user, comparing it against a database of specific policy statements, regulatory texts, and compliance guidelines. If the initial response deviates from these policies or potentially breaches regulatory standards, the Policy Bot flags the issue and redirects the chat session to a human agent.

The key element of a Policy Bot is using a different model, preferably from a completely separate company. Both the original chatbot and the Policy Bot would use the same RAG system. The idea is that when one model hallucinates an answer, it is improbable the second will hallucinate in exactly the same way.
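That gatekeeping flow can be sketched as follows. The two "models" here are stub functions, and the approval logic is deliberately naive; a real deployment would call two independent LLM vendors' APIs, both grounded by the same RAG context, and the Policy Bot's check would itself be a model judgment rather than a string match.

```python
# Sketch of the Policy Bot pipeline: a primary chatbot drafts an
# answer, and a second, independently sourced model vets it against
# the organization's policy database before the user sees it.

HUMAN_ESCALATION = "Transferring you to a human agent."

def primary_model(question: str, context: str) -> str:
    """Stub for the customer-facing chatbot (would be vendor A's LLM)."""
    return f"Based on our policy: {context}"

def policy_model(draft_answer: str, policy_db: list[str]) -> bool:
    """Stub for the Policy Bot (would be vendor B's LLM): approve only
    answers grounded in a known policy statement."""
    return any(policy in draft_answer for policy in policy_db)

def answer_with_policy_check(question: str, context: str,
                             policy_db: list[str]) -> str:
    draft = primary_model(question, context)
    if policy_model(draft, policy_db):
        return draft              # both models agree: safe to send
    return HUMAN_ESCALATION       # disagreement: route to a human

policies = ["Refund requests must be filed within 30 days."]
ok = answer_with_policy_check(
    "Can I get a refund?",
    "Refund requests must be filed within 30 days.",  # grounded context
    policies,
)
bad = answer_with_policy_check(
    "Can I get a refund?",
    "Refunds are available any time!",  # a hallucinated policy
    policies,
)
```

The grounded answer passes through unchanged, while the hallucinated one is caught and escalated, which is exactly the behavior that would have helped in the Air Canada case.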

The key technical challenge with this approach is latency and its effect on the user experience. Both models need to respond quickly enough that the pacing of the conversation feels 'human-like.' If the models take too long to respond, the experience degrades for the human on the other side of the conversation.

AI from 2023 into 2024 has been a testament to the empowerment of the Individual Contributor: AI has emerged as a personal muse and workhorse, reshaping the landscape of personal achievement and innovation. Meanwhile, in the Enterprise realm, we are seeing the initial shape, albeit still fuzzy and distant, of how AI will become the linchpin of systemic transformation, driving efficiency and redefining the boundaries of corporate capability. This bifurcation in AI’s application underscores a broader narrative of technological evolution: AI is not a one-size-fits-all solution but a versatile toolset reshaping the professional world at every level. As we look to the future, AI will continue to diverge along these two paths, further entrenching its role as an indispensable ally in the quest for human progress and organizational excellence.

Want to see your question answered in the series, or just want to subscribe for alerts on future issues? Simply fill out the form below!