There’s an old joke that used to go over very well in tech circles - “Machine learning is written in Python. AI is written in PowerPoint.” What we called AI was really an umbrella for a set of specific technologies: machine learning, deep learning, natural language processing, robotic process automation and other next-gen tools. Accessible combinations of those underlying technologies, packaged in easily digestible, open-source applications, are quickly making that joke obsolete. AI isn’t just PowerPoint anymore.
The new generation of AI is “generative,” meaning it creates new content from existing data sources and user input. Recent headlines featuring OpenAI’s ChatGPT and DALL·E applications, which generate text and art respectively, make it clear that AI is no longer just a buzzword, but a real and revolutionary technology that has the potential to change the kinds of work a computer can do. DataRobot is accelerating predictive analytics, and Google recently invested over $300M in AI startup Anthropic and unveiled an AI music generator of its own. The major tech giants are racing toward commercial applications of this new technology, and supply chain has already been called out as an area ripe for automation.
According to our friends at McKinsey, “Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos.” This is the big leap - instead of regurgitating known content and answers, these algorithms create entirely new outputs based on patterns in deep learning data. They are frequently contextually aware - that is, they have built-in feedback loops that improve the algorithm’s results - you can “tweak” your results during an interaction until you get the output that matches your request. The end result has been everything from AI-generated blog posts and fascinating artwork to ingenious code and oddly creative, if occasionally disturbing, content.
Generative AI is wonderfully conversational and intuitive to use. With minimal instruction, everyday users can interact with a chatbot, or generate an image of an elderly vampire riding a motorcycle in an impressionist style. You can play with ChatGPT yourself here (ask it to tell you a calming bedtime story suitable for small children, or get a 30-minute recipe using pork tenderloin).
You can create some fun images here.
My 30-second “Elderly Vampire Riding A Motorcycle, Impressionist Style”:
These technologies are built on massive datasets - ChatGPT, for example, was trained on 570GB of text, over 300 billion words fed into the system. With the media accessible through sources like Common Crawl, the source material for these generative tools is enormous and growing all the time. Built-in feedback loops mean the results are constantly being refined and improved.
New use cases for this technology include generating frequently asked questions, writing summary blogs, creating images, and even generating code without requiring deep technical knowledge.
Because the datasets are (a) enormous, (b) ever-changing and (c) not checked by humans for accuracy, the results from new generative AI technologies are not terribly predictable. Your mileage may very much vary - you can put in the same request moments apart and get wildly different results. Misinformation has the potential to multiply, as do biases inherent in the data. You will get a result - presented very confidently - but not always an accurate one. A recent study put the accuracy of next-gen chatbots as low as 60% - fine when you’re goofing around, but problematic in an enterprise context. Accuracy improves with more specific questions, but given how easy these tools are for end users, it remains a substantial hurdle.
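One common mitigation for that run-to-run variability (my own illustration, not something from the tools mentioned above) is self-consistency: submit the same prompt several times and keep the majority answer. A minimal Python sketch, with the model call stubbed by canned answers so it runs standalone - a real version would call an LLM API here:

```python
import itertools
from collections import Counter

# Stub standing in for a real chat-model call. The canned answers
# mimic the run-to-run variability described above.
_canned = itertools.cycle(["$1M cap", "$1M cap", "unlimited", "$1M cap", "$2M cap"])

def ask_model(prompt: str) -> str:
    return next(_canned)

def majority_answer(prompt: str, n: int = 5) -> str:
    """Ask the same question n times and keep the most common answer.

    Repeated sampling smooths over variability between runs, though it
    cannot rescue a model that is confidently wrong every time.
    """
    votes = Counter(ask_model(prompt) for _ in range(n))
    return votes.most_common(1)[0][0]

print(majority_answer("What is the limitation of liability?"))  # -> $1M cap
```

Majority voting costs n calls per question, so in practice you would reserve it for answers that actually feed a decision.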
Security and compliance are additional concerns. Input and output data are not necessarily kept confidential or erased; large organizations like Amazon and Microsoft have warned employees not to input confidential information into AI tools. The US Copyright Office ruled in late 2022 that AI-generated content may not be copyrighted, but generative AI is also capable of producing content that is virtually indistinguishable from copyrighted material - potentially opening users up to legal exposure for intellectual property violations, or stripping typical protections from work product and deliverables.
Generative AI has the potential to make procurement friendlier. Chatbots could guide users through the procurement process - suggesting where to start, explaining which steps are required, and assisting with form fills and approval submissions - in an approachable, comfortable user interface. Procurement applications are notoriously bulky and unpleasant; chatbots can make an intimidating process accessible, and the iterative experience allows for a more dynamic and natural flow with a lot less “rage clicking.”
As a tail spend company, we know that procurement is often where ugly data goes to die. Categorizing and enhancing millions of lines of data is frustrating, tedious and often impractical, but flawed underlying data leads to bad reports and inaccurate results. Generative AI has the ability to enhance enormous datasets with “best guess” information that is increasingly accurate. We recently conducted some preliminary tests in this space that show promise around item data cleansing, and we predict that this arena is ripe for development and expansion.
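To sketch what that “best guess” enrichment can look like - with a hypothetical taxonomy and prompt, and the LLM call stubbed by keyword rules so the example is self-contained - the two key ideas are a constrained category list and a guardrail that rejects any answer outside it:

```python
# Hypothetical spend categories and prompt; not any vendor's actual pipeline.
CATEGORIES = ["IT Hardware", "Office Supplies", "MRO", "Unknown"]

PROMPT_TEMPLATE = (
    "Classify this purchase-line description into exactly one of "
    "{categories}. Answer with the category only.\n\nDescription: {description}"
)

def ask_model(prompt: str) -> str:
    """Stub for an LLM call: crude keyword matching stands in for the model."""
    text = prompt.lower()
    if "laptop" in text or "monitor" in text:
        return "IT Hardware"
    if "toner" in text or "copy paper" in text:
        return "Office Supplies"
    return "Unknown"

def categorize(description: str) -> str:
    prompt = PROMPT_TEMPLATE.format(categories=CATEGORIES, description=description)
    answer = ask_model(prompt).strip()
    # Guardrail: only accept answers from the known taxonomy, since the
    # model can otherwise invent categories with great confidence.
    return answer if answer in CATEGORIES else "Unknown"

print(categorize('DELL LATITUDE 5530 LAPTOP 15.6"'))  # -> IT Hardware
```

Run over millions of lines, a pipeline like this would also log every “Unknown” for human review - the oversight point made at the end of this post applies doubly to bulk enrichment.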
AI-generated contracts are a hot topic. So far, it looks like generative AI can do a passable job of producing a “first draft” or basic template of many common legal documents. I’ve found it particularly helpful at answering specific questions about a document, for example “What is the limitation of liability in this contract? <Paste entire document>”. Depending on the dataset, the new AI technologies can identify areas of opportunity, read PDFs and other embedded text, and assist in improving metadata around existing contracts.
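Pasting an entire document works only until you hit the model’s context limit; longer contracts have to be chunked and the question repeated per chunk. A rough sketch - character counts stand in for tokens, and the limit is an assumed placeholder, not any particular model’s:

```python
def build_contract_prompts(question: str, contract_text: str,
                           max_chars: int = 12000) -> list[str]:
    """Pair a question with slices of a long contract, one prompt per slice.

    max_chars is a crude stand-in for a model's context window; real limits
    are measured in tokens, so a production version would use a tokenizer.
    Per-chunk answers would then be merged (or re-asked) in a final pass.
    """
    chunks = [contract_text[i:i + max_chars]
              for i in range(0, len(contract_text), max_chars)]
    return [f"{question}\n\n---\n{chunk}" for chunk in chunks]

prompts = build_contract_prompts(
    "What is the limitation of liability in this contract?",
    "Lorem ipsum..." * 5000,  # pretend this is a long contract
)
```

Naive slicing can split a clause across two chunks, so smarter versions break on section boundaries - but the question-per-chunk pattern stays the same.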
Given the enormous datasets, generative AI can make some spooky predictions, and it can answer follow-up questions that emerge from a primary analysis. If you’ve ever done executive reporting, you know that one prediction often leads to a cascade of further questions - what if you filter it to US customers only? What if we assume inflation increases by 9% rather than 6%, does the math still work? These sorts of data rabbit-holes are the bane of reporting analysts and BI teams, but automated answers to them may be closer than we think.
Maybe - but the robots will need people to teach them, to oversee them, and to audit them. My advice when it comes to generative AI is the same as for any automation technology - it’s amazing, but it’s not magic, and it requires people to design it, implement it, and ensure it doesn’t go off the rails. Accuracy, ethics, bias, compliance and business process will always need human oversight. Rather than fighting the future, procurement professionals need to stay abreast of the actual capabilities of new technologies, understand their limitations, and use common sense to automate the right things, in the right way, without exposing their enterprise to more risk.