
“People should stop training radiologists now. It’s just completely obvious that within five years deep learning is going to do better than radiologists.” Such were the words of Geoffrey Hinton, one of the fathers of today’s artificial intelligence, in 2016. Yet radiologists have not been replaced.
This prediction shows how over-enthusiasm about AI, even from an eminent expert, can create a disconnect with reality. At QAD, we help manufacturers and supply chains avoid this disconnect by focusing on the job to be done and deploying AI only when it makes sense. What hides behind today’s state of AI? What does the future seem to hold, and how do we position ourselves in this landscape?
What the Present Shows
Uncertainty – Despite their incredible growth and worldwide adoption, generative AI models such as ChatGPT have yet to prove consistent value. Though GenAI responses are getting more impressive, the rapid rate of change, the importance of skillful prompting, and the frequency of hallucinations (see Risk below) make these tools difficult for companies to roll out. Roles that heavily involve text generation and summarization (including basic code) might shave time off regular tasks, but even then, human discernment remains irreplaceable. Much like with self-driving cars, AI developers advise staying ready to take the wheel at any moment, and bystanders who notice the autopilot tend to get nervous out of a blend of self-preservation and prejudice.
Risk – Large Language Models (LLMs) hallucinate: as they generate words and paragraphs, they stray from the original intent of their response and even invent facts out of thin air, simply because they predict the most likely next word one at a time, without so much as common sense or a world view. Court cases have been upended by lawyers whose LLMs fabricated nonexistent precedents as usable citations, all while assuring them that the output was completely sound and grounded in truth (see Mata v. Avianca). Risks can be even less obvious: AI models may induce behavioral manipulation by favoring short-term gratification at the expense of long-term vision. Models also tend to exhibit biases, which come from the source: the data. Even societal biases emerge in models trained on large corpora, which is the case for most LLMs.
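The word-by-word mechanism described above can be illustrated with a deliberately tiny toy: a sampler that picks each next word only from what tends to follow the previous one. All words and probabilities here are invented for illustration; real LLMs use neural networks over vast vocabularies, but the principle of local, one-token-at-a-time prediction with no global fact-check is the same.

```python
import random

# Toy "language model": for each word, the words most likely to follow it.
# All probabilities are invented for illustration only.
BIGRAMS = {
    "the":   [("court", 0.5), ("case", 0.5)],
    "court": [("ruled", 0.6), ("case", 0.4)],
    "case":  [("was", 0.7), ("ruled", 0.3)],
    "ruled": [("that", 1.0)],
    "was":   [("dismissed", 1.0)],
    "that":  [("the", 1.0)],
    "dismissed": [],
}

def generate(start: str, max_words: int = 8, seed: int = 0) -> list[str]:
    """Next-word generation: each word is chosen only from what tends to
    follow the previous one. Nothing checks the resulting sentence against
    reality -- which is, in miniature, why such systems can 'hallucinate'."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        options = BIGRAMS.get(words[-1], [])
        if not options:
            break
        choices, weights = zip(*options)
        words.append(rng.choices(choices, weights=weights)[0])
    return words

print(" ".join(generate("the")))
```

Every step is locally plausible, yet the whole sentence may assert something that was never in any source, because plausibility is the only criterion being applied.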
Trade-off – The more data an AI model has for training, the more information it can draw on for its task, and the more capable it becomes. But this may come at the cost of data privacy and ethics. Even access to data can be problematic: many large models sit in a gray zone of copyright law, having used books, songs and other artistic works for their training. Finally, large models keep growing larger, with ever-increasing energy costs and environmental impacts. Training GPT-3 alone used approximately 1,300 MWh, about as much as 130 US homes consume in a full year. Models that generate images from prompts can use as much energy as fully charging a smartphone – just to generate one single image.
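The household comparison above is easy to sanity-check, assuming an average US home uses roughly 10 MWh of electricity per year (official EIA figures hover slightly above this; the round number is used here purely for illustration):

```python
# Sanity check of the training-energy comparison in the text.
# Assumption: an average US home uses ~10 MWh of electricity per year
# (an illustrative round figure; actual averages vary slightly by year).
gpt3_training_mwh = 1_300
home_annual_mwh = 10

homes_powered_for_a_year = gpt3_training_mwh / home_annual_mwh
print(homes_powered_for_a_year)  # → 130.0
```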
What the Future Holds
Despite the current state of the AI landscape, the potential is still tremendous. Unlike previous waves of automation, AI will affect every stratum of job occupations, including leadership roles and high-skilled professions. AI models will need to find the key to their financial and environmental sustainability, and early adopters will most likely gain powerful advantages. What will determine their success lies in the underlying paradigm: how do we see AI? How much focus should we put on safety? Will AI replace occupations, or will it assist them?
What QAD Proposes
Assistance – The systems we create do not aim to replace occupations. Instead, we believe AI must amplify human intelligence, increasing people’s productivity and efficiency by aiding them in their daily tasks.
No Single-Solution Approach – Generative AI is not the be-all and end-all: machine learning and statistics are still at the core of our projects, with true in-house expertise guiding the design of our models. When needed, we design models from scratch to answer our customers’ needs.
Responsible AI – We strive to develop AI models that match or exceed customers’ expectations of quality and privacy, as well as compliance with their processes. We limit biases by ensuring data quality and diversity, and we monitor our models’ performance over time. When using LLMs, we connect them to full knowledge bases so that they retrieve the most pertinent information and produce concise, documented answers with the least risk of hallucination. And the source data is always generated by QAD or anonymized with explicit permission.
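Connecting an LLM to a knowledge base in this way is commonly called retrieval-augmented generation (RAG): relevant documents are fetched first and placed into the prompt, so the model answers from real sources rather than from memory. The sketch below shows the idea with a crude keyword-overlap retriever; all data, names and the scoring function are illustrative assumptions, not an actual production system (real deployments typically use vector embeddings for retrieval).

```python
# Minimal sketch of retrieval-augmented generation (RAG): before asking the
# model anything, fetch the most relevant documents from a knowledge base and
# put them in the prompt, so answers are grounded in documented sources.
# All data and names here are illustrative, not a production system.

KNOWLEDGE_BASE = [
    "Purchase orders over $10,000 require two approvals.",
    "Supplier lead times are reviewed every quarter.",
    "Inventory counts are reconciled at month end.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase words.
    Real systems use vector embeddings instead of word overlap."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, k: int = 1) -> str:
    """Retrieve the top-k documents and prepend them to the question,
    instructing the model to answer only from the provided context."""
    top = sorted(KNOWLEDGE_BASE, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {doc}" for doc in top)
    return (
        "Answer using ONLY the context below; say 'unknown' otherwise.\n"
        f"Context:\n{context}\nQuestion: {query}"
    )

print(build_prompt("How many approvals do purchase orders need?"))
```

Because the model is explicitly told to answer only from the retrieved context, an answer it cannot support is far more likely to come back as “unknown” than as an invented fact.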
Pragmatic AI – Our models answer direct business use cases and needs, with tangible customer ROI in mind throughout their design. Many riding the AI wave develop a cookie-cutter solution and then go looking for problems to solve with it. We do the opposite: we find a problem to solve and build the best-fitting solution, with model efficiency, cost-effectiveness and parsimony as our guidelines.
Generative AI, and even AI as a whole, is in its infancy. It is still light-years away from what it promised, and the ROI is not great. Yet.
We believe that AI can be a net positive for any industry, and that the push toward its adoption will lead to new paradigms, provided it is approached with pragmatism. This pragmatism is fueled by one element that is at the core of our mindset, and constitutes the cornerstone of what we build: people.