
Technology fatigue? Changing Procurement’s Perspective

As we collectively look forward to our summertime break, the opportunity to recharge the batteries and step back is a welcome rest. User fatigue is a common challenge in technology evolution; user dissatisfaction, lack of trust in technology, frustration and lost interest all reduce the level of user engagement and motivation in technology adoption.

Within Procurement, there are myriad AI use cases that offer potential but currently lack tangible and demonstrable output.

Risk Aversion

AI is hampered by bad data, and the resulting lack of trust in the technology stops decision makers from endorsing ‘finding a better way’. This vicious cycle creates a detrimental effect, or at the very least the perception of high risk.

Organizations that are risk averse have a higher level of user fatigue (The Institute of Risk Management, Risk Appetite & Tolerance Guidance Paper, Sept 2011). The balance between ‘reward’ and ‘risk’ must be explored and purposefully agreed by the business.

Gartner Research (Feb 2024), ‘Embed Total Cost of Ownership (TCO) in Procurement Teams to Optimize Value’, recommends TCO principles to improve business performance success. This perspective requires a new set of value measures, triangulating additional datasets housed across various internal and external systems. Procurement organizations must prioritize data integrity to minimize user fatigue. Note: you can have the best AI technology, but if there is no trust or use case benefit, adoption and ROI remain zero.

Data integrity covers not only accuracy and consistency, but also the ways data is interconnected across disparate systems to create ‘single sources of truth’ built on high quality data.
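
As a rough illustration of what that interconnection can look like in practice, the sketch below joins two hypothetical extracts (an ERP spend file and a contract repository) on a normalized supplier key and flags records that exist in only one system. The system names, columns and figures are assumptions for illustration, not a prescribed design.

```python
# Minimal sketch: linking supplier records from two hypothetical systems
# (an ERP extract and a contract repository) into one reconciled view,
# so TCO-style measures can be built on a 'single source of truth'.
import pandas as pd

erp = pd.DataFrame({
    "supplier_id": ["S001", "S002", "S003"],
    "supplier_name": ["Acme Ltd", "Globex ", "initech"],
    "annual_spend": [120_000, 45_000, 9_500],
})

contracts = pd.DataFrame({
    "supplier_id": ["S001", "S002", "S004"],
    "contract_value": [150_000, 40_000, 30_000],
    "payment_terms_days": [30, 45, 60],
})

# Normalize the shared key and names before joining.
for df in (erp, contracts):
    df["supplier_id"] = df["supplier_id"].str.strip().str.upper()
erp["supplier_name"] = erp["supplier_name"].str.strip().str.title()

# An outer join keeps every record and flags where the systems disagree.
single_source = erp.merge(contracts, on="supplier_id", how="outer", indicator=True)
gaps = single_source[single_source["_merge"] != "both"]

print(single_source)
print(f"{len(gaps)} supplier record(s) missing from one system and needing attention")
```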

Addressing the risk that AI itself impacts the organization’s data quality is another concern. According to McKinsey, “some 71 percent of senior IT leaders believe generative AI technology is introducing new security risk to their data.”

90% see improving compliance and risk as important for driving their data-driven decision intelligence investments ​​​
Source: ProcureTech research

Increasing Complexity Trend

According to Gartner, increasing supply chain cost and complexity, as well as economic and geopolitical instability, are significantly impacting business margins and supply continuity. Inevitably this means that organizations will require more TCO data points to manage the evolving supply chain scope.

Structuring unstructured data will make it a business asset.

Back to Basics

Seemingly the procurement road to success is not only to change perspective; to take advantage of technology, organizations must also revisit their risk and reward balance. Understanding how to deliver improved data integrity within the supply chain will support the journey of winning ‘hearts and minds’.

AI is intended to simulate human intelligence, and the ability to acquire and apply knowledge in an application that users trust and adopt will be a critical success factor.

Execution is Everything. AI is coming home. Share your Perspective.

Transformation AI Blues

Despite the wave of use cases and significant investment in Gen AI, current success rates remain low.

Back in 2021 Gartner reported that 85% of AI projects fail to deliver, and only 53% of projects made it from prototype to production. A more recent Infosys study (Dec 2023) indicates that only 6% of European Gen AI use cases have created value.

The hype around AI has led to an explosion of exploration activities across many organizations. In one example, in the rush to join the bandwagon, a bank bought tens of thousands of GitHub Copilot licenses without a clear sense of how to apply the technology. As a result, most of those projects will fail to generate competitive advantage.

7 out of 10 digital transformation initiatives are considered failures using value measures such as user adoption and overall ROI profitability metrics. There are many parallels between the causes of failure in AI and digital transformation projects. Spending big money on AI does not guarantee transformational results; it is implementation execution that realizes the value.

Data

Part of the failure can be attributed to bad data: too little, poor quality, not in the right place, in the wrong format, missing key data points and often unintelligible across disparate systems. Its insufficiency limits the transformation, and without a plan to transform the data in a way that creates value, AI initiatives struggle to tame the beast.

AI Magazine March 2022

Often organizations complete a one-off data cleanse to initiate the digital transformation; however this is short-lived given the unaddressed, constant generation of bad data. A deliberate focus is required to build better quality data from the start, linked across organizational systems, to create a ‘single source of truth’. AI use cases operating without these data build considerations never achieve ‘lift-off’.

An Oliver Wyman survey of 400 C-level executives in Europe and the Americas suggests that data privacy (25%) and security concerns (22%) are the top factors preventing AI adoption.

Success Strategies

According to the Gartner AI Hype Cycle, we have now reached the peak of inflated expectations, and during 2024 will be moving into the trough of disillusionment. The big question is how to begin creating measurable value.

Gartner Predictions


By 2025, growth in 90% of enterprise deployments of GenAI will slow as costs exceed value.

World Economic Forum Agenda (May 2024) Implementation Recommendations

  1. Pick the right composition of leaders for the AI transformation
  2. Embrace complexity and novel approaches
  3. Incorporate design principles in human-to-machine interactions
  4. Bring workers along on the AI transformation with upskilling
  5. Score quick wins via efficiency gains

My thoughts echo these recommendations. Transformational AI is a cross-functional effort; however many organizational departments operate in silos, and there is a need to appoint senior business managers with a sufficiently granular view across the organization to help articulate and determine the journey steps. Organizations tend to delegate to IT as a default, which is a mistake given that the success measures are business ones.

As with any digital transformation, human-centric design principles are essential to ensure that the AI output is structured and formatted in a way that is clearly understood by the human, for human decision validation.

By 2028, more than 50% of enterprises that have built large AI models from scratch will abandon their efforts due to costs, complexity and technical debt in their deployments.



If you don’t measure it, you don’t get it


KPIs are critical in objectively measuring value and assessing AI success (a minimal measurement sketch follows this list):

# Business Goal alignment

# Delivering data-driven insights

# User adoption

# Performance

# ROI
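
As a rough, illustrative sketch of how a couple of these KPIs can be measured objectively, the snippet below calculates a user adoption rate and a simple ROI figure; all numbers are invented for illustration.

```python
# Minimal sketch (illustrative figures only): turning the KPI list above into
# simple, objective measures for an AI initiative.
active_users, licensed_users = 420, 1_000            # user adoption
annual_benefit, annual_run_cost = 750_000, 500_000   # value vs. cost, in currency units

adoption_rate = active_users / licensed_users
roi = (annual_benefit - annual_run_cost) / annual_run_cost

print(f"User adoption: {adoption_rate:.0%}")   # e.g. 42%
print(f"ROI:           {roi:.0%}")             # e.g. 50%
```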

Paradigm Shift

GenAI is not just a tool; it’s a paradigm shift.

In conclusion, AI deployment complexities mirror the same challenges associated with organizational digital transformation initiatives. Transformational AI disillusionment will start with the realization that a strong business case, energy and resilience are required to stay the course. There are no shortcuts.

Should we be more cautious or is the risk worth the reward?

It’s official, AI is a tool

Financial markets continue to surge, driven by high-profile AI investment announcements; yet a 2023 study in the US indicated that around 90% of respondents have a limited understanding of AI (Pew Research, August 2023). I do not blame them.

In the confusing world of tech, where 52% are more concerned than excited about the technology, the negative belief that AI can be a threat to human society prevails. The science fiction view of AI as an uber-intelligent, self-learning machine capable of creating untold havoc needs to be held in check with greater user education on what artificial intelligence is and is not.

In reality, we have a software algorithm that operates autonomously in the background, functioning to automate a particular process, activity and/or task.

AI is a very pragmatic technology, that is just a tool to help us within one given domain to do things better

Dr. Kai-fu Lee, Chairman & CEO Sinovation Ventures

Definition Check

Generative AI uses text, images, audio, and video content to generate new ‘creative’ content according to the parameters set by the human user (think ChatGPT and similar tools).

Predictive AI analyzes past data to discover patterns and then uses current data to forecast what will happen in the future. It is a prediction only, just like the weather forecast. The more data, and the better the learning and testing procedure, the more accurate the prediction. We all ‘learn from our mistakes’.
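
As a loose illustration of the ‘forecast’ idea, the sketch below fits a simple trend to twelve months of invented demand history and projects the next month; the figures and the choice of a linear trend are assumptions made purely for illustration.

```python
# Minimal sketch of the 'weather forecast' idea: fit a trend to past demand
# data and project the next period. Figures are illustrative only.
import numpy as np

months = np.arange(1, 13)                          # past 12 months
demand = np.array([100, 104, 103, 110, 112, 115,
                   118, 117, 123, 126, 128, 131])  # historical observations

slope, intercept = np.polyfit(months, demand, deg=1)  # learn the pattern
next_month = 13
forecast = slope * next_month + intercept             # a prediction, not a guarantee

print(f"Forecast for month {next_month}: {forecast:.0f} units")
```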

Today Generative AI accounts for only 15% of current use cases; despite its growth, the bulk of use cases, 85%, relate to Predictive AI (CIO, “Generative AI is hot, but predictive AI remains the workhorse”, February 2024).

Data is the Material

We are building cognitive processes that understand human behavior, which means AI is about data analytics, solving complex problems, and attempting to ‘think like a human’ to make our life easier. AI is constantly assessing the changing environment and ‘joining the dots’ to anticipate the next step. Our habitual routines are data patterns that can be predicted, and by automating certain resources, these anticipations save time. After eliminating residual errors and applying a success probability, AI delivers an accurate recommendation.

AI applications are systems to process the Material

Devices or implements that carry out a particular function to help perform an activity or task are defined as tools. In building cognitive solutions, modelling and refining the outcome is intrinsic to the machine learning process. AI is a software tool. Like any tool, it can be used for good and for bad. This does not diminish the risk; however, the threat is human in origin. Think deep fakes and other cons and scams.

Currently AI recommendations still require experts, or us, to validate and check the recommendation prior to confirming the output. For example, check that the travel booking recommendation shows London, England, not London, Texas!

Forget artificial intelligence – in the brave new world of big data, it’s artificial idiocy we should be looking out for

Tom Chatfield, Author & Broadcaster

AI is still evolving; developing technology tools takes time.

Generative AI: What’s the procurement buzz?

Is having your procurement platform based on generative AI a ‘silver bullet’ breakthrough? I was recently involved in a discussion which got me thinking about the implications for business operations. As procurement is often tasked to leverage new ideas into business benefits, what does this all mean…

  • Generative AI generates new ‘creative’ content – written articles, art, music; think ChatGPT, DALL-E 2, AIVA. Content represents the ability to communicate ideas and depends on the context and purpose. In the procurement world, examples include how-to guided buying, helpdesk FAQs, and data visualization. Generative AI is trained using ‘unstructured’ data and learns from data patterns.
  • Predictive AI utilizes data to generate predictions to support decision making. These insights are used by supply chain, finance and procurement functions to improve forecasting, optimization, fraud detection etc., and help us make sense of the ocean of historical data that exists within an organization. Predictive AI supports the relevant classification of datasets, data correlation and trending to turn data into strategy formulation, and is normally associated with ‘structured’ data (a minimal classification sketch follows this list).
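
As a rough sketch of that classification idea, the example below trains a tiny text classifier on a handful of invented, pre-labeled spend line descriptions and then categorizes two new lines; the categories, descriptions and choice of model are assumptions for illustration only.

```python
# Minimal sketch of the Predictive AI bullet above: classifying historical
# spend line descriptions into categories so they can be analyzed and trended.
# Data and categories are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

descriptions = [
    "laptop docking station", "office chairs", "cloud hosting subscription",
    "standing desk", "software licence renewal", "monitor 27 inch",
]
categories = ["IT Hardware", "Furniture", "IT Services",
              "Furniture", "IT Services", "IT Hardware"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(descriptions, categories)              # learn from labeled history

new_lines = ["ergonomic task chair", "annual saas subscription"]
print(dict(zip(new_lines, model.predict(new_lines))))
```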

Procurement frustrations

According to a survey back in 2020, 82% of supply chain leaders experience frustrations with AI (Secondmind, AI System Survey).

The biggest frustrations around AI were caused by a lack of reliable data (37%) and “rigid processes and internal structures” which prevented quick responses to changing market conditions (41%). 

AI relies on a large quantity of good reliable data.

i) Poor Quality Data – we all know this challenge!

The report said 96% felt this affected their ability to make effective decisions, with 50% saying they had to spend significant time manually analysing and interpreting the data to help inform decisions, and 31% highlighting expensive forecasting and planning mistakes.

Bad data or out of date data will corrupt the insight – Garbage in, Garbage out.

ii) Organizational barriers – we all know this challenge!

It takes a village to respond to an event, and any organizational disconnect, lack of alignment or barrier to reacting promptly to data-driven insights will hinder an organization’s ability to leverage AI output.

Conclusion

Generative AI is considered core to the growth of AI; the AI market was valued at $136 billion in 2022 and is predicted to grow at a compound annual rate of 37.3% from 2023 to 2030 (CIPS, Supply Management, July 2023). However it is clear that, presently, it is still humans with the relevant expertise and experience who supervise the inputs and maintain responsibility for interpreting the outputs. The understandable concern is the need to establish oversight to ensure standards for responsible AI practices.

Generative AI will not solve poor data or organizational challenges, and our ‘call to action’ is the need to address these pivotal factors to best leverage the benefit of AI advancements. Any breakthrough is dependent on getting our house in order!

Final Note: Generative AI and Predictive AI are complementary and becoming more symbiotic. Generative AI generates content for processes and can be used to create synthetic data for Predictive AI. Generative AI is able to use predictive processes to generate the next unit of content.

Help set the path. Champion data quality and organizational agility

Let’s discuss ‘Cognitive’

Recent press articles had my mother phone me expressing concern that Artificial Intelligence would take away our jobs! I seem to recall they said the same thing about computers, which went on to spawn a global industry now worth over $5 trillion.

Being passionate about technology and procurement, I am waiting for the technology providers to explain how those frustrating business processes that users struggle to follow will be transformed, rendering the front and back office obsolete.

Disruptive technology often benefits us in ways we had not initially considered. There is one key ingredient I believe AI requires to transform our lives: COGNITIVE.

Cognitive involves human perception; it addresses how we think, learn and remember. Each of us is wired differently: How we interact, the level of intuition we employ to make sense of the world and intellectually reason a fact to form knowledge makes us who we are.

Any fool can know, the point is to understand.

Albert Einstein

Cognitive science connects with the way you think and behave. Our ability to process information, solve problems, and interpret speech and visual signals, for example reading someone’s body language, helps us to form decisions. This will be core to how AI creates value. If we cannot interact with or make sense of AI output, then despite limitless intelligence and the endless possibility of insights, we will struggle to leverage its full potential. It’s akin to the cleverest human with poor interpersonal skills facing cultural barriers in a world dominated by us ordinary mainstream folks.

Cognitive AI enables a machine to infer, reason, and learn in a way that emulates the way humans do things. Cognitive AI does this by processing both structured and unstructured data, and by learning from interactions between humans and between machines. It is worth distinguishing it from RPA (Robotic Process Automation), which automates repetitive tasks using structured data. RPA has already made significant productivity and efficiency gains for many organizations.

Combine Cognitive AI with RPA and we then have cognitive bots able to reason and make decisions. The challenge is who teaches the bot the right answer and defines the data structure; it’s us humans again. Beyond the concern of AI bias, the more fundamental point is that there is often no single right answer in life. After all, what is right for you may not be right for me. Our individual complexity can create frustration for others.

Cognitive bots analyze processes, recognize inefficiencies and create recommendations to increase productivity and quality. Humans remain the ultimate decision makers. Our role transforms to address how we configure and manage cognitive bots. Our individual workload just got more impactful!

It’s by learning new things in life that we grow. For me it’s a thumbs up opportunity.

Good Data is a Virtue

One of my favorite sayings with respect to data quality is ‘Garbage In, Garbage Out’. Data needs to be respected, yet many organizations are apathetic about the need to control and manage data quality. It typically falls to certain administration functions to manage, but often they lack an understanding of the underlying information.

And those same organizations are first to complain about bad data.

Quality is not an act, it is a habit

– Aristotle

Bad Habit Practices

Practice 1: Information is power. Let’s maintain our own data silos – information is shared only with those that need to know. “It’s the way senior management like it!”

Practice 2: Role perception: “I am too busy to waste time administering data entry. The procedure is not user friendly”; and there is little incentive, or penalty, to enter data completely.

Practice 3: Data capture importance: “It’s not my problem, someone else will check it and clean it”. This is also further complicated as many organizations are constantly changing – there is a lack of consistency, the goal posts are always moving!

Practice 4: Poor data transparency means it is easier to hide true performance. It suits to keep it opaque as “we are more likely to keep our jobs”.

Cost of Quality

The cost of quality rule (1:10:100) illustrates how the cost of error builds up exponentially as it moves down the value chain: the cost increases by a factor of 10 at each stage in which an error remains undetected. For example, remedying a quality issue captured at the start of a manufacturing process costs $1; if it is only caught later in the process, $10; and if it remains undetected to the end of the process and goes on to impact an external customer, the cost of quality failure is $100. The learning is that prevention is better than the cure: prevention is less costly than correction, and correction is less costly than failure.
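
A back-of-the-envelope sketch of the rule, with a $1 base cost assumed purely for illustration:

```python
# Minimal sketch of the 1:10:100 rule: the cost of the same defect grows by
# a factor of 10 at each stage it goes undetected. Base cost is illustrative.
base_cost = 1  # cost to fix at the point of entry, in $

stages = ["entry (prevention)", "internal process (correction)", "customer (failure)"]
for i, stage in enumerate(stages):
    print(f"Defect caught at {stage}: ${base_cost * 10 ** i}")
# Output: $1, $10, $100
```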

Despite this rule, it seems some organizations pursue workaround strategies. These strategies unfortunately do not address the root cause issues, merely paper over the cracks, and end up being more costly than a direct, targeted source-data strategy.

Correction Approaches

Challenge 1: Data analysis tools, such as spend analytics solutions, will isolate bad data and only map good data. Aside from being after the event, the challenge is: who trains the solution to know what good and bad look like, and what happens with the bad data? Additionally, if inputs are inconsistent and highly variable, this becomes a never-ending, high-touch exercise, and any gaps will cause the entire data set to be questioned.

Challenge 2: Send all the data to be screened, audited and cleansed. This suffers from the same limitations as Challenge #1, as well as throwing up potential delays. Do you have an army to administrate this? Probably not – a more robust approach is required!

Whilst these workarounds can complement an organization’s capability in controlling and managing data, a better way is to initiate good data at the start.

Strategy Tips to achieve ‘right first time’ data

  1. Create user accountability – bad data and poor workmanship are the result of a cultural habit that disregards the significance of {good} data. Leaders must champion data quality.
  2. Join up data sources electronically. Ensure you have a single source of truth for the respective datasets.
  3. Standardize the data terminology, format and follow a logical hierarchy.
  4. Structure the data to ensure that it works for the different user perspectives. Enrich the data where appropriate for the user, but it must remain connected with the source of truth.
  5. Enter the complete information once. Combine a maniacal attention to detail at the start of the process with the use of templates and checklists. Design solution forms to elicit data entry in the most user friendly and intuitive manner, and avoid having forms that contain irrelevant fields or fields that are blank.
  6. Utilize automated system rule sets to perform stage 1 ‘checks and balances’. Prevent the garbage entry possibility! (A minimal sketch of such rules follows this list.)
  7. Reduce the temptation to have multiple approvals to validate the data. This approach has a poor success rate (as well as delaying the process). See item #1.
  8. Employ users that ‘get it’ – avoid those that do not. Good and bad data cannot be mixed – this corrupts the entire data set.
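
To make item 6 concrete, here is a minimal sketch of what stage 1 ‘checks and balances’ could look like at the point of requisition entry; the field names, the CC-#### cost center format and the rules themselves are hypothetical examples, not a prescribed standard.

```python
# Minimal sketch of stage 1 'checks and balances' applied at the point of
# entry, before a record is accepted. Field names and rules are assumptions.
import re

REQUIRED_FIELDS = ["supplier_id", "description", "quantity", "unit_price", "cost_center"]

def validate_requisition(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry can be accepted."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    if record.get("quantity") is not None and record.get("quantity", 0) <= 0:
        problems.append("quantity must be positive")
    if record.get("unit_price") is not None and record.get("unit_price", 0) < 0:
        problems.append("unit price cannot be negative")
    if record.get("cost_center") and not re.fullmatch(r"CC-\d{4}", record["cost_center"]):
        problems.append("cost center must match the CC-#### format")
    return problems

entry = {"supplier_id": "S001", "description": "27in monitor",
         "quantity": 2, "unit_price": 180.0, "cost_center": "CC-12"}
print(validate_requisition(entry))   # -> ["cost center must match the CC-#### format"]
```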

Data integrity is more than good data. It is about establishing processes that control and manage the data to ensure that it is accurate, consistent, complete and timely. With the emergence of big data, getting these fundamentals right will be critical.

We are all accountable for good data. To err is human, but to really foul things up requires a computer. Contact Us.

Painting by (Part) Numbers

There are some amazing painting by numbers kits available in the market today. Interestingly the concept was first introduced in the 16th century by Michelangelo, who assigned sections of his ceiling masterpieces to students to paint, pre-numbering each section to avoid mistakes. In isolation these sections seem not to make sense, but as they come together, they complete the whole picture beautifully.

Within manufacturing, the use of part numbers and bills of materials delivers a similar outcome, ensuring consistency and repeatability. Services use procedures and checklists to ensure compliance. And for all the above, the goal is to standardize, manage and quality control outputs, as well as leveling up productivity.

Heads Up: My Digital Opinion

Maintaining data quality is one of the biggest challenges within organizations, particularly rapidly growing ones. This observation is not a commonly shared view, yet digitalization is often positioned as a silver bullet that magically solves an organization’s quality challenges. This oversimplifies the underlying effort and work needed to define, structure and cleanse data that organizations can confidently trust.

For example, much of our spend analytics data accuracy relies on individuals entering and creating purchase requisitions correctly. If these inputs are free text, mainly descriptive and open ended, they will absolutely vary depending on the individual entering the data, and despite normalization (whether via initial data screening, scrubbing, or future use of RPA and AI applications), they will not be reliable enough to trust without further manual touch and investigation. If it is not 100% right at the get-go, we need to check!
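
To illustrate the point, the sketch below takes three free-text requisition lines that describe the same item (invented for illustration) and applies a naive normalization pass; even then the entries do not collapse cleanly to one record, which is why a manual check is still needed.

```python
# Minimal sketch: three requisition lines for the same item, entered as free
# text by different users. Even after basic normalization they do not collapse
# to a single value, so downstream analytics cannot fully trust them.
raw_entries = [
    "Dell 27\" Monitor - U2723QE",
    "monitor, dell u2723qe 27 inch",
    "Screen for desk (Dell)",
]

def normalize(text: str) -> str:
    cleaned = "".join(ch.lower() if ch.isalnum() else " " for ch in text)
    return " ".join(sorted(set(cleaned.split())))

print({entry: normalize(entry) for entry in raw_entries})
# The first two converge on similar tokens; the third stays ambiguous without
# a part number or catalog reference to anchor it.
```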

Where items can be procured via a catalog and/or using a parts list, these part number codifications deliver confidence that the purchase is what it says it is, and support an apples-to-apples analysis and baseline. Unfortunately not all businesses are that clear cut.

The question is how we can structure inputs, particularly in the acquisition of services, to improve the output. User training itself is not the solution. There are strategies, requiring collaboration with Finance (Chart of Accounts), Operations (Maintenance Schedules), Construction (Bill of Quantity), Accounts Payable (Invoices) etc., that redefine and reshape data entry to create more end-to-end codification, using guided assistance and part numbers (and variations thereof) to deliver a win/win. As with painting by numbers, in isolation this may not make sense to those involved in a process subset; however if you try to paint without numbers (and operate digitally) you are likely to struggle to deliver the picture you envisaged (artist skills aside!).

Prepare, build and accelerate the journey to complete the picture. Contact Us.

Net Zero

Carbon neutrality is no longer a matter of if but when. The 2021 court case against Shell in the Netherlands demonstrated it is no longer about ticking the boxes but taking real action. What does this mean for your supply side? Scope 3 emissions cover all indirect emissions incurred in the supply chain, both upstream and downstream, and, for many businesses, suppliers can account for between 50% and 80% of total emissions.

Having accurate information on your supply base is imperative; this creates the first key challenge: ‘data’ capture. With potentially thousands of suppliers, businesses who have not structured or enriched supplier master data with the appropriate dataset will struggle to get to grips with net zero. There are a number of third-party solutions to help businesses; however, unless these tools are embedded into a robust supplier onboarding process and regular supplier review, the process itself becomes unwieldy and less meaningful.
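
As a rough sketch of what ‘enriched supplier master data’ can mean in practice, the example below attaches reported Scope 3 figures to a handful of invented supplier records and reports how much spend is actually covered; the structure and numbers are assumptions for illustration only.

```python
# Minimal sketch: enriching a supplier master with Scope 3 emissions data
# gathered at onboarding/review. Supplier names and figures are illustrative.
suppliers = [
    {"id": "S001", "name": "Acme Ltd", "spend": 120_000, "reported_tco2e": 340.0},
    {"id": "S002", "name": "Globex",   "spend": 45_000,  "reported_tco2e": None},
    {"id": "S003", "name": "Initech",  "spend": 9_500,   "reported_tco2e": 12.5},
]

covered_spend = sum(s["spend"] for s in suppliers if s["reported_tco2e"] is not None)
total_spend = sum(s["spend"] for s in suppliers)
missing = [s["name"] for s in suppliers if s["reported_tco2e"] is None]

print(f"Spend covered by supplier emissions data: {covered_spend / total_spend:.0%}")
print(f"Suppliers to chase at the next review: {missing}")
```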

Accompanying this data challenge, a change of perspective and approach must be applied to ensure the net zero objective can be operationalized in a meaningful way. Importantly this action must be measurable and capable of being validated by customers. Is your sustainability goal more than lip service? What changes have you made?

Need help? Contact Us