How Generative AI helps improve your business

Nicolas Fekos
13 min read · Oct 11, 2023
Image by Alexandra_Koch from Pixabay

Quite a few years ago, in a previous Medium article, “How an AI Engine can improve your business”, I mentioned the following:

“The huge technological progress of the last decade addressed and greatly impacted the basic human need of communication. Websites, blogs, emails, text messages, Facebook, Snapchat, Slack and smartphones on the one hand and Google on the other as the dominant intermediary, all in essence addressing our need to communicate.

I would refer to this as the ‘communication era’ that has “plateaued” in terms of potential truly disruptive innovation. The next era will be the ‘knowledge era’, where Artificial Intelligence (AI) will dominate.”

Now OpenAI has ingested a vast swath of the Web to usher in this “knowledge era”, which could be seen as a new technology paradigm: it is now possible to infuse Natural Language Processing and Inference ubiquitously.

At Fortuit, we are now building Generative AI infused apps that solve real world problems.

Inevitably, the questions that arise are:

1. What is Generative AI (like ChatGPT)?

2. What are the business benefits of integrating Generative AI?

3. What are the Data Privacy implications?

4. To what extent are businesses integrating Generative AI?

Let’s go.

WHAT IS GENERATIVE AI (LIKE CHATGPT)?

To put the discussion into context, it is important to describe what ChatGPT is in both business and technical terms, as even within the software industry, AI and Machine Learning expertise is not ubiquitous.

Business Perspective

ChatGPT is a cutting-edge Generative AI tool designed to enhance customer interactions, automate repetitive tasks, and provide valuable insights. Example use cases include market research, customer support and sales / marketing efforts, allowing businesses to streamline communication processes, handle inquiries and even generate leads.

Its ability to understand and generate human-like text makes it a powerful asset for businesses seeking to improve their market position and enhance their customer engagement strategies. Businesses can leverage ChatGPT to create interactive and dynamic customer experiences, ultimately leading to increased customer satisfaction and loyalty.

Technical Perspective

For developers, ChatGPT is a tool that provides natural language understanding and generation capabilities to create intelligent, conversational interfaces to their applications.

ChatGPT is based on the Generative Pre-trained Transformer (GPT) architecture, a type of deep learning model designed for natural language processing tasks. The underlying Transformer architecture was introduced in the 2017 paper “Attention Is All You Need” by Vaswani et al. at Google; models like GPT built on it at very large scale are referred to as Large Language Models (LLMs).

The original Transformer architecture is comprised of an “Encoder” and a “Decoder” (GPT models actually use a decoder-only variant, but the encoder/decoder picture is useful for building intuition).

Encoder

The encoder’s job is to read and understand user input, such as a sentence or a paragraph. It uses a mechanism called “self-attention” to capture relationships between the words in the input, weighing the importance of each word with respect to all other words rather than relying on their order as such.

In this way contextual information and dependencies between the input words are captured.

Decoder

The decoder generates text by predicting the next word given the input and the previously generated sequence of words.

Example

To illustrate with a simple example (a toy code sketch follows these steps):

Input: “What is a Chatbot”

Encoder:

· The input words become token indices: [0, 1, 2, 3]

· Accordingly, the calculated “attention” scores are [0.2, 0.3, 0.4, 0.7], indicating that the word “Chatbot” commands the highest interest (attention)

· The scores are multiplied by each word’s “vector”, where a vector is a numeric representation of a word in a vector “space” in which similar words have similar vectors, indicating contextual proximity (e.g., words that often appear together, like “hotel” and “travel”)

· The Encoder’s output: [0.2 * vector(What), 0.3 * vector(is), 0.4 * vector(a), 0.7 * vector(Chatbot)]

Decoder

· The Encoder’s output is the Decoder’s input

· From the Encoder’s output, the decoder derives the probabilities of potential next words from the overall (English or other) vocabulary

· From these words, the decoder’s mechanism selects the one with the highest probability to come next, in this case the word “system”

· So, the generated response starts to become “What is a Chatbot? … [system]”

· This is repeated until the desired response length is reached.
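To make the mechanics above concrete, here is a deliberately simplified Python sketch of the same idea: toy attention scores weight toy word vectors, and the “decoder” picks the most likely next word from a tiny candidate vocabulary. The vectors, scores, and vocabulary are invented for illustration only; a real GPT model computes none of them this way.

```python
import numpy as np

# Toy word vectors (real models use learned embeddings with
# hundreds or thousands of dimensions, not 3).
vectors = {
    "What":    np.array([0.1, 0.0, 0.2]),
    "is":      np.array([0.0, 0.1, 0.1]),
    "a":       np.array([0.1, 0.1, 0.0]),
    "Chatbot": np.array([0.9, 0.8, 0.7]),
}

# Toy attention scores for "What is a Chatbot"
# ("Chatbot" receives the most attention).
words = ["What", "is", "a", "Chatbot"]
attention = [0.2, 0.3, 0.4, 0.7]

# "Encoder": weight each word vector by its attention score and
# sum them into a single context vector summarizing the input.
weighted = [score * vectors[w] for score, w in zip(attention, words)]
context = np.sum(weighted, axis=0)

# "Decoder": score a tiny candidate vocabulary against the context
# and pick the most likely next word (a real model uses a softmax
# over tens of thousands of tokens).
candidates = {
    "system":   np.array([0.8, 0.7, 0.6]),
    "banana":   np.array([0.0, 0.1, 0.9]),
    "software": np.array([0.7, 0.6, 0.5]),
}
scores = {w: float(np.dot(context, v)) for w, v in candidates.items()}
next_word = max(scores, key=scores.get)

print(next_word)  # -> "system"
```

In a real model this loop repeats, feeding each newly generated word back in, until the response is complete.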

No more technical stuff from here on!

WHAT ARE THE BUSINESS BENEFITS OF INTEGRATING GENERATIVE AI?

The critical question for businesses is what this huge new technological opportunity means and what benefits the application of Generative AI can bring to a company’s overall operations.

Furthermore, how can it be integrated seamlessly and safely into each business function with a view to improving performance and efficiency, and to connecting and centralizing these business processes? And how can Generative AI integration be streamlined into a coherent, simple, long-term AI strategy?

Business 101

To put the above in context, let’s briefly take a step back to Business 101 and see what processes make up a business.

Overall, business processes can be approximately divided into three categories: Core, Support, and “Long Tail”.

Core Processes

· Production

· Sales and Marketing

· Order Fulfillment

· Customer Support

· Product Development

Support Processes

· Human Resources (HR) & Recruiting

· Finance

· IT

· Procurement

Long Tail (everything else)

A long-tail process is a custom, ad hoc workflow created in response to evolving business needs. It usually reflects a gap between systems, apps, departments, or workflows and is typically handled through manual work or an informal procedure.

Examples

· Contract Management

· Vendor approval

· New hire request

· Budget requests

Application of Generative AI to Business Processes

A huge opportunity exists for businesses to leapfrog competition by reimagining how humans get work done with generative AI applications at their side.

Generative AI can be, and is being, applied to business processes, unlocking novel use cases while also speeding up, scaling, and improving existing ones.

Some potential business benefits of Generative AI integration are:

1. Automated Content Creation

2. Personalized Customer Experiences

3. Customer Service and Support

4. Language Translation and Localization

5. Cost Reduction

6. Innovation and Idea Generation

Overall, implementing generative AI enhances productivity, reduces costs, improves customer satisfaction and unlocks innovation across various business functions.

To be more specific, below are some particularly beneficial use cases (among many more) mapped to the business processes described above.

Core Business Process: Sales and Marketing

A Sales AI Assistant to augment sales performance

· Imagine a customer sales call where the company’s “Sales AI Assistant” suggests upselling opportunities to the salesperson in real time based on the actual content of the conversation, drawing from CRM data, market trends, current news and social media influencer data. Before the call, the AI Assistant would provide a sales pitch draft. After the call, an assessment would be automatically generated, allowing the salesperson to give feedback to the AI Assistant with a view to helping its performance improve over time. This does not mean replacing salespeople but teaming them up with generative AI.

Core Business Process: Product Development

Adding value to an existing product

Imagine if, for every product a company sells, an employee could be assigned to guide the customer from unboxing and initial use through troubleshooting and best-use guidelines.

Customer satisfaction would skyrocket, but of course the cost would be unsustainable. Imagine, for example, a specific air fryer product with its own accompanying AI Assistant, implemented as a mobile app with speech-to-text and text-to-speech functionality, that guides product use from day one. Consider what this means for upselling, feedback, engagement, support and loyalty. This would be like hiring thousands of support employees, essentially for free.

Support Business Process: Human Resources

Company Culture and standards, training and on-boarding HR assistant

A generative HR AI Assistant trained on proprietary knowledge such as company policies, customer interaction guidelines, and overall company culture and governance could provide continuous guidance and support to all company employees. The same assistant could act as a company Learning Management System (LMS) and as an on-boarding guidance counselor for new employees.

Central AI Assistant — all processes, all assistants

Importantly, in conjunction with applying generative AI to the various processes described above, a core AI Assistant could be implemented as a “centralization” vehicle connecting, improving and monitoring all business processes and sub-assistants, thereby improving performance and efficiency and unlocking innovation across the business as a whole.
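To make the idea tangible, here is a minimal, hypothetical Python sketch of a central assistant routing requests to sub-assistants. The class names, keywords, and sub-assistants are purely illustrative assumptions, not a prescribed design; a real system would route with an LLM and call each sub-assistant’s own knowledge and APIs.

```python
# Illustrative sketch: a "central" assistant that routes each user
# request to the appropriate sub-assistant. Keywords and names are
# hypothetical placeholders.

class SubAssistant:
    def __init__(self, name: str, keywords: list[str]):
        self.name = name
        self.keywords = keywords

    def handle(self, request: str) -> str:
        # In a real system this would call the sub-assistant's own LLM,
        # knowledge base and company APIs (CRM, ERP, HR systems, ...).
        return f"[{self.name}] handling: {request}"

class CentralAssistant:
    def __init__(self, sub_assistants: list[SubAssistant]):
        self.sub_assistants = sub_assistants

    def route(self, request: str) -> str:
        text = request.lower()
        for assistant in self.sub_assistants:
            if any(keyword in text for keyword in assistant.keywords):
                return assistant.handle(request)
        return "[Central] no specialized assistant matched; answering directly."

central = CentralAssistant([
    SubAssistant("Sales AI Assistant", ["upsell", "lead", "customer call"]),
    SubAssistant("HR AI Assistant", ["onboarding", "policy", "training"]),
])

print(central.route("Draft an onboarding plan for a new hire"))
```

The central assistant also becomes a natural place to monitor usage, enforce access rules, and collect feedback across all sub-assistants.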

Generative AI Implementation Modes

All AI Assistants would have their own custom knowledge (LLM) and could potentially access data through APIs that connect to various company databases, including ERP and CRM. Each assistant would be implemented simply as a chat UX app accessible through mobile or the web, with distinct authentication levels. All AI assistants would be minimal in terms of design and UX features, relying only on natural language chat sessions plus a large body of knowledge (unstructured) and data (structured).

Cloud implementation

· Custom LLM (hosted on the cloud on a service like Hugging Face Spaces, which offers a simple way to host ML apps)

· ChatGPT API leveraged to complement the custom LLM as required (a minimal call sketch follows this list).

· AI Assistant app implemented on the cloud on a service like Azure or AWS.

· AI Assistant implemented as an iOS or Android mobile app
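To illustrate the ChatGPT API bullet above, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and helper function are illustrative assumptions, the exact client interface varies with SDK versions, and production concerns such as error handling, retries, and logging are omitted.

```python
# Minimal sketch: calling the ChatGPT API to answer a question,
# supplying company context the custom LLM has surfaced.
# Assumes OPENAI_API_KEY is set in the environment; interface
# details vary by SDK version.
from openai import OpenAI

client = OpenAI()

def ask_chatgpt(question: str, company_context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You are a company AI assistant. "
                        "Use the provided context when relevant.\n\n"
                        f"Context:\n{company_context}"},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers conservative
    )
    return response.choices[0].message.content

print(ask_chatgpt("What are our opening hours?",
                  "Store hours: 9am-6pm, Monday to Saturday."))
```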

As regards Data Privacy, OpenAI recently launched ChatGPT Enterprise to address related concerns.

ChatGPT Enterprise provides:

· Data encryption “at rest” (AES-256) and “in transit” (TLS 1.2+)

· SOC 2 compliance

· Customer prompts and sensitive corporate data are not used for OpenAI model training.

In-house Implementation

· LLM and Apps hosted within corporate IT infrastructure

· No access to ChatGPT API

Hugging Face has over 300,000 models readily available, including exceptionally powerful large language models, allowing businesses to deploy their own GPT-style model privately and locally.

Hybrid Implementation

Internal-use AI Assistants (like the HR example above) could be implemented and made accessible only within the corporate IT infrastructure. Non-public documents such as meeting minutes, legal documents, product specifications, technical documentation, R&D reports, business procedures, and so on could be made available only to internal AI Assistants.

External-use assistants that rely on public company data should clearly use third-party LLM services such as ChatGPT, Google Bard, or others.

In theory, in-house implementations are safer but much more expensive. But if in-house security is not at enterprise level (like the specifications mentioned above), are they really safer?

DATA PRIVACY, COMPLIANCE AND SECURITY IMPLICATIONS OF GENERATIVE AI

To be clear, Data Privacy and Compliance requirements have always been an important aspect of data processing and should be a priority for all business processes and IT systems. These issues are not something that suddenly requires prioritization because of the recent advent of Generative AI technology.

From a legal perspective, to date the United States lacks comprehensive, broadly applicable privacy protections like the European Union’s General Data Protection Regulation (“GDPR”) and its equivalent regulations in the United Kingdom and Switzerland, but it does have a patchwork of federal and state laws that address data privacy.

For example, the Virginia Consumer Data Protection Act (“VCDPA”) regulates only the collection and use of personal information of consumers (employee data is exempt from its requirements). The California Privacy Rights Act (“CPRA”) requires that notice of the business’s personal data handling practices be provided prior to or at the time of initial data collection.

In terms of AI-specific regulation, there are various ongoing efforts (US, UK, EU), but it is a very difficult task. Over-regulation stifles innovation, and even if certain countries do over-regulate, others will clearly under-regulate.

Let’s briefly look at the basic requirements for some of the regulatory directives mentioned above.

European Union: GDPR (General Data Protection Regulation)

Basic Principles

o Lawful, fair and transparent processing

o Limitation of purpose, data and storage

o Data subject rights

o Consent

o Personal data breaches

o Privacy by Design

o Data Protection Impact Assessment

o Data transfers

o Data Protection Officer

o Awareness and training

United States: California Consumer Privacy Act (CCPA) (proxy for other potential regulation)

Applies to businesses:

o with an annual gross revenue of over $25 million

o that gather, buy, sell, or receive personal information of over 50,000 California residents, households, or devices

o that derive more than 50% of annual revenue from selling the personal information of California residents

Basic Principles

o Right to Disclosure

o Right to Access

o Right to Contact Information

o Right to be Forgotten

o Opt-out of Data Sales and Marketing

o Right to Fair Treatment

o Periodic Privacy Policy Updates

United Kingdom: Data Protection Act

Basic Principles

o Used fairly, lawfully and transparently

o Used for specified, explicit purposes

o Used in a way that is adequate, relevant and limited to only what is necessary

o Accurate and, where necessary, kept up to date

o Kept for no longer than is necessary

o Handled in a way that ensures appropriate security, including protection against unlawful or unauthorized processing, access, loss, destruction or damage

Approaches to Generative AI Data Privacy, Compliance, Security

Sandboxing

Sandboxing is a strategy to keep data safe when working with AI models. It means creating a controlled environment within which a system or application operates. A company can, for example, have internal apps working only within an internal (sandboxed) network (LAN) environment.

Data Obfuscation

Obfuscation means replacing sensitive data with placeholder or encrypted tokens, sending this version to the LLM, and then processing the response by swapping the placeholders back to the original values.

For example, sending the following to ChatGPT:

“Joe Smith lives at 4607 Rodeo LN Los Angeles CA 90016-5609 USA. What is Joe Smith’s current address?”

would be translated to “<NAME-1> lives at 4607 Rodeo LN Los Angeles CA 90016-5609 USA. What is <NAME-1>’s current address?”

The ChatGPT response “<NAME-1> lives at 4607 Rodeo LN Los Angeles CA 90016-5609 USA.” would be converted back to “Joe Smith lives at 4607 Rodeo LN Los Angeles CA 90016-5609 USA.” before being sent to the user.
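Below is a minimal Python sketch of this obfuscation round trip, assuming a hand-built mapping of sensitive values to placeholders. A real implementation would detect PII automatically (for example with a named-entity recognizer) and handle repeated or overlapping entities rather than relying on a fixed mapping.

```python
# Minimal sketch of the obfuscation round trip: replace sensitive
# values with placeholders before calling the LLM, then restore them
# in the response.

def obfuscate(text: str, mapping: dict[str, str]) -> str:
    for original, placeholder in mapping.items():
        text = text.replace(original, placeholder)
    return text

def deobfuscate(text: str, mapping: dict[str, str]) -> str:
    for original, placeholder in mapping.items():
        text = text.replace(placeholder, original)
    return text

pii_mapping = {"Joe Smith": "<NAME-1>"}

prompt = ("Joe Smith lives at 4607 Rodeo LN Los Angeles CA 90016-5609 USA. "
          "What is Joe Smith's current address?")

safe_prompt = obfuscate(prompt, pii_mapping)
# safe_prompt is what is actually sent to the LLM:
# "<NAME-1> lives at 4607 Rodeo LN ... What is <NAME-1>'s current address?"

llm_response = "<NAME-1> lives at 4607 Rodeo LN Los Angeles CA 90016-5609 USA."
print(deobfuscate(llm_response, pii_mapping))
# -> "Joe Smith lives at 4607 Rodeo LN Los Angeles CA 90016-5609 USA."
```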

Moderation and Fact Checking API

In all LLM use cases, a Moderation and Fact Checking API should be deployed. This would filter out a large range of inappropriate material in AI and/or user-generated content. At the same time, it would fact check both the input and output.

LLMs do not recall facts the way databases do; instead they generate text that reads like human writing and will almost always sound right, but may in some cases be incorrect. This is known as “hallucination”.

To mitigate this, in addition to the Moderation and Fact Checking API, the AI Assistant should first tap into a custom LLM containing certified company knowledge.
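As a rough illustration, here is a hedged Python sketch of such a pipeline: a moderation check on the input, a lookup against certified company knowledge first, and only then a fallback to the general LLM. The knowledge dictionary, model name, and prompts are placeholder assumptions; the moderation and chat calls follow the OpenAI Python SDK, whose exact interface varies by version.

```python
# Sketch of a "check, ground, then generate" pipeline:
# 1) run the user input through a moderation check,
# 2) answer from certified company knowledge when possible,
# 3) only then fall back to the general LLM.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COMPANY_KNOWLEDGE = {  # stand-in for a real retrieval step
    "return policy": "Products can be returned within 30 days with a receipt.",
}

def answer(user_input: str) -> str:
    # 1. Moderation check on the incoming message.
    moderation = client.moderations.create(input=user_input)
    if moderation.results[0].flagged:
        return "Sorry, I can't help with that request."

    # 2. Prefer certified company knowledge to reduce hallucination risk.
    for topic, fact in COMPANY_KNOWLEDGE.items():
        if topic in user_input.lower():
            return fact

    # 3. Fall back to the general LLM with a cautious system prompt.
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Answer only if you are confident; otherwise say you don't know."},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content

print(answer("What is your return policy?"))
```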

Use of Disclaimers

As regards all AI applications, in addition to recorded acceptance of user terms and conditions, disclaimers should clearly indicate that the content has been generated by AI, may not always be accurate, and should not be taken as medical, financial, or other advice.

Transparency and Data Best Practices

Transparency about how Generative AI models have been trained is important. Users must always know what data might and might not be collected about them when using generative AI. There should always be an accessible mechanism for users to request data deletion or opt-out of certain data processing activities.

Data de-identification and anonymization, identification of Personal Identifying Information (PII) data, data loss prevention and data minimization should always be considered when developing any Generative AI application.

TO WHAT EXTENT ARE BUSINESSES INTEGRATING GENERATIVE AI?

OpenAI’s blog post reports impressive adoption of ChatGPT: with uptake in over 80 percent of Fortune 500 companies, industry leaders such as Block, Canva, and PwC are using ChatGPT Enterprise for tasks ranging from coding to communications.

Overall, AI integration is already underway in a significant number of businesses today, with an ever-increasing trend, as reflected in a recent Forbes Advisor study quoted below.

According to this study, “a significant number of businesses (53%) apply AI to improve production processes, while 51% adopt AI for process automation and 52% utilize it for search engine optimization tasks such as keyword research. Companies are also leveraging AI for data aggregation (40%), idea generation (38%) and minimizing safety risks (38%). In addition, AI is being used to streamline internal communications, plans, presentations and reports (46%). Businesses employ AI for writing code (31%) and website copy (29%) as well.”

Another study by Gartner revealed that 45 percent of top-level executives said that exposure to ChatGPT had motivated them to boost their investments in AI. This trend will likely continue with the introduction of ChatGPT Enterprise.

Takeaway

Generative AI is a major and rare opportunity for today’s businesses to become less rigid and less bureaucratic, transitioning to a nimbler and more productive version of themselves, more akin to smaller niche companies.

Current generative AI and other AI technologies have the potential to automate work activities that absorb a substantial percentage of employees’ time today, freeing them to focus on more creative and productive activities.

The potential benefits of Generative AI are significant; risks must be considered and safeguards implemented. In fact, integration is inevitable.

P.S. Although I hold an MSc in Artificial Intelligence (Advanced Methods in Computer Science) from QMW London and have technical expertise in ML models, I am still continuously impressed by the workings of ChatGPT (GPT-4).

I asked it to read this article, check grammar and spelling, tell me how much time would be required for someone to read it and to give me some NEW Gen AI insights.

It answered that about 12 minutes of reading would be required (citing the average reading time per word), indicated grammar and spelling issues, and gave some great, deep insights on reskilling, the impact on education, cultural implications, accessibility/inclusivity, and environmental impact.

I said “Thanks, great answer” and it responded with “If there’s anything else I can help you with, feel free to ask. Oh, and happy reading.” From a technical perspective, that “happy reading” at the end is impressive: it reflects ChatGPT’s contextual understanding on more than one level and shows how sophisticated AI systems are becoming at comprehending and responding to human input, enhancing user experiences and interactions.

--


Nicolas Fekos

Nicolas holds an MSc in Artificial Intelligence from QMW, London, and a BSc in Computer Science from Greenwich University. He is the founder of https://fortuit.ai