Generative AI: Game-changer for professionals
Generative AI is reshaping tasks from the routine to the creative, but what does it mean for your day-to-day work? Find out whether this technology is just a trend to watch or a critical asset for your industry as well.
I recently completed the "Generative AI for Everyone" course by Andrew Ng, available through DeepLearning.AI.
This course provides a straightforward introduction to generative AI, taught by one of the field's leading figures. It also explores real-world applications, offering a clear view of how generative AI fits into various business scenarios.
This blog post aims to summarize my key takeaways from the course.
#1 Understanding the limitations of Generative AI
Generative AI has made impressive strides, particularly in areas like text generation, conversation, and image creation. However, it's also important to be aware of its limitations.
A notable challenge with Large Language Models (LLMs) like GPT-4 is their tendency to "hallucinate," or fabricate information.
For instance, I once prompted GPT-4 to find me three industry quotes for a blog post I was creating, either from its training data or by using its browsing functionality. The LLM generated three very convincing-sounding statements, but when I asked for the source links, GPT apologized profusely and revealed the quotes were entirely fictional.
Source criticism is therefore very important when you are working with generative AI: double-check the facts yourself, just in case.
Another technical constraint relates to input and output lengths. LLMs typically handle prompts of a few thousand words at most, which limits how much context you can provide; the same cap also constrains the length of the generated text. That said, OpenAI announced its new model, GPT-4 Turbo, this week, and it can already fit the equivalent of more than 300 pages of text into a single prompt.
My way of working around the input/output limits is to never ask GPT to produce a long text in one go; instead, I prompt it in short pieces or chapters.
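For readers who use the API rather than the chat interface, the same chapter-by-chapter idea can be sketched roughly like this (the OpenAI Python client, the model name and the chapter outlines below are placeholder assumptions, not a prescription):

```python
# Minimal sketch: prompt chapter by chapter instead of asking for the whole
# text at once. Assumes the OpenAI Python client (v1) and an OPENAI_API_KEY
# in the environment; outlines and model name are placeholders.
from openai import OpenAI

client = OpenAI()

chapter_outlines = [
    "Chapter 1: key limitations of generative AI",
    "Chapter 2: practical prompting techniques",
    "Chapter 3: how large language models are trained",
]

drafts = []
for outline in chapter_outlines:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a helpful writing assistant."},
            {"role": "user", "content": f"Draft a short blog chapter based on this outline:\n{outline}"},
        ],
    )
    drafts.append(response.choices[0].message.content)

# Stitch the chapter drafts back together yourself.
print("\n\n".join(drafts))
```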
Generative AI systems also currently struggle with structured data. The kind of tabular data found in Excel or Google Sheets, characterized by its organized and often numerical nature, doesn't mesh well with the text-focused capabilities of LLMs.
Lastly, biases present in the internet data used to train the models can also seep into the output.
#2 Effective prompting techniques for Generative AI
While the internet is bursting with ready-made prompt templates ("10 must-have prompts for marketers" and the like), customizing prompts to fit your specific needs often yields the best results.
Here are some tips to improve your prompting strategy.
Be detailed and specific
A prompt should be as detailed and specific as possible. Providing clear context and stating exactly what you need helps the model generate the most relevant and precise responses.
Think of it as giving directions; the more specific you are, the less room there is for misinterpretation.
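As a made-up illustration of the difference (the product and audience here are invented for the example):

```python
# A vague prompt leaves the model guessing about audience, length and angle.
vague_prompt = "Write something about our new app."

# A specific prompt states the context, the audience, the format and the goal.
specific_prompt = (
    "Write a 100-word announcement for a time-tracking app aimed at "
    "freelance designers. Use a friendly, informal tone and end with a "
    "call to action to start a free trial."
)

print(vague_prompt)
print(specific_prompt)
```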
Guide the AI
Encourage the AI to "think" through its response by structuring your prompt to include a line of reasoning. This might involve asking the AI to consider certain factors or to approach the question from a particular angle.
The more guidance you provide, the more tailored the AI's output will be to your requirements.
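One way to build that guidance in is to spell out the reasoning steps inside the prompt itself. A small, made-up example:

```python
# A prompt that walks the model through explicit reasoning steps before it
# gives an answer. The task and the factors are invented for illustration.
guided_prompt = (
    "I need to choose a channel for announcing a new feature to existing "
    "customers. First, list the main pros and cons of email, blog and "
    "social media for this audience. Then weigh them against the goal of "
    "quick adoption, and only after that give your recommendation."
)
print(guided_prompt)
```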
Experiment and iterate
The process of prompting is iterative. If the initial response doesn’t meet your expectations, refine your prompt and try again.
For me, the first prompt rarely yields the best outcome, but the AI's answer to it gives me ideas on how to improve the prompt or shift its focus.
Don’t use sensitive information
Before inputting confidential data into a generative AI's web interface, consider the implications.
Remember that while AI doesn't "remember" the information, there are security considerations to be aware of when using cloud-based services.
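If you do need to work with text that contains personal details, one simple precaution is to mask them before the text ever leaves your machine. The sketch below is purely illustrative and not a complete anonymization solution; the patterns are deliberately simplistic:

```python
import re

# Very rough masking of email addresses and phone-like numbers before a text
# is pasted into a generative AI tool. This does not catch names, addresses
# or other identifiers.
def mask_sensitive(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d\s-]{7,}\d", "[PHONE]", text)
    return text

print(mask_sensitive("Contact Jane at jane.doe@example.com or +358 40 123 4567."))
```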
My advice would also be: know what you want as an outcome
Do not expect GPT to be the most trusted professional in the field (even if you give it that characteristic in your prompt).
You are the expert, and before prompting you should know the goal and desired outcome of the project you hand over to GPT.
Andrew Ng calls GPT your thought partner, and that is exactly how I see generative AI's role in my own work.
#3 Deciphering the training of LLMs
Andrew Ng's course sheds light on the more nuanced methods of training Large Language Models, which can deepen the understanding of generative AI's capabilities for those of us who aren't AI engineers (myself included).
The choice between these methods hinges on factors like the scale of the task, available resources, and the level of specialization required. Let's get familiar with these methods:
Retrieval augmented generation (RAG)
RAG extends the reach of an LLM by coupling it with an external knowledge source. This technique allows the model to pull in additional data to enhance its responses. Imagine having a dialogue with a PDF document to get quick answers without reading the entire content. Tools like PandaChat are practical implementations of RAG.
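For the technically curious, the core RAG loop can be sketched in a few lines: embed your text chunks, find the chunk most similar to the question, and paste it into the prompt as context. This is only a schematic sketch assuming the OpenAI Python client; the toy documents, model names and brute-force similarity search are placeholders for a real retrieval setup.

```python
# Schematic RAG flow: embed text chunks, retrieve the most similar one,
# and hand it to the model as context. Assumes the OpenAI Python client (v1);
# the toy "documents" and model names are placeholders.
from openai import OpenAI

client = OpenAI()

chunks = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available on weekdays between 9 and 17 EET.",
]

def embed(texts):
    response = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return [item.embedding for item in response.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

question = "How long do I have to return a product?"
chunk_vectors = embed(chunks)
question_vector = embed([question])[0]

# Pick the chunk whose embedding is closest to the question.
best_chunk = max(zip(chunks, chunk_vectors), key=lambda pair: cosine(question_vector, pair[1]))[0]

answer = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{
        "role": "user",
        "content": f"Answer using only this context:\n{best_chunk}\n\nQuestion: {question}",
    }],
)
print(answer.choices[0].message.content)
```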
Fine-tuning
Fine-tuning refers to the process of modifying a pre-trained model to specialize in a specific area or achieve a more precise objective. For example, generating text that aligns with a particular brand voice or digesting large volumes of specialized data, like medical records.
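To make this concrete, fine-tuning through OpenAI's API roughly means collecting example conversations in a JSONL chat format, uploading the file and starting a fine-tuning job. The sketch below assumes the OpenAI Python client (v1); the brand-voice examples, file name and base model are placeholders, and the exact parameters may differ from the current documentation.

```python
# Sketch of preparing examples and launching a fine-tuning job. The examples,
# file name and base model are placeholders; a real job needs many more
# examples than the single one shown here to illustrate the format.
import json
from openai import OpenAI

client = OpenAI()

# Each training example is a short chat showing the desired brand voice.
examples = [
    {"messages": [
        {"role": "system", "content": "You write in the Acme brand voice: warm and concise."},
        {"role": "user", "content": "Describe our new coffee maker."},
        {"role": "assistant", "content": "Meet the morning ritual you didn't know you needed."},
    ]},
]

with open("brand_voice.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

training_file = client.files.create(file=open("brand_voice.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
print(job.id)
```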
Pre-training your model
Building your own LLM from the ground up is typically reserved for well-resourced tech companies. It's costly and complex but can be justified if there's a need for a highly specialized model in domains such as finance. BloombergGPT, for example, is a purpose-built AI that harnesses Bloomberg's vast repository of financial data to deliver specialized insights and responses in the financial sector.
#4 AI’s impact on the workforce: augmentation vs. automation
Generative AI is changing the landscape of work by automating routine tasks and augmenting human capabilities.
The fear that AI will render certain professions obsolete is often overstated; instead, AI is reshaping roles and enabling professionals to focus on more strategic and creative tasks.
By evaluating the technical feasibility and business value, companies can strategically implement AI to complement and enhance their workforce, ultimately driving innovation and growth.
Augmentation: enhancing human capabilities
In customer service, for instance, generative AI can generate response suggestions for service agents, who can quickly select the most appropriate reply or edit it to add a personal touch. Rather than replacing the human element, AI enhances it, allowing customer service workers to handle inquiries more efficiently without losing that human connection.
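As a rough sketch of what such suggestion generation might look like behind the scenes (the customer message, model name and API usage are assumptions for illustration, not a description of any specific product):

```python
# Sketch of generating reply suggestions for a support agent to pick from
# and edit. Assumes the OpenAI Python client (v1); the customer message and
# model name are placeholders.
from openai import OpenAI

client = OpenAI()

customer_message = "My order arrived damaged. What should I do?"

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Suggest three short, polite reply drafts a support agent could "
            f"send in response to this customer message:\n{customer_message}"
        ),
    }],
)
print(response.choices[0].message.content)
```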
Automation: streamlining repetitive tasks
On the other hand, automation through AI involves technology taking over entire tasks. Transcribing and summarizing customer interactions are prime examples where AI can manage these processes end-to-end, freeing up human workers for more complex customer service issues.
Evaluating tasks for Generative AI in your company
When considering tasks for generative AI applications, businesses must assess both technical feasibility and business value:
Technical Feasibility: Is the AI capable of performing the task? How complex and costly is the development and integration of the AI system?
Business Value: Beyond cost savings, automation can trigger a reevaluation of entire workflows, often leading to enhanced productivity, improved customer experiences, and the creation of new value propositions.
Lastly: As a Generative AI, what's your opinion on all of this?
For those interested in the Generative AI for Everyone course, more information can be found here.
Generative AI for Everyone is for anyone who’s interested in learning about the uses, impacts, and underlying technologies of generative AI, today and in the future.
The course doesn’t require any coding skills or prior knowledge of AI.
This blog post was created with the help of Generative AI.
The prompting process went like this:
- I had a clear idea of my key takeaways from the course and of the other personal insights, based on my experience with GPT-4 and topical internet sources, that I wanted to add to the content.
- I wrote the first draft of the blog post myself, then fed it to GPT-4 chapter by chapter so it could draft its own versions of each chapter.
- I asked for a change in tone of voice after the first chapter (GPT was too vague and sounded too much like a politician).
- Once GPT-4 had completed the blog post text, I double-checked that it had not added any hallucinations and that the facts I had given it were still in place (it sometimes leaves out important details and facts you have provided in your prompts).