In my Nano Tips series on ChatGPT so far, I’ve covered Data Storytelling and Visualization, and Technical Prompts. Since “ChatGPT can now see, hear, and speak,” I thought it would be a good idea to create some additional videos to explain those multimodal capabilities.
In this course, you will learn how to configure Advanced Data Analysis in ChatGPT, execute multistep document interpretation, join multiple tables with prompts, and analyze data using linear regression.
Along the way, I will demonstrate a handful of skills useful for designers, such as how to extract and organize text from images, code an app element from an image, create UX design annotations, conduct text-to-image experimentation with DALL-E, and much more.
Also, by the end of the course, you’ll be prepared to browse the internet with Bing and use AI-enhanced voice conversations and commands.
Here is a quick sneak peek into my favorite video from the course:
Extract and organize text from images, from Nano Tips for Navigating Advanced Data Analysis, Vision, and Voice in ChatGPT by Lachezar Arabadzhiev
The Curriculum
10 Tips for Using Advanced Data Analysis, Vision, and Voice in ChatGPT
- Configure Advanced Data Analysis in ChatGPT
- Execute multistep document interpretation
- Join multiple tables with prompts
- Analyze data with linear regression (see the sketch after this list)
- Extract and organize text from images
- Code an app element from an image
- Create UX design annotations
- Conduct text-to-image experimentation with DALL-E
- Browse the internet with Bing
- Use voice conversations and commands
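To give a flavor of what the “Analyze data with linear regression” tip covers, here is a minimal Python sketch of the kind of code ChatGPT’s Advanced Data Analysis writes when you upload a dataset and ask for a regression. The column names and the synthetic data are hypothetical stand-ins for your own file; this is an illustration of the technique, not code taken from the course.

```python
# Minimal linear regression sketch, similar in spirit to the code
# ChatGPT's Advanced Data Analysis generates for an uploaded file.
# The columns ("ad_spend", "revenue") and the data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Synthetic dataset standing in for an uploaded CSV
rng = np.random.default_rng(seed=42)
ad_spend = rng.uniform(1_000, 10_000, size=100)
revenue = 3.2 * ad_spend + rng.normal(0, 2_000, size=100)
df = pd.DataFrame({"ad_spend": ad_spend, "revenue": revenue})

X = df[["ad_spend"]]  # predictor must be 2D for scikit-learn
y = df["revenue"]     # target

model = LinearRegression().fit(X, y)

print(f"Slope:     {model.coef_[0]:.2f}")   # ~3.2 on this synthetic data
print(f"Intercept: {model.intercept_:.2f}")
print(f"R^2:       {model.score(X, y):.3f}")
```

In the course, ChatGPT handles this step for you: you upload the file, describe the relationship you want to test in plain language, and review the code, coefficients, and charts it returns.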
If you have any questions, feel free to do one of the following:
🎥 Explore my LinkedIn Learning courses on storytelling through data and design!
📰 Follow me on LinkedIn, and click the 🔔 at the top of my profile page to stay up to date with my latest content!
✉ Subscribe to receive the biweekly “My Next Story Is…” newsletter. Yep, it is my brand new newsletter!